The sensitivity analysis algorithms developed by the radiation transport community in multiple neutron transport codes, such as MCNP and SCALE, are used extensively in fields such as nuclear criticality safety. However, these techniques have seldom been considered for electron transport applications. In the past, the differential-operator method with a single-scatter capability was implemented in Sandia National Laboratories’ Integrated TIGER Series (ITS) coupled electron-photon transport code. This work extends the sensitivity estimation techniques available in ITS by implementing an adjoint-based sensitivity method, GEAR-MC, to strengthen its sensitivity analysis capabilities. To verify the accuracy of this method as extended to coupled electron-photon transport, it is compared against the central-difference and differential-operator methodologies in estimating sensitivity coefficients for an experiment performed by McLaughlin and Hussman. Energy deposition sensitivities were calculated using all three methods, and the comparison between them provides confidence in the accuracy of the newly implemented method. Unlike the current implementation of the differential-operator method in ITS, the GEAR-MC method was implemented with the option to calculate energy-dependent energy deposition sensitivities, i.e., the sensitivity coefficients of energy deposition tallies to energy-dependent cross sections. The energy-dependent cross sections may be those of the material, of elements in the material, or of reactions of interest for an element. These sensitivities were compared to the energy-integrated sensitivity coefficients and exhibited a maximum percentage difference of 2.15%.
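For orientation, the sensitivity coefficients discussed here are conventionally expressed as relative (logarithmic) derivatives of a response with respect to an input cross section; the notation below is illustrative rather than taken from the paper:

```latex
% Relative sensitivity of a response R (e.g., an energy deposition tally)
% to a cross section \Sigma_x, evaluated at the nominal cross-section value.
% A central-difference estimate perturbs \Sigma_x by \pm\delta\Sigma_x.
S_{R,\Sigma_x} \;=\; \frac{\Sigma_x}{R}\,\frac{\partial R}{\partial \Sigma_x}
\;\approx\; \frac{\Sigma_x}{R}\,
  \frac{R(\Sigma_x + \delta\Sigma_x) - R(\Sigma_x - \delta\Sigma_x)}{2\,\delta\Sigma_x}
```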
Monte Carlo simulations are at the heart of many high-fidelity simulations and analyses for radiation transport systems. As is the case with any complex computational model, it is important to propagate sources of input uncertainty and characterize how they affect model output. Unfortunately, uncertainty quantification (UQ) is made difficult by the stochastic variability that Monte Carlo transport solvers introduce. The standard method to avoid corrupting the UQ statistics with the transport solver noise is to increase the number of particle histories, resulting in very high computational costs. In this contribution, we propose and analyze a sampling estimator based on the law of total variance to compute UQ variance even in the presence of residual noise from Monte Carlo transport calculations. We rigorously derive the statistical properties of the new variance estimator, compare its performance to that of the standard method, and demonstrate its use on neutral particle transport model problems involving both attenuation and scattering physics. We illustrate, both analytically and numerically, the estimator's statistical performance as a function of available computational budget and the distribution of that budget between UQ samples and particle histories. We show analytically and corroborate numerically that the new estimator is unbiased, unlike the standard approach, and is more accurate and precise than the standard estimator for the same computational budget.
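As a rough illustration of the idea (a sketch of a law-of-total-variance correction, not the paper's derivation or implementation; the function name and the assumption that each UQ sample reports a tally mean with its standard error are mine):

```python
import numpy as np

def parametric_variance(sample_means, sample_sems):
    """Estimate the UQ (parametric) variance from noisy Monte Carlo tallies.

    sample_means : tally mean Q_i for each uncertain-parameter sample
    sample_sems  : standard error of each tally mean (residual transport noise)

    Law of total variance: Var(Q) = Var_x(E[Q | x]) + E_x[Var(Q | x)].
    The naive sample variance of the noisy means estimates the full left-hand
    side, so subtracting the average transport-noise variance removes the bias.
    """
    total = np.var(sample_means, ddof=1)           # variance of the noisy means
    noise = np.mean(np.asarray(sample_sems) ** 2)  # mean residual MC variance
    return total - noise
```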
Heterogeneous materials under shock compression can be expected to reach different shock states throughout the material according to local differences in microstructure and the history of wave propagation. Here, a compact, multiple-beam focusing optic assembly is used with high-speed velocimetry to interrogate the shock response of porous tantalum films prepared through thermal-spray deposition. The distribution of particle velocities across a shocked interface is compared to results obtained using a set of defocused interferometric beams that sampled the shock response over larger areas. The two methods produced velocity distributions along the shock plateau with the same mean, while a larger variance was measured with narrower beams. The finding was replicated using three-dimensional, mesoscopically resolved hydrodynamics simulations of solid tantalum with a pore structure mimicking statistical attributes of the material and accounting for radial divergence of the beams, with agreement across several impact velocities. Accounting for pore morphology in the simulations was found to be necessary for replicating the rise time of the shock plateau. The validated simulations were then used to show that while the average velocity along the shock plateau could be determined accurately with only a few interferometric beams, accurately determining the width of the velocity distribution, which here was approximately Gaussian, required a beam dimension much smaller than the spatial correlation lengthscale of the velocity field, here by a factor of ∼30, with implications for the study of other porous materials.
In the 1970s and 1980s, researchers at Sandia National Laboratories produced electron albedo data for a range of materials. Since that time, these electron albedo data have been used for a wide variety of purposes, including the validation of Monte Carlo electron transport codes. This report was compiled to examine the electron albedo experiment results in the context of Integrated TIGER Series (ITS) validation. The report presents tables and figures that could provide insight into the underlying model form uncertainty present in the ITS code. Additionally, the report provides data on potential means to reduce these model form errors by highlighting possible refinements in the cross-section generation process.
The Integrated TIGER Series (ITS) transport code is a valuable tool for photon-electron transport. A seven-problem validation suite exists to verify that the ITS transport code works as intended. It is important to ensure that data from the benchmark problems are correctly compared to simulated data. Additionally, the validation suite did not previously make use of a consistent quantitative metric for comparing experimental and simulated datasets. To this end, the goal of this long-term project was to expand the validation suite both in problem type and in the quality of the error assessment. To accomplish that, the seven validation problems in the suite were examined for potential drawbacks. Where drawbacks were identified, the problems were ranked based on the severity of the drawback and the approachability of a solution. We determined that meaningful improvements could be made to the validation suite by improving the analysis for the Lockwood Albedo problem and by introducing the Ross dataset as an eighth problem in the suite. The Lockwood error analysis has been completed and will be integrated in the future. Analysis of the Ross dataset is not yet complete, but significant progress has been made.
Proceedings of the 14th International Conference on Radiation Shielding and 21st Topical Meeting of the Radiation Protection and Shielding Division, ICRS 2022/RPSD 2022
The Saturn accelerator has historically lacked the capability to measure time-resolved spectra for its 3-ring bremsstrahlung x-ray source. This project aimed to create a spectrometer called AXIOM to provide this capability. The project had three major development pillars: hardware, simulation, and unfold code. The hardware consists of a ring of 24 detectors around an existing x-ray pinhole camera. The diagnostic was fielded on two shots at Saturn and over 100 shots at the TriMeV accelerator at the Idaho Accelerator Center. A new Saturn x-ray environment simulation was created and validated against measured data. This simulation allows time-resolved spectra to be computed for comparison with the experimental results. The AXIOM-Unfold code is a new parametric unfold code using modern global optimizers and uncertainty quantification. The code was written in Python, uses GitLab version control and issue tracking, and has been developed with long-term code support and maintenance in mind.
To understand the environment where a time-resolved hard x-ray spectrometer (AXIOM) might be fielded, experiments and simulations were performed to analyze the radiation dose environment underneath the Saturn vacuum dome. Knowledge of this environment is critical to the design and placement of the spectrometer. Experiments demonstrated that the machine performance, at least in terms of on-axis dose, has not changed significantly over the decades. Simulations of the off-axis dose were performed to identify possible spectrometer locations of interest. The effects of the source and dome hardware, as well as of the source distributions and angles of incidence, on the radiation environment were also investigated. Finally, a unified radiation transport model was developed for two widely used radiation transport codes to investigate the off-axis dose profiles and the time-dependent x-ray energy spectrum. The demonstrated equivalence of the unified radiation transport model between the two codes allows the team to tie future time-dependent x-ray environment calculations to previous integral simulations for the Saturn facility.
Current methods for stochastic media transport are either computationally expensive or, by nature, approximate. Moreover, none of the well-developed, benchmarked approximate methods can compute the variance caused by the stochastic mixing, a quantity especially important to safety calculations. Therefore, we derive and apply a new conditional probability function (CPF) for use in the recently developed stochastic media transport algorithm Conditional Point Sampling (CoPS), which 1) leverages the full intra-particle memory of CoPS to yield errorless computation of stochastic media outputs in 1D, binary, Markovian-mixed media, and 2) leverages the full inter-particle memory of CoPS and the recently developed Embedded Variance Deconvolution method to yield computation of the variance in transport outputs caused by stochastic material mixing. Numerical results demonstrate errorless stochastic media transport as compared to reference benchmark solutions with the new CPF for this class of stochastic mixing, as well as the ability to compute the variance caused by the stochastic mixing via CoPS. Using previously derived, non-errorless CPFs, CoPS is further found to be more accurate than the atomic mix approximation, Chord Length Sampling (CLS), and most of the memory-enhanced versions of CLS surveyed. In addition, we study the compounding behavior of CPF error as a function of cohort size (where a cohort is a group of histories that share intra-particle memory) and recommend that small cohorts be used when computing the variance in transport outputs caused by stochastic mixing.
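For context, one closed form consistent with 1D binary Markovian mixing is the nearest-point conditional probability sketched below; it conditions on only a single known point, so it is illustrative background rather than the new CPF derived in this work (variable names are assumptions):

```python
import numpy as np

def binary_markov_cpf(d, known_mat, p0, lam_c):
    """Probability that a new point is material 0, given that the nearest
    previously sampled point, a distance d away, is material known_mat.

    p0    : volume fraction of material 0
    lam_c : correlation length of the Markovian mixing

    For a two-state Markov process the conditional probability relaxes
    exponentially from certainty at d = 0 to the volume fraction as d grows.
    """
    delta = 1.0 if known_mat == 0 else 0.0
    return p0 + (delta - p0) * np.exp(-d / lam_c)
```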
Thermal spray processes involve the repeated impact of millions of discrete particles, whose melting, deformation, and coating-formation dynamics occur at microsecond timescales. The accumulated coating that evolves over minutes comprises complex, multiphase microstructures, and the timescale difference between individual particle solidification and overall coating formation represents a significant challenge for analysts attempting to simulate microstructure evolution. To overcome the computational burden, researchers have created rule-based models (similar to cellular automata methods) that do not directly simulate the physics of the process. Instead, the simulation is governed by a set of predefined rules, which do not capture the fine details of the evolution but do provide a useful approximation for the simulation of coating microstructures. Here, we introduce a new rule-based process model for microstructure formation during thermal spray processes. The model is 3D, allows for an arbitrary number of material types, and includes multiple porosity-generation mechanisms. Example results of the model for tantalum coatings are presented along with sensitivity analyses of model parameters and validation against 3D experimental data. The model's computational efficiency allows for investigations into the stochastic variation of coating microstructures, in addition to the typical process-to-structure relationships.
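To make the rule-based idea concrete, the toy sketch below deposits disk-shaped "splats" onto a voxel grid using purely geometric rules; the grid size, splat shape, and sticking rule are invented for illustration and are not the model introduced in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.zeros((64, 64, 32), dtype=np.uint8)   # 0 = pore, 1 = material
heights = np.zeros((64, 64), dtype=int)         # current surface height map

def deposit_splat(cx, cy, radius=4, stick_prob=0.9):
    """Flatten one particle into a one-voxel-thick disk on the local surface;
    voxels that fail the sticking rule are left as pores."""
    for x in range(max(cx - radius, 0), min(cx + radius + 1, grid.shape[0])):
        for y in range(max(cy - radius, 0), min(cy + radius + 1, grid.shape[1])):
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                z = heights[x, y]
                if z < grid.shape[2] and rng.random() < stick_prob:
                    grid[x, y, z] = 1
                heights[x, y] = min(z + 1, grid.shape[2])

for _ in range(2000):                            # repeated particle impacts
    deposit_splat(rng.integers(64), rng.integers(64))

built = grid[:, :, :max(int(heights.min()), 1)]  # consolidated region of the coating
porosity = 1.0 - built.mean()
```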
The accurate construction of a surrogate model is an effective and efficient strategy for performing Uncertainty Quantification (UQ) analyses of expensive and complex engineering systems. Surrogate models are especially powerful whenever the UQ analysis requires the computation of statistics that are difficult and prohibitively expensive to obtain via a direct sampling of the model, e.g., high-order moments and probability density functions. In this paper, we discuss the construction of a polynomial chaos expansion (PCE) surrogate model for radiation transport problems for which quantities of interest are obtained via Monte Carlo simulations. In this context, it is imperative to account for the statistical variability of the simulator as well as the variability associated with the uncertain parameter inputs. More formally, in this paper we focus on understanding the impact of the Monte Carlo transport variability on the recovery of the PCE coefficients. We are able to identify the contribution of both the number of uncertain parameter samples and the number of particle histories simulated per sample to the PCE coefficient recovery. Our theoretical results indicate an accuracy improvement when using a few Monte Carlo histories per random sample relative to configurations with an equivalent computational cost. These theoretical results are numerically illustrated for a simple synthetic example and two configurations of a one-dimensional radiation transport problem in which a slab is represented by means of materials with uncertain cross sections.
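A minimal sketch of the setting studied here (a 1D Legendre PCE recovered by least squares from noisy Monte Carlo estimates of the quantity of interest); the synthetic model, noise level, and sample counts are placeholders, not the paper's configurations:

```python
import numpy as np
from numpy.polynomial import legendre

def fit_pce(xi, q_noisy, order):
    """Least-squares recovery of 1D Legendre PCE coefficients from noisy
    per-sample Monte Carlo estimates q_noisy at parameter samples xi in [-1, 1]."""
    V = legendre.legvander(xi, order)              # Legendre design matrix
    coeffs, *_ = np.linalg.lstsq(V, q_noisy, rcond=None)
    return coeffs

# Regime suggested by the analysis: many parameter samples, few histories each,
# so each estimate is noisy but the regression averages the noise out.
rng = np.random.default_rng(1)
xi = rng.uniform(-1.0, 1.0, size=500)
truth = 1.0 + 0.5 * xi + 0.2 * (1.5 * xi**2 - 0.5)   # exact low-order expansion
q = truth + rng.normal(scale=0.3, size=xi.size)      # residual transport noise
print(fit_pce(xi, q, order=2))
```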
Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1-D and 3-D calculations for binary mixtures with Markovian mixing statistics. In theory, CoPS has the capacity to be accurate for material structures beyond just those with Markovian statistics. However, realizing this capability will require development of conditional probability functions (CPFs) that are based, not on explicit Markovian properties, but rather on latent properties extracted from material structures. Here, we describe a first step towards extracting these properties by developing CPFs using deep neural networks (DNNs). Our new approach lays the groundwork for enabling accurate transport on many classes of stochastic media. We train DNNs on ternary stochastic media with Markovian mixing statistics and compare their CPF predictions to those made by existing CoPS CPFs, which are derived based on Markovian mixing properties. We find that the DNN CPF predictions usually outperform the existing approximate CPF predictions, but with wider variance. In addition, even when trained on only one material volume realization, the DNN CPFs are shown to make accurate predictions on other realizations that have the same internal mixing behavior. We show that it is possible to form a useful CoPS CPF by using a DNN to extract correlation properties from realizations of stochastically mixed media, thus establishing a foundation for creating CPFs for mixtures other than those with Markovian mixing, where it may not be possible to derive an accurate analytical CPF.
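A toy sketch of what a DNN-based CPF might look like (PyTorch); the feature encoding, network width, and neighbor count are assumptions for illustration and do not describe the networks trained in this work:

```python
import torch
import torch.nn as nn

class CPFNet(nn.Module):
    """Map features of the k nearest previously sampled points (distance plus a
    one-hot material label for each) to a categorical distribution over the
    N material types at the query point."""

    def __init__(self, k_neighbors=3, n_materials=3, hidden=64):
        super().__init__()
        in_dim = k_neighbors * (1 + n_materials)   # distance + one-hot per neighbor
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_materials),        # logits over material types
        )

    def forward(self, features):
        return torch.softmax(self.net(features), dim=-1)   # material probabilities

# Training would minimize the negative log-likelihood of material labels drawn
# from realizations of the mixed medium (e.g., cross-entropy on the logits).
```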
Conditional Point Sampling (CoPS) is a newly developed Monte Carlo method for computing radiation transport quantities in stochastic media. The algorithm maintains a growing list of point-wise material designations during simulation, which causes potentially unbounded increases in memory and runtime, making the calculation of probability density functions (PDFs) computationally expensive. In this work, we adapt CoPS by omitting material points used in the computation from persistent storage if they are within a user-defined “amnesia radius” of neighboring material points already defined within a realization. We conduct numerical studies to investigate trade-offs between accuracy, required computer memory, and computation time. We demonstrate CoPS's ability to produce accurate mean leakage results and PDFs of leakage results while improving memory and runtime through use of an amnesia radius. We show that a non-zero amnesia radius imposes a limit on the required computer memory per cohort of histories and on the average runtime per history. We find that, for the benchmark set investigated, using an amnesia radius of r_a = 0.01 introduces minimal error (a 0.006 increase in CoPS3PO root mean squared relative error) while improving memory and runtime by an order of magnitude for a cohort size of 100.
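A minimal sketch of the amnesia-radius check described above (illustrative only; the data structures and distance test are assumptions, not the CoPS implementation):

```python
import numpy as np

def maybe_store(point, material, stored_points, stored_mats, r_a):
    """Retain a newly sampled material point only if it lies farther than the
    amnesia radius r_a from every point already stored for this realization.

    point         : coordinate tuple of the new point
    stored_points : list of coordinate tuples retained so far
    stored_mats   : material index stored for each retained point
    """
    if stored_points:
        dists = np.linalg.norm(np.asarray(stored_points) - np.asarray(point), axis=1)
        if dists.min() <= r_a:
            return False            # forget the point: a close neighbor already exists
    stored_points.append(tuple(point))
    stored_mats.append(material)
    return True
```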
Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1D and 3D simulations implemented for the CPU in Python. However, it is increasingly important that modern, production-level transport codes like CoPS be adapted for use on next-generation computing architectures. In this project, we describe the creation of a fast and accurate variant of CoPS implemented for the GPU in C++. As an initial test, we performed a code-to-code verification using single-history cohorts, which showed that the GPU implementation matched the original CPU implementation to within statistical uncertainty while improving the speed by over a factor of 4000. We then tested the GPU implementation for cohorts up to size 64 and compared three variants of CoPS based on how the particle histories are grouped into cohorts: successive, simultaneous, and a successive-simultaneous hybrid. We examined the accuracy-efficiency tradeoff of each variant for 9 different benchmarks, measuring the reflectance and transmittance in a cubic geometry with reflecting boundary conditions on the four faces other than the transmissive and reflective faces. Successive cohorts were found to be far more accurate than simultaneous cohorts for both reflectance (by a factor of 4.3) and transmittance (by a factor of 5.9), although simultaneous cohorts run more than twice as fast as successive cohorts, especially for larger cohorts. The hybrid cohorts demonstrated speed and accuracy behavior most similar to that of simultaneous cohorts. Overall, successive cohorts were found to be more suitable for the GPU due to their greater accuracy and reproducibility, although simultaneous and hybrid cohorts present an enticing prospect for future research.
Work on radiation transport in stochastic media has tended to focus on binary mixing with Markovian mixing statistics. However, although some real-world applications involve only two materials, others involve three or more. Therefore, we seek to provide a foundation for ongoing theoretical and numerical work with “N-ary” stochastic media composed of discrete material phases with spatially homogeneous Markovian mixing statistics. To accomplish this goal, we first describe a set of parameters and relationships that are useful to characterize such media. In doing so, we make a noteworthy observation: media that are frequently called Poisson media comprise only a subset of those that have Markovian mixing statistics. Since the concept of correlation length (as it has been used in the stochastic media transport literature) and the hyperplane realization generation method are both tied to the Poisson property of the media, we argue that not all media with Markovian mixing statistics have a correlation length in this sense or are realizable with the traditional hyperplane generation method. Second, we describe methods for generating realizations of N-ary media with Markovian mixing. We generalize the chord- and hyperplane-based sampling methods from binary to N-ary mixing and propose a novel recursive hyperplane method that can generate a broader class of material structures than the traditional, non-recursive hyperplane method. Finally, we perform numerical studies that validate the proposed N-ary relationships and generation methods, in which statistical quantities observed from realizations of ternary and quaternary media are shown to agree with predicted values.
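A simple sketch of chord-based generation of a 1D N-ary realization (mean chord lengths plus a Markov-chain transition matrix); parameter names and the choice of starting material are illustrative assumptions, not the generalized methods developed here:

```python
import numpy as np

def sample_realization_1d(length, mean_chords, transition, rng):
    """Generate one 1D N-ary realization by alternating exponentially sampled
    chord lengths with material switches drawn from a Markov-chain matrix.

    mean_chords : mean chord length for each of the N materials
    transition  : N x N matrix; row i gives switching probabilities out of
                  material i (zero on the diagonal, rows sum to one)
    Returns parallel lists of segment lengths and material indices.
    """
    n = len(mean_chords)
    mat = int(rng.integers(n))       # illustrative start; a stationary construction
                                     # would sample from the volume fractions
    x, segs, mats = 0.0, [], []
    while x < length:
        chord = rng.exponential(mean_chords[mat])
        segs.append(min(chord, length - x))
        mats.append(mat)
        x += chord
        mat = int(rng.choice(n, p=transition[mat]))
    return segs, mats
```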
Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering, M and C 2021
Vu, Emily H.; Brantley, Patrick S.; Olson, Aaron; Kiedrowski, Brian C.
We extend the Monte Carlo Chord Length Sampling (CLS) and Local Realization Preserving (LRP) algorithms to the N-ary stochastic medium case using two recently developed uniform and volume fraction models that follow a Markov-chain process for N-ary problems in one-dimensional, Markovian-mixed media. We use the Lawrence Livermore National Laboratory Mercury Monte Carlo particle transport code to compute CLS and LRP reflection and transmission leakage values and material scalar flux distributions for one-dimensional, Markovian-mixed quaternary stochastic media based on the two N-ary stochastic medium models. We conduct accuracy comparisons against benchmark results produced with the Sandia National Laboratories PlaybookMC stochastic media transport research code. We show that CLS and LRP produce exact results for purely absorbing N-ary stochastic medium problems and find that LRP is generally more accurate than CLS for problems with scattering.
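For readers unfamiliar with CLS, its core step samples the distance to the next material interface on the fly and compares it with the distance to collision; the sketch below is a generic illustration of that decision (names and the exponential chord assumption are mine), not the Mercury implementation:

```python
import numpy as np

def cls_step(sigma_t, mean_chord, rng):
    """One Chord Length Sampling decision for the current material.

    sigma_t    : total macroscopic cross section of the current material
    mean_chord : mean chord length of the current material
    """
    d_collision = rng.exponential(1.0 / sigma_t)   # distance to next collision
    d_interface = rng.exponential(mean_chord)      # sampled distance to interface
    if d_collision < d_interface:
        return "collision", d_collision            # process collision physics
    return "interface", d_interface                # switch material, keep flying
```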
Sobol' sensitivity indices (SI) provide robust and accurate measures of how much uncertainty in output quantities is caused by different uncertain input parameters. These allow analysts to prioritize future work to either reduce or better quantify the effects of the most important uncertain parameters. One of the most common approaches to computing SI requires Monte Carlo (MC) sampling of uncertain parameters and full physics code runs to compute the response for each of these samples. In the case that the physics code is a MC radiation transport code, this traditional approach to computing SI presents a workflow in which the MC transport calculation must be sufficiently resolved for each MC uncertain parameter sample. This process can be prohibitively expensive, especially since thousands or more particle histories are often required for each of thousands of uncertain parameter samples. We propose a process for computing SI in which only a few MC radiation transport histories are simulated before sampling new uncertain parameter values. We use Embedded Variance Deconvolution (EVADE) to parse the desired parametric variance from the MC transport variance on each uncertain parameter sample. To provide a relevant benchmark, we propose a new radiation transport benchmark problem and derive analytic solutions for its outputs, including SI. The new EVADE-based approach is found to converge with MC convergence behavior and to be at least an order of magnitude more precise than the traditional approach, for the same computational cost, for several SI on our test problem.
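For comparison, the traditional approach referenced above typically uses a pick-freeze (Saltelli-type) design, with a fully resolved transport calculation behind every model evaluation; the sketch below shows one common form of that estimator with a stand-in model function (everything here is illustrative, not the EVADE-based method):

```python
import numpy as np

def first_order_sobol(model, n_samples, n_params, rng):
    """Estimate first-order Sobol' indices with a pick-freeze design.
    model maps an (n_params,) vector to a scalar response; in the transport
    setting each call would itself be a resolved Monte Carlo tally."""
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    yA = np.apply_along_axis(model, 1, A)
    yB = np.apply_along_axis(model, 1, B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = []
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # swap in column i from B
        yABi = np.apply_along_axis(model, 1, ABi)
        S.append(np.mean(yB * (yABi - yA)) / var_y)  # Saltelli-style estimator
    return np.array(S)
```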
Thermal sprayed metal coatings are used in many industrial applications, and characterizing the structure and performance of these materials is vital to understanding their behavior in the field. X-ray Computed Tomography (CT) machines enable volumetric, nondestructive imaging of these materials, but precise segmentation of this grayscale image data into discrete material phases is necessary to calculate quantities of interest related to material structure. In this work, we present a methodology to automate the CT segmentation process as well as quantify uncertainty in segmentations via deep learning. Neural networks (NNs) are shown to accurately segment full resolution CT scans of thermal sprayed materials and provide maps of uncertainty that conservatively bound the predicted geometry. These bounds are propagated through calculations of material properties such as porosity that may provide an understanding of anticipated behavior in the field.
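One simple way such uncertainty maps can be propagated to a bulk property like porosity is by thresholding the per-voxel material probability at conservative levels; the thresholds and names below are assumptions for illustration, not the workflow used in this work:

```python
import numpy as np

def porosity_bounds(p_material, low=0.05, high=0.95):
    """Bound porosity from a per-voxel material-probability map in [0, 1].

    A high threshold counts the fewest voxels as material (upper porosity bound);
    a low threshold counts the most voxels as material (lower porosity bound).
    """
    upper = 1.0 - (p_material >= high).mean()
    lower = 1.0 - (p_material >= low).mean()
    nominal = 1.0 - (p_material >= 0.5).mean()
    return lower, nominal, upper
```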
Radiation transport in stochastic media is a challenging problem type relevant for applications such as meteorological modeling, heterogeneous radiation shields, BWR coolant, and pebble-bed reactor fuel. A commonly cited challenge for methods performing transport in stochastic media is to be simultaneously accurate and efficient. Conditional Point Sampling (CoPS), a new method for transport in stochastic media, was recently shown to have accuracy comparable to the most accurate approximate methods for a common 1D benchmark set. In this paper, we use a pseudo-interface-based approach to extend CoPS to application in multi-D for Markovian-mixed media, compare its accuracy with published results for other approximate methods, and examine its accuracy and efficiency as a function of user options. CoPS is found to be the most accurate of the compared methods on the examined benchmark suite for transmittance and comparable in accuracy with the most accurate methods for reflectance and internal flux. Numerical studies examine accuracy and efficiency as a function of user parameters, providing insight for effective parameter selection and further method development. Because the authors did not implement any of the other approximate methods, a valid efficiency comparison with those methods is not yet possible.
Radiation transport in stochastic media is a problem found in a multitude of applications, and there remains a need for tools capable of thoroughly modeling this type of problem. A collection of approximate methods has been developed to produce accurate mean results, but methods that can also quantify the spread of results caused by the randomness of material mixing are still in demand. In this work, the new stochastic media transport algorithm Conditional Point Sampling is extended using Embedded Variance Deconvolution so that it can compute the variance caused by material mixing. The accuracy of this approach is assessed for 1D, binary, Markovian-mixed media by comparing results to published benchmark values, and the behavior of the method is studied numerically as a function of user parameters. We demonstrate that this extension of Conditional Point Sampling is able to compute the variance caused by material mixing, with accuracy dependent on the accuracy of the conditional probability function used.
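The variance separation rests on the law of total variance; in informal notation (mine, not the paper's), with Q the tally, x the material realization, M realizations (or cohorts), and n_i histories per realization:

```latex
\operatorname{Var}(Q)
  = \underbrace{\operatorname{Var}_{x}\!\big(\mathbb{E}[Q \mid x]\big)}_{\text{mixing variance}}
  + \underbrace{\mathbb{E}_{x}\!\big[\operatorname{Var}(Q \mid x)\big]}_{\text{transport noise}},
\qquad
\widehat{\sigma}^{2}_{\text{mix}}
  = \widehat{\operatorname{Var}}\big(\bar{Q}_{i}\big)
  - \frac{1}{M}\sum_{i=1}^{M}\frac{\widehat{s}_{i}^{\,2}}{n_{i}}
```

where \bar{Q}_i and \widehat{s}_i^2 denote the per-realization tally mean and variance.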
The objective of this project is to investigate the accuracy of error metrics in SCEPTRE, produce useful benchmarks, identify metrics that do not work well, identify metrics that do work well, and produce easy-to-reference results.