Astra, deployed in 2018, was the first petascale supercomputer to utilize processors based on the ARM instruction set. The system was also the first under Sandia's Vanguard program, which seeks to provide an evaluation vehicle for novel technologies that, with refinement, could be utilized in demanding, large-scale HPC environments. In addition to ARM, several other important first-of-a-kind developments were used in the machine, including new approaches to cooling the datacenter and machine. This article documents our experiences building a power measurement and control infrastructure for Astra. While this infrastructure is often beyond the control of users today, the accurate measurement, cataloging, and evaluation of power, as our experiences show, is critical to the successful deployment of a large-scale platform. While such systems exist in part for other architectures, Astra required new development to support the novel Marvell ThunderX2 processor used in its compute nodes. In addition to documenting the measurement of power during system bring-up and subsequent ongoing routine use, we present results associated with controlling the power usage of the processor, an area of progressively greater interest as datacenters and supercomputing sites look to improve compute/energy efficiency and find additional sources for full-system optimization.
DOE maintains up-to-date documentation of the number of available full drawdowns of each of the caverns at the U.S. Strategic Petroleum Reserve (SPR). This information is important for assessing the SPR's ability to deliver oil to domestic oil companies expeditiously if national or world events dictate a rapid sale and deployment of the oil reserves. Sandia was directed to develop and implement a process to continuously assess and report the evolution of drawdown capacity, the subject of this report. This report covers impacts on drawdown availability due to SPR operations during Calendar Year 2022. A cavern has an available drawdown if, after that drawdown, the long-term stability of the cavern, the cavern field, or the oil quality is not compromised. Thus, determining the number of available drawdowns requires the consideration of several factors regarding cavern and wellbore integrity and stability, including stress states caused by cavern geometry and operations, salt damage caused by dilatant and tensile stresses, the effect of enhanced creep on wellbore integrity, and the sympathetic stress effect of operations on neighboring caverns. Finite-element geomechanical models have been used to determine the stress states in the pillars following successive drawdowns. By computing the tensile and dilatant stresses in the salt, areas of potential structural instability can be identified that may represent red flags for additional drawdowns. These analyses have found that many caverns will maintain structural integrity even when grown via drawdowns to dimensions resulting in a pillar-to-diameter ratio of less than 1.0. The analyses have also confirmed that certain caverns should only be completely drawn down one time. As the SPR caverns are utilized and partial drawdowns are performed to remove oil from the caverns (e.g., for oil sales, purchases, or exchanges authorized by the Congress or the President), the changes to the caverns caused by these procedures must be tracked and accounted for so that an ongoing assessment of each cavern's drawdown capacity may be continued. A methodology for assessing and tracking the available drawdowns for each cavern is reiterated. This report is the latest in a series of annual reports; it includes the baseline available drawdowns for each cavern and the most recent assessment of the evolution of drawdown expenditures. A total of 222 million barrels of oil were released in Calendar Year 2022. A nearly equal amount of raw water was injected, resulting in an estimated 34 million barrels of cavern leaching. Twenty caverns have now expended a full drawdown. Cavern BC 18 has expended all its baseline available drawdowns and has no drawdowns remaining. Cavern BM 103 has expended one of its two baseline drawdowns and is now a single-drawdown cavern. All other caverns with an expenditure went from at-least-5 to at-least-4 remaining drawdowns.
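For readers who want the bookkeeping made concrete, the following minimal Python sketch illustrates the kind of per-cavern drawdown ledger the tracking methodology implies; the class and field names are hypothetical and are not the report's actual process.

```python
# Illustrative per-cavern drawdown ledger (names and structure hypothetical).
from dataclasses import dataclass

@dataclass
class Cavern:
    name: str
    baseline_drawdowns: int    # baseline available full drawdowns
    expended: float = 0.0      # cumulative drawdowns expended

    @property
    def remaining(self) -> float:
        return self.baseline_drawdowns - self.expended

    def record_release(self, fraction_of_full_drawdown: float) -> None:
        """Accumulate a partial drawdown (e.g., a sale or exchange)."""
        self.expended += fraction_of_full_drawdown

bc18 = Cavern("BC 18", baseline_drawdowns=1)
bc18.record_release(1.0)       # a full drawdown expended
print(bc18.remaining)          # 0 -> no drawdowns remaining
```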
The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and ask questions of these simulations similar to social scientists studying real-world systems, and enabled robust evaluation of their causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically looking at the influence of complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out factors that determine the success of social science research.
This presentation provides information on experiments to measure the effect of tantalum (Ta) on critical systems. The talk presents details on the Sandia Critical Experiments Program with the Seven Percent Critical Experiment (7uPCX) and the Burnup Credit Critical Experiment (BUCCX), and highlights motivations, experiment design, and evaluations and publications.
This lecture covers the design of a uranium dioxide-beryllium oxide (UO2-BeO) critical experiment at Sandia. The presentation provides background information on the Annular Core Research Reactor (ACRR), shows experimental and alternative designs, and concludes with a sensitivity analysis.
Finite element models can be used to model and predict the hysteresis and energy dissipation exhibited by nonlinear joints in structures. As a result of the nonlinearity, the frequency and damping of a mode is dependent on excitation amplitude, and when the modes remain uncoupled, quasi-static modal analysis has been shown to efficiently predict this behavior. However, in some cases the modes have been observed to couple such that the frequency and damping of one mode is dependent on the amplitude of other modes. To model the interactions between modes, one must integrate the dynamic equations in time, which is several orders of magnitude more expensive than quasi-static analysis. This work explores an alternative where quasi-static forces are applied in the shapes of two or more modes of vibration simultaneously, and the resulting load–displacement curves are used to deduce the effect of other modes on the effective frequency and damping of the mode in question. This methodology is demonstrated on a simple 2D cantilever beam structure with a single bolted joint which exhibits micro-slip nonlinearity over a range of vibration amplitudes. The predicted frequency and damping are compared with those extracted from a few expensive dynamic simulations of the structure, showing that the quasi-static approach produces reasonable albeit highly conservative bounds on the observed dynamics. This framework is also demonstrated on a 3D structure where dynamic simulations are infeasible.
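As a rough illustration of how amplitude-dependent frequency and damping can be extracted from a quasi-static modal load-displacement (hysteresis) cycle, here is a minimal single-mode Python sketch; it assumes a mass-normalized mode (unit modal mass) and is not the authors' implementation.

```python
import numpy as np

def effective_freq_damping(q, f):
    """Estimate amplitude-dependent natural frequency (Hz) and damping ratio
    of a single mode from one closed quasi-static modal hysteresis cycle.
    q : modal displacement over the cycle
    f : corresponding modal force (mass-normalized mode => unit modal mass)
    """
    amp = 0.5 * (q.max() - q.min())                    # cycle amplitude
    k_eff = (f.max() - f.min()) / (q.max() - q.min())  # secant stiffness
    wn = np.sqrt(k_eff)                                # unit modal mass
    # energy dissipated per cycle = area enclosed by the loop (shoelace formula)
    dissipated = 0.5 * abs(np.sum(f * np.roll(q, -1) - q * np.roll(f, -1)))
    # equivalent viscous damping: zeta = D / (2*pi*k*X^2)
    zeta = dissipated / (2.0 * np.pi * k_eff * amp**2)
    return wn / (2 * np.pi), zeta
```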
Moglen, Rachel L.; Barth, Julius; Gupta, Shagun; Kawai, Eiji; Klise, Katherine A.; Leibowicz, Benjamin D.
Natural disasters pose serious threats to Critical Infrastructure (CI) systems like power and drinking water, sometimes disrupting service for days, weeks, or months. Decision makers can mitigate this risk by hardening CI systems through actions like burying power lines and installing backup generation for water pumping. However, the inherent uncertainty in natural disasters coupled with the high costs of hardening activities make disaster planning a challenging task. We develop a disaster planning framework that recommends asset-specific hardening projects across interdependent power and water networks facing the uncertainty of natural disasters. We demonstrate the utility of our model by applying it to Guayama, Puerto Rico, focusing on the risk posed by hurricanes. Our results show that our proposed optimization approach identifies hardening decisions that maintain a high level of service post-disaster. The results also emphasize power system hardening due to the dependency of the water system on power for water treatment and a higher vulnerability of the power network to hurricane damage. Finally, choosing optimal hardening decisions by hedging with respect to all potential hurricane scenarios and their probabilities produces results that perform better on extreme events and are less variable compared to optimizing for only the average hurricane scenario.
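The closing point, that hedging over all hurricane scenarios outperforms optimizing for the average scenario, can be illustrated with a toy two-stage problem; all numbers below are hypothetical.

```python
import itertools
import numpy as np

# Toy hedging example (all numbers hypothetical): choose hardening projects
# under a budget to minimize expected unserved demand over hurricane scenarios.
cost = np.array([4.0, 2.0, 3.0])          # cost of each hardening project
budget = 6.0
base_loss = np.array([[5.0, 2.0, 3.0],    # scenario 0 (mild): loss if asset unhardened
                      [9.0, 4.0, 7.0]])   # scenario 1 (severe)
prob = np.array([0.8, 0.2])               # scenario probabilities

best = None
for pick in itertools.product([0, 1], repeat=3):   # harden (1) or not (0)
    pick = np.array(pick)
    if cost @ pick > budget:
        continue
    loss_per_scenario = base_loss @ (1 - pick)     # hardened assets lose nothing
    expected = prob @ loss_per_scenario            # hedge over all scenarios
    if best is None or expected < best[0]:
        best = (expected, pick, loss_per_scenario)

print("hedged choice:", best[1], "expected loss:", best[0],
      "loss in severe scenario:", best[2][1])
```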
Pulsed dielectric barrier discharges (DBDs) in He–H2O and He–H2O–O2 mixtures are studied in near-atmospheric conditions using temporally and spatially resolved quantitative 2D imaging of the hydroxyl radical (OH) and hydrogen peroxide (H2O2). The primary goal was to detect and quantify the production of these strongly oxidative species in water-laden helium discharges in a DBD jet configuration, which is of interest for biomedical applications such as disinfection of surfaces and treatment of biological samples. Hydroxyl profiles are obtained by laser-induced fluorescence (LIF) measurements using 282 nm laser excitation. Hydrogen peroxide profiles are measured by photo-fragmentation LIF (PF-LIF), which involves photo-dissociating H2O2 into OH with a 212.8 nm laser sheet and detecting the OH fragments by LIF. The H2O2 profiles are calibrated by measuring PF-LIF profiles in a reference mixture of He seeded with a known amount of H2O2. OH profiles are calibrated by measuring OH-radical decay times and comparing these with predictions from a chemical kinetics model. Two different burst discharge modes with five and ten pulses per burst are studied, both with a burst repetition rate of 50 Hz. In both cases, the dynamics of the OH and H2O2 distributions in the afterglow of the discharge are investigated. Gas temperatures determined from the OH-LIF spectra indicate that gas heating due to the plasma is insignificant. The addition of 5% O2 to the He admixture decreases the OH densities and increases the H2O2 densities. The increased coupled energy in the ten-pulse discharge increases the OH and H2O2 mole fractions, except for H2O2 in the He–H2O–O2 mixture, which is relatively insensitive to the additional pulses.
Cesium vapor thermionic converters are an attractive method of converting high-temperature heat directly to electricity, but theoretical descriptions of the systems have been difficult due to the multi-step ionization of Cs through inelastic electron–neutral collisions. This work presents particle-in-cell simulations of these converters, using a direct simulation Monte Carlo collision model to track 52 excited states of Cs. Here, these simulations show the dominant role of multi-step ionization, which also varies significantly based on both the applied voltage bias and pressure. The electron energy distribution functions are shown to be highly non-Maxwellian in the cases analyzed here. A comparison with previous approaches is presented, and large differences are found in ionization rates due especially to the fact that previous approaches have assumed Maxwellian electron distributions. Finally, an open question regarding the nature of the plasma sheaths in the obstructed regime is discussed. The one-dimensional simulations did not produce stable obstructed regime operation and thereby do not support the double-sheath hypothesis.
Partitioned methods allow one to build a simulation capability for coupled problems by reusing existing single-component codes. In so doing, partitioned methods can shorten code development and validation times for multiphysics and multiscale applications. In this work, we consider a scenario in which one or more of the “codes” being coupled are projection-based reduced order models (ROMs), introduced to lower the computational cost associated with a particular component. We simulate this scenario by considering a model interface problem that is discretized independently on two non-overlapping subdomains. We then formulate a partitioned scheme for this problem that allows the coupling of a ROM “code” for one of the subdomains with a finite element model (FEM) or ROM “code” for the other subdomain. The ROM “codes” are constructed by performing proper orthogonal decomposition (POD) on a snapshot ensemble to obtain a low-dimensional reduced order basis, followed by a Galerkin projection onto this basis. The ROM and/or FEM “codes” on each subdomain are then coupled using a Lagrange multiplier representing the interface flux. To partition the resulting monolithic problem, we first eliminate the flux through a dual Schur complement. Application of an explicit time integration scheme to the transformed monolithic problem decouples the subdomain equations, allowing their independent solution for the next time step. We show numerical results that demonstrate the proposed method’s efficacy in achieving both ROM-FEM and ROM-ROM coupling.
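A minimal sketch of the POD/Galerkin construction described above follows (illustrative only; the paper's interface treatment via Lagrange multipliers and the dual Schur complement is not shown).

```python
import numpy as np

def pod_galerkin(snapshots, K, M, energy=0.9999):
    """Build a POD-Galerkin ROM of M qdd + K q = f on one subdomain.
    snapshots : (n_dof, n_snap) array of full-order solution snapshots
    K, M      : full-order stiffness and mass matrices
    Returns the reduced basis and the projected operators."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # keep enough modes to capture the requested fraction of snapshot energy
    r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    Phi = U[:, :r]                 # reduced-order basis
    K_r = Phi.T @ K @ Phi          # Galerkin-projected stiffness
    M_r = Phi.T @ M @ Phi          # Galerkin-projected mass
    return Phi, K_r, M_r
```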
Experimental-analytical substructuring has been a popular field of research for several years and has seen many great advances for both frequency-based substructuring (FBS) and component mode synthesis (CMS) techniques. To exercise these technical advances, a new benchmark structure has been designed through the SEM dynamic substructuring technical division to serve as a common study piece for researchers in the field. This work contains the first attempts at experimental dynamic substructuring using the new SEM testbed. Complete dynamic substructuring predictions are presented along with an assessment of variability and nonlinear response in the testbed assembly. Systems will be available to check out through the authors beginning in December of 2021, and this paper intends to initiate the round-robin challenge in full.
Multiple Input Multiple Output (MIMO) vibration testing provides the capability to expose a system to a field environment in a laboratory setting, saving both time and money by mitigating the need to perform multiple costly, large-scale field tests. However, MIMO vibration test design is not straightforward, oftentimes relying on engineering judgment and multiple test iterations to determine the selection of response Degrees of Freedom (DOF) and input locations that yield a successful test. This work investigates two DOF selection techniques for MIMO vibration testing to assist with test design: an iterative algorithm introduced in previous work and an Optimal Experiment Design (OED) approach. The iterative approach downselects the control set by removing the DOF that have the smallest impact on overall error, given a target Cross Power Spectral Density matrix and a laboratory Frequency Response Function (FRF) matrix. The OED approach is formulated with the laboratory FRF matrix as a convex optimization problem and solved with a gradient-based optimization algorithm that seeks a set of weighted measurement DOF minimizing a measure of model prediction uncertainty. The DOF selection approaches are used to design MIMO vibration tests using candidate finite element models and simulated target environments. The results are generalized and compared to exemplify the quality of the MIMO test using the selected DOF.
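As a hedged sketch of the OED idea, the following snippet performs a D-optimal-style weighting of candidate FRF rows by maximizing the log-determinant of the weighted information matrix; the exact objective and algorithm in the paper may differ.

```python
import numpy as np

def d_optimal_weights(H, n_iter=500, lr=1e-2):
    """D-optimal sensor weighting (a sketch, not the paper's exact formulation).
    H : (n_candidates, n_inputs) complex FRF rows at one frequency line.
    Maximizes log det of the weighted information matrix sum_i w_i h_i h_i^H
    over weights on the probability simplex via multiplicative gradient ascent."""
    n = H.shape[0]
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        Minfo = (H.conj().T * w) @ H                    # weighted information matrix
        Minv = np.linalg.inv(Minfo + 1e-12 * np.eye(H.shape[1]))
        # d/dw_i log det(Minfo) = h_i^H Minfo^{-1} h_i
        grad = np.real(np.einsum('ij,jk,ik->i', H.conj(), Minv, H))
        w *= np.exp(lr * grad)                          # multiplicative update
        w /= w.sum()                                    # project back to simplex
    return w                                            # large weights => keep DOF
```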
When exposed to mechanical environments such as shock and vibration, electrical connections may experience increased levels of contact resistance associated with the physical characteristics of the electrical interface. A phenomenon known as electrical chatter occurs when these vibrations are large enough to interrupt the electrical signals. It is critical to understand the root causes behind these events because electrical chatter may result in unexpected performance or failure of the system. The root causes span a variety of fields, such as structural dynamics, contact mechanics, and tribology, so a wide range of analyses is required to fully explore the physical phenomenon. This paper intends to provide a better understanding of the relationship between structural dynamics and electrical chatter events. Specifically, an electrical contact assembly composed of a cylindrical pin and a bifurcated structure was studied using high-fidelity simulations. Structural dynamic simulations were performed with both linear and nonlinear reduced-order models (ROMs) to replicate the relevant structural dynamics, and subsequent multi-physics simulations are discussed that relate the contact mechanics of the dynamic interactions between the pin and receptacle to the chatter. Each simulation method was parametrized with data from a variety of dynamic experiments. Both structural dynamics and electrical continuity were observed in the simulation and experimental approaches, so that the relationship between the two could be established.
Multi-axis vibration testing methods can be significantly faster and more accurate than traditional base-excitation vibration qualification testing. Here, a 12-shaker multiple-input/multiple-output (MIMO) test method called intrinsic connection excitation (ICE) is developed and assessed for use on an example aerospace component. In this study, the ICE technique utilizes 12 shakers (one for each boundary-condition attachment degree of freedom of the component), specially designed fixtures, and MIMO control to provide an accurate set of loads and boundary conditions during the test. Acceleration, force, and voltage control provide insight into the viability of this testing method. System field test and ICE test results are compared to traditional single-degree-of-freedom specification development and testing. Results indicate the multi-shaker ICE test provided a much more accurate replication of the system field-test response compared with single-degree-of-freedom testing.
The Big Hill SPR site has a rich data set consisting of multi-arm caliper (MAC) logs collected from the cavern wells. This data set provides insight into the ongoing casing deformation at the Big Hill site. This report summarizes the MAC surveys for each well and presents well longevity estimates where possible. Included in the report is an examination of the well twins for each cavern and a discussion of what may or may not be responsible for the different levels of deformation between some of the well twins. The report also takes a systematic view of the MAC data, presenting spatial patterns of casing deformation and deformation orientation in an effort to better understand the underlying causes. The conclusions present a hypothesis suggesting that the small-scale variations in casing deformation are attributable to similar-scale variations in the character of the salt-caprock interface. These variations do not appear directly related to shear zones or faults.
Visualization of mode shapes is a crucial step in modal analysis. However, the methods to create the test geometry, which typically require arduous hand measurements and approximations of rotation matrices, are crude. This leads to a lengthy test set-up process and a test geometry with potentially high measurement errors. Test and analysis delays can also be experienced if the orientation of an accelerometer is documented incorrectly, which happens more often than engineers would like to admit. To mitigate these issues, a methodology has been created to generate the test geometry (coordinates and rotation matrices) with probe data from a portable coordinate measurement machine (PCMM). This methodology has led to significant reductions in the test geometry measurement time, reductions in test geometry measurement errors, and even reduced test times. Simultaneously, a methodology has also been created to use the PCMM to easily identify desired measurement locations, as specified by a model. This paper will discuss the general framework of these methods and the realized benefits, using examples from actual tests.
Resonant plate and other resonant fixture shock techniques were developed in the 1980s at Sandia National Laboratories as flexible methods to simulate mid-field pyroshock for component qualification. Since that time, many high severity shocks have been specified that take considerable time and expertise to setup and validate. To aid in test setup and to verify the shock test is providing the intended shock loading, it is useful to visualize the resonant motion of the test hardware. Experimental modal analysis is a valuable tool for structural dynamics visualization and model validation. This chapter describes a method to perform experimental modal testing at pyroshock excitation levels, utilizing input forces calculated via the SWAT-TEEM (Sum of Weighted Accelerations Technique—Time Eliminated Elastic Motion) method and the measured acceleration responses. The calculated input force and the measured acceleration data are processed to estimate natural frequencies, damping, and scaled mode shapes of a resonant plate test system. The modal properties estimated from the pyroshock-level test environment are compared to a traditional low-level modal test. The differences between the two modal tests are examined to determine the nonlinearity of the resonant plate test system.
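For orientation, a minimal sketch of the SWAT weighting idea, choosing accelerometer weights that annihilate elastic-mode content while preserving rigid-body content, is shown below; the actual SWAT-TEEM formulation adds the time-domain elimination of elastic motion and differs in detail.

```python
import numpy as np

def swat_weights(phi_rigid, phi_elastic):
    """Weights for a Sum of Weighted Accelerations style force estimate (sketch).
    phi_rigid   : (n_sensors,) rigid-body mode shape at the sensor locations
    phi_elastic : (n_sensors, n_e) elastic mode shapes to be annihilated
    Solves w^T phi_elastic = 0 with w^T phi_rigid = 1 in a least-squares sense,
    so that sum_i w_i * a_i(t) tracks rigid-body motion (hence applied force)."""
    A = np.column_stack([phi_rigid, phi_elastic]).T   # (1 + n_e, n_sensors)
    b = np.zeros(A.shape[0])
    b[0] = 1.0                                        # preserve rigid-body content
    w, *_ = np.linalg.lstsq(A, b, rcond=None)         # minimum-norm weight vector
    return w
```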
We study both conforming and non-conforming versions of the practical DPG method for the convection-reaction problem. We determine that the most common approach to DPG stability analysis, the construction of a local Fortin operator, is infeasible for the convection-reaction problem. We then develop a line of argument based on a direct proof of discrete stability; we find that employing a polynomial enrichment for the test space does not suffice for this purpose, motivating the introduction of a (two-element) subgrid mesh. The argument combines mathematical analysis with numerical experiments.
Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
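A minimal sketch of how such metrics can be computed on a causal structure diagram represented as a directed graph follows; the definitions are paraphrased from the abstract (cyclomatic complexity M = E − N + 2P; feedback density as the fraction of edges participating in cycles) and may differ from the article's exact formulas.

```python
import networkx as nx

def causal_metrics(edges):
    """Cyclomatic complexity and a feedback-density-style metric for a causal
    structure diagram given as a list of directed edges (paraphrased forms)."""
    G = nx.DiGraph(edges)
    E, N = G.number_of_edges(), G.number_of_nodes()
    P = nx.number_weakly_connected_components(G)
    cyclomatic = E - N + 2 * P
    # edges inside a nontrivial strongly connected component lie on some cycle
    scc = [c for c in nx.strongly_connected_components(G) if len(c) > 1]
    feedback_edges = sum(1 for u, v in G.edges()
                         if any(u in c and v in c for c in scc))
    feedback_density = feedback_edges / E if E else 0.0
    return cyclomatic, feedback_density

# a -> b -> c -> a forms a feedback loop; c -> d does not
print(causal_metrics([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))  # (2, 0.75)
```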
While research in multiple-input/multiple-output (MIMO) random vibration testing techniques, control methods, and test design has been increasing in recent years, research into specifications for these types of tests has not kept pace. This is perhaps due to the very particular requirement for most MIMO random vibration control specifications – they must be narrowband, fully populated cross-power spectral density matrices. This requirement puts constraints on the specification derivation process and restricts the application of many of the traditional techniques used to define single-axis random vibration specifications, such as averaging or straight-lining. This requirement also restricts the applicability of MIMO testing by requiring a very specific and rich field test data set to serve as the basis for the MIMO test specification. Here, frequency-warping and channel averaging techniques are proposed to soften the requirements for MIMO specifications with the goal of expanding the applicability of MIMO random vibration testing and enabling tests to be run in the absence of the necessary field test data.
Bayesian inference is a technique that researchers have recently employed to solve inverse problems in structural dynamics and acoustics. More specifically, this technique can identify the spatial correlation of a distributed set of pressure loads generated during vibroacoustic testing. In this context, Bayesian inference augments the experimenter's knowledge of the acoustic field before testing with vibration measurements at several locations on the test article to update these pressure correlations. One method to incorporate prior knowledge is to use a theoretical form of the correlations; however, theoretical forms only exist for a few special cases, e.g., a diffuse field or uncorrelated pressures. For more complex loading scenarios, such as those arising in a direct-field acoustic test, utilizing one of these theoretical priors may not accurately reproduce the acoustic loading generated during the experiment. As such, this work leverages the pressure correlations generated from an acoustic simulation as the Bayesian prior to increase the accuracy of the inference for complex loading scenarios.
Piezoelectric stack actuators can convert an electrical stimulus into a mechanical displacement, which facilitates their use as a vibration-excitation mechanism for modal and vibration testing. Due to their compact nature, they are especially suitable for applications where typical electrodynamic shakers may not be physically feasible, e.g., on small-scale centrifuge/vibration (vibrafuge) testbeds. As such, this work details an approach to extract modal parameters using a distributed set of stack actuators incorporated into a vibrafuge system to provide the mechanical inputs. A derivation that considers a lumped-parameter stack actuator model shows that the transfer functions relating the mechanical responses to the piezoelectric voltages are in a similar form to conventional transfer functions relating the mechanical responses to mechanical forces, which enables typical curve-fitting algorithms to extract the modal parameters. An experimental application consisted of extracting modal parameters from a simple research structure on the centrifuge’s arm excited by the vibrafuge’s stack actuators. A modal test that utilized a modal hammer on the same structure with the centrifuge arm stationary produced similar modal parameters as the modal parameters extracted from the combined-environments testing with low-level inertial loading.
Reactive classical molecular dynamics simulations of sodium silicate glasses, xNa2O–(100 − x)SiO2 (x = 10–30), under quasi-static loading were performed to analyze molecular-scale fracture mechanisms. Mechanical properties of the sodium silicate glasses were consistent with experimentally reported values, and the amount of crack propagation varied with reported fracture toughness values. The most crack propagation occurred in NS20 systems (20-mol% Na2O) compared with the other simulated compositions. Dissipation via two mechanisms, the first through sodium migration as a lower activation energy process and the second through structural rearrangement as a higher activation energy process, was calculated and accounted for the energy that was not stored elastically or associated with the formation of new fracture surfaces. A correlation between crack propagation and energy dissipation was identified, with systems exhibiting higher crack propagation showing less energy dissipation. Sodium silicate glass compositions with lower energy dissipation also exhibited the most sodium movement and structural rearrangement within 10 Å of the crack tip during loading. Thus, high sodium mobility near the crack tip may enable energy dissipation without requiring the formation of structural defects, and the varying mobilities of the network modifiers near crack tips influence the brittleness and the crack growth rate of modified amorphous oxide systems.
Aerospace structures are often subjected to combined inertial acceleration and vibration environments during operation. Traditional qualification approaches independently assess a system under inertial and vibration environments but are incapable of addressing couplings in system response under combined environments. Considering combined environments throughout the design and qualification of a system requires development of both analytical and experimental capabilities. Recent ground testing efforts have improved the ability to replicate flight conditions and aid qualification by incorporating combined centrifuge acceleration and vibration environments in a “vibrafuge” test. Modeling these loading conditions involves the coupling of multiple physical phenomena to accurately capture dynamic behavior. In this work, finite element analysis and model validation of a simple research structure was conducted using Sandia’s SIERRA analysis suite. Geometric preloading effects due to an applied inertial load were modeled using SIERRA coupled analysis capability, and structural dynamics analysis was performed to evaluate the updated structural response compared to responses under vibration environments alone. Results were validated with vibrafuge testing, using a test setup of amplified piezoelectric actuators on a centrifuge arm.
Computational engineering models often contain unknown entities (e.g., parameters, initial and boundary conditions) that require estimation from other measured observable data. Estimating such unknown entities is challenging when they involve spatio-temporal fields, because such functional variables often require an infinite-dimensional representation. We address this problem by transforming an unknown functional field using Alpert wavelet bases and truncating the resulting spectrum. The problem hence reduces to the estimation of a few coefficients, which can be performed using common optimization methods. We apply this method to a one-dimensional heat transfer problem where we estimate the heat source field varying in both time and space. The observable data comprises temperatures measured at several thermocouples in the domain, which is composed of either copper or stainless steel. The optimization using our wavelet-based method is able to estimate the heat source with an error between 5% and 7%. We analyze the effect of the domain material and the number of thermocouples, as well as the sensitivity to the initial guess of the heat source. Finally, we estimate the unknown heat source using a different approach based on deep learning techniques, where we consider the input and output of a multi-layer perceptron in wavelet form. We find that this deep learning approach is more accurate than the optimization approach, with errors below 4%.
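The truncated-spectrum idea can be sketched in a few lines: represent the unknown source with a handful of basis coefficients and fit them to thermocouple data. Below, a generic cosine basis and a linear stand-in forward model replace the Alpert wavelets and the actual heat-transfer solve; both substitutions are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def basis(x, k):
    return np.cos(k * np.pi * x)           # placeholder spectral basis on [0, 1]

def forward_temperature(coeffs, x_tc):
    """Hypothetical forward model mapping source coefficients to thermocouple
    temperatures; in practice this is a transient heat-transfer solve."""
    q = sum(c * basis(x_tc, k) for k, c in enumerate(coeffs))
    return 0.1 * q                          # stand-in linear thermal response

x_tc = np.linspace(0.1, 0.9, 8)             # thermocouple locations
true_coeffs = np.array([1.0, 0.5, -0.3, 0.0, 0.0])
data = forward_temperature(true_coeffs, x_tc)

# estimate the 5 retained coefficients from the (synthetic) measurements
res = least_squares(lambda c: forward_temperature(c, x_tc) - data,
                    x0=np.zeros(5))
print(np.round(res.x, 3))                   # recovers true_coeffs
```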
Clem, Paul G.; Nieves, Cesar A.; Yuan, Mengxue Y.; Ogrinc, Andrew L.; Furman, Eugene F.; Kim, Seong H.; Lanagan, Michael T.
Ionic conduction in silicate glasses is mainly influenced by the nature, concentration, and mobility of the network-modifying (NWM) cations. The electrical conduction in soda-lime silicate (SLS) glass is dominated by the ionic migration of sodium moving from the anode to the cathode. An activation energy of 0.82 eV was calculated for this conduction process, in good agreement with values previously reported. The conduction process associated with the leakage current and relaxation peak in thermally stimulated depolarization current (TSDC) measurements for high-purity fused silica (HPFS) is attributed to conduction between nonbridging oxygen hole centers (NBOHC). It is suggested that ≡Si–OH = ≡Si–O− + H0 under thermo-electric poling, promoting hole or proton injection from the anode and responsible for the 1.5 eV relaxation peak. No previous TSDC data have been found to corroborate this mechanism. The higher activation energy and lower current intensity for the coated HPFS might be attributed to a lower concentration of NBOHC after heat treatment (Si–OH + OH–Si = Si–O–Si + H2O). This could explain the TSDC signal around room temperature for the coated HPFS. Another possible explanation could be a redox reaction at the anode region dominating the current response.
National security applications require artificial neural networks (ANNs) that consume less power, are fast and dynamic online learners, are fault tolerant, and can learn from unlabeled and imbalanced data. We explore whether two fundamentally different, traditional learning algorithms from artificial intelligence and the biological brain can be merged. We tackle this problem from two directions. First, we start from a theoretical point of view and show that the spike time dependent plasticity (STDP) learning curve observed in biological networks can be derived using the mathematical framework of backpropagation through time. Second, we show that transmission delays, as observed in biological networks, improve the ability of spiking networks to perform classification when trained using a backpropagation of error (BP) method. These results provide evidence that STDP could be compatible with a BP learning rule. Combining these learning algorithms will likely lead to networks more capable of meeting our national security missions.
In order to meet 2025 goals for enhanced peak power (100 kW), specific power (50 kW/L), and reduced cost ($3.3/kW) in a motor that can operate at ≥ 20,000 rpm, improved soft magnetic materials must be developed. Better performing soft magnetic materials will also enable rare-earth-free electric motors. In fact, replacement of permanent magnets with soft magnetic materials was highlighted in the Electrical and Electronics Technical Team (EETT) Roadmap as an R&D pathway for meeting 2025 targets. Eddy current losses in conventional soft magnetic materials, such as silicon steel, begin to significantly impact motor efficiency as rotational speed increases. Soft magnetic composites (SMCs), which combine magnetic particles with an insulating matrix to boost electrical resistivity (ρ) and decrease eddy current losses, even at higher operating frequencies (or rotational speeds), are an attractive solution. Today, SMCs are being fabricated with values of ρ ranging between 10⁻³ and 10⁻¹ μΩ·m, which is significantly higher than 3% silicon steel (~0.05 μΩ·m). The isotropic nature of SMCs is ideally suited for motors with 3D flux paths, such as axial flux motors. Additionally, the manufacturing cost of SMCs is low, and they are highly amenable to advanced manufacturing and net-shaping into complex geometries, which further reduces manufacturing costs. There is still significant room for advancement in SMCs, and therefore additional improvements in electrical machine performance. For example, despite the inclusion of a non-magnetic insulating material, the electrical resistivities of SMCs are still far below that of soft ferrites (10–10⁸ μΩ·m).
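The role of resistivity can be seen from the classical thin-lamination eddy-current loss estimate (a textbook relation, not the report's model):

```latex
% Eddy-current loss density in a lamination (or particle) of thickness d,
% at frequency f and peak induction B_max:
P_e \approx \frac{\pi^2 f^2 B_{\max}^2 d^2}{6\,\rho}
% Loss grows as f^2 and falls as 1/rho, so the one-to-two order-of-magnitude
% resistivity gain of SMCs strongly suppresses loss at high rotational speed.
```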
Clays are known for their small particle sizes and complex layer stacking. We show here that the limited dimension of clay particles arises from the lack of long-range order in low-dimensional systems. Because of its weak interlayer interaction, a clay mineral can be treated as two separate low-dimensional systems: a 2D system for individual phyllosilicate layers and a quasi-1D system for layer stacking. The layer stacking or ordering in an interstratified clay can be described by a 1D Ising model while the limited extension of individual phyllosilicate layers can be related to a 2D Berezinskii–Kosterlitz–Thouless transition. This treatment allows for a systematic prediction of clay particle size distributions and layer stacking as controlled by the physical and chemical conditions for mineral growth and transformation. Clay minerals provide a useful model system for studying a transition from a 1D to 3D system in crystal growth and for a nanoscale structural manipulation of a general type of layered materials.
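For reference, the 1D Ising form invoked for layer stacking can be written as follows, with the standard mapping of symbols (the paper's parameterization may differ): s_i = ±1 labels the type of layer i, J sets the preference for like or alternating stacking, and h biases the relative abundance of the two layer types.

```latex
H = -J \sum_i s_i\, s_{i+1} - h \sum_i s_i
```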
The method-of-moments implementation of the electric-field integral equation (EFIE) yields many code-verification challenges due to the various sources of numerical error and their possible interactions. Matters are further complicated by singular integrals, which arise from the presence of a Green's function. To address these singular integrals, an approach is presented wherein both the solution and the Green's function are manufactured. Because the resulting equations are poorly conditioned, they are reformulated as a set of constraints for an optimization problem that selects the solution closest to the manufactured solution. In this paper, we demonstrate how, for such practically singular systems of equations, computing the truncation error by inserting the exact solution into the discretized equations cannot detect certain orders of coding errors. On the other hand, the discretization error from the optimal solution is a more sensitive metric that can detect orders less than those of the expected convergence rate.
The effect of crystallography on transgranular chloride-induced stress corrosion cracking (TGCISCC) of arc welded 304L austenitic stainless steel is studied on >300 grains along crack paths. Schmid and Taylor factor mismatches across grain boundaries (GBs) reveal that cracks propagate either from a hard to soft grain, which can be explained merely by mechanical arguments, or soft to hard grain. In the latter case, finite element analysis reveals that TGCISCC will arrest at GBs without sufficient mechanical stress, favorable crystallographic orientations, or crack tip corrosion. GB type does not play a significant role in determining TGCISCC cracking behavior nor susceptibility. TGCISCC crack behaviors at GBs are discussed in the context of the competition between mechanical, crystallographic, and corrosion factors.
Picuris Pueblo is a small tribal community in Northern New Mexico consisting of about 306 members and 86 homes. Picuris Pueblo has made advances with renewable energy implementation, including the installation of a 1 megawatt photovoltaic (PV) array. This array has provided the tribe with economic and other benefits that contribute toward the tribe's goal of tribal sovereignty. The tribe is seeking to implement more PV generation as well as battery energy storage systems, and is considering different implementation methods, including the formation of a microgrid system. This report studies the potential implementation of a PV and battery storage microgrid system and the associated benefits and challenges. The benefits of a microgrid system include cost savings, increased resiliency, and increased tribal sovereignty, and they align with the tribe's goals of becoming energy independent and lowering the cost of electricity.
The Strategic Petroleum Reserve (SPR) is the world's largest supply of emergency crude oil. The reserve consists of four sites in Louisiana and Texas. Each site stores crude in deep, underground salt caverns. It is the mission of the SPR's Enhanced Monitoring Program to examine all available data to inform our understanding of each site. This report discusses the monitoring data, processes, and results for each of the four sites for fiscal year 2022.
Mobile sources is a term most commonly used to describe radioactive sources that are used in applications requiring frequent transportation. Such radioactive sources are in common use worldwide, with typical applications including radiographic non-destructive evaluation (NDE) and oil and gas well logging, among others requiring lesser amounts of radioactivity. This report provides a general overview of mobile sources used for well logging and industrial radiography applications, including the radionuclides used, equipment, and alternative technologies. The information presented here has been extracted from a larger study on common mobile radiation sources and their use.
As part of the project "Designing Resilient Communities (DRC): A Consequence-Based Approach for Grid Investment," funded by the United States (US) Department of Energy's (DOE) Grid Modernization Laboratory Consortium (GMLC), Sandia National Laboratories (Sandia) is partnering with a variety of government, industry, and university participants to develop and test a framework for community resilience planning focused on modernization of the electric grid. This report provides a summary of the section of the project focused on hardware demonstration of the "resilience nodes" concept.
The growing demand for bandwidth makes photonic systems a leading candidate for future telecommunication and radar technologies. Integrated photonic systems offer ultra-wideband performance within a small footprint, which can naturally interface with fiber-optic networks for signal transmission. However, it remains challenging to realize narrowband (∼MHz) filters needed for high-performance communications systems using integrated photonics. In this paper, we demonstrate all-silicon microwave-photonic notch filters with 50× higher spectral resolution than previously realized in silicon photonics. This enhanced performance is achieved by utilizing optomechanical interactions to access long-lived phonons, greatly extending available coherence times in silicon. We use a multi-port Brillouin-based optomechanical system to demonstrate ultra-narrowband (2.7 MHz) notch filters with high rejection (57 dB) and frequency tunability over a wide spectral band (6 GHz) within a microwave-photonic link. We accomplish this with an all-silicon waveguide system, using CMOS-compatible fabrication techniques.
Migration of seismic events to deeper depths along basement faults over time has been observed at wastewater injection sites and can be correlated spatially and temporally to the propagation or retardation of pressure fronts and the corresponding poroelastic response to a given operation history. The seismicity rate model has been suggested as a physical indicator for the potential of earthquake nucleation along faults by quantifying the poroelastic response to multiple well operations. Our field-scale model indicates that the migrating patterns of the 2015–2018 seismicity observed near Venus, TX are likely attributable to the spatio-temporal evolution of the Coulomb stressing rate constrained by the fault permeability. Even after injection volumes were reduced starting in 2015, pore pressure continues to diffuse, and the steady transfer of elastic energy to the deep fault zone consistently increases the stressing rate, which can induce more frequent earthquakes at large distance scales. Sensitivity tests with variation in fault permeability show that (1) slow diffusion along a low-permeability fault limits earthquake nucleation near the injection interval, while (2) rapid relaxation of pressure buildup within a high-permeability fault, caused by reducing injection volumes, may promptly mitigate the seismic potential.
Hydrogen is an attractive option for energy storage because it can be produced from renewable sources and produces environmentally benign byproducts. However, the volumetric energy density of molecular hydrogen at ambient conditions is low compared to other storage methods like batteries, so it must be compressed to attain a viable energy density for applications such as transportation. Nanoporous materials have attracted significant interest for gas storage because they can attain high storage density at lower pressure than conventional compression. In this work, we examine how to improve the cryogenic hydrogen storage capacity of a series of porous aromatic frameworks (PAFs) by controlling the pore size and increasing the surface area by adding functional groups. We also explore tradeoffs in gravimetric and volumetric measures of the hydrogen storage capacity and the effects of temperature swings using grand canonical Monte Carlo simulations. We also consider the effects of adding functional groups to the metal–organic framework NU-1000 to improve its hydrogen storage capacity. We find that highly flexible alkane chains do not improve the hydrogen storage capacity in NU-1000 because they do not extend into the pores; however, rigid chains containing alkyne groups do increase the surface area and hydrogen storage capacity. Finally, we demonstrate that the deliverable capacity of hydrogen in NU-1000 can be increased from 40.0 to 45.3 g/L (at storage conditions of 100 bar and 77 K and desorption conditions of 5 bar and 160 K) by adding long, rigid alkyne chains into the pores.
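The deliverable-capacity figure quoted above is simply the difference between uptake at the storage and desorption conditions; a one-line helper (illustrative, with the paper's stated conditions noted in the docstring) makes the bookkeeping explicit.

```python
def deliverable_capacity(loading_storage, loading_desorption):
    """Usable (deliverable) capacity in g/L: uptake at storage conditions
    (here 100 bar and 77 K) minus what remains adsorbed at desorption
    conditions (here 5 bar and 160 K)."""
    return loading_storage - loading_desorption

# The paper's modified NU-1000 reaches 45.3 g/L deliverable, i.e., storage
# loading exceeds the residual desorption loading by 45.3 g/L.
```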
In this article, we present a general methodology to combine the Discontinuous Petrov–Galerkin (DPG) method in space and time in the context of methods of lines for transient advection–reaction problems. We first introduce a semidiscretization in space with a DPG method, redefining the ideas of optimal testing and practicality of the method in this context. Then, we apply the recently developed DPG-based time-marching scheme, which is of exponential type, to the resulting system of Ordinary Differential Equations (ODEs). We also discuss how to efficiently compute the action of the exponential of the matrix coming from the space semidiscretization without assembling the full matrix. Finally, we verify the proposed method for 1D+time advection–reaction problems, showing optimal convergence rates for smooth solutions and more stable results for linear conservation laws compared to classical exponential integrators.
Morris, K.; Snook, C.; Hoang, T.S.; Hulette, G.; Armstrong, Robert C.; Butler, M.
State chart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that react to environment events with a sequence of state transitions, but they lack formal refinement and rigorous verification methods. State chart models are typically constructed at a concrete level and verified and validated using animation techniques that rely on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. Abstraction and formal verification provide greater assurance that critical (e.g. safety or security) properties are not violated by the control system. In this paper, we introduce a notion of refinement into a ‘run to completion’ state chart modelling notation and leverage Event-B’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how models can be validated at different refinement levels using our scenario checker animation tools. We show how critical invariant properties can be verified by proof despite the reactive nature of the system, and how behavioural aspects of the system can be verified by testing the expected reactions using a temporal-logic model-checking approach. To verify liveness, we outline a proof that the run to completion is deadlock-free and converges to complete the run.
Engineering arrays of active optical centers to control the interaction Hamiltonian between light and matter has been the subject of intense research recently. Collective interaction of atomic arrays with optical photons can give rise to directionally enhanced absorption or emission, which enables engineering of broadband and strong atom-photon interfaces. Here, we report on the observation of long-range cooperative resonances in an array of rare-earth ions controllably implanted into a solid-state lithium niobate micro-ring resonator. We show that cooperative effects can be observed in an ordered ion array extended far beyond the light’s wavelength. We observe enhanced emission from both cavity-induced Purcell enhancement and array-induced collective resonances at cryogenic temperatures. Engineering collective resonances as a paradigm for enhanced light-matter interactions can enable suppression of free-space spontaneous emission. The multi-functionality of lithium niobate hosting rare-earth ions can open possibilities of quantum photonic device engineering for scalable and multiplexed quantum networks.
Numerical algorithms for stiff stochastic differential equations are developed using linear approximations of the fast diffusion processes, under the assumption of decoupling between fast and slow processes. Three numerical schemes are proposed, all of which are based on the linearized formulation albeit with different degrees of approximation. The schemes are of comparable complexity to the classical explicit Euler–Maruyama scheme but can achieve better accuracy at larger time steps in stiff systems. Convergence analysis is conducted for one of the schemes, which is shown to have a strong convergence order of 1/2 and a weak convergence order of 1. The approximations leading to the other two schemes are discussed. Numerical experiments are carried out to examine the convergence of the proposed schemes on model problems.
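To see why a linearized (exponential-type) update helps in the stiff regime, here is a hedged toy comparison on an Ornstein–Uhlenbeck process, dX = −θX dt + σ dW; the scheme below integrates the locally linear drift exactly and is only illustrative of the idea, not one of the paper's three schemes.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, h, n = 50.0, 1.0, 0.05, 200   # stiff: theta*h = 2.5 > 2

def euler_maruyama(x):
    # explicit EM: the deterministic multiplier (1 - theta*h) = -1.5, so the
    # iteration is unstable once theta*h > 2
    return x - theta * x * h + sigma * np.sqrt(h) * rng.standard_normal()

def linearized(x):
    # exact integration of the locally linear drift (exponential update);
    # the noise variance matches the OU statistics over the step
    e = np.exp(-theta * h)
    return x * e + sigma * np.sqrt((1 - e**2) / (2 * theta)) * rng.standard_normal()

x_em, x_lin = 1.0, 1.0
for _ in range(n):
    x_em, x_lin = euler_maruyama(x_em), linearized(x_lin)
print(f"EM: {x_em:.3e}  linearized: {x_lin:.3e}")  # EM blows up; linearized stays O(sigma)
```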
Mitra, Aritra; Richards, John R.; Bagchi, Saurabh; Sundaram, Shreyas
We study the problem of designing a distributed observer for an LTI system over a time-varying communication graph. The limited existing work on this topic imposes various restrictions either on the observation model or on the sequence of communication graphs. In contrast, we propose a single-time-scale distributed observer that works under mild assumptions. Specifically, our communication model only requires strong-connectivity to be preserved over nonoverlapping, contiguous intervals that are even allowed to grow unbounded over time. We show that under suitable conditions that bound the growth of such intervals, joint observability is sufficient to track the state of any discrete-time LTI system exponentially fast, at any desired rate. We also develop a variant of our algorithm that is provably robust to worst-case adversarial attacks, provided the sequence of graphs is sufficiently connected over time. The key to our approach is the notion of a 'freshness-index' that keeps track of the age-of-information being diffused across the network. Such indices enable nodes to reject stale estimates of the state, and, in turn, contribute to stability of the error dynamics.
Understanding the adsorption of isolated metal cations from water onto mineral surfaces is critical for toxic waste retention and cleanup in the environment. Heterogeneous nucleation of metal oxyhydroxides and other minerals on material surfaces is key to crystal growth and dissolution. The link connecting these two areas, namely cation dimerization and polymerization, is far less understood. In this work we apply ab initio molecular dynamics calculations to examine the coordination structure of hydroxide-bridged Cu(II) dimers and the free energy changes associated with Cu(II) dimerization on silica surfaces. The dimer dissociation pathway involves the sequential breaking of two Cu2+–OH− bonds, yielding three local minima in the free energy profiles associated with 0-2 OH− bridges between the metal cations, and requires the design of a (to our knowledge) novel reaction coordinate for the simulations. Cu(II) ions adsorbed on silica surfaces are found to exhibit a stronger tendency toward dimerization than when residing in water. Cluster-plus-implicit-solvent methods yield incorrect trends if OH− hydration is not correctly depicted. The predicted free energy landscapes are consistent with fast equilibrium times (seconds) among adsorbed structures and favor Cu2+ dimer formation on silica surfaces over monomer adsorption.
Nonlocal vector calculus, which is based on the nonlocal forms of gradient, divergence, and Laplace operators in multiple dimensions, has shown promising applications in fields such as hydrology, mechanics, and image processing. In this work, we study the analytical underpinnings of these operators. We rigorously treat compositions of nonlocal operators, prove nonlocal vector calculus identities, and connect weighted and unweighted variational frameworks. We combine these results to obtain a weighted fractional Helmholtz decomposition which is valid for sufficiently smooth vector fields. Our approach identifies the function spaces in which the stated identities and decompositions hold, providing a rigorous foundation to the nonlocal vector calculus identities that can serve as tools for nonlocal modeling in higher dimensions.
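For orientation, the classical (local) Helmholtz decomposition that the weighted fractional result generalizes is shown below; the paper's version replaces the differential operators with their weighted nonlocal counterparts and identifies the function spaces in which it holds.

```latex
% Classical Helmholtz decomposition of a sufficiently smooth vector field:
u = -\nabla\phi + \nabla\times A,
\qquad \nabla\times(\nabla\phi) = 0, \quad \nabla\cdot(\nabla\times A) = 0.
```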
Commercial vendors, seeking to tap into the market for physical protection of critical infrastructure, are offering nuclear facilities the opportunity to borrow counter-unmanned aircraft system (CUAS) detection equipment to survey the airspace over and around the facility. However, using one vendor or method of detection (e.g., radio frequency [RF], radar, acoustic, visual) will not necessarily provide a complete airspace profile, since no single method can detect all UAS threats. Using several detection technologies, the unmanned aircraft systems (UAS) Team, which supports the U.S. National Nuclear Security Administration (NNSA) Office of International Nuclear Security (INS), would like to offer partners a comprehensive airspace profile of the types and frequency of UAS that fly within and around critical infrastructure. Improved UAS awareness will aid in the risk assessment process.
Unmanned aircraft systems (UAS/drones) are rapidly evolving and are considered an emerging threat by nuclear facilities throughout the world. Due to the wide range of UAS capabilities, members of the workforce and security/response force personnel need to be prepared for a variety of drone incursion situations. Tabletop exercises are helpful, but actual live exercises are often needed to evaluate the quick chain of events that might ensue during a real drone fly-in and the essential kinds of information that will help identify the type of drone and pilot. Even with drone detection equipment, the type of UAS used for incursion drills can have a major impact on detection altitude and finding the UAS in the sky. Using a variety of UAS, the U.S. National Nuclear Security Administration (NNSA) Office of International Nuclear Security (INS) would like to offer partners the capability of adding actual UAS into workforce and response exercises to improve overall UAS awareness as well as the procedures that capture critical steps in dealing with intruding drones.
Metal-assisted chemical etching (MACE) is a flexible technique for texturing the surface of semiconductors. In this work, we study the spatial variation of the etch profile, the effect of angular orientation relative to the crystallographic planes, and the effect of doping type. We employ gold in direct contact with germanium as the metal catalyst, and dilute hydrogen peroxide solution as the chemical etchant. With this catalyst-etchant combination, we observe inverse-MACE, where the area directly under gold is not etched, but the neighboring, exposed germanium experiences enhanced etching. This enhancement in etching decays exponentially with the lateral distance from the gold structure. An empirical formula for the gold-enhanced etching depth as a function of lateral distance from the edge of the gold film is extracted from the experimentally measured etch profiles. The lateral range of enhanced etching is approximately 10–20 μm and is independent of etchant concentration. At length scales beyond a few microns, the etching enhancement is independent of the orientation with respect to the germanium crystallographic planes. The etch rate as a function of etchant concentration follows a power law with exponent smaller than 1. The observed etch rates and profiles are independent of whether the germanium substrate is n-type, p-type, or nearly intrinsic.
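A generic functional form consistent with the reported exponential decay is given below; the paper's fitted empirical formula may include additional terms.

```latex
% Gold-enhanced etch depth vs. lateral distance x from the gold edge:
d(x) \approx d_0 + \Delta d\, e^{-x/\lambda},
\qquad \lambda \approx 10\text{--}20\ \mu\mathrm{m}
% with lambda observed to be independent of etchant concentration.
```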
As machine learning (ML) models are deployed into an ever-diversifying set of application spaces, ranging from self-driving cars to cybersecurity to climate modeling, the need to carefully evaluate model credibility becomes increasingly important. Uncertainty quantification (UQ) provides important information about the ability of a learned model to make sound predictions, often with respect to individual test cases. However, most UQ methods for ML are themselves data-driven and therefore susceptible to the same knowledge gaps as the models themselves. Specifically, UQ helps to identify points near decision boundaries where the models fit the data poorly, yet predictions can score as certain for points that are under-represented by the training data and thus out-of-distribution (OOD). One method for evaluating the quality of both ML models and their associated uncertainty estimates is out-of-distribution detection (OODD). We combine OODD with UQ to provide insights into the reliability of the individual predictions made by an ML model.
Neural operators [1–5] have recently become popular tools for designing solution maps between function spaces in the form of neural networks. Unlike classical scientific machine learning approaches that learn parameters of a known partial differential equation (PDE) for a single instance of the input parameters at a fixed resolution, neural operators approximate the solution map of a family of PDEs [6,7]. Despite their success, the use of neural operators has so far been restricted to relatively shallow neural networks and confined to learning hidden governing laws. In this work, we propose a novel nonlocal neural operator, which we refer to as the nonlocal kernel network (NKN), that is resolution independent, characterized by deep neural networks, and capable of handling a variety of tasks such as learning governing equations and classifying images. Our NKN stems from the interpretation of the neural network as a discrete nonlocal diffusion-reaction equation that, in the limit of infinite layers, is equivalent to a parabolic nonlocal equation, whose stability is analyzed via nonlocal vector calculus. The resemblance to integral forms of neural operators allows NKNs to capture long-range dependencies in the feature space, while the continuous treatment of node-to-node interactions makes NKNs resolution independent. The resemblance to neural ODEs, reinterpreted in a nonlocal sense, together with the stable network dynamics between layers, allows NKN's optimal parameters to generalize from shallow to deep networks. This fact enables the use of shallow-to-deep initialization techniques [8]. Our tests show that NKNs outperform baseline methods in both learning governing equations and image classification tasks and generalize well to different resolutions and depths.
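To make the diffusion-reaction interpretation concrete, the sketch below implements one explicit layer of such an update in plain NumPy. It is a schematic reading of the description above, not the authors' NKN code; the kernel, activation, and quadrature weights are all stand-ins:

```python
import numpy as np

def nkn_layer(h, x, kernel, W, b, tau=0.1):
    """One explicit step of a discrete nonlocal diffusion-reaction update:
    h_{l+1}(x_i) = h_l(x_i) + tau * act( diffusion(x_i) + W h_l(x_i) + b ).

    h      : (n, d) array of node features sampled at locations x
    x      : (n,) array of node locations
    kernel : callable kappa(x_i, x_j) -> (d, d) learned interaction matrix
    W, b   : (d, d) and (d,) reaction-term parameters
    """
    n, d = h.shape
    diffusion = np.zeros_like(h)
    w = 1.0 / n  # quadrature weight for uniformly sampled nodes (an assumption)
    for i in range(n):
        for j in range(n):
            diffusion[i] += w * kernel(x[i], x[j]) @ (h[j] - h[i])
    return h + tau * np.tanh(diffusion + h @ W.T + b)

# Tiny smoke test with a translation-invariant Gaussian kernel
rng = np.random.default_rng(0)
d = 4
kern = lambda xi, xj: np.exp(-(xi - xj) ** 2) * np.eye(d)
h = rng.standard_normal((16, d))      # features on a coarse grid
x = np.linspace(0.0, 1.0, 16)
h_next = nkn_layer(h, x, kern, 0.1 * rng.standard_normal((d, d)), np.zeros(d))
print(h_next.shape)  # (16, 4)
```

Because the double sum is simply a quadrature of the kernel integral over the node locations, the same learned kernel can be re-evaluated on a finer or coarser sampling, which is the sense in which such networks are resolution independent; stacking many small-tau steps is the discrete analogue of the parabolic limit mentioned above.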
Dr. Fitzgerald, a postdoc at Sandia National Laboratories, works in a mechanics of materials group characterizing material properties of ductile materials. Her presentation focuses specifically on increasing the throughput of coefficient of thermal expansion (CTE) measurements with the use of an optical strain measurement technique called digital image correlation (DIC). Currently, the CTE is found through a time-intensive process called dilatometry. There are multiple types of dilatometers. One type, the double push rod mechanical dilatometer, uses an LVDT to measure the expansion of a specimen in one direction, together with a reference material of known properties, to determine the CTE of the specimen in question. Testing about 500 samples using the double push rod mechanical dilatometer would take about 2 years if testing Monday through Friday, because the reference material needs to be at a constant temperature and heating must be done slowly to ensure no thermal gradients develop across the rod. A second type, the scissors-type dilatometer, pinches a sample using a "scissor-like" appendage and likewise uses an LVDT to measure thermal expansion as the sample is heated. Finally, laser dilatometry was created to provide a non-contact means to measure thermal expansion. This process greatly reduces the time required to set up a measurement but is still only able to measure one sample at a time; the time required to test 500 samples is reduced to 3.5 weeks. Additionally, to measure expansion in different directions, multiple lasers must be used. Dr. Fitzgerald solved this conundrum by using DIC to create strain maps in multiple orientations while measuring multiple samples at once. Using this technique, Dr. Fitzgerald can test 500 samples, conservatively, in 2 days.
This report describes research and development (R&D) activities conducted during Fiscal Year 2022 (FY22) specifically related to the Engineered Barrier System (EBS) R&D Work Package in the Spent Fuel Waste Science and Technology (SFWST) Campaign supported by the United States (U.S.) Department of Energy (DOE). The R&D activities focus on understanding EBS component evolution and interactions within the EBS, as well as interactions between the host media and the EBS. The R&D team represented in this report consists of individuals from Sandia National Laboratories, Lawrence Berkeley National Laboratory (LBNL), Los Alamos National Laboratory (LANL), and Vanderbilt University. EBS R&D work also leverages international collaborations to ensure that the DOE program is active and abreast of the latest advances in nuclear waste disposal.
The methyl radical plays a central role in plasma-assisted hydrocarbon chemistry but is challenging to detect due to its high reactivity and strongly pre-dissociative electronically excited states. In this work, we report the development of a photo-fragmentation laser-induced fluorescence (PF-LIF) diagnostic for quantitative 2D imaging of methyl profiles in a plasma. This technique provides temporally and spatially resolved measurements of local methyl distributions, including in near-surface regions that are important for plasma-surface interactions such as plasma-assisted catalysis. The technique relies on photo-dissociation of methyl by the fifth harmonic of a Nd:YAG laser at 212.8 nm to produce CH fragments. These photofragments are then detected with LIF imaging by exciting a transition in the B-X(0, 0) band of CH with a second laser at 390 nm. Fluorescence from the overlapping A-X(0, 0), A-X(1, 1), and B-X(0, 1) bands of CH is detected near 430 nm with the A-state populated by collisional B-A electronic energy transfer. This non-resonant detection scheme enables interrogation close to a surface. The PF-LIF diagnostic is calibrated by producing a known amount of methyl through photo-dissociation of acetone vapor in a calibration gas mixture. We demonstrate PF-LIF imaging of methyl production in methane-containing nanosecond pulsed plasmas impinging on dielectric surfaces. Absolute calibration of the diagnostic is demonstrated in a diffuse, plane-to-plane discharge. Measured profiles show a relatively uniform distribution of up to 30 ppm of methyl. Relative methyl measurements in a filamentary plane-to-plane discharge and a plasma jet reveal highly localized intense production of methyl. The utility of the PF-LIF technique is further demonstrated by combining methyl measurements with formaldehyde LIF imaging to capture spatiotemporal correlations between methyl and formaldehyde, which is an important intermediate species in plasma-assisted oxidative coupling of methane.
This manual describes the installation and use of the Xyce™ XDM Netlist Translator. XDM simplifies the translation of netlists generated by commercial circuit simulator tools into Xyce-compatible netlists. XDM currently supports translation from PSpice, HSPICE, and Spectre netlists into Xyce™ netlists.
Elastomeric rubbers serve a vital role as sealing materials in the hydrogen storage and transport infrastructure. With applications including O-rings and hose-liners, these components are exposed to pressurized hydrogen at a range of temperatures, cycling rates, and pressure extremes. Cyclic (de)pressurization is known to degrade these materials through the process of cavitation. This readily visible failure mode occurs as a fracture or rupture of the material and is due to the oversaturated gas localizing to form gas bubbles. Computational modeling in the Hydrogen Materials Compatibility Program (H-Mat), co-led by Sandia National Laboratories and Pacific Northwest National Laboratory, employs multi-scale simulation efforts to build a predictive understanding of hydrogen-induced damage in materials. Modeling efforts within the project aim to provide insight into how to formulate materials that are less sensitive to high-pressure hydrogen-induced failure. In this document, we summarize results from atomistic molecular dynamics simulations, which make predictive assessments of the effects of compositional variations in the commonly used elastomer, ethylene propylene diene monomer (EPDM).
Cyber security has been difficult to quantify from the perspective of defenders. The effort required to develop a cyber-attack with a given ability, function, or consequence has not been rigorously investigated in Operational Technology environments. This specification defines a testing structure that allows conformant and repeatable cyber testing on equipment. The purpose of the ETE is to provide the data necessary to analyze and reconstruct cyber-attack timelines, effects, and observables for training and development of Cyber Security Operation Centers. Standardizing the manner in which cyber security on equipment is investigated will allow a greater understanding of the progression of cyber-attacks and of potential mitigation and detection strategies in a scientifically rigorous fashion.
Sangoleye, Fisayo S.; Johnson, Jay; Chavez, Adrian R.; Tsiropoulou, Eirini E.; Marton, Nicholas L.; Hentz, Charles R.; Yannarelli, Albert Y.
Microgrids require reliable communication systems for equipment control, power delivery optimization, and operational visibility. To maintain secure communications, Microgrid Operational Technology (OT) networks must be defensible and cyber-resilient. The communication network must be carefully architected with appropriate cyber-hardening technologies to provide security defenders the data, analytics, and response capabilities to quickly mitigate malicious and accidental cyberattacks. In this work, we outline several best practices and technologies that can support microgrid operations (e.g., intrusion detection and monitoring systems, response tools, etc.). Then we apply these recommendations to the New Jersey TRANSITGRID use case to demonstrate how they would be deployed in practice.
Complex challenges across Sandia National Laboratories' (SNL) mission areas underscore the need for systems-level thinking, resulting in a better understanding of the organizational work systems and environments in which our hardware and software will be used. SNL researchers have successfully used Activity Theory (AT) as a framework to clarify work systems, informing product design, delivery, acceptance, and use. To increase familiarity with AT, a working group assembled to select key resources on the topic and generate an annotated bibliography. The resources in this bibliography are arranged in six categories: 1) An introduction to AT; 2) Advanced readings in AT; 3) AT and human computer interaction (HCI); 4) Methodological resources for practitioners; 5) Case studies; and 6) Related frameworks that have been used to study work systems. This annotated bibliography is expected to improve the reader's understanding of AT and enable more efficient and effective application of it.
Members of the Workforce (MOW) who are exposed to noise levels above 140 dBC, regardless of the hearing protection worn, are required to be enrolled in the SNL Hearing Conservation Program, which includes audiometric testing, online training (HCP100), and wearing hearing protection. Based on the area impact noise sample results, the attenuation provided by the MFCP was protective for mitigating noise to levels below the ACGIH TLV of 140 dBC. The results also validated the scaled distance equation in an open-air environment, as the results at K635 (864 feet) were below 140 dBC.
This report examines the localization of high frequency electromagnetic fields in general three-dimensional cavities along periodic paths between opposing sides of the cavity. The focus is on the case where the mirrors at the ends of the orbit are concave and have two different radii of curvature. The cases where these orbits lead to unstable localized modes are known as scars. The ellipsoidal coordinate system is utilized in the construction of the scarred modes. The field at the interior foci is examined as well as trigonometric projections along the periodic scarred ray path.
Metal hydride hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas. This paper reports on the development of a laboratory-scale two-stage Metal Hydride Compressor (MHC) system with a feed pressure of 150 bar delivering high purity H2 gas at outlet pressures up to 875 bar. Stage 1 and stage 2 AB2 metal hydrides are identified based on experimental characterization of the pressure-composition-temperature (PCT) behavior of candidate materials. The selected metal hydrides are each combined with expanded natural graphite, increasing the thermal conductivity of the composites by an order of magnitude. These composites are integrated in two compressor beds with internal heat exchangers that alternate between hydrogenation and dehydrogenation cycles by thermally cycling between 20 °C and 150 °C. The prototype compressor achieved compression of hydrogen from 150 bar to 700 bar with an average flow rate of 33.6 g/hr.
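The thermodynamic basis for heat-driven compression is the standard van't Hoff relation between a hydride's equilibrium plateau pressure and its bed temperature (a textbook identity, cited here for orientation rather than taken from the paper):

\[
\ln\!\left(\frac{P_{\mathrm{eq}}}{P^{0}}\right) = -\frac{\Delta H_{\mathrm{des}}}{RT} + \frac{\Delta S_{\mathrm{des}}}{R},
\]

where \(\Delta H_{\mathrm{des}}\) and \(\Delta S_{\mathrm{des}}\) are the desorption enthalpy and entropy of the alloy. Because \(\Delta H_{\mathrm{des}} > 0\), cycling a bed from 20 °C to 150 °C raises the plateau pressure by a factor of \(\exp[(\Delta H_{\mathrm{des}}/R)(1/293\,\mathrm{K} - 1/423\,\mathrm{K})]\), which is what lets each stage absorb hydrogen at low pressure when cold and deliver it at high pressure when hot.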
This document presents the facility-recommended characterization of the neutron, prompt gamma ray, and delayed gamma ray radiation fields in the University of Texas at Austin Nuclear Engineering Teaching Laboratory (NETL) TRIGA reactor for the beam port 1/5 free-field environment at the 128-inch location adjacent to the core centerline. The designation for this environment is NETL-FF-BP1/5-128-cca. The neutron, prompt gamma ray, and delayed gamma ray energy spectra, uncertainties, and covariance matrices are presented, as well as radial and axial neutron and gamma ray fluence profiles within the experiment area of the cavity. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples.
Members of the Nuclear Criticality Safety (NCS) Program at Sandia National Laboratories (SNL) have updated the suite of benchmark problems developed to validate MCNP6 Version 2.0 for use in NCS applications. The updated NCS benchmark suite adds approximately 600 new benchmarks and includes peer review of all input files by two different NCS engineers (or one NCS engineer and one candidate NCS engineer). As with the originally released benchmark suite, the updated suite covers a broad range of fissile material types, material forms, moderators, reflectors, and neutron energy spectra. The benchmark suite provides a basis to establish a bias and bias uncertainty for use in NCS analyses at SNL.
The recent discovery of bright, room-temperature single photon emitters (SPEs) in GaN offers an appealing alternative to the best diamond single photon emitters, given the widespread use and technological maturity of III-nitrides for optoelectronics (e.g., blue LEDs, lasers) and high-speed, high-power electronics. This discovery opens the door to on-chip and on-demand single photon sources integrated with detectors and electronics. Currently, little is known about the underlying defect structure, nor is there a sense of how such an emitter might be controllably created. A detailed understanding of the origin of the SPEs in GaN and a path to deterministically introduce them is required. In this project, we develop new experimental capabilities to investigate single photon emission from GaN nanowires and from both GaN and AlN wafers. We implant ions into our wafers using our focused ion beam nanoimplantation capabilities at Sandia, going beyond typical broad-beam implantation to create single photon emitting defects with nanometer precision. We have created light emitting sources using Li+ and He+, but single photon emission has yet to be demonstrated. In parallel, we calculate the energy levels of defects and transition metal substitutions in GaN to gain a better understanding of the sources of single photon emission in GaN and AlN. The combined experimental and theoretical capabilities developed throughout this project will enable further investigation into the origins of single photon emission from defects in GaN, AlN, and other wide bandgap semiconductors.
The Sandia National Laboratories site sustainability plan and its associated DOE Sustainability Dashboard data entries encompass Sandia National Laboratories contributions toward meeting the DOE sustainability goals. This site sustainability plan fulfills the contractual requirement for National Technology & Engineering Solutions of Sandia, LLC, the management and operating contractor for Sandia National Laboratories, to deliver an annual sustainability plan to the DOE National Nuclear Security Administration Sandia Field Office.
The goal of the project was to protect US critical infrastructure and improve energy security through technical analysis of the risk landscape presented by the anticipated massive deployment of interoperable EV chargers.
While the use of machine learning (ML) classifiers is widespread, their output is often not part of any follow-on decision-making process. To illustrate, consider the scenario where we have developed and trained an ML classifier to find malicious URL links. In this scenario, network administrators must decide whether to allow a computer user to visit a particular website, or to instead block access because the site is deemed malicious. It would be very beneficial if decisions such as these could be made automatically using a trained ML classifier. Unfortunately, due to a variety of reasons discussed herein, the output from these classifiers can be uncertain, rendering downstream decisions difficult. Herein, we provide a framework for: (1) quantifying and propagating uncertainty in ML classifiers; (2) formally linking ML outputs with the decision-making process; and (3) making optimal decisions for classification under uncertainty with single or multiple objectives.
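As a concrete illustration of point (3), consider the malicious-URL scenario above. The sketch below (our own toy example, with a hypothetical loss matrix, not the framework's implementation) chooses the loss-minimizing action under an uncertain classifier output:

```python
import numpy as np

# Hypothetical loss matrix: rows = actions (0 = allow, 1 = block),
# columns = true class (0 = benign, 1 = malicious). Values are illustrative.
LOSS = np.array([[0.0, 10.0],   # allowing a malicious site is very costly
                 [1.0,  0.0]])  # blocking a benign site is mildly costly

def decide(prob_samples):
    """Choose the action minimizing expected loss under classifier uncertainty.

    prob_samples : (S, 2) array of sampled class-probability vectors,
                   e.g. from an ensemble or a Bayesian classifier.
    """
    p_bar = prob_samples.mean(axis=0)   # average over the uncertainty samples
    expected_loss = LOSS @ p_bar        # expected loss of each action
    return int(np.argmin(expected_loss)), expected_loss

# Example: an uncertain prediction hovering around 40% malicious
rng = np.random.default_rng(0)
p_mal = rng.beta(4, 6, size=(200, 1))
action, losses = decide(np.hstack([1 - p_mal, p_mal]))
print(action, losses)  # blocks (action 1) even though p(malicious) < 0.5
```

The asymmetric losses make the optimal decision diverge from simply thresholding the predicted probability at 0.5, which is the essential point of formally linking classifier outputs to the downstream decision.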
Battery storage systems are increasingly being installed at photovoltaic (PV) sites to address supply-demand balancing needs. Although there is some understanding of costs associated with PV operations and maintenance (O&M), costs associated with emerging technologies such as PV plus storage lack details about the specific systems and/or activities that contribute to the cost values. This study aims to address this gap by exploring the specific factors and drivers contributing to O&M activity costs at utility-scale PV plus storage systems (UPVS), including technology selection, data collection, and related ongoing challenges. Specifically, we used semi-structured interviews and questionnaires to collect information and insights from utility-scale owners and operators. Data were collected from 14 semi-structured interviews and questionnaires representing 51.1 MW with 64.1 MWh of installed battery storage capacity within the United States (U.S.). Differences in degradation rate, expected life cycle, and capital costs are observed across different storage technologies. Most O&M activities at UPVS were related to correcting under-performance. Fires and venting issues are the leading safety concerns, and owner-operators have installed additional systems to mitigate these issues. There are ongoing O&M challenges due to the lack of storage-specific performance metrics as well as poor vendor reliability and parts availability. Insights from this work will improve our understanding of O&M considerations at PV plus storage sites.
This document contains the design and operation principles for the wind turbine emulator (WTE) located in the Distributed Energy Technologies Laboratory (DETL) at Sandia National Laboratories (Sandia). The wind turbine emulator is a power hardware-in-the-loop (PHIL) representation of the research wind turbines located in Lubbock, Texas at the Sandia Scaled Wind Farm Technology (SWiFT) facility. This document describes installation and commissioning steps, and it provides references to component manuals and specifications.
The HyRAM+ software toolkit provides a basis for conducting quantitative risk assessment and consequence modeling for hydrogen, natural gas, and autogas systems. HyRAM+ is designed to facilitate the use of state-of-the-art models to conduct robust, repeatable assessments of safety, hazards, and risk. HyRAM+ integrates deterministic and probabilistic models for quantifying leak sizes and rates, predicting physical effects, characterizing hazards (thermal effects from jet fires, overpressure effects from delayed ignition), and assessing impacts on people. HyRAM+ is developed at Sandia National Laboratories to support the development and revision of national and international codes and standards, and to provide developed models in a publicly-accessible toolkit usable by all stakeholders. This document provides a description of the methodology and models contained in HyRAM+ version 5.0. The most significant change for HyRAM+ version 5.0 from HyRAM+ version 4.1 is the ability to model blends of different fuels. HyRAM+ was previously only suitable for use with hydrogen, methane, or propane, with users having the ability to use methane as a proxy for natural gas and propane as a proxy for autogas/liquefied petroleum gas. In version 5.0, real natural gas or autogas compositions can be modeled as the fuel, or even blends of natural gas with hydrogen. These blends can be used in the standalone physics models, but not yet in the quantitative risk assessment mode of HyRAM+.
Self-determination has been an ongoing effort for Native American people and gained much traction with the passing of the Energy Policy Act of 2005, which included the Indian Tribal Energy Development and Self-Determination Act. Congress passed this act to assist Native American tribes and Alaska Native villages with planning, development, and assistance to achieve their energy goals. The Ute Mountain Ute Tribe (UMUT) has relied on oil and natural gas for economic support for the last 70 years. Burning fossil fuels, along with oil and gas development, decreases air quality and increases greenhouse gas emissions. Moreover, burning fossil fuels to produce energy is now more costly than many renewable energy sources, including solar photovoltaic (PV) systems. Environmental stewardship, along with the need to maintain revenue generation, has driven UMUT's efforts to achieve energy self-determination by employing PV and exploring other technologies. In the past, the tribe completed a 1 megawatt PV project near Towaoc, Colorado, which serves as a case study of the tribe's energy goals: a future where renewables will dominate their energy landscape. This paper explores UMUT's past and ongoing efforts toward energy independence and how they relate to the broader landscape of Native American energy sovereignty.
The wide variety of inverter control settings for solar photovoltaics (PV) makes accurate knowledge of these settings difficult to obtain in practice. This paper addresses the problem of determining inverter reactive power control settings from net-load advanced metering infrastructure (AMI) data. The estimation is first cast as fitting parameterized control curves. We argue for an intuitive and practical approach to preprocess the AMI data, which exposes the setting to be extracted. We then develop a more general approach with a data-driven reactive power disaggregation algorithm, reframing the problem as maximum likelihood estimation of the native-load reactive power. These methods form the first approach for reconstructing reactive power control settings of solar PV inverters from net-load data. The constrained curve fitting algorithm is tested on 701 loads with behind-the-meter (BTM) PV systems with identical control settings. The settings are accurately reconstructed, with mean absolute percentage errors between 0.425% and 2.870%. The disaggregation-based approach is then tested on 451 loads with variable BTM PV control settings. Different configurations of this algorithm reconstruct the PV inverter reactive power timeseries with root mean squared errors between 0.173 and 0.198 kVAR.
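As a toy illustration of the curve-fitting formulation (not the paper's algorithm or data), the sketch below generates synthetic net-load reactive power from a hypothetical volt-var droop curve, adds native-load "noise", and recovers the curve parameters with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def droop_q(v, v0, slope, q_max):
    """Hypothetical volt-var droop curve: Q = clip(-slope*(V - v0), +/- q_max)."""
    return np.clip(-slope * (v - v0), -q_max, q_max)

rng = np.random.default_rng(1)
v = rng.uniform(0.92, 1.08, 2000)                 # per-unit voltage samples
q_native = rng.normal(0.0, 0.02, v.size)          # native-load reactive power "noise"
q_net = droop_q(v, 1.0, 8.0, 0.44) + q_native     # net-load Q, as AMI would meter it

popt, _ = curve_fit(droop_q, v, q_net, p0=[1.0, 6.0, 0.4])
print(popt)  # recovered (v0, slope, q_max), close to (1.0, 8.0, 0.44)
```

In the toy case the native-load contribution is zero-mean, so a direct fit works; the disaggregation algorithm described above is what handles the realistic case where the native load is structured rather than noise-like.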
Numerical Methods for Partial Differential Equations
Aulisa, Eugenio; Capodaglio, Giacomo; Chierici, Andrea; D'Elia, Marta D.
In this paper, we design efficient quadrature rules for finite element (FE) discretizations of nonlocal diffusion problems with compactly supported kernel functions. Two of the main challenges in nonlocal modeling and simulations are the prohibitive computational cost and the nontrivial implementation of discretization schemes, especially in three-dimensional settings. In this work, we circumvent both challenges by introducing a parametrized mollifying function that improves the regularity of the integrand, utilizing an adaptive integration technique, and exploiting parallelization. We first show that the “mollified” solution converges to the exact one as the mollifying parameter vanishes, then we illustrate the consistency and accuracy of the proposed method on several two- and three-dimensional test cases. Furthermore, we demonstrate the good scaling properties of the parallel implementation of the adaptive algorithm and we compare the proposed method with recently developed techniques for efficient FE assembly.
Fractional equations have become the model of choice in several applications where heterogeneities at the microstructure result in anomalous diffusive behavior at the macroscale. In this work we introduce a new fractional operator characterized by a doubly-variable fractional order and possibly truncated interactions. Under certain conditions on the model parameters and on the regularity of the fractional order we show that the corresponding Poisson problem is well-posed. We also introduce a finite element discretization and describe an efficient implementation of the finite-element matrix assembly in the case of piecewise constant fractional order. Through several numerical tests, we illustrate the improved descriptive power of this new operator across media interfaces. Furthermore, we present one-dimensional and two-dimensional h-convergence results that show that the variable-order model has the same convergence behavior as the constant-order model.
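Schematically, an operator of the kind described acts on u as

\[
-\mathcal{L}_{s}u(x) = C\int_{B_{\delta}(x)} \frac{u(x) - u(y)}{|x-y|^{d+s(x)+s(y)}}\,dy,
\]

where d is the spatial dimension, δ is the (possibly finite) interaction horizon that truncates interactions, and the exponent d + s(x) + s(y) carries the doubly-variable fractional order (our notation; the paper's operator may differ in normalization). For constant s and δ = ∞ this reduces, up to the constant C, to the usual integral fractional Laplacian with exponent d + 2s.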
To impact physical mechanical system design decisions and realize the full promise of high-fidelity computational tools, simulation results must be integrated at the earliest stages of the design process. This is particularly challenging when dealing with uncertainty and optimizing for system-level performance metrics, as full-system models (often notoriously expensive and time-consuming to develop) are generally required to propagate uncertainties to system-level quantities of interest. Methods for propagating parameter and boundary condition uncertainty in networks of interconnected components hold promise for enabling design under uncertainty in real-world applications. These methods avoid the need for time-consuming mesh generation of full-system geometries when changes are made to components or subassemblies. Additionally, they explicitly tie full-system model predictions to component/subassembly validation data, which is valuable for qualification. These methods work by leveraging the fact that many engineered systems are inherently modular, comprising a hierarchy of components and subassemblies that are individually modified or replaced to define new system designs. By doing so, these methods enable rapid model development and the incorporation of uncertainty quantification earlier in the design process. The resulting formulation of the uncertainty propagation problem is iterative. We express the system model as a network of interconnected component models, which exchange solution information at component boundaries. We present a pair of approaches for propagating uncertainty in this type of decomposed system and provide implementations in the form of an open-source software library. We demonstrate these tools on a variety of applications and examine the impact of problem-specific details on the performance and accuracy of the resulting UQ analysis. This work represents the most comprehensive investigation of these network uncertainty propagation methods to date.
The DOE Office of Electricity views sodium batteries as a priority in pursuing a safe, resilient, and reliable grid, and improvements in solid-state electrolytes are key to realizing the potential of these large-scale batteries. The NaSICON structure consists of SiO4 or PO4 tetrahedra sharing common corners with ZrO6 octahedra, forming "tunnels" in three dimensions that can transport interstitial sodium ions. This 3D structure provides higher ionic conductivity than other conductors (e.g., β''-alumina), particularly at low temperature, and allows lower-temperature (cheaper) processing than β''-alumina. Our objective was to identify fundamental structure-processing-property relationships in NaSICON solid electrolytes to inform design for use in sodium batteries. In this work, we show that the mechanical properties of NaSICON sodium ion conductors are affected by sodium conduction: electrochemical cycling can alter modulus and hardness, and excessive cycling can lead to secondary phases and/or dendrite formation that change the mechanical properties. Mechanical and electrochemical properties can be correlated with topographical features to further inform design decisions.
The prevalence of COVID-19 is shaped by behavioral responses to recommendations and warnings. Available information on the disease determines the population’s perception of danger and thus its behavior; this information changes dynamically, and different sources may report conflicting information. We study the feedback between disease, information, and stay-at-home behavior using a hybrid agent-based-system dynamics model that incorporates evolving trust in sources of information. We use this model to investigate how divergent reporting and conflicting information can alter the trajectory of a public health crisis. The model shows that divergent reporting not only alters disease prevalence over time, but also increases polarization of the population’s behaviors and trust in different sources of information.
To detect a specific radio-frequency (rf) magnetic field, rf optically pumped magnetometers (OPMs) require a static magnetic field to set the Larmor frequency of the atoms equal to the frequency of interest. However, unshielded and variable magnetic field environments (e.g., an rf OPM on a moving platform) pose a problem for rf OPM operation. Here, we demonstrate the use of a natural-abundance rubidium vapor to make a comagnetometer to address this challenge. Our implementation builds upon the simultaneous application of several OPM techniques within the same vapor cell. First, we use a modified implementation of an OPM variometer based on 87Rb to detect and actively cancel unwanted external fields at frequencies ≲60 Hz using active feedback to a set of field control coils. In this experiment, we exploit this stabilized field environment to implement a high-sensitivity rf magnetometer using 85Rb. Using this approach, we demonstrate the ability to measure rf fields with a sensitivity of approximately 9 fT Hz-1/2 inside a magnetic shield in the presence of an applied field of approximately 20 μT along three mutually orthogonal directions. This demonstration opens up a path toward completely unshielded operation of a high-sensitivity rf OPM.
The long-standing problem of predicting the electronic structure of matter on ultra-large scales (beyond 100,000 atoms) is solved with machine learning.
Physics-constrained machine learning is emerging as an important topic in the field of machine learning for physics. One of the most significant advantages of incorporating physics constraints into machine learning methods is that the resulting model requires significantly less data to train. By incorporating physical rules into the machine learning formulation itself, the predictions are expected to be physically plausible. Gaussian process (GP) is perhaps one of the most common methods in machine learning for small datasets. In this paper, we investigate the possibility of constraining a GP formulation with monotonicity on three different material datasets, where one experimental and two computational datasets are used. The monotonic GP is compared against the regular GP, where a significant reduction in the posterior variance is observed. The monotonic GP is strictly monotonic in the interpolation regime, but in the extrapolation regime, the monotonic effect starts fading away as one goes beyond the training dataset. Imposing monotonicity on the GP comes at a small accuracy cost, compared to the regular GP. The monotonic GP is perhaps most useful in applications where data are scarce and noisy, and monotonicity is supported by strong physical evidence.
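The variance-reduction effect described above can be illustrated with a deliberately naive "monotonic GP": sample an ordinary GP posterior and keep only the nondecreasing paths. This rejection-sampling toy is not the constrained formulation used in the paper, but it shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(xa, xb, ell=0.4, sig=1.0):
    return sig**2 * np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / ell**2)

# Scarce, noisy, monotone data (a toy stand-in for a material-property curve)
x_tr = np.array([0.05, 0.3, 0.5, 0.9])
y_tr = np.array([0.10, 0.35, 0.50, 0.80]) + rng.normal(0, 0.05, 4)
x_te = np.linspace(0.0, 1.0, 30)

# Ordinary GP posterior (RBF kernel, known noise level)
K = rbf(x_tr, x_tr) + 0.05**2 * np.eye(4)
Ks = rbf(x_te, x_tr)
mu = Ks @ np.linalg.solve(K, y_tr)
cov = rbf(x_te, x_te) - Ks @ np.linalg.solve(K, Ks.T)

# Naive monotonic GP: sample the posterior, keep only nondecreasing paths
L = np.linalg.cholesky(cov + 1e-6 * np.eye(x_te.size))
paths = mu + (L @ rng.standard_normal((x_te.size, 20000))).T
keep = paths[np.all(np.diff(paths, axis=1) >= -1e-3, axis=1)]
print(f"accepted {len(keep)} of 20000 samples")
print("unconstrained var:", cov.diagonal().mean(),
      "constrained var:", keep.var(axis=0).mean())
```

On monotone data the retained samples concentrate, so the average variance of the kept paths is visibly smaller than that of the unconstrained posterior, mirroring the reduction in posterior variance reported above; away from the training points the constraint binds less and the effect fades, as the abstract notes for the extrapolation regime.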
The recently-developed ability to control phosphorous-doping of silicon at an atomic level using scanning tunneling microscopy, a technique known as atomic precision advanced manufacturing (APAM), has allowed us to tailor electronic devices with atomic precision, and thus has emerged as a way to explore new possibilities in Si electronics. In these applications, critical questions include where current flow is actually occurring in or near APAM structures as well as whether leakage currents are present. In general, detection and mapping of current flow in APAM structures are valuable diagnostic tools to obtain reliable devices in digital-enhanced applications. In this report, we used nitrogen-vacancy (NV) centers in diamond for wide-field magnetic imaging (with a few-mm field of view and micron-scale resolution) of magnetic fields from surface currents flowing in an APAM test device made of a P delta-doped layer on a Si substrate, a standard APAM witness material. We integrated a diamond having a surface NV ensemble with the device (patterned in two parallel mm-sized ribbons), then mapped the magnetic field from the DC current injected in the APAM device in a home-built NV wide-field microscope. The 2D magnetic field maps were used to reconstruct the surface current densities, allowing us to obtain information on current paths, device failures such as choke points where current flow is impeded, and current leakages outside the APAM-defined P-doped regions. Analysis on the current density reconstructed map showed a projected sensitivity of ~0.03 A m-1, corresponding to a smallest-detectable current in the 200 μm wide APAM ribbon of ~6 μA. These results demonstrate the failure analysis capability of NV wide-field magnetometry for APAM materials, opening the possibility to investigate other cutting-edge microelectronic devices.
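To give a sense of scale for the reconstruction problem, the sketch below (our own toy forward model, not the analysis code used in this work) computes the Bz map a wide-field NV microscope would see above a uniform ribbon current by superposing line currents via the Biot-Savart law; inverting maps of this kind is what yields the surface current densities described above:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def bz_ribbon(x, standoff, width, current, n_lines=400):
    """Out-of-plane field Bz(x) at height `standoff` above a thin ribbon
    carrying `current` along y. The ribbon (centered at x = 0) is approximated
    by n_lines parallel line currents, each contributing
    Bz = -mu0 * dI * dx / (2*pi*(dx^2 + z^2)) by the Biot-Savart law."""
    xs = np.linspace(-width / 2, width / 2, n_lines)
    d_i = current / n_lines
    dx = x[:, None] - xs[None, :]
    return (-MU0 * d_i * dx / (2 * np.pi * (dx**2 + standoff**2))).sum(axis=1)

# 6 uA through a 200-um-wide ribbon (sheet current ~0.03 A/m, matching the
# sensitivity quoted above), imaged a few microns above the surface
x = np.linspace(-300e-6, 300e-6, 601)
bz = bz_ribbon(x, standoff=5e-6, width=200e-6, current=6e-6)
print(f"peak |Bz| = {np.abs(bz).max():.2e} T")  # tens of nT, near the ribbon edges
```

The peak fields of tens of nT near the ribbon edges indicate why ensemble NV magnetometry, with its combination of high sensitivity and micron-scale spatial resolution, is well matched to this failure-analysis task.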
Multiphysics and analytical calculations were conducted for a heat exchanger with passive, natural circulation flow. A glycol/water working fluid convects the heat to a dimpled heat exchanger shell, which subsequently transfers the heat to the soil, which acts as the ultimate heat sink. Because the system is fully passive, it is not subject to the expenses, maintenance, and mechanical breakdowns associated with moving parts. Density, heat capacity, and thermal conductivity material properties were measured for various soil samples, and subsequently included as input for the soil heat conduction model. The soil model was coupled to a computational fluid dynamics (CFD) heat exchanger model that included the dynamic Smagorinsky large eddy simulation and k-omega turbulence models. The analysis showed that the fluid dynamics and heat transfer models worked properly, albeit at a slow pace. Nevertheless, the coupled CFD/heat conduction simulation ran long enough to determine a key parameter—the amount of heat conducted from the heat exchanger to the ground. This unique performance value, along with experimental data, was used as input for stand-alone, fast-running CFD models, as well as boundaries to obtain solutions to partial differential equations for soil heat conduction.
Cryogenic plasma focused ion beam (PFIB) electron microscopy analysis is applied to visualizing ex situ (surface industrial) and in situ (subsurface geologic) carbonation products, to advance understanding of carbonation kinetics. Ex situ carbonation is investigated using NIST fly ash standard #2689 exposed to aqueous sodium bicarbonate solutions for brief periods of time. In situ carbonation pathways are investigated using the volcanic flood basalt samples of Schaef et al. (2010), which were exposed to aqueous CO2 solutions in that study. The fly ash reaction products at room temperature show small amounts of incipient carbonation, with calcite apparently forming via surface nucleation. Reaction products at 75 °C show the beginning stages of an iron carbonate phase, e.g., siderite or ankerite, common phases in subsurface carbon sequestration environments. This may suggest an alternative to calcite when carbonating low calcium-bearing fly ashes. Flood basalt carbonation reactions show distinct zonation, with high calcium and calcium-magnesium bearing zones alternating with high iron-bearing zones. The calcium-magnesium zones are notable for the occurrence of localized pore space. Oscillatory zoning in carbonate minerals is distinctly associated with far-from-equilibrium conditions where local chemical environments fluctuate via a coupling of reaction with transport. The high porosity zones may reflect a precursor phase (e.g., aragonite) with higher molar volume that then "ripens" to the high-Mg calcite phase plus porosity. These observations reveal that carbonation can proceed with evolving local chemical environments, formation and disappearance of metastable phases, and evolving reactive surface areas. Together, this work shows that future application of cryo-PFIB in carbonation studies would provide advanced understanding of kinetic mechanisms for optimizing industrial-scale and commercial-scale applications.
Laser-induced photoemission of electrons offers opportunities to trigger and control plasmas and discharges. However, the underlying mechanisms are not sufficiently characterized to be fully utilized. Photoemission is highly nonlinear, achieved through multiphoton absorption, above threshold ionization, photo-assisted tunneling, etc., where the dominant process depends on the work function of the material, photon energy and associated fields, surface heating, background fields, etc. To characterize the effects of photoemission on breakdown, breakdown experiments were performed and interpreted using a 0D plasma discharge circuit model and quantum model of photoemission.
Single particle aerosol mass spectrometry (SPAMS), an analytical technique for measuring the size and composition of individual micron-scale particles, is capable of analyzing atmospheric pollutants and bioaerosols much more efficiently and with more detail than conventional methods which require the collection of particles onto filters for analysis in the laboratory. Despite SPAMS’ demonstrated capabilities, the primary mechanisms of ionization are not fully understood, which creates challenges in optimizing and interpreting SPAMS signals. In this paper, we present a well-stirred reactor model for the reactions involved with the laser-induced vaporization and ionization of an individual particle. The SPAMS conditions modeled in this paper include a 248 nm laser which is pulsed for 8 ns to vaporize and ionize each particle in vacuum. The ionization of 1 μm, spherical Al particles was studied by approximating them with a 0-dimensional plasma chemistry model. The primary mechanism of absorption of the 248 nm photons was pressure-broadened direct photoexcitation to Al(y2D). Atoms in this highly excited state then undergo superelastic collisions with electrons, heating the electrons and populating the lower energy excited states. We found that the primary ionization mechanism is electron impact ionization of various excited state Al atoms, especially Al(y2D). Because the gas expands rapidly into vacuum, its temperature decreases rapidly. The rate of three-body recombination (e- + e- + Al+ → Al + e-) increases at low temperature, and most of the electrons and ions produced recombine within several μs of the laser pulse. The importance of the direct photoexcitation indicates that the relative peak heights of different elements in SPAMS mass spectra may be sensitive to the available photoexcitation transitions. We also discuss the effects of laser intensity, particle diameter, and expansion dynamics.
Monitoring of cooling tower performance in a nuclear reactor facility is necessary to ensure safe operation; however, instrumentation for measuring performance characteristics can be difficult to install and may malfunction or break down over long-duration experiments. This paper describes a thermodynamic approach to quantifying cooling tower performance, the Merkel model, which requires only five parameters: inlet water temperature, outlet water temperature, liquid mass flowrate, gas mass flowrate, and wet bulb temperature. Using this model, a general method was developed to determine cooling tower operation for a nuclear reactor in situations when neither the outlet water temperature nor the gas mass flowrate is available, the former being a critical piece of information to bound the Merkel integral. Furthermore, when multiple cooling tower cells are used in parallel (as would be the case in large-scale cooling operations), only the average outlet temperature of the cooling system is used as feedback for fan speed control, increasing the difficulty of obtaining the outlet water temperature for each cell. To address these shortcomings, this paper describes a method to obtain individual cell outlet water temperatures for mechanical forced-air cooling towers via parametric analysis and optimization. In this method, the outlet water temperature for an individual cooling tower cell is obtained as a function of the liquid-to-gas ratio (L/G). Leveraging the tight tolerance on the average outlet water temperature, an error function is generated to describe the deviation of the parameterized L/G from the highly controlled average outlet temperature. The method was able to determine the gas flowrate at rated conditions to within 3.9% of that obtained from the manufacturer's specification, while the average errors for the four individual cooling cell outlet water temperatures were 1.6 °C, -0.5 °C, -1.0 °C, and 0.3 °C.
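The Merkel integral at the heart of this approach is straightforward to evaluate numerically once a saturated-air enthalpy curve is specified. The sketch below is our own minimal implementation, not the paper's code; the cubic enthalpy fit is a commonly used low-temperature correlation (roughly 10-40 °C) and is an assumption, as is the choice to approximate the inlet-air enthalpy at the wet-bulb temperature:

```python
from scipy.integrate import quad

CPW = 4.186  # specific heat of water, kJ/(kg*K)

def h_sat(t):
    """Saturated-air enthalpy (kJ/kg dry air) vs. temperature (deg C).
    Cubic fit valid roughly 10-40 deg C; an assumed correlation."""
    return 4.7926 + 2.568 * t - 0.029834 * t**2 + 0.0016657 * t**3

def merkel_number(t_in, t_out, t_wb, lg):
    """Merkel number Me = integral over Tw of cpw dTw / (h_sat(Tw) - h_air(Tw))."""
    h_air_in = h_sat(t_wb)  # inlet-air enthalpy approximated at the wet-bulb temp
    def integrand(tw):
        h_air = h_air_in + CPW * lg * (tw - t_out)  # counterflow energy balance
        return CPW / (h_sat(tw) - h_air)
    return quad(integrand, t_out, t_in)[0]

# Example: 40 -> 30 deg C water range, 25 deg C wet bulb, L/G = 1.2
print(merkel_number(t_in=40.0, t_out=30.0, t_wb=25.0, lg=1.2))
```

Sweeping the liquid-to-gas ratio lg and matching the resulting per-cell outlet temperatures against the tightly controlled average outlet temperature is the essence of the parametric analysis and optimization described above.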
Stangebye, Sandra S.; Lei, Changhui L.; Kinghorn, Aubri K.; robertson, ian m.; Kacher, Josh K.; Hattar, Khalid M.
We report on the dynamics of the gold–silicon eutectic reaction in limited dimensions, studied using in situ transmission electron microscopy and scanning transmission electron microscopy heating experiments. The phase transformation, viewed in both plan-view and cross-section of the film, occurs through a complex combination of dislocation and grain boundary motion and diffusion of silicon along gold grain boundaries, which results in a dramatic change in the microstructure of the film. The conversion observed in cross-section shows that the eutectic mixture forms at the Au–Si interface and proceeds into the Au film at a discontinuous growth rate. This complex process can lead to a variety of microstructures depending on sample geometry, heating temperature, and the ratio of gold to silicon, which was found to have the largest impact on the eutectic microstructure. The eutectic morphology varied from dendrites to hollow rectangular structures to Au–Si eutectic agglomerates with increasing silicon to gold ratio.
Yarritu, Kevin A.; Yoon, Hongkyu; Roesler, Erika R.
Reliable climate predictions are important for making robust decisions in response to the changing climate. This project aims to reduce mis-modeling uncertainties arising from the representation of the land-atmosphere coupling in the Energy Exascale Earth System Model (E3SM) by using a machine learning approach. This approach uses an encoder-decoder architecture to represent the information that is developed in the land model and passed to the atmosphere model. The training data are taken from E3SM simulations; incorporating observed data into the simulated dataset further reduces mis-modeling uncertainties.
Many distributed applications implement complex data flows and need a flexible mechanism for routing data between producers and consumers. Recent advances in programmable network interface cards, or SmartNICs, represent an opportunity to offload data-flow tasks into the network fabric, thereby freeing the hosts to perform other work. System architects in this space face multiple questions about the best way to leverage SmartNICs as processing elements in data flows. In this paper, we advocate the use of Apache Arrow as a foundation for implementing data-flow tasks on SmartNICs. We report on our experiences adapting a partitioning algorithm for particle data to Apache Arrow and measure the on-card processing performance for the BlueField-2 SmartNIC. Our experiments confirm that the BlueField-2’s (de)compression hardware can have a significant impact on in-transit workflows where data must be unpacked, processed, and repacked.
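To give a flavor of what adapting a partitioning algorithm to Apache Arrow looks like, the toy below bins a small particle table by a spatial key using only Arrow kernels (our own minimal example; the column names and bin width are illustrative, and the SmartNIC offload path itself is not shown):

```python
import pyarrow as pa
import pyarrow.compute as pc

# Toy particle table: position x and a payload value
table = pa.table({
    "x": pa.array([0.1, 0.9, 1.4, 2.7, 0.3, 2.2]),
    "value": pa.array([10, 20, 30, 40, 50, 60]),
})

# Assign each particle to a spatial bin of width 1.0 (the partition key)
bin_ids = pc.cast(pc.floor(pc.divide(table["x"], 1.0)), pa.int64())
table = table.append_column("bin", bin_ids)

# One filter pass per destination; each partition could then be routed to a
# different consumer by a data-flow task resident on the SmartNIC
for b in pc.unique(bin_ids).to_pylist():
    part = table.filter(pc.equal(table["bin"], b))
    print(b, part.num_rows)
```

Because Arrow's columnar buffers have a standardized in-memory layout, partitions produced this way can be shipped between host and card without serialization, which is part of the appeal of Arrow as a foundation for in-transit processing.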
Sanz-Matias, Ana S.; Roychoudhury, Subhayan; Feng, Xuefei F.; Yang, Feipeng Y.; Cheng, Kao L.; Zavadil, Kevin R.; Guo, Jinghua G.; Prendergast, David P.
Given their natural abundance and thermodynamic stability, fluoride salts may appear as evolving components of electrochemical interfaces in Li-ion batteries and emergent multivalent ion cells. This is due to the practice of employing electrolytes with fluorine-containing species (salt, solvent, or additives) that electrochemically decompose and deposit on the electrodes. Operando X-ray absorption spectroscopy (XAS) can probe the electrode–electrolyte interface with a single-digit nanometer depth resolution and offers a wealth of insights into the evolution and Coulombic efficiency or degradation of prototype cells, provided that the spectra can be reliably interpreted in terms of local oxidation state, atomic coordination, and electronic structure about the excited atoms. Here we explore fluorine K-edge XAS of mono- (Li, Na, and K) and di-valent (Mg, Ca, and Zn) fluoride salts from a theoretical standpoint and discover a surprising level of detailed electronic structure information about these materials despite the relatively predictable oxidation state and ionicity of the fluoride anion and the metal cation. Utilizing a recently developed many-body approach based on the ΔSCF method, we calculate the XAS using density functional theory and experimental spectral profiles are well reproduced despite some experimental discrepancies in energy alignment within the literature, which we can correct for in our simulations. We outline a general methodology to explain shifts in the main XAS peak energies in terms of a simple exciton model and explain line-shape differences resulting from the mixing of core-excited states with metal d character (for K and Ca specifically). Given ultimate applications to evolving interfaces, some understanding of the role of surfaces and their terminations in defining new spectral features is provided to indicate the sensitivity of such measurements to changes in interfacial chemistry.
Dirac semimetals have attracted a great deal of current interest due to their potential applications in topological quantum computing, low-energy electronic devices, and single photon detection in the microwave frequency range. Herein are results from analyzing the low magnetic (B) field weak-antilocalization behaviors in a Dirac semimetal Cd3As2 thin flake device. At high temperatures, the phase coherence length lφ first increases with decreasing temperature (T) and follows a power law dependence of lφ ∝ T^(-0.4). Below ~3 K, lφ tends to saturate to a value of ~180 nm. Another fitting parameter α, which is associated with the number of independent transport channels, displays a logarithmic temperature dependence for T > 3 K, but also tends to saturate below ~3 K. The saturation value, ~1.45, is very close to 1.5, indicating three independent electron transport channels, which we interpret as due to decoupling of both the top and bottom surfaces as well as the bulk. This result, to our knowledge, provides the first evidence that the surface and bulk states can become decoupled in electronic transport in the Dirac semimetal Cd3As2.
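For context, low-field weak-antilocalization data of this kind are commonly fitted with the Hikami-Larkin-Nagaoka (HLN) expression (a standard formula, cited here for orientation rather than taken from the paper):

\[
\Delta\sigma(B) \simeq \alpha\,\frac{e^{2}}{2\pi^{2}\hbar}\left[\psi\!\left(\frac{1}{2} + \frac{B_{\phi}}{B}\right) - \ln\!\left(\frac{B_{\phi}}{B}\right)\right],
\qquad
B_{\phi} = \frac{\hbar}{4e\,l_{\phi}^{2}},
\]

where ψ is the digamma function and B_φ is the dephasing field set by the coherence length. In this convention each coherent 2D transport channel contributes 1/2 to |α| (sign conventions vary), which is why a saturation value of ~1.45 ≈ 3 × 0.5 points to three independent channels.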
The International Atomic Energy Agency (IAEA) applies safeguards to nuclear facilities that are not operating, including those undergoing decommissioning, and the IAEA's effort in this area is both considerable and increasing. Specifically, the IAEA Department of Safeguards' Division of Concepts and Planning (SGCP-003: Safeguards Approaches) identified an R&D need to "Develop safeguards implementation guidelines for facilities under decommissioning and safeguards concepts for post-accident facilities under decommissioning". Nuclear facilities undergoing decommissioning are not exempt from safeguards agreements between the IAEA and the Host State; accordingly, the requirements for verification of no diversion of nuclear material and for detection of undeclared activities at decommissioned facilities remain even after facility shutdown. However, the effort required to meet safeguards objectives diminishes as nuclear material and essential equipment are removed during the decommissioning process, which shifts the emphasis from verification of ever-diminishing fissile or source material inventories to verification of changes in facility design and equipment operability.
The New Mexico Small Business Assistance Program (NMSBA) has once again paired Optical Radio Communications Technology (ORC Tech), a New Mexico startup limited liability company (LLC), with Sandia National Laboratories (SNL) engineers at the Sensors and Textiles Innovatively Tailored for Complex, High-Efficiency Detection (STITCHED) laboratory to aid in the development of an ultra-passive, portable, deployable wireless signal booster technology.
Artificial solid electrolyte interphases have provided a path to improved cycle life for high energy density, next-generation anodes like lithium metal. Although long cycle life is necessary for widespread implementation, understanding and mitigating the effects of aging and self-discharge are also required. In this report we investigate several coating materials and their role in calendar life aging of lithium. We find that the oxide coatings are electronically passivating whereas the LiF coating slows charge transfer kinetics. Furthermore, the Coulombic loss during self-discharge measurements improves with the oxide layers and worsens with the LiF layer. It is found that none of the coatings create a continuous conformal, electronically passivating layer on top of the deposited lithium nor are they likely to distribute evenly through a porous deposit, suggesting that none of the materials are acting as an artificial solid electrolyte interphase. Instead, they likely alter performance through modulating lithium nucleation and growth.
This project matured a new understanding (a "modern synthesis") of the structure and evolution of science and technology. It created an understanding and framework for how Sandia National Labs, the Department of Energy, and the nation might improve their research productivity, with significant ramifications for national security and economic competitiveness.
The solution processability of ionogel solid electrolytes has recently garnered attention in the Li-ion battery community as a means to address the interface and fabrication issues commonly associated with most solid electrolytes. However, the trapped ionic liquid (ILE) component has hindered the electrochemical performance. In this report we present a process to tune the properties by replacing the ILE in a silica-based ionogel after fabrication with a liquid component befitting the desired application. Electrochemical cycling under various conditions showcases gels containing different liquid components incorporated into LiFePO4 (LFP)/gel/Li cells: high power (455 W kg–1 at a 1 C discharge) systems using carbonates, low temperatures (-40 °C) using ethers, or high temperatures (100 °C) using ionic liquids. Fabrication of additive-manufactured cells utilizing the exchanged carbonate-based system is demonstrated in a planar LFP/Li4Ti5O12 (LTO) system, where a marked improvement over an ionogel is found in terms of rate capability, capacity, and cycle stability (118 vs 41 mA h g–1 at C/4). This process represents a promising route to create a separator-less cell, potentially in complex architectures, where the electrolyte properties can be facilely tuned to meet the required conditions for a wide range of battery chemistries while maintaining a uniform electrolyte access throughout cast electrodes.
For 2D-temperature monitoring applications, a variant of EIT (Electrical Impedance Tomography) is evaluated computationally in this work. Literature examples of poor sensor performance in the center of 2D domains, away from the side electrodes, motivated this study, which seeks to overcome some of the previously noted shortcomings. In particular, the use of 'sensing skins' with novel tailored baseline conductivities was examined using the EIDORS package for EIT. It was found that the best approach for detecting a hot spot depends on several factors, such as the current injection (stimulation) patterns, the measurement patterns, and the reconstruction algorithms. For a well-performing combination of these factors, tailored baseline conductivities were assessed and compared to the baseline uniform conductivity. It was discovered that for some EIT applications, a tailored distribution needs to be smooth and that sudden changes in the conductivity gradients should be avoided. Still, the benefits in terms of improved EIT performance were small for conditions for which the EIT measurements had been 'optimized' for the uniform baseline case. Within the limited scope of this study, only two specific cases showed benefits from tailored distributions. For one case, a smooth tailored distribution with increased baseline conductivity in the center provided a better separation of two centrally located hot spots. For another case, a smooth tailored distribution with reduced conductivity in the center provided better estimates of the magnitudes of two hot spots near the center of the sensing skin.
A zero-dimensional magnetic implosion model with a coupled equivalent circuit for the description of an imploding nested wire array or gas puff is presented. Circuit model results have been compared with data from imploding stainless steel wire arrays, and good agreement has been found. The total energy coupled to the load, E_j×B, has been applied to a simple semi-analytic K-shell yield model, and excellent agreement with previously reported K-shell yields across all wire array and gas puff platforms is seen. Trade space studies in implosion radius and mass have found that most platforms operate near the predicted maximum yield. In some cases, the K-shell yield may be increased by increasing the mass or radius of the imploding array or gas puff.
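To make the structure of such a model concrete, the following is a minimal sketch of a zero-dimensional thin-shell (slug) implosion with a prescribed drive current, not the fully coupled circuit model of the report; the waveform, mass, and radius below are illustrative values only.

```python
# Minimal sketch of a 0-D thin-shell ("slug") implosion with a prescribed
# drive current, not the report's fully coupled circuit model. Per unit
# length, the shell obeys m dv/dt = -mu0 I(t)^2 / (4 pi r), and the kinetic
# energy delivered by the j x B force is tallied as E_jxB. All parameter
# values below are illustrative assumptions.
import numpy as np

mu0 = 4e-7 * np.pi
m = 1e-4                           # shell mass per unit length, kg/m
r0, tau, I0 = 0.02, 100e-9, 20e6   # initial radius (m), rise time (s), peak (A)

def current(t):                    # idealized sin^2 current rise to peak
    return I0 * np.sin(0.5 * np.pi * min(t / tau, 1.0)) ** 2

r, v, E_jxB, t, dt = r0, 0.0, 0.0, 0.0, 1e-11
while r > 0.1 * r0:                # integrate to 10:1 radial convergence
    F = -mu0 * current(t) ** 2 / (4.0 * np.pi * r)  # inward force per length
    v += dt * F / m
    r += dt * v
    E_jxB += F * v * dt            # work done on the imploding shell (> 0)
    t += dt

print(f"implosion time = {t*1e9:.1f} ns, E_jxB = {E_jxB:.3e} J/m")
```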
This research effort examined the application of Nafion polymers in alcohol solvents as an anti-ice surface coating, mixed with hydrophilic polymers and freezing-point-depressant salt systems. Co-soluble systems of Nafion, polymer, and salt were applied using dip-coating methods to create smooth films for frost observation over a Peltier-plate thermal system in ambient laboratory conditions. Cryo-DSC was applied to examine freezing events of the Nafion-surfactant mixtures, but the sensitivity of the measurement was insufficient to determine frost behavior. Collaborations with the Fog Chamber at Sandia-Albuquerque and environmental SAXS measurements with CINT-LANL were requested but could not be performed within the project duration. Because experimental characterization of these factors is difficult to achieve directly, computational modeling was used to guide the scientific basis for property improvement and to improve understanding of the dynamic association between ionomer side groups, added molecules, and deicing salts. Polyacrylic acid in water was identified at the start of the project as a relevant system for exploring the effect of varying counterions on the properties of fully deprotonated polyacrylic acid (PAA). Simulations were performed with four different counterions: two monovalent (K+ and Na+) and two divalent (Ca2+ and Mg2+). The PAA content of these systems was varied from ~10 to 80 wt% for temperatures from 250 K to 400 K. In a second set of simulations, the interpenetration of water into a dry PAA film was studied for Na+ or Ca2+ counterions at temperatures between 300 K and 400 K. The result of this project is a sprayable Nafion film composite, composed of Nafion polymer, hydrophilic polyethylene oxide polymer, and a CaCl2 anti-ice crosslinker, which resists ice nucleation at -20 °C for periods of greater than three hours. Durability and field performance properties remain to be determined.
This manual describes the use of the Xyce™ Parallel Electronic Simulator. Xyce™ has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). This includes support for most popular parallel and serial computers. (2) A differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms. This allows one to develop new types of analysis without requiring the implementation of analysis-specific device models. (3) Device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices. Xyce™ is a parallel code in the most general sense of the phrase—a message passing parallel implementation—which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.
This document is a reference guide to the Xyce™ Parallel Electronic Simulator, and is a companion document to the Xyce™ Users' Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce™. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce™ Users' Guide.
The V31 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a third-generation EDS containment vessel. It is the fifth EDS vessel to be fabricated under Code Case 2564 of the 2019 ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the code case, is twenty-four (24) pounds TNT-equivalent for up to 1092 detonations. This report documents the results of explosive tests that were performed on the vessel at Sandia National Laboratories in Albuquerque, New Mexico to qualify the vessel for field operations use. There were three design basis configurations for qualification testing. Qualification test (1) consisted of a simulated M55 rocket motor and warhead assembly of 24 lbs of Composition C-4 (30 lb TNT equivalent). This test was considered the maximum load case, based on modeling and simulation methods performed by Sandia prior to the vessel design phase. Qualification test (2) consisted of a regular, right circular cylinder, unitary charge, located central to the vessel interior, of 19.2 lb of Composition C-4 (24 lb TNT equivalent). Qualification test (3) consisted of a 12-pack of regular, right circular cylinders of 2 lb each, distributed evenly inside the vessel (totaling 19.2 lb of C-4, or 24 lb TNT equivalent). All vessel acceptance criteria were met.
The purpose of our report is to discuss the notion of entropy and its relationship with statistics. Our goal is to provide a manner in which you can think about entropy, its central role within information theory, and its relationship with statistics. We review various relationships between information theory and statistics—nearly all are well-known but unfortunately are often not recognized. Entropy quantifies the "average amount of surprise" in a random variable and lies at the heart of information theory, which studies the transmission, processing, extraction, and utilization of information. For us, data is information. What is the distinction between information theory and statistics? Information theorists work with probability distributions, whereas statisticians work with samples. In so many words, information theory using samples is the practice of statistics. Acknowledgements. We thank Danny Dunlavy, Carlos Llosa, Oscar Lopez, Arvind Prasadan, Gary Saavedra, and Jeremy Wendt for helpful discussions along the way. Our report was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
Recent efforts at Sandia such as DataSEA are creating search engines that enable analysts to query the institution's massive archive of simulation and experiment data. The benefit of this work is that analysts will be able to retrieve all historical information about a system component that the institution has amassed over the years and make better-informed decisions in current work. As DataSEA gains momentum, it faces multiple technical challenges relating to capacity storage. From a raw capacity perspective, data producers will rapidly overwhelm the system with massive amounts of data. From an accessibility perspective, analysts will expect to be able to retrieve any portion of the bulk data, from any system on the enterprise network. Sandia's Institutional Computing is mitigating storage problems at the enterprise level by procuring new capacity storage systems that can be accessed from anywhere on the enterprise network. These systems use the Simple Storage Service (S3) API for data transfers. While S3 uses objects instead of files, users can access it from their desktops or Sandia's high-performance computing (HPC) platforms. S3 is particularly well suited for bulk storage in DataSEA, as datasets can be decomposed into objects that can be referenced and retrieved individually, as needed by an analyst. In this report we describe our experiences working with S3 storage and provide information about how developers can leverage Sandia's current systems. We present performance results from two sets of experiments. First, we measure S3 throughput when exchanging data between four different HPC platforms and two different enterprise S3 storage systems on the Sandia Restricted Network (SRN). Second, we measure the performance of S3 when communicating with a custom-built Ceph storage system that was constructed from HPC components. Overall, while S3 storage is significantly slower than traditional HPC storage, it provides significant accessibility benefits that will be valuable for archiving and exploiting historical data. There are multiple opportunities that arise from this work, including enhancing DataSEA to leverage S3 for bulk storage and adding native S3 support to Sandia's IOSS library.
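For developers, the following is a minimal sketch of the S3 access pattern described above, using the boto3 Python client; the endpoint, credentials, bucket, and key names are hypothetical placeholders, not Sandia's actual configuration.

```python
# Minimal sketch of S3-style bulk storage access with boto3. The endpoint,
# bucket, and key names below are hypothetical placeholders, not Sandia's
# actual DataSEA or enterprise configuration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.net",  # hypothetical enterprise endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store one object of a decomposed dataset (assumes the local file exists).
with open("field_0001.bin", "rb") as fh:
    s3.put_object(Bucket="datasea-bulk", Key="run42/field_0001.bin", Body=fh)

# Retrieve only the portion an analyst needs, using an HTTP range request.
resp = s3.get_object(Bucket="datasea-bulk", Key="run42/field_0001.bin",
                     Range="bytes=0-1048575")  # first 1 MiB
chunk = resp["Body"].read()
```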
New approaches to preventing and treating infections, particularly of the respiratory tract, are needed. One promising strategy is to reconfigure microbial communities (microbiomes) within the host to improve defense against pathogens. Probiotics and prebiotics for gastrointestinal (GI) infections offer a template for success. We sought to develop comparable countermeasures for respiratory infections. First, we characterized interactions between the airway microbiome and a biodefense-related respiratory pathogen (Burkholderia thailandensis; Bt), using a mouse model of infection. Then, we recovered microbiome constituents from the airway and assessed their ability to re-colonize the airway and protect against respiratory Bt infection. We found that microbiome constituents belonging to Bacillus and related genera frequently displayed colonization and anti-Bt activity. Comparative growth requirement profiling of these Bacillus strains vs Bt enabled identification of candidate prebiotics. This work serves as proof of concept for airway probiotics, as well as a strong foundation for development of airway prebiotics.
The viga span tables project for Rachel Wood Consulting (RWC) is focused on producing tabulated beam span tables for three species of wood vigas commonly used in New Mexico. These tables allow producers, designers, and builders to incorporate vigas into their building designs in a prescriptive manner, similar to the span tables for sawn lumber incorporated into the International Residential Code (IRC) or the International Log Builders Association (ILBA) publication. The information provided in this report and the associated viga span tables also attempts to address and clarify questions raised by RWC during their review of the 2018 Los Alamos National Laboratory (LANL) New Mexico Small Business Assistance (NMSBA) program report by August Mosimann pertaining to span lengths, loading, deflection calculations, and log grading certification, prior to submitting the span tables to the Construction Industries Division (CID) of New Mexico.
Thermographic phosphors (TP) are combined with stereo digital image correlation (DIC) in a novel diagnostic, TP + DIC, to measure full-field surface strains and temperatures simultaneously. The TP + DIC method is presented, including corrections for nonlinear CMOS camera detectors and generation of pixel-wise calibration curves to relate the known temperature to the ratio of pixel intensities between two distinct wavelength bands. Additionally, DIC is employed not only for strain measurements but also for accurate image registration between the two cameras for the two-colour ratio method of phosphor thermography. TP + DIC is applied to characterize the thermo-mechanical response of 304L stainless steel dog bones during tensile testing at different strain rates. The dog bones are patterned for DIC with Mg3F2GeO4:Mn (MFG) via aerosol deposition through a shadow mask. Temperatures up to 425 K (150 °C) and strains up to 1.0 mm/mm are measured in the localized necking region, with conservative noise levels of 10 K and 0.01 mm/mm or less. Finally, TP + DIC is compared to the more established method of combining infrared (IR) thermography with DIC (IR + DIC), with results agreeing favourably. Three topics of continued research are identified, including cracking of the aerosol-deposited phosphor DIC features, incomplete illumination for pixels on the border of the phosphor features, and phosphor emission evolution as a function of applied substrate strain. This work demonstrates the combination of phosphor thermography and DIC and lays the foundation for further development of TP + DIC for testing in combined thermo-mechanical environments.
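To illustrate the two-colour ratio step, the following is a minimal sketch of temperature lookup from the per-pixel intensity ratio via a pixel-wise polynomial calibration; the array shapes and the cubic calibration form are assumptions, not the exact formulation used in this work.

```python
# Illustrative sketch of two-colour ratio thermometry: temperature is looked
# up from the per-pixel ratio of intensities in two wavelength bands using a
# pixel-wise polynomial calibration. Shapes and the cubic form are assumed.
import numpy as np

def ratio_to_temperature(I_band1, I_band2, coeffs):
    """I_band1, I_band2: (H, W) background-subtracted intensity images.
    coeffs: (H, W, 4) per-pixel cubic calibration coefficients mapping
    intensity ratio to temperature, obtained from calibration images."""
    ratio = I_band1 / np.clip(I_band2, 1e-6, None)  # avoid divide-by-zero
    # Evaluate T = c0 + c1*R + c2*R^2 + c3*R^3 pixel-wise (Horner's rule).
    c0, c1, c2, c3 = np.moveaxis(coeffs, -1, 0)
    return c0 + ratio * (c1 + ratio * (c2 + ratio * c3))

# Example with synthetic data:
H, W = 4, 4
rng = np.random.default_rng(0)
coeffs = np.zeros((H, W, 4)); coeffs[..., 0] = 300.0; coeffs[..., 1] = 150.0
T = ratio_to_temperature(rng.uniform(1, 2, (H, W)), np.ones((H, W)), coeffs)
```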
The time integration scheme is probably one of the most fundamental choices in the development of an ocean model. In this paper, we investigate several time integration schemes when applied to the shallow water equations. This set of equations is accurate enough for the modeling of a shallow ocean and is also relevant to study, as it is the one solved for the barotropic (i.e., vertically averaged) component of a three-dimensional ocean model. We analyze different time stepping algorithms for the linearized shallow water equations. High order explicit schemes are accurate, but the time step is constrained by the Courant-Friedrichs-Lewy stability condition. Implicit schemes can be unconditionally stable but, in practice, lack accuracy when used with large time steps. In this paper we propose a detailed comparison of such classical schemes with exponential integrators. The accuracy and the computational costs are analyzed in different configurations.
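The trade-off described above can be illustrated on a generic linear system u' = Au standing in for the linearized shallow water equations; the sketch below contrasts forward Euler, which is bound by a stability limit, with an exponential integrator, which advances the linear system exactly for any step size. The toy operator is an assumption for illustration only.

```python
# Sketch: exponential integrator vs. forward Euler for a linear system
# u' = A u, a stand-in for the linearized shallow water equations. The
# exponential step u_{n+1} = expm(dt*A) u_n is exact for any dt, while
# forward Euler is constrained by a CFL-like stability limit.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -10.0], [10.0, 0.0]])  # skew-symmetric toy "wave" operator
u_euler = np.array([1.0, 0.0])
u_expo = np.array([1.0, 0.0])
dt, nsteps = 0.05, 200

E = expm(dt * A)  # precompute the matrix exponential once
for _ in range(nsteps):
    u_euler = u_euler + dt * (A @ u_euler)  # explicit; amplifies energy here
    u_expo = E @ u_expo                     # exponential; preserves the norm

print("Euler norm:", np.linalg.norm(u_euler))        # grows without bound
print("Exponential norm:", np.linalg.norm(u_expo))   # stays at 1
```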
The On-Line Waste Library is a website that contains information regarding United States Department of Energy-managed high-level waste, spent nuclear fuel, and other wastes that are likely candidates for deep geologic disposal, with links to supporting documents for the data. This report provides supporting information for the data for which an already published source was not available.
This report examines the problem of magnetic penetration of a conductive layer, including nonlinear ferromagnetic layers, excited by an electric current filament. The electric current filament is, for example, a nearby wire excited by a lightning strike. The internal electric field and external magnetic field are determined. Numerical results are compared to various analytical approximations to help understand the physics involved in the penetration.
We construct a family of embedded pairs for optimal explicit strong stability preserving Runge–Kutta methods of order 2≤p≤4 to be used to obtain numerical solution of spatially discretized hyperbolic PDEs. In this construction, the goals include the non-defective property, a large stability region, and small error values as defined in Dekker and Verwer (1984) and Kennedy et al. (2000). The new family of embedded pairs offers the ability for strong stability preserving (SSP) methods to adapt by varying the step size. Through several numerical experiments, we assess the overall effectiveness in terms of work versus precision while also taking into consideration accuracy and stability.
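The following sketch illustrates the embedded-pair mechanism using the classical two-stage, second-order SSP method with a first-order embedded solution; the optimal pairs constructed in this work differ, and the controller constants below are illustrative.

```python
# Sketch of embedded-pair step-size control using the classical two-stage,
# second-order SSP Runge-Kutta method (Heun/SSPRK(2,2)) with forward Euler
# as the first-order embedded solution. The optimal pairs constructed in the
# paper differ; this only illustrates the adaptivity mechanism.
import numpy as np

def ssprk22_embedded_step(f, u, dt):
    s1 = u + dt * f(u)                       # first SSP (Euler) stage
    u2 = 0.5 * u + 0.5 * (s1 + dt * f(s1))   # second-order SSP solution
    err = np.linalg.norm(u2 - s1)            # embedded error estimate
    return u2, err

def integrate(f, u, t_end, dt=0.1, tol=1e-4):
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        u_new, err = ssprk22_embedded_step(f, u, dt)
        if err <= tol:                       # accept the step
            u, t = u_new, t + dt
        # Resize step from the order-2 error estimate (exponent 1/2).
        dt *= 0.9 * (tol / max(err, 1e-14)) ** 0.5
    return u

u = integrate(lambda u: -u, np.array([1.0]), 1.0)  # decays toward exp(-1)
```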
Critical vulnerabilities continue to be discovered in the boot process of Android smartphones used around the world. The entire device's security is compromised if boot security is compromised, so any weakness presents undue risk to users. Vulnerabilities persist, in part, because independent security analysts lack access and appropriate tools. In response to this gap, we implemented a procedure for emulating the early phase of the Android boot process. This work demonstrated the feasibility and utility of emulation in this space. By using HALucinator, we derived execution context and data flow, and incorporated peripheral hardware behavior. While smartphones with shared processors have substantial code overlap regardless of vendor, generational changes can have a significant impact. By applying our approach to older and modern devices, we learned interesting characteristics about the system. Such capabilities introduce new levels of introspection and operational understanding not previously available to mobile researchers.
Synthetic Aperture Radar (SAR) creates imagery of the earth's surface from airborne or spaceborne radar platforms. However, the nature of any radar is to geolocate its echo data, i.e., SAR images, relative to its own measured radar location. Acceptable accuracy and precision of such geolocation can be quite difficult to achieve, and is limited by any number of parameters. However, databases of geolocated earth imagery do exist, often using other imaging modalities, with Google Earth being one such example. These can often be much more accurate than what might be achievable by the radar itself. Consequently, SAR images may be aligned to some higher accuracy database, thereby improving the geolocation of features in the SAR image. Examples offer anecdotal evidence of the viability of such an approach. Acknowledgements. This report is the result of an unfunded Research and Development effort. A special thank you to Tommy Burks for his data collections in the Albuquerque area.
Adopting reduced order models (ROMs) of springs lowers the computational cost of stronglink simulations. However, ROMs introduce currently unquantified error to such analyses. This study addresses that lack of data by comparing a hexahedral mesh to a commonly used ROM beam mesh. Two types of analyses were performed, a quasi-static displacement-controlled pull and a haversine shock, examining basic spring properties as well as dynamics and stress/strain data. Both tests showed good similarities between the hexahedral and beam meshes, especially when comparing reaction force and stress trends and maximums. Equivalent plastic strain results were not quite as favorable, indicating that the beam model may be less likely to correctly predict spring failure. Although the ROM reduced computation times by over 48 hours in all shock cases, appropriate use of the ROM should carefully balance this advantage with its reduction in accuracy, especially when examining spring failure and outputting variables such as equivalent plastic strain.
This work details the development of a concentrating solar power (CSP) and concentrating solar thermal (CST) library archive. This work included digitization of one-of-a-kind documents that could be degraded or destroyed over time. Sandia National Laboratories (SNL) National Solar Thermal Test Facility (NSTTF) and Sandia's Technical Library departments collaborated to establish and maintain the first and only digital collection in the world of Concentrating Solar Power (CSP) related historical documents. These date back to the CSP program's inception at Sandia in the early 1970s and run through to the present.
This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields for the White Sands Missile Range (WSMR) Fast Burst Reactor, also known as molybdenum-alloy Godiva (Molly-G), at the 6-inch and the 24-inch irradiation locations. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented. Code dependent recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples.
Efficient restoration of the electric grid from significant disruptions – both natural and manmade – that lead to the grid entering a failed state is essential to maintaining resilience under a wide range of threats. Restoration follows a set of black start plans, allowing operators to select among these plans to meet the constraints imposed on the system by the disruption. Restoration objectives aim to restore power to a maximum number of customers in the shortest time. The current state of the art for restoration modeling breaks the problem into multiple parts, assuming a known network state and full observability and control by grid operators. These assumptions are not guaranteed under some threats. This paper focuses on a novel integration of modeling and analysis capabilities to aid operators during restoration activities. A power flow-informed restoration framework, comprised of a restoration mixed-integer program informed by power flow models to identify restoration alternatives, interacts with a dynamic representation of the grid through a cognitive model of operator decision-making to identify and prove out an optimal restoration path. Application of this integrated approach is illustrated on exemplar systems. Validation of the restoration is performed for one of these exemplars using commercial solvers, comparing the steps and time required by the commercial solver with those required by the restoration optimization itself and by the operator model acting on the restoration optimization output. Publications and proposals developed under this work are documented, along with a summary of what was achieved and a path forward for additional expansion of the work.
Capacitance/inductance corrections for grid-induced errors in thin-slot models are given for both one-point and four-point testing on a rectangular grid for surface currents surrounding the slot. In addition, a formula for translating from one equivalent radius to another is given for the thin-slot transmission-line model. Additional formulas useful for this slot modeling are also given.
An iteration method is introduced to obtain the asymptotic form of the impedance per unit length of a rectangular conductor when the half side lengths are large compared to the skin depth. The first terms of the asymptotic expansion are extracted in closed form. The manner in which the corner corrections fit into the expansion is illustrated. The asymptotic results are compared to a numerical solution in the square limit. The odd corner correction for a right angle edge is also discussed.
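For orientation, a standard leading-order estimate (not the full expansion derived in the report) is the surface impedance distributed over the conductor perimeter, valid when the half side lengths a and b are large compared to the skin depth:

```latex
% Standard leading-order estimate (not the report's full expansion): for a
% rectangular conductor of half side lengths $a, b$ large compared to the
% skin depth $\delta$, the impedance per unit length approaches the surface
% impedance distributed over the perimeter $P = 4(a + b)$:
\[
  Z' \;\approx\; \frac{1 + j}{\sigma\,\delta\,P},
  \qquad
  \delta = \sqrt{\frac{2}{\omega\mu\sigma}} .
\]
```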
This report is the final documentation for the one-year LDRD project 226360: Simulated X-ray Diffraction and Machine Learning for Optimizing Dynamic Experiment Analysis. As Sandia has successfully developed in-house X-ray diffraction tools for study of atomic structure in experiments, it has become increasingly important to develop computational analysis methods to support these experiments. When dynamically compressed lattices and orientations are not known a priori, the identification requires a cumbersome and sometimes intractable search of possible final states. These final states can include phase transition, deformation and mixed/evolving states. Our work consists of three parts: (1) development of an XRD simulation tool and use of traditional data science methods to match XRD patterns to experiments; (2) development of ML-based models capable of decomposing and identifying the lattice and orientation components of multicomponent experimental diffraction patterns; and (3) conducting experiments which showcase these new analysis tools in the study of phase transition mechanisms. Our target material has been cadmium sulfide, which exhibits complex orientation-dependent phase transformation mechanisms. In our current one-year LDRD, we have begun the analysis of high-quality c-axis CdS diffraction data from DCS and Thor experiments, which had until recently eluded orientation identification.
A method used to solve the problem of water waves on a sloping beach is applied to a thin conducting half plane described by a thin layer impedance boundary condition. The solution for the electric field behavior near the edge is obtained and a simple fit for this behavior is given. This field is used to determine the correction to the impedance per unit length of a conductor due to a sharp edge. The results are applied to the strip conductor. The final appendix also discusses the solution to the dual-sided (impedance surface & perfect conductor surface) half plane problem.
Here we investigate the application of ground-coupled airwaves observed by seismoacoustic stations at local to near-regional scales to detect signals of interest and determine back-azimuth information. Ground-coupled airwaves are created when incident pressure waves traveling through the atmosphere couple to the earth and transmit as seismic waves with retrograde elliptical motion. Previous studies at sub-local scales (<10 km from a source of interest) found the back-azimuth to the source could be accurately determined from seismoacoustic signals recorded by acoustic and 3-component seismic sensors spatially separated on the order of 10 to 150 m. The potential back-azimuth directions are estimated from the coherent signals between the acoustic and vertical seismic data, via a propagation-induced phase shift of the seismoacoustic signal. A unique solution is then informed by the particle motion of the 3-component seismic station, a method previously found to be less accurate than the seismoacoustic-sensor method. We investigate the applicability of this technique at greater source-receiver distances, from 50-100 km and up to 400 km, which contain pressure waves with tropospheric and stratospheric ray paths, respectively. Specifically, we analyze seismoacoustic sources with ground truth from rocket motor fuel elimination events at the Utah Test and Training Range (UTTR) as well as a 2020 rocket launch in Southern California. While coherent signals can be seen from both sources on multiple seismoacoustic station pairs, the determined ground-coupled airwave back-azimuths are more complicated than results at more local scales. Our findings suggest that more complex factors, including incidence angle, coupling location, subsurface material, and atmospheric propagation effects, need to be fully investigated before the ground-coupled airwave back-azimuth determination method can be applied or assessed at these greater distances.
This year, we focused on completing the light squeezing and building the imaging station. In this report, we present a detailed description of a quantum imaging experiment utilizing squeezed light. The experimental setup has two parts: the squeezing station, where we produce quantum-noise-squeezed light in which one quadrature (either the amplitude or the phase) has quantum noise reduced below the shot noise of coherent light, and the imaging station, where the squeezed light is used to image an object. The squeezing station consists of an optical parametric oscillator operating below the laser threshold. We provide the current status of, and the plans for, the squeezed-light imaging experiment.
The kinetic codes used to model the coupled dynamics of electromagnetic fields and charged particle transport have requirements for spatial, temporal, and charge resolution. These requirements may vary by the solution technique and scope of the problem. In this report, we investigate the resolution limits in the energy-conserving implicit particle-in-cell code CHICAGO. This report has the narrow aim of determining the maximum acceptable grid spacing for the dense plasmas generated in models of z-pinch target gases and power-flow electrode plasmas. In the 2D sample problem, the plasma drifts without external forces with a velocity of 10 cm/µs. Simulations are scaled by plasma density to maintain uniform strides across the plasma and from the plasma to the boundaries. Additionally, the cloud-in-cell technique is used with 400 particles per cell and Δt = 0.85× the Courant limit. For the linear cloud distribution, the criterion for conserving energy is ΔE/Etot < 0.01 for 50,000 time steps. The grid resolution is crudely determined to be Δx ≲ 3ls, where ls is the electron collisionless skin depth. For the second-order cloud distribution, the criterion is ΔE/Etot < 0.005, yielding Δx ≤ 15ls. These scalings are functions of the chosen vd, Δt, particles per cell, and number of steps.
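A short calculation of the electron collisionless skin depth, and the corresponding grid-spacing guideline quoted above, is sketched below; the example density is an illustrative value only.

```python
# Sketch: electron collisionless skin depth l_s = c / omega_pe and the
# corresponding grid-spacing guideline dx <~ 3*l_s quoted in the report
# for the linear cloud distribution.
import numpy as np
from scipy.constants import c, e, epsilon_0, m_e

def skin_depth(n_e):
    """n_e: electron density in m^-3; returns l_s in meters."""
    omega_pe = np.sqrt(n_e * e**2 / (epsilon_0 * m_e))  # plasma frequency
    return c / omega_pe

n_e = 1e24  # example density, m^-3 (illustrative value only)
ls = skin_depth(n_e)
print(f"l_s = {ls:.3e} m; suggested dx <= {3 * ls:.3e} m")
```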
Cyberattacks against industrial control systems have increased over the last decade, making it more critical than ever for system owners to have the tools necessary to understand the cyber resilience of their systems. However, existing tools are often qualitative, subject matter expertise-driven, or highly generic, making thorough, data-driven cyber resilience analysis challenging. The ADROC project proposed to develop a platform to enable efficient, repeatable, data-driven cyber resilience analysis for cyber-physical systems. The approach consists of two phases of modeling: computationally efficient math modeling and high-fidelity emulations. The first phase allows for scenarios of low concern to be quickly filtered out, conserving resources available for analysis. The second phase supports more detailed scenario analysis, which is more predictive of real-world systems. Data extracted from experiments is used to calculate cyber resilience metrics. ADROC then ranks scenarios based on these metrics, enabling prioritization of system resources to improve cyber resilience.
Machine learning-based data-driven modeling can allow computationally efficient time-dependent solutions of PDEs, such as those that describe subsurface multiphysical problems. In this work, our previous approach (Kadeethum et al., 2021d) of conditional generative adversarial networks (cGAN), developed for the solution of steady-state problems involving highly heterogeneous material properties, is extended to time-dependent problems by adopting the concept of the continuous cGAN (CcGAN). A CcGAN that can condition on continuous variables is developed to incorporate the time domain through either element-wise addition or conditional batch normalization. Moreover, this framework can handle training data that contain different timestamps and then predict timestamps that do not exist in the training data. As a numerical example, the transient response of the coupled poroelastic process is studied in two different permeability fields: a Zinn & Harvey transformation and a bimodal transformation. The proposed CcGAN uses heterogeneous permeability fields as input parameters, while pressure and displacement fields over time are the model output. Our results show that the model provides sufficient accuracy with computational speed-up. This robust framework will enable us to perform real-time reservoir management and robust uncertainty quantification in poroelastic problems.
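To illustrate the conditional batch normalization mechanism for conditioning on a continuous timestamp, the following is a minimal sketch; the linear embedding and feature shapes are assumptions for illustration and do not reproduce the paper's architecture.

```python
# Sketch of conditional batch normalization for conditioning on a continuous
# timestamp t: features are batch-normalized, then scaled and shifted by
# affine parameters that are themselves functions of t. The linear embedding
# below is an illustrative assumption, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
C = 8                                                 # feature channels
W_g, b_g = rng.normal(size=(C, 1)), np.ones((C,))    # gamma(t) = W_g t + b_g
W_b, b_b = rng.normal(size=(C, 1)), np.zeros((C,))   # beta(t)  = W_b t + b_b

def conditional_batch_norm(x, t, eps=1e-5):
    """x: (batch, C) features; t: (batch,) timestamps in [0, 1]."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)      # standard batch normalization
    gamma = (W_g @ t[None, :]).T + b_g         # (batch, C) scale from t
    beta = (W_b @ t[None, :]).T + b_b          # (batch, C) shift from t
    return gamma * x_hat + beta

x = rng.normal(size=(16, C))
y = conditional_batch_norm(x, rng.uniform(size=16))
```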
Additive manufactured Ti-5Al-5V-5Mo-3Cr (Ti-5553) is being considered as an AM repair material for engineering applications because of its superior strength properties compared to other titanium alloys. Here, we describe the failure mechanisms observed through computed tomography, electron backscatter diffraction (EBSD), and scanning electron microscopy (SEM) of spall damage as a result of tensile failure in as-built and annealed Ti-5553. We also investigate the phase stability in native powder, as-built and annealed Ti-5553 through diamond anvil cell (DAC) and ramp compression experiments. We then explore the effect of tensile loading on a sample containing an interface between a Ti-6Al-4V (Ti-64) baseplate and additively manufactured Ti-5553 layer. Post-mortem materials characterization showed spallation occurred in regions of initial porosity and the interface provides a nucleation site for spall damage below the spall strength of Ti-5553. Preliminary peridynamics modeling of the dynamic experiments is described. Finally, we discuss further development of Stochastic Parallel PARticle Kinetic Simulator (SPPARKS) Monte Carlo (MC) capabilities to include the integration of alpha (α)-phase and microstructural simulations for this multiphase titanium alloy.
X-ray stereo digital image correlation (DIC) measurements were performed at 10 kHz on the internal surface of a jointed structure in a shock tube at a shock Mach number of 1.42 and compared with optical stereo DIC measurements on the outer, visible surface of the structure. The shock tube environment introduces temperature and density gradients in the gas through which the structure was imaged, resulting in spatial and temporal index of refraction variations. These variations cause bias errors in optical DIC measurements due to beam-steering but have minimal influence on x-ray DIC measurements. These results demonstrate the utility of time-resolved x-ray DIC measurements in complicated environments where optical measurements suffer severe errors and/or are precluded by lack of optical access.
The frequency of unintended releases in a compressed natural gas system is an important aspect of the system quantitative risk assessment. The frequencies for possible release scenarios, along with engineering models, are utilized to quantify the risks for compressed natural gas facilities. This report documents component leakage frequencies representative of compressed natural gas components, estimated as a function of the normalized leak size. A Bayesian statistical method was used, which results in leak frequency distributions for each component that represent variation and uncertainty in the leak frequency. The analysis shows that there is high uncertainty in the estimated leak frequencies due to sparsity in compressed natural gas data. These leak frequencies may still be useful in compressed natural gas system risk assessments, as long as this high uncertainty is acknowledged and considered appropriately.
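As a minimal illustration of how a Bayesian treatment turns sparse leak counts into a frequency distribution, the following sketch uses a conjugate gamma-Poisson update; the counts and prior are hypothetical, and the report's actual statistical model is richer.

```python
# Minimal sketch of a conjugate gamma-Poisson update for a component leak
# frequency: with k observed leaks over T component-years and a Gamma(a0, b0)
# prior, the posterior is Gamma(a0 + k, b0 + T). The counts and prior below
# are hypothetical; the report's actual model is richer.
from scipy import stats

a0, b0 = 0.5, 10.0       # weakly informative prior (shape, rate)
k, T = 2, 5000.0         # hypothetical: 2 leaks in 5000 component-years

posterior = stats.gamma(a=a0 + k, scale=1.0 / (b0 + T))
lo, med, hi = posterior.ppf([0.05, 0.5, 0.95])
print(f"leak frequency per component-year: 5th={lo:.2e}, "
      f"median={med:.2e}, 95th={hi:.2e}")
```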
The time dependence of phase diagrams and how to model rate-dependent transitions remain key unanswered questions in physics. When a material is loaded dynamically through equilibrium phase boundaries, it is the kinetics that determines the real-time expression of a phase transition. Here we report the atomic- and nanosecond-scale quantification of the kinetics of shock-driven phase transitions in multiple materials. We uniquely make use of both simple-shock and shock-and-hold loading pathways to compress different crystalline solids and induce structural phase transitions below melt. Coupling shock loading with time-resolved synchrotron x-ray diffraction (DXRD), we probe the structural transformations of these solids in the short-lived high-pressure and high-temperature states generated. The novelty and power of using DXRD for the assessment of kinetics of phase transitions lies in the ability to discover and identify new phases and to examine kinetics without prior knowledge of a material's phase diagram. Our results provide a quantified expression and a physics model of the kinetics of formation of high-pressure phases under shock loading: transition incubation time, evolution, completion time, and crystallization rate.
This report discusses the progress on the collaboration between Sandia National Laboratories (Sandia) and the Japan Atomic Energy Agency (JAEA) on sodium fire research in fiscal year (FY) 2022 and is a continuation of the FY 2021 progress report. We report only the changes made to the current sodium pool fire model in MELCOR. We modified and corrected many control functions associated with the fraction of oxygen consumed that reacts to form monoxide (the FO2 parameter) in the current model from the FY 2021 report. This year's enhancements yield better agreement with the suspended aerosol measurements from JAEA's F7 series tests. Staff from Sandia and JAEA conducted validation studies of the sodium pool fire model in MELCOR. To validate this pool fire model with the latest enhancements, the JAEA sodium pool fire experiments (F7-1 and F7-2) were used. The results of the calculations, including the code-to-code comparisons, are discussed, as well as suggestions for further model improvement. Finally, recommendations are made for new MELCOR simulations for FY 2023.
We present the results of an LDRD project, funded by the Nuclear Deterrence IA, to develop capabilities for quantitative assessment of pyrotechnic thermal output. The thermal battery igniter is used as an exemplar system. Experimental methodologies for thermal output evaluation are demonstrated here, which can help designers and engineers better specify pyrotechnic components, provide thermal output guidelines for new formulations, and generate new metrics for assessing component performance and margin given a known failure condition. A heat-transfer analysis confirms that the dominant mode of energy transfer from the pyrotechnic output plume to the heat pellet is conduction via deposition of hot titanium particles. A simple lumped-parameter model of titanium particle heat transfer and a detailed multi-phase model of deposition heat transfer are discussed. Pyrotechnic function, as defined by "go/no-go" standoff testing of a heat pellet, is correlated with experimentally measured igniter plume temperature, titanium metal particle temperature, and energy deposition. Three high-speed thermal diagnostics were developed for this task. A three-color imaging pyrometer, acquiring 100k images per second on three color channels, is deployed for measurement of titanium particle temperatures. Complementary measurements of the overall igniter plume emission ("color") temperature were conducted using a transmission-grating spectrograph in line-imaging mode. Heat flux and energy deposition to a cold wall at the heat-pellet location were estimated using an eroding thermocouple probe with a frequency response of ~5 kHz. Ultimate "go/no-go" function in the igniter/heat-pellet system was correlated with quantitative thermal metrics, in particular surface energy deposition and plume color temperature. Titanium metal-particle and plume color temperatures both experience an upper bound approximated by the 3245-K boiling point of TiO2. Average metal-particle temperatures remained nearly constant for all standoff distances at T = 2850 ± 300 K, while plume color temperature and heat flux decay with standoff—suggesting that heat-pellet failure results from a drop in metal-particle flux and not particle temperature. At 50% likelihood of heat-pellet failure, peak time-resolved plume color temperatures drop well below TiO2 boiling to ~2000-2200 K, near the TiO2 melting point. Estimates of peak heat flux decline from up to 1 GW/m2 for near-field standoffs to below 320 MW/m2 at 50% failure likelihood.
High-enthalpy hypersonic flight represents an application space of significant concern within the current national-security landscape. The hypersonic environment is characterized by high-speed compressible fluid mechanics and complex reacting flow physics, which may present both thermal and chemical nonequilibrium effects. We report on the results of a three-year LDRD effort, funded by the Engineering Sciences Research Foundation (ESRF) investment area, which has been focused on the development and deployment of new high-speed thermochemical diagnostics capabilities for measurements in the high-enthalpy hypersonic environment posed by Sandia's free-piston shock tunnel. The project has additionally sponsored model development efforts, which have added thermal nonequilibrium modeling capabilities to Sandia codes for subsequent design of many of our shock-tunnel experiments. We have cultivated high-speed, chemically specific, laser-diagnostic approaches that are uniquely co-located with Sandia's high-enthalpy hypersonic test facilities. These tools include picosecond and nanosecond coherent anti-Stokes Raman scattering at 100-kHz rates for time-resolved thermometry, including thermal nonequilibrium conditions, and 100-kHz planar laser-induced fluorescence of nitric oxide for chemically specific imaging and velocimetry. Key results from this LDRD project have been documented in a number of journal submissions and conference proceedings, which are cited here. The body of this report is, therefore, concise and summarizes the key results of the project. The reader is directed toward these reference materials and appendices for more detailed discussions of the project results and findings.
Many materials of interest to Sandia transition from fluid to solid or have regions of both phases coexisting simultaneously. Currently there are, unfortunately, no material models that can accurately predict this material response. This is relevant to applications that "birth stress" related to geoscience, nuclear safety, manufacturing, energy production, and bioscience. Accurately capturing solidification and residual stress enables fully predictive simulations of the evolving front shape or final product. Accurately resolving flow of proppants or blood could reduce environmental impact or lead to better treatments for heart attacks, thrombosis, or aneurysm. We address a central science question in this project: When does residual stress develop during the critical transition from liquid to solid, and how does it affect material deformation? Our hypothesis is that these early phases of stress development are critical to predictive simulation of material performance, net shape, and aging. In this project, we use advanced constitutive models with yield stress to represent both fluid and solid behavior simultaneously. The report provides an abbreviated description of the results from our LDRD "Stress Birth and Death: Disruptive Computational Mechanics and Novel Diagnostics for Fluid-to-Solid Transitions," since we have written four papers that document the work in detail and which we reference. We give highlights of the work and describe the gravitationally driven flow visualization experiment on a model yield stress fluid, Carbopol, at various concentrations and flow rates. We were able to collapse the data on a single master curve by showing it was self-similar. We also describe the Carbopol rheology and the constitutive equations of interest, including the Bingham-Carreau-Yasuda model, the Saramito model, and the HB-Saramito model, including parameter estimation for the shear and oscillatory rheology. We present several computational models, including 3D moving-mesh simulations of both the Saramito models and the Bingham-Carreau-Yasuda (BCY) model. We also show results from the BCY model using a 3D level set method and two different ways of handling reduced-order Hele-Shaw modeling for generalized Newtonian fluids. We present some first-ever two-dimensional results for the modified Jeffries Kamani-Donley-Rogers constitutive equation developed during this project. We include some recent results with a successful Saramito-level set coupling that allows us to tackle problems with complex geometries, like mold filling in a thin gap with an obstacle, without the need for remeshing or remapping. We report on experiments for curing systems where fluorescent particles are used to track material flow. These experiments were carried out in an oven on Sylgard 184 as a model polymerizing system. We conclude the report with a summary of accomplishments and some thoughts on follow-on work.
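For reference, the Herschel-Bulkley yield-stress relation that underlies the HB-Saramito variant mentioned above is, in its standard form (symbols are the conventional ones, not notation taken from this report):

```latex
% Herschel-Bulkley yield-stress relation (standard form, for reference; the
% report's Saramito and BCY models build viscoelastic and shear-thinning
% behavior on top of this kind of yield criterion):
\[
  \dot{\gamma} = 0 \quad \text{for } |\tau| \le \tau_y,
  \qquad
  \tau = \tau_y + K \dot{\gamma}^{\,n} \quad \text{for } |\tau| > \tau_y ,
\]
% where $\tau_y$ is the yield stress, $K$ the consistency, and $n$ the flow
% index ($n = 1$ recovers the Bingham model).
```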
In recent years, infections and damage caused by malware have increased at exponential rates. At the same time, machine learning (ML) techniques have shown tremendous promise in many domains, often outperforming human efforts by learning from large amounts of data. Results in the open literature suggest that ML is able to provide similar results for malware detection, achieving greater than 99% classification accuracy [49]. However, the same detection rates have not been achieved in deployed settings. Malware is distinct from many other domains in which ML has shown success in that (1) it purposefully tries to hide, leading to noisy labels, and (2) its behavior is often similar to that of benign software, differing only in intent, among other complicating factors. This report details the reasons for the difficulty of detecting novel malware by ML methods and offers solutions to improve the detection of novel malware.
Today's as well as tomorrow's spaceborne assets impact almost all areas of national and nuclear security. Spaceborne assets can not only collect and disseminate valuable data, well beyond just the visual, but also track terrestrial-based mobile assets in real time, and active spaceborne platforms potentially pose serious risk to vulnerable earth-based systems and infrastructures. The capability to defend national spaceborne assets from attack/interference is critical for security interests. This effort supports that mission through cost-effective, preeminent detection of approaching threats to our nation's vital resources, in order to help secure and trust these high-value assets against the threats of tomorrow. This project develops novel fabrication techniques for conformal, low-profile, and lightweight leaky-wave antenna (LWA) detection/imaging systems, fusing technical embroidery (TE) and laser ablation (LA) processes with LWA design. Technical embroidery is an emerging field in additive textile manufacturing in which flexible materials and functionalized fabrics are created for a wide variety of uses and purposes, while laser ablation is the process of removing material from a solid surface by irradiating it with a laser beam. Here, thin, conformal antennas are designed, modeled, and fabricated using both TE and LA to create lightweight, flexible, and conformal object-detection and imaging radars. This novel development ensures our nation's ability to field advanced lightweight and conformal technologies to protect spaceborne assets.
We propose a new scheme for simulation of collisions with multiple possible outcomes in variable-weight DSMC computations. The scheme is applied to a 0-D ionization rate coefficient computation and a 1-D electrical breakdown simulation. We show that the scheme offers a significant (up to an order of magnitude) improvement in the level of stochastic noise over the usual acceptance-rejection algorithm, even when controlling for the slight additional computational costs. The benefits and performance of the scheme are analyzed in detail, and possible extensions are proposed.
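For context, the following sketch shows the baseline mechanics of selecting among multiple collision outcomes: the usual acceptance-rejection selection alongside direct sampling of the discrete outcome distribution. It does not reproduce the proposed variable-weight scheme, and the outcome probabilities are illustrative.

```python
# Sketch contrasting two ways to pick among multiple collision outcomes in a
# DSMC-style Monte Carlo step. The paper's variable-weight scheme is not
# reproduced here; this only illustrates the baseline acceptance-rejection
# selection versus direct sampling of the discrete outcome distribution.
import numpy as np

rng = np.random.default_rng(1)
p = np.array([0.70, 0.25, 0.05])  # outcome probabilities (e.g., elastic,
                                  # excitation, ionization); illustrative only

def acceptance_rejection(p):
    """Propose an outcome uniformly; accept with probability p_i / max(p)."""
    pmax = p.max()
    while True:
        i = rng.integers(len(p))
        if rng.random() < p[i] / pmax:
            return i

def direct_sample(p):
    """Sample the outcome index directly from the cumulative distribution."""
    return int(np.searchsorted(np.cumsum(p), rng.random()))

# Both selections reproduce the target distribution over many trials.
counts_ar = np.bincount([acceptance_rejection(p) for _ in range(10000)])
counts_ds = np.bincount([direct_sample(p) for _ in range(10000)])
```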
As part of the development process, scaled testing of wave energy converter devices is necessary to prove a concept, study hydrodynamics, and validate control system approaches. Creating a low-cost, small, lightweight data acquisition system suitable for scaled testing is often a barrier to wave energy converter developers' ability to test such devices. This paper outlines an open-source solution to these issues, which can be customized based on specific needs. This will help developers with limited resources along a path toward commercialization.
X-ray computed tomography is generally a primary step in the characterization of defective electronic components but is too slow to screen large lots of components. Super-resolution imaging approaches, in which higher-resolution data is inferred from lower-resolution images, have the potential to substantially reduce collection times for data volumes accessible via x-ray computed tomography. Here we seek to advance existing two-dimensional super-resolution approaches directly to three-dimensional computed tomography data. Multiple scan resolutions spanning half an order of magnitude were collected for four classes of commercial electronic components to serve as training data for a deep-learning super-resolution network. A modular Python framework for three-dimensional super-resolution of computed tomography data has been developed and trained over multiple classes of electronic components. Initial training and testing demonstrate the promise of these approaches, which have the potential for more than an order of magnitude reduction in collection time for electronic component screening.
Milestone accomplishments staged the ExaWind team for successful completion of the KPP-2 challenge problem in FY23, which requires the simulation on Frontier of at least four MW-scale turbines in an atmospheric boundary layer with at least 20 billion grid points. The ExaWind project and software stack is many-faceted, with team members working in multiple areas, including linear-system solvers (Trilinos, hypre, AMReX), overset meshes, turbulence modeling, and in situ visualization, all with an aim for high-fidelity predictions and performance portability. This milestone marks significant improvements on many fronts and provides the team with a pathway to exascale wind farm simulations in FY23.
The goal of the ExaWind project is to enable predictive simulations of wind farms composed of many megawatt-scale turbines situated in complex terrain. Predictive simulations will require computational fluid dynamics (CFD) simulations for which the mesh resolves the geometry of the turbines, captures the thin boundary layers, and captures the rotation and large deflections of blades. Whereas such simulations for a single turbine are arguably petascale class, multi-turbine wind farm simulations will require exascale-class resources.
The Sandia National Laboratories, California (SNL/CA) site comprises approximately 410 acres and is located in the eastern portion of Livermore, Alameda County, California. The property is owned by the United States Department of Energy and is being managed and operated by National Technology & Engineering Solutions of Sandia, LLC. The facility location is shown on the Site Map(s) in Appendix A. This Stormwater Pollution Prevention Plan (SWPPP) is designed to comply with California’s General Permit for Stormwater Discharges Associated with Industrial Activities (General Permit) Order No. 2015-0122-DWQ (NPDES No. CAS000001) issued by the State Water Resources Control Board (State Water Board) (Ref. 6.1). This SWPPP has been prepared following the SWPPP Template provided on the California Stormwater Quality Association Stormwater Best Management Practice Handbook Portal: Industrial and Commercial (CASQA 2014). In accordance with the General Permit, Section X.A, this SWPPP contains the following required elements: Facility Name and Contact Information; Site Map; List of Significant Industrial Materials; Description of Potential Pollution Sources; Assessment of Potential Pollutant Sources; Minimum BMPs; Advanced BMPs, if applicable; Monitoring Implementation Plan (MIP); Annual Comprehensive Facility Compliance Evaluation (Annual Evaluation); and, Date that SWPPP was Initially Prepared and the Date of Each SWPPP Amendment, if Applicable.
Disastrous consequences can result from defects in manufactured parts—particularly the high consequence parts developed at Sandia. Identifying flaws in as-built parts can be done with nondestructive means, such as X-ray computed tomography (CT). However, due to artifacts and complex imagery, the task of analyzing the CT images falls to humans. Human analysis is inherently unreproducible, unscalable, and can easily miss subtle flaws. We hypothesized that deep learning methods could improve defect identification, increase the number of parts that can effectively be analyzed, and do so in a reproducible manner. We pursued two methods: (1) generating a defect-free version of a scan and looking for differences (PandaNet), and (2) using pre-trained models to develop a statistical model of normality (Feature-based Anomaly Detection System: FADS). Both PandaNet and FADS provide good results, are scalable, and can identify anomalies in imagery. In particular, FADS enables zero-shot (training-free) identification of defects for minimal computational cost and expert time. It significantly outperforms prior approaches in computational cost while achieving comparable results. FADS' core concept has also shown utility beyond anomaly detection by providing feature extraction for downstream tasks.
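In the spirit of FADS, the following is a minimal sketch of feature-based anomaly detection: features from a fixed extractor are used to fit a Gaussian model of normal imagery, and anomalies are scored by Mahalanobis distance. A random projection stands in for the pre-trained network so the sketch is self-contained; it is not the actual FADS implementation.

```python
# Sketch of feature-based anomaly detection in the spirit of FADS: features
# from a fixed, pre-trained network are used to fit a Gaussian model of
# "normal" imagery, and anomalies are scored by Mahalanobis distance. A
# random projection stands in for the pre-trained feature extractor here.
import numpy as np

rng = np.random.default_rng(0)
D, F = 32 * 32, 64                        # flattened image size, feature size
W = rng.normal(size=(D, F)) / np.sqrt(D)  # stand-in "pre-trained" features

def features(imgs):                        # imgs: (N, 32, 32)
    return imgs.reshape(len(imgs), -1) @ W

normal_imgs = rng.normal(size=(500, 32, 32))
Z = features(normal_imgs)
mu = Z.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(Z, rowvar=False) + 1e-6 * np.eye(F))

def anomaly_score(imgs):
    """Mahalanobis distance of each image's features from the normal model."""
    d = features(imgs) - mu
    return np.einsum("nf,fg,ng->n", d, cov_inv, d)

scores = anomaly_score(rng.normal(size=(10, 32, 32)) + 3.0)  # shifted = odd
```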
Single photon detection (SPD) plays an important role in many forefront areas of fundamental science and advanced engineering applications. In recent years, rapid developments in superconducting quantum computation, quantum key distribution, and quantum sensing call for SPD in the microwave frequency range. We have explored in this LDRD project a new approach to SPD in an effort to provide deterministic photon-number-resolving capability by using topological Josephson junction structures. In this SAND report, we will present results from our experimental studies of microwave response and theoretical simulations of microwave photon number resolving detector in topological Dirac semimetal Cd3As2. These results are promising for SPD at the microwave frequencies using topological quantum materials.
Projection-based model order reduction allows for the parsimonious representation of full order models (FOMs), typically obtained through the discretization of a set of partial differential equations (PDEs) using conventional techniques (e.g., finite element, finite volume, finite difference methods) where the discretization may contain a very large number of degrees of freedom. As a result of this more compact representation, the resulting projection-based reduced order models (ROMs) can achieve considerable computational speedups, which are especially useful in real-time or multi-query analyses. One known deficiency of projection-based ROMs is that they can suffer from a lack of robustness, stability and accuracy, especially in the predictive regime, which ultimately limits their useful application. Another research gap that has prevented the widespread adoption of ROMs within the modeling and simulation community is the lack of theoretical and algorithmic foundations necessary for the “plug-and-play” integration of these models into existing multi-scale and multi-physics frameworks. This paper describes a new methodology that has the potential to address both of the aforementioned deficiencies by coupling projection-based ROMs with each other as well as with conventional FOMs by means of the Schwarz alternating method [41]. Leveraging recent work that adapted the Schwarz alternating method to enable consistent and concurrent multiscale coupling of finite element FOMs in solid mechanics [35, 36], we present a new extension of the Schwarz framework that enables FOM-ROM and ROM-ROM coupling, following a domain decomposition of the physical geometry on which a PDE is posed. In order to maintain efficiency and achieve computation speed-ups, we employ hyper-reduction via the Energy-Conserving Sampling and Weighting (ECSW) approach [13]. We evaluate the proposed coupling approach in the reproductive as well as in the predictive regime on a canonical test case that involves the dynamic propagation of a traveling wave in a nonlinear hyper-elastic material.
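The transmission mechanism at the heart of the Schwarz alternating method can be illustrated on a one-dimensional Poisson problem with two overlapping subdomains, each solved with Dirichlet data taken from the other's latest iterate; here both subdomain solvers are simple finite-difference solves, purely for illustration, standing in for the FOM and ROM solves of the coupling framework above.

```python
# Sketch of the alternating Schwarz method on a 1-D Poisson problem
# u'' = f on (0, 1), u(0) = u(1) = 0, with two overlapping subdomains.
# Each pass solves one subdomain with a Dirichlet value taken from the
# other's latest iterate; the same transmission mechanism underlies the
# FOM-ROM and ROM-ROM coupling described above (both "models" here are
# simple finite-difference solves, purely for illustration).
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.full(n, -1.0)                    # constant forcing
u = np.zeros(n)                         # global iterate

def solve_subdomain(i0, i1, left, right):
    """Direct solve of u'' = f on nodes i0..i1 with Dirichlet data."""
    m = i1 - i0 - 1                     # number of interior unknowns
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    b = f[i0 + 1:i1].copy()
    b[0] -= left / h**2                 # move boundary data to the RHS
    b[-1] -= right / h**2
    return np.linalg.solve(A, b)

for _ in range(20):                     # alternating Schwarz iterations
    u[1:70] = solve_subdomain(0, 70, 0.0, u[70])          # subdomain [0, 0.7]
    u[41:n - 1] = solve_subdomain(40, n - 1, u[40], 0.0)  # subdomain [0.4, 1]

exact = 0.5 * x * (1.0 - x)             # analytic solution for f = -1
print("max error:", np.abs(u - exact).max())
```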
This document provides an overview of the economic and technical challenges related to bringing small modular reactors to market and then presents an outline for how to address those challenges. The purpose of this project was to proactively design software for its intended use, providing a strategic position for future work. This project seeks to augment the short-term stop-gap approach of trying to use legacy software well outside its range of applicability.
The PRO-X program is actively supporting the design of nuclear systems by developing a framework to both optimize the fuel cycle infrastructure for advanced reactors (ARs) and minimize the potential for production of weapons-usable nuclear material. Three study topics, which may offer proliferation resistance opportunities or advantages in the nuclear fuel cycle, are currently being investigated by Sandia National Laboratories (SNL) with support from Argonne National Laboratory (ANL): (1) Transportation Global Landscape, (2) Transportation Avoidability, and (3) Parallel Modular Systems vs Single Large System (Crosscutting Activity).
The tearing parameter criterion and material softening failure method currently used in the multilinear elastic-plastic constitutive model were added as an option to the modular failure capabilities. The modular failure implementation was integrated with the multilevel solver for multi-element simulations. Currently, this implementation is available only for the J2 plasticity model, due to the formulation of the material softening approach. The implementation compared well with multilinear elastic-plastic model results for a uniaxial tension test, a simple shear test, and a representative structural problem. Necessary generalizations of the failure method to extend it as a modular option for all plasticity models are highlighted.
The ASC program seeks to use machine learning to improve efficiencies in its stockpile stewardship mission. Moreover, there is a growing market for technologies dedicated to accelerating AI workloads. Many of these emerging architectures promise savings in energy, area, and latency relative to traditional CPUs for these types of applications: neuromorphic analog and digital technologies provide low-power, configurable acceleration of challenging artificial intelligence (AI) algorithms. If designed into a heterogeneous system with other accelerators and conventional compute nodes, these technologies have the potential to augment the capabilities of traditional High Performance Computing (HPC) platforms [5]. This expanded computation space requires not only a new approach to physics simulation, but also the ability to evaluate and analyze next-generation architectures specialized for AI/ML workloads in both traditional HPC and embedded ND applications. Developing this capability will enable ASC to understand how this hardware performs in both HPC and ND environments, improve our ability to port our applications, guide the development of computing hardware, and inform vendor interactions, leading them toward solutions that address ASC’s unique requirements.
The recent growth in multifidelity uncertainty quantification has given rise to a large set of variance reduction techniques that leverage information from model ensembles to provide variance reduction for estimates of the statistics of a high-fidelity model. In this paper we provide two contributions: (1) we utilize an ensemble estimator to account for uncertainties in the optimal weights of approximate control variate (ACV) approaches and derive lower bounds on the number of samples required to guarantee variance reduction; and (2) we extend an existing multifidelity importance sampling (MFIS) scheme to leverage control variates. Our approach directly addresses a limitation of many multifidelity sampling strategies: the need for pilot samples to estimate covariances. As such, we make significant progress toward both increasing the practicality of approximate control variates, for instance by accounting for the effect of pilot samples, and using multifidelity approaches more effectively for estimating low-probability events. The numerical results indicate that our hybrid MFIS-ACV estimator achieves up to a 50% improvement in variance reduction over the existing state-of-the-art MFIS estimator, which itself already converges far faster than plain Monte Carlo, on several problems in computational mechanics.
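As a concrete illustration of the role pilot samples play, here is a minimal two-model control-variate sketch in the ACV spirit (toy functions of our own choosing, not the paper's estimator), where the control weight is itself estimated from a pilot set.

```python
# Minimal two-model control-variate sketch (illustrative, not the paper's
# estimator): the control weight alpha is estimated from pilot samples --
# the very step whose uncertainty the paper analyzes.
import numpy as np

rng = np.random.default_rng(0)
f_hi = lambda z: np.exp(z)                # "high-fidelity" model
f_lo = lambda z: 1.0 + z + 0.5 * z**2     # cheap correlated surrogate

# Pilot samples: estimate cov(f_hi, f_lo) and var(f_lo) for the weight.
zp = rng.standard_normal(50)
alpha = np.cov(f_hi(zp), f_lo(zp))[0, 1] / np.var(f_lo(zp), ddof=1)

# Shared high-fidelity samples, plus many extra cheap low-fidelity samples.
z = rng.standard_normal(100)
z_extra = rng.standard_normal(10_000)
mu_lo_fine = np.mean(f_lo(np.concatenate([z, z_extra])))

mu_mc = np.mean(f_hi(z))                                   # plain Monte Carlo
mu_acv = mu_mc + alpha * (mu_lo_fine - np.mean(f_lo(z)))   # control variate

print(f"MC:  {mu_mc:.4f}   CV: {mu_acv:.4f}   exact: {np.exp(0.5):.4f}")
```

The scatter in alpha induced by the finite pilot set is exactly the effect the paper's ensemble estimator and sample-size bounds are designed to control.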
Kononov, Alina K.; Lee, Cheng-Wei L.; Pereira dos Santos, Tatiane P.; Robinson, Brian R.; Yao, Yifan Y.; Yao, Yi Y.; Andrade, Xavier A.; Baczewski, Andrew D.; Constantinescu, Emil C.; Correa, Alfredo C.; Kanai, Yosuke K.; Modine, N.A.; Schleife, Andre S.
Due to a beneficial balance of computational cost and accuracy, real-time time-dependent density-functional theory has emerged as a promising first-principles framework to describe real-time electron dynamics. Here we discuss recent implementations of this approach, in particular in the context of complex, extended systems. Results include an analysis of the computational cost associated with numerical propagation and with the use of absorbing boundary conditions. We extensively explore the shortcomings of this framework in describing electron-electron scattering in real time and compare to many-body perturbation theory. Modern improvements to the description of exchange and correlation are reviewed. In this work, we focus specifically on the Qb@ll code, which we have mainly used for these types of simulations over recent years, and we conclude by pointing to further progress needed going forward.
Translating the surging interest in neuromorphic electronic components, such as those based on nonlinearities near Mott transitions, into large-scale commercial deployment faces steep challenges due to the current lack of means to identify and design key material parameters. These issues are exemplified by the difficulties in connecting measurable material properties to device behavior via circuit element models. Here, the principle of local activity is used to build a model of VO2/SiN Mott threshold switches by sequentially accounting for constraints from a minimal set of quasistatic and dynamic electrical data and high-spatial-resolution thermal data obtained via in situ thermoreflectance mapping. By combining independent data sets for devices with varying dimensions, the model is distilled to measurable material properties, and device scaling laws are established. The model can accurately predict electrical and thermal conductivities and capacitances, as well as locally active dynamics (especially persistent spiking self-oscillations). The systematic procedure by which this model is developed has been a missing link in predictively connecting neuromorphic device behavior with underlying material properties, and it should enable rapid screening of material candidates before committing to expensive manufacturing processes and testing procedures.
In this LDRD we investigated the application of machine learning methods to understand dimensionality reduction and the evolution of the Rayleigh-Taylor instability (RTI). As part of the project, we undertook a significant literature review to understand current analytical theory and machine-learning-based methods for treating the evolution of this instability. We note that we chose to refocus on assessing the hydrodynamic RTI as opposed to the magneto-Rayleigh-Taylor instability originally proposed. This choice enabled utilizing a wealth of analytic test cases and working with relatively fast-running open-source simulations of single-mode RTI, and it greatly facilitated external collaboration with URA summer fellowship student Theodore Broeren. In this project we studied the application of methods from dynamical systems learning and traditional regression to recover the behavior of RTI ranging from the fully nonlinear to the weakly nonlinear (wNL) regime. Here we report on the two tested methods with which we had the most success: SINDy and a more traditional regression-based approach inspired by analytic wNL theory. We conclude with a discussion of potential future extensions of this work that may improve our understanding from both theoretical and phenomenological perspectives.
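For readers unfamiliar with SINDy, the following self-contained sketch (a toy ODE of our own choosing, not the project's RTI data) shows the core idea: sparse regression over a library of candidate terms via sequentially thresholded least squares.

```python
# Minimal SINDy sketch (illustrative; the project's RTI application is far
# richer): recover dx/dt = x - x**3 from trajectory data.
import numpy as np

dt = 0.001
x = np.empty(5000); x[0] = 0.1
for i in range(len(x) - 1):                        # forward-Euler trajectory
    x[i + 1] = x[i] + dt * (x[i] - x[i]**3)

dxdt = np.gradient(x, dt)                          # numerical derivative
Theta = np.column_stack([x**p for p in range(5)])  # library: 1, x, ..., x^4

xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
for _ in range(10):                                # STLSQ iterations
    small = np.abs(xi) < 0.05                      # sparsity threshold
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)[0]

print("identified coefficients of (1, x, x^2, x^3, x^4):", np.round(xi, 3))
```

The thresholding step is what yields an interpretable, parsimonious model rather than a dense least-squares fit.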
The sCO2 system located in 916/160A, Sandia National Laboratories, CA, was constructed in 2014 for testing of materials in the presence of supercritical carbon dioxide (sCO2) at high pressures (up to 3500 psi) and temperatures (up to 650°C). The basic design of the system consists of a thermally insulated IN625 autoclave, a high-pressure supercritical CO2 compressor, autoclave heaters, temperature controllers, a gas manifold, and temperature and pressure diagnostics. The system was modified in 2016 (the sCO2 compressor was removed) to enable corrosion studies with metal alloys in gaseous CO2 at lower pressure (up to 300 psi) at 500°C. The capability saw little use until 2020, when preliminary tests (again without the supercritical CO2 compressor) exposed fatigue and tensile specimens of HN 230 and 800H alloys to gaseous CO2 for 168 hours. Using this capability, we have now finished experiments with low-pressure (450 psi / 3 MPa), high-temperature (650°C) exposure of fatigue and tensile specimens of HN 230 and 800H alloys to CO2 gas for 168 hours. The data from these experiments will be compared to data gathered from experiments performed in 2020 using the tube furnace, and the comparison will be presented in a future report. Note that the tube furnace experiments ran 500-1500 hours, unlike the 168 hours of exposure in the recent experiments. This comparison can help validate the use of the sCO2 autoclave for both CO2 and sCO2 experiments.
Under IER-305, critical experiments will be performed with and without molybdenum sleeves on 7uPCX fuel rods. New critical assembly hardware has been designed and procured to accomplish the experiments, with the fuel supported in a 1.55 cm triangular-pitched array.
Uncertainty quantification (UQ) plays a major role in verification and validation for computational engineering models and simulations, and establishes trust in the predictive capability of computational models. In the materials science and engineering context, where the process-structure-property-performance linkage is well known to be the roadmap from manufacturing to engineering performance, numerous integrated computational materials engineering (ICME) models have been developed across a wide spectrum of length-scales and time-scales to relieve the burden of resource-intensive experiments. Within the structure-property linkage, crystal plasticity finite element method (CPFEM) models have been widely used, as they are one of the few ICME toolboxes that allow numerical predictions, providing the bridge from microstructure to material properties and performance. Several constitutive models have been proposed over the last few decades to capture the mechanics and plasticity behavior of materials. While some UQ studies have been performed, the robustness and uncertainty of these constitutive models have not been rigorously established. In this work, we apply a stochastic collocation (SC) method, which is mathematically rigorous and has been widely used in the field of UQ, to quantify the uncertainty of the three most commonly used constitutive models in CPFEM, namely phenomenological models (with and without twinning) and dislocation-density-based constitutive models, for three different crystal structures: face-centered cubic (fcc) copper (Cu), body-centered cubic (bcc) tungsten (W), and hexagonal close-packed (hcp) magnesium (Mg). Our numerical results not only quantify the uncertainty of these constitutive models in the predicted stress-strain curves, but also analyze the global sensitivity of the underlying constitutive parameters with respect to the initial yield behavior, which may be helpful for robust constitutive model calibration work in the future.
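As a small illustration of stochastic collocation (with a purely hypothetical one-parameter model, not one of the CPFEM constitutive models above), moments of an output are computed by evaluating the model only at the Gauss quadrature nodes of the input distribution.

```python
# Minimal stochastic-collocation sketch: propagate a Gaussian uncertain
# parameter through a toy "model" using Gauss-Hermite collocation points.
# The model form is hypothetical, standing in for a full CPFEM run.
import numpy as np

model = lambda theta: 100.0 + 20.0 * np.tanh(theta)   # toy response surface

mu, sigma = 0.5, 0.2                                  # theta ~ N(mu, sigma^2)
nodes, weights = np.polynomial.hermite_e.hermegauss(9)  # probabilists' GH rule

theta = mu + sigma * nodes                 # map nodes into parameter space
w = weights / np.sqrt(2.0 * np.pi)         # normalize to probability weights
vals = model(theta)                        # one "simulation" per node

mean = np.sum(w * vals)
var = np.sum(w * (vals - mean)**2)
print(f"collocation mean: {mean:.4f}, std: {np.sqrt(var):.4f}")
```

In the paper's setting each node evaluation would be a full CPFEM simulation, which is why obtaining accurate statistics from a small number of collocation points matters.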
Plasma formation from intensely ohmically heated conductors is known to be highly non-uniform, as local overheating can be driven by micron-scale imperfections. A detailed understanding of plasma formation is required to predict the performance of magnetically driven physics targets and magnetically-insulated transmission lines (MITLs). Previous LDRD-supported work (projects 178661 and 200269) developed the electrothermal instability (ETI) platform on the Mykonos facility to gather high-resolution images of the self-emission from the non-uniform ohmic heating of z-pinch rods. Experiments studying highly inhomogeneous alloyed aluminum captured complex heating topography. To enable detailed comparison with magnetohydrodynamic (MHD) simulation, 99.999% pure aluminum rods in a z-pinch configuration were diamond-turned to ~10 nm surface roughness and then further machined to include well-characterized micron-scale "engineered" defects (ED) on the rod's surface (T.J. Awe, et al., Phys. Plasmas 28, 072104 (2021)). In this project, the engineered-defect hardware and diagnostic platform were used to study ETI evolution and non-uniform plasma formation from stainless steel targets. The experimental objective was to clearly determine what role, if any, manufacturing, preparation, or alloy differences play in encouraging nonuniform heating and plasma formation from high-current-density stainless steel. The data may identify improvements that could be implemented in the fabrication and preparation of electrodes used on the Z machine. Preliminary data show that the difference in manufacturer has no observed effect on ETI evolution, that stainless alloy 304L heats more uniformly than alloy 310 at similar current densities, and that stainless steel undergoes the same evolutionary ETI stages as ultra-pure aluminum, with increased emission tied to areas of elevated surface roughness.
Artificial intelligence and machine learning (AI/ML) are becoming important tools for scientific modeling and simulation, as they already are in fields such as image analysis and natural language processing. ML techniques can leverage the computing power available in modern systems and reduce the human effort needed to configure experiments, interpret and visualize results, draw conclusions from huge quantities of raw data, and build surrogates for physics-based models. Domain scientists in fields like fluid dynamics, microelectronics, and chemistry can automate many of their most difficult and repetitive tasks or improve design times by using faster ML surrogates. However, modern ML and traditional scientific high-performance computing (HPC) tend to use completely different software ecosystems. While ML frameworks like PyTorch and TensorFlow provide Python APIs, most HPC applications and libraries are written in C++. Direct interoperability between the two languages is possible but tedious and error-prone. In this work, we show that a compiler-based approach can bridge the gap between ML frameworks and scientific software with less developer effort and better efficiency. We use the MLIR (multi-level intermediate representation) ecosystem to compile a pre-trained convolutional neural network (CNN) in PyTorch to freestanding C++ source code in the Kokkos programming model. Kokkos is a programming model widely used in HPC to write portable, shared-memory parallel code that can natively target a variety of CPU and GPU architectures. Our compiler-generated source code can be directly integrated into any Kokkos-based application with no dependencies on Python or cross-language interfaces.
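As a minimal illustration of the front end of such a pipeline (a hypothetical toy model; the MLIR lowering to Kokkos C++ described above is not reproduced here), a PyTorch CNN can be traced into a static TorchScript graph of the kind MLIR-based compilers consume.

```python
# Illustrative first step only: trace a small (hypothetical) PyTorch CNN into
# a static TorchScript graph; the MLIR/Kokkos code generation happens
# downstream and is not shown.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(4, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = TinyCNN().eval()
example = torch.randn(1, 1, 28, 28)
traced = torch.jit.trace(model, example)   # static graph suitable for lowering
traced.save("tiny_cnn.pt")
print(traced.graph)
```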
The purpose of this report is to document updates on the apparatus to simulate commercial vacuum drying procedures at the Nuclear Energy Work Complex at Sandia National Laboratories. Validation of the extent of water removal in a dry spent nuclear fuel storage system based on drying procedures used at nuclear power plants is needed to close existing technical gaps. Operational conditions leading to incomplete drying may have potential impacts on the fuel, cladding, and other components in the system during subsequent storage and disposal. A general lack of data suitable for model validation of commercial nuclear canister drying processes necessitates well-designed investigations of drying process efficacy and water retention. Scaled tests that incorporate relevant physics and well-controlled boundary conditions are essential to provide insight and guidance to the simulation of prototypic systems undergoing drying processes. This report documents a new test apparatus, the Advanced Drying Cycle Simulator (ADCS). This apparatus was built to simulate commercial drying procedures and quantify the amount of residual water remaining in a pressurized water reactor (PWR) fuel assembly after drying. The ADCS was constructed with a prototypic 17×17 PWR fuel skeleton and waterproof heater rods to simulate decay heat. These waterproof heaters are a next-generation design of the heater rods developed and tested at Sandia National Laboratories in FY20. This report describes the ADCS vessel build that was completed late in FY22, including the receipt of the prototypic-length waterproof heater rods and construction of the fuel basket and the pressure vessel components. In addition, installations of thermocouples, emissivity coupons, pressure and vacuum lines, pressure transducers, and electrical connections were completed. Preliminary power functionality testing was conducted to demonstrate the capabilities of the ADCS. In FY23, a test plan for the ADCS will be developed to implement a drying procedure based on measurements from the process used for the High Burnup Demonstration Project. While applying power to the simulated fuel rods, this procedure is expected to consist of filling the ADCS vessel with water, draining the water with applied pressure and multiple helium blowdowns, evacuating additional water with a vacuum drying sequence at successively lower pressures, and backfilling the vessel with helium. Additional investigations are expected to feature failed fuel rod simulators with engineered cladding defects and guide tubes with obstructed dashpots to challenge the drying system with multiple water retention sites.
Tang, Yanfei T.; McLaugahan, John E.; Grest, Gary S.; Cheng, Shengfeng C.
A method of simulating the drying process of a soft matter solution with an implicit solvent model, by moving the liquid-vapor interface, is applied to various solution films and droplets. For a solution of a polymer and nanoparticles, we observe “polymer-on-top” stratification, similar to that found previously with an explicit solvent model. Furthermore, “polymer-on-top” is found even when the nanoparticle size is smaller than the radius of gyration of the polymer chains. For a suspension droplet of a bidisperse mixture of nanoparticles, we show that core-shell clusters of nanoparticles can be obtained via the “small-on-outside” stratification mechanism at fast evaporation rates. “Large-on-outside” stratification and uniform particle distributions are also observed when the evaporation rate is reduced. Polymeric particles with various morphologies, including Janus spheres, core-shell particles, and patchy particles, are produced from drying droplets of polymer solutions by combining fast evaporation with a controlled interaction between the polymers and the liquid-vapor interface. Our results validate the applicability of the moving interface method to a wide range of drying systems. The limitations of the method are pointed out, and cautions are provided to potential practitioners regarding cases where the method might fail.
Direct air capture (DAC) of CO2 is one of the negative emission technologies under development to limit the impacts of climate change. The dilute concentration of CO2 in the atmosphere (~400 ppm) requires new carbon capture materials with increased CO2 selectivity that is not met by current materials. Porous liquids (PLs) are an emerging class of materials consisting of a combination of solvents and porous hosts, creating a liquid with permanent porosity. PLs have demonstrated excellent CO2 selectivity, but the features that control how and why PLs selectively capture CO2 are unknown. To elucidate these mechanisms, density functional theory (DFT) simulations were used to investigate two different PLs. The first is a ZIF-8 porous host in a water/glycol/2-methylimidazole solvent. The second is the CC13 porous organic cage with multiple bulky solvents. DFT simulations identified that in both systems CO2 preferentially binds in the pore window rather than in the internal pore space, indicating that the solvent-porous host interface controls the CO2 selectivity. Additionally, SNL synthesized ZIF-8-based PL compositions. Evaluation of the long-term stability of the PL identified no change in the ZIF-8 crystallinity after multiple agitation cycles, demonstrating its potential for use in carbon capture systems. Through this project, SNL has developed a fundamental understanding of solvent-host interactions, as well as how and where CO2 binds in PLs. Based on these results, future efforts will focus not on how CO2 behaves inside the pore, but on the porous host-solvent interface as the driving force for PL stability and CO2 selectivity.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. The National Nuclear Security Administration’s Sandia Field Office administers the contract and oversees contractor operations at Sandia National Laboratories, New Mexico. Activities at the site support research and development programs with a wide variety of national security missions, resulting in technologies for nonproliferation, homeland security, energy and infrastructure, and defense systems and assessments.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. The National Nuclear Security Administration’s Sandia Field Office administers the contract and oversees contractor operations at Sandia National Laboratories, Kaua‘i Test Facility in Hawai‘i. Activities at the site are conducted in support of U.S. Department of Energy weapons programs, and the site has operated as a rocket preparation launching and tracking facility since 1962.
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration. The National Nuclear Security Administration’s Sandia Field Office administers the contract and oversees contractor operations at Sandia National Laboratories, Tonopah Test Range. Activities at the site are conducted in support of U.S. Department of Energy weapons programs and have operated at the site since 1957.
CTE (coefficient of thermal expansion) mismatch between two wafers can lead to brittle failure when large areas are bonded on top of one another (wafer-to-wafer or wafer-to-die bonds). To address this type of failure, we proposed patterning a polymer around metallic interconnects. For this project, we utilized benzocyclobutene (BCB) to form the bond and accommodate stress. For the metal interconnects, we used indium. To determine the benefits of utilizing BCB, mechanical shear testing of dies bonded with just BCB was compared to dies bonded with just oxide. These tests demonstrated that BCB, when cured for only 30 minutes and bonded at 200°C, was able to withstand shear forces similar to oxide. Furthermore, when the BCB did fail, it experienced a more ductile failure, allowing the silicon to crack rather than shatter. To demonstrate the feasibility of using BCB between indium interconnects, wafers were patterned with layers of BCB with vias for indium or ENEPIG (electroless nickel, electroless palladium, immersion gold). Subsequently, these wafers were patterned with a variety of indium or ENEPIG interconnect pitches, diameters, and heights. These dies were bonded under a variety of conditions, and those that held a bond were cross-sectioned and imaged. Images revealed that certain bonding conditions allow the interconnects and BCB to achieve a void-free bond, demonstrating that utilizing polymers in place of oxide is a feasible way to reduce CTE stress.
As heterogeneous systems become increasingly popular for both mobile and high-performance computing, conventional efficiency techniques such as dynamic voltage and frequency scaling (DVFS) fail to account for the tightly coupled and varied nature of systems on a chip (SoCs). In this work, we explore the impact of system-unaware DVFS techniques on a mobile SoC under three benchmark suites: Chai, Rodinia, and Antutu. We then analyze performance trends across the suites to identify a set of consistent operating points that optimally balance power and performance across the system. The consistent operating points are then assembled into a dependency graph, which can be leveraged to produce a more effective, SoC-wide governor.
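A sketch of the operating-point selection step (with made-up measurements, not the paper's data) might look like the following Pareto filter over measured power/throughput pairs.

```python
# Illustrative Pareto filter over DVFS operating points (hypothetical numbers):
# keep only points not dominated on the (power, throughput) trade-off.
points = [  # (frequency_MHz, avg_power_W, throughput)
    (600, 1.1, 40.0), (900, 1.6, 62.0), (1000, 2.5, 65.0),
    (1200, 2.4, 70.0), (1500, 3.8, 72.0),
]

def dominated(p, others):
    """p is dominated if another point uses no more power yet delivers no less throughput."""
    return any(q is not p and q[1] <= p[1] and q[2] >= p[2] for q in others)

pareto = [p for p in points if not dominated(p, points)]
print("Pareto-optimal operating points:", pareto)
```

Points that survive this filter across all benchmark suites are candidates for the consistent operating points from which a governor's dependency graph would be built.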
Air-cooled heat exchangers are used to reject excess heat from a concentrated source to the surrounding atmosphere for a variety of mechanical and electrical systems. Advancements in heat exchanger design have been very limited in recent years for most product applications. In support of heat exchanger advancement, Sandia developed the Sandia Cooler.
We theoretically studied the feasibility of building a long-term read-write quantum memory using the principle of parity-time (PT) symmetry, which has already been demonstrated for classical systems. The design consisted of a two-resonator system. Although both resonators would feature intrinsic loss, the goal was to apply a driving signal to one of the resonators such that it would become an amplifying subsystem, with a gain rate equal and opposite to the loss rate of the lossy resonator. Consequently, the loss and gain probabilities in the overall system would cancel out, yielding a closed quantum system. Upon performing detailed calculations on the impact of a driving signal on a lossy resonator, our results demonstrated that an amplifying resonator is physically unfeasible, thus forestalling the possibility of PT-symmetric quantum storage. Our finding serves to significantly narrow down future research into designing a viable quantum hard drive.
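For context, a minimal textbook coupled-mode model of such a balanced gain-loss resonator pair (our notation, not taken from the report) makes the PT-symmetry condition explicit:

```latex
% Two coupled modes with balanced loss (-i\gamma) and gain (+i\gamma), coupling \kappa:
H = \begin{pmatrix} \omega_0 - i\gamma & \kappa \\ \kappa & \omega_0 + i\gamma \end{pmatrix},
\qquad
\omega_\pm = \omega_0 \pm \sqrt{\kappa^2 - \gamma^2}.
```

The eigenfrequencies remain real (the PT-unbroken phase) only while the gain and loss rates stay balanced with γ ≤ κ; the report's central finding is that the amplifying resonator required to hold this balance is not physically realizable quantum mechanically.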
Nonlocal models provide a much-needed predictive capability for important Sandia mission applications, ranging from fracture mechanics for nuclear components to subsurface flow for nuclear waste disposal, where traditional partial differential equation (PDE) models fail to capture effects due to long-range forces at the microscale and mesoscale. However, utilization of this capability is seriously compromised by the lack of a rigorous nonlocal interface theory, which is required for both the application and the efficient solution of nonlocal models. To unlock the full potential of nonlocal modeling, we developed a mathematically rigorous and physically consistent interface theory and demonstrated its scope in mission-relevant exemplar problems.
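As a point of reference, a standard nonlocal diffusion operator (generic textbook form; the report's specific kernels may differ) replaces spatial derivatives with an integral over a finite interaction horizon δ:

```latex
% Nonlocal diffusion operator with kernel \gamma and horizon \delta:
\mathcal{L}_\delta u(x) = \int_{B_\delta(x)} \big( u(y) - u(x) \big)\, \gamma(x, y)\, dy .
```

Because points interact across a finite horizon rather than only at a sharp surface, interface conditions must be posed on volumetric regions of thickness δ, which is precisely what makes a rigorous nonlocal interface theory nontrivial.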
Accurate estimation of greenhouse gas (GHG) emissions is very important for developing climate change mitigation strategies that control and reduce GHG emissions. This project aims to develop multiple deep learning approaches to estimate anthropogenic greenhouse gas emissions using multiple types of satellite data. NO2 concentration is chosen as an example of GHGs to evaluate the proposed approach. Two Sentinel satellites (Sentinel-2 and Sentinel-5P) provide multiscale observations of GHGs, from 10-60 m resolution (Sentinel-2) to ~kilometer-scale resolution (Sentinel-5P). Among the multiple deep learning (DL) architectures evaluated, the two best DL models demonstrate that key features of spatio-temporal satellite data and additional information (e.g., observation times and/or coordinates of ground stations) can be extracted using convolutional neural networks and feedforward neural networks, respectively. In particular, irregular time series data from different NO2 observation stations limit the flexibility of the long short-term memory architecture, requiring zero-padding to fill in missing data. However, the deep neural operator (DNO) architecture can stack time-series data as input, providing flexibility in input structure without zero-padding. As a result, the DNO outperformed the other deep learning architectures in accounting for time-varying features. Overall, temporal patterns with smooth seasonal variations were predicted very well, while frequently fluctuating patterns were not. In addition, uncertainty quantification using a conformal inference method was performed to provide prediction ranges. Overall, this research lays new groundwork for estimating greenhouse gas concentrations from multiple satellite data sources, enhancing our capability to track the causes of climate change and develop mitigation strategies.
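For reference, split conformal prediction, the simplest member of the conformal inference family (the snippet below uses a synthetic stand-in predictor, not the project's DL models), wraps any point predictor in prediction intervals with finite-sample coverage.

```python
# Minimal split-conformal sketch (illustrative): calibrate an interval half-width
# from held-out residuals so that new intervals have ~(1 - alpha) coverage.
import numpy as np

rng = np.random.default_rng(1)
predict = lambda X: 2.0 * X.ravel()            # stand-in for a trained model

# Calibration set: absolute residuals serve as conformity scores.
X_cal = rng.uniform(0, 1, (200, 1))
y_cal = 2.0 * X_cal.ravel() + rng.normal(0, 0.1, 200)
scores = np.abs(y_cal - predict(X_cal))

alpha = 0.1                                    # target 90% coverage
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]                     # conformal quantile

X_new = np.array([[0.5]])
y_hat = predict(X_new)[0]
print(f"90% prediction interval: [{y_hat - q:.3f}, {y_hat + q:.3f}]")
```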