Publications

MELCOR Code Change History (Revision 14959 to 18019)

Humphries, Larry; Phillips, Jesse P.; Schmidt, Rodney C.; Beeny, Bradley A.; Louie, David L.; Bixler, Nathan E.

This document provides brief descriptions of the MELCOR code enhancements made between code revisions 14959 and 18019. Revision 14959 represents the previous official code release; the modeling features described within this document are therefore provided to assist users who update to the newest official MELCOR code release, 18019. Together with the newly updated MELCOR Users' Guide and Reference Manual, these descriptions allow users to become aware of the new capabilities and assess them for their modeling and analysis applications.

MELCOR Code Change History: Revision 11932 to 14959 Patch Release Addendum

Humphries, Larry; Phillips, Jesse P.; Schmidt, Rodney C.; Beeny, Bradley A.; Wagner, Kenneth C.; Louie, David L.

This document provides brief descriptions of the MELCOR code enhancements made between code revisions 11932 and 14959. Revision 11932 represents the last official code release; the modeling features described within this document are therefore provided to assist users who update to the newest official MELCOR code release, 14959. Together with the newly updated MELCOR Users' Guide [2] and Reference Manual [3], these descriptions allow users to become aware of the new capabilities and assess them for their modeling and analysis applications. Following the official release, an addendum section was added to this report detailing modifications made to the official release in support of the accompanying patch release. The addenda address user-reported issues and previously known issues within the official code release, extending the original quick-look document to also cover the patch release. The addenda also document recent changes to the input records in the Users' Guide applicable to the patch release and correct a few issues in the revision 14959 release.

Sensitivity Analysis of OECD Benchmark Tests in BISON

Swiler, Laura P.; Gamble, Kyle G.; Schmidt, Rodney C.; Williamson, Richard W.

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
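
As a rough illustration of the correlation measures used in such a study (and not the Dakota implementation itself), the sketch below computes Pearson and Spearman coefficients between a single sampled input and a single response; the sample values and the example input/response pairing are made up.

```cpp
// Illustrative sketch (not Dakota): Pearson and Spearman correlation
// coefficients between one sampled input parameter and one response.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Pearson correlation of two equal-length samples.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    const double mx = std::accumulate(x.begin(), x.end(), 0.0) / n;
    const double my = std::accumulate(y.begin(), y.end(), 0.0) / n;
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}

// Replace each value by its rank (1..n); ties are not handled in this sketch.
std::vector<double> ranks(const std::vector<double>& v) {
    std::vector<std::size_t> idx(v.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](std::size_t a, std::size_t b) { return v[a] < v[b]; });
    std::vector<double> r(v.size());
    for (std::size_t k = 0; k < idx.size(); ++k) r[idx[k]] = k + 1.0;
    return r;
}

// Spearman correlation is the Pearson correlation of the ranks.
double spearman(const std::vector<double>& x, const std::vector<double>& y) {
    return pearson(ranks(x), ranks(y));
}

int main() {
    // Hypothetical data: one input (e.g., a fuel-property multiplier) and one
    // response (e.g., a fuel centerline temperature) over a few samples.
    std::vector<double> input    = {0.9, 1.0, 1.1, 1.2, 0.95, 1.05};
    std::vector<double> response = {1410.0, 1432.0, 1455.0, 1470.0, 1418.0, 1441.0};
    std::cout << "Pearson:  " << pearson(input, response) << "\n"
              << "Spearman: " << spearman(input, response) << "\n";
    return 0;
}
```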

Optimization and parallelization of the thermal-hydraulic subchannel code CTF for high-fidelity multi-physics applications

Annals of Nuclear Energy

Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

This paper describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high-fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations - including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices - is first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a "single program multiple data" parallelization strategy targeting distributed memory "multiple instruction multiple data" platforms utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard Message-Passing Interface (MPI) calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pressurized water reactor (PWR) pre-processor utility that uses a greatly simplified set of user input compared with the traditional CTF input. To run CTF in parallel, two additional libraries are currently needed: MPI, for inter-processor message passing, and the Portable, Extensible Toolkit for Scientific Computation (PETSc), which is used to solve the global pressure matrix in parallel. Results presented include a set of testing and verification calculations and performance tests assessing parallel scaling characteristics up to a full-core, pincell-resolved model of a PWR core containing 193 17 × 17 assemblies under hot full-power conditions. This model, representative of Watts Bar Unit 1 and containing about 56,000 pins, was represented with roughly 59,000 subchannels, leading to about 2.8 million thermal-hydraulic control volumes in total. Results demonstrate that CTF can now perform full-core analysis of a PWR (not previously possible owing to excessively long runtimes and memory requirements) on the order of 20 min. This new capability is useful not only to stand-alone CTF users but is also being leveraged in support of coupled-code multi-physics calculations being done in the CASL program.
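
The assembly-level domain decomposition described above can be illustrated with a minimal MPI sketch. The code below is an assumption-laden toy, not CTF: each rank stands in for one fuel assembly in a 1D row and exchanges a made-up vector of boundary data with its neighbors using MPI_Sendrecv.

```cpp
// Illustrative sketch of one-MPI-rank-per-assembly domain decomposition.
// A 1D row of assemblies and made-up boundary data are assumed for brevity;
// this is not CTF's actual implementation.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank owns one "assembly"; these vectors stand in for the lateral
    // coupling terms on the gaps shared with neighboring assemblies.
    const int n = 16;                               // hypothetical boundary size
    std::vector<double> mine(n, static_cast<double>(rank));
    std::vector<double> from_left(n, 0.0), from_right(n, 0.0);

    const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Canonical shift pattern: send right / receive from left, then the reverse.
    MPI_Sendrecv(mine.data(), n, MPI_DOUBLE, right, 0,
                 from_left.data(), n, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(mine.data(), n, MPI_DOUBLE, left, 1,
                 from_right.data(), n, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    std::printf("rank %d: left neighbor value %.1f, right neighbor value %.1f\n",
                rank, from_left[0], from_right[0]);
    MPI_Finalize();
    return 0;
}
```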

Sensitivity Analysis of the Gap Heat Transfer Model in BISON

Swiler, Laura P.; Schmidt, Rodney C.; Williamson, Richard W.; Perez, Danielle P.

This report summarizes the results of a NEAMS project focused on sensitivity analysis of the model of heat transfer in the gap between the fuel rod and the cladding, as used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the associated responses to the modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

Sodium fast reactor safety and licensing research plan. Volume II

LaChance, Jeffrey L.; Suo-Anttila, Jill M.; Hewson, John C.; Olivier, Tara J.; Phillips, Jesse P.; Denman, Matthew R.; Powers, Dana A.; Schmidt, Rodney C.

Expert panels composed of subject matter experts from the U.S. National Laboratories (SNL, ANL, INL, ORNL, LBL, and BNL), universities (University of Wisconsin and Ohio State University), international agencies (IRSN, CEA, JAEA, KAERI, and JRC-IE), and private consulting companies (Radiation Effects Consulting) were assembled to perform a gap analysis for sodium fast reactor licensing. Expert-opinion elicitation was performed to qualitatively assess the current state of sodium fast reactor technologies. Five independent gap analyses were performed, resulting in the following topical reports: (1) Accident Initiators and Sequences (i.e., Initiators/Sequences Technology Gap Analysis), (2) Sodium Technology Phenomena (i.e., Advanced Burner Reactor Sodium Technology Gap Analysis), (3) Fuels and Materials (i.e., Sodium Fast Reactor Fuels and Materials: Research Needs), (4) Source Term Characterization (i.e., Advanced Sodium Fast Reactor Accident Source Terms: Research Needs), and (5) Computer Codes and Models (i.e., Sodium Fast Reactor Gaps Analysis of Computer Codes and Models for Accident Analysis and Reactor Safety). Volume II of the Sodium Research Plan consolidates the five gap analysis reports produced by each expert panel, wherein the importance of the identified phenomena and the need for further experimental research and code development were addressed. The findings from these five reports formed the basis for the analysis in Sodium Fast Reactor Research Plan Volume I.

An introduction to LIME 1.0 and its use in coupling codes for multiphysics simulations

Schmidt, Rodney C.; Belcourt, Kenneth N.; Hooper, Russell H.; Pawlowski, Roger P.

LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well-defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use, including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.
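
To make the idea of "interface requirements" concrete, here is a hypothetical sketch of the kind of C++ wrapper a legacy physics code might expose to a coupling framework. The class name, method names, and the toy physics are illustrative assumptions, not LIME's actual API.

```cpp
// Hypothetical sketch of a wrapper interface a legacy physics code might
// implement so a coupling framework can drive it (names are assumptions).
#include <cstdio>
#include <vector>

class PhysicsModule {
public:
    virtual ~PhysicsModule() = default;
    // Solve this code's own equations given coupling data from other codes.
    virtual void solve(const std::vector<double>& coupling_data) = 0;
    // Optionally expose a residual f(x) so a Newton-type coupled solve can be
    // used instead of simple successive substitution.
    virtual void residual(const std::vector<double>& x,
                          std::vector<double>& f) const = 0;
    // Export the data other codes need (temperatures, fluxes, ...).
    virtual std::vector<double> exportCouplingData() const = 0;
};

// Trivial concrete module: "solves" u = 0.5 * (supplied boundary value).
class ToyConduction : public PhysicsModule {
public:
    void solve(const std::vector<double>& coupling_data) override {
        u_ = 0.5 * coupling_data.at(0);
    }
    void residual(const std::vector<double>& x, std::vector<double>& f) const override {
        f.assign(1, x.at(0) - u_);             // f(x) = x - u
    }
    std::vector<double> exportCouplingData() const override { return {u_}; }
private:
    double u_ = 0.0;
};

int main() {
    ToyConduction code;
    code.solve({300.0});                        // hypothetical boundary temperature
    std::printf("exported coupling value: %f\n", code.exportCouplingData()[0]);
    return 0;
}
```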

Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety

Schmidt, Rodney C.

This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes, together with references (when available), is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBAs), and beyond design basis accidents (BDBAs). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.

A theory manual for multi-physics code coupling in LIME

Bartlett, Roscoe B.; Belcourt, Kenneth N.; Hooper, Russell H.; Schmidt, Rodney C.

The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are already available to solve different parts of a multi-physics problem and need to be coupled to one another. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
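
A minimal sketch of the successive-substitution (Picard) style of coupling for a steady-state problem is shown below. The two "codes" are stand-in scalar functions chosen only so the fixed-point iteration converges; this is not LIME's implementation.

```cpp
// Minimal sketch of fixed-point (Picard) coupling of two "codes", each of
// which solves its own equation given the other's latest result. Purely
// illustrative of successive substitution; the physics is made up.
#include <cmath>
#include <cstdio>

// "Code A": given y, returns its solution x(y). (Hypothetical physics.)
double solveCodeA(double y) { return 0.5 * std::cos(y); }

// "Code B": given x, returns its solution y(x). (Hypothetical physics.)
double solveCodeB(double x) { return 0.5 * std::sin(x) + 1.0; }

int main() {
    double x = 0.0, y = 0.0;
    const double tol = 1.0e-10;
    for (int it = 0; it < 100; ++it) {
        const double x_new = solveCodeA(y);
        const double y_new = solveCodeB(x_new);
        const double change = std::fabs(x_new - x) + std::fabs(y_new - y);
        x = x_new;
        y = y_new;
        std::printf("iter %2d  x = %.10f  y = %.10f  change = %.2e\n",
                    it, x, y, change);
        if (change < tol) break;   // coupled system converged
    }
    return 0;
}
```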

Foundational development of an advanced nuclear reactor integrated safety code

Schmidt, Rodney C.; Hooper, Russell H.; Humphries, Larry; Lorber, Alfred L.; Spotz, William S.

This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.

Automated mask creation from a 3D model using Faethm

Schmidt, Rodney C.; Schiek, Richard S.

We have developed and implemented a method that, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micro-machining. The masks produced by this design tool can be generic, process-independent masks or, if given process constraints, specific to a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model.

SummitView 1.0: a code to automatically generate 3D solid models of surface micro-machining based MEMS designs

Yarberry, Victor R.; Schmidt, Rodney C.

This report describes the SummitView 1.0 computer code developed at Sandia National Laboratories. SummitView is designed to generate a 3D solid model, amenable to visualization and meshing, that represents the end state of a microsystem fabrication process such as the SUMMiT (Sandia Ultra-Planar Multilevel MEMS Technology) V process. Functionally, SummitView performs essentially the same computational task as an earlier code called the 3D Geometry modeler [1]. However, because SummitView is based on 2D instead of 3D data structures and operations, it has significant speed and robustness advantages. As input it requires a definition of both the process itself and the collection of individual 2D masks created by the designer and associated with each of the process steps. The definition of the process is contained in a special process definition file [2] and the 2D masks are contained in MEM format files [3]. The code is written in C++ and consists of a set of classes and routines. The classes represent the geometric data and the SUMMiT V process steps. Classes are provided for the following process steps: Planar Deposition, Planar Etch, Conformal Deposition, Dry Etch, Wet Etch and Release Etch. SummitView is built upon the 2D Boolean library GBL-2D [4], and thus contains all of that library's functionality.
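
For illustration only, the sketch below mimics the kind of process-step class hierarchy the abstract describes, but applied to a simple 1D surface-height profile rather than SummitView's actual 2D geometric data structures; all names and values are hypothetical.

```cpp
// Hypothetical process-step class hierarchy acting on a 1D height profile.
// Not SummitView's implementation; for illustration of the design idea only.
#include <cstddef>
#include <cstdio>
#include <vector>

class ProcessStep {
public:
    virtual ~ProcessStep() = default;
    // Apply this step to a surface profile; mask[i] == true means the mask
    // is open (material exposed) at column i.
    virtual void apply(std::vector<double>& height,
                       const std::vector<bool>& mask) const = 0;
};

class ConformalDeposition : public ProcessStep {
public:
    explicit ConformalDeposition(double t) : thickness_(t) {}
    void apply(std::vector<double>& height, const std::vector<bool>&) const override {
        for (double& h : height) h += thickness_;   // deposit everywhere
    }
private:
    double thickness_;
};

class DryEtch : public ProcessStep {
public:
    explicit DryEtch(double d) : depth_(d) {}
    void apply(std::vector<double>& height, const std::vector<bool>& mask) const override {
        for (std::size_t i = 0; i < height.size(); ++i)
            if (mask[i]) height[i] -= depth_;       // etch only where the mask is open
    }
private:
    double depth_;
};

int main() {
    std::vector<double> height(8, 0.0);
    std::vector<bool> mask = {false, false, true, true, true, false, false, false};
    ConformalDeposition(2.0).apply(height, mask);
    DryEtch(1.5).apply(height, mask);
    for (double h : height) std::printf("%.1f ", h);
    std::printf("\n");   // expected: 2.0 2.0 0.5 0.5 0.5 2.0 2.0 2.0
    return 0;
}
```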

GBL-2D Version 1.0: a 2D geometry boolean library

Yarberry, Victor R.; Schmidt, Rodney C.

This report describes version 1.0 of GBL-2D, a geometric Boolean library for 2D objects. The library is written in C++ and consists of a set of classes and routines. The classes primarily represent geometric data and relationships. Classes are provided for 2D points, lines, arcs, edge uses, loops, surfaces, and mask sets. The routines contain algorithms for geometric Boolean operations and utility functions. Routines are provided that implement the Boolean operations Union (OR), XOR, Intersection, and Difference. A variety of additional analytical geometry routines and routines for importing and exporting the data in various file formats are also provided. The GBL-2D library was originally developed as a geometric modeling engine for use with a separate software tool, called SummitView [1], that manipulates the 2D mask sets created by designers of Micro-Electro-Mechanical Systems (MEMS). However, many other practical applications for this type of software can be envisioned because the need to perform 2D Boolean operations can arise in many contexts.
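
As a conceptual illustration of the four Boolean operations listed above (not GBL-2D's loop/edge-based representation), the sketch below composes 2D regions defined as membership predicates; the region shapes and helper names are made up.

```cpp
// Illustrative sketch of 2D Boolean composition using implicit regions
// (membership predicates) rather than an explicit geometric representation.
#include <cstdio>
#include <functional>

using Region = std::function<bool(double, double)>;  // true if (x, y) is inside

Region circle(double cx, double cy, double r) {
    return [=](double x, double y) {
        return (x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r;
    };
}
Region rectangle(double x0, double y0, double x1, double y1) {
    return [=](double x, double y) { return x >= x0 && x <= x1 && y >= y0 && y <= y1; };
}

// The four Boolean operations named in the abstract, expressed as predicates.
Region Union(Region a, Region b)        { return [=](double x, double y) { return a(x, y) || b(x, y); }; }
Region Intersection(Region a, Region b) { return [=](double x, double y) { return a(x, y) && b(x, y); }; }
Region Difference(Region a, Region b)   { return [=](double x, double y) { return a(x, y) && !b(x, y); }; }
Region Xor(Region a, Region b)          { return [=](double x, double y) { return a(x, y) != b(x, y); }; }

int main() {
    // A rectangular "mask" with a circular hole removed from it.
    Region mask = Difference(rectangle(0, 0, 2, 1), circle(1, 0.5, 0.4));
    std::printf("(0.1, 0.1) in mask: %d\n", mask(0.1, 0.1));   // 1: in rectangle, outside hole
    std::printf("(1.0, 0.5) in mask: %d\n", mask(1.0, 0.5));   // 0: inside the hole
    return 0;
}
```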

ChISELS 1.0: theory and user manual: a theoretical modeler of deposition and etch processes in microsystems fabrication

Musson, Lawrence M.; Schmidt, Rodney C.; Ho, Pauline H.; Plimpton, Steven J.

Chemically Induced Surface Evolution with Level-Sets (ChISELS) is a parallel code for modeling 2D and 3D material depositions and etches at feature scales on patterned wafers at low pressures. Designed for efficient use on a variety of computer architectures ranging from single-processor workstations to advanced massively parallel computers running MPI, ChISELS is a platform on which to build and improve upon previous feature-scale modeling tools while taking advantage of the most recent advances in load balancing and scalable solution algorithms. Evolving interfaces are represented using the level-set method, and the evolution equations are time-integrated using a semi-Lagrangian approach [1]. The computational meshes used are quad-trees (2D) and oct-trees (3D), constructed such that grid refinement is localized to regions near the surface interfaces. As the interface evolves, the mesh is dynamically reconstructed as needed so that the grid remains fine only around the interface. For parallel computation, a domain decomposition scheme with dynamic load balancing is used to distribute the computational work across processors. A ballistic transport model is employed to solve for the fluxes incident on each of the surface elements. Surface chemistry is computed either by coupling to the CHEMKIN software [2] or through user-defined subroutines. This report describes the theoretical underpinnings, methods, and practical use instructions for the ChISELS 1.0 computer code.
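
The level-set idea at the core of this approach can be illustrated with a bare-bones sketch. The code below advances a circular interface at a uniform speed F using phi_t + F*|grad phi| = 0 with a first-order upwind (Godunov) gradient on a uniform grid; the quad-tree meshing, semi-Lagrangian integration, and chemistry coupling described above are not represented.

```cpp
// Bare-bones level-set sketch: the zero contour of a signed-distance function
// phi (a circle) grows outward at uniform speed F. Illustrative only.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int N = 101;                                           // grid points per direction
    const double h = 1.0 / (N - 1), F = 0.1, dt = 0.5 * h / F;   // CFL-limited time step
    std::vector<double> phi(N * N);
    auto at = [&](int i, int j) -> double& { return phi[i * N + j]; };

    // Initialize phi as the signed distance to a circle of radius 0.25.
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            const double x = i * h - 0.5, y = j * h - 0.5;
            at(i, j) = std::sqrt(x * x + y * y) - 0.25;
        }

    // Advance the interface a few steps (deposition-like outward growth).
    for (int step = 0; step < 20; ++step) {
        std::vector<double> next = phi;
        for (int i = 1; i < N - 1; ++i)
            for (int j = 1; j < N - 1; ++j) {
                // Godunov upwind gradient magnitude for F > 0.
                const double dxm = (at(i, j) - at(i - 1, j)) / h;
                const double dxp = (at(i + 1, j) - at(i, j)) / h;
                const double dym = (at(i, j) - at(i, j - 1)) / h;
                const double dyp = (at(i, j + 1) - at(i, j)) / h;
                const double gx = std::max(std::max(dxm, 0.0), std::max(-dxp, 0.0));
                const double gy = std::max(std::max(dym, 0.0), std::max(-dyp, 0.0));
                next[i * N + j] = at(i, j) - dt * F * std::sqrt(gx * gx + gy * gy);
            }
        phi.swap(next);
    }
    // More negative than the initial -0.25 because the circle has expanded.
    std::printf("phi at domain center after growth: %f\n", at(N / 2, N / 2));
    return 0;
}
```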

Automated and integrated mask generation from a CAD constructed 3D model

2005 NSTI Nanotechnology Conference and Trade Show - NSTI Nanotech 2005 Technical Proceedings

Schiek, Richard L.; Schmidt, Rodney C.

We have developed and implemented a method that, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micromachining. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model. The 3D model is first separated into bodies that are non-intersecting, made from different materials, or linked only through a ground plane. Next, for each body, unique horizontal cross sections are located and arranged into a tree based on their topological relationship. A branch-wise search of the tree uncovers locations where deposition boundaries must lie and identifies candidate masks, creating a generic mask set for the 3D model. Finally, specific process requirements are considered that may constrain the generic mask set.

LDRD final report: on the development of hybrid level-set/particle methods for modeling surface evolution during feature-scale etching and deposition processes

Schmidt, Rodney C.

Two methods for creating a hybrid level-set (LS)/particle method for modeling surface evolution during feature-scale etching and deposition processes are developed and tested. The first method supplements the LS method by introducing Lagrangian marker points in regions of high curvature. Once both the particle set and the LS function are advanced in time, minimization of certain objective functions adjusts the LS function so that its zero contour is in closer alignment with the particle locations. It was found that the objective-minimization problem was unexpectedly difficult to solve, and even when a solution could be found, obtaining it proved more costly than simply expanding the basis set of the LS function. The second method explored is a novel explicit marker-particle method that we have named the grid point particle (GPP) approach. Although not an LS method, the GPP approach has strong procedural similarities to certain aspects of the LS approach. A key aspect of the method is a surface rediscretization procedure--applied at each time step and based on a global background mesh--that maintains a representation of the surface while naturally adding and subtracting surface discretization points as the surface evolves in time. This method was coded in 2-D and tested on a variety of surface evolution problems by using it in the ChISELS computer code. Results shown for 2-D problems illustrate the effectiveness of the method and highlight some notable advantages in accuracy over the LS method. Generalizing the method to 3D is discussed but was not implemented.

ODTLES: a model for 3D turbulent flow based on one-dimensional turbulence modeling concepts

Schmidt, Rodney C.; Kerstein, Alan R.

This report describes an approach for extending the one-dimensional turbulence (ODT) model of Kerstein [6] to treat turbulent flow in three-dimensional (3D) domains. This model, here called ODTLES, can also be viewed as a new LES model. In ODTLES, 3D aspects of the flow are captured by embedding three, mutually orthogonal, one-dimensional ODT domain arrays within a coarser 3D mesh. The ODTLES model is obtained by developing a consistent approach for dynamically coupling the different ODT line sets to each other and to the large scale processes that are resolved on the 3D mesh. The model is implemented computationally and its performance is tested and evaluated by performing simulations of decaying isotropic turbulence, a standard turbulent flow benchmarking problem.

Automated surface micro-machining mask creation from a 3D model

Proposed for publication in the Journal of Analog Integrated Circuits and Signal Processing.

Schiek, Richard S.; Schmidt, Rodney C.

We have developed and implemented a method that, given a three-dimensional object, can infer from its topology the two-dimensional masks needed to produce that object with surface micromachining. The masks produced by this design tool can be generic, process-independent masks or, if given process constraints, specific to a target process. This design tool calculates the two-dimensional mask set required to produce a given three-dimensional model by investigating the vertical topology of the model. The 3D model is first separated into bodies that are non-intersecting, made from different materials, or linked only through a ground plane. Next, for each body, unique vertical cross sections are located and arranged into a tree based on their topological relationship. A branch-wise search of the tree uncovers locations where deposition boundaries must lie and identifies candidate masks, creating a generic mask set for the 3D model. Finally, specific process requirements are considered that may constrain the generic mask set. Constraints can include the thickness or number of deposition layers, the specific ordering of masks required by a process, and the type of material used in a given layer. Candidate masks are reconciled with the process constraints through a constrained optimization.

Feature length-scale modeling of LPCVD & PECVD MEMS fabrication processes

Proposed for publication in the Journal of Microsystems Technologies.

Plimpton, Steven J.; Schmidt, Rodney C.

The surface micromachining processes used to manufacture MEMS devices and integrated circuits occur at such small length scales, and are sufficiently complex, that a theoretical analysis of them is particularly inviting. Under development at Sandia National Laboratories (SNL) is Chemically Induced Surface Evolution with Level Sets (ChISELS), a level-set-based feature-scale modeler of such processes. The theoretical models used, a description of the software, and some example results are presented here. The focus to date has been on low-pressure and plasma-enhanced chemical vapor deposition (LPCVD and PECVD) processes. Both are employed in SNL's SUMMiT V technology. Examples of step coverage of SiO2 into a trench by both the LPCVD and PECVD processes are presented.

On the Development of the Large Eddy Simulation Approach for Modeling Turbulent Flow: LDRD Final Report

Schmidt, Rodney C.; DesJardin, Paul E.; Voth, Thomas E.; Christon, Mark A.; Kerstein, Alan R.; Wunsch, Scott E.

This report describes research and development of the large eddy simulation (LES) turbulence modeling approach conducted as part of Sandia's laboratory directed research and development (LDRD) program. The emphasis of the work described here has been toward developing the capability to perform accurate and computationally affordable LES calculations of engineering problems using unstructured-grid codes, in wall-bounded geometries, and for problems with coupled physics. Specific contributions documented here include (1) the implementation and testing of LES models in Sandia codes, including tests of a new conserved-scalar laminar flamelet SGS combustion model that does not assume statistical independence between the mixture fraction and the scalar dissipation rate, (2) the development and testing of statistical analysis and visualization utility software developed for Exodus II unstructured-grid LES, and (3) the development and testing of a novel LES near-wall subgrid model based on the one-dimensional turbulence (ODT) model.
