Publications

LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales

Computer Physics Communications

Thompson, Aidan P.; Aktulga, H.M.; Berger, Richard; Bolintineanu, Dan S.; Brown, W.M.; Crozier, Paul C.; in 't Veld, Pieter J.; Kohlmeyer, Axel; Moore, Stan G.; Nguyen, Trung D.; Shan, Ray; Stevens, Mark J.; Tranchida, Julien; Trott, Christian R.; Plimpton, Steven J.

Since the classical molecular dynamics simulator LAMMPS was released as an open source code in 2004, it has become a widely used tool for particle-based modeling of materials at length scales ranging from atomic to mesoscale to continuum. Reasons for its popularity are that it provides a wide variety of particle interaction models for different materials, that it runs on any platform from a single CPU core to the largest supercomputers with accelerators, and that it gives users control over simulation details, either via the input script or by adding code for new interatomic potentials, constraints, diagnostics, or other features needed for their models. As a result, hundreds of people have contributed new capabilities to LAMMPS and it has grown from fifty thousand lines of code in 2004 to a million lines today. In this paper several of the fundamental algorithms used in LAMMPS are described along with the design strategies which have made it flexible for both users and developers. We also highlight some capabilities recently added to the code which were enabled by this flexibility, including dynamic load balancing, on-the-fly visualization, magnetic spin dynamics models, and quantum-accuracy machine learning interatomic potentials.

Program Summary:
Program Title: Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)
CPC Library link to program files: https://doi.org/10.17632/cxbxs9btsv.1
Developer's repository link: https://github.com/lammps/lammps
Licensing provisions: GPLv2
Programming language: C++, Python, C, Fortran
Supplementary material: https://www.lammps.org
Nature of problem: Many science applications in physics, chemistry, materials science, and related fields require parallel, scalable, and efficient generation of long, stable classical particle dynamics trajectories. Within this common problem definition, there lies a great diversity of use cases, distinguished by different particle interaction models, external constraints, as well as timescales and lengthscales ranging from atomic to mesoscale to macroscopic.
Solution method: The LAMMPS code uses parallel spatial decomposition, distributed neighbor lists, and parallel FFTs for long-range Coulombic interactions [1]. The time integration algorithm is based on the Størmer-Verlet symplectic integrator [2], which provides better stability than higher-order non-symplectic methods. In addition, LAMMPS supports a wide range of interatomic potentials, constraints, diagnostics, software interfaces, and pre- and post-processing features.
Additional comments including restrictions and unusual features: This paper serves as the definitive reference for the LAMMPS code.
References:
[1] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J. Comp. Phys. 117 (1995) 1–19.
[2] L. Verlet, Computer experiments on classical fluids: I. Thermodynamical properties of Lennard–Jones molecules, Phys. Rev. 159 (1967) 98–103.
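The solution method above rests on the Størmer-Verlet integrator [2]. As a point of reference, the sketch below shows the velocity-Verlet form of that update; it is a minimal illustration of the symplectic update itself, with a generic force(x) callback as an assumed placeholder, and is not code taken from LAMMPS.

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, nsteps):
    """Minimal velocity-Verlet loop: the symplectic, time-reversible update
    underlying Stormer-Verlet integration. Updates x and v in place.

    x, v  : (N, 3) arrays of positions and velocities
    force : callable returning an (N, 3) array of forces at positions x
    mass  : (N,) array of particle masses
    dt    : timestep
    """
    f = force(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / mass[:, None]   # half-step velocity update
        x += dt * v                         # full-step position update
        f = force(x)                        # forces at the new positions
        v += 0.5 * dt * f / mass[:, None]   # second half-step velocity update
    return x, v
```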

Memo regarding the Final Review of FY21 ASC L2 Milestone 7840: Neural Mini-Apps for Future Heterogeneous HPC Systems

Oldfield, Ron A.; Plimpton, Steven J.; Laros, James H.; Poliakoff, David Z.; Sornborger, Andrew S.

The final review for the FY21 Advanced Simulation and Computing (ASC) Computational Systems and Software Environments (CSSE) L2 Milestone #7840 was conducted on August 25, 2021 at Sandia National Laboratories in Albuquerque, New Mexico. The review panel unanimously agreed that the milestone was successfully completed, exceeding expectations on several of the key deliverables.

Rendezvous algorithms for large-scale modeling and simulation

Journal of Parallel and Distributed Computing

Plimpton, Steven J.; Knight, Christopher

Rendezvous algorithms encode a communication pattern that is useful when processors sending data do not know who the receiving processors should be, or vice versa. The idea is to define an intermediate decomposition where datums from different sending processors can “rendezvous” to perform a computation, in a manner such that both the senders and the eventual receivers of the results can identify the appropriate rendezvous processor. Originally designed for interpolating between overlaid grids with independent parallel decompositions (Plimpton et al., 2004), we have recently found rendezvous algorithms useful for a variety of operations in particle- or grid-based simulation codes when running large problems on large numbers of processors. In particular, we show they can perform well when a load-balanced intermediate decomposition is randomized and not spatial, requiring all-to-all communication to move data between processors. In this case rendezvous algorithms leverage the large bisection communication bandwidths which parallel machines provide. We describe how rendezvous algorithms work in a scientific computing context and give specific examples for molecular dynamics and Direct Simulation Monte Carlo codes which result in dramatic performance improvements versus simpler algorithms which do not scale as well. We explain how a generic rendezvous algorithm can be implemented, and also point out similarities with the MapReduce paradigm popularized by Google and Hadoop.
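To make the generic pattern concrete, the mpi4py sketch below illustrates one way the scatter, compute, and return phases might be written. The modulo-based intermediate decomposition over integer keys, the rendezvous() and compute() names, and the use of two object all-to-alls are illustrative assumptions for this sketch, not the implementation described in the paper.

```python
from mpi4py import MPI

def rendezvous(comm, items, compute):
    """Generic rendezvous pattern: items are (key, value) pairs whose eventual
    owners are unknown to the senders; compute(key, values) runs on whichever
    processor the key maps to in the intermediate decomposition."""
    size, rank = comm.Get_size(), comm.Get_rank()

    # Scatter phase: bucket each datum by its rendezvous processor.
    outgoing = [[] for _ in range(size)]
    for key, value in items:
        owner = key % size   # assumes integer keys; any deterministic map works
        outgoing[owner].append((rank, key, value))

    # First all-to-all: move datums to their rendezvous processors.
    incoming = comm.alltoall(outgoing)

    # Compute phase: gather contributions per key and apply the computation.
    grouped = {}
    for bucket in incoming:
        for src, key, value in bucket:
            grouped.setdefault(key, []).append((src, value))

    replies = [[] for _ in range(size)]
    for key, contributions in grouped.items():
        result = compute(key, [v for _, v in contributions])
        for src, _ in contributions:
            replies[src].append((key, result))

    # Second all-to-all: return results to the processors that contributed.
    return [pair for bucket in comm.alltoall(replies) for pair in bucket]
```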

Granular packings with sliding, rolling, and twisting friction

Physical Review E

Santos, Andrew P.; Bolintineanu, Dan S.; Grest, Gary S.; Lechman, Jeremy B.; Plimpton, Steven J.; Srivastava, Ishan; Silbert, Leonardo E.

Intuition tells us that a rolling or spinning sphere will eventually stop due to the presence of friction and other dissipative interactions. The resistance to rolling and spinning or twisting torque that stops a sphere also changes the microstructure of a granular packing of frictional spheres by increasing the number of constraints on the degrees of freedom of motion. We perform discrete element modeling simulations to construct sphere packings implementing a range of frictional constraints under a pressure-controlled protocol. Mechanically stable packings are achievable at volume fractions and average coordination numbers as low as 0.53 and 2.5, respectively, when the particles experience high resistance to sliding, rolling, and twisting. Only when the particle model includes rolling and twisting friction were experimental volume fractions reproduced.

Parallel algorithms for hyperdynamics and local hyperdynamics

Journal of Chemical Physics

Plimpton, Steven J.; Perez, Danny; Voter, Arthur F.

Hyperdynamics (HD) is a method for accelerating the timescale of standard molecular dynamics (MD). It can be used for simulations of systems with an energy potential landscape that is a collection of basins, separated by barriers, where transitions between basins are infrequent. HD enables the system to escape from a basin more quickly while enabling a statistically accurate renormalization of the simulation time, thus effectively boosting the timescale of the simulation. In the work of Kim et al. [J. Chem. Phys. 139, 144110 (2013)], a local version of HD was formulated, which exploits the intrinsic locality characteristic typical of most systems to mitigate the poor scaling properties of standard HD as the system size is increased. Here, we discuss how both HD and local HD can be formulated to run efficiently in parallel. We have implemented these ideas in the LAMMPS MD code, which means HD can be used with any interatomic potential LAMMPS supports. Together, these parallel methods allow simulations of any size to achieve the time acceleration offered by HD (which can be orders of magnitude), at a cost of 2-4× that of standard MD. As examples, we performed two simulations of a million-atom system to model the diffusion and clustering of Pt adatoms on a large patch of the Pt(100) surface for 80 μs and 160 μs.
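The time renormalization at the heart of HD is simple bookkeeping: every step taken on the biased potential advances the accelerated clock by the nominal timestep multiplied by the instantaneous boost factor exp(ΔV/kT), where ΔV is the bias potential at the current configuration. The Python sketch below shows only that accumulation; md_step and bias_energy are assumed placeholders and do not represent the parallel implementation described in the paper.

```python
import math

def hyperdynamics_clock(md_step, bias_energy, kT, dt, nsteps):
    """Accumulate the boosted (physical) time during a hyperdynamics run.

    md_step     : advances the system one step on the biased potential
    bias_energy : returns the bias potential DeltaV at the current configuration
    kT          : Boltzmann constant times temperature
    """
    t_boosted = 0.0
    for _ in range(nsteps):
        md_step()
        # Each biased step corresponds to dt * exp(DeltaV / kT) of real time.
        t_boosted += dt * math.exp(bias_energy() / kT)
    return t_boosted   # overall boost factor is t_boosted / (nsteps * dt)
```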

Aspherical particle models for molecular dynamics simulation

Computer Physics Communications

Nguyen, Trung D.; Plimpton, Steven J.

In traditional molecular dynamics (MD) simulations, atoms and coarse-grained particles are modeled as point masses interacting via isotropic potentials. For studies where particle shape plays a vital role, more complex models are required. In this paper we describe a spectrum of approaches for modeling aspherical particles, all of which are now available (some recently) as options within the LAMMPS MD package. Broadly these include two classes of models. In the first, individual particles are aspherical, either via a pairwise anisotropic potential which implicitly assigns a simple geometric shape to each particle, or in a more general way where particles store internal state which can explicitly define a complex geometric shape. In the second class of models, individual particles are simple points or spheres, but rigid body constraints are used to create composite aspherical particles in a variety of complex shapes. We discuss parallel algorithms and associated data structures for both kinds of models, which enable dynamics simulations of aspherical particle systems across a wide range of length and time scales. We also highlight parallel performance and scalability and give a few illustrative examples of aspherical models in different contexts.
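As a concrete illustration of the second class of models, a composite rigid body built from point particles is characterized by its total mass, center of mass, and inertia tensor, which the constraint integrator then uses to evolve the body as a single unit. The numpy sketch below shows that bookkeeping; it is a generic illustration, not LAMMPS's internal data structures.

```python
import numpy as np

def rigid_body_properties(masses, positions):
    """Total mass, center of mass, and inertia tensor of a composite rigid
    body assembled from point particles (positions given in the lab frame)."""
    m = np.asarray(masses, dtype=float)        # (N,)
    x = np.asarray(positions, dtype=float)     # (N, 3)
    total_mass = m.sum()
    com = (m[:, None] * x).sum(axis=0) / total_mass
    r = x - com                                # positions relative to the COM
    # Inertia tensor: I = sum_i m_i (|r_i|^2 * identity - outer(r_i, r_i))
    inertia = np.einsum('i,ij->', m, r * r) * np.eye(3) \
              - np.einsum('i,ij,ik->jk', m, r, r)
    return total_mass, com, inertia
```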

DSMC simulations of turbulent flows at moderate Reynolds numbers

AIP Conference Proceedings

Gallis, Michail A.; Torczynski, J.R.; Bitter, Neal B.; Koehler, Timothy P.; Moore, Stan G.; Plimpton, Steven J.; Papadakis, G.

The Direct Simulation Monte Carlo (DSMC) method has been used for more than 50 years to simulate rarefied gases. The advent of modern supercomputers has brought higher-density near-continuum flows within range. This in turn has revived the debate as to whether the Boltzmann equation, which assumes molecular chaos, can be used to simulate continuum flows when they become turbulent. In an effort to settle this debate, two canonical turbulent flows are examined, and the results are compared to available continuum theoretical and numerical results for the Navier-Stokes equations.

Direct simulation Monte Carlo on petaflop supercomputers and beyond

Physics of Fluids

Plimpton, Steven J.; Moore, Stan G.; Borner, A.; Stagg, Alan K.; Koehler, T.P.; Torczynski, J.R.; Gallis, Michail A.

The gold-standard definition of the Direct Simulation Monte Carlo (DSMC) method is given in the 1994 book by Bird [Molecular Gas Dynamics and the Direct Simulation of Gas Flows (Clarendon Press, Oxford, UK, 1994)], which refined his pioneering earlier papers in which he first formulated the method. In the intervening 25 years, DSMC has become the method of choice for modeling rarefied gas dynamics in a variety of scenarios. The chief barrier to applying DSMC to more dense or even continuum flows is its computational expense compared to continuum computational fluid dynamics methods. The dramatic (nearly billion-fold) increase in speed of the largest supercomputers over the last 30 years has thus been a key enabling factor in using DSMC to model a richer variety of flows, due to the method's inherent parallelism. We have developed the open-source SPARTA DSMC code with the goal of running DSMC efficiently on the largest machines, both current and future. It is largely an implementation of Bird's 1994 formulation. Here, we describe algorithms used in SPARTA to enable DSMC to operate in parallel at the scale of many billions of particles or grid cells, or with billions of surface elements. We give a few examples of the kinds of fundamental physics questions and engineering applications that DSMC can address at these scales.
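For readers unfamiliar with the method's core kernel, the sketch below outlines the per-cell collision step of Bird's no-time-counter (NTC) scheme from the 1994 formulation cited above. The function signature, the constant hard-sphere cross section, and the collide callback are assumptions made to keep the example self-contained; this is not SPARTA's optimized, parallel implementation.

```python
import random

def ntc_collision_step(velocities, sigma, fnum, sigma_cr_max, dt, cell_volume, collide):
    """Schematic no-time-counter (NTC) collision step for one DSMC cell.

    velocities   : list of 3-component particle velocities in the cell
    sigma        : hard-sphere collision cross section
    fnum         : number of real molecules represented by each simulated particle
    sigma_cr_max : running estimate of max(sigma * relative speed) in this cell
    collide      : callable (v1, v2) -> (v1', v2') performing one binary collision
    """
    n = len(velocities)
    if n < 2:
        return sigma_cr_max
    # Number of candidate pairs to test this step (Bird, 1994).
    n_pairs = int(0.5 * n * (n - 1) * fnum * sigma_cr_max * dt / cell_volume)
    for _ in range(n_pairs):
        i, j = random.sample(range(n), 2)
        cr = sum((a - b) ** 2 for a, b in zip(velocities[i], velocities[j])) ** 0.5
        sigma_cr = sigma * cr
        sigma_cr_max = max(sigma_cr_max, sigma_cr)
        # Accept the candidate pair with probability sigma*cr / (sigma*cr)_max.
        if random.random() < sigma_cr / sigma_cr_max:
            velocities[i], velocities[j] = collide(velocities[i], velocities[j])
    return sigma_cr_max
```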

Highly scalable discrete-particle simulations with novel coarse-graining: accessing the microscale

Molecular Physics

Mattox, Timothy I.; Larentzos, James P.; Moore, Stan G.; Stone, Christopher P.; Ibanez, Daniel A.; Thompson, Aidan P.; Lísal, Martin; Brennan, John K.; Plimpton, Steven J.

Simulating energetic materials with complex microstructure is a grand challenge, where until recently, an inherent gap in computational capabilities had existed in modelling grain-scale effects at the microscale. We have enabled a critical capability in modelling the multiscale nature of the energy release and propagation mechanisms in advanced energetic materials by implementing, in the widely used LAMMPS molecular dynamics (MD) package, several novel coarse-graining techniques that also treat chemical reactivity. Our innovative algorithmic developments rooted within the dissipative particle dynamics framework, along with performance optimisations and application of acceleration technologies, have enabled extensions in both the length and time scales far beyond those ever realised by atomistic reactive MD simulations. In this paper, we demonstrate these advances by modelling a shockwave propagating through a microstructured material and comparing performance with the state-of-the-art in atomistic reactive MD techniques. As a result of this work, unparalleled explorations in energetic materials research are now possible.
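For context on the dissipative particle dynamics framework these coarse-grained models build on, the sketch below shows the standard Groot-Warren decomposition of the DPD pair force into conservative, dissipative, and random terms. It illustrates generic DPD only, not the reactive coarse-graining techniques developed in this work, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def dpd_pair_force(ri, rj, vi, vj, a, gamma, kT, dt, rc=1.0, rng=None):
    """Standard DPD pair force on particle i due to particle j
    (Groot-Warren form): conservative + dissipative + random terms."""
    if rng is None:
        rng = np.random.default_rng()
    rij = np.asarray(ri, dtype=float) - np.asarray(rj, dtype=float)
    r = np.linalg.norm(rij)
    if r >= rc or r == 0.0:
        return np.zeros(3)
    e = rij / r                          # unit vector from j toward i
    w = 1.0 - r / rc                     # linear weight function w(r)
    sigma = np.sqrt(2.0 * gamma * kT)    # fluctuation-dissipation relation
    vij = np.asarray(vi, dtype=float) - np.asarray(vj, dtype=float)
    f_cons = a * w * e                                  # soft repulsion
    f_diss = -gamma * w**2 * np.dot(e, vij) * e         # pairwise friction
    f_rand = sigma * w * rng.standard_normal() * e / np.sqrt(dt)   # thermal noise
    return f_cons + f_diss + f_rand
```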

Gas-kinetic simulation of sustained turbulence in minimal Couette flow

Physical Review Fluids

Gallis, Michail A.; Torczynski, J.R.; Bitter, Neal B.; Koehler, Timothy P.; Plimpton, Steven J.; Papadakis, G.

We provide a demonstration that gas-kinetic methods incorporating molecular chaos can simulate the sustained turbulence that occurs in wall-bounded turbulent shear flows. The direct simulation Monte Carlo method, a gas-kinetic molecular method that enforces molecular chaos for gas-molecule collisions, is used to simulate the minimal Couette flow at Re=500. The resulting law of the wall, the average wall shear stress, the average kinetic energy, and the continually regenerating coherent structures all agree closely with corresponding results from direct numerical simulation of the Navier-Stokes equations. These results indicate that molecular chaos for collisions in gas-kinetic methods does not prevent development of molecular-scale long-range correlations required to form hydrodynamic-scale turbulent coherent structures.

Open Source Software for HPC

Lacy, Susan L.; Plimpton, Steven J.

The computational power of HPC is beyond our comprehension when we hear that 5 quadrillion computations can happen in a matter of seconds, or that machine learning is changing the way everything works. But none of that happens in a vacuum, and the teams behind the scenes—the developers of the hardware, the operating systems, the data transfer protocols, and the applications themselves—are the unsung heroes of a world where faster is better and you'd better hope there's no bug in the software or the hardware to slow you down. HPC is most successful when all these aspects work together seamlessly. The stories that follow are a tribute to the hardworking teams behind the scenes.
