LAMMPS
LAMMPS is a highly modular and extensible molecular dynamics software package designed to run efficiently on a wide range of high-performance computing architectures. It supports numerous force fields and simulation models, making it suitable for materials science, soft matter, biomolecular systems, reactive dynamics, and particle-based simulations. Similar to GROMACS, LAMMPS achieves its best performance when compiled and configured for the specific hardware and accelerator technologies available.
Palmetto 2 Quick Start Example
The following example is provided by Palmetto 2 and is intended for quick testing and demonstration purposes.
Users can copy the entire directory /project/rcde/public_examples/lammps/ to
their home directory or scratch space to run the simulation.
cp -r /project/rcde/public_examples/lammps/ ~
cd ~/lammps
sbatch lammps_cpu.slurm
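After submitting, you can monitor the job and, once it finishes, inspect the Slurm output file in the submission directory (its exact name depends on the batch script):
squeue -u $USER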
The following sections describe available LAMMPS modules on Palmetto 2 and provide detailed instructions for custom builds and advanced usage.
Global LAMMPS Module installed on Palmetto 2
Pre-built LAMMPS modules are available on Palmetto 2 for general use.
- Version number: 3Nov2022
module purge
module load ngc/lammps/3Nov2022
- This module must be used on compute nodes. Running it on a login node may result in:
-bash: /bin/apptainer: No such file or directory
- For this module, mpirun is recommended for launching MPI jobs.
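A minimal usage sketch for this module (we assume the module exposes an lmp wrapper for the containerized binary, and in.lj.txt is a placeholder input file):
module purge
module load ngc/lammps/3Nov2022
# Launch across all allocated MPI tasks; replace in.lj.txt with your input
mpirun lmp -in in.lj.txt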
If you are interested in how LAMMPS is built or need custom configurations (e.g., specific acceleration packages or CPU/GPU architectures), please refer to the manual installation instructions below.
Manually installing LAMMPS in your home directory
General Notes
- LAMMPS simulations and checkpoint files require a consistent LAMMPS version throughout a workflow.
- In the examples below, we use LAMMPS 22 Jul 2025.
- For local software management, we recommend installing software under a directory named software_slurm:
mkdir ~/software_slurm
cd ~/software_slurm
CPU version
Reserve a Compute Node.
salloc --nodes=1 --tasks-per-node=12 --mem=12G --time=2:00:00
Download and Extract LAMMPS.
wget https://download.lammps.org/tars/lammps-stable.tar.gz
tar zxvf lammps-stable.tar.gz
cd lammps-22Jul2025
Configure the Build Directory.
LAMMPS uses CMake and supports multiple builds from a single source tree.
mkdir build-openmpi-omp
cd build-openmpi-omp
Prepare CMake Preset Files.
We use basic.cmake as a template, which includes the packages KSPACE, MANYBODY, MOLECULE, and RIGID.
Make a copy for an OpenMPI/OpenMP build:
# Copy a CMake preset from the source tree (relative to the build directory)
cp ../cmake/presets/basic.cmake ../cmake/presets/basic-openmpi-omp.cmake
For a preset enabling all packages, see ../cmake/presets/all_on.cmake.
Edit the new preset to add the OPENMP package (called USER-OMP before the 2021 package renaming; for the 22 Jul 2025 source, OPENMP is the correct name):
sed -i "s/set(ALL_PACKAGES KSPACE MANYBODY MOLECULE RIGID)/set(ALL_PACKAGES KSPACE MANYBODY MOLECULE RIGID OPENMP)/g" ../cmake/presets/basic-openmpi-omp.cmake
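You can confirm the edit took effect:
grep ALL_PACKAGES ../cmake/presets/basic-openmpi-omp.cmake
# Expected: set(ALL_PACKAGES KSPACE MANYBODY MOLECULE RIGID OPENMP)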
Load Required Modules.
module load openmpi/5.0.1
Build LAMMPS (CPU).
cmake -C ../cmake/presets/basic-openmpi-omp.cmake -C ../cmake/presets/gcc.cmake ../cmake
cmake --build . --parallel 12
Once the build finishes, the lmp executable will be available in build-openmpi-omp.
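As a quick sanity check (not a full test), print the help text, which also lists the installed packages:
./lmp -h | head -n 30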
Do not request GPUs when running CPU-only LAMMPS builds. If you need GPU support, follow the GPU build instructions below.
GPU version
Reserve a GPU Node (Example: A100).
salloc --nodes=1 --tasks-per-node=12 --mem=12G --gpus-per-node=a100:1 --time=2:00:00
LAMMPS Accelerator Packages Overview.
LAMMPS provides several accelerator packages:
OPT, INTEL (formerly USER-INTEL), OPENMP (formerly USER-OMP), GPU, and KOKKOS.
Configure KOKKOS + GPU Build.
cd lammps-22Jul2025
mkdir build-kokkos-gpu-omp
cd build-kokkos-gpu-omp
We will use two preset templates:
- basic.cmake contains four basic simulation packages: KSPACE, MANYBODY, MOLECULE, and RIGID.
- kokkos-cuda.cmake contains the architecture configuration for the type of GPU card. The default value is PASCAL60.
Create customized copies:
cp ../cmake/presets/basic.cmake ../cmake/presets/basic-gpu-omp.cmake
cp ../cmake/presets/kokkos-cuda.cmake ../cmake/presets/kokkos-a100.cmake
Enable the GPU and OpenMP packages (as above, OPENMP is the current name of the former USER-OMP package):
sed -i "s/set(ALL_PACKAGES KSPACE MANYBODY MOLECULE RIGID)/set(ALL_PACKAGES KSPACE MANYBODY MOLECULE RIGID GPU OPENMP)/g" ../cmake/presets/basic-gpu-omp.cmake
Set GPU architecture to AMPERE80 for A100:
sed -i "s/PASCAL60/AMPERE80/g" ../cmake/presets/kokkos-a100.cmake
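As with the CPU preset, you can confirm the substitution (the surrounding CMake syntax may differ slightly between LAMMPS versions):
grep AMPERE80 ../cmake/presets/kokkos-a100.cmake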
GPU Architecture Reference (KOKKOS)
| Palmetto GPU | Architecture name for Kokkos |
|---|---|
| P100 | PASCAL60 |
| V100 and V100S | VOLTA70 |
| A100 | AMPERE80 |
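If you are unsure which GPU the reserved node provides, query it before choosing a row from the table above:
nvidia-smi --query-gpu=name --format=csv,noheader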
Load Required Modules.
module load cuda/11.8.0 openmpi/5.0.1
Build LAMMPS (GPU).
cmake -C ../cmake/presets/basic-gpu-omp.cmake -C ../cmake/presets/kokkos-a100.cmake ../cmake
cmake --build . --parallel 6
Using too many parallel build threads may trigger out-of-memory (OOM) errors, for example:
slurmstepd: error: Detected 1 oom_kill event in StepId=8148712.interactive. Some of the step tasks have been OOM Killed.
If this happens, simply reduce the number of parallel build threads.
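For example:
cmake --build . --parallel 4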
Once the build finishes, the lmp executable will be available in build-kokkos-gpu-omp. You can test it with the example in the next section.
Testing the Installation
You can test the installation using a simple Lennard-Jones (LJ) example. A
successful run should complete without MPI errors and produce thermodynamic
output in the log file.
This test is intended only to verify correctness, not for performance benchmarking.
Input files can be obtained from the official LAMMPS examples or from the repository linked below.
CPU example
mkdir -p /scratch/$USER/lammps_test
cd /scratch/$USER/lammps_test
wget https://www.lammps.org/inputs/in.lj.txt
export PATH="$HOME/software_slurm/lammps-22Jul2025/build-openmpi-omp/":$PATH
export OMP_NUM_THREADS=1
srun lmp -sf omp -pk omp 1 -in in.lj.txt > out.cpu
cat out.cpu
Example Batch Script for Slurm job:
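A minimal sketch (the job name, resource requests, and paths are assumptions; adjust them to your build location and allocation):
#!/bin/bash
#SBATCH --job-name=lammps-cpu-test
#SBATCH --nodes=1
#SBATCH --tasks-per-node=12
#SBATCH --mem=12G
#SBATCH --time=00:30:00

module load openmpi/5.0.1

# Assumed location of the locally built CPU binary
export PATH="$HOME/software_slurm/lammps-22Jul2025/build-openmpi-omp/":$PATH
export OMP_NUM_THREADS=1

cd /scratch/$USER/lammps_test
srun lmp -sf omp -pk omp 1 -in in.lj.txt > out.cpu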
GPU example
mkdir -p /scratch/$USER/lammps_test
cd /scratch/$USER/lammps_test
wget https://www.lammps.org/inputs/in.lj.txt
export PATH="$HOME/software_slurm/lammps-22Jul2025/build-kokkos-gpu-omp/":$PATH
export OMP_NUM_THREADS=1
# Use the KOKKOS suffix so the run actually uses the GPU
srun lmp -k on g 1 -sf kk -in in.lj.txt > out.gpu
cat out.gpu
Example Batch Script for Slurm job:
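A minimal sketch (the job name, resource requests, and paths are assumptions; adjust them to your build location and allocation):
#!/bin/bash
#SBATCH --job-name=lammps-gpu-test
#SBATCH --nodes=1
#SBATCH --tasks-per-node=12
#SBATCH --mem=12G
#SBATCH --gpus-per-node=a100:1
#SBATCH --time=00:30:00

module load cuda/11.8.0 openmpi/5.0.1

# Assumed location of the locally built KOKKOS/GPU binary
export PATH="$HOME/software_slurm/lammps-22Jul2025/build-kokkos-gpu-omp/":$PATH
export OMP_NUM_THREADS=1

cd /scratch/$USER/lammps_test
srun lmp -k on g 1 -sf kk -in in.lj.txt > out.gpu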