Updating to develop

jtclemm, 2023-06-15 16:00:48 -06:00
7542 changed files with 966700 additions and 310738 deletions

.gitattributes (vendored): 1 change

@@ -3,6 +3,7 @@
 .github export-ignore
 .lgtm.yml export-ignore
 SECURITY.md export-ignore
+CITATION.cff export-ignore
 * text=auto
 *.jpg -text
 *.pdf -text

.github/CODEOWNERS (vendored): 76 changes

@@ -13,39 +13,45 @@ lib/kim/* @ellio167
 lib/mesont/* @iafoss
 # whole packages
-src/COMPRESS/* @rbberger
-src/GPU/* @ndtrung81
-src/KOKKOS/* @stanmoore1
-src/KIM/* @ellio167
-src/LATTE/* @cnegre
-src/MESSAGE/* @sjplimp
-src/MLIAP/* @athomps
-src/SNAP/* @athomps
-src/SPIN/* @julient31
+src/ADIOS/* @pnorbert
+src/AMOEBA/* @sjplimp
+src/BPM/* @jtclemm
 src/BROWNIAN/* @samueljmcameron
 src/CG-DNA/* @ohenrich
-src/CG-SDK/* @akohlmey
+src/CG-SPICA/* @yskmiyazaki
 src/COLVARS/* @giacomofiorin
+src/COMPRESS/* @rbberger
 src/DIELECTRIC/* @ndtrung81
+src/ELECTRODE/* @ludwig-ahrens
 src/FEP/* @agiliopadua
-src/ML-HDNNP/* @singraber
+src/GPU/* @ndtrung81
+src/GRANULAR/* @jtclemm @dsbolin
 src/INTEL/* @wmbrownintel
+src/KIM/* @ellio167
+src/KOKKOS/* @stanmoore1
+src/LATTE/* @cnegre
 src/MANIFOLD/* @Pakketeretet2
-src/MDI/* @taylor-a-barnes
+src/MDI/* @taylor-a-barnes @sjplimp
 src/MEAM/* @martok
 src/MESONT/* @iafoss
+src/ML-HDNNP/* @singraber
+src/ML-IAP/* @athomps
+src/ML-PACE/* @yury-lysogorskiy
+src/ML-POD/* @exapde @rohskopf
 src/MOFFF/* @hheenen
 src/MOLFILE/* @akohlmey
 src/NETCDF/* @pastewka
-src/ML-PACE/* @yury-lysogorskiy
-src/PLUMED/* @gtribello
-src/PHONON/* @lingtikong
-src/PTM/* @pmla
 src/OPENMP/* @akohlmey
+src/PHONON/* @lingtikong
+src/PLUGIN/* @akohlmey
+src/PLUMED/* @gtribello
+src/PTM/* @pmla
 src/QMMM/* @akohlmey
-src/REAXFF/* @hasanmetin @stanmoore1
 src/REACTION/* @jrgissing
+src/REAXFF/* @hasanmetin @stanmoore1
 src/SCAFACOS/* @rhalver
+src/SNAP/* @athomps
+src/SPIN/* @julient31
 src/TALLY/* @akohlmey
 src/UEF/* @danicholson
 src/VTK/* @rbberger
@@ -57,7 +63,10 @@ src/MANYBODY/pair_vashishta_table.* @andeplane
 src/MANYBODY/pair_atm.* @sergeylishchuk
 src/REPLICA/*_grem.* @dstelter92
 src/EXTRA-COMPUTE/compute_stress_mop*.* @RomainVermorel
+src/EXTRA-COMPUTE/compute_born_matrix.* @Bibobu @athomps
 src/MISC/*_tracker.* @jtclemm
+src/MC/fix_gcmc.* @athomps
+src/MC/fix_sgcmc.* @athomps
 # core LAMMPS classes
 src/lammps.* @sjplimp
@@ -119,41 +128,46 @@ src/dump_movie.* @akohlmey
 src/exceptions.h @rbberger
 src/fix_nh.* @athomps
 src/info.* @akohlmey @rbberger
-src/timer.* @akohlmey
 src/min* @sjplimp @stanmoore1
+src/platform.* @akohlmey
+src/timer.* @akohlmey
 src/utils.* @akohlmey @rbberger
 src/verlet.* @sjplimp @stanmoore1
 src/math_eigen_impl.h @jewettaij
 # tools
-tools/msi2lmp/* @akohlmey
+tools/coding_standard/* @akohlmey @rbberger
 tools/emacs/* @HaoZeke
-tools/singularity/* @akohlmey @rbberger
-tools/coding_standard/* @rbberger
-tools/valgrind/* @akohlmey
-tools/swig/* @akohlmey
+tools/lammps-shell/* @akohlmey
+tools/msi2lmp/* @akohlmey
 tools/offline/* @rbberger
+tools/singularity/* @akohlmey @rbberger
+tools/swig/* @akohlmey
+tools/valgrind/* @akohlmey
+tools/vim/* @hammondkd
 # tests
-unittest/* @akohlmey @rbberger
+unittest/* @akohlmey
 # cmake
-cmake/* @junghans @rbberger
-cmake/Modules/Packages/COLVARS.cmake @junghans @rbberger @giacomofiorin
-cmake/Modules/Packages/KIM.cmake @junghans @rbberger @ellio167
+cmake/* @rbberger
+cmake/Modules/LAMMPSInterfacePlugin.cmake @akohlmey
+cmake/Modules/MPI4WIN.cmake @akohlmey
+cmake/Modules/OpenCLLoader.cmake @akohlmey
+cmake/Modules/Packages/COLVARS.cmake @rbberger @giacomofiorin
+cmake/Modules/Packages/KIM.cmake @rbberger @ellio167
 cmake/presets/*.cmake @akohlmey
 # python
 python/* @rbberger
 # fortran
-fortran/* @akohlmey
+fortran/* @akohlmey @hammondkd
 # docs
-doc/utils/*/* @rbberger
-doc/Makefile @rbberger
-doc/README @rbberger
+doc/* @akohlmey
 examples/plugin/* @akohlmey
+examples/PACKAGES/pace/plugin/* @akohlmey
 # for releases
 src/version.h @sjplimp


@@ -1,6 +1,6 @@
 ---
 name: Request for Help
-about: "Don't post help requests here, email the lammps-users mailing list"
+about: "Don't post help requests here, post in the LAMMPS forum"
 title: ""
 labels: invalid
 assignees: ''
@@ -8,8 +8,9 @@ assignees: ''
 ---
 Please **do not** post requests for help (e.g. with installing or using LAMMPS) here.
-Instead send an e-mail to the lammps-users mailing list.
+Instead, you can post to the LAMMPS category in the Materials Science Community
+Discourse forum at: https://matsci.org/lammps/
 This issue tracker is for tracking LAMMPS development related issues only.
-Thanks for your cooperation.
+Thank you in advance for your cooperation.

.github/codecov.yml (vendored): 6 changes

@@ -7,7 +7,7 @@ coverage:
 threshold: 10%
 only_pulls: false
 branches:
-- "master"
+- "develop"
 flags:
 - "unit"
 paths:
@@ -16,14 +16,14 @@ coverage:
 project:
 default:
 branches:
-- "master"
+- "develop"
 paths:
 - "src"
 informational: true
 patch:
 default:
 branches:
-- "master"
+- "develop"
 paths:
 - "src"
 informational: true


@@ -49,7 +49,9 @@ jobs:
 shell: bash
 working-directory: build
 run: |
-cmake -C ../cmake/presets/most.cmake ../cmake
+cmake -C ../cmake/presets/most.cmake \
+-D DOWNLOAD_POTENTIALS=off \
+../cmake
 cmake --build . --parallel 2
 - name: Perform CodeQL Analysis
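The same configuration can be reproduced outside of CI; a minimal local sketch (assuming a LAMMPS checkout with a sibling, empty build directory, which is not part of this diff):

  # configure with the "most" preset, but skip downloading the large potential files
  mkdir build && cd build
  cmake -C ../cmake/presets/most.cmake -D DOWNLOAD_POTENTIALS=off ../cmake
  cmake --build . --parallel 2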


@@ -3,7 +3,11 @@ name: "Native Windows Compilation and Unit Tests"
 on:
 push:
-branches: [develop]
+branches:
+- develop
+pull_request:
+branches:
+- develop
 workflow_dispatch:
@@ -22,20 +26,25 @@ jobs:
 - name: Select Python version
 uses: actions/setup-python@v4
 with:
-python-version: '3.10'
+python-version: '3.11'
 - name: Building LAMMPS via CMake
 shell: bash
 run: |
 python3 -m pip install numpy
 python3 -m pip install pyyaml
+nuget install MSMPIsdk
+nuget install MSMPIDIST
 cmake -C cmake/presets/windows.cmake \
+-D DOWNLOAD_POTENTIALS=off \
 -D PKG_PYTHON=on \
+-D WITH_PNG=off \
+-D WITH_JPEG=off \
 -S cmake -B build \
 -D BUILD_SHARED_LIBS=on \
 -D LAMMPS_EXCEPTIONS=on \
 -D ENABLE_TESTING=on
-cmake --build build --config Release
+cmake --build build --config Release --parallel 2
 - name: Run LAMMPS executable
 shell: bash
@@ -46,4 +55,4 @@ jobs:
 - name: Run Unit Tests
 working-directory: build
 shell: bash
-run: ctest -V -C Release
+run: ctest -V -C Release -E FixTimestep:python_move_nve
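The -E flag passed to ctest takes a regular expression of test names to exclude, here used to skip a single known-problematic test on Windows. A usage sketch for a local build directory:

  ctest -N -C Release                                   # list available tests without running them
  ctest -V -C Release -E FixTimestep:python_move_nve    # run everything except the excluded test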

.github/workflows/coverity.yml (vendored, new file): 103 changes

@@ -0,0 +1,103 @@
name: "Run Coverity Scan"
on:
schedule:
- cron: "0 0 * * FRI"
workflow_dispatch:
jobs:
analyze:
name: Analyze
if: ${{ github.repository == 'lammps/lammps' }}
runs-on: ubuntu-latest
container:
image: lammps/buildenv:ubuntu20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Create Build and Download Folder
run: mkdir build download
- name: Cache Coverity
id: cache-coverity
uses: actions/cache@v3
with:
path: ./download/
key: ${{ runner.os }}-download-${{ hashFiles('**/coverity_tool.*') }}
- name: Download Coverity if necessary
if: steps.cache-coverity.outputs.cache-hit != 'true'
working-directory: download
run: |
wget -nv https://scan.coverity.com/download/linux64 --post-data "token=${{ secrets.COVERITY_TOKEN }}&project=LAMMPS" -O coverity_tool.tgz
wget -nv https://scan.coverity.com/download/linux64 --post-data "token=${{ secrets.COVERITY_TOKEN }}&project=LAMMPS&md5=1" -O coverity_tool.md5
echo " coverity_tool.tgz" >> coverity_tool.md5
md5sum -c coverity_tool.md5
- name: Setup Coverity
run: |
tar xzf download/coverity_tool.tgz
ln -s cov-analysis-linux64-* coverity
- name: Configure LAMMPS via CMake
shell: bash
working-directory: build
run: |
cmake \
-C ../cmake/presets/clang.cmake \
-C ../cmake/presets/most.cmake \
-C ../cmake/presets/kokkos-openmp.cmake \
-D CMAKE_BUILD_TYPE="RelWithDebug" \
-D CMAKE_TUNE_FLAGS="-Wall -Wextra -Wno-unused-result" \
-D BUILD_MPI=on \
-D BUILD_OMP=on \
-D BUILD_SHARED_LIBS=on \
-D LAMMPS_SIZES=SMALLBIG \
-D LAMMPS_EXCEPTIONS=off \
-D PKG_MESSAGE=on \
-D PKG_MPIIO=on \
-D PKG_ATC=on \
-D PKG_AWPMD=on \
-D PKG_BOCS=on \
-D PKG_EFF=on \
-D PKG_H5MD=on \
-D PKG_INTEL=on \
-D PKG_LATBOLTZ=on \
-D PKG_MANIFOLD=on \
-D PKG_MGPT=on \
-D PKG_ML-PACE=on \
-D PKG_ML-RANN=on \
-D PKG_MOLFILE=on \
-D PKG_NETCDF=on \
-D PKG_PTM=on \
-D PKG_QTB=on \
-D PKG_SMTBQ=on \
-D PKG_TALLY=on \
../cmake
- name: Run Coverity Scan
shell: bash
working-directory: build
run: |
export PATH=$GITHUB_WORKSPACE/coverity/bin:$PATH
cov-build --dir cov-int cmake --build . --parallel 2
- name: Create tarball with scan results
shell: bash
working-directory: build
run: tar czf lammps.tgz cov-int
- name: Upload scan result to Coverity
shell: bash
run: |
curl --form token=${{ secrets.COVERITY_TOKEN }} \
--form email=${{ secrets.COVERITY_EMAIL }} \
--form file=@build/lammps.tgz \
--form version=${{ github.sha }} \
--form description="LAMMPS automated build" \
https://scan.coverity.com/builds?project=LAMMPS
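Besides the weekly cron trigger, the workflow_dispatch event allows starting the scan by hand. One way to do that, an assumption requiring the GitHub CLI and suitable repository permissions:

  # manually trigger the Coverity scan workflow on the develop branch
  gh workflow run coverity.yml --ref develop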


@@ -3,7 +3,11 @@ name: "Unittest for MacOS"
 on:
 push:
-branches: [develop]
+branches:
+- develop
+pull_request:
+branches:
+- develop
 workflow_dispatch:
@@ -39,9 +43,11 @@ jobs:
 working-directory: build
 run: |
 ccache -z
+python3 -m pip install numpy
 python3 -m pip install pyyaml
 cmake -C ../cmake/presets/clang.cmake \
 -C ../cmake/presets/most.cmake \
+-D DOWNLOAD_POTENTIALS=off \
 -D CMAKE_CXX_COMPILER_LAUNCHER=ccache \
 -D CMAKE_C_COMPILER_LAUNCHER=ccache \
 -D ENABLE_TESTING=on \

.gitignore (vendored): 2 changes

@@ -55,3 +55,5 @@ out/RelWithDebInfo
 out/Release
 out/x86
 out/x64
+src/Makefile.package-e
+src/Makefile.package.settings-e

CITATION.cff (new file): 91 changes

@@ -0,0 +1,91 @@
# YAML 1.2
---
cff-version: 1.2.0
title: "LAMMPS: Large-scale Atomic/Molecular Massively Parallel Simulator"
type: software
authors:
- family-names: "Plimpton"
given-names: "Steven J."
- family-names: "Kohlmeyer"
given-names: "Axel"
orcid: "https://orcid.org/0000-0001-6204-6475"
- family-names: "Thompson"
given-names: "Aidan P."
orcid: "https://orcid.org/0000-0002-0324-9114"
- family-names: "Moore"
given-names: "Stan G."
- family-names: "Berger"
given-names: "Richard"
orcid: "https://orcid.org/0000-0002-3044-8266"
doi: 10.5281/zenodo.3726416
license: GPL-2.0-only
url: https://www.lammps.org
repository-code: https://github.com/lammps/lammps/
keywords:
- "Molecular Dynamics"
- "Materials Modeling"
message: "If you are referencing LAMMPS in a publication, please cite the paper below."
preferred-citation:
type: article
doi: "10.1016/j.cpc.2021.108171"
url: "https://www.sciencedirect.com/science/article/pii/S0010465521002836"
authors:
- family-names: "Thompson"
given-names: "Aidan P."
orcid: "https://orcid.org/0000-0002-0324-9114"
- family-names: "Aktulga"
given-names: "H. Metin"
- family-names: "Berger"
given-names: "Richard"
orcid: "https://orcid.org/0000-0002-3044-8266"
- family-names: "Bolintineanu"
given-names: "Dan S."
- family-names: "Brown"
given-names: "W. Michael"
- family-names: "Crozier"
given-names: "Paul S."
- family-names: "in 't Veld"
given-names: "Pieter J."
- family-names: "Kohlmeyer"
given-names: "Axel"
orcid: "https://orcid.org/0000-0001-6204-6475"
- family-names: "Moore"
given-names: "Stan G."
- family-names: "Nguyen"
given-names: "Trung Dac"
- family-names: "Shan"
given-names: "Ray"
- family-names: "Stevens"
given-names: "Mark J."
- family-names: "Tranchida"
given-names: "Julien"
- family-names: "Trott"
given-names: "Christian"
- family-names: "Plimpton"
given-names: "Steven J."
title: "LAMMPS - a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales"
journal: "Computer Physics Communications"
keywords:
- Molecular dynamics
- Materials modeling
- Parallel algorithms
- LAMMPS
month: 2
volume: 271
issn: 0010-4655
pages: 108171
year: 2022
references:
- title: "Fast Parallel Algorithms for Short-Range Molecular Dynamics"
type: article
journal: Journal of Computational Physics
volume: 117
number: 1
pages: "1-19"
year: 1995
issn: 0021-9991
doi: 10.1006/jcph.1995.1039
url: https://www.sciencedirect.com/science/article/pii/S002199918571039X
authors:
- family-names: "Plimpton"
given-names: "Steve"
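GitHub reads CITATION.cff directly for its "Cite this repository" feature. To generate citation entries locally, one option (an assumption, using the third-party cffconvert tool rather than anything shipped with LAMMPS) is:

  python3 -m pip install cffconvert
  cffconvert --validate          # check that CITATION.cff is well-formed
  cffconvert -f bibtex           # print a BibTeX entry derived from the file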

README: 4 changes

@@ -16,8 +16,8 @@ National Laboratories, a US Department of Energy facility, with
 funding from the DOE. It is an open-source code, distributed freely
 under the terms of the GNU Public License (GPL) version 2.
-The primary author of the code is Steve Plimpton, who can be emailed
-at sjplimp@sandia.gov. The LAMMPS WWW Site at www.lammps.org has
+The code is maintained by the LAMMPS development team who can be emailed
+at developers@lammps.org. The LAMMPS WWW Site at www.lammps.org has
 more information about the code and its uses.
 The LAMMPS distribution includes the following files and directories:


@@ -19,9 +19,9 @@ kinds of filesystem manipulations. And because of that LAMMPS should
 "administrator" account directly or indirectly via "sudo" or "su".
 Therefore what could be seen as a security vulnerability is usually
-either a user mistake or a bug in the code. Bugs can be reported in
-the LAMMPS project
-[issue tracker on GitHub](https://github.com/lammps/lammps/issues).
+either a user mistake or a bug in the code. Bugs can be reported in the
+LAMMPS project [issue tracker on
+GitHub](https://github.com/lammps/lammps/issues).
 To mitigate issues with using homoglyphs or bidirectional reordering in
 unicode, which have been demonstrated as a vector to obfuscate and hide
@@ -30,10 +30,23 @@ for unicode characters and only all-ASCII source code is accepted.
 # Version Updates
-LAMMPS follows continuous release development model. We aim to keep all
-release versions (stable or patch) fully functional and employ a variety
-of automatic testing procedures to detect failures of existing
-functionality from adding new features before releases are made. Thus
-bugfixes and updates are only integrated into the current development
-branch and thus the next (patch) release and users are recommended to
-update regularly.
+LAMMPS follows a continuous release development model. We aim to keep
+the development version (`develop` branch) always fully functional and
+employ a variety of automatic testing procedures to detect failures of
+existing functionality from adding or modifying features. Most of those
+tests are run on pull requests and must be passed *before* merging to
+the `develop` branch. The `develop` branch is protected, so all changes
+*must* be submitted as a pull request and thus cannot avoid the
+automated tests.
+Additional tests are run *after* merging. Before releases are made
+*all* tests must have cleared. Then a release tag is applied and the
+`release` branch is fast-forwarded to that tag. This is referred to
+as a "feature release". Bug fixes and updates are applied first to the
+`develop` branch. Later, they appear in the `release` branch when the
+next patch release occurs. For stable releases, backported bug fixes
+and infrastructure updates are first applied to the `maintenance` branch
+and then merged to `stable` and published as "updates". For a new
+stable release the `stable` branch is updated to the corresponding state
+of the `release` branch and a new stable tag is applied in addition to
+the release tag.
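For users this branch model means tracking one of the published branches instead of individual patches; a sketch of the typical workflow (branch names as described above):

  git clone -b release https://github.com/lammps/lammps.git
  cd lammps
  git pull                 # picks up the next feature release when it is tagged
  git checkout stable      # or follow the stable branch and its update merges instead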


@@ -3,15 +3,31 @@
 # This file is part of LAMMPS
 # Created by Christoph Junghans and Richard Berger
 cmake_minimum_required(VERSION 3.10)
+########################################
 # set policy to silence warnings about ignoring <PackageName>_ROOT but use it
 if(POLICY CMP0074)
 cmake_policy(SET CMP0074 NEW)
 endif()
+# set policy to silence warnings about ignoring ${CMAKE_REQUIRED_LIBRARIES} but use it
+if(POLICY CMP0075)
+cmake_policy(SET CMP0075 NEW)
+endif()
 # set policy to silence warnings about missing executable permissions in
 # pythonx.y-config when cross-compiling. review occasionally if it may be set to NEW
 if(POLICY CMP0109)
 cmake_policy(SET CMP0109 OLD)
 endif()
+# set policy to silence warnings about timestamps of downloaded files. review occasionally if it may be set to NEW
+if(POLICY CMP0135)
+cmake_policy(SET CMP0135 OLD)
+endif()
+########################################
+# Use CONFIGURE_DEPENDS as option for file(GLOB...) when available
+if(CMAKE_VERSION VERSION_LESS 3.12)
+unset(CONFIGURE_DEPENDS)
+else()
+set(CONFIGURE_DEPENDS CONFIGURE_DEPENDS)
+endif()
 ########################################
 project(lammps CXX)
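With CMake 3.12 or newer the CONFIGURE_DEPENDS keyword makes the file(GLOB ...) results be re-checked at build time, so newly added sources are picked up without manually re-running CMake. An illustrative sketch (the file names are hypothetical):

  # drop a new style into src/ and simply rebuild; the glob is re-evaluated
  cp ~/work/pair_example.cpp ~/work/pair_example.h src/
  cmake --build build --parallel 4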
@@ -100,7 +116,7 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
 if(CMAKE_CXX_COMPILER_VERSION VERSION_EQUAL 17.3 OR CMAKE_CXX_COMPILER_VERSION VERSION_EQUAL 17.4)
 set(CMAKE_TUNE_DEFAULT "-xCOMMON-AVX512")
 else()
-set(CMAKE_TUNE_DEFAULT "-xHost")
+set(CMAKE_TUNE_DEFAULT "-xHost -fp-model fast=2 -no-prec-div -qoverride-limits -diag-disable=10441 -diag-disable=2196")
 endif()
 endif()
 endif()
@@ -135,14 +151,12 @@ set(CMAKE_CXX_EXTENSIONS OFF CACHE BOOL "Use compiler extensions")
 # ugly hacks for MSVC which by default always reports an old C++ standard in the __cplusplus macro
 # and prints lots of pointless warnings about "unsafe" functions
 if(MSVC)
-if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
+if((CMAKE_CXX_COMPILER_ID STREQUAL "MSVC") OR (CMAKE_CXX_COMPILER_ID STREQUAL "Intel"))
 add_compile_options(/Zc:__cplusplus)
 add_compile_options(/wd4244)
 add_compile_options(/wd4267)
-if(LAMMPS_EXCEPTIONS)
 add_compile_options(/EHsc)
 endif()
-endif()
 add_compile_definitions(_CRT_SECURE_NO_WARNINGS)
 endif()
@@ -154,6 +168,19 @@ endif()
 ########################################################################
 # User input options #
 ########################################################################
+# set path to python interpreter and thus enforcing python version when
+# in a virtual environment and PYTHON_EXECUTABLE is not set on command line
+if(DEFINED ENV{VIRTUAL_ENV} AND NOT PYTHON_EXECUTABLE)
+if(CMAKE_HOST_SYSTEM_NAME STREQUAL "Windows")
+set(PYTHON_EXECUTABLE "$ENV{VIRTUAL_ENV}/Scripts/python.exe")
+else()
+set(PYTHON_EXECUTABLE "$ENV{VIRTUAL_ENV}/bin/python")
+endif()
+set(Python_EXECUTABLE "${PYTHON_EXECUTABLE}")
+message(STATUS "Running in virtual environment: $ENV{VIRTUAL_ENV}\n"
+" Setting Python interpreter to: ${PYTHON_EXECUTABLE}")
+endif()
 set(LAMMPS_MACHINE "" CACHE STRING "Suffix to append to lmp binary (WON'T enable any features automatically")
 mark_as_advanced(LAMMPS_MACHINE)
 if(LAMMPS_MACHINE)
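The effect of the new block: configuring from inside an activated Python virtual environment automatically selects that environment's interpreter unless PYTHON_EXECUTABLE is given explicitly. A sketch, with the venv path only as an example:

  python3 -m venv $HOME/lammps-venv
  source $HOME/lammps-venv/bin/activate
  cmake -S cmake -B build -D PKG_PYTHON=on   # reports: Running in virtual environment: ...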
@@ -175,8 +202,8 @@ else()
 endif()
 include(GNUInstallDirs)
-file(GLOB ALL_SOURCES ${LAMMPS_SOURCE_DIR}/[^.]*.cpp)
-file(GLOB MAIN_SOURCES ${LAMMPS_SOURCE_DIR}/main.cpp)
+file(GLOB ALL_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_SOURCE_DIR}/[^.]*.cpp)
+file(GLOB MAIN_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_SOURCE_DIR}/main.cpp)
 list(REMOVE_ITEM ALL_SOURCES ${MAIN_SOURCES})
 add_library(lammps ${ALL_SOURCES})
@@ -194,6 +221,7 @@ option(CMAKE_VERBOSE_MAKEFILE "Generate verbose Makefiles" OFF)
 set(STANDARD_PACKAGES
 ADIOS
+AMOEBA
 ASPHERE
 ATC
 AWPMD
@@ -202,7 +230,7 @@ set(STANDARD_PACKAGES
 BPM
 BROWNIAN
 CG-DNA
-CG-SDK
+CG-SPICA
 CLASS2
 COLLOID
 COLVARS
@@ -229,7 +257,7 @@ set(STANDARD_PACKAGES
 KIM
 KSPACE
 LATBOLTZ
-LATTE
+LEPTON
 MACHDYN
 MANIFOLD
 MANYBODY
@@ -245,6 +273,7 @@ set(STANDARD_PACKAGES
 ML-QUIP
 ML-RANN
 ML-SNAP
+ML-POD
 MOFFF
 MOLECULE
 MOLFILE
@@ -295,6 +324,15 @@ if(PKG_ADIOS)
 # script that defines the MPI::MPI_C target
 enable_language(C)
 find_package(ADIOS2 REQUIRED)
+if(BUILD_MPI)
+if(NOT ADIOS2_HAVE_MPI)
+message(FATAL_ERROR "ADIOS2 must be built with MPI support when LAMMPS has MPI enabled")
+endif()
+else()
+if(ADIOS2_HAVE_MPI)
+message(FATAL_ERROR "ADIOS2 must be built without MPI support when LAMMPS has MPI disabled")
+endif()
+endif()
 target_link_libraries(lammps PRIVATE adios2::adios2)
 endif()
@@ -358,17 +396,19 @@ pkg_depends(MPIIO MPI)
 pkg_depends(ATC MANYBODY)
 pkg_depends(LATBOLTZ MPI)
 pkg_depends(SCAFACOS MPI)
+pkg_depends(AMOEBA KSPACE)
 pkg_depends(DIELECTRIC KSPACE)
 pkg_depends(DIELECTRIC EXTRA-PAIR)
 pkg_depends(CG-DNA MOLECULE)
 pkg_depends(CG-DNA ASPHERE)
 pkg_depends(ELECTRODE KSPACE)
+pkg_depends(EXTRA-MOLECULE MOLECULE)
 # detect if we may enable OpenMP support by default
 set(BUILD_OMP_DEFAULT OFF)
-find_package(OpenMP QUIET)
-if(OpenMP_FOUND)
-check_include_file_cxx(omp.h HAVE_OMP_H_INCLUDE)
+find_package(OpenMP COMPONENTS CXX QUIET)
+if(OpenMP_CXX_FOUND)
+check_omp_h_include()
 if(HAVE_OMP_H_INCLUDE)
 set(BUILD_OMP_DEFAULT ON)
 endif()
@@ -377,8 +417,8 @@ endif()
 option(BUILD_OMP "Build with OpenMP support" ${BUILD_OMP_DEFAULT})
 if(BUILD_OMP)
-find_package(OpenMP REQUIRED)
-check_include_file_cxx(omp.h HAVE_OMP_H_INCLUDE)
+find_package(OpenMP COMPONENTS CXX REQUIRED)
+check_omp_h_include()
 if(NOT HAVE_OMP_H_INCLUDE)
 message(FATAL_ERROR "Cannot find the 'omp.h' header file required for full OpenMP support")
 endif()
@@ -400,21 +440,19 @@ if(BUILD_OMP)
 target_link_libraries(lmp PRIVATE OpenMP::OpenMP_CXX)
 endif()
-if(PKG_MSCG OR PKG_ATC OR PKG_AWPMD OR PKG_ML-QUIP OR PKG_LATTE OR PKG_ELECTRODE)
+if(PKG_MSCG OR PKG_ATC OR PKG_AWPMD OR PKG_ML-QUIP OR PKG_ML-POD OR PKG_ELECTRODE OR BUILD_TOOLS)
 enable_language(C)
+if (NOT USE_INTERNAL_LINALG)
 find_package(LAPACK)
 find_package(BLAS)
-if(NOT LAPACK_FOUND OR NOT BLAS_FOUND)
-include(CheckGeneratorSupport)
-if(NOT CMAKE_GENERATOR_SUPPORT_FORTRAN)
-status(FATAL_ERROR "Cannot build internal linear algebra library as CMake build tool lacks Fortran support")
 endif()
-enable_language(Fortran)
-file(GLOB LAPACK_SOURCES ${LAMMPS_LIB_SOURCE_DIR}/linalg/[^.]*.[fF])
-add_library(linalg STATIC ${LAPACK_SOURCES})
+if(NOT LAPACK_FOUND OR NOT BLAS_FOUND OR USE_INTERNAL_LINALG)
+file(GLOB LINALG_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/linalg/[^.]*.cpp)
+add_library(linalg STATIC ${LINALG_SOURCES})
 set_target_properties(linalg PROPERTIES OUTPUT_NAME lammps_linalg${LAMMPS_MACHINE})
 set(BLAS_LIBRARIES "$<TARGET_FILE:linalg>")
 set(LAPACK_LIBRARIES "$<TARGET_FILE:linalg>")
+target_link_libraries(lammps PRIVATE linalg)
 else()
 list(APPEND LAPACK_LIBRARIES ${BLAS_LIBRARIES})
 endif()
@@ -482,7 +520,7 @@ else()
 endif()
 foreach(PKG_WITH_INCL KSPACE PYTHON ML-IAP VORONOI COLVARS ML-HDNNP MDI MOLFILE NETCDF
-PLUMED QMMM ML-QUIP SCAFACOS MACHDYN VTK KIM LATTE MSCG COMPRESS ML-PACE)
+PLUMED QMMM ML-QUIP SCAFACOS MACHDYN VTK KIM MSCG COMPRESS ML-PACE LEPTON)
 if(PKG_${PKG_WITH_INCL})
 include(Packages/${PKG_WITH_INCL})
 endif()
@@ -499,7 +537,10 @@ set(CMAKE_TUNE_FLAGS "${CMAKE_TUNE_DEFAULT}" CACHE STRING "Compiler and machine
 separate_arguments(CMAKE_TUNE_FLAGS)
 foreach(_FLAG ${CMAKE_TUNE_FLAGS})
 target_compile_options(lammps PRIVATE ${_FLAG})
+# skip these flags when linking the main executable
+if(NOT (("${_FLAG}" STREQUAL "-Xcudafe") OR (("${_FLAG}" STREQUAL "--diag_suppress=unrecognized_pragma"))))
 target_compile_options(lmp PRIVATE ${_FLAG})
+endif()
 endforeach()
 ########################################################################
 # Basic system tests (standard libraries, headers, functions, types) #
@@ -527,6 +568,8 @@ RegisterStyles(${LAMMPS_SOURCE_DIR})
 ########################################################
 # Fetch missing external files and archives for packages
 ########################################################
+option(DOWNLOAD_POTENTIALS "Automatically download large potential files" ON)
+mark_as_advanced(DOWNLOAD_POTENTIALS)
 foreach(PKG ${STANDARD_PACKAGES} ${SUFFIX_PACKAGES})
 if(PKG_${PKG})
 FetchPotentials(${LAMMPS_SOURCE_DIR}/${PKG} ${LAMMPS_POTENTIALS_DIR})
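Because DOWNLOAD_POTENTIALS is marked as advanced, it does not appear in the default cache listing; it can still be inspected and toggled on an existing build tree, for example:

  cmake -LA build | grep DOWNLOAD_POTENTIALS   # -LA also lists advanced cache entries
  cmake -D DOWNLOAD_POTENTIALS=off build       # re-run CMake with the option changed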
@@ -539,8 +582,8 @@ endforeach()
 foreach(PKG ${STANDARD_PACKAGES})
 set(${PKG}_SOURCES_DIR ${LAMMPS_SOURCE_DIR}/${PKG})
-file(GLOB ${PKG}_SOURCES ${${PKG}_SOURCES_DIR}/[^.]*.cpp)
-file(GLOB ${PKG}_HEADERS ${${PKG}_SOURCES_DIR}/[^.]*.h)
+file(GLOB ${PKG}_SOURCES ${CONFIGURE_DEPENDS} ${${PKG}_SOURCES_DIR}/[^.]*.cpp)
+file(GLOB ${PKG}_HEADERS ${CONFIGURE_DEPENDS} ${${PKG}_SOURCES_DIR}/[^.]*.h)
 # check for package files in src directory due to old make system
 DetectBuildSystemConflict(${LAMMPS_SOURCE_DIR} ${${PKG}_SOURCES} ${${PKG}_HEADERS})
@@ -567,8 +610,8 @@ endforeach()
 foreach(PKG ${SUFFIX_PACKAGES})
 set(${PKG}_SOURCES_DIR ${LAMMPS_SOURCE_DIR}/${PKG})
-file(GLOB ${PKG}_SOURCES ${${PKG}_SOURCES_DIR}/[^.]*.cpp)
-file(GLOB ${PKG}_HEADERS ${${PKG}_SOURCES_DIR}/[^.]*.h)
+file(GLOB ${PKG}_SOURCES ${CONFIGURE_DEPENDS} ${${PKG}_SOURCES_DIR}/[^.]*.cpp)
+file(GLOB ${PKG}_HEADERS ${CONFIGURE_DEPENDS} ${${PKG}_SOURCES_DIR}/[^.]*.h)
 # check for package files in src directory due to old make system
 DetectBuildSystemConflict(${LAMMPS_SOURCE_DIR} ${${PKG}_SOURCES} ${${PKG}_HEADERS})
@@ -579,18 +622,11 @@ endforeach()
 ##############################################
 # add lib sources of (simple) enabled packages
 ############################################
-foreach(PKG_LIB POEMS ATC AWPMD H5MD MESONT)
+foreach(PKG_LIB POEMS ATC AWPMD H5MD)
 if(PKG_${PKG_LIB})
 string(TOLOWER "${PKG_LIB}" PKG_LIB)
-if(PKG_LIB STREQUAL "mesont")
-enable_language(Fortran)
-file(GLOB_RECURSE ${PKG_LIB}_SOURCES
-${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.f90)
-else()
-file(GLOB_RECURSE ${PKG_LIB}_SOURCES
-${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.c
-${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.cpp)
-endif()
+file(GLOB_RECURSE ${PKG_LIB}_SOURCES ${CONFIGURE_DEPENDS}
+${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.c ${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.cpp)
 add_library(${PKG_LIB} STATIC ${${PKG_LIB}_SOURCES})
 set_target_properties(${PKG_LIB} PROPERTIES OUTPUT_NAME lammps_${PKG_LIB}${LAMMPS_MACHINE})
 target_link_libraries(lammps PRIVATE ${PKG_LIB})
@@ -604,7 +640,7 @@ foreach(PKG_LIB POEMS ATC AWPMD H5MD MESONT)
 endif()
 endforeach()
-if(PKG_ELECTRODE)
+if(PKG_ELECTRODE OR PKG_ML-POD)
 target_link_libraries(lammps PRIVATE ${LAPACK_LIBRARIES})
 endif()
@@ -633,7 +669,7 @@ endif()
 # packages which selectively include variants based on enabled styles
 # e.g. accelerator packages
 ######################################################################
-foreach(PKG_WITH_INCL CORESHELL DPD-SMOOTH PHONON QEQ OPENMP KOKKOS OPT INTEL GPU)
+foreach(PKG_WITH_INCL CORESHELL DPD-SMOOTH MC MISC PHONON QEQ OPENMP KOKKOS OPT INTEL GPU)
 if(PKG_${PKG_WITH_INCL})
 include(Packages/${PKG_WITH_INCL})
 endif()
@@ -706,18 +742,17 @@ list(FIND LANGUAGES "Fortran" _index)
 if(_index GREATER -1)
 target_link_libraries(lammps PRIVATE ${CMAKE_Fortran_IMPLICIT_LINK_LIBRARIES})
 endif()
-set(LAMMPS_CXX_HEADERS angle.h atom.h bond.h citeme.h comm.h compute.h dihedral.h domain.h error.h fix.h force.h group.h improper.h
-input.h info.h kspace.h lammps.h lattice.h library.h lmppython.h lmptype.h memory.h modify.h neighbor.h neigh_list.h output.h
-pair.h pointers.h region.h timer.h universe.h update.h utils.h variable.h)
-if(LAMMPS_EXCEPTIONS)
-list(APPEND LAMMPS_CXX_HEADERS exceptions.h)
-endif()
+set(LAMMPS_CXX_HEADERS angle.h atom.h bond.h citeme.h comm.h command.h compute.h dihedral.h domain.h
+error.h exceptions.h fix.h force.h group.h improper.h input.h info.h kspace.h lammps.h lattice.h
+library.h lmppython.h lmptype.h memory.h modify.h neighbor.h neigh_list.h output.h pair.h
+platform.h pointers.h region.h timer.h universe.h update.h utils.h variable.h)
+set(LAMMPS_FMT_HEADERS core.h format.h)
 set_target_properties(lammps PROPERTIES OUTPUT_NAME lammps${LAMMPS_MACHINE})
 set_target_properties(lammps PROPERTIES SOVERSION ${SOVERSION})
 set_target_properties(lammps PROPERTIES PREFIX "lib")
 target_include_directories(lammps PUBLIC $<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}/lammps>)
-file(MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps)
+file(MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/fmt)
 foreach(_HEADER ${LAMMPS_CXX_HEADERS})
 add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/${_HEADER} COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_SOURCE_DIR}/${_HEADER} ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/${_HEADER} DEPENDS ${LAMMPS_SOURCE_DIR}/${_HEADER})
 add_custom_target(${_HEADER} DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/${_HEADER})
@@ -726,6 +761,14 @@ foreach(_HEADER ${LAMMPS_CXX_HEADERS})
 install(FILES ${LAMMPS_SOURCE_DIR}/${_HEADER} DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/lammps)
 endif()
 endforeach()
+foreach(_HEADER ${LAMMPS_FMT_HEADERS})
+add_custom_command(OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/fmt/${_HEADER} COMMAND ${CMAKE_COMMAND} -E copy_if_different ${LAMMPS_SOURCE_DIR}/fmt/${_HEADER} ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/fmt/${_HEADER} DEPENDS ${LAMMPS_SOURCE_DIR}/fmt/${_HEADER})
+add_custom_target(fmt_${_HEADER} DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/includes/lammps/fmt/${_HEADER})
+add_dependencies(lammps fmt_${_HEADER})
+if(BUILD_SHARED_LIBS)
+install(FILES ${LAMMPS_SOURCE_DIR}/fmt/${_HEADER} DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/lammps/fmt)
+endif()
+endforeach()
 target_include_directories(lammps INTERFACE $<BUILD_INTERFACE:${CMAKE_CURRENT_BINARY_DIR}/includes>)
 add_library(LAMMPS::lammps ALIAS lammps)
 get_target_property(LAMMPS_DEFINES lammps INTERFACE_COMPILE_DEFINITIONS)
@@ -783,6 +826,10 @@ if(BUILD_SHARED_LIBS)
 set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
 endif()
 else()
+# backward compatibility
+if(PYTHON_EXECUTABLE)
+set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
+endif()
 find_package(Python COMPONENTS Interpreter)
 endif()
 if(BUILD_IS_MULTI_CONFIG)
@@ -793,9 +840,8 @@
 set(LIBLAMMPS_SHARED_BINARY ${MY_BUILD_DIR}/liblammps${LAMMPS_MACHINE}${CMAKE_SHARED_LIBRARY_SUFFIX})
 if(Python_EXECUTABLE)
 add_custom_target(
-install-python ${CMAKE_COMMAND} -E remove_directory build
-COMMAND ${Python_EXECUTABLE} ${LAMMPS_PYTHON_DIR}/install.py -p ${LAMMPS_PYTHON_DIR}/lammps
--l ${LIBLAMMPS_SHARED_BINARY} -w ${MY_BUILD_DIR}
+install-python ${Python_EXECUTABLE} ${LAMMPS_PYTHON_DIR}/install.py -p ${LAMMPS_PYTHON_DIR}/lammps
+-l ${LIBLAMMPS_SHARED_BINARY} -w ${MY_BUILD_DIR} -v ${LAMMPS_SOURCE_DIR}/version.h
 COMMENT "Installing LAMMPS Python module")
 else()
 add_custom_target(
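The install-python target now drives python/install.py directly with the shared library, the build folder, and the version header. After a shared-library build it would typically be used roughly as follows (a sketch, not a required sequence):

  cmake -S cmake -B build -D BUILD_SHARED_LIBS=on
  cmake --build build --parallel 4
  cmake --build build --target install-python   # installs the lammps Python module via install.py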
@@ -808,26 +854,6 @@
 ${CMAKE_COMMAND} -E echo "Must build LAMMPS as a shared library to use the Python module")
 endif()
-###############################################################################
-# Add LAMMPS python module to "install" target. This is taylored for building
-# LAMMPS for package managers and with different prefix settings.
-# This requires either a shared library or that the PYTHON package is included.
-###############################################################################
-if(BUILD_SHARED_LIBS OR PKG_PYTHON)
-if(CMAKE_VERSION VERSION_LESS 3.12)
-find_package(PythonInterp) # Deprecated since version 3.12
-if(PYTHONINTERP_FOUND)
-set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
-endif()
-else()
-find_package(Python COMPONENTS Interpreter)
-endif()
-if(Python_EXECUTABLE)
-file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/python)
-install(CODE "execute_process(COMMAND ${Python_EXECUTABLE} setup.py build -b ${CMAKE_BINARY_DIR}/python install --prefix=${CMAKE_INSTALL_PREFIX} --root=\$ENV{DESTDIR}/ WORKING_DIRECTORY ${LAMMPS_PYTHON_DIR})")
-endif()
-endif()
 include(Testing)
 include(CodeCoverage)
 include(CodingStandard)
@@ -839,6 +865,23 @@ if(ClangFormat_FOUND)
 WORKING_DIRECTORY ${LAMMPS_SOURCE_DIR})
 endif()
+# extract Kokkos compilation settings
+get_cmake_property(_allvars VARIABLES)
+foreach(_var ${_allvars})
+if(${_var})
+string(REGEX MATCH "Kokkos_ENABLE_(SERIAL|THREADS|OPENMP|CUDA|HIP|SYCL|OPENMPTARGET|HPX)" _match ${_var})
+if(_match)
+string(REGEX REPLACE "Kokkos_ENABLE_(OPENMP|SERIAL|CUDA|HIP|SYCL)" "\\1" _match ${_var})
+list(APPEND KOKKOS_DEVICE ${_match})
+endif()
+string(REGEX MATCH "Kokkos_ARCH" _match ${_var})
+if(_match)
+string(REGEX REPLACE "Kokkos_ARCH_(.*)" "\\1" _match ${_var})
+list(APPEND KOKKOS_ARCH ${_match})
+endif()
+endif()
+endforeach()
 get_target_property(DEFINES lammps COMPILE_DEFINITIONS)
 if(BUILD_IS_MULTI_CONFIG)
 set(LAMMPS_BUILD_TYPE "Multi-Config")
@@ -850,6 +893,7 @@ feature_summary(DESCRIPTION "The following tools and libraries have been found a
 message(STATUS "<<< Build configuration >>>
 LAMMPS Version: ${PROJECT_VERSION}
 Operating System: ${CMAKE_SYSTEM_NAME} ${CMAKE_LINUX_DISTRO} ${CMAKE_DISTRO_VERSION}
+CMake Version: ${CMAKE_VERSION}
 Build type: ${LAMMPS_BUILD_TYPE}
 Install path: ${CMAKE_INSTALL_PREFIX}
 Generator: ${CMAKE_GENERATOR} using ${CMAKE_MAKE_PROGRAM}")
@@ -936,7 +980,10 @@ if(PKG_GPU)
 message(STATUS "GPU precision: ${GPU_PREC}")
 endif()
 if(PKG_KOKKOS)
-message(STATUS "Kokkos Arch: ${KOKKOS_ARCH}")
+message(STATUS "Kokkos Devices: ${KOKKOS_DEVICE}")
+if(KOKKOS_ARCH)
+message(STATUS "Kokkos Architecture: ${KOKKOS_ARCH}")
+endif()
 endif()
 if(PKG_KSPACE)
 message(STATUS "<<< FFT settings >>>


@@ -72,7 +72,7 @@
 "configurationType": "Debug",
 "buildRoot": "${workspaceRoot}\\build\\${name}",
 "installRoot": "${workspaceRoot}\\install\\${name}",
-"cmakeCommandArgs": "-C ${workspaceRoot}\\cmake\\presets\\windows.cmake -DCMAKE_C_COMPILER=clang-cl.exe -DCMAKE_CXX_COMPILER=clang-cl.exe",
+"cmakeCommandArgs": "-C ${workspaceRoot}\\cmake\\presets\\windows.cmake -DCMAKE_C_COMPILER=clang-cl.exe -DCMAKE_CXX_COMPILER=clang-cl.exe -DBUILD_MPI=off",
 "buildCommandArgs": "",
 "ctestCommandArgs": "",
 "inheritEnvironments": [ "clang_cl_x64" ],
@@ -105,7 +105,7 @@
 "configurationType": "Release",
 "buildRoot": "${workspaceRoot}\\build\\${name}",
 "installRoot": "${workspaceRoot}\\install\\${name}",
-"cmakeCommandArgs": "-C ${workspaceRoot}\\cmake\\presets\\windows.cmake -DCMAKE_C_COMPILER=clang-cl.exe -DCMAKE_CXX_COMPILER=clang-cl.exe",
+"cmakeCommandArgs": "-C ${workspaceRoot}\\cmake\\presets\\windows.cmake -DCMAKE_C_COMPILER=clang-cl.exe -DCMAKE_CXX_COMPILER=clang-cl.exe -DBUILD_MPI=off",
 "buildCommandArgs": "",
 "ctestCommandArgs": "-V",
 "inheritEnvironments": [ "clang_cl_x64" ],


@@ -5,6 +5,10 @@ if(CMAKE_VERSION VERSION_LESS 3.12)
 set(Python3_VERSION ${PYTHON_VERSION_STRING})
 endif()
 else()
+# use default (or custom) Python executable, if version is sufficient
+if(Python_VERSION VERSION_GREATER_EQUAL 3.5)
+set(Python3_EXECUTABLE ${Python_EXECUTABLE})
+endif()
 find_package(Python3 COMPONENTS Interpreter QUIET)
 endif()


@@ -0,0 +1,16 @@
if(NOT DEFINED HIP_PATH)
if(NOT DEFINED ENV{HIP_PATH})
message(FATAL_ERROR "HIP support requires HIP_PATH to be defined.\n"
"Either pass the HIP_PATH as a CMake option via -DHIP_PATH=... or set the HIP_PATH environment variable.")
else()
set(HIP_PATH $ENV{HIP_PATH} CACHE PATH "Path to HIP installation")
endif()
endif()
if(NOT DEFINED ROCM_PATH)
if(NOT DEFINED ENV{ROCM_PATH})
set(ROCM_PATH "/opt/rocm" CACHE PATH "Path to ROCm installation")
else()
set(ROCM_PATH $ENV{ROCM_PATH} CACHE PATH "Path to ROCm installation")
endif()
endif()
list(APPEND CMAKE_PREFIX_PATH ${HIP_PATH} ${ROCM_PATH})
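This new module only resolves the HIP and ROCm install locations and adds them to CMAKE_PREFIX_PATH. A typical HIP build of the GPU package could then be configured roughly like this (the ROCm location is an assumption for a default install):

  export HIP_PATH=/opt/rocm/hip
  cmake -D PKG_GPU=on -D GPU_API=hip ../cmake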


@@ -4,20 +4,24 @@
 option(BUILD_DOC "Build LAMMPS HTML documentation" OFF)
 if(BUILD_DOC)
-# Sphinx 3.x requires at least Python 3.5
+# Current Sphinx versions require at least Python 3.8
 if(CMAKE_VERSION VERSION_LESS 3.12)
-find_package(PythonInterp 3.5 REQUIRED)
+find_package(PythonInterp 3.8 REQUIRED)
 set(VIRTUALENV ${PYTHON_EXECUTABLE} -m venv)
 else()
+# use default (or custom) Python executable, if version is sufficient
+if(Python_VERSION VERSION_GREATER_EQUAL 3.8)
+set(Python3_EXECUTABLE ${Python_EXECUTABLE})
+endif()
 find_package(Python3 REQUIRED COMPONENTS Interpreter)
-if(Python3_VERSION VERSION_LESS 3.5)
-message(FATAL_ERROR "Python 3.5 and up is required to build the HTML documentation")
+if(Python3_VERSION VERSION_LESS 3.8)
+message(FATAL_ERROR "Python 3.8 and up is required to build the HTML documentation")
 endif()
 set(VIRTUALENV ${Python3_EXECUTABLE} -m venv)
 endif()
 find_package(Doxygen 1.8.10 REQUIRED)
-file(GLOB DOC_SOURCES ${LAMMPS_DOC_DIR}/src/[^.]*.rst)
+file(GLOB DOC_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_DOC_DIR}/src/[^.]*.rst)
 add_custom_command(
@@ -56,16 +60,27 @@ if(BUILD_DOC)
 )
 set(MATHJAX_URL "https://github.com/mathjax/MathJax/archive/3.1.3.tar.gz" CACHE STRING "URL for MathJax tarball")
-set(MATHJAX_MD5 "d1c98c746888bfd52ca8ebc10704f92f" CACHE STRING "MD5 checksum of MathJax tarball")
+set(MATHJAX_MD5 "b81661c6e6ba06278e6ae37b30b0c492" CACHE STRING "MD5 checksum of MathJax tarball")
 mark_as_advanced(MATHJAX_URL)
+GetFallbackURL(MATHJAX_URL MATHJAX_FALLBACK)
 # download mathjax distribution and unpack to folder "mathjax"
 if(NOT EXISTS ${DOC_BUILD_STATIC_DIR}/mathjax/es5)
-file(DOWNLOAD ${MATHJAX_URL}
-"${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz"
-EXPECTED_MD5 ${MATHJAX_MD5})
+if(EXISTS ${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz)
+file(MD5 ${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz DL_MD5)
+endif()
+if(NOT "${DL_MD5}" STREQUAL "${MATHJAX_MD5}")
+file(DOWNLOAD ${MATHJAX_URL} "${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz" STATUS DL_STATUS SHOW_PROGRESS)
+file(MD5 ${CMAKE_CURRENT_BINARY_DIR}/mathjax.tar.gz DL_MD5)
+if((NOT DL_STATUS EQUAL 0) OR (NOT "${DL_MD5}" STREQUAL "${MATHJAX_MD5}"))
+message(WARNING "Download from primary URL ${MATHJAX_URL} failed\nTrying fallback URL ${MATHJAX_FALLBACK}")
+file(DOWNLOAD ${MATHJAX_FALLBACK} ${CMAKE_BINARY_DIR}/libpace.tar.gz EXPECTED_HASH MD5=${MATHJAX_MD5} SHOW_PROGRESS)
+endif()
+else()
+message(STATUS "Using already downloaded archive ${CMAKE_BINARY_DIR}/libpace.tar.gz")
+endif()
 execute_process(COMMAND ${CMAKE_COMMAND} -E tar xzf mathjax.tar.gz WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR})
-file(GLOB MATHJAX_VERSION_DIR ${CMAKE_CURRENT_BINARY_DIR}/MathJax-*)
+file(GLOB MATHJAX_VERSION_DIR ${CONFIGURE_DEPENDS} ${CMAKE_CURRENT_BINARY_DIR}/MathJax-*)
 execute_process(COMMAND ${CMAKE_COMMAND} -E rename ${MATHJAX_VERSION_DIR} ${DOC_BUILD_STATIC_DIR}/mathjax)
 endif()


@@ -9,8 +9,22 @@ function(ExternalCMakeProject target url hash basedir cmakedir cmakefile)
 get_filename_component(archive ${url} NAME)
 file(MAKE_DIRECTORY ${CMAKE_BINARY_DIR}/_deps/src)
+if(EXISTS ${CMAKE_BINARY_DIR}/_deps/${archive})
+file(MD5 ${CMAKE_BINARY_DIR}/_deps/${archive} DL_MD5)
+endif()
+if(NOT "${DL_MD5}" STREQUAL "${hash}")
 message(STATUS "Downloading ${url}")
-file(DOWNLOAD ${url} ${CMAKE_BINARY_DIR}/_deps/${archive} EXPECTED_HASH MD5=${hash} SHOW_PROGRESS)
+file(DOWNLOAD ${url} ${CMAKE_BINARY_DIR}/_deps/${archive} STATUS DL_STATUS SHOW_PROGRESS)
+file(MD5 ${CMAKE_BINARY_DIR}/_deps/${archive} DL_MD5)
+if((NOT DL_STATUS EQUAL 0) OR (NOT "${DL_MD5}" STREQUAL "${hash}"))
+set(${target}_URL ${url})
+GetFallbackURL(${target}_URL fallback)
+message(WARNING "Download from primary URL ${url} failed\nTrying fallback URL ${fallback}")
+file(DOWNLOAD ${fallback} ${CMAKE_BINARY_DIR}/_deps/${archive} EXPECTED_HASH MD5=${hash} SHOW_PROGRESS)
+endif()
+else()
+message(STATUS "Using already downloaded archive ${CMAKE_BINARY_DIR}/_deps/${archive}")
+endif()
 message(STATUS "Unpacking and configuring ${archive}")
 execute_process(COMMAND ${CMAKE_COMMAND} -E tar xzf ${CMAKE_BINARY_DIR}/_deps/${archive}
 WORKING_DIRECTORY ${CMAKE_BINARY_DIR}/_deps/src)


@@ -1,5 +1,10 @@
 # Find clang-format
 find_program(ClangFormat_EXECUTABLE NAMES clang-format
+clang-format-15.0
+clang-format-14.0
+clang-format-13.0
+clang-format-12.0
+clang-format-11.0
 clang-format-10.0
 clang-format-9.0
 clang-format-8.0
@@ -14,19 +19,27 @@ if(ClangFormat_EXECUTABLE)
 OUTPUT_VARIABLE clang_format_version
 ERROR_QUIET OUTPUT_STRIP_TRAILING_WHITESPACE)
-if(clang_format_version MATCHES "^clang-format version .*")
-# Arch Linux
+if(clang_format_version MATCHES "^(Ubuntu |)clang-format version .*")
+# Arch Linux output:
 # clang-format version 10.0.0
+#
-# Ubuntu 18.04 LTS Output
+# Ubuntu 18.04 LTS output:
 # clang-format version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
-string(REGEX REPLACE "clang-format version ([0-9.]+).*"
-"\\1"
+#
+# Ubuntu 20.04 LTS output:
+# clang-format version 10.0.0-4ubuntu1
+#
+# Ubuntu 22.04 LTS output:
+# Ubuntu clang-format version 14.0.0-1ubuntu1
+#
+# Fedora 36 output:
+# clang-format version 14.0.5 (Fedora 14.0.5-1.fc36)
+string(REGEX REPLACE "^(Ubuntu |)clang-format version ([0-9.]+).*"
+"\\2"
 ClangFormat_VERSION
 "${clang_format_version}")
 elseif(clang_format_version MATCHES ".*LLVM version .*")
-# CentOS 7 Output
+# CentOS 7 output:
 # LLVM (http://llvm.org/):
 # LLVM version 3.4.2
 # Optimized build.


@@ -22,7 +22,7 @@ endif()
 if(Python_EXECUTABLE)
 get_filename_component(_python_path ${Python_EXECUTABLE} PATH)
 find_program(Cythonize_EXECUTABLE
-NAMES cythonize3 cythonize cythonize.bat
+NAMES cythonize-${Python_VERSION_MAJOR}.${Python_VERSION_MINOR} cythonize3 cythonize cythonize.bat
 HINTS ${_python_path})
 endif()


@@ -1,19 +0,0 @@
find_path(ZMQ_INCLUDE_DIR zmq.h)
find_library(ZMQ_LIBRARY NAMES zmq)
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(ZMQ DEFAULT_MSG ZMQ_LIBRARY ZMQ_INCLUDE_DIR)
# Copy the results to the output variables and target.
if(ZMQ_FOUND)
set(ZMQ_LIBRARIES ${ZMQ_LIBRARY})
set(ZMQ_INCLUDE_DIRS ${ZMQ_INCLUDE_DIR})
if(NOT TARGET ZMQ::ZMQ)
add_library(ZMQ::ZMQ UNKNOWN IMPORTED)
set_target_properties(ZMQ::ZMQ PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES "C"
IMPORTED_LOCATION "${ZMQ_LIBRARY}"
INTERFACE_INCLUDE_DIRECTORIES "${ZMQ_INCLUDE_DIR}")
endif()
endif()


@@ -6,6 +6,9 @@ if(${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
 "Please remove CMakeCache.txt and CMakeFiles first.")
 endif()
+set(LAMMPS_THIRDPARTY_URL "https://download.lammps.org/thirdparty"
+CACHE STRING "URL for thirdparty package downloads")
 # global LAMMPS/plugin build settings
 set(LAMMPS_SOURCE_DIR "" CACHE PATH "Location of LAMMPS sources folder")
 if(NOT LAMMPS_SOURCE_DIR)
@@ -62,7 +65,7 @@ endfunction(validate_option)
 # helper function for getting the most recently modified file or folder from a glob pattern
 function(get_newest_file path variable)
-file(GLOB _dirs ${path})
+file(GLOB _dirs ${CONFIGURE_DEPENDS} ${path})
 set(_besttime 2000-01-01T00:00:00)
 set(_bestfile "<unknown>")
 foreach(_dir ${_dirs})
@@ -78,6 +81,25 @@ function(get_newest_file path variable)
 set(${variable} ${_bestfile} PARENT_SCOPE)
 endfunction()
+# get LAMMPS version date
+function(get_lammps_version version_header variable)
+file(STRINGS ${version_header} line REGEX LAMMPS_VERSION)
+string(REGEX REPLACE "#define LAMMPS_VERSION \"([0-9]+) ([A-Za-z]+) ([0-9]+)\"" "\\1\\2\\3" date "${line}")
+set(${variable} "${date}" PARENT_SCOPE)
+endfunction()
+# determine canonical URL for downloading backup copy from download.lammps.org/thirdparty
+function(GetFallbackURL input output)
+string(REPLACE "_URL" "" _tmp ${input})
+string(TOLOWER ${_tmp} libname)
+string(REGEX REPLACE "^https://.*/([^/]+gz)" "${LAMMPS_THIRDPARTY_URL}/${libname}-\\1" newurl "${${input}}")
+if ("${newurl}" STREQUAL "${${input}}")
+set(${output} "" PARENT_SCOPE)
+else()
+set(${output} ${newurl} PARENT_SCOPE)
+endif()
+endfunction(GetFallbackURL)
 #################################################################################
 # LAMMPS C++ interface. We only need the header related parts except on windows.
 add_library(lammps INTERFACE)
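GetFallbackURL rewrites a primary download URL into a mirror location under ${LAMMPS_THIRDPARTY_URL}, deriving the prefix from the variable name and keeping the archive file name. A worked example (the MathJax URL is taken from the Documentation.cmake change above; the result is what the regular expression produces):

  # input variable: MATHJAX_URL = https://github.com/mathjax/MathJax/archive/3.1.3.tar.gz
  # GetFallbackURL(MATHJAX_URL MATHJAX_FALLBACK) yields:
  #   https://download.lammps.org/thirdparty/mathjax-3.1.3.tar.gz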
@@ -89,6 +111,7 @@ endif()
 ################################################################################
 # MPI configuration
 if(NOT CMAKE_CROSSCOMPILING)
+set(MPI_CXX_SKIP_MPICXX TRUE)
 find_package(MPI QUIET)
 option(BUILD_MPI "Build MPI version" ${MPI_FOUND})
 else()
@@ -101,16 +124,46 @@ if(BUILD_MPI)
 set(MPI_CXX_SKIP_MPICXX TRUE)
 # We use a non-standard procedure to cross-compile with MPI on Windows
 if((CMAKE_SYSTEM_NAME STREQUAL "Windows") AND CMAKE_CROSSCOMPILING)
+# Download and configure MinGW compatible MPICH development files for Windows
+option(USE_MSMPI "Use Microsoft's MS-MPI SDK instead of MPICH2-1.4.1" OFF)
+if(USE_MSMPI)
+message(STATUS "Downloading and configuring MS-MPI 10.1 for Windows cross-compilation")
+set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/msmpi-win64-devel.tar.gz" CACHE STRING "URL for MS-MPI (win64) tarball")
+set(MPICH2_WIN64_DEVEL_MD5 "86314daf1bffb809f1fcbefb8a547490" CACHE STRING "MD5 checksum of MS-MPI (win64) tarball")
+mark_as_advanced(MPICH2_WIN64_DEVEL_URL)
+mark_as_advanced(MPICH2_WIN64_DEVEL_MD5)
+include(ExternalProject)
+if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
+ExternalProject_Add(mpi4win_build
+URL ${MPICH2_WIN64_DEVEL_URL}
+URL_MD5 ${MPICH2_WIN64_DEVEL_MD5}
+CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND ""
+BUILD_BYPRODUCTS <SOURCE_DIR>/lib/libmsmpi.a)
+else()
+message(FATAL_ERROR "Only x86 64-bit builds are supported with MS-MPI")
+endif()
+ExternalProject_get_property(mpi4win_build SOURCE_DIR)
+file(MAKE_DIRECTORY "${SOURCE_DIR}/include")
+add_library(MPI::MPI_CXX UNKNOWN IMPORTED)
+set_target_properties(MPI::MPI_CXX PROPERTIES
+IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmsmpi.a"
+INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include"
+INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX")
+add_dependencies(MPI::MPI_CXX mpi4win_build)
+# set variables for status reporting at the end of CMake run
+set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include")
+set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX")
+set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmsmpi.a")
+else()
 # Download and configure custom MPICH files for Windows
 message(STATUS "Downloading and configuring MPICH-1.4.1 for Windows")
 set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win64-devel.tar.gz" CACHE STRING "URL for MPICH2 (win64) tarball")
-set(MPICH2_WIN32_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win32-devel.tar.gz" CACHE STRING "URL for MPICH2 (win32) tarball")
 set(MPICH2_WIN64_DEVEL_MD5 "4939fdb59d13182fd5dd65211e469f14" CACHE STRING "MD5 checksum of MPICH2 (win64) tarball")
-set(MPICH2_WIN32_DEVEL_MD5 "a61d153500dce44e21b755ee7257e031" CACHE STRING "MD5 checksum of MPICH2 (win32) tarball")
 mark_as_advanced(MPICH2_WIN64_DEVEL_URL)
-mark_as_advanced(MPICH2_WIN32_DEVEL_URL)
 mark_as_advanced(MPICH2_WIN64_DEVEL_MD5)
-mark_as_advanced(MPICH2_WIN32_DEVEL_MD5)
 include(ExternalProject)
 if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
@ -140,6 +193,7 @@ if(BUILD_MPI)
set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include")
set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX")
set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmpi.a") set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmpi.a")
endif()
else() else()
find_package(MPI REQUIRED) find_package(MPI REQUIRED)
option(LAMMPS_LONGLONG_TO_LONG "Workaround if your system or MPI version does not recognize 'long long' data types" OFF) option(LAMMPS_LONGLONG_TO_LONG "Workaround if your system or MPI version does not recognize 'long long' data types" OFF)
View File
@ -24,9 +24,24 @@ function(validate_option name values)
endif() endif()
endfunction(validate_option) endfunction(validate_option)
# helper function to check for usable omp.h header
function(check_omp_h_include)
find_package(OpenMP COMPONENTS CXX QUIET)
if(OpenMP_CXX_FOUND)
set(CMAKE_REQUIRED_FLAGS ${OpenMP_CXX_FLAGS})
set(CMAKE_REQUIRED_INCLUDES ${OpenMP_CXX_INCLUDE_DIRS})
set(CMAKE_REQUIRED_LINK_OPTIONS ${OpenMP_CXX_FLAGS})
set(CMAKE_REQUIRED_LIBRARIES ${OpenMP_CXX_LIBRARIES})
check_include_file_cxx(omp.h _have_omp_h)
else()
set(_have_omp_h FALSE)
endif()
set(HAVE_OMP_H_INCLUDE ${_have_omp_h} PARENT_SCOPE)
endfunction()
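The helper probes whether omp.h can really be included with the compiler flags reported by find_package(OpenMP) and hands the answer back as HAVE_OMP_H_INCLUDE. A minimal usage sketch; note that check_include_file_cxx() must be made available by the caller, e.g. via CheckIncludeFileCXX:
include(CheckIncludeFileCXX)   # provides check_include_file_cxx() used inside the helper
check_omp_h_include()
if(HAVE_OMP_H_INCLUDE)
  message(STATUS "omp.h is usable with the detected OpenMP flags")
else()
  message(STATUS "omp.h is missing or unusable; OpenMP-specific code paths stay disabled")
endif()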
# helper function for getting the most recently modified file or folder from a glob pattern # helper function for getting the most recently modified file or folder from a glob pattern
function(get_newest_file path variable) function(get_newest_file path variable)
file(GLOB _dirs ${path}) file(GLOB _dirs ${CONFIGURE_DEPENDS} ${path})
set(_besttime 2000-01-01T00:00:00) set(_besttime 2000-01-01T00:00:00)
set(_bestfile "<unknown>") set(_bestfile "<unknown>")
foreach(_dir ${_dirs}) foreach(_dir ${_dirs})
@ -65,8 +80,8 @@ endfunction()
function(check_for_autogen_files source_dir) function(check_for_autogen_files source_dir)
message(STATUS "Running check for auto-generated files from make-based build system") message(STATUS "Running check for auto-generated files from make-based build system")
file(GLOB SRC_AUTOGEN_FILES ${source_dir}/style_*.h) file(GLOB SRC_AUTOGEN_FILES ${CONFIGURE_DEPENDS} ${source_dir}/style_*.h)
file(GLOB SRC_AUTOGEN_PACKAGES ${source_dir}/packages_*.h) file(GLOB SRC_AUTOGEN_PACKAGES ${CONFIGURE_DEPENDS} ${source_dir}/packages_*.h)
list(APPEND SRC_AUTOGEN_FILES ${SRC_AUTOGEN_PACKAGES} ${source_dir}/lmpinstalledpkgs.h ${source_dir}/lmpgitversion.h) list(APPEND SRC_AUTOGEN_FILES ${SRC_AUTOGEN_PACKAGES} ${source_dir}/lmpinstalledpkgs.h ${source_dir}/lmpgitversion.h)
list(APPEND SRC_AUTOGEN_FILES ${SRC_AUTOGEN_PACKAGES} ${source_dir}/mliap_model_python_couple.h ${source_dir}/mliap_model_python_couple.cpp) list(APPEND SRC_AUTOGEN_FILES ${SRC_AUTOGEN_PACKAGES} ${source_dir}/mliap_model_python_couple.h ${source_dir}/mliap_model_python_couple.cpp)
foreach(_SRC ${SRC_AUTOGEN_FILES}) foreach(_SRC ${SRC_AUTOGEN_FILES})
@ -84,8 +99,15 @@ function(check_for_autogen_files source_dir)
endfunction() endfunction()
macro(pkg_depends PKG1 PKG2) macro(pkg_depends PKG1 PKG2)
if(PKG_${PKG1} AND NOT (PKG_${PKG2} OR BUILD_${PKG2})) if(DEFINED BUILD_${PKG2})
message(FATAL_ERROR "The ${PKG1} package needs LAMMPS to be build with the ${PKG2} package") if(PKG_${PKG1} AND NOT BUILD_${PKG2})
message(FATAL_ERROR "The ${PKG1} package needs LAMMPS to be built with -D BUILD_${PKG2}=ON")
endif()
elseif(DEFINED PKG_${PKG2})
if(PKG_${PKG1} AND NOT PKG_${PKG2})
message(WARNING "The ${PKG1} package depends on the ${PKG2} package. Enabling it.")
set(PKG_${PKG2} ON CACHE BOOL "" FORCE)
endif()
endif() endif()
endmacro() endmacro()
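With this change pkg_depends() treats the two kinds of prerequisites differently: a missing BUILD_<PKG> option is a hard error, while a missing PKG_<PKG> package is switched on with a warning. A small sketch with hypothetical packages FOO and BAR (the names and options are illustrative only):
option(PKG_FOO "hypothetical package FOO" ON)
option(PKG_BAR "hypothetical package BAR" OFF)
pkg_depends(FOO BAR)   # warns that FOO depends on BAR and forces PKG_BAR to ON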
@ -103,6 +125,7 @@ endfunction(GenerateBinaryHeader)
# fetch missing potential files # fetch missing potential files
function(FetchPotentials pkgfolder potfolder) function(FetchPotentials pkgfolder potfolder)
if(DOWNLOAD_POTENTIALS)
if(EXISTS "${pkgfolder}/potentials.txt") if(EXISTS "${pkgfolder}/potentials.txt")
file(STRINGS "${pkgfolder}/potentials.txt" linelist REGEX "^[^#].") file(STRINGS "${pkgfolder}/potentials.txt" linelist REGEX "^[^#].")
foreach(line ${linelist}) foreach(line ${linelist})
@ -110,25 +133,40 @@ function(FetchPotentials pkgfolder potfolder)
math(EXPR plusone "${blank}+1") math(EXPR plusone "${blank}+1")
string(SUBSTRING ${line} 0 ${blank} pot) string(SUBSTRING ${line} 0 ${blank} pot)
string(SUBSTRING ${line} ${plusone} -1 sum) string(SUBSTRING ${line} ${plusone} -1 sum)
if(EXISTS ${LAMMPS_POTENTIALS_DIR}/${pot}) if(EXISTS "${LAMMPS_POTENTIALS_DIR}/${pot}")
file(MD5 "${LAMMPS_POTENTIALS_DIR}/${pot}" oldsum) file(MD5 "${LAMMPS_POTENTIALS_DIR}/${pot}" oldsum)
endif() endif()
if(NOT sum STREQUAL oldsum) if(NOT sum STREQUAL oldsum)
message(STATUS "Checking external potential ${pot} from ${LAMMPS_POTENTIALS_URL}") message(STATUS "Downloading external potential ${pot} from ${LAMMPS_POTENTIALS_URL}")
file(DOWNLOAD "${LAMMPS_POTENTIALS_URL}/${pot}.${sum}" "${CMAKE_BINARY_DIR}/${pot}" string(RANDOM LENGTH 10 TMP_EXT)
file(DOWNLOAD "${LAMMPS_POTENTIALS_URL}/${pot}.${sum}" "${CMAKE_BINARY_DIR}/${pot}.${TMP_EXT}"
EXPECTED_HASH MD5=${sum} SHOW_PROGRESS) EXPECTED_HASH MD5=${sum} SHOW_PROGRESS)
file(COPY "${CMAKE_BINARY_DIR}/${pot}" DESTINATION ${LAMMPS_POTENTIALS_DIR}) file(COPY "${CMAKE_BINARY_DIR}/${pot}.${TMP_EXT}" DESTINATION "${LAMMPS_POTENTIALS_DIR}")
file(RENAME "${LAMMPS_POTENTIALS_DIR}/${pot}.${TMP_EXT}" "${LAMMPS_POTENTIALS_DIR}/${pot}")
endif() endif()
endforeach() endforeach()
endif() endif()
endif()
endfunction(FetchPotentials) endfunction(FetchPotentials)
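FetchPotentials() expects a potentials.txt next to the package sources in which every non-comment line holds a file name, a blank, and the MD5 checksum of that file; files whose local checksum differs are downloaded to a temporary name and only renamed into place after the hash check passes. A hedged sketch of the assumed layout and call (the folder name and checksum below are placeholders):
# <pkgfolder>/potentials.txt
#   Cu_example.eam.alloy 0123456789abcdef0123456789abcdef
#   # lines starting with '#' are ignored
FetchPotentials("${LAMMPS_LIB_SOURCE_DIR}/mypkg" "${LAMMPS_POTENTIALS_DIR}")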
# set CMAKE_LINUX_DISTRO and CMAKE_DISTRO_VERSION on Linux # set CMAKE_LINUX_DISTRO and CMAKE_DISTRO_VERSION on Linux
if((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND (EXISTS /etc/os-release)) if((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND (EXISTS /etc/os-release))
file(STRINGS /etc/os-release distro REGEX "^NAME=") file(STRINGS /etc/os-release distro REGEX "^NAME=")
string(REGEX REPLACE "NAME=\"?([^\"]*)\"?" "\\1" distro "${distro}") string(REGEX REPLACE "NAME=\"?([^ ]+).*\"?" "\\1" distro "${distro}")
file(STRINGS /etc/os-release disversion REGEX "^VERSION_ID=") file(STRINGS /etc/os-release disversion REGEX "^VERSION_ID=")
string(REGEX REPLACE "VERSION_ID=\"?([^\"]*)\"?" "\\1" disversion "${disversion}") string(REGEX REPLACE "VERSION_ID=\"?([^\"]*)\"?" "\\1" disversion "${disversion}")
set(CMAKE_LINUX_DISTRO ${distro}) set(CMAKE_LINUX_DISTRO ${distro})
set(CMAKE_DISTRO_VERSION ${disversion}) set(CMAKE_DISTRO_VERSION ${disversion})
endif() endif()
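The tightened NAME= regular expression keeps only the first word of the distribution name, so multi-word names collapse to their leading token. A worked sketch of that regex, assuming an /etc/os-release line NAME="Fedora Linux":
set(distro "NAME=\"Fedora Linux\"")
string(REGEX REPLACE "NAME=\"?([^ ]+).*\"?" "\\1" distro "${distro}")
message(STATUS "${distro}")   # prints: Fedora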
# determine canonical URL for downloading backup copy from download.lammps.org/thirdparty
function(GetFallbackURL input output)
string(REPLACE "_URL" "" _tmp ${input})
string(TOLOWER ${_tmp} libname)
string(REGEX REPLACE "^https://.*/([^/]+gz)" "${LAMMPS_THIRDPARTY_URL}/${libname}-\\1" newurl "${${input}}")
if ("${newurl}" STREQUAL "${${input}}")
set(${output} "" PARENT_SCOPE)
else()
set(${output} ${newurl} PARENT_SCOPE)
endif()
endfunction(GetFallbackURL)
View File
@ -1,5 +1,39 @@
# Download and configure custom MPICH files for Windows # Download and configure MinGW compatible MPICH development files for Windows
message(STATUS "Downloading and configuring MPICH-1.4.1 for Windows") option(USE_MSMPI "Use Microsoft's MS-MPI SDK instead of MPICH2-1.4.1" OFF)
if(USE_MSMPI)
message(STATUS "Downloading and configuring MS-MPI 10.1 for Windows cross-compilation")
set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/msmpi-win64-devel.tar.gz" CACHE STRING "URL for MS-MPI (win64) tarball")
set(MPICH2_WIN64_DEVEL_MD5 "86314daf1bffb809f1fcbefb8a547490" CACHE STRING "MD5 checksum of MS-MPI (win64) tarball")
mark_as_advanced(MPICH2_WIN64_DEVEL_URL)
mark_as_advanced(MPICH2_WIN64_DEVEL_MD5)
include(ExternalProject)
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
ExternalProject_Add(mpi4win_build
URL ${MPICH2_WIN64_DEVEL_URL}
URL_MD5 ${MPICH2_WIN64_DEVEL_MD5}
CONFIGURE_COMMAND "" BUILD_COMMAND "" INSTALL_COMMAND ""
BUILD_BYPRODUCTS <SOURCE_DIR>/lib/libmsmpi.a)
else()
message(FATAL_ERROR "Only x86 64-bit builds are supported with MS-MPI")
endif()
ExternalProject_get_property(mpi4win_build SOURCE_DIR)
file(MAKE_DIRECTORY "${SOURCE_DIR}/include")
add_library(MPI::MPI_CXX UNKNOWN IMPORTED)
set_target_properties(MPI::MPI_CXX PROPERTIES
IMPORTED_LOCATION "${SOURCE_DIR}/lib/libmsmpi.a"
INTERFACE_INCLUDE_DIRECTORIES "${SOURCE_DIR}/include"
INTERFACE_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX")
add_dependencies(MPI::MPI_CXX mpi4win_build)
# set variables for status reporting at the end of CMake run
set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include")
set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX")
set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmsmpi.a")
else()
message(STATUS "Downloading and configuring MPICH2-1.4.1 for Windows cross-compilation")
set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win64-devel.tar.gz" CACHE STRING "URL for MPICH2 (win64) tarball") set(MPICH2_WIN64_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win64-devel.tar.gz" CACHE STRING "URL for MPICH2 (win64) tarball")
set(MPICH2_WIN32_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win32-devel.tar.gz" CACHE STRING "URL for MPICH2 (win32) tarball") set(MPICH2_WIN32_DEVEL_URL "${LAMMPS_THIRDPARTY_URL}/mpich2-win32-devel.tar.gz" CACHE STRING "URL for MPICH2 (win32) tarball")
set(MPICH2_WIN64_DEVEL_MD5 "4939fdb59d13182fd5dd65211e469f14" CACHE STRING "MD5 checksum of MPICH2 (win64) tarball") set(MPICH2_WIN64_DEVEL_MD5 "4939fdb59d13182fd5dd65211e469f14" CACHE STRING "MD5 checksum of MPICH2 (win64) tarball")
@ -37,3 +71,4 @@ add_dependencies(MPI::MPI_CXX mpi4win_build)
set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include") set(MPI_CXX_INCLUDE_PATH "${SOURCE_DIR}/include")
set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX") set(MPI_CXX_COMPILE_DEFINITIONS "MPICH_SKIP_MPICXX")
set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmpi.a") set(MPI_CXX_LIBRARIES "${SOURCE_DIR}/lib/libmpi.a")
endif()
View File
@ -1,20 +1,15 @@
set(COLVARS_SOURCE_DIR ${LAMMPS_LIB_SOURCE_DIR}/colvars) set(COLVARS_SOURCE_DIR ${LAMMPS_LIB_SOURCE_DIR}/colvars)
file(GLOB COLVARS_SOURCES ${COLVARS_SOURCE_DIR}/[^.]*.cpp) file(GLOB COLVARS_SOURCES ${CONFIGURE_DEPENDS} ${COLVARS_SOURCE_DIR}/[^.]*.cpp)
option(COLVARS_DEBUG "Debugging messages for Colvars (quite verbose)" OFF) option(COLVARS_DEBUG "Enable debugging messages for Colvars (quite verbose)" OFF)
# Build Lepton by default option(COLVARS_LEPTON "Use the Lepton library for custom expressions" ON)
option(COLVARS_LEPTON "Build and link the Lepton library" ON)
if(COLVARS_LEPTON) if(COLVARS_LEPTON)
set(LEPTON_DIR ${LAMMPS_LIB_SOURCE_DIR}/colvars/lepton) if(NOT LEPTON_SOURCE_DIR)
file(GLOB LEPTON_SOURCES ${LEPTON_DIR}/src/[^.]*.cpp) include(Packages/LEPTON)
add_library(lepton STATIC ${LEPTON_SOURCES}) endif()
# Change the define below to LEPTON_BUILDING_SHARED_LIBRARY when linking Lepton as a DLL with MSVC
target_compile_definitions(lepton PRIVATE -DLEPTON_BUILDING_STATIC_LIBRARY)
set_target_properties(lepton PROPERTIES OUTPUT_NAME lammps_lepton${LAMMPS_MACHINE})
target_include_directories(lepton PRIVATE ${LEPTON_DIR}/include)
endif() endif()
add_library(colvars STATIC ${COLVARS_SOURCES}) add_library(colvars STATIC ${COLVARS_SOURCES})
@ -30,14 +25,11 @@ target_include_directories(colvars PRIVATE ${LAMMPS_SOURCE_DIR})
target_link_libraries(lammps PRIVATE colvars) target_link_libraries(lammps PRIVATE colvars)
if(COLVARS_DEBUG) if(COLVARS_DEBUG)
# Need to export the macro publicly to also affect the proxy # Need to export the define publicly to be valid in interface code
target_compile_definitions(colvars PUBLIC -DCOLVARS_DEBUG) target_compile_definitions(colvars PUBLIC -DCOLVARS_DEBUG)
endif() endif()
if(COLVARS_LEPTON) if(COLVARS_LEPTON)
target_link_libraries(lammps PRIVATE lepton)
target_compile_definitions(colvars PRIVATE -DLEPTON) target_compile_definitions(colvars PRIVATE -DLEPTON)
# Disable the line below when linking Lepton as a DLL with MSVC target_link_libraries(colvars PUBLIC lepton)
target_compile_definitions(colvars PRIVATE -DLEPTON_USE_STATIC_LIBRARIES)
target_include_directories(colvars PUBLIC ${LEPTON_DIR}/include)
endif() endif()
View File
@ -1,4 +1,9 @@
find_package(ZLIB REQUIRED) find_package(ZLIB)
if(NOT ZLIB_FOUND)
message(WARNING "No Zlib development support found. Disabling COMPRESS package...")
set(PKG_COMPRESS OFF CACHE BOOL "" FORCE)
return()
endif()
target_link_libraries(lammps PRIVATE ZLIB::ZLIB) target_link_libraries(lammps PRIVATE ZLIB::ZLIB)
find_package(PkgConfig QUIET) find_package(PkgConfig QUIET)
View File
@ -26,7 +26,20 @@ elseif(GPU_PREC STREQUAL "SINGLE")
set(GPU_PREC_SETTING "SINGLE_SINGLE") set(GPU_PREC_SETTING "SINGLE_SINGLE")
endif() endif()
file(GLOB GPU_LIB_SOURCES ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cpp) option(GPU_DEBUG "Enable debugging code of the GPU package" OFF)
mark_as_advanced(GPU_DEBUG)
if(PKG_AMOEBA AND FFT_SINGLE)
message(FATAL_ERROR "GPU acceleration of AMOEBA is not (yet) compatible with single precision FFT")
endif()
if (PKG_AMOEBA)
list(APPEND GPU_SOURCES
${GPU_SOURCES_DIR}/amoeba_convolution_gpu.h
${GPU_SOURCES_DIR}/amoeba_convolution_gpu.cpp)
endif()
file(GLOB GPU_LIB_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cpp)
file(MAKE_DIRECTORY ${LAMMPS_LIB_BINARY_DIR}/gpu) file(MAKE_DIRECTORY ${LAMMPS_LIB_BINARY_DIR}/gpu)
if(GPU_API STREQUAL "CUDA") if(GPU_API STREQUAL "CUDA")
@ -47,15 +60,15 @@ if(GPU_API STREQUAL "CUDA")
option(CUDA_MPS_SUPPORT "Enable tweaks to support CUDA Multi-process service (MPS)" OFF) option(CUDA_MPS_SUPPORT "Enable tweaks to support CUDA Multi-process service (MPS)" OFF)
if(CUDA_MPS_SUPPORT) if(CUDA_MPS_SUPPORT)
if(CUDPP_OPT) if(CUDPP_OPT)
message(FATAL_ERROR "Must use -DCUDPP_OPT=OFF with -DGPU_CUDA_MPS_SUPPORT=ON") message(FATAL_ERROR "Must use -DCUDPP_OPT=OFF with -DCUDA_MPS_SUPPORT=ON")
endif() endif()
set(GPU_CUDA_MPS_FLAGS "-DCUDA_PROXY") set(GPU_CUDA_MPS_FLAGS "-DCUDA_MPS_SUPPORT")
endif() endif()
set(GPU_ARCH "sm_50" CACHE STRING "LAMMPS GPU CUDA SM primary architecture (e.g. sm_60)") set(GPU_ARCH "sm_50" CACHE STRING "LAMMPS GPU CUDA SM primary architecture (e.g. sm_60)")
# ensure that no *cubin.h files exist from a compile in the lib/gpu folder # ensure that no *cubin.h files exist from a compile in the lib/gpu folder
file(GLOB GPU_LIB_OLD_CUBIN_HEADERS ${LAMMPS_LIB_SOURCE_DIR}/gpu/*_cubin.h) file(GLOB GPU_LIB_OLD_CUBIN_HEADERS ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/*_cubin.h)
if(GPU_LIB_OLD_CUBIN_HEADERS) if(GPU_LIB_OLD_CUBIN_HEADERS)
message(FATAL_ERROR "########################################################################\n" message(FATAL_ERROR "########################################################################\n"
"Found file(s) generated by the make-based build system in lib/gpu\n" "Found file(s) generated by the make-based build system in lib/gpu\n"
@ -65,15 +78,15 @@ if(GPU_API STREQUAL "CUDA")
"########################################################################") "########################################################################")
endif() endif()
file(GLOB GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cu ${CMAKE_CURRENT_SOURCE_DIR}/gpu/[^.]*.cu) file(GLOB GPU_LIB_CU ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cu ${CMAKE_CURRENT_SOURCE_DIR}/gpu/[^.]*.cu)
list(REMOVE_ITEM GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_pppm.cu) list(REMOVE_ITEM GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_pppm.cu)
cuda_include_directories(${LAMMPS_LIB_SOURCE_DIR}/gpu ${LAMMPS_LIB_BINARY_DIR}/gpu) cuda_include_directories(${LAMMPS_LIB_SOURCE_DIR}/gpu ${LAMMPS_LIB_BINARY_DIR}/gpu)
if(CUDPP_OPT) if(CUDPP_OPT)
cuda_include_directories(${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini) cuda_include_directories(${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini)
file(GLOB GPU_LIB_CUDPP_SOURCES ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini/[^.]*.cpp) file(GLOB GPU_LIB_CUDPP_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini/[^.]*.cpp)
file(GLOB GPU_LIB_CUDPP_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini/[^.]*.cu) file(GLOB GPU_LIB_CUDPP_CU ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini/[^.]*.cu)
endif() endif()
# build arch/gencode commands for nvcc based on CUDA toolkit version and use choice # build arch/gencode commands for nvcc based on CUDA toolkit version and use choice
@ -82,11 +95,14 @@ if(GPU_API STREQUAL "CUDA")
# apply the following to build "fat" CUDA binaries only for known CUDA toolkits since version 8.0 # apply the following to build "fat" CUDA binaries only for known CUDA toolkits since version 8.0
# only the Kepler architecture and beyond is supported # only the Kepler architecture and beyond is supported
# comparison chart according to: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
if(CUDA_VERSION VERSION_LESS 8.0) if(CUDA_VERSION VERSION_LESS 8.0)
message(FATAL_ERROR "CUDA Toolkit version 8.0 or later is required") message(FATAL_ERROR "CUDA Toolkit version 8.0 or later is required")
elseif(CUDA_VERSION VERSION_GREATER_EQUAL "12.0") elseif(CUDA_VERSION VERSION_GREATER_EQUAL "13.0")
message(WARNING "Untested CUDA Toolkit version ${CUDA_VERSION}. Use at your own risk") message(WARNING "Untested CUDA Toolkit version ${CUDA_VERSION}. Use at your own risk")
set(GPU_CUDA_GENCODE "-arch=all") set(GPU_CUDA_GENCODE "-arch=all")
elseif(CUDA_VERSION VERSION_GREATER_EQUAL "12.0")
set(GPU_CUDA_GENCODE "-arch=all")
else() else()
# Kepler (GPU Arch 3.0) is supported by CUDA 5 to CUDA 10.2 # Kepler (GPU Arch 3.0) is supported by CUDA 5 to CUDA 10.2
if((CUDA_VERSION VERSION_GREATER_EQUAL "5.0") AND (CUDA_VERSION VERSION_LESS "11.0")) if((CUDA_VERSION VERSION_GREATER_EQUAL "5.0") AND (CUDA_VERSION VERSION_LESS "11.0"))
@ -120,14 +136,14 @@ if(GPU_API STREQUAL "CUDA")
if(CUDA_VERSION VERSION_GREATER_EQUAL "11.1") if(CUDA_VERSION VERSION_GREATER_EQUAL "11.1")
string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_86,code=[sm_86,compute_86]") string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_86,code=[sm_86,compute_86]")
endif() endif()
# Hopper (GPU Arch 9.0) is supported by CUDA 12.0? and later # Lovelace (GPU Arch 8.9) is supported by CUDA 11.8 and later
if(CUDA_VERSION VERSION_GREATER_EQUAL "11.8")
string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_89,code=[sm_89,compute_89]")
endif()
# Hopper (GPU Arch 9.0) is supported by CUDA 12.0 and later
if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0") if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0")
string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]") string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]")
endif() endif()
# # Lovelace (GPU Arch 9.x) is supported by CUDA 12.0? and later
#if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0")
# string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_9x,code=[sm_9x,compute_9x]")
#endif()
endif() endif()
cuda_compile_fatbin(GPU_GEN_OBJS ${GPU_LIB_CU} OPTIONS ${CUDA_REQUEST_PIC} cuda_compile_fatbin(GPU_GEN_OBJS ${GPU_LIB_CU} OPTIONS ${CUDA_REQUEST_PIC}
@ -150,14 +166,17 @@ if(GPU_API STREQUAL "CUDA")
add_library(gpu STATIC ${GPU_LIB_SOURCES} ${GPU_LIB_CUDPP_SOURCES} ${GPU_OBJS}) add_library(gpu STATIC ${GPU_LIB_SOURCES} ${GPU_LIB_CUDPP_SOURCES} ${GPU_OBJS})
target_link_libraries(gpu PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY}) target_link_libraries(gpu PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY})
target_include_directories(gpu PRIVATE ${LAMMPS_LIB_BINARY_DIR}/gpu ${CUDA_INCLUDE_DIRS}) target_include_directories(gpu PRIVATE ${LAMMPS_LIB_BINARY_DIR}/gpu ${CUDA_INCLUDE_DIRS})
target_compile_definitions(gpu PRIVATE -DUSE_CUDA -D_${GPU_PREC_SETTING} -DMPI_GERYON -DUCL_NO_EXIT ${GPU_CUDA_MPS_FLAGS}) target_compile_definitions(gpu PRIVATE -DUSE_CUDA -D_${GPU_PREC_SETTING} ${GPU_CUDA_MPS_FLAGS})
if(GPU_DEBUG)
target_compile_definitions(gpu PRIVATE -DUCL_DEBUG -DGERYON_KERNEL_DUMP)
else()
target_compile_definitions(gpu PRIVATE -DMPI_GERYON -DUCL_NO_EXIT)
endif()
if(CUDPP_OPT) if(CUDPP_OPT)
target_include_directories(gpu PRIVATE ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini) target_include_directories(gpu PRIVATE ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini)
target_compile_definitions(gpu PRIVATE -DUSE_CUDPP) target_compile_definitions(gpu PRIVATE -DUSE_CUDPP)
endif() endif()
target_link_libraries(lammps PRIVATE gpu)
add_executable(nvc_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp) add_executable(nvc_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp)
target_compile_definitions(nvc_get_devices PRIVATE -DUCL_CUDADR) target_compile_definitions(nvc_get_devices PRIVATE -DUCL_CUDADR)
target_link_libraries(nvc_get_devices PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY}) target_link_libraries(nvc_get_devices PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY})
@ -182,7 +201,7 @@ elseif(GPU_API STREQUAL "OPENCL")
include(OpenCLUtils) include(OpenCLUtils)
set(OCL_COMMON_HEADERS ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_preprocessor.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_aux_fun1.h) set(OCL_COMMON_HEADERS ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_preprocessor.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_aux_fun1.h)
file(GLOB GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cu) file(GLOB GPU_LIB_CU ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cu)
list(REMOVE_ITEM GPU_LIB_CU list(REMOVE_ITEM GPU_LIB_CU
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_gayberne.cu ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_gayberne.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_gayberne_lj.cu ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_gayberne_lj.cu
@ -191,6 +210,7 @@ elseif(GPU_API STREQUAL "OPENCL")
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff.cu ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl.cu ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod.cu ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_hippo.cu
) )
foreach(GPU_KERNEL ${GPU_LIB_CU}) foreach(GPU_KERNEL ${GPU_LIB_CU})
@ -207,6 +227,7 @@ elseif(GPU_API STREQUAL "OPENCL")
GenerateOpenCLHeader(tersoff ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff.cu) GenerateOpenCLHeader(tersoff ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff.cu)
GenerateOpenCLHeader(tersoff_zbl ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_zbl_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl.cu) GenerateOpenCLHeader(tersoff_zbl ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_zbl_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl.cu)
GenerateOpenCLHeader(tersoff_mod ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_mod_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod.cu) GenerateOpenCLHeader(tersoff_mod ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_mod_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod.cu)
GenerateOpenCLHeader(hippo ${CMAKE_CURRENT_BINARY_DIR}/gpu/hippo_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_hippo_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_hippo.cu)
list(APPEND GPU_LIB_SOURCES list(APPEND GPU_LIB_SOURCES
${CMAKE_CURRENT_BINARY_DIR}/gpu/gayberne_cl.h ${CMAKE_CURRENT_BINARY_DIR}/gpu/gayberne_cl.h
@ -216,36 +237,26 @@ elseif(GPU_API STREQUAL "OPENCL")
${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_cl.h ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_cl.h
${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_zbl_cl.h ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_zbl_cl.h
${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_mod_cl.h ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_mod_cl.h
${CMAKE_CURRENT_BINARY_DIR}/gpu/hippo_cl.h
) )
add_library(gpu STATIC ${GPU_LIB_SOURCES}) add_library(gpu STATIC ${GPU_LIB_SOURCES})
target_link_libraries(gpu PRIVATE OpenCL::OpenCL) target_link_libraries(gpu PRIVATE OpenCL::OpenCL)
target_include_directories(gpu PRIVATE ${CMAKE_CURRENT_BINARY_DIR}/gpu) target_include_directories(gpu PRIVATE ${CMAKE_CURRENT_BINARY_DIR}/gpu)
target_compile_definitions(gpu PRIVATE -D_${GPU_PREC_SETTING} -DMPI_GERYON -DGERYON_NUMA_FISSION -DUCL_NO_EXIT) target_compile_definitions(gpu PRIVATE -DUSE_OPENCL -D_${GPU_PREC_SETTING})
target_compile_definitions(gpu PRIVATE -DUSE_OPENCL) if(GPU_DEBUG)
target_compile_definitions(gpu PRIVATE -DUCL_DEBUG -DGERYON_KERNEL_DUMP)
target_link_libraries(lammps PRIVATE gpu) else()
target_compile_definitions(gpu PRIVATE -DMPI_GERYON -DGERYON_NUMA_FISSION -DUCL_NO_EXIT)
endif()
add_executable(ocl_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp) add_executable(ocl_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp)
target_compile_definitions(ocl_get_devices PRIVATE -DUCL_OPENCL) target_compile_definitions(ocl_get_devices PRIVATE -DUCL_OPENCL)
target_link_libraries(ocl_get_devices PRIVATE OpenCL::OpenCL) target_link_libraries(ocl_get_devices PRIVATE OpenCL::OpenCL)
add_dependencies(ocl_get_devices OpenCL::OpenCL) add_dependencies(ocl_get_devices OpenCL::OpenCL)
elseif(GPU_API STREQUAL "HIP") elseif(GPU_API STREQUAL "HIP")
if(NOT DEFINED HIP_PATH) include(DetectHIPInstallation)
if(NOT DEFINED ENV{HIP_PATH})
set(HIP_PATH "/opt/rocm/hip" CACHE PATH "Path to HIP installation")
else()
set(HIP_PATH $ENV{HIP_PATH} CACHE PATH "Path to HIP installation")
endif()
endif()
if(NOT DEFINED ROCM_PATH)
if(NOT DEFINED ENV{ROCM_PATH})
set(ROCM_PATH "/opt/rocm" CACHE PATH "Path to ROCm installation")
else()
set(ROCM_PATH $ENV{ROCM_PATH} CACHE PATH "Path to ROCm installation")
endif()
endif()
list(APPEND CMAKE_PREFIX_PATH ${HIP_PATH} ${ROCM_PATH})
find_package(hip REQUIRED) find_package(hip REQUIRED)
option(HIP_USE_DEVICE_SORT "Use GPU sorting" ON) option(HIP_USE_DEVICE_SORT "Use GPU sorting" ON)
@ -259,8 +270,10 @@ elseif(GPU_API STREQUAL "HIP")
set(ENV{HIP_PLATFORM} ${HIP_PLATFORM}) set(ENV{HIP_PLATFORM} ${HIP_PLATFORM})
if(HIP_PLATFORM STREQUAL "hcc" OR HIP_PLATFORM STREQUAL "amd") if(HIP_PLATFORM STREQUAL "amd")
set(HIP_ARCH "gfx906" CACHE STRING "HIP target architecture") set(HIP_ARCH "gfx906" CACHE STRING "HIP target architecture")
elseif(HIP_PLATFORM STREQUAL "spirv")
set(HIP_ARCH "spirv" CACHE STRING "HIP target architecture")
elseif(HIP_PLATFORM STREQUAL "nvcc") elseif(HIP_PLATFORM STREQUAL "nvcc")
find_package(CUDA REQUIRED) find_package(CUDA REQUIRED)
set(HIP_ARCH "sm_50" CACHE STRING "HIP primary CUDA architecture (e.g. sm_60)") set(HIP_ARCH "sm_50" CACHE STRING "HIP primary CUDA architecture (e.g. sm_60)")
@ -273,6 +286,7 @@ elseif(GPU_API STREQUAL "HIP")
else() else()
# build arch/gencode commands for nvcc based on CUDA toolkit version and use choice # build arch/gencode commands for nvcc based on CUDA toolkit version and use choice
# --arch translates directly instead of JIT, so this should be for the preferred or most common architecture # --arch translates directly instead of JIT, so this should be for the preferred or most common architecture
# comparison chart according to: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
set(HIP_CUDA_GENCODE "-arch=${HIP_ARCH}") set(HIP_CUDA_GENCODE "-arch=${HIP_ARCH}")
# Kepler (GPU Arch 3.0) is supported by CUDA 5 to CUDA 10.2 # Kepler (GPU Arch 3.0) is supported by CUDA 5 to CUDA 10.2
if((CUDA_VERSION VERSION_GREATER_EQUAL "5.0") AND (CUDA_VERSION VERSION_LESS "11.0")) if((CUDA_VERSION VERSION_GREATER_EQUAL "5.0") AND (CUDA_VERSION VERSION_LESS "11.0"))
@ -302,14 +316,22 @@ elseif(GPU_API STREQUAL "HIP")
if(CUDA_VERSION VERSION_GREATER_EQUAL "11.0") if(CUDA_VERSION VERSION_GREATER_EQUAL "11.0")
string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_80,code=[sm_80,compute_80]") string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_80,code=[sm_80,compute_80]")
endif() endif()
# Hopper (GPU Arch 9.0) is supported by CUDA 12.0? and later # Ampere (GPU Arch 8.6) is supported by CUDA 11.1 and later
if(CUDA_VERSION VERSION_GREATER_EQUAL "11.1")
string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_86,code=[sm_86,compute_86]")
endif()
# Lovelace (GPU Arch 8.9) is supported by CUDA 11.8 and later
if(CUDA_VERSION VERSION_GREATER_EQUAL "11.8")
string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_89,code=[sm_89,compute_89]")
endif()
# Hopper (GPU Arch 9.0) is supported by CUDA 12.0 and later
if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0") if(CUDA_VERSION VERSION_GREATER_EQUAL "12.0")
string(APPEND GPU_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]") string(APPEND HIP_CUDA_GENCODE " -gencode arch=compute_90,code=[sm_90,compute_90]")
endif() endif()
endif() endif()
endif() endif()
file(GLOB GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cu ${CMAKE_CURRENT_SOURCE_DIR}/gpu/[^.]*.cu) file(GLOB GPU_LIB_CU ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cu ${CMAKE_CURRENT_SOURCE_DIR}/gpu/[^.]*.cu)
list(REMOVE_ITEM GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_pppm.cu) list(REMOVE_ITEM GPU_LIB_CU ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_pppm.cu)
set(GPU_LIB_CU_HIP "") set(GPU_LIB_CU_HIP "")
@ -321,7 +343,7 @@ elseif(GPU_API STREQUAL "HIP")
set(CUBIN_FILE "${LAMMPS_LIB_BINARY_DIR}/gpu/${CU_NAME}.cubin") set(CUBIN_FILE "${LAMMPS_LIB_BINARY_DIR}/gpu/${CU_NAME}.cubin")
set(CUBIN_H_FILE "${LAMMPS_LIB_BINARY_DIR}/gpu/${CU_NAME}_cubin.h") set(CUBIN_H_FILE "${LAMMPS_LIB_BINARY_DIR}/gpu/${CU_NAME}_cubin.h")
if(HIP_PLATFORM STREQUAL "hcc" OR HIP_PLATFORM STREQUAL "amd") if(HIP_PLATFORM STREQUAL "amd")
configure_file(${CU_FILE} ${CU_CPP_FILE} COPYONLY) configure_file(${CU_FILE} ${CU_CPP_FILE} COPYONLY)
if(HIP_COMPILER STREQUAL "clang") if(HIP_COMPILER STREQUAL "clang")
@ -340,6 +362,13 @@ elseif(GPU_API STREQUAL "HIP")
VERBATIM COMMAND ${HIP_HIPCC_EXECUTABLE} --fatbin --use_fast_math -DUSE_HIP -D_${GPU_PREC_SETTING} -DLAMMPS_${LAMMPS_SIZES} ${HIP_CUDA_GENCODE} -I${LAMMPS_LIB_SOURCE_DIR}/gpu -o ${CUBIN_FILE} ${CU_FILE} VERBATIM COMMAND ${HIP_HIPCC_EXECUTABLE} --fatbin --use_fast_math -DUSE_HIP -D_${GPU_PREC_SETTING} -DLAMMPS_${LAMMPS_SIZES} ${HIP_CUDA_GENCODE} -I${LAMMPS_LIB_SOURCE_DIR}/gpu -o ${CUBIN_FILE} ${CU_FILE}
DEPENDS ${CU_FILE} DEPENDS ${CU_FILE}
COMMENT "Generating ${CU_NAME}.cubin") COMMENT "Generating ${CU_NAME}.cubin")
elseif(HIP_PLATFORM STREQUAL "spirv")
configure_file(${CU_FILE} ${CU_CPP_FILE} COPYONLY)
add_custom_command(OUTPUT ${CUBIN_FILE}
VERBATIM COMMAND ${HIP_HIPCC_EXECUTABLE} -c -O3 -DUSE_HIP -D_${GPU_PREC_SETTING} -DLAMMPS_${LAMMPS_SIZES} -I${LAMMPS_LIB_SOURCE_DIR}/gpu -o ${CUBIN_FILE} ${CU_CPP_FILE}
DEPENDS ${CU_CPP_FILE}
COMMENT "Gerating ${CU_NAME}.cubin")
endif() endif()
add_custom_command(OUTPUT ${CUBIN_H_FILE} add_custom_command(OUTPUT ${CUBIN_H_FILE}
@ -354,8 +383,12 @@ elseif(GPU_API STREQUAL "HIP")
add_library(gpu STATIC ${GPU_LIB_SOURCES}) add_library(gpu STATIC ${GPU_LIB_SOURCES})
target_include_directories(gpu PRIVATE ${LAMMPS_LIB_BINARY_DIR}/gpu) target_include_directories(gpu PRIVATE ${LAMMPS_LIB_BINARY_DIR}/gpu)
target_compile_definitions(gpu PRIVATE -D_${GPU_PREC_SETTING} -DMPI_GERYON -DUCL_NO_EXIT) target_compile_definitions(gpu PRIVATE -DUSE_HIP -D_${GPU_PREC_SETTING})
target_compile_definitions(gpu PRIVATE -DUSE_HIP) if(GPU_DEBUG)
target_compile_definitions(gpu PRIVATE -DUCL_DEBUG -DGERYON_KERNEL_DUMP)
else()
target_compile_definitions(gpu PRIVATE -DMPI_GERYON -DUCL_NO_EXIT)
endif()
target_link_libraries(gpu PRIVATE hip::host) target_link_libraries(gpu PRIVATE hip::host)
if(HIP_USE_DEVICE_SORT) if(HIP_USE_DEVICE_SORT)
@ -364,7 +397,8 @@ elseif(GPU_API STREQUAL "HIP")
set_property(TARGET gpu PROPERTY CXX_STANDARD 14) set_property(TARGET gpu PROPERTY CXX_STANDARD 14)
endif() endif()
# add hipCUB # add hipCUB
target_include_directories(gpu PRIVATE ${HIP_ROOT_DIR}/../include) find_package(hipcub REQUIRED)
target_link_libraries(gpu PRIVATE hip::hipcub)
target_compile_definitions(gpu PRIVATE -DUSE_HIP_DEVICE_SORT) target_compile_definitions(gpu PRIVATE -DUSE_HIP_DEVICE_SORT)
if(HIP_PLATFORM STREQUAL "nvcc") if(HIP_PLATFORM STREQUAL "nvcc")
@ -380,15 +414,17 @@ elseif(GPU_API STREQUAL "HIP")
if(DOWNLOAD_CUB) if(DOWNLOAD_CUB)
message(STATUS "CUB download requested") message(STATUS "CUB download requested")
set(CUB_URL "https://github.com/NVlabs/cub/archive/1.12.0.tar.gz" CACHE STRING "URL for CUB tarball") # TODO: test update to current version 1.17.2
set(CUB_URL "https://github.com/nvidia/cub/archive/1.12.0.tar.gz" CACHE STRING "URL for CUB tarball")
set(CUB_MD5 "1cf595beacafff104700921bac8519f3" CACHE STRING "MD5 checksum of CUB tarball") set(CUB_MD5 "1cf595beacafff104700921bac8519f3" CACHE STRING "MD5 checksum of CUB tarball")
mark_as_advanced(CUB_URL) mark_as_advanced(CUB_URL)
mark_as_advanced(CUB_MD5) mark_as_advanced(CUB_MD5)
GetFallbackURL(CUB_URL CUB_FALLBACK)
include(ExternalProject) include(ExternalProject)
ExternalProject_Add(CUB ExternalProject_Add(CUB
URL ${CUB_URL} URL ${CUB_URL} ${CUB_FALLBACK}
URL_MD5 ${CUB_MD5} URL_MD5 ${CUB_MD5}
PREFIX "${CMAKE_CURRENT_BINARY_DIR}" PREFIX "${CMAKE_CURRENT_BINARY_DIR}"
CONFIGURE_COMMAND "" CONFIGURE_COMMAND ""
@ -411,34 +447,24 @@ elseif(GPU_API STREQUAL "HIP")
add_executable(hip_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp) add_executable(hip_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp)
target_compile_definitions(hip_get_devices PRIVATE -DUCL_HIP) target_compile_definitions(hip_get_devices PRIVATE -DUCL_HIP)
target_link_libraries(hip_get_devices hip::host) target_link_libraries(hip_get_devices PRIVATE hip::host)
if(HIP_PLATFORM STREQUAL "nvcc") if(HIP_PLATFORM STREQUAL "nvcc")
target_compile_definitions(gpu PRIVATE -D__HIP_PLATFORM_NVCC__) target_compile_definitions(gpu PRIVATE -D__HIP_PLATFORM_NVCC__)
target_include_directories(gpu PRIVATE ${HIP_ROOT_DIR}/../include)
target_include_directories(gpu PRIVATE ${CUDA_INCLUDE_DIRS}) target_include_directories(gpu PRIVATE ${CUDA_INCLUDE_DIRS})
target_link_libraries(gpu PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY}) target_link_libraries(gpu PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY})
target_compile_definitions(hip_get_devices PRIVATE -D__HIP_PLATFORM_NVCC__) target_compile_definitions(hip_get_devices PRIVATE -D__HIP_PLATFORM_NVCC__)
target_include_directories(hip_get_devices PRIVATE ${HIP_ROOT_DIR}/include)
target_include_directories(hip_get_devices PRIVATE ${CUDA_INCLUDE_DIRS}) target_include_directories(hip_get_devices PRIVATE ${CUDA_INCLUDE_DIRS})
target_link_libraries(hip_get_devices PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY}) target_link_libraries(hip_get_devices PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY})
elseif(HIP_PLATFORM STREQUAL "hcc") endif()
target_compile_definitions(gpu PRIVATE -D__HIP_PLATFORM_HCC__)
target_include_directories(gpu PRIVATE ${HIP_ROOT_DIR}/../include)
target_compile_definitions(hip_get_devices PRIVATE -D__HIP_PLATFORM_HCC__)
target_include_directories(hip_get_devices PRIVATE ${HIP_ROOT_DIR}/../include)
elseif(HIP_PLATFORM STREQUAL "amd")
target_compile_definitions(gpu PRIVATE -D__HIP_PLATFORM_AMD__)
target_include_directories(gpu PRIVATE ${HIP_ROOT_DIR}/../include)
target_compile_definitions(hip_get_devices PRIVATE -D__HIP_PLATFORM_AMD__)
target_include_directories(hip_get_devices PRIVATE ${HIP_ROOT_DIR}/../include)
endif() endif()
if(BUILD_OMP)
find_package(OpenMP COMPONENTS CXX REQUIRED)
target_link_libraries(gpu PRIVATE OpenMP::OpenMP_CXX)
endif()
target_link_libraries(lammps PRIVATE gpu) target_link_libraries(lammps PRIVATE gpu)
endif()
set_property(GLOBAL PROPERTY "GPU_SOURCES" "${GPU_SOURCES}") set_property(GLOBAL PROPERTY "GPU_SOURCES" "${GPU_SOURCES}")
# detect styles which have a GPU version # detect styles which have a GPU version
View File
@ -112,9 +112,5 @@ if(PKG_KSPACE)
RegisterIntegrateStyle(${INTEL_SOURCES_DIR}/verlet_lrt_intel.h) RegisterIntegrateStyle(${INTEL_SOURCES_DIR}/verlet_lrt_intel.h)
endif() endif()
if(PKG_ELECTRODE)
list(APPEND INTEL_SOURCES ${INTEL_SOURCES_DIR}/electrode_accel_intel.cpp)
endif()
target_sources(lammps PRIVATE ${INTEL_SOURCES}) target_sources(lammps PRIVATE ${INTEL_SOURCES})
target_include_directories(lammps PRIVATE ${INTEL_SOURCES_DIR}) target_include_directories(lammps PRIVATE ${INTEL_SOURCES_DIR})
View File
@ -19,7 +19,7 @@ if(CURL_FOUND)
target_compile_definitions(lammps PRIVATE -DLMP_NO_SSL_CHECK) target_compile_definitions(lammps PRIVATE -DLMP_NO_SSL_CHECK)
endif() endif()
endif() endif()
set(KIM_EXTRA_UNITTESTS OFF CACHE STRING "Set extra unit tests verbose mode on/off. If on, extra tests are included.") option(KIM_EXTRA_UNITTESTS "Enable extra unit tests for the KIM package." OFF)
mark_as_advanced(KIM_EXTRA_UNITTESTS) mark_as_advanced(KIM_EXTRA_UNITTESTS)
find_package(PkgConfig QUIET) find_package(PkgConfig QUIET)
set(DOWNLOAD_KIM_DEFAULT ON) set(DOWNLOAD_KIM_DEFAULT ON)
View File
@ -3,6 +3,7 @@
if(CMAKE_CXX_STANDARD LESS 14) if(CMAKE_CXX_STANDARD LESS 14)
message(FATAL_ERROR "The KOKKOS package requires the C++ standard to be set to at least C++14") message(FATAL_ERROR "The KOKKOS package requires the C++ standard to be set to at least C++14")
endif() endif()
######################################################################## ########################################################################
# consistency checks and Kokkos options/settings required by LAMMPS # consistency checks and Kokkos options/settings required by LAMMPS
if(Kokkos_ENABLE_CUDA) if(Kokkos_ENABLE_CUDA)
@ -15,8 +16,9 @@ if(Kokkos_ENABLE_OPENMP)
if(NOT BUILD_OMP) if(NOT BUILD_OMP)
message(FATAL_ERROR "Must enable BUILD_OMP with Kokkos_ENABLE_OPENMP") message(FATAL_ERROR "Must enable BUILD_OMP with Kokkos_ENABLE_OPENMP")
else() else()
if(LAMMPS_OMP_COMPAT_LEVEL LESS 4) # NVHPC/(AMD)Clang does not seem to provide a detectable OpenMP version, but is far beyond version 3.1
message(FATAL_ERROR "Compiler must support OpenMP 4.0 or later with Kokkos_ENABLE_OPENMP") if((OpenMP_CXX_VERSION VERSION_LESS 3.1) AND NOT ((CMAKE_CXX_COMPILER_ID STREQUAL "NVHPC") OR (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")))
message(FATAL_ERROR "Compiler must support OpenMP 3.1 or later with Kokkos_ENABLE_OPENMP")
endif() endif()
endif() endif()
endif() endif()
@ -47,12 +49,14 @@ if(DOWNLOAD_KOKKOS)
list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_CXX_EXTENSIONS=${CMAKE_CXX_EXTENSIONS}") list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_CXX_EXTENSIONS=${CMAKE_CXX_EXTENSIONS}")
list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}") list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}")
include(ExternalProject) include(ExternalProject)
set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/3.6.00.tar.gz" CACHE STRING "URL for KOKKOS tarball") set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/3.7.02.tar.gz" CACHE STRING "URL for KOKKOS tarball")
set(KOKKOS_MD5 "b5c44ea961031795f434002cd7b31c20" CACHE STRING "MD5 checksum of KOKKOS tarball") set(KOKKOS_MD5 "34d7860d548c06a4040236d959c9f99a" CACHE STRING "MD5 checksum of KOKKOS tarball")
mark_as_advanced(KOKKOS_URL) mark_as_advanced(KOKKOS_URL)
mark_as_advanced(KOKKOS_MD5) mark_as_advanced(KOKKOS_MD5)
GetFallbackURL(KOKKOS_URL KOKKOS_FALLBACK)
ExternalProject_Add(kokkos_build ExternalProject_Add(kokkos_build
URL ${KOKKOS_URL} URL ${KOKKOS_URL} ${KOKKOS_FALLBACK}
URL_MD5 ${KOKKOS_MD5} URL_MD5 ${KOKKOS_MD5}
CMAKE_ARGS ${KOKKOS_LIB_BUILD_ARGS} CMAKE_ARGS ${KOKKOS_LIB_BUILD_ARGS}
BUILD_BYPRODUCTS <INSTALL_DIR>/lib/libkokkoscore.a <INSTALL_DIR>/lib/libkokkoscontainers.a BUILD_BYPRODUCTS <INSTALL_DIR>/lib/libkokkoscore.a <INSTALL_DIR>/lib/libkokkoscontainers.a
@ -68,13 +72,11 @@ if(DOWNLOAD_KOKKOS)
set_target_properties(LAMMPS::KOKKOSCONTAINERS PROPERTIES set_target_properties(LAMMPS::KOKKOSCONTAINERS PROPERTIES
IMPORTED_LOCATION "${INSTALL_DIR}/lib/libkokkoscontainers.a") IMPORTED_LOCATION "${INSTALL_DIR}/lib/libkokkoscontainers.a")
target_link_libraries(lammps PRIVATE LAMMPS::KOKKOSCORE LAMMPS::KOKKOSCONTAINERS) target_link_libraries(lammps PRIVATE LAMMPS::KOKKOSCORE LAMMPS::KOKKOSCONTAINERS)
target_link_libraries(lmp PRIVATE LAMMPS::KOKKOSCORE LAMMPS::KOKKOSCONTAINERS)
add_dependencies(LAMMPS::KOKKOSCORE kokkos_build) add_dependencies(LAMMPS::KOKKOSCORE kokkos_build)
add_dependencies(LAMMPS::KOKKOSCONTAINERS kokkos_build) add_dependencies(LAMMPS::KOKKOSCONTAINERS kokkos_build)
elseif(EXTERNAL_KOKKOS) elseif(EXTERNAL_KOKKOS)
find_package(Kokkos 3.6.00 REQUIRED CONFIG) find_package(Kokkos 3.7.02 REQUIRED CONFIG)
target_link_libraries(lammps PRIVATE Kokkos::kokkos) target_link_libraries(lammps PRIVATE Kokkos::kokkos)
target_link_libraries(lmp PRIVATE Kokkos::kokkos)
else() else()
set(LAMMPS_LIB_KOKKOS_SRC_DIR ${LAMMPS_LIB_SOURCE_DIR}/kokkos) set(LAMMPS_LIB_KOKKOS_SRC_DIR ${LAMMPS_LIB_SOURCE_DIR}/kokkos)
set(LAMMPS_LIB_KOKKOS_BIN_DIR ${LAMMPS_LIB_BINARY_DIR}/kokkos) set(LAMMPS_LIB_KOKKOS_BIN_DIR ${LAMMPS_LIB_BINARY_DIR}/kokkos)
@ -88,14 +90,12 @@ else()
endif() endif()
add_subdirectory(${LAMMPS_LIB_KOKKOS_SRC_DIR} ${LAMMPS_LIB_KOKKOS_BIN_DIR}) add_subdirectory(${LAMMPS_LIB_KOKKOS_SRC_DIR} ${LAMMPS_LIB_KOKKOS_BIN_DIR})
set(Kokkos_INCLUDE_DIRS ${LAMMPS_LIB_KOKKOS_SRC_DIR}/core/src set(Kokkos_INCLUDE_DIRS ${LAMMPS_LIB_KOKKOS_SRC_DIR}/core/src
${LAMMPS_LIB_KOKKOS_SRC_DIR}/containers/src ${LAMMPS_LIB_KOKKOS_SRC_DIR}/containers/src
${LAMMPS_LIB_KOKKOS_SRC_DIR}/algorithms/src ${LAMMPS_LIB_KOKKOS_SRC_DIR}/algorithms/src
${LAMMPS_LIB_KOKKOS_BIN_DIR}) ${LAMMPS_LIB_KOKKOS_BIN_DIR})
target_include_directories(lammps PRIVATE ${Kokkos_INCLUDE_DIRS}) target_include_directories(lammps PRIVATE ${Kokkos_INCLUDE_DIRS})
target_link_libraries(lammps PRIVATE kokkos) target_link_libraries(lammps PRIVATE kokkos)
target_link_libraries(lmp PRIVATE kokkos)
if(BUILD_SHARED_LIBS_WAS_ON) if(BUILD_SHARED_LIBS_WAS_ON)
set(BUILD_SHARED_LIBS ON) set(BUILD_SHARED_LIBS ON)
endif() endif()
@ -121,9 +121,14 @@ set(KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/domain_kokkos.cpp ${KOKKOS_PKG_SOURCES_DIR}/domain_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/modify_kokkos.cpp) ${KOKKOS_PKG_SOURCES_DIR}/modify_kokkos.cpp)
# fix wall/gran has been refactored in an incompatible way. Use old version of base class for now
if(PKG_GRANULAR)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/fix_wall_gran_old.cpp)
endif()
if(PKG_KSPACE) if(PKG_KSPACE)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/fft3d_kokkos.cpp list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/fft3d_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/gridcomm_kokkos.cpp ${KOKKOS_PKG_SOURCES_DIR}/grid3d_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/remap_kokkos.cpp) ${KOKKOS_PKG_SOURCES_DIR}/remap_kokkos.cpp)
if(Kokkos_ENABLE_CUDA) if(Kokkos_ENABLE_CUDA)
if(NOT (FFT STREQUAL "KISS")) if(NOT (FFT STREQUAL "KISS"))
@ -132,12 +137,37 @@ if(PKG_KSPACE)
endif() endif()
elseif(Kokkos_ENABLE_HIP) elseif(Kokkos_ENABLE_HIP)
if(NOT (FFT STREQUAL "KISS")) if(NOT (FFT STREQUAL "KISS"))
include(DetectHIPInstallation)
find_package(hipfft REQUIRED)
target_compile_definitions(lammps PRIVATE -DFFT_HIPFFT) target_compile_definitions(lammps PRIVATE -DFFT_HIPFFT)
target_link_libraries(lammps PRIVATE hipfft) target_link_libraries(lammps PRIVATE hip::hipfft)
endif() endif()
endif() endif()
endif() endif()
if(PKG_ML-IAP)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/mliap_data_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_descriptor_so3_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_model_linear_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_model_python_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_unified_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_so3_kokkos.cpp)
# Add KOKKOS version of ML-IAP Python coupling if non-KOKKOS version is included
if(MLIAP_ENABLE_PYTHON AND Cythonize_EXECUTABLE)
file(GLOB MLIAP_KOKKOS_CYTHON_SRC ${CONFIGURE_DEPENDS} ${LAMMPS_SOURCE_DIR}/KOKKOS/*.pyx)
foreach(MLIAP_CYTHON_FILE ${MLIAP_KOKKOS_CYTHON_SRC})
get_filename_component(MLIAP_CYTHON_BASE ${MLIAP_CYTHON_FILE} NAME_WE)
add_custom_command(OUTPUT ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.h
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MLIAP_CYTHON_FILE} ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
COMMAND ${Cythonize_EXECUTABLE} -3 ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
WORKING_DIRECTORY ${MLIAP_BINARY_DIR}
MAIN_DEPENDENCY ${MLIAP_CYTHON_FILE}
COMMENT "Generating C++ sources with cythonize...")
list(APPEND KOKKOS_PKG_SOURCES ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp)
endforeach()
endif()
endif()
if(PKG_PHONON) if(PKG_PHONON)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/dynamical_matrix_kokkos.cpp) list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/dynamical_matrix_kokkos.cpp)
View File
@ -1,52 +0,0 @@
enable_language(Fortran)
# using lammps in a super-build setting
if(TARGET LATTE::latte)
target_link_libraries(lammps PRIVATE LATTE::latte)
return()
endif()
find_package(LATTE 1.2.2 CONFIG)
if(LATTE_FOUND)
set(DOWNLOAD_LATTE_DEFAULT OFF)
else()
set(DOWNLOAD_LATTE_DEFAULT ON)
endif()
option(DOWNLOAD_LATTE "Download the LATTE library instead of using an already installed one" ${DOWNLOAD_LATTE_DEFAULT})
if(DOWNLOAD_LATTE)
message(STATUS "LATTE download requested - we will build our own")
set(LATTE_URL "https://github.com/lanl/LATTE/archive/v1.2.2.tar.gz" CACHE STRING "URL for LATTE tarball")
set(LATTE_MD5 "820e73a457ced178c08c71389a385de7" CACHE STRING "MD5 checksum of LATTE tarball")
mark_as_advanced(LATTE_URL)
mark_as_advanced(LATTE_MD5)
# CMake cannot pass BLAS or LAPACK library variable to external project if they are a list
list(LENGTH BLAS_LIBRARIES NUM_BLAS)
list(LENGTH LAPACK_LIBRARIES NUM_LAPACK)
if((NUM_BLAS GREATER 1) OR (NUM_LAPACK GREATER 1))
message(FATAL_ERROR "Cannot compile downloaded LATTE library due to a technical limitation")
endif()
include(ExternalProject)
ExternalProject_Add(latte_build
URL ${LATTE_URL}
URL_MD5 ${LATTE_MD5}
SOURCE_SUBDIR cmake
CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR> ${CMAKE_REQUEST_PIC} -DCMAKE_INSTALL_LIBDIR=lib
-DBLAS_LIBRARIES=${BLAS_LIBRARIES} -DLAPACK_LIBRARIES=${LAPACK_LIBRARIES}
-DCMAKE_Fortran_COMPILER=${CMAKE_Fortran_COMPILER} -DCMAKE_Fortran_FLAGS=${CMAKE_Fortran_FLAGS}
-DCMAKE_Fortran_FLAGS_${BTYPE}=${CMAKE_Fortran_FLAGS_${BTYPE}} -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM} -DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}
BUILD_BYPRODUCTS <INSTALL_DIR>/lib/liblatte.a
)
ExternalProject_get_property(latte_build INSTALL_DIR)
add_library(LAMMPS::LATTE UNKNOWN IMPORTED)
set_target_properties(LAMMPS::LATTE PROPERTIES
IMPORTED_LOCATION "${INSTALL_DIR}/lib/liblatte.a"
INTERFACE_LINK_LIBRARIES "${LAPACK_LIBRARIES}")
target_link_libraries(lammps PRIVATE LAMMPS::LATTE)
add_dependencies(LAMMPS::LATTE latte_build)
else()
find_package(LATTE 1.2.2 REQUIRED CONFIG)
target_link_libraries(lammps PRIVATE LATTE::latte)
endif()
View File
@ -0,0 +1,35 @@
# avoid including this file twice
if(LEPTON_SOURCE_DIR)
return()
endif()
set(LEPTON_SOURCE_DIR ${LAMMPS_LIB_SOURCE_DIR}/lepton)
file(GLOB LEPTON_SOURCES ${CONFIGURE_DEPENDS} ${LEPTON_SOURCE_DIR}/src/[^.]*.cpp)
if((CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "amd64") OR
(CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "AMD64") OR
(CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64"))
option(LEPTON_ENABLE_JIT "Enable Just-In-Time compiler for Lepton" ON)
else()
option(LEPTON_ENABLE_JIT "Enable Just-In-Time compiler for Lepton" OFF)
endif()
if(LEPTON_ENABLE_JIT)
file(GLOB ASMJIT_SOURCES ${CONFIGURE_DEPENDS} ${LEPTON_SOURCE_DIR}/asmjit/*/[^.]*.cpp)
endif()
add_library(lepton STATIC ${LEPTON_SOURCES} ${ASMJIT_SOURCES})
set_target_properties(lepton PROPERTIES OUTPUT_NAME lammps_lepton${LAMMPS_MACHINE})
target_compile_definitions(lepton PUBLIC LEPTON_BUILDING_STATIC_LIBRARY=1)
target_include_directories(lepton PUBLIC ${LEPTON_SOURCE_DIR}/include)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
find_library(LIB_RT rt QUIET)
target_link_libraries(lepton PUBLIC ${LIB_RT})
endif()
if(LEPTON_ENABLE_JIT)
target_compile_definitions(lepton PUBLIC "LEPTON_USE_JIT=1;ASMJIT_BUILD_X86=1;ASMJIT_STATIC=1;ASMJIT_BUILD_RELEASE=1")
target_include_directories(lepton PUBLIC ${LEPTON_SOURCE_DIR})
endif()
target_link_libraries(lammps PRIVATE lepton)
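The module guards itself against double inclusion and already links the static lepton library into lammps, so other package modules only need to pull it in on demand and link their own target against lepton, mirroring the COLVARS change above. A sketch of that consumer pattern (mypackage is a placeholder target):
if(NOT LEPTON_SOURCE_DIR)      # not yet pulled in by another package
  include(Packages/LEPTON)
endif()
target_compile_definitions(mypackage PRIVATE -DLEPTON)   # package-specific switch, as used by COLVARS
target_link_libraries(mypackage PUBLIC lepton)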
View File
@ -0,0 +1,9 @@
# fix sgcmc may only be installed if the EAM pair style from the MANYBODY package is also installed
if(NOT PKG_MANYBODY)
get_property(LAMMPS_FIX_HEADERS GLOBAL PROPERTY FIX)
list(REMOVE_ITEM LAMMPS_FIX_HEADERS ${LAMMPS_SOURCE_DIR}/MC/fix_sgcmc.h)
set_property(GLOBAL PROPERTY FIX "${LAMMPS_FIX_HEADERS}")
get_target_property(LAMMPS_SOURCES lammps SOURCES)
list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MC/fix_sgcmc.cpp)
set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}")
endif()
View File
@ -8,10 +8,11 @@ option(DOWNLOAD_MDI "Download and compile the MDI library instead of using an al
if(DOWNLOAD_MDI) if(DOWNLOAD_MDI)
message(STATUS "MDI download requested - we will build our own") message(STATUS "MDI download requested - we will build our own")
set(MDI_URL "https://github.com/MolSSI-MDI/MDI_Library/archive/v1.3.2.tar.gz" CACHE STRING "URL for MDI tarball") set(MDI_URL "https://github.com/MolSSI-MDI/MDI_Library/archive/v1.4.16.tar.gz" CACHE STRING "URL for MDI tarball")
set(MDI_MD5 "836f5da400d8cff0f0e4435640f9454f" CACHE STRING "MD5 checksum for MDI tarball") set(MDI_MD5 "407db44e2d79447ab5c1233af1965f65" CACHE STRING "MD5 checksum for MDI tarball")
mark_as_advanced(MDI_URL) mark_as_advanced(MDI_URL)
mark_as_advanced(MDI_MD5) mark_as_advanced(MDI_MD5)
GetFallbackURL(MDI_URL MDI_FALLBACK)
enable_language(C) enable_language(C)
# only ON/OFF are allowed for "mpi" flag when building MDI library # only ON/OFF are allowed for "mpi" flag when building MDI library
@ -26,8 +27,21 @@ if(DOWNLOAD_MDI)
# detect if we have python development support and thus can enable python plugins # detect if we have python development support and thus can enable python plugins
set(MDI_USE_PYTHON_PLUGINS OFF) set(MDI_USE_PYTHON_PLUGINS OFF)
if(CMAKE_VERSION VERSION_LESS 3.12) if(CMAKE_VERSION VERSION_LESS 3.12)
if(NOT PYTHON_VERSION_STRING)
set(Python_ADDITIONAL_VERSIONS 3.12 3.11 3.10 3.9 3.8 3.7 3.6)
# search for interpreter first, so we have a consistent library
find_package(PythonInterp) # Deprecated since version 3.12
if(PYTHONINTERP_FOUND)
set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
endif()
endif()
# search for the library matching the selected interpreter
set(Python_ADDITIONAL_VERSIONS ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR})
find_package(PythonLibs QUIET) # Deprecated since version 3.12 find_package(PythonLibs QUIET) # Deprecated since version 3.12
if(PYTHONLIBS_FOUND) if(PYTHONLIBS_FOUND)
if(NOT (PYTHON_VERSION_STRING STREQUAL PYTHONLIBS_VERSION_STRING))
message(FATAL_ERROR "Python Library version ${PYTHONLIBS_VERSION_STRING} does not match Interpreter version ${PYTHON_VERSION_STRING}")
endif()
set(MDI_USE_PYTHON_PLUGINS ON) set(MDI_USE_PYTHON_PLUGINS ON)
endif() endif()
else() else()
@ -36,16 +50,25 @@ if(DOWNLOAD_MDI)
set(MDI_USE_PYTHON_PLUGINS ON) set(MDI_USE_PYTHON_PLUGINS ON)
endif() endif()
endif() endif()
# python plugins are not supported on Windows and thus must always be off
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
unset(Python_Development_FOUND)
set(MDI_USE_PYTHON_PLUGINS OFF)
if(CMAKE_CROSSCOMPILING)
set(CMAKE_INSTALL_LIBDIR lib)
endif()
endif()
# download/ build MDI library # download/ build MDI library
# always build static library with -fpic # always build static library with -fpic
# support cross-compilation and ninja-build # support cross-compilation and ninja-build
include(ExternalProject) include(ExternalProject)
ExternalProject_Add(mdi_build ExternalProject_Add(mdi_build
URL ${MDI_URL} URL ${MDI_URL} ${MDI_FALLBACK}
URL_MD5 ${MDI_MD5} URL_MD5 ${MDI_MD5}
CMAKE_ARGS ${CMAKE_REQUEST_PIC} PREFIX ${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext
-DCMAKE_INSTALL_PREFIX=<INSTALL_DIR> CMAKE_ARGS
-DCMAKE_INSTALL_PREFIX=${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER} -DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM} -DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}
@ -54,24 +77,25 @@ if(DOWNLOAD_MDI)
-Dlanguage=C -Dlanguage=C
-Dlibtype=STATIC -Dlibtype=STATIC
-Dmpi=${MDI_USE_MPI} -Dmpi=${MDI_USE_MPI}
-Dplugins=ON
-Dpython_plugins=${MDI_USE_PYTHON_PLUGINS} -Dpython_plugins=${MDI_USE_PYTHON_PLUGINS}
UPDATE_COMMAND "" UPDATE_COMMAND ""
INSTALL_COMMAND "" INSTALL_COMMAND ${CMAKE_COMMAND} --build ${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext/src/mdi_build-build --target install
BUILD_BYPRODUCTS "<BINARY_DIR>/MDI_Library/libmdi.a" BUILD_BYPRODUCTS "${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext/${CMAKE_INSTALL_LIBDIR}/mdi/${CMAKE_STATIC_LIBRARY_PREFIX}mdi${CMAKE_STATIC_LIBRARY_SUFFIX}"
) )
# where is the compiled library? # where is the compiled library?
ExternalProject_get_property(mdi_build BINARY_DIR) ExternalProject_get_property(mdi_build PREFIX)
set(MDI_BINARY_DIR "${BINARY_DIR}/MDI_Library")
# workaround for older CMake versions # workaround for older CMake versions
file(MAKE_DIRECTORY ${MDI_BINARY_DIR}) file(MAKE_DIRECTORY ${PREFIX}/${CMAKE_INSTALL_LIBDIR}/mdi)
file(MAKE_DIRECTORY ${PREFIX}/include/mdi)
# create imported target for the MDI library # create imported target for the MDI library
add_library(LAMMPS::MDI UNKNOWN IMPORTED) add_library(LAMMPS::MDI UNKNOWN IMPORTED)
add_dependencies(LAMMPS::MDI mdi_build) add_dependencies(LAMMPS::MDI mdi_build)
set_target_properties(LAMMPS::MDI PROPERTIES set_target_properties(LAMMPS::MDI PROPERTIES
IMPORTED_LOCATION "${MDI_BINARY_DIR}/libmdi.a" IMPORTED_LOCATION "${PREFIX}/${CMAKE_INSTALL_LIBDIR}/mdi/${CMAKE_STATIC_LIBRARY_PREFIX}mdi${CMAKE_STATIC_LIBRARY_SUFFIX}"
INTERFACE_INCLUDE_DIRECTORIES ${MDI_BINARY_DIR} INTERFACE_INCLUDE_DIRECTORIES ${PREFIX}/include/mdi
) )
set(MDI_DEP_LIBS "") set(MDI_DEP_LIBS "")
View File
@ -0,0 +1,13 @@
# pair style and fix srp/react depend on the fixes bond/break and bond/create from the MC package
if(NOT PKG_MC)
get_property(LAMMPS_FIX_HEADERS GLOBAL PROPERTY FIX)
list(REMOVE_ITEM LAMMPS_FIX_HEADERS ${LAMMPS_SOURCE_DIR}/MISC/fix_srp_react.h)
set_property(GLOBAL PROPERTY FIX "${LAMMPS_FIX_HEADERS}")
get_property(LAMMPS_PAIR_HEADERS GLOBAL PROPERTY PAIR)
list(REMOVE_ITEM LAMMPS_PAIR_HEADERS ${LAMMPS_SOURCE_DIR}/MISC/pair_srp_react.h)
set_property(GLOBAL PROPERTY PAIR "${LAMMPS_PAIR_HEADERS}")
get_target_property(LAMMPS_SOURCES lammps SOURCES)
list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MISC/fix_srp_react.cpp)
list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MISC/pair_srp_react.cpp)
set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}")
endif()
View File
@ -6,10 +6,11 @@ else()
endif() endif()
option(DOWNLOAD_N2P2 "Download n2p2 library instead of using an already installed one" ${DOWNLOAD_N2P2_DEFAULT}) option(DOWNLOAD_N2P2 "Download n2p2 library instead of using an already installed one" ${DOWNLOAD_N2P2_DEFAULT})
if(DOWNLOAD_N2P2) if(DOWNLOAD_N2P2)
set(N2P2_URL "https://github.com/CompPhysVienna/n2p2/archive/v2.1.4.tar.gz" CACHE STRING "URL for n2p2 tarball") set(N2P2_URL "https://github.com/CompPhysVienna/n2p2/archive/v2.2.0.tar.gz" CACHE STRING "URL for n2p2 tarball")
set(N2P2_MD5 "9595b066636cd6b90b0fef93398297a5" CACHE STRING "MD5 checksum of N2P2 tarball") set(N2P2_MD5 "a2d9ab7f676b3a74a324fc1eda0a911d" CACHE STRING "MD5 checksum of N2P2 tarball")
mark_as_advanced(N2P2_URL) mark_as_advanced(N2P2_URL)
mark_as_advanced(N2P2_MD5) mark_as_advanced(N2P2_MD5)
GetFallbackURL(N2P2_URL N2P2_FALLBACK)
# adjust settings from the detected compiler to the matching compiler platform in the n2p2 library # adjust settings from the detected compiler to the matching compiler platform in the n2p2 library
# set compiler-specific flag to turn on C++11 syntax (required on macOS and CentOS 7) # set compiler-specific flag to turn on C++11 syntax (required on macOS and CentOS 7)
@ -44,7 +45,9 @@ if(DOWNLOAD_N2P2)
else() else()
# get path to MPI include directory # get path to MPI include directory
get_target_property(N2P2_MPI_INCLUDE MPI::MPI_CXX INTERFACE_INCLUDE_DIRECTORIES) get_target_property(N2P2_MPI_INCLUDE MPI::MPI_CXX INTERFACE_INCLUDE_DIRECTORIES)
set(N2P2_PROJECT_OPTIONS "-I${N2P2_MPI_INCLUDE}") foreach (_INCL ${N2P2_MPI_INCLUDE})
set(N2P2_PROJECT_OPTIONS "${N2P2_PROJECT_OPTIONS} -I${_INCL}")
endforeach()
endif() endif()
# prefer GNU make, if available. N2P2 lib seems to need it. # prefer GNU make, if available. N2P2 lib seems to need it.
@ -70,12 +73,12 @@ if(DOWNLOAD_N2P2)
# download and compile the n2p2 library. must patch MPI calls in the LAMMPS interface to accommodate MPI-2 (e.g. for cross-compiling) # download and compile the n2p2 library. must patch MPI calls in the LAMMPS interface to accommodate MPI-2 (e.g. for cross-compiling)
include(ExternalProject) include(ExternalProject)
ExternalProject_Add(n2p2_build ExternalProject_Add(n2p2_build
URL ${N2P2_URL} URL ${N2P2_URL} ${N2P2_FALLBACK}
URL_MD5 ${N2P2_MD5} URL_MD5 ${N2P2_MD5}
UPDATE_COMMAND "" UPDATE_COMMAND ""
CONFIGURE_COMMAND "" CONFIGURE_COMMAND ""
PATCH_COMMAND sed -i -e "s/\\(MPI_\\(P\\|Unp\\)ack(\\)/\\1(void *) /" src/libnnpif/LAMMPS/InterfaceLammps.cpp PATCH_COMMAND sed -i -e "s/\\(MPI_\\(P\\|Unp\\)ack(\\)/\\1(void *) /" src/libnnpif/LAMMPS/InterfaceLammps.cpp
BUILD_COMMAND ${N2P2_MAKE} -f makefile libnnpif ${N2P2_BUILD_OPTIONS} BUILD_COMMAND ${N2P2_MAKE} -C <SOURCE_DIR>/src -f makefile libnnpif ${N2P2_BUILD_OPTIONS}
BUILD_ALWAYS YES BUILD_ALWAYS YES
INSTALL_COMMAND "" INSTALL_COMMAND ""
BUILD_IN_SOURCE 1 BUILD_IN_SOURCE 1
@ -2,7 +2,13 @@
set(MLIAP_ENABLE_PYTHON_DEFAULT OFF) set(MLIAP_ENABLE_PYTHON_DEFAULT OFF)
if(PKG_PYTHON) if(PKG_PYTHON)
find_package(Cythonize QUIET) find_package(Cythonize QUIET)
if(Cythonize_FOUND) if (CMAKE_VERSION VERSION_GREATER_EQUAL 3.14)
find_package(Python COMPONENTS NumPy QUIET)
else()
# assume we have NumPy
set(Python_NumPy_FOUND ON)
endif()
if(Cythonize_FOUND AND Python_NumPy_FOUND)
set(MLIAP_ENABLE_PYTHON_DEFAULT ON) set(MLIAP_ENABLE_PYTHON_DEFAULT ON)
endif() endif()
endif() endif()
@ -11,6 +17,9 @@ option(MLIAP_ENABLE_PYTHON "Build ML-IAP package with Python support" ${MLIAP_EN
if(MLIAP_ENABLE_PYTHON) if(MLIAP_ENABLE_PYTHON)
find_package(Cythonize REQUIRED) find_package(Cythonize REQUIRED)
if (CMAKE_VERSION VERSION_GREATER_EQUAL 3.14)
find_package(Python COMPONENTS NumPy REQUIRED)
endif()
if(NOT PKG_PYTHON) if(NOT PKG_PYTHON)
message(FATAL_ERROR "Must enable PYTHON package for including Python support in ML-IAP") message(FATAL_ERROR "Must enable PYTHON package for including Python support in ML-IAP")
endif() endif()
@ -25,16 +34,18 @@ if(MLIAP_ENABLE_PYTHON)
endif() endif()
set(MLIAP_BINARY_DIR ${CMAKE_BINARY_DIR}/cython) set(MLIAP_BINARY_DIR ${CMAKE_BINARY_DIR}/cython)
set(MLIAP_CYTHON_SRC ${LAMMPS_SOURCE_DIR}/ML-IAP/mliap_model_python_couple.pyx) file(GLOB MLIAP_CYTHON_SRC ${CONFIGURE_DEPENDS} ${LAMMPS_SOURCE_DIR}/ML-IAP/*.pyx)
get_filename_component(MLIAP_CYTHON_BASE ${MLIAP_CYTHON_SRC} NAME_WE)
file(MAKE_DIRECTORY ${MLIAP_BINARY_DIR}) file(MAKE_DIRECTORY ${MLIAP_BINARY_DIR})
foreach(MLIAP_CYTHON_FILE ${MLIAP_CYTHON_SRC})
get_filename_component(MLIAP_CYTHON_BASE ${MLIAP_CYTHON_FILE} NAME_WE)
add_custom_command(OUTPUT ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.h add_custom_command(OUTPUT ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.h
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MLIAP_CYTHON_SRC} ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MLIAP_CYTHON_FILE} ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
COMMAND ${Cythonize_EXECUTABLE} -3 ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx COMMAND ${Cythonize_EXECUTABLE} -3 ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
WORKING_DIRECTORY ${MLIAP_BINARY_DIR} WORKING_DIRECTORY ${MLIAP_BINARY_DIR}
MAIN_DEPENDENCY ${MLIAP_CYTHON_SRC} MAIN_DEPENDENCY ${MLIAP_CYTHON_FILE}
COMMENT "Generating C++ sources with cythonize...") COMMENT "Generating C++ sources with cythonize...")
target_compile_definitions(lammps PRIVATE -DMLIAP_PYTHON)
target_sources(lammps PRIVATE ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp) target_sources(lammps PRIVATE ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp)
endforeach()
target_compile_definitions(lammps PRIVATE -DMLIAP_PYTHON)
target_include_directories(lammps PRIVATE ${MLIAP_BINARY_DIR}) target_include_directories(lammps PRIVATE ${MLIAP_BINARY_DIR})
endif() endif()
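As a configuration sketch in the preset style used later in this diff (the cache variable names are the ones defined above and in the package lists; the values are only an example), Python support for ML-IAP can be requested explicitly. It only becomes effective when the cythonize tool and, for CMake 3.14 or newer, NumPy are found:

    # sketch: opt into ML-IAP Python support; requires the PYTHON package,
    # the cythonize executable, and NumPy at configure time
    set(PKG_PYTHON ON CACHE BOOL "" FORCE)
    set(PKG_ML-IAP ON CACHE BOOL "" FORCE)
    set(MLIAP_ENABLE_PYTHON ON CACHE BOOL "" FORCE)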
@ -1,11 +1,25 @@
set(PACELIB_URL "https://github.com/ICAMS/lammps-user-pace/archive/refs/tags/v.2021.10.25.fix2.tar.gz" CACHE STRING "URL for PACE evaluator library sources") set(PACELIB_URL "https://github.com/ICAMS/lammps-user-pace/archive/refs/tags/v.2023.01.3.fix.tar.gz" CACHE STRING "URL for PACE evaluator library sources")
set(PACELIB_MD5 "32394d799bc282bb57696c78c456e64f" CACHE STRING "MD5 checksum of PACE evaluator library tarball") set(PACELIB_MD5 "4f0b3b5b14456fe9a73b447de3765caa" CACHE STRING "MD5 checksum of PACE evaluator library tarball")
mark_as_advanced(PACELIB_URL) mark_as_advanced(PACELIB_URL)
mark_as_advanced(PACELIB_MD5) mark_as_advanced(PACELIB_MD5)
GetFallbackURL(PACELIB_URL PACELIB_FALLBACK)
# download library sources to build folder # download library sources to build folder
file(DOWNLOAD ${PACELIB_URL} ${CMAKE_BINARY_DIR}/libpace.tar.gz EXPECTED_HASH MD5=${PACELIB_MD5}) #SHOW_PROGRESS if(EXISTS ${CMAKE_BINARY_DIR}/libpace.tar.gz)
file(MD5 ${CMAKE_BINARY_DIR}/libpace.tar.gz DL_MD5)
endif()
if(NOT "${DL_MD5}" STREQUAL "${PACELIB_MD5}")
message(STATUS "Downloading ${PACELIB_URL}")
file(DOWNLOAD ${PACELIB_URL} ${CMAKE_BINARY_DIR}/libpace.tar.gz STATUS DL_STATUS SHOW_PROGRESS)
file(MD5 ${CMAKE_BINARY_DIR}/libpace.tar.gz DL_MD5)
if((NOT DL_STATUS EQUAL 0) OR (NOT "${DL_MD5}" STREQUAL "${PACELIB_MD5}"))
message(WARNING "Download from primary URL ${PACELIB_URL} failed\nTrying fallback URL ${PACELIB_FALLBACK}")
file(DOWNLOAD ${PACELIB_FALLBACK} ${CMAKE_BINARY_DIR}/libpace.tar.gz EXPECTED_HASH MD5=${PACELIB_MD5} SHOW_PROGRESS)
endif()
else()
message(STATUS "Using already downloaded archive ${CMAKE_BINARY_DIR}/libpace.tar.gz")
endif()
# uncompress downloaded sources # uncompress downloaded sources
execute_process( execute_process(
@ -15,23 +29,9 @@ execute_process(
) )
get_newest_file(${CMAKE_BINARY_DIR}/lammps-user-pace-* lib-pace) get_newest_file(${CMAKE_BINARY_DIR}/lammps-user-pace-* lib-pace)
# enforce building libyaml-cpp as static library and turn off optional features add_subdirectory(${lib-pace} build-pace)
set(YAML_BUILD_SHARED_LIBS OFF)
set(YAML_CPP_BUILD_CONTRIB OFF)
set(YAML_CPP_BUILD_TOOLS OFF)
add_subdirectory(${lib-pace}/yaml-cpp build-yaml-cpp)
set(YAML_CPP_INCLUDE_DIR ${lib-pace}/yaml-cpp/include)
file(GLOB PACE_EVALUATOR_INCLUDE_DIR ${lib-pace}/ML-PACE)
file(GLOB PACE_EVALUATOR_SOURCES ${lib-pace}/ML-PACE/*.cpp)
list(FILTER PACE_EVALUATOR_SOURCES EXCLUDE REGEX pair_pace.cpp)
add_library(pace STATIC ${PACE_EVALUATOR_SOURCES})
set_target_properties(pace PROPERTIES CXX_EXTENSIONS ON OUTPUT_NAME lammps_pace${LAMMPS_MACHINE}) set_target_properties(pace PROPERTIES CXX_EXTENSIONS ON OUTPUT_NAME lammps_pace${LAMMPS_MACHINE})
target_include_directories(pace PUBLIC ${PACE_EVALUATOR_INCLUDE_DIR} ${YAML_CPP_INCLUDE_DIR})
target_link_libraries(pace PRIVATE yaml-cpp-pace)
if(CMAKE_PROJECT_NAME STREQUAL "lammps") if(CMAKE_PROJECT_NAME STREQUAL "lammps")
target_link_libraries(lammps PRIVATE pace) target_link_libraries(lammps PRIVATE pace)
endif() endif()
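The GetFallbackURL calls added throughout these modules derive a mirror URL for each tarball. The helper itself is not part of this diff; the sketch below is an assumption for illustration only (it reuses the LAMMPS_THIRDPARTY_URL variable seen in the SCAFACOS module further down), not the actual implementation:

    # illustrative sketch of a fallback-URL helper: keep the archive file
    # name from the primary URL and point it at the LAMMPS mirror
    function(GetFallbackURL urlvar fallbackvar)
      get_filename_component(_archive "${${urlvar}}" NAME)
      set(${fallbackvar} "${LAMMPS_THIRDPARTY_URL}/${_archive}" PARENT_SCOPE)
    endfunction()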
@ -16,6 +16,7 @@ if(DOWNLOAD_QUIP)
set(temp "${temp}DEFINES += -DGETARG_F2003 -DFORTRAN_UNDERSCORE\n") set(temp "${temp}DEFINES += -DGETARG_F2003 -DFORTRAN_UNDERSCORE\n")
set(temp "${temp}F95FLAGS += -fpp -free -fPIC\n") set(temp "${temp}F95FLAGS += -fpp -free -fPIC\n")
set(temp "${temp}F77FLAGS += -fpp -fixed -fPIC\n") set(temp "${temp}F77FLAGS += -fpp -fixed -fPIC\n")
set(temp "${temp}F95_PRE_FILENAME_FLAG = -Tf\n")
elseif(CMAKE_Fortran_COMPILER_ID STREQUAL GNU) elseif(CMAKE_Fortran_COMPILER_ID STREQUAL GNU)
set(temp "${temp}FPP=${CMAKE_Fortran_COMPILER} -E -x f95-cpp-input\nOPTIM=${CMAKE_Fortran_FLAGS_${BTYPE}}\n") set(temp "${temp}FPP=${CMAKE_Fortran_COMPILER} -E -x f95-cpp-input\nOPTIM=${CMAKE_Fortran_FLAGS_${BTYPE}}\n")
set(temp "${temp}DEFINES += -DGETARG_F2003 -DGETENV_F2003 -DGFORTRAN -DFORTRAN_UNDERSCORE\n") set(temp "${temp}DEFINES += -DGETARG_F2003 -DGETENV_F2003 -DGFORTRAN -DFORTRAN_UNDERSCORE\n")
@ -58,12 +59,12 @@ if(DOWNLOAD_QUIP)
BUILD_COMMAND env QUIP_ARCH=lammps make libquip BUILD_COMMAND env QUIP_ARCH=lammps make libquip
INSTALL_COMMAND "" INSTALL_COMMAND ""
BUILD_IN_SOURCE YES BUILD_IN_SOURCE YES
BUILD_BYPRODUCTS <SOURCE_DIR>/build/lammps/libquip.a BUILD_BYPRODUCTS <SOURCE_DIR>/build/lammps/${CMAKE_STATIC_LIBRARY_PREFIX}quip${CMAKE_STATIC_LIBRARY_SUFFIX}
) )
ExternalProject_get_property(quip_build SOURCE_DIR) ExternalProject_get_property(quip_build SOURCE_DIR)
add_library(LAMMPS::QUIP UNKNOWN IMPORTED) add_library(LAMMPS::QUIP UNKNOWN IMPORTED)
set_target_properties(LAMMPS::QUIP PROPERTIES set_target_properties(LAMMPS::QUIP PROPERTIES
IMPORTED_LOCATION "${SOURCE_DIR}/build/lammps/libquip.a" IMPORTED_LOCATION "${SOURCE_DIR}/build/lammps/${CMAKE_STATIC_LIBRARY_PREFIX}quip${CMAKE_STATIC_LIBRARY_SUFFIX}"
INTERFACE_LINK_LIBRARIES "${LAPACK_LIBRARIES}") INTERFACE_LINK_LIBRARIES "${LAPACK_LIBRARIES}")
target_link_libraries(lammps PRIVATE LAMMPS::QUIP) target_link_libraries(lammps PRIVATE LAMMPS::QUIP)
add_dependencies(LAMMPS::QUIP quip_build) add_dependencies(LAMMPS::QUIP quip_build)
@ -7,8 +7,8 @@ else()
endif() endif()
option(DOWNLOAD_MSCG "Download MSCG library instead of using an already installed one" ${DOWNLOAD_MSCG_DEFAULT}) option(DOWNLOAD_MSCG "Download MSCG library instead of using an already installed one" ${DOWNLOAD_MSCG_DEFAULT})
if(DOWNLOAD_MSCG) if(DOWNLOAD_MSCG)
set(MSCG_URL "https://github.com/uchicago-voth/MSCG-release/archive/1.7.3.1.tar.gz" CACHE STRING "URL for MSCG tarball") set(MSCG_URL "https://github.com/uchicago-voth/MSCG-release/archive/491270a73539e3f6951e76f7dbe84e258b3ebb45.tar.gz" CACHE STRING "URL for MSCG tarball")
set(MSCG_MD5 "8c45e269ee13f60b303edd7823866a91" CACHE STRING "MD5 checksum of MSCG tarball") set(MSCG_MD5 "7ea50748fba5c3a372e0266bd31d2f11" CACHE STRING "MD5 checksum of MSCG tarball")
mark_as_advanced(MSCG_URL) mark_as_advanced(MSCG_URL)
mark_as_advanced(MSCG_MD5) mark_as_advanced(MSCG_MD5)
@ -1,3 +1,83 @@
# Plumed2 support for PLUMED package
if(BUILD_MPI)
set(PLUMED_CONFIG_MPI "--enable-mpi")
set(PLUMED_CONFIG_CC ${CMAKE_MPI_C_COMPILER})
set(PLUMED_CONFIG_CXX ${CMAKE_MPI_CXX_COMPILER})
set(PLUMED_CONFIG_CPP "-I ${MPI_CXX_INCLUDE_PATH}")
set(PLUMED_CONFIG_LIB "${MPI_CXX_LIBRARIES}")
set(PLUMED_CONFIG_DEP "mpi4win_build")
else()
set(PLUMED_CONFIG_MPI "--disable-mpi")
set(PLUMED_CONFIG_CC ${CMAKE_C_COMPILER})
set(PLUMED_CONFIG_CXX ${CMAKE_CXX_COMPILER})
set(PLUMED_CONFIG_CPP "")
set(PLUMED_CONFIG_LIB "")
set(PLUMED_CONFIG_DEP "")
endif()
if(BUILD_OMP)
set(PLUMED_CONFIG_OMP "--enable-openmp")
else()
set(PLUMED_CONFIG_OMP "--disable-openmp")
endif()
set(PLUMED_URL "https://github.com/plumed/plumed2/releases/download/v2.8.2/plumed-src-2.8.2.tgz"
CACHE STRING "URL for PLUMED tarball")
set(PLUMED_MD5 "599092b6a0aa6fff992612537ad98994" CACHE STRING "MD5 checksum of PLUMED tarball")
mark_as_advanced(PLUMED_URL)
mark_as_advanced(PLUMED_MD5)
GetFallbackURL(PLUMED_URL PLUMED_FALLBACK)
if((CMAKE_SYSTEM_NAME STREQUAL "Windows") AND (CMAKE_CROSSCOMPILING))
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64")
set(CROSS_CONFIGURE mingw64-configure)
elseif(CMAKE_SYSTEM_PROCESSOR STREQUAL "x86")
set(CROSS_CONFIGURE mingw32-configure)
else()
message(FATAL_ERROR "Unsupported target system: ${CMAKE_SYSTEM_NAME}/${CMAKE_SYSTEM_PROCESSOR}")
endif()
message(STATUS "Downloading and cross-compiling Plumed2 for ${CMAKE_SYSTEM_NAME}/${CMAKE_SYSTEM_PROCESSOR} with ${CROSS_CONFIGURE}")
include(ExternalProject)
ExternalProject_Add(plumed_build
URL ${PLUMED_URL} ${PLUMED_FALLBACK}
URL_MD5 ${PLUMED_MD5}
BUILD_IN_SOURCE 1
CONFIGURE_COMMAND ${CROSS_CONFIGURE} --disable-shared --disable-bsymbolic
--disable-python --enable-cxx=11
--enable-modules=-adjmat:+crystallization:-dimred:+drr:+eds:-fisst:+funnel:+logmfd:+manyrestraints:+maze:+opes:+multicolvar:-pamm:-piv:+s2cm:-sasa:-ves
${PLUMED_CONFIG_OMP}
${PLUMED_CONFIG_MPI}
CXX=${PLUMED_CONFIG_CXX}
CC=${PLUMED_CONFIG_CC}
CPPFLAGS=${PLUMED_CONFIG_CPP}
LIBS=${PLUMED_CONFIG_LIB}
INSTALL_COMMAND ""
BUILD_BYPRODUCTS "<SOURCE_DIR>/src/lib/install/libplumed.a" "<SOURCE_DIR>/src/lib/install/plumed.exe"
DEPENDS "${PLUMED_CONFIG_DEP}"
)
ExternalProject_Get_Property(plumed_build SOURCE_DIR)
set(PLUMED_BUILD_DIR ${SOURCE_DIR})
set(PLUMED_INSTALL_DIR ${PLUMED_BUILD_DIR}/src/lib/install)
file(MAKE_DIRECTORY ${PLUMED_BUILD_DIR}/src/include)
add_library(LAMMPS::PLUMED UNKNOWN IMPORTED)
add_dependencies(LAMMPS::PLUMED plumed_build)
set_target_properties(LAMMPS::PLUMED PROPERTIES
IMPORTED_LOCATION "${PLUMED_INSTALL_DIR}/libplumed.a"
INTERFACE_LINK_LIBRARIES "-Wl,--image-base -Wl,0x10000000 -lfftw3 -lz -fstack-protector -lssp -fopenmp"
INTERFACE_INCLUDE_DIRECTORIES "${PLUMED_BUILD_DIR}/src/include")
add_custom_target(plumed_copy ALL ${CMAKE_COMMAND} -E rm -rf ${CMAKE_BINARY_DIR}/plumed.exe ${CMAKE_BINARY_DIR}/plumed_patches
COMMAND ${CMAKE_COMMAND} -E copy ${PLUMED_INSTALL_DIR}/plumed.exe ${CMAKE_BINARY_DIR}/plumed.exe
COMMAND ${CMAKE_COMMAND} -E copy_directory ${PLUMED_BUILD_DIR}/patches ${CMAKE_BINARY_DIR}/patches
BYPRODUCTS ${CMAKE_BINARY_DIR}/plumed.exe ${CMAKE_BINARY_DIR}/patches
DEPENDS plumed_build
COMMENT "Copying Plumed files"
)
else()
set(PLUMED_MODE "static" CACHE STRING "Linkage mode for Plumed2 library") set(PLUMED_MODE "static" CACHE STRING "Linkage mode for Plumed2 library")
set(PLUMED_MODE_VALUES static shared runtime) set(PLUMED_MODE_VALUES static shared runtime)
set_property(CACHE PLUMED_MODE PROPERTY STRINGS ${PLUMED_MODE_VALUES}) set_property(CACHE PLUMED_MODE PROPERTY STRINGS ${PLUMED_MODE_VALUES})
@ -31,43 +111,25 @@ endif()
option(DOWNLOAD_PLUMED "Download Plumed package instead of using an already installed one" ${DOWNLOAD_PLUMED_DEFAULT}) option(DOWNLOAD_PLUMED "Download Plumed package instead of using an already installed one" ${DOWNLOAD_PLUMED_DEFAULT})
if(DOWNLOAD_PLUMED) if(DOWNLOAD_PLUMED)
if(BUILD_MPI)
set(PLUMED_CONFIG_MPI "--enable-mpi")
set(PLUMED_CONFIG_CC ${CMAKE_MPI_C_COMPILER})
set(PLUMED_CONFIG_CXX ${CMAKE_MPI_CXX_COMPILER})
else()
set(PLUMED_CONFIG_MPI "--disable-mpi")
set(PLUMED_CONFIG_CC ${CMAKE_C_COMPILER})
set(PLUMED_CONFIG_CXX ${CMAKE_CXX_COMPILER})
endif()
if(BUILD_OMP)
set(PLUMED_CONFIG_OMP "--enable-openmp")
else()
set(PLUMED_CONFIG_OMP "--disable-openmp")
endif()
message(STATUS "PLUMED download requested - we will build our own") message(STATUS "PLUMED download requested - we will build our own")
if(PLUMED_MODE STREQUAL "STATIC") if(PLUMED_MODE STREQUAL "STATIC")
set(PLUMED_BUILD_BYPRODUCTS "<INSTALL_DIR>/lib/libplumed.a") set(PLUMED_BUILD_BYPRODUCTS "<INSTALL_DIR>/lib/${CMAKE_STATIC_LIBRARY_PREFIX}plumed${CMAKE_STATIC_LIBRARY_SUFFIX}")
elseif(PLUMED_MODE STREQUAL "SHARED") elseif(PLUMED_MODE STREQUAL "SHARED")
set(PLUMED_BUILD_BYPRODUCTS "<INSTALL_DIR>/lib/libplumed${CMAKE_SHARED_LIBRARY_SUFFIX};<INSTALL_DIR>/lib/libplumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX}") set(PLUMED_BUILD_BYPRODUCTS "<INSTALL_DIR>/lib/${CMAKE_SHARED_LIBRARY_PREFIX}plumed${CMAKE_SHARED_LIBRARY_SUFFIX};<INSTALL_DIR>/lib/${CMAKE_SHARED_LIBRARY_PREFIX}plumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX}")
elseif(PLUMED_MODE STREQUAL "RUNTIME") elseif(PLUMED_MODE STREQUAL "RUNTIME")
set(PLUMED_BUILD_BYPRODUCTS "<INSTALL_DIR>/lib/libplumedWrapper.a") set(PLUMED_BUILD_BYPRODUCTS "<INSTALL_DIR>/lib/${CMAKE_STATIC_LIBRARY_PREFIX}plumedWrapper${CMAKE_STATIC_LIBRARY_SUFFIX}")
endif() endif()
set(PLUMED_URL "https://github.com/plumed/plumed2/releases/download/v2.7.4/plumed-src-2.7.4.tgz" CACHE STRING "URL for PLUMED tarball")
set(PLUMED_MD5 "858e0b6aed173748fc85b6bc8a9dcb3e" CACHE STRING "MD5 checksum of PLUMED tarball")
mark_as_advanced(PLUMED_URL)
mark_as_advanced(PLUMED_MD5)
include(ExternalProject) include(ExternalProject)
ExternalProject_Add(plumed_build ExternalProject_Add(plumed_build
URL ${PLUMED_URL} URL ${PLUMED_URL} ${PLUMED_FALLBACK}
URL_MD5 ${PLUMED_MD5} URL_MD5 ${PLUMED_MD5}
BUILD_IN_SOURCE 1 BUILD_IN_SOURCE 1
CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR> CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
${CONFIGURE_REQUEST_PIC} ${CONFIGURE_REQUEST_PIC}
--enable-modules=all --enable-modules=all
--enable-cxx=11
--disable-python
${PLUMED_CONFIG_MPI} ${PLUMED_CONFIG_MPI}
${PLUMED_CONFIG_OMP} ${PLUMED_CONFIG_OMP}
CXX=${PLUMED_CONFIG_CXX} CXX=${PLUMED_CONFIG_CXX}
@ -78,12 +140,12 @@ if(DOWNLOAD_PLUMED)
add_library(LAMMPS::PLUMED UNKNOWN IMPORTED) add_library(LAMMPS::PLUMED UNKNOWN IMPORTED)
add_dependencies(LAMMPS::PLUMED plumed_build) add_dependencies(LAMMPS::PLUMED plumed_build)
if(PLUMED_MODE STREQUAL "STATIC") if(PLUMED_MODE STREQUAL "STATIC")
set_target_properties(LAMMPS::PLUMED PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/libplumed.a INTERFACE_LINK_LIBRARIES "${PLUMED_LINK_LIBS};${CMAKE_DL_LIBS}") set_target_properties(LAMMPS::PLUMED PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/${CMAKE_STATIC_LIBRARY_PREFIX}plumed${CMAKE_STATIC_LIBRARY_SUFFIX} INTERFACE_LINK_LIBRARIES "${PLUMED_LINK_LIBS};${CMAKE_DL_LIBS}")
elseif(PLUMED_MODE STREQUAL "SHARED") elseif(PLUMED_MODE STREQUAL "SHARED")
set_target_properties(LAMMPS::PLUMED PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/libplumed${CMAKE_SHARED_LIBRARY_SUFFIX} INTERFACE_LINK_LIBRARIES "${INSTALL_DIR}/lib/libplumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX};${CMAKE_DL_LIBS}") set_target_properties(LAMMPS::PLUMED PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/${CMAKE_SHARED_LIBRARY_PREFIX}plumed${CMAKE_SHARED_LIBRARY_SUFFIX} INTERFACE_LINK_LIBRARIES "${INSTALL_DIR}/lib/${CMAKE_SHARED_LIBRARY_PREFIX}plumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX};${CMAKE_DL_LIBS}")
elseif(PLUMED_MODE STREQUAL "RUNTIME") elseif(PLUMED_MODE STREQUAL "RUNTIME")
set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_COMPILE_DEFINITIONS "__PLUMED_DEFAULT_KERNEL=${INSTALL_DIR}/lib/libplumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX}") set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_COMPILE_DEFINITIONS "__PLUMED_DEFAULT_KERNEL=${INSTALL_DIR}/lib/${CMAKE_SHARED_LIBRARY_PREFIX}plumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX}")
set_target_properties(LAMMPS::PLUMED PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/libplumedWrapper.a INTERFACE_LINK_LIBRARIES "${CMAKE_DL_LIBS}") set_target_properties(LAMMPS::PLUMED PROPERTIES IMPORTED_LOCATION ${INSTALL_DIR}/lib/${CMAKE_STATIC_LIBRARY_PREFIX}plumedWrapper${CMAKE_STATIC_LIBRARY_SUFFIX} INTERFACE_LINK_LIBRARIES "${CMAKE_DL_LIBS}")
endif() endif()
set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${INSTALL_DIR}/include) set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_INCLUDE_DIRECTORIES ${INSTALL_DIR}/include)
file(MAKE_DIRECTORY ${INSTALL_DIR}/include) file(MAKE_DIRECTORY ${INSTALL_DIR}/include)
@ -96,10 +158,12 @@ else()
elseif(PLUMED_MODE STREQUAL "SHARED") elseif(PLUMED_MODE STREQUAL "SHARED")
include(${PLUMED_LIBDIR}/plumed/src/lib/Plumed.cmake.shared) include(${PLUMED_LIBDIR}/plumed/src/lib/Plumed.cmake.shared)
elseif(PLUMED_MODE STREQUAL "RUNTIME") elseif(PLUMED_MODE STREQUAL "RUNTIME")
set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_COMPILE_DEFINITIONS "__PLUMED_DEFAULT_KERNEL=${PLUMED_LIBDIR}/libplumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX}") set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_COMPILE_DEFINITIONS "__PLUMED_DEFAULT_KERNEL=${PLUMED_LIBDIR}/${CMAKE_SHARED_LIBRARY_PREFIX}plumedKernel${CMAKE_SHARED_LIBRARY_SUFFIX}")
include(${PLUMED_LIBDIR}/plumed/src/lib/Plumed.cmake.runtime) include(${PLUMED_LIBDIR}/plumed/src/lib/Plumed.cmake.runtime)
endif() endif()
set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_LINK_LIBRARIES "${PLUMED_LOAD}") set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_LINK_LIBRARIES "${PLUMED_LOAD}")
set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_INCLUDE_DIRECTORIES "${PLUMED_INCLUDE_DIRS}") set_target_properties(LAMMPS::PLUMED PROPERTIES INTERFACE_INCLUDE_DIRECTORIES "${PLUMED_INCLUDE_DIRS}")
endif() endif()
endif()
target_link_libraries(lammps PRIVATE LAMMPS::PLUMED) target_link_libraries(lammps PRIVATE LAMMPS::PLUMED)
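A short usage sketch for the options above, written in the preset style used elsewhere in this diff (the variable names come from this module; the chosen mode is only an example):

    # sketch: download and build PLUMED, linked in "runtime" mode;
    # "static" and "shared" are the other accepted PLUMED_MODE values
    set(PKG_PLUMED ON CACHE BOOL "" FORCE)
    set(DOWNLOAD_PLUMED ON CACHE BOOL "" FORCE)
    set(PLUMED_MODE "runtime" CACHE STRING "" FORCE)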
@ -1,8 +1,28 @@
if(CMAKE_VERSION VERSION_LESS 3.12) if(CMAKE_VERSION VERSION_LESS 3.12)
if(NOT PYTHON_VERSION_STRING)
set(Python_ADDITIONAL_VERSIONS 3.12 3.11 3.10 3.9 3.8 3.7 3.6)
# search for interpreter first, so we have a consistent library
find_package(PythonInterp) # Deprecated since version 3.12
if(PYTHONINTERP_FOUND)
set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
endif()
endif()
# search for the library matching the selected interpreter
set(Python_ADDITIONAL_VERSIONS ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR})
find_package(PythonLibs REQUIRED) # Deprecated since version 3.12 find_package(PythonLibs REQUIRED) # Deprecated since version 3.12
if(NOT (PYTHON_VERSION_STRING STREQUAL PYTHONLIBS_VERSION_STRING))
message(FATAL_ERROR "Python Library version ${PYTHONLIBS_VERSION_STRING} does not match Interpreter version ${PYTHON_VERSION_STRING}")
endif()
target_include_directories(lammps PRIVATE ${PYTHON_INCLUDE_DIRS}) target_include_directories(lammps PRIVATE ${PYTHON_INCLUDE_DIRS})
target_link_libraries(lammps PRIVATE ${PYTHON_LIBRARIES}) target_link_libraries(lammps PRIVATE ${PYTHON_LIBRARIES})
else() else()
if(NOT Python_INTERPRETER)
# backward compatibility
if(PYTHON_EXECUTABLE)
set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
endif()
find_package(Python COMPONENTS Interpreter)
endif()
find_package(Python REQUIRED COMPONENTS Interpreter Development) find_package(Python REQUIRED COMPONENTS Interpreter Development)
target_link_libraries(lammps PRIVATE Python::Python) target_link_libraries(lammps PRIVATE Python::Python)
endif() endif()
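The version check in the old-CMake branch above guards against mixing the interpreter of one Python installation with the development library of another. A hedged sketch of pinning a single interpreter up front (the path is a placeholder, not a project default):

    # sketch: pin one interpreter so the detected library matches it;
    # /usr/bin/python3 is a placeholder path
    set(Python_EXECUTABLE "/usr/bin/python3" CACHE FILEPATH "" FORCE)
    # legacy spelling honored by the backward-compatibility branch above
    set(PYTHON_EXECUTABLE "${Python_EXECUTABLE}" CACHE FILEPATH "" FORCE)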
@ -18,6 +18,8 @@ if(DOWNLOAD_SCAFACOS)
set(SCAFACOS_MD5 "bd46d74e3296bd8a444d731bb10c1738" CACHE STRING "MD5 checksum of SCAFACOS tarball") set(SCAFACOS_MD5 "bd46d74e3296bd8a444d731bb10c1738" CACHE STRING "MD5 checksum of SCAFACOS tarball")
mark_as_advanced(SCAFACOS_URL) mark_as_advanced(SCAFACOS_URL)
mark_as_advanced(SCAFACOS_MD5) mark_as_advanced(SCAFACOS_MD5)
GetFallbackURL(SCAFACOS_URL SCAFACOS_FALLBACK)
# version 1.0.1 needs a patch to compile and link cleanly with GCC 10 and later. # version 1.0.1 needs a patch to compile and link cleanly with GCC 10 and later.
file(DOWNLOAD ${LAMMPS_THIRDPARTY_URL}/scafacos-1.0.1-fix.diff ${CMAKE_CURRENT_BINARY_DIR}/scafacos-1.0.1.fix.diff file(DOWNLOAD ${LAMMPS_THIRDPARTY_URL}/scafacos-1.0.1-fix.diff ${CMAKE_CURRENT_BINARY_DIR}/scafacos-1.0.1.fix.diff
@ -30,7 +32,7 @@ if(DOWNLOAD_SCAFACOS)
include(ExternalProject) include(ExternalProject)
ExternalProject_Add(scafacos_build ExternalProject_Add(scafacos_build
URL ${SCAFACOS_URL} URL ${SCAFACOS_URL} ${SCAFACOS_FALLBACK}
URL_MD5 ${SCAFACOS_MD5} URL_MD5 ${SCAFACOS_MD5}
PATCH_COMMAND patch -p1 < ${CMAKE_CURRENT_BINARY_DIR}/scafacos-1.0.1.fix.diff PATCH_COMMAND patch -p1 < ${CMAKE_CURRENT_BINARY_DIR}/scafacos-1.0.1.fix.diff
CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR> --disable-doc CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR> --disable-doc
@ -1,4 +1,9 @@
find_package(VTK REQUIRED NO_MODULE) find_package(VTK REQUIRED NO_MODULE)
include(${VTK_USE_FILE})
target_compile_definitions(lammps PRIVATE -DLAMMPS_VTK) target_compile_definitions(lammps PRIVATE -DLAMMPS_VTK)
if (VTK_MAJOR_VERSION VERSION_LESS 9.0)
include(${VTK_USE_FILE})
target_link_libraries(lammps PRIVATE ${VTK_LIBRARIES}) target_link_libraries(lammps PRIVATE ${VTK_LIBRARIES})
else()
target_link_libraries(lammps PRIVATE VTK::CommonCore VTK::IOCore VTK::CommonDataModel VTK::IOXML VTK::IOLegacy VTK::IOParallelXML)
vtk_module_autoinit(TARGETS lammps MODULES VTK::CommonCore VTK::IOCore VTK::CommonDataModel VTK::IOXML VTK::IOLegacy VTK::IOParallelXML)
endif()
@ -1,5 +1,5 @@
function(FindStyleHeaders path style_class file_pattern headers) function(FindStyleHeaders path style_class file_pattern headers)
file(GLOB files "${path}/${file_pattern}*.h") file(GLOB files ${CONFIGURE_DEPENDS} "${path}/${file_pattern}*.h")
get_property(hlist GLOBAL PROPERTY ${headers}) get_property(hlist GLOBAL PROPERTY ${headers})
foreach(file_name ${files}) foreach(file_name ${files})
@ -104,6 +104,7 @@ function(RegisterStyles search_path)
FindStyleHeaders(${search_path} DIHEDRAL_CLASS dihedral_ DIHEDRAL ) # dihedral ) # force FindStyleHeaders(${search_path} DIHEDRAL_CLASS dihedral_ DIHEDRAL ) # dihedral ) # force
FindStyleHeaders(${search_path} DUMP_CLASS dump_ DUMP ) # dump ) # output write_dump FindStyleHeaders(${search_path} DUMP_CLASS dump_ DUMP ) # dump ) # output write_dump
FindStyleHeaders(${search_path} FIX_CLASS fix_ FIX ) # fix ) # modify FindStyleHeaders(${search_path} FIX_CLASS fix_ FIX ) # fix ) # modify
FindStyleHeaders(${search_path} GRAN_SUB_MOD_CLASS gran_sub_mod_ GRAN_SUB_MOD ) # gran_sub_mod ) # granular_model
FindStyleHeaders(${search_path} IMPROPER_CLASS improper_ IMPROPER ) # improper ) # force FindStyleHeaders(${search_path} IMPROPER_CLASS improper_ IMPROPER ) # improper ) # force
FindStyleHeaders(${search_path} INTEGRATE_CLASS "[^.]" INTEGRATE ) # integrate ) # update FindStyleHeaders(${search_path} INTEGRATE_CLASS "[^.]" INTEGRATE ) # integrate ) # update
FindStyleHeaders(${search_path} KSPACE_CLASS "[^.]" KSPACE ) # kspace ) # force FindStyleHeaders(${search_path} KSPACE_CLASS "[^.]" KSPACE ) # kspace ) # force
@ -127,6 +128,7 @@ function(RegisterStylesExt search_path extension sources)
FindStyleHeadersExt(${search_path} DIHEDRAL_CLASS ${extension} DIHEDRAL ${sources}) FindStyleHeadersExt(${search_path} DIHEDRAL_CLASS ${extension} DIHEDRAL ${sources})
FindStyleHeadersExt(${search_path} DUMP_CLASS ${extension} DUMP ${sources}) FindStyleHeadersExt(${search_path} DUMP_CLASS ${extension} DUMP ${sources})
FindStyleHeadersExt(${search_path} FIX_CLASS ${extension} FIX ${sources}) FindStyleHeadersExt(${search_path} FIX_CLASS ${extension} FIX ${sources})
FindStyleHeadersExt(${search_path} GRAN_SUB_MOD_CLASS ${extension} GRAN_SUB_MOD ${sources})
FindStyleHeadersExt(${search_path} IMPROPER_CLASS ${extension} IMPROPER ${sources}) FindStyleHeadersExt(${search_path} IMPROPER_CLASS ${extension} IMPROPER ${sources})
FindStyleHeadersExt(${search_path} INTEGRATE_CLASS ${extension} INTEGRATE ${sources}) FindStyleHeadersExt(${search_path} INTEGRATE_CLASS ${extension} INTEGRATE ${sources})
FindStyleHeadersExt(${search_path} KSPACE_CLASS ${extension} KSPACE ${sources}) FindStyleHeadersExt(${search_path} KSPACE_CLASS ${extension} KSPACE ${sources})
@ -151,6 +153,7 @@ function(GenerateStyleHeaders output_path)
GenerateStyleHeader(${output_path} DIHEDRAL dihedral ) # force GenerateStyleHeader(${output_path} DIHEDRAL dihedral ) # force
GenerateStyleHeader(${output_path} DUMP dump ) # output write_dump GenerateStyleHeader(${output_path} DUMP dump ) # output write_dump
GenerateStyleHeader(${output_path} FIX fix ) # modify GenerateStyleHeader(${output_path} FIX fix ) # modify
GenerateStyleHeader(${output_path} GRAN_SUB_MOD gran_sub_mod ) # granular_model
GenerateStyleHeader(${output_path} IMPROPER improper ) # force GenerateStyleHeader(${output_path} IMPROPER improper ) # force
GenerateStyleHeader(${output_path} INTEGRATE integrate ) # update GenerateStyleHeader(${output_path} INTEGRATE integrate ) # update
GenerateStyleHeader(${output_path} KSPACE kspace ) # force GenerateStyleHeader(${output_path} KSPACE kspace ) # force
@ -184,7 +187,7 @@ endfunction(DetectBuildSystemConflict)
function(FindPackagesHeaders path style_class file_pattern headers) function(FindPackagesHeaders path style_class file_pattern headers)
file(GLOB files "${path}/${file_pattern}*.h") file(GLOB files ${CONFIGURE_DEPENDS} "${path}/${file_pattern}*.h")
get_property(plist GLOBAL PROPERTY ${headers}) get_property(plist GLOBAL PROPERTY ${headers})
foreach(file_name ${files}) foreach(file_name ${files})
@ -6,7 +6,7 @@ if(ENABLE_TESTING)
find_program(VALGRIND_BINARY NAMES valgrind) find_program(VALGRIND_BINARY NAMES valgrind)
# generate custom suppression file # generate custom suppression file
file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/lammps.supp "\n") file(WRITE ${CMAKE_CURRENT_BINARY_DIR}/lammps.supp "\n")
file(GLOB VALGRIND_SUPPRESSION_FILES ${LAMMPS_TOOLS_DIR}/valgrind/[^.]*.supp) file(GLOB VALGRIND_SUPPRESSION_FILES ${CONFIGURE_DEPENDS} ${LAMMPS_TOOLS_DIR}/valgrind/[^.]*.supp)
foreach(SUPP ${VALGRIND_SUPPRESSION_FILES}) foreach(SUPP ${VALGRIND_SUPPRESSION_FILES})
file(READ ${SUPP} SUPPRESSIONS) file(READ ${SUPP} SUPPRESSIONS)
file(APPEND ${CMAKE_CURRENT_BINARY_DIR}/lammps.supp "${SUPPRESSIONS}") file(APPEND ${CMAKE_CURRENT_BINARY_DIR}/lammps.supp "${SUPPRESSIONS}")
@ -18,29 +18,33 @@ if(ENABLE_TESTING)
# we need to build and link a LOT of tester executables, so it is worth checking if # we need to build and link a LOT of tester executables, so it is worth checking if
# a faster linker is available. requires GNU or Clang compiler, newer CMake. # a faster linker is available. requires GNU or Clang compiler, newer CMake.
# also only verified with Fedora Linux > 30 and Ubuntu <= 18.04 (Ubuntu 20.04 fails) # also only verified with Fedora Linux > 30 and Ubuntu 18.04 or 22.04+ (Ubuntu 20.04 fails)
if((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND (CMAKE_VERSION VERSION_GREATER_EQUAL 3.13) if((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND (CMAKE_VERSION VERSION_GREATER_EQUAL 3.13)
AND ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") AND ((CMAKE_CXX_COMPILER_ID STREQUAL "GNU") OR (CMAKE_CXX_COMPILER_ID STREQUAL "Clang")))
OR (CMAKE_CXX_COMPILER_ID STREQUAL "Clang"))) if(((CMAKE_LINUX_DISTRO STREQUAL "Ubuntu") AND
if(((CMAKE_LINUX_DISTRO STREQUAL "Ubuntu") AND (CMAKE_DISTRO_VERSION VERSION_LESS_EQUAL 18.04)) ((CMAKE_DISTRO_VERSION VERSION_LESS_EQUAL 18.04) OR (CMAKE_DISTRO_VERSION VERSION_GREATER_EQUAL 22.04)))
OR ((CMAKE_LINUX_DISTRO STREQUAL "Fedora") AND (CMAKE_DISTRO_VERSION VERSION_GREATER 30))) OR ((CMAKE_LINUX_DISTRO STREQUAL "Fedora") AND (CMAKE_DISTRO_VERSION VERSION_GREATER 30)))
include(CheckCXXCompilerFlag) include(CheckCXXCompilerFlag)
set(CMAKE_CUSTOM_LINKER_DEFAULT default) set(CMAKE_CUSTOM_LINKER_DEFAULT default)
check_cxx_compiler_flag(-fuse-ld=mold HAVE_MOLD_LINKER_FLAG)
check_cxx_compiler_flag(-fuse-ld=lld HAVE_LLD_LINKER_FLAG) check_cxx_compiler_flag(-fuse-ld=lld HAVE_LLD_LINKER_FLAG)
check_cxx_compiler_flag(-fuse-ld=gold HAVE_GOLD_LINKER_FLAG) check_cxx_compiler_flag(-fuse-ld=gold HAVE_GOLD_LINKER_FLAG)
check_cxx_compiler_flag(-fuse-ld=bfd HAVE_BFD_LINKER_FLAG) check_cxx_compiler_flag(-fuse-ld=bfd HAVE_BFD_LINKER_FLAG)
find_program(HAVE_MOLD_LINKER_BIN ld.mold)
find_program(HAVE_LLD_LINKER_BIN lld ld.lld) find_program(HAVE_LLD_LINKER_BIN lld ld.lld)
find_program(HAVE_GOLD_LINKER_BIN ld.gold) find_program(HAVE_GOLD_LINKER_BIN ld.gold)
find_program(HAVE_BFD_LINKER_BIN ld.bfd) find_program(HAVE_BFD_LINKER_BIN ld.bfd)
if(HAVE_LLD_LINKER_FLAG AND HAVE_LLD_LINKER_BIN) if(HAVE_MOLD_LINKER_FLAG AND HAVE_MOLD_LINKER_BIN)
set(CMAKE_CUSTOM_LINKER_DEFAULT mold)
elseif(HAVE_LLD_LINKER_FLAG AND HAVE_LLD_LINKER_BIN)
set(CMAKE_CUSTOM_LINKER_DEFAULT lld) set(CMAKE_CUSTOM_LINKER_DEFAULT lld)
elseif(HAVE_GOLD_LINKER_FLAG AND HAVE_GOLD_LINKER_BIN) elseif(HAVE_GOLD_LINKER_FLAG AND HAVE_GOLD_LINKER_BIN)
set(CMAKE_CUSTOM_LINKER_DEFAULT gold) set(CMAKE_CUSTOM_LINKER_DEFAULT gold)
elseif(HAVE_BFD_LINKER_FLAG AND HAVE_BFD_LINKER_BIN) elseif(HAVE_BFD_LINKER_FLAG AND HAVE_BFD_LINKER_BIN)
set(CMAKE_CUSTOM_LINKER_DEFAULT bfd) set(CMAKE_CUSTOM_LINKER_DEFAULT bfd)
endif() endif()
set(CMAKE_CUSTOM_LINKER_VALUES lld gold bfd default) set(CMAKE_CUSTOM_LINKER_VALUES mold lld gold bfd default)
set(CMAKE_CUSTOM_LINKER ${CMAKE_CUSTOM_LINKER_DEFAULT} CACHE STRING "Choose a custom linker for faster linking (lld, gold, bfd, default)") set(CMAKE_CUSTOM_LINKER ${CMAKE_CUSTOM_LINKER_DEFAULT} CACHE STRING "Choose a custom linker for faster linking (mold, lld, gold, bfd, default)")
validate_option(CMAKE_CUSTOM_LINKER CMAKE_CUSTOM_LINKER_VALUES) validate_option(CMAKE_CUSTOM_LINKER CMAKE_CUSTOM_LINKER_VALUES)
mark_as_advanced(CMAKE_CUSTOM_LINKER) mark_as_advanced(CMAKE_CUSTOM_LINKER)
if(NOT "${CMAKE_CUSTOM_LINKER}" STREQUAL "default") if(NOT "${CMAKE_CUSTOM_LINKER}" STREQUAL "default")
@ -26,13 +26,15 @@ if(BUILD_TOOLS)
enable_language(C) enable_language(C)
get_filename_component(MSI2LMP_SOURCE_DIR ${LAMMPS_TOOLS_DIR}/msi2lmp/src ABSOLUTE) get_filename_component(MSI2LMP_SOURCE_DIR ${LAMMPS_TOOLS_DIR}/msi2lmp/src ABSOLUTE)
file(GLOB MSI2LMP_SOURCES ${MSI2LMP_SOURCE_DIR}/[^.]*.c) file(GLOB MSI2LMP_SOURCES ${CONFIGURE_DEPENDS} ${MSI2LMP_SOURCE_DIR}/[^.]*.c)
add_executable(msi2lmp ${MSI2LMP_SOURCES}) add_executable(msi2lmp ${MSI2LMP_SOURCES})
if(STANDARD_MATH_LIB) if(STANDARD_MATH_LIB)
target_link_libraries(msi2lmp PRIVATE ${STANDARD_MATH_LIB}) target_link_libraries(msi2lmp PRIVATE ${STANDARD_MATH_LIB})
endif() endif()
install(TARGETS msi2lmp DESTINATION ${CMAKE_INSTALL_BINDIR}) install(TARGETS msi2lmp DESTINATION ${CMAKE_INSTALL_BINDIR})
install(FILES ${LAMMPS_DOC_DIR}/msi2lmp.1 DESTINATION ${CMAKE_INSTALL_MANDIR}/man1) install(FILES ${LAMMPS_DOC_DIR}/msi2lmp.1 DESTINATION ${CMAKE_INSTALL_MANDIR}/man1)
add_subdirectory(${LAMMPS_TOOLS_DIR}/phonon ${CMAKE_BINARY_DIR}/phana_build)
endif() endif()
if(BUILD_LAMMPS_SHELL) if(BUILD_LAMMPS_SHELL)
@ -50,12 +52,16 @@ if(BUILD_LAMMPS_SHELL)
add_executable(lammps-shell ${LAMMPS_TOOLS_DIR}/lammps-shell/lammps-shell.cpp ${ICON_RC_FILE}) add_executable(lammps-shell ${LAMMPS_TOOLS_DIR}/lammps-shell/lammps-shell.cpp ${ICON_RC_FILE})
target_include_directories(lammps-shell PRIVATE ${LAMMPS_TOOLS_DIR}/lammps-shell) target_include_directories(lammps-shell PRIVATE ${LAMMPS_TOOLS_DIR}/lammps-shell)
target_link_libraries(lammps-shell PRIVATE lammps PkgConfig::READLINE)
# workaround for broken readline pkg-config file on FreeBSD # workaround for broken readline pkg-config file on FreeBSD
if(CMAKE_SYSTEM_NAME STREQUAL "FreeBSD") if(CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
target_include_directories(lammps-shell PRIVATE /usr/local/include) target_include_directories(lammps-shell PRIVATE /usr/local/include)
endif() endif()
target_link_libraries(lammps-shell PRIVATE lammps PkgConfig::READLINE) if(CMAKE_SYSTEM_NAME STREQUAL "LinuxMUSL")
pkg_check_modules(TERMCAP IMPORTED_TARGET REQUIRED termcap)
target_link_libraries(lammps-shell PRIVATE lammps PkgConfig::TERMCAP)
endif()
install(TARGETS lammps-shell EXPORT LAMMPS_Targets DESTINATION ${CMAKE_INSTALL_BINDIR}) install(TARGETS lammps-shell EXPORT LAMMPS_Targets DESTINATION ${CMAKE_INSTALL_BINDIR})
install(DIRECTORY ${LAMMPS_TOOLS_DIR}/lammps-shell/icons DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/) install(DIRECTORY ${LAMMPS_TOOLS_DIR}/lammps-shell/icons DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/)
install(FILES ${LAMMPS_TOOLS_DIR}/lammps-shell/lammps-shell.desktop DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/applications/) install(FILES ${LAMMPS_TOOLS_DIR}/lammps-shell/lammps-shell.desktop DESTINATION ${CMAKE_INSTALL_DATAROOTDIR}/applications/)
@ -3,6 +3,7 @@
set(ALL_PACKAGES set(ALL_PACKAGES
ADIOS ADIOS
AMOEBA
ASPHERE ASPHERE
ATC ATC
AWPMD AWPMD
@ -11,7 +12,7 @@ set(ALL_PACKAGES
BPM BPM
BROWNIAN BROWNIAN
CG-DNA CG-DNA
CG-SDK CG-SPICA
CLASS2 CLASS2
COLLOID COLLOID
COLVARS COLVARS
@ -42,7 +43,7 @@ set(ALL_PACKAGES
KOKKOS KOKKOS
KSPACE KSPACE
LATBOLTZ LATBOLTZ
LATTE LEPTON
MACHDYN MACHDYN
MANIFOLD MANIFOLD
MANYBODY MANYBODY
@ -55,6 +56,7 @@ set(ALL_PACKAGES
ML-HDNNP ML-HDNNP
ML-IAP ML-IAP
ML-PACE ML-PACE
ML-POD
ML-QUIP ML-QUIP
ML-RANN ML-RANN
ML-SNAP ML-SNAP
@ -5,6 +5,7 @@
set(ALL_PACKAGES set(ALL_PACKAGES
ADIOS ADIOS
AMOEBA
ASPHERE ASPHERE
ATC ATC
AWPMD AWPMD
@ -13,7 +14,7 @@ set(ALL_PACKAGES
BPM BPM
BROWNIAN BROWNIAN
CG-DNA CG-DNA
CG-SDK CG-SPICA
CLASS2 CLASS2
COLLOID COLLOID
COLVARS COLVARS
@ -44,7 +45,7 @@ set(ALL_PACKAGES
KOKKOS KOKKOS
KSPACE KSPACE
LATBOLTZ LATBOLTZ
LATTE LEPTON
MACHDYN MACHDYN
MANIFOLD MANIFOLD
MANYBODY MANYBODY
@ -57,6 +58,7 @@ set(ALL_PACKAGES
ML-HDNNP ML-HDNNP
ML-IAP ML-IAP
ML-PACE ML-PACE
ML-POD
ML-QUIP ML-QUIP
ML-RANN ML-RANN
ML-SNAP ML-SNAP
@ -3,6 +3,13 @@
# prefer flang over gfortran, if available # prefer flang over gfortran, if available
find_program(CLANG_FORTRAN NAMES flang gfortran f95) find_program(CLANG_FORTRAN NAMES flang gfortran f95)
set(ENV{OMPI_FC} ${CLANG_FORTRAN}) set(ENV{OMPI_FC} ${CLANG_FORTRAN})
get_filename_component(_tmp_fc ${CLANG_FORTRAN} NAME)
if (_tmp_fc STREQUAL "flang")
set(FC_STD_VERSION "-std=f2018")
set(BUILD_MPI OFF)
else()
set(FC_STD_VERSION "-std=f2003")
endif()
set(CMAKE_CXX_COMPILER "clang++" CACHE STRING "" FORCE) set(CMAKE_CXX_COMPILER "clang++" CACHE STRING "" FORCE)
set(CMAKE_C_COMPILER "clang" CACHE STRING "" FORCE) set(CMAKE_C_COMPILER "clang" CACHE STRING "" FORCE)
@ -10,9 +17,9 @@ set(CMAKE_Fortran_COMPILER ${CLANG_FORTRAN} CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS_DEBUG "-Wall -Wextra -g" CACHE STRING "" FORCE) set(CMAKE_CXX_FLAGS_DEBUG "-Wall -Wextra -g" CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG" CACHE STRING "" FORCE)
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_CXX_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE)
set(CMAKE_Fortran_FLAGS_DEBUG "-Wall -Wextra -g -std=f2003" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_DEBUG "-Wall -Wextra -g ${FC_STD_VERSION}" CACHE STRING "" FORCE)
set(CMAKE_Fortran_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG -std=f2003" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG ${FC_STD_VERSION}" CACHE STRING "" FORCE)
set(CMAKE_Fortran_FLAGS_RELEASE "-O3 -DNDEBUG -std=f2003" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_RELEASE "-O3 -DNDEBUG ${FC_STD_VERSION}" CACHE STRING "" FORCE)
set(CMAKE_C_FLAGS_DEBUG "-Wall -Wextra -g" CACHE STRING "" FORCE) set(CMAKE_C_FLAGS_DEBUG "-Wall -Wextra -g" CACHE STRING "" FORCE)
set(CMAKE_C_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_C_FLAGS_RELWITHDEBINFO "-Wall -Wextra -g -O2 -DNDEBUG" CACHE STRING "" FORCE)
set(CMAKE_C_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE) set(CMAKE_C_FLAGS_RELEASE "-O3 -DNDEBUG" CACHE STRING "" FORCE)
@ -21,10 +28,3 @@ set(MPI_CXX "clang++" CACHE STRING "" FORCE)
set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE) set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE) unset(HAVE_OMP_H_INCLUDE CACHE)
set(OpenMP_C "clang" CACHE STRING "" FORCE)
set(OpenMP_C_FLAGS "-fopenmp" CACHE STRING "" FORCE)
set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_CXX "clang++" CACHE STRING "" FORCE)
set(OpenMP_CXX_FLAGS "-fopenmp" CACHE STRING "" FORCE)
set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_omp_LIBRARY "libomp.so" CACHE PATH "" FORCE)
@ -1,14 +1,13 @@
# Preset that turns on packages with automatic downloads of sources or potentials. # Preset that turns on packages with automatic downloads of sources or potentials.
# Compilation of libraries like Plumed or ScaFaCoS can take a considerable amount of time. # Compilation of libraries like Plumed or ScaFaCoS can take a considerable amount of time.
set(ALL_PACKAGES KIM LATTE MSCG VORONOI PLUMED SCAFACOS MACHDYN MESONT MDI ML-PACE) set(ALL_PACKAGES KIM MSCG VORONOI PLUMED SCAFACOS MACHDYN MESONT MDI ML-PACE)
foreach(PKG ${ALL_PACKAGES}) foreach(PKG ${ALL_PACKAGES})
set(PKG_${PKG} ON CACHE BOOL "" FORCE) set(PKG_${PKG} ON CACHE BOOL "" FORCE)
endforeach() endforeach()
set(DOWNLOAD_KIM ON CACHE BOOL "" FORCE) set(DOWNLOAD_KIM ON CACHE BOOL "" FORCE)
set(DOWNLOAD_LATTE ON CACHE BOOL "" FORCE)
set(DOWNLOAD_MDI ON CACHE BOOL "" FORCE) set(DOWNLOAD_MDI ON CACHE BOOL "" FORCE)
set(DOWNLOAD_MSCG ON CACHE BOOL "" FORCE) set(DOWNLOAD_MSCG ON CACHE BOOL "" FORCE)
set(DOWNLOAD_VORO ON CACHE BOOL "" FORCE) set(DOWNLOAD_VORO ON CACHE BOOL "" FORCE)
@ -19,11 +19,3 @@ set(CMAKE_Fortran_FLAGS_DEBUG "-Wall -Og -g -std=f2003" CACHE STRING "" FORCE)
set(CMAKE_Fortran_FLAGS_RELWITHDEBINFO "-g -O2 -DNDEBUG -std=f2003" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_RELWITHDEBINFO "-g -O2 -DNDEBUG -std=f2003" CACHE STRING "" FORCE)
set(CMAKE_Fortran_FLAGS_RELEASE "-O3 -DNDEBUG -std=f2003" CACHE STRING "" FORCE) set(CMAKE_Fortran_FLAGS_RELEASE "-O3 -DNDEBUG -std=f2003" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE) unset(HAVE_OMP_H_INCLUDE CACHE)
set(OpenMP_C "gcc" CACHE STRING "" FORCE)
set(OpenMP_C_FLAGS "-fopenmp" CACHE STRING "" FORCE)
set(OpenMP_C_LIB_NAMES "gomp" CACHE STRING "" FORCE)
set(OpenMP_CXX "g++" CACHE STRING "" FORCE)
set(OpenMP_CXX_FLAGS "-fopenmp" CACHE STRING "" FORCE)
set(OpenMP_CXX_LIB_NAMES "gomp" CACHE STRING "" FORCE)
set(OpenMP_omp_LIBRARY "libgomp.so" CACHE PATH "" FORCE)
@ -1,4 +1,4 @@
# preset that will enable Intel compilers with support for MPI and OpenMP (on Linux boxes) # preset that will enable the classic Intel compilers with support for MPI and OpenMP (on Linux boxes)
set(CMAKE_CXX_COMPILER "icpc" CACHE STRING "" FORCE) set(CMAKE_CXX_COMPILER "icpc" CACHE STRING "" FORCE)
set(CMAKE_C_COMPILER "icc" CACHE STRING "" FORCE) set(CMAKE_C_COMPILER "icc" CACHE STRING "" FORCE)
@ -18,11 +18,11 @@ set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE) unset(HAVE_OMP_H_INCLUDE CACHE)
set(OpenMP_C "icc" CACHE STRING "" FORCE) set(OpenMP_C "icc" CACHE STRING "" FORCE)
set(OpenMP_C_FLAGS "-qopenmp" CACHE STRING "" FORCE) set(OpenMP_C_FLAGS "-qopenmp -qopenmp-simd" CACHE STRING "" FORCE)
set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE) set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_CXX "icpc" CACHE STRING "" FORCE) set(OpenMP_CXX "icpc" CACHE STRING "" FORCE)
set(OpenMP_CXX_FLAGS "-qopenmp" CACHE STRING "" FORCE) set(OpenMP_CXX_FLAGS "-qopenmp -qopenmp-simd" CACHE STRING "" FORCE)
set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE) set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_Fortran_FLAGS "-qopenmp" CACHE STRING "" FORCE) set(OpenMP_Fortran_FLAGS "-qopenmp -qopenmp-simd" CACHE STRING "" FORCE)
set(OpenMP_omp_LIBRARY "libiomp5.so" CACHE PATH "" FORCE) set(OpenMP_omp_LIBRARY "libiomp5.so" CACHE PATH "" FORCE)
@ -1,4 +1,5 @@
set(WIN_PACKAGES set(WIN_PACKAGES
AMOEBA
ASPHERE ASPHERE
ATC ATC
AWPMD AWPMD
@ -7,7 +8,7 @@ set(WIN_PACKAGES
BPM BPM
BROWNIAN BROWNIAN
CG-DNA CG-DNA
CG-SDK CG-SPICA
CLASS2 CLASS2
COLLOID COLLOID
COLVARS COLVARS
@ -34,7 +35,7 @@ set(WIN_PACKAGES
INTEL INTEL
INTERLAYER INTERLAYER
KSPACE KSPACE
LATTE LEPTON
MACHDYN MACHDYN
MANIFOLD MANIFOLD
MANYBODY MANYBODY
@ -46,6 +47,7 @@ set(WIN_PACKAGES
MISC MISC
ML-HDNNP ML-HDNNP
ML-IAP ML-IAP
ML-POD
ML-RANN ML-RANN
ML-SNAP ML-SNAP
MOFFF MOFFF
@ -3,13 +3,14 @@
# are removed. The resulting binary should be able to run most inputs. # are removed. The resulting binary should be able to run most inputs.
set(ALL_PACKAGES set(ALL_PACKAGES
AMOEBA
ASPHERE ASPHERE
BOCS BOCS
BODY BODY
BPM BPM
BROWNIAN BROWNIAN
CG-DNA CG-DNA
CG-SDK CG-SPICA
CLASS2 CLASS2
COLLOID COLLOID
COLVARS COLVARS
@ -34,12 +35,15 @@ set(ALL_PACKAGES
GRANULAR GRANULAR
INTERLAYER INTERLAYER
KSPACE KSPACE
LEPTON
MACHDYN MACHDYN
MANYBODY MANYBODY
MC MC
MEAM MEAM
MESONT
MISC MISC
ML-IAP ML-IAP
ML-POD
ML-SNAP ML-SNAP
MOFFF MOFFF
MOLECULE MOLECULE
@ -12,10 +12,9 @@ set(PACKAGES_WITH_LIB
KIM KIM
KOKKOS KOKKOS
LATBOLTZ LATBOLTZ
LATTE LEPTON
MACHDYN MACHDYN
MDI MDI
MESONT
ML-HDNNP ML-HDNNP
ML-PACE ML-PACE
ML-QUIP ML-QUIP
@ -0,0 +1,9 @@
# preset that will enable Nvidia HPC SDK compilers with support for MPI and OpenMP (on Linux boxes)
set(CMAKE_CXX_COMPILER "nvc++" CACHE STRING "" FORCE)
set(CMAKE_C_COMPILER "nvc" CACHE STRING "" FORCE)
set(CMAKE_Fortran_COMPILER "nvfortran" CACHE STRING "" FORCE)
set(MPI_CXX "nvc++" CACHE STRING "" FORCE)
set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE)
@ -18,11 +18,11 @@ set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE) unset(HAVE_OMP_H_INCLUDE CACHE)
set(OpenMP_C "icx" CACHE STRING "" FORCE) set(OpenMP_C "icx" CACHE STRING "" FORCE)
set(OpenMP_C_FLAGS "-qopenmp" CACHE STRING "" FORCE) set(OpenMP_C_FLAGS "-qopenmp -qopenmp-simd" CACHE STRING "" FORCE)
set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE) set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_CXX "icpx" CACHE STRING "" FORCE) set(OpenMP_CXX "icpx" CACHE STRING "" FORCE)
set(OpenMP_CXX_FLAGS "-qopenmp" CACHE STRING "" FORCE) set(OpenMP_CXX_FLAGS "-qopenmp -qopenmp-simd" CACHE STRING "" FORCE)
set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE) set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_Fortran_FLAGS "-qopenmp" CACHE STRING "" FORCE) set(OpenMP_Fortran_FLAGS "-qopenmp -qopenmp-simd" CACHE STRING "" FORCE)
set(OpenMP_omp_LIBRARY "libiomp5.so" CACHE PATH "" FORCE) set(OpenMP_omp_LIBRARY "libiomp5.so" CACHE PATH "" FORCE)
@ -1,4 +1,4 @@
# preset that will restore gcc/g++ with support for MPI and OpenMP (on Linux boxes) # preset that will set gcc/g++ with extra warnings enabled and support for MPI and OpenMP (on Linux boxes)
set(CMAKE_CXX_COMPILER "g++" CACHE STRING "" FORCE) set(CMAKE_CXX_COMPILER "g++" CACHE STRING "" FORCE)
set(CMAKE_C_COMPILER "gcc" CACHE STRING "" FORCE) set(CMAKE_C_COMPILER "gcc" CACHE STRING "" FORCE)
@ -17,10 +17,3 @@ set(MPI_Fortran "gfortran" CACHE STRING "" FORCE)
set(MPI_Fortran_COMPILER "mpifort" CACHE STRING "" FORCE) set(MPI_Fortran_COMPILER "mpifort" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE) unset(HAVE_OMP_H_INCLUDE CACHE)
set(OpenMP_C "gcc" CACHE STRING "" FORCE)
set(OpenMP_C_FLAGS "-fopenmp" CACHE STRING "" FORCE)
set(OpenMP_C_LIB_NAMES "gomp" CACHE STRING "" FORCE)
set(OpenMP_CXX "g++" CACHE STRING "" FORCE)
set(OpenMP_CXX_FLAGS "-fopenmp" CACHE STRING "" FORCE)
set(OpenMP_CXX_LIB_NAMES "gomp" CACHE STRING "" FORCE)
set(OpenMP_omp_LIBRARY "libgomp.so" CACHE PATH "" FORCE)
@ -7,10 +7,3 @@ set(MPI_CXX "pgc++" CACHE STRING "" FORCE)
set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE) set(MPI_CXX_COMPILER "mpicxx" CACHE STRING "" FORCE)
unset(HAVE_OMP_H_INCLUDE CACHE) unset(HAVE_OMP_H_INCLUDE CACHE)
set(OpenMP_C "pgcc" CACHE STRING "" FORCE)
set(OpenMP_C_FLAGS "-mp" CACHE STRING "" FORCE)
set(OpenMP_C_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_CXX "pgc++" CACHE STRING "" FORCE)
set(OpenMP_CXX_FLAGS "-mp" CACHE STRING "" FORCE)
set(OpenMP_CXX_LIB_NAMES "omp" CACHE STRING "" FORCE)
set(OpenMP_omp_LIBRARY "libomp.so" CACHE PATH "" FORCE)
@ -1,11 +1,13 @@
set(WIN_PACKAGES set(WIN_PACKAGES
AMOEBA
ASPHERE ASPHERE
AWPMD
BOCS BOCS
BODY BODY
BPM BPM
BROWNIAN BROWNIAN
CG-DNA CG-DNA
CG-SDK CG-SPICA
CLASS2 CLASS2
COLLOID COLLOID
COLVARS COLVARS
@ -19,6 +21,7 @@ set(WIN_PACKAGES
DPD-SMOOTH DPD-SMOOTH
DRUDE DRUDE
EFF EFF
ELECTRODE
EXTRA-COMPUTE EXTRA-COMPUTE
EXTRA-DUMP EXTRA-DUMP
EXTRA-FIX EXTRA-FIX
@ -28,12 +31,15 @@ set(WIN_PACKAGES
GRANULAR GRANULAR
INTERLAYER INTERLAYER
KSPACE KSPACE
LEPTON
MANIFOLD MANIFOLD
MANYBODY MANYBODY
MC MC
MEAM MEAM
MESONT
MISC MISC
ML-IAP ML-IAP
ML-POD
ML-SNAP ML-SNAP
MOFFF MOFFF
MOLECULE MOLECULE
@ -38,16 +38,14 @@ endif
# override settings for PIP commands # override settings for PIP commands
# PIP_OPTIONS = --cert /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt --proxy http://proxy.mydomain.org # PIP_OPTIONS = --cert /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt --proxy http://proxy.mydomain.org
#SPHINXEXTRA = -j $(shell $(PYTHON) -c 'import multiprocessing;print(multiprocessing.cpu_count())') $(shell test -f $(BUILDDIR)/doxygen/xml/run.stamp && printf -- "-E")
# temporarily disable caching so that the hack for the sphinx-tabs extensions to get proper non-html output works # temporarily disable caching so that the hack for the sphinx-tabs extensions to get proper non-html output works
SPHINXEXTRA = -E -j $(shell $(PYTHON) -c 'import multiprocessing;print(multiprocessing.cpu_count())') SPHINXEXTRA = -j $(shell $(PYTHON) -c 'import multiprocessing;print(multiprocessing.cpu_count())')
# grab list of sources from doxygen config file. # grab list of sources from doxygen config file.
# we only want to use explicitly listed files. # we only want to use explicitly listed files.
DOXYFILES = $(shell sed -n -e 's/\#.*$$//' -e '/^ *INPUT \+=/,/^[A-Z_]\+ \+=/p' doxygen/Doxyfile.in | sed -e 's/@LAMMPS_SOURCE_DIR@/..\/src/g' -e 's/\\//g' -e 's/ \+/ /' -e 's/[A-Z_]\+ \+= *\(YES\|NO\|\)//') DOXYFILES = $(shell sed -n -e 's/\#.*$$//' -e '/^ *INPUT \+=/,/^[A-Z_]\+ \+=/p' doxygen/Doxyfile.in | sed -e 's/@LAMMPS_SOURCE_DIR@/..\/src/g' -e 's/\\//g' -e 's/ \+/ /' -e 's/[A-Z_]\+ \+= *\(YES\|NO\|\)//')
.PHONY: help clean-all clean clean-spelling epub mobi rst html pdf spelling anchor_check style_check char_check xmlgen fasthtml .PHONY: help clean-all clean clean-spelling epub mobi html pdf spelling anchor_check style_check char_check role_check xmlgen fasthtml
# ------------------------------------------ # ------------------------------------------
@ -88,15 +86,19 @@ html: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
@if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi @if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi
@$(MAKE) $(MFLAGS) -C graphviz all @$(MAKE) $(MFLAGS) -C graphviz all
@(\ @(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build -E $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\ sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\
ln -sf Manual.html html/index.html;\ ln -sf Manual.html html/index.html;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
echo "############################################" ;\ echo "############################################" ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
rst_anchor_check src/*.rst ;\ rst_anchor_check src/*.rst ;\
python $(BUILDDIR)/utils/check-packages.py -s ../src -d src ;\ $(PYTHON) $(BUILDDIR)/utils/check-packages.py -s ../src -d src ;\
env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\ env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\
python $(BUILDDIR)/utils/check-styles.py -s ../src -d src ;\ env LC_ALL=C grep -n ' :[a-z]\+`' $(RSTDIR)/*.rst ;\
env LC_ALL=C grep -n ' `[^`]\+<[a-z][^`]\+`[^_]' $(RSTDIR)/*.rst ;\
$(PYTHON) $(BUILDDIR)/utils/check-styles.py -s ../src -d src ;\
echo "############################################" ;\ echo "############################################" ;\
deactivate ;\ deactivate ;\
) )
@ -113,8 +115,10 @@ fasthtml: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
@$(MAKE) $(MFLAGS) -C graphviz all @$(MAKE) $(MFLAGS) -C graphviz all
@mkdir -p fasthtml @mkdir -p fasthtml
@(\ @(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build -j 4 -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/fasthtml/doctrees $(RSTDIR) fasthtml ;\ sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/fasthtml/doctrees $(RSTDIR) fasthtml ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/fasthtml/doctrees $(RSTDIR) fasthtml ;\
deactivate ;\ deactivate ;\
) )
@rm -rf fasthtml/_sources @rm -rf fasthtml/_sources
@ -128,8 +132,8 @@ fasthtml: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
spelling: xmlgen $(SPHINXCONFIG)/conf.py $(VENV) $(SPHINXCONFIG)/false_positives.txt spelling: xmlgen $(SPHINXCONFIG)/conf.py $(VENV) $(SPHINXCONFIG)/false_positives.txt
@if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi @if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi
@(\ @(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \ . $(VENV)/bin/activate ; \
cp $(SPHINXCONFIG)/false_positives.txt $(RSTDIR)/ ; env PYTHONWARNINGS= \ cp $(SPHINXCONFIG)/false_positives.txt $(RSTDIR)/; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build -b spelling -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) spelling ;\ sphinx-build -b spelling -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) spelling ;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
deactivate ;\ deactivate ;\
@ -143,7 +147,9 @@ epub: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
@rm -f LAMMPS.epub @rm -f LAMMPS.epub
@cp src/JPG/*.* epub/JPG @cp src/JPG/*.* epub/JPG
@(\ @(\
. $(VENV)/bin/activate ;\ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build -E $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\ sphinx-build $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
deactivate ;\ deactivate ;\
@ -162,14 +168,18 @@ pdf: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
@$(MAKE) $(MFLAGS) -C graphviz all @$(MAKE) $(MFLAGS) -C graphviz all
@if [ "$(HAS_PDFLATEX)" == "NO" ] ; then echo "PDFLaTeX or latexmk were not found! Please check README for further instructions" 1>&2; exit 1; fi @if [ "$(HAS_PDFLATEX)" == "NO" ] ; then echo "PDFLaTeX or latexmk were not found! Please check README for further instructions" 1>&2; exit 1; fi
@(\ @(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \ . $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build -E $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
sphinx-build $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\ sphinx-build $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\ rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
echo "############################################" ;\ echo "############################################" ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
rst_anchor_check src/*.rst ;\ rst_anchor_check src/*.rst ;\
python utils/check-packages.py -s ../src -d src ;\ $(PYTHON) utils/check-packages.py -s ../src -d src ;\
env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\ env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\
python utils/check-styles.py -s ../src -d src ;\ env LC_ALL=C grep -n ' :[a-z]\+`' $(RSTDIR)/*.rst ;\
env LC_ALL=C grep -n ' `[^`]\+<[a-z][^`]\+`[^_]' $(RSTDIR)/*.rst ;\
$(PYTHON) utils/check-styles.py -s ../src -d src ;\
echo "############################################" ;\ echo "############################################" ;\
deactivate ;\ deactivate ;\
) )
@@ -192,28 +202,39 @@ pdf: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
anchor_check : $(ANCHORCHECK)
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
rst_anchor_check src/*.rst ;\
deactivate ;\
)

style_check : $(VENV)
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
$(PYTHON) utils/check-styles.py -s ../src -d src ;\
deactivate ;\
)

package_check : $(VENV)
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
$(PYTHON) utils/check-packages.py -s ../src -d src ;\
deactivate ;\
)

char_check :
@( env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst && exit 1 || : )

role_check :
@( env LC_ALL=C grep -n ' :[a-z]\+`' $(RSTDIR)/*.rst && exit 1 || : )
@( env LC_ALL=C grep -n ' `[^`]\+<[a-z][^`]\+`[^_]' $(RSTDIR)/*.rst && exit 1 || : )

link_check : $(VENV) html
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
linkchecker -F html --check-extern html/Manual.html ;\
deactivate ;\
)
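# The checks above can also be run individually; a sketch of typical
# invocations from the doc folder:
#   make anchor_check    # validate reference anchors via rst_anchor_check
#   make style_check     # utils/check-styles.py against ../src
#   make package_check   # utils/check-packages.py against ../src
#   make char_check      # flag non-ASCII characters in src/*.rst
#   make role_check      # flag malformed roles and links in src/*.rst
#   make link_check      # linkchecker run against the generated html/Manual.html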
xmlgen : doxygen/xml/index.xml

doxygen/Doxyfile: doxygen/Doxyfile.in

@@ -241,8 +262,7 @@ $(MATHJAX):
$(ANCHORCHECK): $(VENV)
@( \
. $(VENV)/bin/activate; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
pip $(PIP_OPTIONS) install -e utils/converters;\
deactivate;\
)
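# The editable install above replaces the former "python setup.py develop"
# step; a manual equivalent inside an activated virtual environment would be
# (a sketch, assuming the same utils/converters source tree):
#   pip install -e utils/converters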
@@ -40,8 +40,9 @@ environment and local folders.

Installing prerequisites for the documentation build

To run the HTML documentation build toolchain, python 3.x, doxygen, git,
and the venv python module have to be installed if not already available.
Also internet access is initially required to download external files
and tools.
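As a minimal sketch (assuming a Unix-like shell and that the prerequisites
listed above are present), the whole toolchain is driven from the doc folder
of the LAMMPS source tree, for example:

  cd doc
  make html

The makefile creates the python virtual environment and transparently installs
the required Sphinx packages on the first run.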
Building the PDF format manual requires in addition a compatible LaTeX
installation with support for PDFLaTeX and several add-on LaTeX packages
@@ -4,45 +4,44 @@ This purpose of this document is to provide a point of reference
for LAMMPS developers and contributors as to what conventions
should be used to structure and format files in the LAMMPS manual.

Last change: 2022-12-30

## File format and tools

In fall 2019, the LAMMPS documentation file format has changed from a home grown markup designed to generate HTML format files only, to [reStructuredText](https://docutils.sourceforge.io/rst.html). For a transition period all files in the old .txt format were transparently converted to .rst and then processed. The `txt2rst` tool is still included in the distribution to obtain an initial .rst file when integrating legacy documentation into the manual. Since that transition to reStructuredText, many of the artifacts of the translation have been removed though, and parts of the documentation refactored and expanded to take advantage of the capabilities of reStructuredText and associated tools. The conversion from the source to the final formats (HTML, PDF, and optionally e-book reader formats ePUB and MOBI) is mostly automated and controlled by a Makefile in the `doc` folder. This makefile assumes that the processing is done on a Unix-like machine and Python 3.5 or later and a matching venv module are available. Additional Python packages (like the Sphinx tool and several extensions) are transparently installed into a virtual environment over the internet using the `pip` package manager. Further requirements and details are discussed in the manual.

## Work in progress

The refactoring and improving of the documentation is an ongoing process, so statements in this document may not always be fully up-to-date. When in doubt, contact the LAMMPS developers.

## General structure

The layout and formatting of added files should follow the example of the existing files. Since many of those were initially derived from their former .txt format versions and the manual has been maintained in that format for many years, there is a large degree of consistency already, so comparison with similar files should give you a good idea of what kind of information and sections are needed.

## Formatting conventions

@@ -59,14 +58,20 @@ double backward quotes bracketing the item: \`\`path/to/some/file\`\`

Keywords and options are formatted in italics: \*option\*

Mathematical expressions, equations, symbols are typeset using either a `.. math::` block or the `:math:` role.

Groups of shell commands or LAMMPS input script or C/C++/Python source code should be typeset into a `.. code-block::` section. A syntax highlighting extension for LAMMPS input scripts is provided, so `LAMMPS` can be used to indicate the language in the code block in addition to `bash`, `c`, `c++`, `console`, `csh`, `diff`, `fortran`, `json`, `make`, `perl`, `powershell`, `python`, `sh`, `tcl`, `text`, or `yaml`. When no syntax style is indicated, no syntax highlighting is performed. When typesetting commands executed on the shell, please do not prefix commands with a shell prompt and use `bash` highlighting, except when the block also shows the output from that command. In the latter case, please use a dollar sign as the shell prompt and `console` for syntax highlighting.
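For illustration, a minimal sketch (not taken from an actual manual page) of the two cases described above, first a plain command:

```rst
.. code-block:: bash

   make html
```

versus a block that also shows output from the command, using a dollar-sign prompt and `console` highlighting:

```rst
.. code-block:: console

   $ make package_check
   (output of the command follows here)
```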
As an alternative, e.g. to typeset the syntax of file formats, a `.. parsed-literal::` block can be used, which allows some

@@ -89,22 +94,30 @@ by ` :class: note`.

## Required steps when adding a custom style to LAMMPS

When adding a new style (e.g. pair style or a compute or a fix) or a new command, it is **required** to include the **corresponding documentation** in [reStructuredText format](https://docutils.sourceforge.io/rst.html). Those are often new files that need to be added. In order to be included in the documentation, those new files need to be referenced in a `.. toctree::` block. Most of those use patterns with wild cards, so the addition will be automatic. However, those additions also need to be added to some lists of styles or commands. The `make style\_check` command, when executed in the `doc` folder, will perform a test and report any missing entries and list the affected files. Any references defined with `.. \_refname:` have to be unique across all documentation files and this can be checked for with `make anchor\_check`. Finally, a spell-check should be done, which is triggered via `make spelling`. Any offenses need to be corrected and false positives should be added to the file `utils/sphinx-config/false\_positives.txt`.
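Taken together, a typical pre-submission check (a sketch; run from the `doc` folder, the order is not mandated) could look like:

```bash
cd doc
make style_check
make anchor_check
make spelling
```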
## Required additional steps when adding a new package to LAMMPS

When adding a new package, the package must be added to the list of
packages in the `Packages_list.rst` file. If additional build instructions
need to be followed, a corresponding section should be added to the
`Build_extras.rst` file and linked from the list at the top of the
file as well as the equivalent list in the `Build_packages.rst` file.
A detailed description of the package and pointers to configuration,
included commands and examples, external pages, author information and
more should be added to the `Packages_details.rst` file.
@@ -1,12 +1,12 @@
# Outline of the GitHub Development Workflow

The purpose of this document is to provide a point of reference for the core LAMMPS developers and other LAMMPS contributors to understand the choices the LAMMPS developers have agreed on. Git and GitHub provide the tools, but do not set policies, so it is up to the developers to come to an agreement as to how to define and interpret policies. This document is likely to change as our experiences and needs change, and we try to adapt it accordingly. Last change 2023-02-10.

## Table of Contents

@@ -22,47 +22,50 @@ adapt accordingly. Last change 2021-09-02.
## GitHub Merge Management

In the interest of consistency, ONLY ONE of the core LAMMPS developers should do the merging. This is currently [@akohlmey](https://github.com/akohlmey) (Axel Kohlmeyer). If this assignment needs to be changed, it shall be done right after a stable release. If the currently assigned developer cannot merge outstanding pull requests in a timely manner, or in other extenuating circumstances, other core LAMMPS developers with merge permission may merge pull requests.

## Pull Requests

*ALL* changes to the LAMMPS code and documentation, however trivial, MUST be submitted as a pull request to GitHub. All changes to the "develop" branch must be made exclusively through merging pull requests. The "release" and "stable" branches, respectively, are only to be updated upon feature or stable releases based on the associated tags. Updates to the stable release in between stable releases (for example, back-ported bug fixes) are first merged into the "maintenance" branch and then into the "stable" branch as update releases.

Pull requests may also be submitted to (long-running) feature branches created by LAMMPS developers inside the LAMMPS project, if needed. Those are not subject to the merge and review restrictions discussed in this document, though, but get managed as needed on a case-by-case basis.
### Pull Request Assignments

Pull requests can be "chaperoned" by one of the LAMMPS core developers. This is indicated by the person the pull request is assigned to. LAMMPS core developers can self-assign, or they can decide to assign a pull request to a different LAMMPS developer. Being assigned to a pull request means that this pull request may need some work and the assignee is tasked to determine whether this might be needed or not. The assignee may either choose to implement required changes or ask the submitter of the pull request to implement them. Even though all LAMMPS developers may have write access to pull requests (if enabled by the submitter, which is the default), only the submitter or the assignee of a pull request should do so. During this period, the `work_in_progress` label may be applied to the pull request. The assignee gets to decide what happens to the pull request next, e.g. whether it should be assigned to a different developer for additional checks and changes, or is recommended to be merged. Removing the `work_in_progress` label and assigning the pull request to the developer tasked with merging signals that a pull request is ready to be merged. In addition, a `ready_for_merge` label may also be assigned to signal urgency to merge this pull request quickly.
### Pull Request Reviews

@@ -70,32 +73,33 @@ People can be assigned to review a pull request in two ways:

* They can be assigned manually to review a pull request by the submitter or a LAMMPS developer
* They can be automatically assigned, because a developer's GitHub handle matches a file pattern in the `.github/CODEOWNERS` file, which associates developers with the code they contributed and maintain.

Reviewers are requested to state their appraisal of the proposed changes and either approve or request changes. People may unassign themselves from review, if they do not feel competent to judge the changes proposed. At least two approvals from LAMMPS developers with write access are required before merging, in addition to passing all automated compilation and unit tests. Merging counts as implicit approval, and so does submission of a pull request (by a LAMMPS developer). So the person doing the merge may not also submit an approving review. The GitHub feature that reviews from code owners are "hard" reviews (i.e. they must all approve before merging is allowed) is currently disabled. It is at the discretion of the merge maintainer to assess when a sufficient degree of approval has been reached, especially from external collaborators. Reviews may be (automatically) dismissed when the reviewed code has been changed. Review may then be requested a second time.
### Pull Request Discussions

All discussions about a pull request should be kept as much as possible on the pull request discussion page on GitHub, so that other developers can later review the entire discussion after the fact and understand the rationale behind choices that were made. Exceptions to this policy are technical discussions that are centered on tools or policies themselves (git, GitHub, c++) rather than on the content of the pull request.
## GitHub Issues

@@ -109,39 +113,47 @@ marker in the subject. This is automatically done when using the
corresponding template for submitting an issue. Issues may be assigned to one or more developers, if they are working on this feature or working to resolve an issue. Issues that have nobody working on them at the moment, or in the near future, have the label `volunteer needed` attached.

When an issue, say `#125`, is resolved by a specific pull request, the comment for the pull request shall contain the text `closes #125` or `fixes #125`, so that the issue is automatically closed when the pull request is merged. The template for pull requests includes a header where connections between pull requests and issues can be listed, and thus where this comment should be placed.
## Release Planning

LAMMPS uses a continuous release development model with incremental changes, i.e. significant effort is made -- including automated pre-merge testing -- that the code in the branch "develop" does not get easily broken. These tests are run after every update to a pull request. More extensive and time-consuming tests (including regression testing) are performed after code is merged to the "develop" branch. There are feature releases of LAMMPS made about every 4-6 weeks at a point when the LAMMPS developers feel that a sufficient number of changes have been included and all post-merge testing has been successful. These feature releases are marked with a `patch_<version date>` tag and the "release" branch follows only these versions with fast-forward merges. While "develop" may be temporarily broken through issues only detected by the post-merge tests, the "release" branch is always supposed to be of production quality.

About once each year, there is a "stable" release of LAMMPS. These have seen additional, manual testing and review of results from testing with instrumented code and static code analysis. Also, the last few feature releases before a stable release are "release candidate" versions which only contain bug fixes, feature additions to peripheral functionality, and documentation updates. In between stable releases, bug fixes and infrastructure updates are back-ported from the "develop" branch to the "maintenance" branch and occasionally merged into "stable" and published as update releases.
## Project Management
For release planning and the information of code contributors, issues
and pull requests are being managed with GitHub Project Boards. There
are currently three boards: LAMMPS Feature Requests, LAMMPS Bug Reports,
and LAMMPS Pull Requests. Each board is organized in columns where
submissions are categorized. Within each column the entries are
(manually) sorted according to their priority.
@@ -1,7 +1,7 @@
.TH LAMMPS "1" "28 March 2023" "2023-03-28"
.SH NAME
.B LAMMPS
\- Molecular Dynamics Simulator. Version 28 March 2023
.SH SYNOPSIS
.B lmp
@@ -161,7 +161,7 @@ list references for specific cite-able features used during a
run.
.TP
\fB\-pk <style> [options]\fR or \fB\-package <style> [options]\fR
Invoke the \fBpackage\fR command with <style> and optional arguments.
The syntax is the same as if the command appeared in an input script.
For example "-pk gpu 2" is the same as "package gpu 2" in the input
script. The possible styles and options are discussed in the
@@ -1,11 +1,11 @@
.TH MSI2LMP "1" "v3.9.10" "2023-03-10"
.SH NAME
.B MSI2LMP
\- Converter for Materials Studio files to LAMMPS
.SH SYNOPSIS
.B msi2lmp
<ROOTNAME> [-class <I|1|II|2|O|0>] [-frc <path to frc file>] [-print #] [-ignore] [-nocenter] [-oldstyle] [-shift <x> <y> <z>]
.SH DESCRIPTION
.PP
@@ -22,6 +22,9 @@ needed between .frc and .car/.mdf files are the atom types.
.SH OPTIONS
.TP
\fB\-h\fR, \fB\-help\fR,
Print detailed help message to the screen and stop.
.TP
\fB\<ROOTNAME>\fR
This has to be the first argument and is a
.B mandatory
@@ -63,7 +63,7 @@ In the src directory, there is one top-level Makefile and several
low-level machine-specific files named Makefile.xxx where xxx = the
machine name. If a low-level Makefile exists for your platform, you do
not need to edit the top-level Makefile. However, you should check the
system-specific section of the low-level Makefile to ensure the
various paths are correct for your environment. If a low-level
Makefile does not exist for your platform, you will need to add a
suitable target to the top-level Makefile. You will also need to
@@ -1206,7 +1206,7 @@ this command is not typically needed if the "nonbond style" and "
an exception to this is if a short cutoff is used initially,
but a longer cutoff will be used for a subsequent run (in the same
input script), in this case the "maximum cutoff" command should be
used to ensure enough memory is allocated for the later run
note that a restart file contains nonbond cutoffs (so it is not necessary
to use a "nonbond style" command before "read restart"), but LAMMPS
still needs to know what the maximum cutoff will be before the
@@ -368,7 +368,7 @@ Bibliography
Frenkel and Smit, Understanding Molecular Simulation, Academic Press, London, 2002.

**(GLE4MD)**
`https://gle4md.org/ <https://gle4md.org/>`_

**(Gao)**
Gao and Weber, Nuclear Instruments and Methods in Physics Research B 191 (2012) 504.

@@ -407,7 +407,7 @@ Bibliography
Guenole, Noehring, Vaid, Houlle, Xie, Prakash, Bitzek, Comput Mater Sci, 175, 109584 (2020).

**(Gullet)**
Gullet, Wagner, Slepoy, SANDIA Report 2003-8782 (2003). DOI:10.2172/918395

**(Guo)**
Guo and Thirumalai, Journal of Molecular Biology, 263, 323-43 (1996).

@@ -461,7 +461,7 @@ Bibliography
Hunt, Mol Simul, 42, 347 (2016).

**(IPI)**
`https://ipi-code.org/ <https://ipi-code.org/>`_

**(IPI-CPC)**
Ceriotti, More and Manolopoulos, Comp Phys Comm, 185, 1019-1026 (2014).

@@ -733,8 +733,8 @@ Bibliography
**(Mishin)**
Mishin, Mehl, and Papaconstantopoulos, Acta Mater, 53, 4029 (2005).

**(Mitchell and Fincham)**
Mitchell, Fincham, J Phys Condensed Matter, 5, 1031-1038 (1993).

**(Mitchell2011)**
Mitchell. A non-local, ordinary-state-based viscoelasticity model for peridynamics. Sandia National Lab Report, 8064:1-28 (2011).

@@ -875,7 +875,7 @@ Bibliography
G.A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni and G. Bussi, Comp. Phys. Comm 185, 604 (2014)

**(Paquay)**
Paquay and Kusters, Biophys. J., 110, 6, (2016). Preprint available at `arXiv:1411.3019 <https://arxiv.org/abs/1411.3019/>`_.

**(Park)**
Park, Schulten, J. Chem. Phys. 120 (13), 5946 (2004)

@@ -1373,7 +1373,7 @@ Bibliography
Zhu, Tajkhorshid, and Schulten, Biophys. J. 83, 154 (2002).

**(Ziegler)**
J.F. Ziegler, J. P. Biersack and U. Littmark, "The Stopping and Range of Ions in Matter", Volume 1, Pergamon, 1985.

**(Zimmerman2004)**
Zimmerman, JA; Webb, EB; Hoyt, JJ; Jones, RE; Klein, PA; Bammann, DJ, "Calculation of stress in atomistic simulation." Special Issue of Modelling and Simulation in Materials Science and Engineering (2004), 12:S319.
@@ -6,9 +6,9 @@ either traditional makefiles for use with GNU make (which may require
manual editing), or using a build environment generated by CMake (Unix
Makefiles, Ninja, Xcode, Visual Studio, KDevelop, CodeBlocks and more).
As an alternative, you can download a package with pre-built executables
or automated build trees, as described in the :doc:`Install <Install>`
section of the manual.

.. toctree::
   :maxdepth: 1
@@ -44,7 +44,7 @@ standard. A more detailed discussion of that is below.
The executable created by CMake (after running make) is named
``lmp`` unless the ``LAMMPS_MACHINE`` option is set. When setting
``LAMMPS_MACHINE=name``, the executable will be called
``lmp_name``. Using ``BUILD_MPI=no`` will enforce building a
serial executable using the MPI STUBS library.

@@ -60,7 +60,7 @@ standard. A more detailed discussion of that is below.
Any ``make machine`` command will look up the make settings from a
file ``Makefile.machine`` in the folder ``src/MAKE`` or one of its
subdirectories ``MINE``, ``MACHINES``, or ``OPTIONS``, create a
folder ``Obj_machine`` with all objects and generated files and an
executable called ``lmp_machine``\ . The standard parallel build
with ``make mpi`` assumes a standard MPI installation with MPI

@@ -107,9 +107,9 @@ MPI and OpenMP support in LAMMPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are installing MPI yourself to build a parallel LAMMPS
executable, we recommend either MPICH or OpenMPI, which are regularly
used and tested with LAMMPS by the LAMMPS developers. MPICH can be
downloaded from the `MPICH home page <https://www.mpich.org>`_, and
OpenMPI can be downloaded correspondingly from the `OpenMPI home page
<https://www.open-mpi.org>`_. Other MPI packages should also work. No
specific vendor provided and standard compliant MPI library is currently

@@ -128,14 +128,14 @@ and adds vectorization support when compiled with compatible compilers,
in particular the Intel compilers on top of OpenMP. Also, the ``KOKKOS``
package can be compiled to include OpenMP threading.

In addition, there are a few commands in LAMMPS that have native
OpenMP support included as well. These are commands in the ``MPIIO``,
``ML-SNAP``, ``DIFFRACTION``, and ``DPD-REACT`` packages.
Furthermore, some packages support OpenMP threading indirectly through
the libraries they interface to: e.g. ``KSPACE`` and ``COLVARS``.
See the :doc:`Packages details <Packages_details>` page for more info
on these packages, and the pages for their respective commands for
OpenMP threading info.

For CMake, if you use ``BUILD_OMP=yes``, you can use these packages
and turn on their native OpenMP

@@ -144,9 +144,9 @@ variable before you launch LAMMPS.
For building via conventional make, the ``CCFLAGS`` and ``LINKFLAGS``
variables in Makefile.machine need to include the compiler flag that
enables OpenMP. For the GNU compilers or Clang, it is ``-fopenmp``\ .
For (recent) Intel compilers, it is ``-qopenmp``\ . If you are using a
different compiler, please refer to its documentation.
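For example, a minimal sketch of the relevant lines in a ``Makefile.machine``
when building with GNU or Clang compilers (the ``-g -O3`` part mirrors the
default optimization flags from ``src/MAKE/Makefile.mpi`` and is not a
requirement):

.. code-block:: make

   CCFLAGS =   -g -O3 -fopenmp
   LINKFLAGS = -g -O3 -fopenmp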
.. _default-none-issues:

@@ -174,15 +174,16 @@ Choice of compiler and compile/link options
The choice of compiler and compiler flags can be important for maximum
performance. Vendor provided compilers for a specific hardware can
produce faster code than open-source compilers like the GNU compilers.
On the most common x86 hardware, the most popular C++ compilers are
quite similar in their ability to optimize regular C/C++ source code at
high optimization levels. When using the ``INTEL`` package, there is a
distinct advantage in using the `Intel C++ compiler <intel_>`_ due to
much improved vectorization through SSE and AVX instructions on
compatible hardware. The source code in that package conditionally
includes compiler specific directives to enable these high degrees of
vectorization. This may change over time as equivalent vectorization
directives are included into the OpenMP standard and other compilers
adopt them.

.. _intel: https://software.intel.com/en-us/intel-compilers

@@ -196,7 +197,7 @@ LAMMPS.
.. tab:: CMake build

   By default CMake will use the compiler it finds according to
   internal preferences, and it will add optimization flags
   appropriate to that compiler and any :doc:`accelerator packages
   <Speed_packages>` you have included in the build. CMake will
   check if the detected or selected compiler is compatible with the

@@ -250,9 +251,9 @@ LAMMPS.
and `-C ../cmake/presets/pgi.cmake`
will switch the compiler to the PGI compilers.

Furthermore, you can set ``CMAKE_TUNE_FLAGS`` to specifically add
compiler flags to tune for optimal performance on given hosts.
This variable is empty by default.

.. note::

@@ -276,7 +277,7 @@ LAMMPS.
Parallel build (see ``src/MAKE/Makefile.mpi``):

.. code-block:: make

   CC = mpicxx
   CCFLAGS = -g -O3

@@ -296,7 +297,7 @@ LAMMPS.
If compilation stops with a message like the following:

.. code-block:: output

   g++ -g -O3 -DLAMMPS_GZIP -DLAMMPS_MEMALIGN=64 -I../STUBS -c ../main.cpp
   In file included from ../pointers.h:24:0,

@@ -368,10 +369,10 @@ running LAMMPS from Python via its library interface.
# no default value

The compilation will always produce a LAMMPS library and an
executable linked to it. By default, this will be a static
library named ``liblammps.a`` and an executable named ``lmp``.
Setting ``BUILD_SHARED_LIBS=yes`` will instead produce a shared
library called ``liblammps.so`` (or ``liblammps.dylib`` or
``liblammps.dll`` depending on the platform). If
``LAMMPS_MACHINE=name`` is set in addition, the name of the
generated libraries will be changed to either ``liblammps_name.a``

@@ -429,7 +430,7 @@ You may need to use ``sudo make install`` in place of the last line if
you do not have write privileges for ``/usr/local/lib`` or use the
``--prefix`` configuration option to select an installation folder,
where you do have write access. The end result should be the file
``/usr/local/lib/libmpich.so``. On many Linux installations, the folder
``${HOME}/.local`` is an alternative to using ``/usr/local`` and does
not require superuser or sudo access. In that case the configuration
step becomes:

@@ -438,9 +439,10 @@ step becomes:
./configure --enable-shared --prefix=${HOME}/.local

Avoiding the use of "sudo" for custom software installation (i.e. from
source and not through a package manager tool provided by the OS) is
generally recommended to ensure the integrity of the system software
installation.
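For reference, a sketch of the complete user-local sequence (assuming you are
inside the unpacked MPICH source folder; afterwards ``mpicxx`` needs to be
found in your ``PATH``):

.. code-block:: bash

   ./configure --enable-shared --prefix=${HOME}/.local
   make
   make install
   export PATH=${HOME}/.local/bin:${PATH}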
----------

@@ -514,11 +516,11 @@ using CMake or Make.
Install LAMMPS after a build
------------------------------------------

After building LAMMPS, you may wish to copy the LAMMPS executable or
library, along with other LAMMPS files (library header, doc files), to a
globally visible place on your system, for others to access. Note that
you may need super-user privileges (e.g. sudo) if the directory you want
to copy files to is protected.

.. tabs::

@@ -536,7 +538,7 @@ you want to copy files to is protected.
environment variable, if you are installing LAMMPS into a non-system
location and/or are linking to libraries in a non-system location that
depend on such runtime path settings.

As an alternative, you may set the CMake variable ``LAMMPS_INSTALL_RPATH``
to ``on`` and then the runtime paths for any linked shared libraries
and the library installation folder for the LAMMPS library will be
embedded and thus the requirement to set environment variables is avoided.
@@ -9,10 +9,10 @@ page.
The following text assumes some familiarity with CMake and focuses on
using the command line tool ``cmake`` and what settings are supported
for building LAMMPS. A more detailed tutorial on how to use CMake
itself, the text mode or graphical user interface, to change the
generated output files for different build tools and development
environments is on a :doc:`separate page <Howto_cmake>`.

.. note::

@@ -22,12 +22,12 @@ on a :doc:`separate page <Howto_cmake>`.
.. warning::

   You must not mix the :doc:`traditional make based <Build_make>`
   LAMMPS build procedure with using CMake. No packages may be
   installed or a build previously attempted in the LAMMPS source
   directory by using ``make <machine>``. CMake will detect if this is
   the case and generate an error. To remove conflicting files from the
   ``src`` you can use the command ``make no-all purge`` which will
   uninstall all packages and delete all auto-generated files.

Advantages of using CMake

@@ -44,9 +44,9 @@ software or for people that want to modify or extend LAMMPS.
  and adapt the LAMMPS default build configuration accordingly.
- CMake can generate files for different build tools and integrated
  development environments (IDE).
- CMake supports customization of settings with a command line, text
  mode, or graphical user interface. No knowledge of file formats or
  complex command line syntax is required.
- All enabled components are compiled in a single build operation.
- Automated dependency tracking for all files and configuration options.
- Support for true out-of-source compilation. Multiple configurations

@@ -55,23 +55,23 @@ software or for people that want to modify or extend LAMMPS.
  source tree.
- Simplified packaging of LAMMPS for Linux distributions, environment
  modules, or automated build tools like `Homebrew <https://brew.sh/>`_.
- Integration of automated unit and regression testing (the LAMMPS side
  of this is still under active development).

.. _cmake_build:

Getting started
^^^^^^^^^^^^^^^

Building LAMMPS with CMake is a two-step process. In the first step,
you use CMake to generate a build environment in a new directory. For
that purpose you can use either the command-line utility ``cmake`` (or
``cmake3``), the text-mode UI utility ``ccmake`` (or ``ccmake3``) or the
graphical utility ``cmake-gui``, or use them interchangeably. The
second step is then the compilation and linking of all objects,
libraries, and executables using the selected build tool. Here is a
minimal example using the command line version of CMake to build LAMMPS
with no add-on packages enabled and no customization:

.. code-block:: bash

@@ -96,17 +96,17 @@ Compilation can take a long time, since LAMMPS is a large project with
many features. If your machine has multiple CPU cores (most do these
days), you can speed this up by compiling sources in parallel with
``make -j N`` (with N being the maximum number of concurrently executed
tasks). Installation of the `ccache <https://ccache.dev/>`_ (= Compiler
Cache) software may speed up repeated compilation even more, e.g. during
code development, especially when repeatedly switching between branches.
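One common way to combine ccache with a CMake build (an assumption based on
standard CMake functionality, not a LAMMPS-specific option) is to route the
compiler through it at configuration time:

.. code-block:: bash

   cmake -D CMAKE_CXX_COMPILER_LAUNCHER=ccache ../cmake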
After the initial build, whenever you edit LAMMPS source files, enable
or disable packages, change compiler flags or build options, you must
re-compile and relink the LAMMPS executable with ``cmake --build .`` (or
``make``). If the compilation fails for some reason, try running
``cmake .`` and then compile again. The included dependency tracking
should make certain that only the necessary subset of files is
re-compiled. You can also delete compiled objects, libraries, and
executables with ``cmake --build . --target clean`` (or ``make clean``).

After compilation, you may optionally install the LAMMPS executable into

@@ -132,12 +132,12 @@ file called ``CMakeLists.txt`` (for LAMMPS it is located in the
``CMakeCache.txt``, which is generated at the end of the CMake
configuration step. The cache file contains all current CMake settings.

To modify settings, enable or disable features, you need to set
*variables* with either the *-D* command line flag (``-D
VARIABLE1_NAME=value``) or change them in the text mode of the graphical
user interface. The *-D* flag can be used several times in one command.
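As an illustration (a sketch; the selected values are only examples of
variables documented on this page), several *-D* flags can be combined in a
single configuration command:

.. code-block:: bash

   cmake -D CMAKE_BUILD_TYPE=Release -D BUILD_MPI=yes -D BUILD_SHARED_LIBS=yes ../cmake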
For your convenience we provide :ref:`CMake presets <cmake_presets>` For your convenience, we provide :ref:`CMake presets <cmake_presets>`
that combine multiple settings to enable optional LAMMPS packages or use that combine multiple settings to enable optional LAMMPS packages or use
a different compiler tool chain. Those are loaded with the *-C* flag a different compiler tool chain. Those are loaded with the *-C* flag
(``-C ../cmake/presets/basic.cmake``). This step would only be needed (``-C ../cmake/presets/basic.cmake``). This step would only be needed
@ -155,22 +155,23 @@ specific CMake version is given when running ``cmake --help``.
Multi-configuration build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Throughout this manual, it is mostly assumed that LAMMPS is being built
on a Unix-like operating system with "make" as the underlying "builder",
since this is the most common case.  In this case the build
"configuration" is chosen using ``-D CMAKE_BUILD_TYPE=<configuration>``
with ``<configuration>`` being one of "Release", "Debug",
"RelWithDebInfo", or "MinSizeRel".  Some build tools, however, can also
use or even require having a so-called multi-configuration build system
setup.  For a multi-configuration build, the build type (or
configuration) is selected at compile time using the same build
files, e.g. with:

.. code-block:: bash

   cmake --build build-multi --config Release

In that case the resulting binaries are not in the build folder directly
but in subdirectories corresponding to the build type (i.e. Release in
the example from above).  Similarly, for running unit tests the
configuration is selected with the *-C* flag:
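A minimal sketch, assuming the Release configuration built above:

.. code-block:: bash

   ctest -C Release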
@ -184,7 +185,7 @@ particular applicable to compiling packages that require additional libraries
that would be downloaded and compiled by CMake.  The "windows" preset file
tries to keep track of which packages can be compiled natively with the
MSVC compilers out of the box.  Not all of those external libraries are
portable to Windows, either.
Installing CMake Installing CMake
@ -46,7 +46,7 @@ It can be enabled for all C++ code with the following CMake flag
With this flag enabled, all source files will be processed twice, first to
be compiled and then to be analyzed.  Please note that the analysis can be
significantly more time-consuming than the compilation itself.
----------
@ -140,36 +140,62 @@ of the LAMMPS project on GitHub.
The unit testing facility is integrated into the CMake build process
of the LAMMPS source code distribution itself.  It can be enabled by
setting ``-D ENABLE_TESTING=on`` during the CMake configuration step.
It requires the `YAML <https://pyyaml.org/>`_ library and development
headers (if those are not found locally a recent version will be
downloaded and compiled along with LAMMPS and the test program) to
compile and will download and compile a specific recent version of the
`Googletest <https://github.com/google/googletest/>`_ C++ test framework
for implementing the tests.
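A minimal configure, build, and test sequence might look like this (a
sketch; the build directory name is arbitrary):

.. code-block:: bash

   mkdir build-testing && cd build-testing
   cmake -D ENABLE_TESTING=on ../cmake
   cmake --build . -j 8
   ctest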
.. admonition:: Software version requirements for testing
   :class: note

   The compiler and library version requirements for the testing
   framework are more strict than for the main part of LAMMPS.  For
   example, the default GNU C++ and Fortran compilers of RHEL/CentOS 7.x
   (version 4.8.x) are not sufficient.  The CMake configuration will try
   to detect incompatible versions and either skip incompatible tests or
   stop with an error.  Also, the number of tests will depend on
   installed LAMMPS packages, development environment, operating system,
   and configuration settings.
After compilation is complete, the unit testing is started in the build
folder using the ``ctest`` command, which is part of the CMake software.
The output of this command will look something like this:

.. code-block:: console

   $ ctest
   Test project /home/akohlmey/compile/lammps/build-testing
           Start   1: RunLammps
     1/563 Test   #1: RunLammps .......................................... Passed  0.28 sec
           Start   2: HelpMessage
     2/563 Test   #2: HelpMessage ........................................ Passed  0.06 sec
           Start   3: InvalidFlag
     3/563 Test   #3: InvalidFlag ........................................ Passed  0.06 sec
           Start   4: Tokenizer
     4/563 Test   #4: Tokenizer .......................................... Passed  0.05 sec
           Start   5: MemPool
     5/563 Test   #5: MemPool ............................................ Passed  0.05 sec
           Start   6: ArgUtils
     6/563 Test   #6: ArgUtils ........................................... Passed  0.05 sec
   [...]
           Start 561: ImproperStyle:zero
   561/563 Test #561: ImproperStyle:zero ................................. Passed  0.07 sec
           Start 562: TestMliapPyUnified
   562/563 Test #562: TestMliapPyUnified ................................. Passed  0.16 sec
           Start 563: TestPairList
   563/563 Test #563: TestPairList ....................................... Passed  0.06 sec

   100% tests passed, 0 tests failed out of 563

   Label Time Summary:
   generated    =   0.85 sec*proc (3 tests)
   noWindows    =   4.16 sec*proc (2 tests)
   slow         =  78.33 sec*proc (67 tests)
   unstable     =  28.23 sec*proc (34 tests)

   Total Test time (real) = 132.34 sec
The ``ctest`` command has many options; the most important ones are:
@ -200,11 +226,13 @@ Fortran) and testing different aspects of the LAMMPS software and its features.
The tests will adapt to the compilation settings of LAMMPS, so that tests
will be skipped if prerequisite features are not available in LAMMPS.
.. admonition:: Work in Progress
   :class: note

   The unit test framework was added in spring 2020 and is under active
   development.  The coverage is not complete and will be expanded over
   time.  Preference is given to parts of the code base that are easy to
   test or commonly used.
Tests for styles of the same kind (e.g. pair styles or bond
styles) are performed with the same test executable using different
@ -238,9 +266,9 @@ the CMake option ``-D BUILD_MPI=off`` can significantly speed up testing,
since this will skip the MPI initialization for each test run.
Below is an example command and output:

.. code-block:: console

   $ test_pair_style mol-pair-lj_cut.yaml
   [==========] Running 6 tests from 1 test suite.
   [----------] Global test environment set-up.
   [----------] 6 tests from PairStyle
@ -495,6 +523,8 @@ The following options are available.
These should help to make source and documentation files conform
to some of the coding style preferences of the LAMMPS developers.

.. _clang-format:
Clang-format support
--------------------
@ -520,7 +550,7 @@ commands like the following:
.. code-block:: bash

   clang-format -i some_file.cpp

The following targets are available for both GNU make and CMake:
@ -529,3 +559,19 @@ The following target are available for both, GNU make and CMake:
   make format-src     # apply clang-format to all files in src and the package folders
   make format-tests   # apply clang-format to all files in the unittest tree
----------
.. _gh-cli:
GitHub command line interface
-----------------------------
GitHub is developing a `tool for the command line
<https://cli.github.com>`_ that interacts with the GitHub website via a
command called ``gh``. This can be extremely convenient when working
with a Git repository hosted on GitHub (like LAMMPS). It is thus highly
recommended to install it when doing LAMMPS development.
The capabilities of the ``gh`` command are continually expanding, so
please see the documentation at https://cli.github.com/manual/
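A few typical operations when working on LAMMPS might look like this (a
sketch; the pull request number is only a placeholder):

.. code-block:: bash

   gh repo clone lammps/lammps   # clone the LAMMPS repository
   gh pr list                    # list open pull requests
   gh pr checkout 12345          # check out the branch of a (hypothetical) pull request locally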
@ -12,11 +12,11 @@ in addition to
     - Traditional make
   * - .. code-block:: bash

          cmake -D PKG_NAME=yes

     - .. code-block:: console

          make yes-name

as described on the :doc:`Build_package <Build_package>` page.
@ -28,26 +28,28 @@ You may need to tell LAMMPS where it is found on your system.
This is the list of packages that may require additional steps.
.. this list must be kept in sync with its counterpart in Build_package.rst
.. table_from_list::
   :columns: 6

   * :ref:`ADIOS <adios>`
   * :ref:`ATC <atc>`
   * :ref:`AWPMD <awpmd>`
   * :ref:`COLVARS <colvar>`
   * :ref:`COMPRESS <compress>`
   * :ref:`ELECTRODE <electrode>`
   * :ref:`GPU <gpu>`
   * :ref:`H5MD <h5md>`
   * :ref:`INTEL <intel>`
   * :ref:`KIM <kim>`
   * :ref:`KOKKOS <kokkos>`
   * :ref:`LEPTON <lepton>`
   * :ref:`MACHDYN <machdyn>`
   * :ref:`MDI <mdi>`
   * :ref:`ML-HDNNP <ml-hdnnp>`
   * :ref:`ML-IAP <mliap>`
   * :ref:`ML-PACE <ml-pace>`
   * :ref:`ML-POD <ml-pod>`
   * :ref:`ML-QUIP <ml-quip>`
   * :ref:`MOLFILE <molfile>`
   * :ref:`MSCG <mscg>`
@ -124,8 +126,10 @@ CMake build
   -D GPU_PREC=value      # precision setting
                          # value = double or mixed (default) or single
   -D GPU_ARCH=value      # primary GPU hardware choice for GPU_API=cuda
                          # value = sm_XX (see below, default is sm_50)
   -D GPU_DEBUG=value     # enable debug code in the GPU package library, mostly useful for developers
                          # value = yes or no (default)
   -D HIP_PATH=value      # value = path to HIP installation. Must be set if GPU_API=HIP
   -D HIP_ARCH=value      # primary GPU hardware choice for GPU_API=hip
                          # value depends on selected HIP_PLATFORM
                          # default is 'gfx906' for HIP_PLATFORM=amd and 'sm_50' for HIP_PLATFORM=nvcc
@ -147,7 +151,9 @@ CMake build
* sm_60 or sm_61 for Pascal (supported since CUDA 8)
* sm_70 for Volta (supported since CUDA 9)
* sm_75 for Turing (supported since CUDA 10)
* sm_80 or sm_86 for Ampere (supported since CUDA 11, sm_86 since CUDA 11.1)
* sm_89 for Lovelace (supported since CUDA 11.8)
* sm_90 for Hopper (supported since CUDA 12.0)

A more detailed list can be found, for example,
at `Wikipedia's CUDA article <https://en.wikipedia.org/wiki/CUDA#GPUs_supported>`_
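For example, a CUDA build of the GPU package could be configured like
this (a sketch; pick the ``sm_XX`` value matching your GPU):

.. code-block:: bash

   cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_ARCH=sm_80 -D GPU_PREC=mixed ../cmake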
@ -174,15 +180,25 @@
way no local OpenCL development headers or library needs to be present and only
OpenCL compatible drivers need to be installed to use OpenCL.  If this is not
desired, you can set :code:`USE_STATIC_OPENCL_LOADER` to :code:`no`.

The GPU library has some multi-thread support using OpenMP.  If LAMMPS is built
with ``-D BUILD_OMP=on``, this will also be enabled.
If you are compiling with HIP, note that before running CMake you will have to
set appropriate environment variables.  Some variables such as
:code:`HCC_AMDGPU_TARGET` (for ROCm <= 4.0) or :code:`CUDA_PATH` are necessary for :code:`hipcc`
and the linker to work correctly.
Using the CHIP-SPV implementation of HIP is now supported.  It allows one to run HIP
code on Intel GPUs via the OpenCL or Level Zero backends.  To use CHIP-SPV, you must
set :code:`-DHIP_USE_DEVICE_SORT=OFF` in your CMake command line, as CHIP-SPV does not
yet support hipCUB.  The use of HIP for Intel GPUs is still experimental, so you
should only use this option in preparation for running on the Aurora system at ANL.
.. code:: bash

   # AMDGPU target (ROCm <= 4.0)
   export HIP_PLATFORM=hcc
   export HIP_PATH=/path/to/HIP/install
   export HCC_AMDGPU_TARGET=gfx906
   cmake -D PKG_GPU=on -D GPU_API=HIP -D HIP_ARCH=gfx906 -D CMAKE_CXX_COMPILER=hipcc ..
   make -j 4
@ -191,6 +207,7 @@ and the linker to work correctly.
   # AMDGPU target (ROCm >= 4.1)
   export HIP_PLATFORM=amd
   export HIP_PATH=/path/to/HIP/install
   cmake -D PKG_GPU=on -D GPU_API=HIP -D HIP_ARCH=gfx906 -D CMAKE_CXX_COMPILER=hipcc ..
   make -j 4
@ -199,10 +216,20 @@ and the linker to work correctly.
   # CUDA target (not recommended, use GPU_ARCH=cuda)
   # !!! DO NOT set CMAKE_CXX_COMPILER !!!
   export HIP_PLATFORM=nvcc
   export HIP_PATH=/path/to/HIP/install
   export CUDA_PATH=/usr/local/cuda
   cmake -D PKG_GPU=on -D GPU_API=HIP -D HIP_ARCH=sm_70 ..
   make -j 4
.. code:: bash

   # SPIR-V target (Intel GPUs)
   export HIP_PLATFORM=spirv
   export HIP_PATH=/path/to/HIP/install
   export CMAKE_CXX_COMPILER=<hipcc/clang++>
   cmake -D PKG_GPU=on -D GPU_API=HIP ..
   make -j 4
Traditional make
^^^^^^^^^^^^^^^^
@ -215,15 +242,15 @@ LAMMPS code. This also applies to the ``-DLAMMPS_BIGBIG``\ ,
Makefile you use.

You can also build the library in one step from the ``lammps/src`` dir,
using a command like the following, which simply invokes the
``lib/gpu/Install.py`` script with the specified args:

.. code-block:: bash

   make lib-gpu                                        # print help message
   make lib-gpu args="-b"                              # build GPU library with default Makefile.linux
   make lib-gpu args="-m xk7 -p single -o xk7.single"  # create new Makefile.xk7.single, altered for single-precision
   make lib-gpu args="-m mpi -a sm_60 -p mixed -b"     # build GPU library with mixed precision and P100 using other settings in Makefile.mpi
Note that this procedure starts with a Makefile.machine in lib/gpu, as
specified by the "-m" switch.  For your convenience, machine makefiles
@ -249,10 +276,13 @@ To enable GPU binning via CUDA performance primitives set the Makefile variable
most modern GPUs.

To support the CUDA Multi-Process Service (MPS) you can set the define
``-DCUDA_MPS_SUPPORT``.  Please note that in this case you must **not** use
the CUDA performance primitives and thus set the variable ``CUDPP_OPT``
to empty.

The GPU library has some multi-thread support using OpenMP.  You need to add
the compiler flag that enables OpenMP to the ``CUDR_OPTS`` Makefile variable.
If the library build is successful, 3 files should be created:
``lib/gpu/libgpu.a``\ , ``lib/gpu/nvc_get_devices``\ , and
``lib/gpu/Makefile.lammps``\ .  The latter has settings that enable LAMMPS
@ -263,7 +293,7 @@ your machine are not correct, the LAMMPS build will fail, and
.. note::

   If you re-build the GPU library in ``lib/gpu``, you should always
   uninstall the GPU package in ``lammps/src``, then re-install it and
   re-build LAMMPS.  This is because the compilation of files in the GPU
   package uses the library settings from the ``lib/gpu/Makefile.machine``
   used to build the GPU library.
@ -295,7 +325,7 @@ detailed information is available at:
In addition to installing the KIM API, it is also necessary to install the
library of KIM models (interatomic potentials).
See `Obtaining KIM Models <https://openkim.org/doc/usage/obtaining-models>`_ to
learn how to install a pre-built binary of the OpenKIM Repository of Models.
See the list of all KIM models here: https://openkim.org/browse/models
@ -331,18 +361,18 @@ minutes to hours) to build. Of course you only need to do that once.)
You can download and build the KIM library manually if you prefer;
follow the instructions in ``lib/kim/README``.  You can also do
this in one step from the lammps/src directory, using a command like
the following, which simply invokes the ``lib/kim/Install.py`` script with
the specified args.

.. code-block:: bash

   make lib-kim                          # print help message
   make lib-kim args="-b "               # (re-)install KIM API lib with only example models
   make lib-kim args="-b -a Glue_Ercolessi_Adams_Al__MO_324507536345_001"   # ditto plus one model
   make lib-kim args="-b -a everything"  # install KIM API lib with all models
   make lib-kim args="-n -a EAM_Dynamo_Ackland_W__MO_141627196590_002"      # add one model or model driver
   make lib-kim args="-p /usr/local"     # use an existing KIM API installation at the provided location
   make lib-kim args="-p /usr/local -a EAM_Dynamo_Ackland_W__MO_141627196590_002"  # ditto but add one model or driver
When using the "-b " option, the KIM library is built using its native
cmake build system.  The ``lib/kim/Install.py`` script supports a
@ -354,7 +384,7 @@ minutes to hours) to build. Of course you only need to do that once.)
.. code-block:: bash

   CMAKE=cmake3 CXX=g++-11 CC=gcc-11 FC=gfortran-11 make lib-kim args="-b "  # (re-)install KIM API lib using cmake3 and gnu v11 compilers with only example models
Settings for debugging OpenKIM web queries discussed below need to
be applied by adding them to the ``LMP_INC`` variable through
@ -413,7 +443,7 @@ Enabling the extra unit tests have some requirements,
``EAM_Dynamo_MendelevAckland_2007v3_Zr__MO_004835508849_000``,
``EAM_Dynamo_ErcolessiAdams_1994_Al__MO_123629422045_005``, and
``LennardJones612_UniversalShifted__MO_959249795837_003`` KIM models.
See `Obtaining KIM Models <https://openkim.org/doc/usage/obtaining-models>`_
to learn how to install a pre-built binary of the OpenKIM Repository of
Models or see
`Installing KIM Models <https://openkim.org/doc/usage/obtaining-models/#installing_models>`_
@ -464,6 +494,9 @@ They must be specified in uppercase.
   * - **Arch-ID**
     - **HOST or GPU**
     - **Description**
   * - NATIVE
     - HOST
     - Local machine
   * - AMDAVX
     - HOST
     - AMD 64-bit x86 CPU (AVX 1)
@ -503,9 +536,21 @@ They must be specified in uppercase.
   * - BDW
     - HOST
     - Intel Broadwell Xeon E-class CPU (AVX 2 + transactional mem)
   * - SKL
     - HOST
     - Intel Skylake Client CPU
   * - SKX
     - HOST
     - Intel Skylake Xeon Server CPU (AVX512)
   * - ICL
     - HOST
     - Intel Ice Lake Client CPU (AVX512)
   * - ICX
     - HOST
     - Intel Ice Lake Xeon Server CPU (AVX512)
   * - SPR
     - HOST
     - Intel Sapphire Rapids Xeon Server CPU (AVX512)
   * - KNC
     - HOST
     - Intel Knights Corner Xeon Phi
@ -566,6 +611,12 @@ They must be specified in uppercase.
   * - AMPERE86
     - GPU
     - NVIDIA Ampere generation CC 8.6 GPU
   * - ADA89
     - GPU
     - NVIDIA Ada Lovelace generation CC 8.9 GPU
   * - HOPPER90
     - GPU
     - NVIDIA Hopper generation CC 9.0 GPU
   * - VEGA900
     - GPU
     - AMD GPU MI25 GFX900
@ -577,7 +628,10 @@ They must be specified in uppercase.
     - AMD GPU MI100 GFX908
   * - VEGA90A
     - GPU
     - AMD GPU MI200 GFX90A
   * - INTEL_GEN
     - GPU
     - SPIR64-based devices, e.g. Intel GPUs, using JIT
   * - INTEL_DG1
     - GPU
     - Intel Iris XeMAX GPU
@ -592,9 +646,12 @@ They must be specified in uppercase.
     - Intel GPU Gen12LP
   * - INTEL_XEHP
     - GPU
     - Intel GPU Xe-HP
   * - INTEL_PVC
     - GPU
     - Intel GPU Ponte Vecchio

This list was last updated for version 3.7.1 of the Kokkos library.

.. tabs::
@ -626,20 +683,11 @@ This list was last updated for version 3.5.0 of the Kokkos library.
   -D Kokkos_ARCH_GPUARCH=yes   # GPUARCH = GPU from list above
   -D Kokkos_ENABLE_CUDA=yes
   -D Kokkos_ENABLE_OPENMP=yes

This will also enable executing FFTs on the GPU, either via the
internal KISSFFT library, or - by preference - with the cuFFT
library bundled with the CUDA toolkit, depending on whether CMake
can identify its location.
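Putting these together, a CUDA-enabled KOKKOS configuration could be
invoked like this (a sketch; ``SKX`` and ``AMPERE86`` are just example
Arch-IDs from the table above, substitute the ones matching your
hardware):

.. code-block:: bash

   cmake -D PKG_KOKKOS=yes -D Kokkos_ENABLE_CUDA=yes -D Kokkos_ENABLE_OPENMP=yes \
         -D Kokkos_ARCH_SKX=yes -D Kokkos_ARCH_AMPERE86=yes ../cmake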
For AMD or NVIDIA GPUs using HIP, set these variables:
@ -774,51 +822,52 @@ will thus always enable it.
----------
.. _lepton:

LEPTON package
--------------

To build with this package, you must build the Lepton library which is
included in the LAMMPS source distribution in the ``lib/lepton`` folder.

.. tabs::

   .. tab:: CMake build

      This is the recommended build procedure for using Lepton in
      LAMMPS.  No additional settings are normally needed besides
      ``-D PKG_LEPTON=yes``.

      On x86 hardware the Lepton library will also include a just-in-time
      compiler for faster execution.  This is auto-detected but can
      be explicitly disabled by setting ``-D LEPTON_ENABLE_JIT=no``
      (or enabled by setting it to yes).

   .. tab:: Traditional make

      Before building LAMMPS, one must build the Lepton library in lib/lepton.

      This can be done manually in the same folder by using or adapting
      one of the provided Makefiles: for example, ``Makefile.serial`` for
      the GNU C++ compiler, or ``Makefile.mpi`` for the MPI compiler wrapper.
      The Lepton library is written in C++11 and thus the C++ compiler
      may need to be instructed to enable support for that.

      In general, it is safer to use build settings consistent with the
      rest of LAMMPS.  This is best carried out from the LAMMPS src
      directory using a command like the following, which simply invokes the
      ``lib/lepton/Install.py`` script with the specified args:

      .. code-block:: bash

         make lib-lepton                   # print help message
         make lib-lepton args="-m serial"  # build with GNU g++ compiler (settings as with "make serial")
         make lib-lepton args="-m mpi"     # build with default MPI compiler (settings as with "make mpi")

      The "machine" argument of the "-m" flag is used to find a
      Makefile.machine to use as build recipe.

      The build should produce a ``build`` folder and the library
      ``lib/lepton/liblmplepton.a``.
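For the CMake route, enabling the package can be as simple as adding the
flag to your usual configuration command (a sketch; build directory
layout as elsewhere in this manual):

.. code-block:: bash

   cmake -D PKG_LEPTON=yes ../cmake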
----------
@ -905,17 +954,17 @@ more details.
You can download and build the MS-CG library manually if you
prefer; follow the instructions in ``lib/mscg/README``\ .  You can
also do it in one step from the ``lammps/src`` dir, using a
command like the following, which simply invokes the
``lib/mscg/Install.py`` script with the specified args:

.. code-block:: bash

   make lib-mscg                      # print help message
   make lib-mscg args="-b -m serial"  # download and build in lib/mscg/MSCG-release-master
                                      # with the settings compatible with "make serial"
   make lib-mscg args="-b -m mpi"     # download and build in lib/mscg/MSCG-release-master
                                      # with the settings compatible with "make mpi"
   make lib-mscg args="-p /usr/local/mscg-release"  # use the existing MS-CG installation in /usr/local/mscg-release

Note that 2 symbolic (soft) links, ``includelink`` and ``liblink``,
will be created in ``lib/mscg`` to point to the MS-CG
@ -962,15 +1011,15 @@ POEMS package
``lib/poems``\ .  You can do this manually if you prefer; follow
the instructions in ``lib/poems/README``\ .  You can also do it in
one step from the ``lammps/src`` dir, using a command like the following,
which simply invokes the ``lib/poems/Install.py`` script with the
specified args:

.. code-block:: bash

   make lib-poems                   # print help message
   make lib-poems args="-m serial"  # build with GNU g++ compiler (settings as with "make serial")
   make lib-poems args="-m mpi"     # build with default MPI C++ compiler (settings as with "make mpi")
   make lib-poems args="-m icc"     # build with Intel icc compiler

The build should produce two files: ``lib/poems/libpoems.a`` and
``lib/poems/Makefile.lammps``.  The latter is copied from an
@ -1025,7 +1074,7 @@ VORONOI package
-----------------------------

To build with this package, you must download and build the
`Voro++ library <https://math.lbl.gov/voro++/>`_ or install a
binary package provided by your operating system.

.. tabs::
@ -1051,15 +1100,15 @@ binary package provided by your operating system.
You can download and build the Voro++ library manually if you
prefer; follow the instructions in ``lib/voronoi/README``.  You
can also do it in one step from the ``lammps/src`` dir, using a
command like the following, which simply invokes the
``lib/voronoi/Install.py`` script with the specified args:

.. code-block:: bash

   make lib-voronoi                           # print help message
   make lib-voronoi args="-b"                 # download and build the default version in lib/voronoi/voro++-<version>
   make lib-voronoi args="-p $HOME/voro++"    # use existing Voro++ installation in $HOME/voro++
   make lib-voronoi args="-b -v voro++0.4.6"  # download and build the 0.4.6 version in lib/voronoi/voro++-0.4.6

Note that 2 symbolic (soft) links, ``includelink`` and
``liblink``, are created in lib/voronoi to point to the Voro++
@ -1100,13 +1149,13 @@ systems.
.. code-block:: bash

   make yes-adios

otherwise, set the ADIOS2_DIR environment variable when turning on the package:

.. code-block:: bash

   ADIOS2_DIR=path make yes-adios  # path is where ADIOS 2.x is installed
----------
@ -1130,15 +1179,15 @@ The ATC package requires the MANYBODY package also be installed.
``lib/atc``.  You can do this manually if you prefer; follow the
instructions in ``lib/atc/README``.  You can also do it in one
step from the ``lammps/src`` dir, using a command like the following,
which simply invokes the ``lib/atc/Install.py`` script with the
specified args:

.. code-block:: bash

   make lib-atc                   # print help message
   make lib-atc args="-m serial"  # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
   make lib-atc args="-m mpi"     # build with default MPI compiler (settings as with "make mpi")
   make lib-atc args="-m icc"     # build with Intel icc compiler

The build should produce two files: ``lib/atc/libatc.a`` and
``lib/atc/Makefile.lammps``.  The latter is copied from an
@ -1157,17 +1206,17 @@ The ATC package requires the MANYBODY package also be installed.
.. code-block:: bash

   make lib-linalg                   # print help message
   make lib-linalg args="-m serial"  # build with GNU C++ compiler (settings as with "make serial")
   make lib-linalg args="-m mpi"     # build with default MPI C++ compiler (settings as with "make mpi")
   make lib-linalg args="-m g++"     # build with GNU C++ compiler
----------

.. _awpmd:

AWPMD package
-------------

.. tabs::
@ -1181,15 +1230,15 @@ AWPMD package
``lib/awpmd``.  You can do this manually if you prefer; follow the
instructions in ``lib/awpmd/README``.  You can also do it in one
step from the ``lammps/src`` dir, using a command like the following,
which simply invokes the ``lib/awpmd/Install.py`` script with the
specified args:

.. code-block:: bash

   make lib-awpmd                   # print help message
   make lib-awpmd args="-m serial"  # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
   make lib-awpmd args="-m mpi"     # build with default MPI compiler (settings as with "make mpi")
   make lib-awpmd args="-m icc"     # build with Intel icc compiler

The build should produce two files: ``lib/awpmd/libawpmd.a`` and
``lib/awpmd/Makefile.lammps``.  The latter is copied from an
@ -1208,21 +1257,20 @@ AWPMD package
.. code-block:: bash

   make lib-linalg                   # print help message
   make lib-linalg args="-m serial"  # build with GNU C++ compiler (settings as with "make serial")
   make lib-linalg args="-m mpi"     # build with default MPI C++ compiler (settings as with "make mpi")
   make lib-linalg args="-m g++"     # build with GNU C++ compiler
----------

.. _colvar:

COLVARS package
---------------

This package enables the use of the `Colvars <https://colvars.github.io/>`_
module included in the LAMMPS source distribution.

.. tabs::
@ -1235,42 +1283,45 @@ be built for the most part with all major versions of the C++ language.
   .. tab:: Traditional make

      As with other libraries distributed with LAMMPS, the Colvars library
      needs to be built before building the LAMMPS program with the COLVARS
      package enabled.

      From the LAMMPS ``src`` directory, this is most easily and safely done
      via one of the following commands, which implicitly rely on the
      ``lib/colvars/Install.py`` script with optional arguments:

      .. code-block:: bash

         make lib-colvars                      # print help message
         make lib-colvars args="-m serial"     # build with GNU g++ compiler (settings as with "make serial")
         make lib-colvars args="-m mpi"        # build with default MPI compiler (settings as with "make mpi")
         make lib-colvars args="-m g++-debug"  # build with GNU g++ compiler and colvars debugging enabled

      The "machine" argument of the "-m" flag is used to find a
      ``Makefile.machine`` file to use as build recipe.  If such a recipe does
      not already exist in ``lib/colvars``, suitable settings will be
      auto-generated consistent with those used in the core LAMMPS makefiles.

      .. versionchanged:: 8Feb2023

         Please note that Colvars uses the Lepton library, which is now
         included with the LEPTON package; if you use anything other than
         the ``make lib-colvars`` command, please make sure to :ref:`build
         Lepton beforehand <lepton>`.

      Optional flags may be specified as environment variables:

      .. code-block:: bash

         COLVARS_DEBUG=yes make lib-colvars args="-m machine"  # Build with debug code (much slower)
         COLVARS_LEPTON=no make lib-colvars args="-m machine"  # Build without Lepton (included otherwise)

      The build should produce two files: the library
      ``lib/colvars/libcolvars.a`` and the specification file
      ``lib/colvars/Makefile.lammps``.  The latter is auto-generated,
      and normally does not need to be edited.
----------
@ -1285,27 +1336,53 @@ This package depends on the KSPACE package.
   .. tab:: CMake build

      .. code-block:: bash

         -D PKG_ELECTRODE=yes          # enable the package itself
         -D PKG_KSPACE=yes             # the ELECTRODE package requires KSPACE
         -D USE_INTERNAL_LINALG=value  # use the internal linear algebra library instead of LAPACK
                                       # value = no (default) or yes

      Features in the ELECTRODE package are dependent on code in the
      KSPACE package so the latter one *must* be enabled.

      The ELECTRODE package also requires LAPACK (and BLAS) and CMake
      can identify their locations and pass that info to the ELECTRODE
      build script.  But on some systems this may cause problems when
      linking or the dependency is not desired.  Try enabling
      ``USE_INTERNAL_LINALG`` in those cases to use the bundled linear
      algebra library and work around the limitation.

   .. tab:: Traditional make

      Before building LAMMPS, you must configure the ELECTRODE support
      libraries and settings in ``lib/electrode``.  You can do this
      manually, if you prefer, or do it in one step from the
      ``lammps/src`` dir, using a command like the following, which simply
      invokes the ``lib/electrode/Install.py`` script with the specified
      args:

      .. code-block:: bash

         make lib-electrode                   # print help message
         make lib-electrode args="-m serial"  # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
         make lib-electrode args="-m mpi"     # build with default MPI compiler (settings as with "make mpi")

      Note that the ``Makefile.lammps`` file has settings for the BLAS
      and LAPACK linear algebra libraries.  These can either exist on
      your system, or you can use the files provided in ``lib/linalg``.
      In the latter case you also need to build the library in
      ``lib/linalg`` with a command like these:

      .. code-block:: bash

         make lib-linalg                   # print help message
         make lib-linalg args="-m serial"  # build with GNU C++ compiler (settings as with "make serial")
         make lib-linalg args="-m mpi"     # build with default MPI C++ compiler (settings as with "make mpi")
         make lib-linalg args="-m g++"     # build with GNU C++ compiler

      The package itself is activated with ``make yes-KSPACE`` and
      ``make yes-ELECTRODE``.
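For the CMake route, the corresponding one-line configuration might look
like this (a sketch):

.. code-block:: bash

   cmake -D PKG_ELECTRODE=yes -D PKG_KSPACE=yes ../cmake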
----------
@ -1342,13 +1419,56 @@ at: `https://github.com/ICAMS/lammps-user-pace/ <https://github.com/ICAMS/lammps
.. code-block:: bash

   make lib-pace            # print help message
   make lib-pace args="-b"  # download and build the default version in lib/pace

You should not need to edit the ``lib/pace/Makefile.lammps`` file.

----------
.. _ml-pod:

ML-POD package
-----------------------------

.. tabs::

   .. tab:: CMake build

      No additional settings are needed besides ``-D PKG_ML-POD=yes``.

   .. tab:: Traditional make

      Before building LAMMPS, you must configure the ML-POD support
      settings in ``lib/mlpod``.  You can do this manually, if you
      prefer, or do it in one step from the ``lammps/src`` dir, using a
      command like the following, which simply invokes the
      ``lib/mlpod/Install.py`` script with the specified args:

      .. code-block:: bash

         make lib-mlpod                          # print help message
         make lib-mlpod args="-m serial"         # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
         make lib-mlpod args="-m mpi"            # build with default MPI compiler (settings as with "make mpi")
         make lib-mlpod args="-m mpi -e linalg"  # same as above but use the bundled linalg lib

      Note that the ``Makefile.lammps`` file has settings to use the BLAS
      and LAPACK linear algebra libraries.  These can either exist on
      your system, or you can use the files provided in ``lib/linalg``.
      In the latter case you also need to build the library in
      ``lib/linalg`` with a command like these:

      .. code-block:: bash

         make lib-linalg                   # print help message
         make lib-linalg args="-m serial"  # build with GNU C++ compiler (settings as with "make serial")
         make lib-linalg args="-m mpi"     # build with default MPI C++ compiler (settings as with "make mpi")
         make lib-linalg args="-m g++"     # build with GNU C++ compiler

      The package itself is activated with ``make yes-ML-POD``.
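With CMake, the equivalent of the package activation above is simply the
package flag in the configure command (a sketch):

.. code-block:: bash

   cmake -D PKG_ML-POD=yes ../cmake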
----------
.. _plumed:

PLUMED package
@ -1442,10 +1562,10 @@ LAMMPS build.
.. code-block:: bash

   make lib-plumed                                 # print help message
   make lib-plumed args="-b"                       # download and build PLUMED in lib/plumed/plumed2
   make lib-plumed args="-p $HOME/.local"          # use existing PLUMED installation in $HOME/.local
   make lib-plumed args="-p /usr/local -m shared"  # use existing PLUMED installation in
                                                   # /usr/local and use shared linkage mode

Note that 2 symbolic (soft) links, ``includelink`` and ``liblink``
@ -1458,8 +1578,8 @@ LAMMPS build.
.. code-block:: bash

   make yes-plumed
   make machine

Once this compilation completes you should be able to run LAMMPS
in the usual way.  For shared linkage mode, libplumed.so must be
@ -1506,13 +1626,13 @@ the HDF5 library.
``lib/h5md``.  You can do this manually if you prefer; follow the
instructions in ``lib/h5md/README``.  You can also do it in one
step from the ``lammps/src`` dir, using a command like the following,
which simply invokes the ``lib/h5md/Install.py`` script with the
specified args:

.. code-block:: bash

   make lib-h5md                 # print help message
   make lib-h5md args="-m h5cc"  # build with h5cc compiler

The build should produce two files: ``lib/h5md/libch5md.a`` and
``lib/h5md/Makefile.lammps``.  The latter is copied from an
@ -1562,14 +1682,14 @@ details please see ``lib/hdnnp/README`` and the `n2p2 build documentation
You can download and build the *n2p2* library manually if you prefer;
follow the instructions in ``lib/hdnnp/README``\ .  You can also do it in
one step from the ``lammps/src`` dir, using a command like the following, which
simply invokes the ``lib/hdnnp/Install.py`` script with the specified args:

.. code-block:: bash

   make lib-hdnnp                            # print help message
   make lib-hdnnp args="-b"                  # download and build in lib/hdnnp/n2p2-...
   make lib-hdnnp args="-b -v 2.1.4"         # download and build specific version
   make lib-hdnnp args="-p /usr/local/n2p2"  # use the existing n2p2 installation in /usr/local/n2p2

Note that 3 symbolic (soft) links, ``includelink``, ``liblink`` and
``Makefile.lammps``, will be created in ``lib/hdnnp`` to point to
@ -1669,56 +1789,14 @@ MDI package
.. code-block:: bash

   python Install.py -m gcc  # build using gcc compiler
   python Install.py -m icc  # build using icc compiler

The build should produce two files: ``lib/mdi/includelink/mdi.h``
and ``lib/mdi/liblink/libmdi.so``\ .
----------
.. _molfile:
MOLFILE package
@ -1819,6 +1897,22 @@ OPENMP package
For other platforms and compilers, please consult the
documentation about OpenMP support for your compiler.
.. admonition:: Adding OpenMP support on macOS
:class: note
Apple offers the `Xcode package and IDE
<https://developer.apple.com/xcode/>`_ for compiling software on
macOS, so you have likely installed it to compile LAMMPS. Their
compiler is based on `Clang <https://clang.llvm.org/>`_, but while it
is capable of processing OpenMP directives, the necessary header
files and OpenMP runtime library are missing. The `R developers
<https://www.r-project.org/>`_ have figured out a way to build those
in a compatible fashion. One can download them from
`https://mac.r-project.org/openmp/
<https://mac.r-project.org/openmp/>`_. Simply adding those files as
instructed enables the Xcode C++ compiler to compile LAMMPS with ``-D
BUILD_OMP=yes``.
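The page linked above describes the exact steps; as a rough, hedged sketch (the archive name below is only an example and must be matched to your Xcode/Clang version), the installation unpacks the OpenMP runtime into ``/usr/local`` and LAMMPS is then configured with OpenMP enabled:

.. code-block:: bash

   # example archive name; pick the one matching your Clang version from mac.r-project.org/openmp
   curl -O https://mac.r-project.org/openmp/openmp-14.0.6-darwin20-Release.tar.gz
   # the tarball contains usr/local/include/omp.h and usr/local/lib/libomp.dylib
   sudo tar -xvzf openmp-14.0.6-darwin20-Release.tar.gz -C /
   # afterwards the Xcode compilers can build LAMMPS with OpenMP support
   cmake -D BUILD_OMP=yes ../cmake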
----------
.. _qmmm:
@ -1868,15 +1962,15 @@ verified to work in February 2020 with Quantum Espresso versions 6.3 to
``lib/qmmm``. You can do this manually if you prefer; follow the
first two steps explained in ``lib/qmmm/README``. You can also do
it in one step from the ``lammps/src`` dir, using a command like
these, which simply invokes the ``lib/qmmm/Install.py`` script with
the specified args:
.. code-block:: bash
make lib-qmmm # print help message
make lib-qmmm args="-m serial" # build with GNU Fortran compiler (settings as in "make serial")
make lib-qmmm args="-m mpi" # build with default MPI compiler (settings as in "make mpi")
make lib-qmmm args="-m gfortran" # build with GNU Fortran compiler
The build should produce two files: ``lib/qmmm/libqmmm.a`` and
``lib/qmmm/Makefile.lammps``. The latter is copied from an
@ -1913,14 +2007,25 @@ within CMake will download the non-commercial use version.
.. code-block:: bash
-D DOWNLOAD_QUIP=value # download QUIP library for build, value = no (default) or yes
-D QUIP_LIBRARY=path # path to libquip.a (only needed if a custom location)
-D USE_INTERNAL_LINALG=value # Use the internal linear algebra library instead of LAPACK
# value = no (default) or yes
CMake will try to download and build the QUIP library from GitHub,
if it is not found on the local machine. This requires git to be
installed. It will use the same compilers and flags as used for
compiling LAMMPS. Currently this is only supported for the GNU
and the Intel compilers. Set the ``QUIP_LIBRARY`` variable if you
want to use a previously compiled and installed QUIP library and
CMake cannot find it.
The QUIP library requires LAPACK (and BLAS) and CMake can identify
their locations and pass that info to the QUIP build script. But
on some systems this triggers a (current) limitation of CMake and
the configuration will fail. Try enabling ``USE_INTERNAL_LINALG`` in
those cases to use the bundled linear algebra library and work around
the limitation.
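As an illustrative sketch only (the package option spelling ``PKG_ML-QUIP`` is assumed here; the authoritative option names are those listed above), a configuration that downloads QUIP and falls back to the bundled linear algebra library could look like this:

.. code-block:: bash

   # configure LAMMPS with the ML-QUIP package, downloading and building QUIP,
   # and use the bundled linear algebra library instead of an external LAPACK
   cmake -D PKG_ML-QUIP=yes -D DOWNLOAD_QUIP=yes -D USE_INTERNAL_LINALG=yes ../cmake
   cmake --build . --parallel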
.. tab:: Traditional make
@ -1965,7 +2070,7 @@ To build with this package, you must download and build the
You can download and build the ScaFaCoS library manually if you
prefer; follow the instructions in ``lib/scafacos/README``. You
can also do it in one step from the ``lammps/src`` dir, using a
command like these, which simply invokes the
``lib/scafacos/Install.py`` script with the specified args:
.. code-block:: bash
@ -2009,14 +2114,14 @@ Eigen3 is a template library, so you do not need to build it.
You can download the Eigen3 library manually if you prefer; follow
the instructions in ``lib/smd/README``. You can also do it in one
step from the ``lammps/src`` dir, using a command like these,
which simply invokes the ``lib/smd/Install.py`` script with the
specified args:
.. code-block:: bash
make lib-smd # print help message
make lib-smd args="-b" # download to lib/smd/eigen3
make lib-smd args="-p /usr/include/eigen3" # use existing Eigen installation in /usr/include/eigen3
Note that a symbolic (soft) link named ``includelink`` is created
in ``lib/smd`` to point to the Eigen dir. When LAMMPS builds it
View File
@ -1,33 +1,32 @@
Link LAMMPS as a library to another code
========================================
LAMMPS is designed as a library of C++ objects that can be integrated
into other applications, including Python scripts. The files
``src/library.cpp`` and ``src/library.h`` define a C-style API for using
LAMMPS as a library. See the :doc:`Howto_library` page for a
description of the interface and how to use it for your needs.
The :doc:`Build_basics` page explains how to build LAMMPS as either a
shared or static library. This results in a file in the compilation
folder called ``liblammps.a`` or ``liblammps_<name>.a`` in case of
building a static library. In case of a shared library, the name is the
same, but the suffix is ``.so``, ``.dylib``, or ``.dll`` instead of
``.a``, depending on the OS. In some cases, the ``.so`` file may be a
symbolic link to a file with the suffix ``.so.0`` (or some other number).
.. note::
Care should be taken to use the same MPI library for the calling code
and the LAMMPS library, unless LAMMPS is to be compiled without (real)
MPI support using the included STUBS MPI library.
Link with LAMMPS as a static library
------------------------------------
The calling application can link to LAMMPS as a static library with
compilation and link commands, as in the examples shown below. These
are examples for a code written in C in the file ``caller.c``.
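As a minimal, hedged sketch of that pattern (the install locations and the library name are assumptions; the authoritative commands are the examples that follow), compiling and linking ``caller.c`` against a static serial LAMMPS library has this shape:

.. code-block:: bash

   # compile the caller, pointing -I at the directory containing library.h
   gcc -c -O -I${HOME}/lammps/src caller.c
   # link with a C++-aware linker so the C++ runtime needed by LAMMPS is pulled in
   g++ -o caller caller.o -L${HOME}/lammps/src -llammps -lm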
The benefit of linking to a static library is that the resulting
executable is independent of that library since all required
@ -142,10 +141,10 @@ Link with LAMMPS as a shared library
When linking to LAMMPS built as a shared library, the situation becomes
much simpler, as all dependent libraries and objects are either included
in the shared library or registered as a dependent library in the shared
library file. Thus, those libraries need not be specified when linking
the calling executable. Only the *-I* flags are needed. So the example
case from above of the serial version static LAMMPS library with the
POEMS package installed becomes:
.. tabs::
@ -209,7 +208,7 @@ You can verify whether all required shared libraries are found with the
.. code-block:: bash
LD_LIBRARY_PATH=/home/user/lammps/src ldd caller
linux-vdso.so.1 (0x00007ffe729e0000)
liblammps.so => /home/user/lammps/src/liblammps.so (0x00007fc91bb9e000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007fc91b984000)
@ -222,7 +221,7 @@ If a required library is missing, you would get a 'not found' entry:
.. code-block:: bash
ldd caller
linux-vdso.so.1 (0x00007ffd672fe000)
liblammps.so => not found
libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x00007fb7c7e86000)
View File
@ -20,21 +20,22 @@ with :doc:`CMake <Build_cmake>`. The makefiles of the traditional
make based build process and the scripts they are calling expect a few
additional tools to be available and functioning.
* A working C/C++ compiler toolchain supporting the C++11 standard; on
Linux, these are often the GNU compilers. Some older compiler versions
require adding flags like ``-std=c++11`` to enable the C++11 mode.
* A Bourne shell compatible "Unix" shell program (frequently this is ``bash``)
* A few shell utilities: ``ls``, ``mv``, ``ln``, ``rm``, ``grep``, ``sed``, ``tr``, ``cat``, ``touch``, ``diff``, ``dirname``
* Python (optional, required for ``make lib-<pkg>`` in the src
folder). Python scripts are currently tested with Python 2.7 and
3.6 to 3.11. The procedure for :doc:`building the documentation
<Build_manual>` *requires* Python 3.5 or later.
Getting started
^^^^^^^^^^^^^^^
To include LAMMPS packages (i.e. optional commands and styles) you must
enable (or "install") them first, as discussed on the :doc:`Build
package <Build_package>` page. If a package requires (provided or
external) libraries, you must configure and build those libraries
**before** building LAMMPS itself and especially **before** enabling
such a package with ``make yes-<package>``. :doc:`Building LAMMPS with
@ -56,36 +57,36 @@ Compilation can take a long time, since LAMMPS is a large project with
many features. If your machine has multiple CPU cores (most do these
days), you can speed this up by compiling sources in parallel with
``make -j N`` (with N being the maximum number of concurrently executed
tasks). Installation of the `ccache <https://ccache.dev/>`_ (= Compiler
Cache) software may speed up repeated compilation even more, e.g. during
code development, especially when repeatedly switching between branches.
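For example (the task count is machine-specific, and prefixing the compiler in your machine makefile is just one assumed way of enabling the cache):

.. code-block:: bash

   # compile with 8 parallel tasks using the MPI makefile
   make -j 8 mpi
   # optional: in src/MAKE/Makefile.<machine>, prefix the compiler to use ccache, e.g.
   #   CC = ccache mpicxx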
After the initial build, whenever you edit LAMMPS source files, or add
or remove files in the source directory (e.g. by installing or
uninstalling packages), you must re-compile and relink the LAMMPS
executable with the same ``make <machine>`` command. The makefile's
dependency tracking should ensure that only the necessary subset of
files is re-compiled. If you change settings in the makefile, you have
to recompile *everything*. To delete all objects, you can use ``make
clean-<machine>``.
.. note::
Before the actual compilation starts, LAMMPS will perform several
steps to collect information from the configuration and setup that is
then embedded into the executable. When you build LAMMPS for the
first time, it will also compile a tool to quickly determine a list
of dependencies. Those are required for the make program to
correctly detect which files need to be recompiled or relinked
after changes were made to the sources.
Customized builds and alternate makefiles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``src/MAKE`` directory tree contains the ``Makefile.<machine>``
files included in the LAMMPS distribution. Typing ``make example`` uses
``Makefile.example`` from one of those folders, if available. The
``make serial`` and ``make mpi`` lines above, for example, use
``src/MAKE/Makefile.serial`` and ``src/MAKE/Makefile.mpi``,
respectively. Other makefiles are in these directories:
@ -106,17 +107,18 @@ a new name, please edit the first line with the description and machine
name, so you will not confuse yourself when looking at the machine
summary.
Makefiles you may wish to try out include those listed below (some
require a package to be installed first). Many of these include
specific compiler flags for optimized performance. Please note,
however, that some of these customized machine Makefiles are
contributed by users, and thus may have modifications specific to the
systems of those users. Since compilers, OS configurations, and LAMMPS
itself keep changing, their settings may become outdated, too:
.. code-block:: bash
make mac # build serial LAMMPS on macOS
make mac_mpi # build parallel LAMMPS on macOS
make intel_cpu # build with the INTEL package optimized for CPUs
make knl # build with the INTEL package optimized for KNLs
make opt # build with the OPT package optimized for CPUs
View File
@ -2,7 +2,7 @@ Build the LAMMPS documentation
==============================
Depending on how you obtained LAMMPS and whether you have built the
manual yourself, this directory has a number of subdirectories and
files. Here is a list with descriptions:
.. code-block:: bash
@ -33,7 +33,7 @@ various tools and files. Some of them have to be installed (see below). For
the rest, the build process will attempt to download and install them into
a Python virtual environment and local folders.
A current version of the manual (latest feature release, that is the state
of the *release* branch) is available online at:
`https://docs.lammps.org/ <https://docs.lammps.org/>`_.
A version of the manual corresponding to the ongoing development (that is
@ -48,18 +48,15 @@ Build using GNU make
The LAMMPS manual is written in `reStructuredText <rst_>`_ format which
can be translated to different output formats using the `Sphinx
<https://www.sphinx-doc.org/>`_ document generator tool. It also
incorporates programmer documentation extracted from the LAMMPS C++
sources through the `Doxygen <https://doxygen.nl/>`_ program. Currently,
translations to HTML, PDF (via LaTeX), ePUB (for many e-book readers)
and MOBI (for Amazon Kindle(tm) readers) are supported. For that to work,
a Python interpreter version 3.8 or later, the ``doxygen`` tools and
internet access to download additional files and tools are required.
This download is usually only required once or after the documentation
folder is returned to a pristine state with ``make clean-all``.
For the documentation build, a Python virtual environment is set up in
the folder ``doc/docenv`` and various Python packages are installed into
@ -90,6 +87,7 @@ folder. The following ``make`` commands are available:
make anchor_check # check for duplicate anchor labels
make style_check # check for complete and consistent style lists
make package_check # check for complete and consistent package lists
make link_check # check for broken or outdated URLs
make spelling # spell-check the manual
----------
@ -128,38 +126,29 @@ common setups:
.. code-block:: bash
sudo apt-get install git doxygen
.. tab:: RHEL or CentOS (Version 7.x)
.. code-block:: bash
sudo yum install git doxygen
.. tab:: Fedora or RHEL/CentOS (8.x or later)
.. code-block:: bash
sudo dnf install git doxygen
.. tab:: macOS
*Python 3*
If Python 3 is not available on your macOS system, you can
download the latest Python 3 macOS package from
`https://www.python.org <https://www.python.org>`_ and install it.
This will install both Python 3 and pip3.
*virtualenv*
Once Python 3 is installed, open a Terminal and type
.. code-block:: bash
pip3 install virtualenv
This will install virtualenv from the Python Package Index.
Prerequisites for PDF
---------------------
@ -179,7 +168,7 @@ math expressions transparently into embedded images.
For converting the generated ePUB file to a MOBI format file (for e-book
readers, like Kindle, that cannot read ePUB), you also need to have the
``ebook-convert`` tool from the "calibre" software
installed (see `https://calibre-ebook.com/ <https://calibre-ebook.com/>`_).
Typing ``make mobi`` will first create the ePUB file and then convert
it. On the Kindle readers in particular, you also have support for PDF
files, so you could download and view the PDF version as an alternative.
@ -219,9 +208,20 @@ be multiple tests run automatically:
- A test that only standard, printable ASCII text characters are used.
This runs the command ``env LC_ALL=C grep -n '[^ -~]' src/*.rst`` and
thus prints all offending lines with filename and line number
prepended to the screen. Special characters like Greek letters
(:math:`\alpha~~\sigma~~\epsilon`), super- or subscripts
(:math:`x^2~~\mathrm{U}_{LJ}`), mathematical expressions
(:math:`\frac{1}{2}\mathrm{N}~~x\to\infty`), or the Angstrom symbol
(:math:`\AA`) should be typeset with embedded LaTeX (like this
``:math:`\alpha \sigma \epsilon```, ``:math:`x^2 \mathrm{U}_{LJ}```,
``:math:`\frac{1}{2}\mathrm{N} x\to\infty```, or ``:math:`\AA```\ ).
- Embedded LaTeX is rendered in HTML output with `MathJax
<https://www.mathjax.org/>`_ and in PDF output by passing the embedded
text to LaTeX. Some care has to be taken, though, since there are
limitations on which macros and features can be used in either mode, so
it is recommended to always check whether any new or changed
documentation translates and renders correctly in both output formats.
- A test whether all styles are documented and listed in their
respective overview pages. A typical output with warnings looks like this:
@ -252,6 +252,5 @@ manual with ``make spelling``. This requires `a library called enchant
positives* (e.g. keywords, names, abbreviations), those can be added to
the file ``lammps/doc/utils/sphinx-config/false_positives.txt``.
.. _lws: https://www.lammps.org
.. _rst: https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html
View File
@ -4,13 +4,14 @@ Include packages in build
In LAMMPS, a package is a group of files that enable a specific set of
features. For example, force fields for molecular systems or
rigid-body constraints are in packages. In the src directory, each
package is a subdirectory with the package name in capital letters.
An overview of packages is given on the :doc:`Packages <Packages>` doc
page. Brief overviews of each package are on the :doc:`Packages details
<Packages_details>` page.
When building LAMMPS, you can choose to include or exclude each
package. Generally, there is no need to include a package if you
never plan to use its features.
If you get a run-time error that a LAMMPS command or style is
@ -30,23 +31,28 @@ steps, as explained on the :doc:`Build extras <Build_extras>` page.
These links take you to the extra instructions for those select
packages:
.. this list must be kept in sync with its counterpart in Build_extras.rst
.. table_from_list::
:columns: 6
* :ref:`ADIOS <adios>`
* :ref:`ATC <atc>`
* :ref:`AWPMD <awpmd>`
* :ref:`COLVARS <colvar>`
* :ref:`COMPRESS <compress>`
* :ref:`ELECTRODE <electrode>`
* :ref:`GPU <gpu>`
* :ref:`H5MD <h5md>`
* :ref:`INTEL <intel>`
* :ref:`KIM <kim>`
* :ref:`KOKKOS <kokkos>`
* :ref:`LEPTON <lepton>`
* :ref:`MACHDYN <machdyn>`
* :ref:`MDI <mdi>`
* :ref:`ML-HDNNP <ml-hdnnp>`
* :ref:`ML-IAP <mliap>`
* :ref:`ML-PACE <ml-pace>`
* :ref:`ML-POD <ml-pod>`
* :ref:`ML-QUIP <ml-quip>`
* :ref:`MOLFILE <molfile>`
* :ref:`MSCG <mscg>`
@ -87,7 +93,7 @@ versus make.
If you switch between building with CMake and make builds, no
packages in the src directory can be installed when you invoke
``cmake``. CMake will give an error if that is not the case,
indicating how you can uninstall all packages in the src dir.
.. tab:: Traditional make
@ -96,7 +102,7 @@ versus make.
cd lammps/src
make ps # check which packages are currently installed
make yes-name # install a package with name
make no-name # uninstall a package with name
make mpi # build LAMMPS with whatever packages are now installed
Examples:
@ -112,13 +118,13 @@ versus make.
.. note::
You must always re-build LAMMPS (via make) after installing or
uninstalling a package for the action to take effect. The
included dependency tracking will make certain that only files
that need to be rebuilt are recompiled.
.. note::
You cannot install or uninstall packages and build LAMMPS in a
single make command with multiple targets, e.g. ``make
yes-colloid mpi``. This is because the make procedure creates
a list of source files that will be out-of-date for the build
@ -143,7 +149,7 @@ other files dependent on that package are also excluded.
if you downloaded a tarball, 3 packages (KSPACE, MANYBODY, MOLECULE)
were pre-installed via the traditional make procedure in the ``src``
directory. That is no longer the case, so that CMake will build
as-is without needing to uninstall those packages.
----------
@ -160,9 +166,9 @@ control flow constructs for more complex operations.
LAMMPS includes several of these files to define configuration
"presets", similar to the options that exist for the Make based
system. Using these files, you can enable/disable portions of the
available packages in LAMMPS. If you need a custom preset, you can
make a copy of one of them and modify it to suit your needs.
.. code-block:: bash
@ -176,7 +182,7 @@ one of them as a starting point and customize it to your needs.
cmake -C ../cmake/presets/pgi.cmake [OPTIONS] ../cmake # change settings to use the PGI compilers by default
cmake -C ../cmake/presets/all_on.cmake [OPTIONS] ../cmake # enable all packages
cmake -C ../cmake/presets/all_off.cmake [OPTIONS] ../cmake # disable all packages
mingw64-cmake -C ../cmake/presets/mingw-cross.cmake [OPTIONS] ../cmake # compile with MinGW cross-compilers
Presets that have names starting with "windows" are specifically for
compiling LAMMPS :doc:`natively on Windows <Build_windows>` and
@ -220,7 +226,7 @@ The following commands are useful for managing package source files
and their installation when building LAMMPS via traditional make.
Just type ``make`` in lammps/src to see a one-line summary.
These commands install/uninstall sets of packages:
.. code-block:: bash
@ -236,40 +242,40 @@ These commands install/un-install sets of packages:
make yes-ext # install packages that require external libraries
make no-ext # uninstall packages that require external libraries
which install/uninstall various sets of packages. Typing ``make
package`` will list all of these commands.
.. note::
Installing or uninstalling a package for the make based build process
works by simply copying files back and forth between the main source
directory src and the subdirectories with the package name (e.g.
src/KSPACE, src/ATC), so that the files are included or excluded
when LAMMPS is built. Only source files in the src folder will be
compiled.
The following make commands help manage files that exist in both the
src directory and in package subdirectories. You do not normally
need to use these commands unless you are editing LAMMPS files or are
updating LAMMPS via git.
Type ``make package-status`` or ``make ps`` to show which packages are
currently installed. For those that are installed, it will list any
files that are different in the src directory and package
subdirectory.
Type ``make package-installed`` or ``make pi`` to show which packages are
currently installed, without listing the status of packages that are
not installed.
Type ``make package-update`` or ``make pu`` to overwrite src files with
files from the package subdirectories if the package is installed. It
should be used after the checkout has been :doc:`updated or changed
with git <Install_git>`, since git will only update the files in the
package subdirectories, but not the copies in the src folder.
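A typical update sequence after changing the checkout thus looks like the following sketch (shown for an MPI build; adapt the final target to your machine makefile):

.. code-block:: bash

   cd lammps/src
   git pull              # updates the files in the package subdirectories
   make package-update   # or "make pu": refresh the copies of installed packages in src
   make mpi              # re-compile and relink the LAMMPS executable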
Type ``make package-overwrite`` to overwrite files in the package
subdirectories with src files.
Type ``make package-diff`` to list all differences between pairs of
files in both the source directory and the package directory.
View File
@ -1,8 +1,8 @@
Optional build settings
=======================
LAMMPS can be built with several optional settings. Each subsection
explains how to do this for building both with CMake and make.
* `C++11 standard compliance`_ when building all of LAMMPS
* `FFT library`_ for use with the :doc:`kspace_style pppm <kspace_style>` command
@ -41,7 +41,7 @@ FFT library
When the KSPACE package is included in a LAMMPS build, the
:doc:`kspace_style pppm <kspace_style>` command performs 3d FFTs which
require use of an FFT library to compute 1d FFTs. The KISS FFT
library is included with LAMMPS, but other libraries can be faster.
LAMMPS can use them if they are available on your system.
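As a brief sketch of how the selection is made (the complete set of values and the matching traditional make settings are in the tabs that follow), a CMake build would typically pick the FFT library like this:

.. code-block:: bash

   # select FFTW3 instead of the bundled KISS FFT for the KSPACE package
   cmake -D FFT=FFTW3 ../cmake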
.. tabs::
@ -63,9 +63,9 @@ LAMMPS can use them if they are available on your system.
Usually these settings are all that is needed. If FFTW3 is
selected, then CMake will try to detect whether threaded FFTW
libraries are available and enable them by default. This setting
is independent of whether OpenMP threads are enabled and a package
like KOKKOS or OPENMP is used. If CMake cannot detect the FFT
library, you can set these variables to assist:
.. code-block:: bash
@ -111,26 +111,25 @@ LAMMPS can use them if they are available on your system.
files in its default search path. You must specify ``FFT_LIB``
with the appropriate FFT libraries to include in the link.
The `KISS FFT library <https://github.com/mborgerding/kissfft>`_ is
included in the LAMMPS distribution. It is portable across all
platforms. Depending on the size of the FFTs and the number of
processors used, the other libraries listed here can be faster.
However, note that long-range Coulombics are only a portion of the
per-timestep CPU cost, FFTs are only a portion of long-range Coulombics,
and 1d FFTs are only a portion of the FFT cost (parallel communication
can be costly). A breakdown of these timings is printed to the screen
at the end of a run when using the :doc:`kspace_style pppm
<kspace_style>` command. The :doc:`Screen and logfile output
<Run_output>` page gives more details. A more detailed (and time
consuming) report of the FFT performance is generated with the
:doc:`kspace_modify fftbench yes <kspace_modify>` command.
FFTW is a fast, portable FFT library that should also work on any
platform and can be faster than the KISS FFT library. You can download
it from `www.fftw.org <https://www.fftw.org>`_. LAMMPS requires version
3.X; the legacy version 2.1.X is no longer supported.
Building FFTW for your box should be as simple as ``./configure; make;
make install``. The install command typically requires root privileges
@ -142,18 +141,18 @@ The Intel MKL math library is part of the Intel compiler suite. It
can be used with the Intel or GNU compiler (see the ``FFT_LIB`` setting
above).
Performing 3d FFTs in parallel can be time-consuming due to data access
and required communication. This cost can be reduced by performing
single-precision FFTs instead of double precision. Single precision
means the real and imaginary parts of a complex datum are 4-byte floats.
Double precision means they are 8-byte doubles. Note that Fourier
transform and related PPPM operations are somewhat less sensitive to
floating point truncation errors, and thus the resulting error is
generally less than the difference in precision. Using the
``-DFFT_SINGLE`` setting trades off a little accuracy for reduced memory
use and parallel communication costs for transposing 3d FFT data.
When using ``-DFFT_SINGLE`` with FFTW3, you may need to build the FFTW
library a second time with support for single-precision.
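As a hedged sketch (the install prefix is just an example), building the additional single-precision FFTW3 variant usually amounts to re-running its configure script with the single-precision switch, which installs ``libfftw3f`` next to the double-precision library:

.. code-block:: bash

   # inside the unpacked fftw-3.x source tree
   ./configure --enable-single --prefix=${HOME}/fftw3
   make
   make install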
For FFTW3, do the following, which should produce the additional
@ -178,11 +177,11 @@ ARRAY mode.
Size of LAMMPS integer types and size limits
--------------------------------------------
LAMMPS uses a few custom integer data types, which can be defined as
either 4-byte (= 32-bit) or 8-byte (= 64-bit) integers at compile time.
This has an impact on the size of a system that can be simulated, or how
large counters can become before "rolling over". The default setting of
"smallbig" is almost always adequate.
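As a hedged illustration of how this choice is made (the variable name ``LAMMPS_SIZES`` and the ``-DLAMMPS_BIGBIG`` define are assumptions here; the authoritative settings are listed in the tabs below):

.. code-block:: bash

   # CMake: select the integer sizes, e.g. the default "smallbig" or "bigbig"
   cmake -D LAMMPS_SIZES=bigbig ../cmake
   # Traditional make: add the matching define to LMP_INC in src/MAKE/Makefile.<machine>
   #   LMP_INC = -DLAMMPS_BIGBIG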
.. tabs::
@ -255,7 +254,7 @@ topology information, though IDs are enabled by default. The
:doc:`atom_modify id no <atom_modify>` command will turn them off. Atom
IDs are required for molecular systems with bond topology (bonds,
angles, dihedrals, etc). Similarly, some force or compute or fix styles
require atom IDs. Thus, if you model a molecular system or use one of
those styles with more than 2 billion atoms, you need the "bigbig"
setting.
@ -265,7 +264,7 @@ systems and 500 million for systems with bonds (the additional
restriction is due to using the 2 upper bits of the local atom index
in neighbor lists for storing special bonds info).
Image flags store 3 values per atom in a single integer, which count the
number of times an atom has moved through the periodic box in each
dimension. See the :doc:`dump <dump>` manual page for a discussion. If
an atom moves through the periodic box more than this limit, the value
@ -286,7 +285,7 @@ Output of JPG, PNG, and movie files
--------------------------------------------------
The :doc:`dump image <dump_image>` command has options to output JPEG or
PNG image files. Likewise, the :doc:`dump movie <dump_image>` command
outputs movie files in a variety of movie formats. Using these options
requires the following settings:
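As a hedged sketch only (these option names are assumptions and not taken from this page), enabling image and movie output in a CMake build typically looks like this; the JPEG and PNG libraries and the ``ffmpeg`` executable must be available on the system:

.. code-block:: bash

   # enable JPEG/PNG image output and ffmpeg-based movie output
   cmake -D WITH_JPEG=yes -D WITH_PNG=yes -D WITH_FFMPEG=yes ../cmake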
@ -355,7 +354,7 @@ Read or write compressed files
If this option is enabled, large files can be read or written with
compression by ``gzip`` or similar tools by several LAMMPS commands,
including :doc:`read_data <read_data>`, :doc:`rerun <rerun>`, and
:doc:`dump <dump>`. Supported compression tools are currently
``gzip``, ``bzip2``, ``zstd``, and ``lzma``.
.. tabs::
@ -395,7 +394,7 @@ Memory allocation alignment
---------------------------------------
This setting enables the use of the "posix_memalign()" call instead of
"malloc()" when LAMMPS allocates large chunks of memory. Vector
instructions on CPUs may become more efficient if dynamically allocated
memory is aligned on larger-than-default byte boundaries. On most
current operating systems, the "malloc()" implementation returns
@ -497,7 +496,7 @@ Trigger selected floating-point exceptions
------------------------------------------
Many kinds of CPUs have the capability to detect when a calculation
results in an invalid math operation, like a division by zero or calling
the square root with a negative argument. The default behavior on
most operating systems is to continue and have values for ``NaN`` (= not
a number) or ``Inf`` (= infinity). This allows software to detect and
View File
@ -21,6 +21,7 @@ commands in it are used to define a LAMMPS simulation.
Commands_pair
Commands_bond
Commands_kspace
Commands_dump
.. toctree::
:maxdepth: 1
View File
@ -10,17 +10,21 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
General commands
================
An alphabetic list of general LAMMPS commands. Note that style
commands with many variants can be more easily accessed via the small
table above.
.. table_from_list::
:columns: 5
* :doc:`angle_coeff <angle_coeff>`
* :doc:`angle_style <angle_style>`
* :doc:`angle_write <angle_write>`
* :doc:`atom_modify <atom_modify>`
* :doc:`atom_style <atom_style>`
* :doc:`balance <balance>`
@ -28,7 +32,6 @@ An alphabetic list of general LAMMPS commands.
* :doc:`bond_style <bond_style>`
* :doc:`bond_write <bond_write>`
* :doc:`boundary <boundary>`
* :doc:`box <box>`
* :doc:`change_box <change_box>`
* :doc:`clear <clear>`
* :doc:`comm_modify <comm_modify>`
@ -43,6 +46,7 @@ An alphabetic list of general LAMMPS commands.
* :doc:`dielectric <dielectric>`
* :doc:`dihedral_coeff <dihedral_coeff>`
* :doc:`dihedral_style <dihedral_style>`
* :doc:`dihedral_write <dihedral_write>`
* :doc:`dimension <dimension>`
* :doc:`displace_atoms <displace_atoms>`
* :doc:`dump <dump>`
@ -60,6 +64,7 @@ An alphabetic list of general LAMMPS commands.
* :doc:`kspace_modify <kspace_modify>`
* :doc:`kspace_style <kspace_style>`
* :doc:`label <label>`
* :doc:`labelmap <labelmap>`
* :doc:`lattice <lattice>`
* :doc:`log <log>`
* :doc:`mass <mass>`
@ -86,8 +91,7 @@ An alphabetic list of general LAMMPS commands.
* :doc:`region <region>`
* :doc:`replicate <replicate>`
* :doc:`rerun <rerun>`
* :doc:`reset_atoms <reset_atoms>`
* :doc:`reset_timestep <reset_timestep>`
* :doc:`restart <restart>`
* :doc:`run <run>`
@ -123,6 +127,7 @@ additional letter in parenthesis: k = KOKKOS.
* :doc:`group2ndx <group2ndx>`
* :doc:`hyper <hyper>`
* :doc:`kim <kim_commands>`
* :doc:`fitpod <fitpod_command>`
* :doc:`mdi <mdi>`
* :doc:`ndx2group <group2ndx>`
* :doc:`neb <neb>`
View File
@ -10,6 +10,7 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
.. _bond:
@ -41,8 +42,11 @@ OPT.
* :doc:`gaussian <bond_gaussian>`
* :doc:`gromos (o) <bond_gromos>`
* :doc:`harmonic (iko) <bond_harmonic>`
* :doc:`harmonic/restrain <bond_harmonic_restrain>`
* :doc:`harmonic/shift (o) <bond_harmonic_shift>`
* :doc:`harmonic/shift/cut (o) <bond_harmonic_shift_cut>`
* :doc:`lepton (o) <bond_lepton>`
* :doc:`mesocnt <bond_mesocnt>`
* :doc:`mm3 <bond_mm3>`
* :doc:`morse (o) <bond_morse>`
* :doc:`nonlinear (o) <bond_nonlinear>`
@ -74,6 +78,7 @@ OPT.
*
*
*
* :doc:`amoeba <angle_amoeba>`
* :doc:`charmm (iko) <angle_charmm>`
* :doc:`class2 (ko) <angle_class2>`
* :doc:`class2/p6 <angle_class2>`
@ -90,9 +95,11 @@ OPT.
* :doc:`fourier/simple (o) <angle_fourier_simple>`
* :doc:`gaussian <angle_gaussian>`
* :doc:`harmonic (iko) <angle_harmonic>`
* :doc:`lepton (o) <angle_lepton>`
* :doc:`mesocnt <angle_mesocnt>`
* :doc:`mm3 <angle_mm3>` * :doc:`mm3 <angle_mm3>`
* :doc:`quartic (o) <angle_quartic>` * :doc:`quartic (o) <angle_quartic>`
* :doc:`sdk (o) <angle_sdk>` * :doc:`spica (o) <angle_spica>`
* :doc:`table (o) <angle_table>` * :doc:`table (o) <angle_table>`
.. _dihedral: .. _dihedral:
@ -123,6 +130,7 @@ OPT.
* :doc:`fourier (io) <dihedral_fourier>` * :doc:`fourier (io) <dihedral_fourier>`
* :doc:`harmonic (iko) <dihedral_harmonic>` * :doc:`harmonic (iko) <dihedral_harmonic>`
* :doc:`helix (o) <dihedral_helix>` * :doc:`helix (o) <dihedral_helix>`
* :doc:`lepton (o) <dihedral_lepton>`
* :doc:`multi/harmonic (o) <dihedral_multi_harmonic>` * :doc:`multi/harmonic (o) <dihedral_multi_harmonic>`
* :doc:`nharmonic (o) <dihedral_nharmonic>` * :doc:`nharmonic (o) <dihedral_nharmonic>`
* :doc:`opls (iko) <dihedral_opls>` * :doc:`opls (iko) <dihedral_opls>`
@ -152,6 +160,7 @@ OPT.
* *
* *
* *
* :doc:`amoeba <improper_amoeba>`
* :doc:`class2 (ko) <improper_class2>` * :doc:`class2 (ko) <improper_class2>`
* :doc:`cossq (o) <improper_cossq>` * :doc:`cossq (o) <improper_cossq>`
* :doc:`cvff (io) <improper_cvff>` * :doc:`cvff (io) <improper_cvff>`

View File

@ -25,7 +25,6 @@ Setup simulation box:
:columns: 4 :columns: 4
* :doc:`boundary <boundary>` * :doc:`boundary <boundary>`
* :doc:`box <box>`
* :doc:`change_box <change_box>` * :doc:`change_box <change_box>`
* :doc:`create_box <create_box>` * :doc:`create_box <create_box>`
* :doc:`dimension <dimension>` * :doc:`dimension <dimension>`

View File

@ -10,6 +10,7 @@
* :ref:`Dihedral styles <dihedral>` * :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>` * :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>` * :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Compute commands Compute commands
================ ================
@ -45,21 +46,25 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`com/chunk <compute_com_chunk>` * :doc:`com/chunk <compute_com_chunk>`
* :doc:`contact/atom <compute_contact_atom>` * :doc:`contact/atom <compute_contact_atom>`
* :doc:`coord/atom (k) <compute_coord_atom>` * :doc:`coord/atom (k) <compute_coord_atom>`
* :doc:`count/type <compute_count_type>`
* :doc:`damage/atom <compute_damage_atom>` * :doc:`damage/atom <compute_damage_atom>`
* :doc:`dihedral <compute_dihedral>` * :doc:`dihedral <compute_dihedral>`
* :doc:`dihedral/local <compute_dihedral_local>` * :doc:`dihedral/local <compute_dihedral_local>`
* :doc:`dilatation/atom <compute_dilatation_atom>` * :doc:`dilatation/atom <compute_dilatation_atom>`
* :doc:`dipole <compute_dipole>` * :doc:`dipole <compute_dipole>`
* :doc:`dipole/chunk <compute_dipole_chunk>` * :doc:`dipole/chunk <compute_dipole_chunk>`
* :doc:`dipole/tip4p <compute_dipole>`
* :doc:`dipole/tip4p/chunk <compute_dipole_chunk>`
* :doc:`displace/atom <compute_displace_atom>` * :doc:`displace/atom <compute_displace_atom>`
* :doc:`dpd <compute_dpd>` * :doc:`dpd <compute_dpd>`
* :doc:`dpd/atom <compute_dpd_atom>` * :doc:`dpd/atom <compute_dpd_atom>`
* :doc:`edpd/temp/atom <compute_edpd_temp_atom>` * :doc:`edpd/temp/atom <compute_edpd_temp_atom>`
* :doc:`efield/atom <compute_efield_atom>` * :doc:`efield/atom <compute_efield_atom>`
* :doc:`efield/wolf/atom <compute_efield_wolf_atom>`
* :doc:`entropy/atom <compute_entropy_atom>` * :doc:`entropy/atom <compute_entropy_atom>`
* :doc:`erotate/asphere <compute_erotate_asphere>` * :doc:`erotate/asphere <compute_erotate_asphere>`
* :doc:`erotate/rigid <compute_erotate_rigid>` * :doc:`erotate/rigid <compute_erotate_rigid>`
* :doc:`erotate/sphere <compute_erotate_sphere>` * :doc:`erotate/sphere (k) <compute_erotate_sphere>`
* :doc:`erotate/sphere/atom <compute_erotate_sphere_atom>` * :doc:`erotate/sphere/atom <compute_erotate_sphere_atom>`
* :doc:`event/displace <compute_event_displace>` * :doc:`event/displace <compute_event_displace>`
* :doc:`fabric <compute_fabric>` * :doc:`fabric <compute_fabric>`
@ -86,7 +91,6 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`ke/atom/eff <compute_ke_atom_eff>` * :doc:`ke/atom/eff <compute_ke_atom_eff>`
* :doc:`ke/eff <compute_ke_eff>` * :doc:`ke/eff <compute_ke_eff>`
* :doc:`ke/rigid <compute_ke_rigid>` * :doc:`ke/rigid <compute_ke_rigid>`
* :doc:`mesont <compute_mesont>`
* :doc:`mliap <compute_mliap>` * :doc:`mliap <compute_mliap>`
* :doc:`momentum <compute_momentum>` * :doc:`momentum <compute_momentum>`
* :doc:`msd <compute_msd>` * :doc:`msd <compute_msd>`
@ -103,9 +107,11 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`pe/tally <compute_tally>` * :doc:`pe/tally <compute_tally>`
* :doc:`plasticity/atom <compute_plasticity_atom>` * :doc:`plasticity/atom <compute_plasticity_atom>`
* :doc:`pressure <compute_pressure>` * :doc:`pressure <compute_pressure>`
* :doc:`pressure/alchemy <compute_pressure_alchemy>`
* :doc:`pressure/uef <compute_pressure_uef>` * :doc:`pressure/uef <compute_pressure_uef>`
* :doc:`property/atom <compute_property_atom>` * :doc:`property/atom <compute_property_atom>`
* :doc:`property/chunk <compute_property_chunk>` * :doc:`property/chunk <compute_property_chunk>`
* :doc:`property/grid <compute_property_grid>`
* :doc:`property/local <compute_property_local>` * :doc:`property/local <compute_property_local>`
* :doc:`ptm/atom <compute_ptm_atom>` * :doc:`ptm/atom <compute_ptm_atom>`
* :doc:`rdf <compute_rdf>` * :doc:`rdf <compute_rdf>`
@ -138,6 +144,8 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`smd/vol <compute_smd_vol>` * :doc:`smd/vol <compute_smd_vol>`
* :doc:`snap <compute_sna_atom>` * :doc:`snap <compute_sna_atom>`
* :doc:`sna/atom <compute_sna_atom>` * :doc:`sna/atom <compute_sna_atom>`
* :doc:`sna/grid <compute_sna_atom>`
* :doc:`sna/grid/local <compute_sna_atom>`
* :doc:`snad/atom <compute_sna_atom>` * :doc:`snad/atom <compute_sna_atom>`
* :doc:`snav/atom <compute_sna_atom>` * :doc:`snav/atom <compute_sna_atom>`
* :doc:`sph/e/atom <compute_sph_e_atom>` * :doc:`sph/e/atom <compute_sph_e_atom>`

57
doc/src/Commands_dump.rst Normal file
View File

@ -0,0 +1,57 @@
.. table_from_list::
:columns: 3
* :doc:`General commands <Commands_all>`
* :doc:`Fix styles <Commands_fix>`
* :doc:`Compute styles <Commands_compute>`
* :doc:`Pair styles <Commands_pair>`
* :ref:`Bond styles <bond>`
* :ref:`Angle styles <angle>`
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Dump commands
=============
An alphabetic list of all LAMMPS :doc:`dump <dump>` commands.
.. table_from_list::
:columns: 5
* :doc:`atom <dump>`
* :doc:`atom/adios <dump_adios>`
* :doc:`atom/gz <dump>`
* :doc:`atom/mpiio <dump>`
* :doc:`atom/zstd <dump>`
* :doc:`cfg <dump>`
* :doc:`cfg/gz <dump>`
* :doc:`cfg/mpiio <dump>`
* :doc:`cfg/uef <dump_cfg_uef>`
* :doc:`cfg/zstd <dump>`
* :doc:`custom <dump>`
* :doc:`custom/adios <dump_adios>`
* :doc:`custom/gz <dump>`
* :doc:`custom/mpiio <dump>`
* :doc:`custom/zstd <dump>`
* :doc:`dcd <dump>`
* :doc:`grid <dump>`
* :doc:`grid/vtk <dump>`
* :doc:`h5md <dump_h5md>`
* :doc:`image <dump_image>`
* :doc:`local <dump>`
* :doc:`local/gz <dump>`
* :doc:`local/zstd <dump>`
* :doc:`molfile <dump_molfile>`
* :doc:`movie <dump_image>`
* :doc:`netcdf <dump_netcdf>`
* :doc:`netcdf/mpiio <dump>`
* :doc:`vtk <dump_vtk>`
* :doc:`xtc <dump>`
* :doc:`xyz <dump>`
* :doc:`xyz/gz <dump>`
* :doc:`xyz/mpiio <dump>`
* :doc:`xyz/zstd <dump>`
* :doc:`yaml <dump>`

View File

@ -10,6 +10,7 @@
* :ref:`Dihedral styles <dihedral>` * :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>` * :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>` * :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Fix commands Fix commands
============ ============
@ -28,6 +29,9 @@ OPT.
* :doc:`adapt/fep <fix_adapt_fep>` * :doc:`adapt/fep <fix_adapt_fep>`
* :doc:`addforce <fix_addforce>` * :doc:`addforce <fix_addforce>`
* :doc:`addtorque <fix_addtorque>` * :doc:`addtorque <fix_addtorque>`
* :doc:`alchemy <fix_alchemy>`
* :doc:`amoeba/bitorsion <fix_amoeba_bitorsion>`
* :doc:`amoeba/pitorsion <fix_amoeba_pitorsion>`
* :doc:`append/atoms <fix_append_atoms>` * :doc:`append/atoms <fix_append_atoms>`
* :doc:`atc <fix_atc>` * :doc:`atc <fix_atc>`
* :doc:`atom/swap <fix_atom_swap>` * :doc:`atom/swap <fix_atom_swap>`
@ -35,14 +39,12 @@ OPT.
* :doc:`ave/chunk <fix_ave_chunk>` * :doc:`ave/chunk <fix_ave_chunk>`
* :doc:`ave/correlate <fix_ave_correlate>` * :doc:`ave/correlate <fix_ave_correlate>`
* :doc:`ave/correlate/long <fix_ave_correlate_long>` * :doc:`ave/correlate/long <fix_ave_correlate_long>`
* :doc:`ave/grid <fix_ave_grid>`
* :doc:`ave/histo <fix_ave_histo>` * :doc:`ave/histo <fix_ave_histo>`
* :doc:`ave/histo/weight <fix_ave_histo>` * :doc:`ave/histo/weight <fix_ave_histo>`
* :doc:`ave/time <fix_ave_time>` * :doc:`ave/time <fix_ave_time>`
* :doc:`aveforce <fix_aveforce>` * :doc:`aveforce <fix_aveforce>`
* :doc:`balance <fix_balance>` * :doc:`balance <fix_balance>`
* :doc:`brownian <fix_brownian>`
* :doc:`brownian/asphere <fix_brownian>`
* :doc:`brownian/sphere <fix_brownian>`
* :doc:`bocs <fix_bocs>` * :doc:`bocs <fix_bocs>`
* :doc:`bond/break <fix_bond_break>` * :doc:`bond/break <fix_bond_break>`
* :doc:`bond/create <fix_bond_create>` * :doc:`bond/create <fix_bond_create>`
@ -50,6 +52,9 @@ OPT.
* :doc:`bond/react <fix_bond_react>` * :doc:`bond/react <fix_bond_react>`
* :doc:`bond/swap <fix_bond_swap>` * :doc:`bond/swap <fix_bond_swap>`
* :doc:`box/relax <fix_box_relax>` * :doc:`box/relax <fix_box_relax>`
* :doc:`brownian <fix_brownian>`
* :doc:`brownian/asphere <fix_brownian>`
* :doc:`brownian/sphere <fix_brownian>`
* :doc:`charge/regulation <fix_charge_regulation>` * :doc:`charge/regulation <fix_charge_regulation>`
* :doc:`cmap <fix_cmap>` * :doc:`cmap <fix_cmap>`
* :doc:`colvars <fix_colvars>` * :doc:`colvars <fix_colvars>`
@ -62,13 +67,14 @@ OPT.
* :doc:`drude <fix_drude>` * :doc:`drude <fix_drude>`
* :doc:`drude/transform/direct <fix_drude_transform>` * :doc:`drude/transform/direct <fix_drude_transform>`
* :doc:`drude/transform/inverse <fix_drude_transform>` * :doc:`drude/transform/inverse <fix_drude_transform>`
* :doc:`dt/reset <fix_dt_reset>` * :doc:`dt/reset (k) <fix_dt_reset>`
* :doc:`edpd/source <fix_dpd_source>` * :doc:`edpd/source <fix_dpd_source>`
* :doc:`efield <fix_efield>` * :doc:`efield <fix_efield>`
* :doc:`efield/tip4p <fix_efield>`
* :doc:`ehex <fix_ehex>` * :doc:`ehex <fix_ehex>`
* :doc:`electrode/conp (i) <fix_electrode_conp>` * :doc:`electrode/conp (i) <fix_electrode>`
* :doc:`electrode/conq (i) <fix_electrode_conp>` * :doc:`electrode/conq (i) <fix_electrode>`
* :doc:`electrode/thermo (i) <fix_electrode_conp>` * :doc:`electrode/thermo (i) <fix_electrode>`
* :doc:`electron/stopping <fix_electron_stopping>` * :doc:`electron/stopping <fix_electron_stopping>`
* :doc:`electron/stopping/fit <fix_electron_stopping>` * :doc:`electron/stopping/fit <fix_electron_stopping>`
* :doc:`enforce2d (k) <fix_enforce2d>` * :doc:`enforce2d (k) <fix_enforce2d>`
@ -88,6 +94,7 @@ OPT.
* :doc:`grem <fix_grem>` * :doc:`grem <fix_grem>`
* :doc:`halt <fix_halt>` * :doc:`halt <fix_halt>`
* :doc:`heat <fix_heat>` * :doc:`heat <fix_heat>`
* :doc:`heat/flow <fix_heat_flow>`
* :doc:`hyper/global <fix_hyper_global>` * :doc:`hyper/global <fix_hyper_global>`
* :doc:`hyper/local <fix_hyper_local>` * :doc:`hyper/local <fix_hyper_local>`
* :doc:`imd <fix_imd>` * :doc:`imd <fix_imd>`
@ -97,13 +104,13 @@ OPT.
* :doc:`langevin/drude <fix_langevin_drude>` * :doc:`langevin/drude <fix_langevin_drude>`
* :doc:`langevin/eff <fix_langevin_eff>` * :doc:`langevin/eff <fix_langevin_eff>`
* :doc:`langevin/spin <fix_langevin_spin>` * :doc:`langevin/spin <fix_langevin_spin>`
* :doc:`latte <fix_latte>`
* :doc:`lb/fluid <fix_lb_fluid>` * :doc:`lb/fluid <fix_lb_fluid>`
* :doc:`lb/momentum <fix_lb_momentum>` * :doc:`lb/momentum <fix_lb_momentum>`
* :doc:`lb/viscous <fix_lb_viscous>` * :doc:`lb/viscous <fix_lb_viscous>`
* :doc:`lineforce <fix_lineforce>` * :doc:`lineforce <fix_lineforce>`
* :doc:`manifoldforce <fix_manifoldforce>` * :doc:`manifoldforce <fix_manifoldforce>`
* :doc:`mdi/aimd <fix_mdi_aimd>` * :doc:`mdi/qm <fix_mdi_qm>`
* :doc:`mdi/qmmm <fix_mdi_qmmm>`
* :doc:`meso/move <fix_meso_move>` * :doc:`meso/move <fix_meso_move>`
* :doc:`mol/swap <fix_mol_swap>` * :doc:`mol/swap <fix_mol_swap>`
* :doc:`momentum (k) <fix_momentum>` * :doc:`momentum (k) <fix_momentum>`
@ -162,8 +169,10 @@ OPT.
* :doc:`orient/fcc <fix_orient>` * :doc:`orient/fcc <fix_orient>`
* :doc:`orient/eco <fix_orient_eco>` * :doc:`orient/eco <fix_orient_eco>`
* :doc:`pafi <fix_pafi>` * :doc:`pafi <fix_pafi>`
* :doc:`pair <fix_pair>`
* :doc:`phonon <fix_phonon>` * :doc:`phonon <fix_phonon>`
* :doc:`pimd <fix_pimd>` * :doc:`pimd/langevin <fix_pimd>`
* :doc:`pimd/nvt <fix_pimd>`
* :doc:`planeforce <fix_planeforce>` * :doc:`planeforce <fix_planeforce>`
* :doc:`plumed <fix_plumed>` * :doc:`plumed <fix_plumed>`
* :doc:`poems <fix_poems>` * :doc:`poems <fix_poems>`
@ -209,6 +218,7 @@ OPT.
* :doc:`saed/vtk <fix_saed_vtk>` * :doc:`saed/vtk <fix_saed_vtk>`
* :doc:`setforce (k) <fix_setforce>` * :doc:`setforce (k) <fix_setforce>`
* :doc:`setforce/spin <fix_setforce>` * :doc:`setforce/spin <fix_setforce>`
* :doc:`sgcmc <fix_sgcmc>`
* :doc:`shake (k) <fix_shake>` * :doc:`shake (k) <fix_shake>`
* :doc:`shardlow (k) <fix_shardlow>` * :doc:`shardlow (k) <fix_shardlow>`
* :doc:`smd <fix_smd>` * :doc:`smd <fix_smd>`
@ -245,18 +255,19 @@ OPT.
* :doc:`tune/kspace <fix_tune_kspace>` * :doc:`tune/kspace <fix_tune_kspace>`
* :doc:`vector <fix_vector>` * :doc:`vector <fix_vector>`
* :doc:`viscosity <fix_viscosity>` * :doc:`viscosity <fix_viscosity>`
* :doc:`viscous <fix_viscous>` * :doc:`viscous (k) <fix_viscous>`
* :doc:`viscous/sphere <fix_viscous_sphere>` * :doc:`viscous/sphere <fix_viscous_sphere>`
* :doc:`wall/body/polygon <fix_wall_body_polygon>` * :doc:`wall/body/polygon <fix_wall_body_polygon>`
* :doc:`wall/body/polyhedron <fix_wall_body_polyhedron>` * :doc:`wall/body/polyhedron <fix_wall_body_polyhedron>`
* :doc:`wall/colloid <fix_wall>` * :doc:`wall/colloid <fix_wall>`
* :doc:`wall/ees <fix_wall_ees>` * :doc:`wall/ees <fix_wall_ees>`
* :doc:`wall/gran <fix_wall_gran>` * :doc:`wall/gran (k) <fix_wall_gran>`
* :doc:`wall/gran/region <fix_wall_gran_region>` * :doc:`wall/gran/region <fix_wall_gran_region>`
* :doc:`wall/harmonic <fix_wall>` * :doc:`wall/harmonic <fix_wall>`
* :doc:`wall/lj1043 <fix_wall>` * :doc:`wall/lj1043 <fix_wall>`
* :doc:`wall/lj126 <fix_wall>` * :doc:`wall/lj126 <fix_wall>`
* :doc:`wall/lj93 (k) <fix_wall>` * :doc:`wall/lj93 (k) <fix_wall>`
* :doc:`wall/lepton <fix_wall>`
* :doc:`wall/morse <fix_wall>` * :doc:`wall/morse <fix_wall>`
* :doc:`wall/piston <fix_wall_piston>` * :doc:`wall/piston <fix_wall_piston>`
* :doc:`wall/reflect (k) <fix_wall_reflect>` * :doc:`wall/reflect (k) <fix_wall_reflect>`
@ -264,4 +275,5 @@ OPT.
* :doc:`wall/region <fix_wall_region>` * :doc:`wall/region <fix_wall_region>`
* :doc:`wall/region/ees <fix_wall_ees>` * :doc:`wall/region/ees <fix_wall_ees>`
* :doc:`wall/srd <fix_wall_srd>` * :doc:`wall/srd <fix_wall_srd>`
* :doc:`wall/table <fix_wall>`
* :doc:`widom <fix_widom>` * :doc:`widom <fix_widom>`

View File

@ -10,6 +10,7 @@
* :ref:`Dihedral styles <dihedral>` * :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>` * :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>` * :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
KSpace solvers KSpace solvers
============== ==============

View File

@ -10,6 +10,7 @@
* :ref:`Dihedral styles <dihedral>` * :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>` * :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>` * :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Pair_style potentials Pair_style potentials
====================== ======================
@ -36,8 +37,10 @@ OPT.
* *
* :doc:`adp (ko) <pair_adp>` * :doc:`adp (ko) <pair_adp>`
* :doc:`agni (o) <pair_agni>` * :doc:`agni (o) <pair_agni>`
* :doc:`aip/water/2dm (t) <pair_aip_water_2dm>`
* :doc:`airebo (io) <pair_airebo>` * :doc:`airebo (io) <pair_airebo>`
* :doc:`airebo/morse (io) <pair_airebo>` * :doc:`airebo/morse (io) <pair_airebo>`
* :doc:`amoeba (g) <pair_amoeba>`
* :doc:`atm <pair_atm>` * :doc:`atm <pair_atm>`
* :doc:`awpmd/cut <pair_awpmd>` * :doc:`awpmd/cut <pair_awpmd>`
* :doc:`beck (go) <pair_beck>` * :doc:`beck (go) <pair_beck>`
@ -53,6 +56,7 @@ OPT.
* :doc:`born/coul/msm (o) <pair_born>` * :doc:`born/coul/msm (o) <pair_born>`
* :doc:`born/coul/wolf (go) <pair_born>` * :doc:`born/coul/wolf (go) <pair_born>`
* :doc:`born/coul/wolf/cs (g) <pair_cs>` * :doc:`born/coul/wolf/cs (g) <pair_cs>`
* :doc:`born/gauss <pair_born_gauss>`
* :doc:`bpm/spring <pair_bpm_spring>` * :doc:`bpm/spring <pair_bpm_spring>`
* :doc:`brownian (o) <pair_brownian>` * :doc:`brownian (o) <pair_brownian>`
* :doc:`brownian/poly (o) <pair_brownian>` * :doc:`brownian/poly (o) <pair_brownian>`
@ -91,8 +95,8 @@ OPT.
* :doc:`coul/wolf/cs <pair_cs>` * :doc:`coul/wolf/cs <pair_cs>`
* :doc:`dpd (giko) <pair_dpd>` * :doc:`dpd (giko) <pair_dpd>`
* :doc:`dpd/fdt <pair_dpd_fdt>` * :doc:`dpd/fdt <pair_dpd_fdt>`
* :doc:`dpd/ext (k) <pair_dpd_ext>` * :doc:`dpd/ext (ko) <pair_dpd_ext>`
* :doc:`dpd/ext/tstat (k) <pair_dpd_ext>` * :doc:`dpd/ext/tstat (ko) <pair_dpd_ext>`
* :doc:`dpd/fdt/energy (k) <pair_dpd_fdt>` * :doc:`dpd/fdt/energy (k) <pair_dpd_fdt>`
* :doc:`dpd/tstat (gko) <pair_dpd>` * :doc:`dpd/tstat (gko) <pair_dpd>`
* :doc:`dsmc <pair_dsmc>` * :doc:`dsmc <pair_dsmc>`
@ -124,6 +128,7 @@ OPT.
* :doc:`hbond/dreiding/lj (o) <pair_hbond_dreiding>` * :doc:`hbond/dreiding/lj (o) <pair_hbond_dreiding>`
* :doc:`hbond/dreiding/morse (o) <pair_hbond_dreiding>` * :doc:`hbond/dreiding/morse (o) <pair_hbond_dreiding>`
* :doc:`hdnnp <pair_hdnnp>` * :doc:`hdnnp <pair_hdnnp>`
* :doc:`hippo (g) <pair_amoeba>`
* :doc:`ilp/graphene/hbn (t) <pair_ilp_graphene_hbn>` * :doc:`ilp/graphene/hbn (t) <pair_ilp_graphene_hbn>`
* :doc:`ilp/tmd (t) <pair_ilp_tmd>` * :doc:`ilp/tmd (t) <pair_ilp_tmd>`
* :doc:`kolmogorov/crespi/full <pair_kolmogorov_crespi_full>` * :doc:`kolmogorov/crespi/full <pair_kolmogorov_crespi_full>`
@ -131,6 +136,9 @@ OPT.
* :doc:`lcbop <pair_lcbop>` * :doc:`lcbop <pair_lcbop>`
* :doc:`lebedeva/z <pair_lebedeva_z>` * :doc:`lebedeva/z <pair_lebedeva_z>`
* :doc:`lennard/mdf <pair_mdf>` * :doc:`lennard/mdf <pair_mdf>`
* :doc:`lepton (o) <pair_lepton>`
* :doc:`lepton/coul (o) <pair_lepton>`
* :doc:`lepton/sphere (o) <pair_lepton>`
* :doc:`line/lj <pair_line_lj>` * :doc:`line/lj <pair_line_lj>`
* :doc:`lj/charmm/coul/charmm (giko) <pair_charmm>` * :doc:`lj/charmm/coul/charmm (giko) <pair_charmm>`
* :doc:`lj/charmm/coul/charmm/implicit (ko) <pair_charmm>` * :doc:`lj/charmm/coul/charmm/implicit (ko) <pair_charmm>`
@ -161,16 +169,18 @@ OPT.
* :doc:`lj/cut/coul/msm (go) <pair_lj_cut_coul>` * :doc:`lj/cut/coul/msm (go) <pair_lj_cut_coul>`
* :doc:`lj/cut/coul/msm/dielectric <pair_dielectric>` * :doc:`lj/cut/coul/msm/dielectric <pair_dielectric>`
* :doc:`lj/cut/coul/wolf (o) <pair_lj_cut_coul>` * :doc:`lj/cut/coul/wolf (o) <pair_lj_cut_coul>`
* :doc:`lj/cut/dipole/cut (go) <pair_dipole>` * :doc:`lj/cut/dipole/cut (gko) <pair_dipole>`
* :doc:`lj/cut/dipole/long (g) <pair_dipole>` * :doc:`lj/cut/dipole/long (g) <pair_dipole>`
* :doc:`lj/cut/dipole/sf (go) <pair_dipole>` * :doc:`lj/cut/dipole/sf (go) <pair_dipole>`
* :doc:`lj/cut/soft (o) <pair_fep_soft>` * :doc:`lj/cut/soft (o) <pair_fep_soft>`
* :doc:`lj/cut/sphere (o) <pair_lj_cut_sphere>`
* :doc:`lj/cut/thole/long (o) <pair_thole>` * :doc:`lj/cut/thole/long (o) <pair_thole>`
* :doc:`lj/cut/tip4p/cut (o) <pair_lj_cut_tip4p>` * :doc:`lj/cut/tip4p/cut (o) <pair_lj_cut_tip4p>`
* :doc:`lj/cut/tip4p/long (got) <pair_lj_cut_tip4p>` * :doc:`lj/cut/tip4p/long (got) <pair_lj_cut_tip4p>`
* :doc:`lj/cut/tip4p/long/soft (o) <pair_fep_soft>` * :doc:`lj/cut/tip4p/long/soft (o) <pair_fep_soft>`
* :doc:`lj/expand (gko) <pair_lj_expand>` * :doc:`lj/expand (gko) <pair_lj_expand>`
* :doc:`lj/expand/coul/long (g) <pair_lj_expand>` * :doc:`lj/expand/coul/long (gk) <pair_lj_expand>`
* :doc:`lj/expand/sphere (o) <pair_lj_expand_sphere>`
* :doc:`lj/gromacs (gko) <pair_gromacs>` * :doc:`lj/gromacs (gko) <pair_gromacs>`
* :doc:`lj/gromacs/coul/gromacs (ko) <pair_gromacs>` * :doc:`lj/gromacs/coul/gromacs (ko) <pair_gromacs>`
* :doc:`lj/long/coul/long (iot) <pair_lj_long>` * :doc:`lj/long/coul/long (iot) <pair_lj_long>`
@ -179,9 +189,9 @@ OPT.
* :doc:`lj/long/tip4p/long (o) <pair_lj_long>` * :doc:`lj/long/tip4p/long (o) <pair_lj_long>`
* :doc:`lj/mdf <pair_mdf>` * :doc:`lj/mdf <pair_mdf>`
* :doc:`lj/relres (o) <pair_lj_relres>` * :doc:`lj/relres (o) <pair_lj_relres>`
* :doc:`lj/sdk (gko) <pair_sdk>` * :doc:`lj/spica (gko) <pair_spica>`
* :doc:`lj/sdk/coul/long (go) <pair_sdk>` * :doc:`lj/spica/coul/long (go) <pair_spica>`
* :doc:`lj/sdk/coul/msm (o) <pair_sdk>` * :doc:`lj/spica/coul/msm (o) <pair_spica>`
* :doc:`lj/sf/dipole/sf (go) <pair_dipole>` * :doc:`lj/sf/dipole/sf (go) <pair_dipole>`
* :doc:`lj/smooth (go) <pair_lj_smooth>` * :doc:`lj/smooth (go) <pair_lj_smooth>`
* :doc:`lj/smooth/linear (o) <pair_lj_smooth_linear>` * :doc:`lj/smooth/linear (o) <pair_lj_smooth_linear>`
@ -194,14 +204,15 @@ OPT.
* :doc:`lubricateU/poly <pair_lubricateU>` * :doc:`lubricateU/poly <pair_lubricateU>`
* :doc:`mdpd <pair_mesodpd>` * :doc:`mdpd <pair_mesodpd>`
* :doc:`mdpd/rhosum <pair_mesodpd>` * :doc:`mdpd/rhosum <pair_mesodpd>`
* :doc:`meam <pair_meam>` * :doc:`meam (k) <pair_meam>`
* :doc:`meam/ms (k) <pair_meam>`
* :doc:`meam/spline (o) <pair_meam_spline>` * :doc:`meam/spline (o) <pair_meam_spline>`
* :doc:`meam/sw/spline <pair_meam_sw_spline>` * :doc:`meam/sw/spline <pair_meam_sw_spline>`
* :doc:`mesocnt <pair_mesocnt>` * :doc:`mesocnt <pair_mesocnt>`
* :doc:`mesont/tpm <pair_mesont_tpm>` * :doc:`mesocnt/viscous <pair_mesocnt>`
* :doc:`mgpt <pair_mgpt>` * :doc:`mgpt <pair_mgpt>`
* :doc:`mie/cut (g) <pair_mie>` * :doc:`mie/cut (g) <pair_mie>`
* :doc:`mliap <pair_mliap>` * :doc:`mliap (k) <pair_mliap>`
* :doc:`mm3/switch3/coulgauss/long <pair_lj_switch3_coulgauss_long>` * :doc:`mm3/switch3/coulgauss/long <pair_lj_switch3_coulgauss_long>`
* :doc:`momb <pair_momb>` * :doc:`momb <pair_momb>`
* :doc:`morse (gkot) <pair_morse>` * :doc:`morse (gkot) <pair_morse>`
@ -232,6 +243,8 @@ OPT.
* :doc:`oxrna2/xstk <pair_oxrna2>` * :doc:`oxrna2/xstk <pair_oxrna2>`
* :doc:`oxrna2/coaxstk <pair_oxrna2>` * :doc:`oxrna2/coaxstk <pair_oxrna2>`
* :doc:`pace (k) <pair_pace>` * :doc:`pace (k) <pair_pace>`
* :doc:`pace/extrapolation (k) <pair_pace>`
* :doc:`pod <pair_pod>`
* :doc:`peri/eps <pair_peri>` * :doc:`peri/eps <pair_peri>`
* :doc:`peri/lps (o) <pair_peri>` * :doc:`peri/lps (o) <pair_peri>`
* :doc:`peri/pmb (o) <pair_peri>` * :doc:`peri/pmb (o) <pair_peri>`
@ -268,6 +281,7 @@ OPT.
* :doc:`spin/magelec <pair_spin_magelec>` * :doc:`spin/magelec <pair_spin_magelec>`
* :doc:`spin/neel <pair_spin_neel>` * :doc:`spin/neel <pair_spin_neel>`
* :doc:`srp <pair_srp>` * :doc:`srp <pair_srp>`
* :doc:`srp/react <pair_srp>`
* :doc:`sw (giko) <pair_sw>` * :doc:`sw (giko) <pair_sw>`
* :doc:`sw/angle/table <pair_sw_angle_table>` * :doc:`sw/angle/table <pair_sw_angle_table>`
* :doc:`sw/mod (o) <pair_sw>` * :doc:`sw/mod (o) <pair_sw>`
@ -289,6 +303,7 @@ OPT.
* :doc:`vashishta (gko) <pair_vashishta>` * :doc:`vashishta (gko) <pair_vashishta>`
* :doc:`vashishta/table (o) <pair_vashishta>` * :doc:`vashishta/table (o) <pair_vashishta>`
* :doc:`wf/cut <pair_wf_cut>` * :doc:`wf/cut <pair_wf_cut>`
* :doc:`ylz <pair_ylz>`
* :doc:`yukawa (gko) <pair_yukawa>` * :doc:`yukawa (gko) <pair_yukawa>`
* :doc:`yukawa/colloid (go) <pair_yukawa_colloid>` * :doc:`yukawa/colloid (go) <pair_yukawa_colloid>`
* :doc:`zbl (gko) <pair_zbl>` * :doc:`zbl (gko) <pair_zbl>`

View File

@ -123,14 +123,15 @@ LAMMPS:
.. _six: .. _six:
6. If you want text with spaces to be treated as a single argument, it 6. If you want text with spaces to be treated as a single argument, it
can be enclosed in either single or double or triple quotes. A long can be enclosed in either single (') or double (") or triple (""")
single argument enclosed in single or double quotes can span multiple quotes. A long single argument enclosed in single or double quotes
lines if the "&" character is used, as described above. When the can span multiple lines if the "&" character is used, as described
lines are concatenated together (and the "&" characters and line in :ref:`1 <one>` above. When the lines are concatenated together
breaks removed), the text will become a single line. If you want by LAMMPS (and the "&" characters and line breaks removed), the
multiple lines of an argument to retain their line breaks, the text combined text will become a single line. If you want multiple lines
can be enclosed in triple quotes, in which case "&" characters are of an argument to retain their line breaks, the text can be enclosed
not needed. For example: in triple quotes, in which case "&" characters are not needed and do
not function as line continuation character. For example:
.. code-block:: LAMMPS .. code-block:: LAMMPS
@ -144,8 +145,9 @@ LAMMPS:
System temperature = $t System temperature = $t
""" """
In each case, the single, double, or triple quotes are removed when In each of these cases, the single, double, or triple quotes are
the single argument they enclose is stored internally. removed and the enclosed text stored internally as a single
argument.
See the :doc:`dump modify format <dump_modify>`, :doc:`print See the :doc:`dump modify format <dump_modify>`, :doc:`print
<print>`, :doc:`if <if>`, and :doc:`python <python>` commands for <print>`, :doc:`if <if>`, and :doc:`python <python>` commands for

View File

@ -2,14 +2,17 @@ Removed commands and packages
============================= =============================
This page lists LAMMPS commands and packages that have been removed from This page lists LAMMPS commands and packages that have been removed from
the distribution and provides suggestions for alternatives or replacements. the distribution and provides suggestions for alternatives or
LAMMPS has special dummy styles implemented, that will stop LAMMPS and replacements. LAMMPS has special dummy styles implemented, that will
print a suitable error message in most cases, when a style/command is used stop LAMMPS and print a suitable error message in most cases, when a
that has been removed. style/command is used that has been removed or will replace the command
with the direct alternative (if available) and print a warning.
Fix ave/spatial and fix ave/spatial/sphere Fix ave/spatial and fix ave/spatial/sphere
------------------------------------------ ------------------------------------------
.. deprecated:: 11Dec2015
The fixes ave/spatial and ave/spatial/sphere have been removed from LAMMPS The fixes ave/spatial and ave/spatial/sphere have been removed from LAMMPS
since they were superseded by the more general and extensible "chunk since they were superseded by the more general and extensible "chunk
infrastructure". Here the system is partitioned in one of many possible infrastructure". Here the system is partitioned in one of many possible
@ -17,10 +20,37 @@ ways through the :doc:`compute chunk/atom <compute_chunk_atom>` command
and then averaging is done using :doc:`fix ave/chunk <fix_ave_chunk>`. and then averaging is done using :doc:`fix ave/chunk <fix_ave_chunk>`.
Please refer to the :doc:`chunk HOWTO <Howto_chunk>` section for an overview. Please refer to the :doc:`chunk HOWTO <Howto_chunk>` section for an overview.
Reset_ids command Box command
----------------- -----------
The reset_ids command has been renamed to :doc:`reset_atom_ids <reset_atom_ids>`. .. deprecated:: 22Dec2022
The *box* command has been removed and the LAMMPS code changed so it won't
be needed. If present, LAMMPS will ignore the command and print a warning.
Reset_ids, reset_atom_ids, reset_mol_ids commands
-------------------------------------------------
.. deprecated:: 22Dec2022
The *reset_ids*, *reset_atom_ids*, and *reset_mol_ids* commands have
been folded into the :doc:`reset_atoms <reset_atoms>` command. If
present, LAMMPS will replace the commands accordingly and print a
warning.
LATTE package
-------------
.. deprecated:: TBD
The LATTE package with the fix latte command was removed from LAMMPS.
This functionality has been superseded by :doc:`fix mdi/qm <fix_mdi_qm>`
and :doc:`fix mdi/qmmm <fix_mdi_qmmm>` from the :ref:`MDI package
<PKG-MDI>`. These fixes are compatible with several quantum software
packages, including LATTE. See the ``examples/QUANTUM`` dir and the
:doc:`MDI coupling HOWTO <Howto_mdi>` page. MDI supports running LAMMPS
with LATTE as a plugin library (similar to the way fix latte worked), as
well as on a different set of MPI processors.
MEAM package MEAM package
------------ ------------
@ -30,18 +60,42 @@ The code in the :ref:`MEAM package <PKG-MEAM>` is a translation of the
Fortran code of MEAM into C++, which removes several restrictions Fortran code of MEAM into C++, which removes several restrictions
(e.g. there can be multiple instances in hybrid pair styles) and allows (e.g. there can be multiple instances in hybrid pair styles) and allows
for some optimizations leading to better performance. The pair style for some optimizations leading to better performance. The pair style
:doc:`meam <pair_meam>` has the exact same syntax. :doc:`meam <pair_meam>` has the exact same syntax. For a transition
period the C++ version of MEAM was called USER-MEAMC so it could
coexist with the Fortran version.
Minimize style fire/old
-----------------------
.. deprecated:: 8Feb2023
Minimize style *fire/old* has been removed. Its functionality can be
reproduced with the *fire* style and specific options. Please see the reproduced with the *fire* style and specific options. Please see the
:doc:`min_modify command <min_modify>` documentation for details.
Pair style mesont/tpm, compute style mesont, atom style mesont
--------------------------------------------------------------
.. deprecated:: 8Feb2023
Pair style *mesont/tpm*, compute style *mesont*, and atom style
*mesont* have been removed from the :ref:`MESONT package <PKG-MESONT>`.
The same functionality is available through
:doc:`pair style mesocnt <pair_mesocnt>`,
:doc:`bond style mesocnt <bond_mesocnt>` and
:doc:`angle style mesocnt <angle_mesocnt>`.
REAX package REAX package
------------ ------------
The REAX package has been removed since it was superseded by the The REAX package has been removed since it was superseded by the
:ref:`REAXFF package <PKG-REAXFF>`. The REAXFF :ref:`REAXFF package <PKG-REAXFF>`. The REAXFF package has been tested
package has been tested to yield equivalent results to the REAX package, to yield equivalent results to the REAX package, offers better
offers better performance, supports OpenMP multi-threading via OPENMP, performance, supports OpenMP multi-threading via OPENMP, and GPU and
and GPU and threading parallelization through KOKKOS. The new pair styles threading parallelization through KOKKOS. The new pair styles are not
are not syntax compatible with the removed reax pair style, so input syntax compatible with the removed reax pair style, so input files will
files will have to be adapted. have to be adapted. The REAXFF package was originally called
USER-REAXC.
USER-CUDA package USER-CUDA package
----------------- -----------------
@ -60,5 +114,6 @@ restart2data tool
The functionality of the restart2data tool has been folded into the The functionality of the restart2data tool has been folded into the
LAMMPS executable directly instead of having a separate tool. A LAMMPS executable directly instead of having a separate tool. A
combination of the commands :doc:`read_restart <read_restart>` and combination of the commands :doc:`read_restart <read_restart>` and
:doc:`write_data <write_data>` can be used to the same effect. For added :doc:`write_data <write_data>` can be used to the same effect. For
convenience this conversion can also be triggered by :doc:`command line flags <Run_options>` added convenience this conversion can also be triggered by
:doc:`command line flags <Run_options>`

View File

@ -13,12 +13,15 @@ of time and requests from the LAMMPS user community.
Developer_org Developer_org
Developer_code_design Developer_code_design
Developer_parallel Developer_parallel
Developer_atom
Developer_comm_ops Developer_comm_ops
Developer_flow Developer_flow
Developer_write Developer_write
Developer_notes Developer_notes
Developer_updating
Developer_plugins Developer_plugins
Developer_unittest Developer_unittest
Classes Classes
Developer_platform Developer_platform
Developer_utils Developer_utils
Developer_grid

View File

@ -0,0 +1,88 @@
Accessing per-atom data
-----------------------
This page discusses how per-atom data is managed in LAMMPS, how it can
be accessed, which communication patterns apply, and some of the utility
functions that exist for a variety of purposes.
Owned and ghost atoms
^^^^^^^^^^^^^^^^^^^^^
As described on the :doc:`parallel partitioning algorithms
<Developer_par_part>` page, LAMMPS uses a domain decomposition of the
simulation domain, either in a *brick* or *tiled* manner. Each MPI
process *owns* exactly one subdomain and the atoms within it. To compute
forces for tuples of atoms that are spread across subdomain boundaries,
each MPI process also maintains a "halo" of *ghost* atoms within the
communication cutoff distance of its subdomain.
The total number of atoms is stored in `Atom::natoms` (within any
typical class this can be referred to as `atom->natoms`). The number of
*owned* (or "local") atoms is stored in `Atom::nlocal`; the number of
*ghost* atoms is stored in `Atom::nghost`. The sum of `Atom::nlocal`
over all MPI processes should equal `Atom::natoms`. By default, this is
regularly checked by the Thermo class, and if the sum does not match,
LAMMPS stops with a "lost atoms" error. For convenience, the property
`Atom::nmax` is also available; it is the maximum of
`Atom::nlocal + Atom::nghost` across all MPI processes.
Per-atom properties are managed either by the atom style or, in the
case of custom per-atom arrays, by individual classes. If only access
to *owned* atoms is needed, they are usually allocated with size
`Atom::nlocal`, otherwise with size `Atom::nmax`. Please note that not
all per-atom properties are available or updated on *ghost* atoms. For
example, per-atom velocities of ghost atoms are only updated with
:doc:`comm_modify vel yes <comm_modify>`.
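To make these size and loop conventions concrete, here is a minimal
sketch (not taken from any specific LAMMPS class; it only assumes that
the usual ``atom`` class pointer is in scope). A loop over *owned*
atoms and a loop that also covers *ghost* atoms differ only in their
upper bound:
.. code-block:: c++

   double **x = atom->x;                          // per-atom positions, allocated to Atom::nmax
   const int nlocal = atom->nlocal;               // number of owned atoms
   const int nall = atom->nlocal + atom->nghost;  // owned plus ghost atoms

   for (int i = 0; i < nlocal; ++i) {
     // owned atoms only, e.g. accumulate a per-process property from x[i][0..2]
   }
   for (int i = 0; i < nall; ++i) {
     // owned plus ghost atoms, e.g. when pairs across subdomain borders matter
   }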
Atom indexing
^^^^^^^^^^^^^
When referring to individual atoms, they may be indexed by their local
*index*, that is, their position in per-atom arrays such as `Atom::x`.
These arrays are densely populated, containing first all *owned* atoms
(index < `Atom::nlocal`) and then all *ghost* atoms. The order of atoms
in these arrays can change due to atoms migrating between subdomains,
atoms being added or deleted, or atoms being sorted for better cache
efficiency. Atoms are globally and uniquely identified by their *atom
ID*. There may be multiple atoms with the same atom ID present, but
only one of them may be an *owned* atom.
To find the local *index* of an atom when its *atom ID* is known, the
`Atom::map()` function may be used. It returns either the local atom
index or -1. If the returned value is between 0 (inclusive) and
`Atom::nlocal` (exclusive), it is an *owned* (or "local") atom; for
larger values, the atom is present as a ghost atom; for a value of -1,
the atom is not present in the current subdomain at all.
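A short sketch of this lookup (the atom ID value used here is a made-up
placeholder; ``tagint`` is the LAMMPS-defined integer type for atom IDs):
.. code-block:: c++

   tagint someID = 42;              // hypothetical atom ID
   int i = atom->map(someID);       // local index or -1
   if (i < 0) {
     // atom is not present in this subdomain (neither owned nor ghost)
   } else if (i < atom->nlocal) {
     // owned ("local") atom
   } else {
     // ghost atom
   }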
If multiple atoms with the same tag exist in the same subdomain, they
can be found via the `Atom::sametag` array. Each entry points to the
next atom index with the same tag, or is -1 if there are no more atoms
with that tag. The chain is exhaustive when it is started from the
index of an *owned* atom, since atom IDs are unique among owned atoms,
so there can be only one such atom per ID. Example code to count how
many times a given atom ID is present in the subdomain:
.. code-block:: c++
   const int *sametag = atom->sametag;
   for (int i = 0; i < atom->nlocal; ++i) {
     int count = 1;              // the owned atom itself
     int j = i;                  // separate index so the loop variable is not modified
     while (sametag[j] >= 0) {   // follow the chain of atoms with the same ID
       j = sametag[j];
       ++count;
     }
     printf("Atom ID " TAGINT_FORMAT " is present %d times\n", atom->tag[i], count);
   }
Atom class versus AtomVec classes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `Atom` class contains all kinds of flags and counters about the
atoms in the system, as well as pointers to **all** per-atom properties
that can exist for atoms. However, only a subset of these pointers is
non-NULL, and which ones they are depends on the atom style. For each
atom style there is a corresponding `AtomVecXXX` class derived from the
`AtomVec` base class, where XXX indicates the atom style. This
`AtomVecXXX` class updates the counters and per-atom pointers when
atoms are added to or removed from the system or migrate between
subdomains.
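As a hedged illustration of the practical consequence (the flag and
pointer names below are regular `Atom` class members, but the
surrounding code and the error message are invented for this sketch),
code that needs a per-atom property should check the corresponding
availability flag before using the pointer:
.. code-block:: c++

   // atom->q is only non-NULL for atom styles that provide per-atom charge
   if (!atom->q_flag) error->all(FLERR, "This style requires atom attribute q");

   double *q = atom->q;
   for (int i = 0; i < atom->nlocal; ++i) {
     // ... use q[i] ...
   }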

View File

@ -1,56 +1,57 @@
Code design Code design
----------- -----------
This section explains some of the code design choices in LAMMPS with This section explains some code design choices in LAMMPS with the goal
the goal of helping developers write new code similar to the existing of helping developers write new code similar to the existing code.
code. Please see the section on :doc:`Requirements for contributed Please see the section on :doc:`Requirements for contributed code
code <Modify_style>` for more specific recommendations and guidelines. <Modify_style>` for more specific recommendations and guidelines. While
While that section is organized more in the form of a checklist for that section is organized more in the form of a checklist for code
code contributors, the focus here is on overall code design strategy, contributors, the focus here is on overall code design strategy, choices
choices made between possible alternatives, and discussing some made between possible alternatives, and discussing some relevant C++
relevant C++ programming language constructs. programming language constructs.
Historically, the basic design philosophy of the LAMMPS C++ code was a Historically, the basic design philosophy of the LAMMPS C++ code was a
"C with classes" style. The motivation was to make it easy to modify "C with classes" style. The motivation was to make it easy to modify
LAMMPS for people without significant training in C++ programming. LAMMPS for people without significant training in C++ programming. Data
Data structures and code constructs were used that resemble the structures and code constructs were used that resemble the previous
previous implementation(s) in Fortran. A contributing factor to this implementation(s) in Fortran. A contributing factor to this choice was
choice also was that at the time, C++ compilers were often not mature that at the time, C++ compilers were often not mature and some advanced
and some of the advanced features contained bugs or did not function features contained bugs or did not function as the standard required.
as the standard required. There were also disagreements between There were also disagreements between compiler vendors as to how to
compiler vendors as to how to interpret the C++ standard documents. interpret the C++ standard documents.
However, C++ compilers have now advanced significantly. In 2020 we However, C++ compilers and the C++ programming language have advanced
decided to to require the C++11 standard as the minimum C++ language significantly. In 2020, the LAMMPS developers decided to require the
standard for LAMMPS. Since then we have begun to also replace some of C++11 standard as the minimum C++ language standard for LAMMPS. Since
the C-style constructs with equivalent C++ functionality, either from then, we have begun to replace C-style constructs with equivalent C++
the C++ standard library or as custom classes or functions, in order functionality. This was taken either from the C++ standard library or
to improve readability of the code and to increase code reuse through implemented as custom classes or functions. The goal is to improve
abstraction of commonly used functionality. readability of the code and to increase code reuse through abstraction
of commonly used functionality.
.. note:: .. note::
Please note that as of spring 2022 there is still a sizable chunk Please note that as of spring 2023 there is still a sizable chunk of
of legacy code in LAMMPS that has not yet been refactored to legacy code in LAMMPS that has not yet been refactored to reflect
reflect these style conventions in full. LAMMPS has a large code these style conventions in full. LAMMPS has a large code base and
base and many different contributors and there also is a hierarchy many contributors. There is also a hierarchy of precedence in which
of precedence in which the code is adapted. Highest priority has the code is adapted. Highest priority has been the code in the
been the code in the ``src`` folder, followed by code in packages ``src`` folder, followed by code in packages in order of their
in order of their popularity and complexity (simpler code is popularity and complexity (simpler code gets adapted sooner), followed
adapted sooner), followed by code in the ``lib`` folder. Source by code in the ``lib`` folder. Source code that is downloaded from
code that is downloaded from external packages or libraries during external packages or libraries during compilation is not subject to
compilation is not subject to the conventions discussed here. the conventions discussed here.
Object oriented code Object-oriented code
^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^
LAMMPS is designed to be an object oriented code. Each simulation is LAMMPS is designed to be an object-oriented code. Each simulation is
represented by an instance of the LAMMPS class. When running in represented by an instance of the LAMMPS class. When running in
parallel each MPI process creates such an instance. This can be seen parallel, each MPI process creates such an instance. This can be seen
in the ``main.cpp`` file where the core steps of running a LAMMPS in the ``main.cpp`` file where the core steps of running a LAMMPS
simulation are the following 3 lines of code: simulation are the following 3 lines of code:
.. code-block:: C++ .. code-block:: c++
LAMMPS *lammps = new LAMMPS(argc, argv, lammps_comm); LAMMPS *lammps = new LAMMPS(argc, argv, lammps_comm);
lammps->input->file(); lammps->input->file();
@ -67,29 +68,29 @@ other special features.
The basic LAMMPS class hierarchy which is created by the LAMMPS class The basic LAMMPS class hierarchy which is created by the LAMMPS class
constructor is shown in :ref:`class-topology`. When input commands constructor is shown in :ref:`class-topology`. When input commands
are processed, additional class instances are created, or deleted, or are processed, additional class instances are created, or deleted, or
replaced. Likewise specific member functions of specific classes are replaced. Likewise, specific member functions of specific classes are
called to trigger actions such as creating atoms, computing forces, called to trigger actions such as creating atoms, computing forces,
computing properties, time-propagating the system, or writing output. computing properties, time-propagating the system, or writing output.
Compositing and Inheritance Compositing and Inheritance
=========================== ===========================
LAMMPS makes extensive use of the object oriented programming (OOP) LAMMPS makes extensive use of the object-oriented programming (OOP)
principles of *compositing* and *inheritance*. Classes like the principles of *compositing* and *inheritance*. Classes like the
``LAMMPS`` class are a **composite** containing pointers to instances ``LAMMPS`` class are a **composite** containing pointers to instances
of other classes like ``Atom``, ``Comm``, ``Force``, ``Neighbor``, of other classes like ``Atom``, ``Comm``, ``Force``, ``Neighbor``,
``Modify``, and so on. Each of these classes implement certain ``Modify``, and so on. Each of these classes implements certain
functionality by storing and manipulating data related to the functionality by storing and manipulating data related to the
simulation and providing member functions that trigger certain simulation and providing member functions that trigger certain
actions. Some of those classes like ``Force`` are themselves actions. Some of those classes like ``Force`` are themselves
composites, containing instances of classes describing different force composites, containing instances of classes describing different force
interactions. Similarly the ``Modify`` class contains a list of interactions. Similarly, the ``Modify`` class contains a list of
``Fix`` and ``Compute`` classes. If the input commands that ``Fix`` and ``Compute`` classes. If the input commands that
correspond to these classes include the word *style*, then LAMMPS correspond to these classes include the word *style*, then LAMMPS
stores only a single instance of that class. E.g. *atom_style*, stores only a single instance of that class. E.g. *atom_style*,
*comm_style*, *pair_style*, *bond_style*. It the input command does *comm_style*, *pair_style*, *bond_style*. If the input command does
not include the word *style*, there can be many instances of that **not** include the word *style*, then there may be many instances of
class defined. E.g. *region*, *fix*, *compute*, *dump*. that class defined, for example *region*, *fix*, *compute*, *dump*.
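As a brief sketch of how this composite structure is used in practice
(the members named below exist in current LAMMPS versions, but the fix
ID ``myfix`` is hypothetical and error handling is omitted):
.. code-block:: c++

   // styles that exist only once are reached through a unique pointer:
   Pair *pair = force->pair;                     // current pair style, may be nullptr

   // commands that may have many instances are kept in lists, e.g. in Modify:
   Fix *fix = modify->get_fix_by_id("myfix");    // returns nullptr if no such fix exists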
**Inheritance** enables creation of *derived* classes that can share **Inheritance** enables creation of *derived* classes that can share
common functionality in their base class while providing a consistent common functionality in their base class while providing a consistent
@ -100,19 +101,18 @@ derived class variant was instantiated. In LAMMPS these derived
classes are often referred to as "styles", e.g. pair styles, fix classes are often referred to as "styles", e.g. pair styles, fix
styles, atom styles and so on. styles, atom styles and so on.
This is the origin of the flexibility of LAMMPS. For example pair This is the origin of the flexibility of LAMMPS. For example, pair
styles implement a variety of different non-bonded interatomic styles implement a variety of different non-bonded interatomic
potential functions. All details for the implementation of a potential functions. All details for the implementation of a
potential are stored and executed in a single class. potential are stored and executed in a single class.
As mentioned above, there can be multiple instances of classes derived As mentioned above, there can be multiple instances of classes derived
from the ``Fix`` or ``Compute`` base classes. They represent a from the ``Fix`` or ``Compute`` base classes. They represent a
different facet of LAMMPS flexibility as they provide methods which different facet of LAMMPS' flexibility, as they provide methods which
can be called at different points in time within a timestep, as can be called at different points within a timestep, as explained in
explained in `Developer_flow`. This allows the input script to tailor `Developer_flow`. This allows the input script to tailor how a specific
how a specific simulation is run, what diagnostic computations are simulation is run, what diagnostic computations are performed, and how
performed, and how the output of those computations is further the output of those computations is further processed or output.
processed or output.
Additional code sharing is possible by creating derived classes from the Additional code sharing is possible by creating derived classes from the
derived classes (e.g., to implement an accelerated version of a pair derived classes (e.g., to implement an accelerated version of a pair
@ -164,15 +164,15 @@ The difference in behavior of the ``normal()`` and the ``poly()`` member
functions is which of the two member functions is called when executing functions is which of the two member functions is called when executing
`base1->call()` versus `base2->call()`. Without polymorphism, a `base1->call()` versus `base2->call()`. Without polymorphism, a
function within the base class can only call member functions within the function within the base class can only call member functions within the
same scope, that is ``Base::call()`` will always call same scope: that is, ``Base::call()`` will always call
``Base::normal()``. But for the `base2->call()` case the call of the ``Base::normal()``. But for the `base2->call()` case, the call of the
virtual member function will be dispatched to ``Derived::poly()`` virtual member function will be dispatched to ``Derived::poly()``
instead. This mechanism means that functions are called within the instead. This mechanism results in calling functions that are within
scope of the class type that was used to *create* the class instance are the scope of the class that was used to *create* the instance, even if
invoked; even if they are assigned to a pointer using the type of a base they are assigned to a pointer for their base class. This is the
class. This is the desired behavior and this way LAMMPS can even use desired behavior, and this way LAMMPS can even use styles that are loaded
styles that are loaded at runtime from a shared object file with the at runtime from a shared object file with the :doc:`plugin command
:doc:`plugin command <plugin>`. <plugin>`.
A special case of virtual functions are so-called pure functions. These A special case of virtual functions are so-called pure functions. These
are virtual functions that are initialized to 0 in the class declaration are virtual functions that are initialized to 0 in the class declaration
@ -189,12 +189,12 @@ This has the effect that an instance of the base class cannot be
created and that derived classes **must** implement these functions. created and that derived classes **must** implement these functions.
Many of the functions listed with the various class styles in the Many of the functions listed with the various class styles in the
section :doc:`Modify` are pure functions. The motivation for this is section :doc:`Modify` are pure functions. The motivation for this is
to define the interface or API of the functions but defer their to define the interface or API of the functions, but defer their
implementation to the derived classes. implementation to the derived classes.
However, there are downsides to this. For example, calls to virtual However, there are downsides to this. For example, calls to virtual
functions from within a constructor, will not be in the scope of the functions from within a constructor, will *not* be in the scope of the
derived class and thus it is good practice to either avoid calling them derived class, and thus it is good practice to either avoid calling them
or to provide an explicit scope such as ``Base::poly()`` or or to provide an explicit scope such as ``Base::poly()`` or
``Derived::poly()``. Furthermore, any destructors in classes containing ``Derived::poly()``. Furthermore, any destructors in classes containing
virtual functions should be declared virtual too, so they will be virtual functions should be declared virtual too, so they will be
@ -208,8 +208,8 @@ dispatch.
that are intended to replace a virtual or pure function use the that are intended to replace a virtual or pure function use the
``override`` property keyword. For the same reason, the use of ``override`` property keyword. For the same reason, the use of
overloads or default arguments for virtual functions should be overloads or default arguments for virtual functions should be
avoided as they lead to confusion over which function is supposed to avoided, as they lead to confusion over which function is supposed to
override which and which arguments need to be declared. override which, and which arguments need to be declared.
Style Factories Style Factories
=============== ===============
@ -219,10 +219,10 @@ uses a programming pattern called `Factory`. Those are functions that
create an instance of a specific derived class, say ``PairLJCut`` and create an instance of a specific derived class, say ``PairLJCut`` and
return a pointer to the type of the common base class of that style, return a pointer to the type of the common base class of that style,
``Pair`` in this case. To associate the factory function with the ``Pair`` in this case. To associate the factory function with the
style keyword, an ``std::map`` class is used with function pointers style keyword, a ``std::map`` class is used with function pointers
indexed by their keyword (for example "lj/cut" for ``PairLJCut`` and indexed by their keyword (for example "lj/cut" for ``PairLJCut`` and
"morse" for ``PairMorse``). A couple of typedefs help keep the code "morse" for ``PairMorse``). A couple of typedefs help keep the code
readable and a template function is used to implement the actual readable, and a template function is used to implement the actual
factory functions for the individual classes. Below is an example factory functions for the individual classes. Below is an example
of such a factory function from the ``Force`` class as declared in of such a factory function from the ``Force`` class as declared in
``force.h`` and implemented in ``force.cpp``. The file ``style_pair.h`` ``force.h`` and implemented in ``force.cpp``. The file ``style_pair.h``
@ -232,7 +232,7 @@ macro ``PairStyle()`` will associate the style name "lj/cut"
with a factory function creating an instance of the ``PairLJCut`` with a factory function creating an instance of the ``PairLJCut``
class. class.
.. code-block:: C++ .. code-block:: c++
// from force.h // from force.h
typedef Pair *(*PairCreator)(LAMMPS *); typedef Pair *(*PairCreator)(LAMMPS *);
@ -279,26 +279,26 @@ from and writing to files and console instead of C++ "iostreams".
This is mainly motivated by better performance, better control over This is mainly motivated by better performance, better control over
formatting, and less effort to achieve specific formatting. formatting, and less effort to achieve specific formatting.
Since mixing "stdio" and "iostreams" can lead to unexpected Since mixing "stdio" and "iostreams" can lead to unexpected behavior,
behavior. use of the latter is strongly discouraged. Also output to use of the latter is strongly discouraged. Output to the screen should
the screen should not use the predefined ``stdout`` FILE pointer, but *not* use the predefined ``stdout`` FILE pointer, but rather the
rather the ``screen`` and ``logfile`` FILE pointers managed by the ``screen`` and ``logfile`` FILE pointers managed by the LAMMPS class.
LAMMPS class. Furthermore, output should generally only be done by Furthermore, output should generally only be done by MPI rank 0
MPI rank 0 (``comm->me == 0``). Output that is sent to both (``comm->me == 0``). Output that is sent to both ``screen`` and
``screen`` and ``logfile`` should use the :cpp:func:`utils::logmesg() ``logfile`` should use the :cpp:func:`utils::logmesg() convenience
convenience function <LAMMPS_NS::utils::logmesg>`. function <LAMMPS_NS::utils::logmesg>`.
We also discourage the use of stringstreams because the bundled {fmt} We discourage the use of stringstreams because the bundled {fmt} library
library and the customized tokenizer classes can provide the same and the customized tokenizer classes provide the same functionality in a
functionality in a cleaner way with better performance. This also cleaner way with better performance. This also helps maintain a
helps maintain a consistent programming syntax with code from many consistent programming syntax with code from many different
different contributors. contributors.
Formatting with the {fmt} library Formatting with the {fmt} library
=================================== ===================================
The LAMMPS source code includes a copy of the `{fmt} library The LAMMPS source code includes a copy of the `{fmt} library
<https://fmt.dev>`_ which is preferred over formatting with the <https://fmt.dev>`_, which is preferred over formatting with the
"printf()" family of functions. The primary reason is that it allows "printf()" family of functions. The primary reason is that it allows
a typesafe default format for any type of supported data. This is a typesafe default format for any type of supported data. This is
particularly useful for formatting integers of a given size (32-bit or particularly useful for formatting integers of a given size (32-bit or
@ -313,17 +313,16 @@ been included into the C++20 language standard, so changes to adopt it
are future-proof.
Formatted strings are frequently created by calling the
``fmt::format()`` function, which will return a string as a
``std::string`` class instance. In contrast to the ``%`` placeholder in
``printf()``, the {fmt} library uses ``{}`` to embed format descriptors.
In the simplest case, no additional characters are needed, as {fmt} will
choose the default format based on the data type of the argument.
Otherwise, the ``fmt::print()`` function may be used instead of
``printf()`` or ``fprintf()``. In addition, several LAMMPS output
functions that originally accepted a single string as an argument have
been overloaded to accept a format string with optional arguments as
well (e.g., ``Error::all()``, ``Error::one()``, ``utils::logmesg()``).
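For example, a formatted message can be passed directly to these overloaded functions (a minimal sketch; the variable names are made up for illustration):
.. code-block:: c++
// hypothetical values, for illustration only
bigint natoms = 1000;
double cutoff = 2.5;
utils::logmesg(lmp, "System has {} atoms, cutoff {:.2f}\n", natoms, cutoff);
if (cutoff <= 0.0) error->all(FLERR, "Invalid cutoff value: {}", cutoff);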
Summary of the {fmt} format syntax
==================================
@ -332,10 +331,11 @@ The syntax of the format string is "{[<argument id>][:<format spec>]}",
where either the argument id or the format spec (separated by a colon
':') is optional. The argument id is usually a number starting from 0
that is the index to the arguments following the format string. By
default, these are assigned in order (i.e. 0, 1, 2, 3, 4 etc.). The
most common case for using argument id would be to use the same argument
in multiple places in the format string without having to provide it as
an argument multiple times. The argument id is rarely used in the LAMMPS
source code.
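For example, the same argument can be referenced twice through its id (a generic illustration, not taken from the LAMMPS sources):
.. code-block:: c++
// argument 0 is used twice, argument 1 once
std::string mesg = fmt::format("{0} x {0} = {1}", 3, 9);  // "3 x 3 = 9"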
More common is the use of a format specifier, which starts with a colon.
This may optionally be followed by a fill character (default is ' '). If
@ -347,20 +347,21 @@ width, which may be followed by a dot '.' and a precision for floating
point numbers. The final character in the format string would be an
indicator for the "presentation", i.e. 'd' for decimal presentation of
integers, 'x' for hexadecimal, 'o' for octal, 'c' for character etc.
This mostly follows the "printf()" scheme, but without requiring an
additional length parameter to distinguish between different integer
widths. The {fmt} library will detect those and adapt the formatting
accordingly. For floating point numbers there are, correspondingly, 'g'
for generic presentation, 'e' for exponential presentation, and 'f' for
fixed point presentation.
The format string "{:8}" would thus represent *any* type argument and be
replaced by at least 8 characters; "{:<8}" would do this as left
aligned, "{:^8}" as centered, "{:>8}" as right aligned. If a specific
presentation is selected, the argument type must be compatible or else
the {fmt} formatting code will throw an exception. Some format string
examples are given below:
.. code-block:: c++
auto mesg = fmt::format(" CPU time: {:4d}:{:02d}:{:02d}\n", cpuh, cpum, cpus);
mesg = fmt::format("{:<8s}| {:<10.5g} | {:<10.5g} | {:<10.5g} |{:6.1f} |{:6.2f}\n",
@ -392,12 +393,12 @@ documentation <https://fmt.dev/latest/syntax.html>`_ website.
Memory management
^^^^^^^^^^^^^^^^^
Dynamical allocation of small data and objects can be done with the C++
commands "new" and "delete/delete[]". Large data should use the member
functions of the ``Memory`` class, most commonly, ``Memory::create()``,
``Memory::grow()``, and ``Memory::destroy()``, which provide variants
for vectors, 2d arrays, 3d arrays, etc. These can also be used for
small data.
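A brief sketch of typical usage (the array names and dimensions are only illustrative):
.. code-block:: c++
double *evec = nullptr;     // per-atom vector
double **farray = nullptr;  // 2d array with nmax rows and 3 columns
memory->create(evec, nmax, "pair:evec");
memory->create(farray, nmax, 3, "pair:farray");
// ... use the data ...
memory->destroy(evec);
memory->destroy(farray);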
The use of ``malloc()``, ``calloc()``, ``realloc()`` and ``free()``
directly is strongly discouraged. To simplify adapting legacy code
@ -408,26 +409,24 @@ perform additional error checks for safety.
Use of these custom memory allocation functions is motivated by the
following considerations:
- Memory allocation failures on *any* MPI rank during a parallel run
will trigger an immediate abort of the entire parallel calculation.
- A failing "new" will trigger an exception, which is also captured by
LAMMPS and triggers a global abort.
- Allocation of multidimensional arrays will be done in a C compatible
fashion, but such that the actual data is stored in one
large contiguous block. Thus, when MPI communication is needed,
the data can be communicated directly (similar to Fortran arrays).
- The "destroy()" and "sfree()" functions may safely be called on NULL
pointers.
- The "destroy()" functions will nullify the pointer variables, thus
making "use after free" errors easy to detect.
- It is possible to use a larger than default memory alignment (not on
all operating systems, since the allocated storage pointers must be
compatible with ``free()`` for technical reasons).
In the practical implementation of code this means that any pointer
variables that are class members should be initialized to a ``nullptr``
value in their respective constructors. That way, it is safe to call
``Memory::destroy()`` or ``delete[]`` on them before *any* allocation
outside the constructor. This helps prevent memory leaks.
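A minimal sketch of this pattern (the class and member names are hypothetical):
.. code-block:: c++
// constructor: initialize pointer members so destroy() is always safe
PairExample::PairExample(LAMMPS *lmp) : Pair(lmp)
{
  coeff_custom = nullptr;
}
// destructor: safe even if coeff_custom was never allocated
PairExample::~PairExample()
{
  memory->destroy(coeff_custom);
}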
@ -14,8 +14,8 @@ Owned and ghost atoms
As described on the :doc:`parallel partitioning algorithms
<Developer_par_part>` page, LAMMPS spatially decomposes the simulation
domain, either in a *brick* or *tiled* manner. Each processor (MPI
task) owns atoms within its subdomain and additionally stores ghost
atoms within a cutoff distance of its subdomain.
Forward and reverse communication
=================================
@ -28,7 +28,7 @@ The need to do this communication arises when data from the owned atoms
is updated (e.g. their positions) and this updated information needs to
be **copied** to the corresponding ghost atoms.
And second, *reverse communication*, which sends ghost atom information
from each processor to the owning processor to **accumulate** (sum)
the values with the corresponding owned atoms. The need for this
arises when data is computed and also stored with ghost atoms
@ -58,7 +58,7 @@ embedded-atom method (EAM) which compute intermediate values in the
first part of the compute() function that need to be stored by both
owned and ghost atoms for the second part of the force computation.
The *Comm* class methods perform the MPI communication for buffers of
per-atom data. They "call back" to the *Pair* class, so it can *pack*
or *unpack* the buffer with data the *Pair* class owns. There are 4
such methods that the *Pair* class must define, assuming it uses both
forward and reverse communication:
@ -70,22 +70,22 @@ forward and reverse communication:
The arguments to these methods include the buffer and a list of atoms
to pack or unpack. The *Pair* class also must set the *comm_forward*
and *comm_reverse* variables, which store the number of values stored
in the communication buffers for each operation. This means, if
desired, it can choose to store multiple per-atom values in the
buffer, and they will be communicated together to minimize
communication overhead. The communication buffers are defined as vectors
containing ``double`` values. To correctly store integers that may be
64-bit (bigint, tagint, imageint) in the buffer, you need to use the
:ref:`ubuf union <communication_buffer_coding_with_ubuf>` construct.
The *Fix*, *Compute*, and *Dump* classes can also invoke the same kind
of forward and reverse communication operations using the same *Comm*
class methods. Likewise, the same pack/unpack methods and
comm_forward/comm_reverse variables must be defined by the calling
*Fix*, *Compute*, or *Dump* class.
For *Fix* classes, there is an optional second argument to the
*forward_comm()* and *reverse_comm()* call which can be used when the
fix performs multiple modes of communication, with different numbers
of values per atom. The fix should set the *comm_forward* and
@ -150,7 +150,7 @@ latter case, when the *ring* operation is complete, each processor can
examine its original buffer to extract modified values.
Note that the *ring* operation is similar to an MPI_Alltoall()
operation, where every processor effectively sends and receives data to
every other processor. The difference is that the *ring* operation
does it one step at a time, so the total volume of data does not need
to be stored by every processor. However, the *ring* operation is
@ -184,8 +184,8 @@ The *exchange_data()* method triggers the communication to be
performed. Each processor provides the vector of *N* datums to send,
and the size of each datum. All datums must be the same size.
The *create_atom()* and *exchange_atom()* methods are similar, except
that the size of each datum can be different. Typically, this is used
to communicate atoms, each with a variable amount of per-atom data, to
other processors.
@ -45,9 +45,9 @@ other methods in the class.
zero before each timestep, so that forces (torques, etc) can be
accumulated.
Now for the ``Verlet::run()`` method. Its basic structure in high-level
pseudocode is shown below. In the actual code in ``src/verlet.cpp``
some of these operations are conditionally invoked.
.. code-block:: python
@ -105,17 +105,17 @@ need it. These flags are passed to the various methods that compute
particle interactions, so that they either compute and tally the
corresponding data or can skip the extra calculations if the energy and
virial are not needed. See the comments for the ``Integrate::ev_set()``
method, which document the flag values.
At various points of the timestep, fixes are invoked,
e.g. ``fix->initial_integrate()``. In the code, this is actually done
via the Modify class, which stores all the Fix objects and lists of which
should be invoked at what point in the timestep. Fixes are the LAMMPS
mechanism for tailoring the operations of a timestep for a particular
simulation. As described elsewhere, each fix has one or more methods,
each of which is invoked at a specific stage of the timestep, as shown in
the timestep pseudocode. All the active fixes defined in an input
script that are flagged to have an ``initial_integrate()`` method are
invoked at the beginning of each timestep. Examples are :doc:`fix nve
<fix_nve>` or :doc:`fix nvt or fix npt <fix_nh>` which perform the
start-of-timestep velocity-Verlet integration operations to update
@ -131,15 +131,15 @@ can be changed using the :doc:`neigh_modify every/delay/check
<neigh_modify>` command. If not, coordinates of ghost atoms are
acquired by each processor via the ``forward_comm()`` method of the Comm
class. If neighbor lists need to be built, several operations within
the inner if clause of the pseudocode are first invoked. The
``pre_exchange()`` method of any defined fixes is invoked first.
Typically, this inserts or deletes particles from the system.
Periodic boundary conditions are then applied by the Domain class via
its ``pbc()`` method to remap particles that have moved outside the
simulation box back into the box. Note that this is not done every
timestep, but only when neighbor lists are rebuilt. This is so that
each processor's subdomain will have consistent (nearby) atom
coordinates for its owned and ghost atoms. It is also why dumped atom
coordinates may be slightly outside the simulation box if not dumped
on a step where the neighbor lists are rebuilt.
@ -148,15 +148,15 @@ The box boundaries are then reset (if needed) via the ``reset_box()``
method of the Domain class, e.g. if box boundaries are shrink-wrapped to
current particle coordinates. A change in the box size or shape
requires internal information for communicating ghost atoms (Comm class)
and neighbor list bins (Neighbor class) to be updated. The ``setup()``
method of the Comm class and ``setup_bins()`` method of the Neighbor
class perform the update.
The code is now ready to migrate atoms that have left a processor's
geometric subdomain to new processors. The ``exchange()`` method of
the Comm class performs this operation. The ``borders()`` method of the
Comm class then identifies ghost atoms surrounding each processor's
subdomain and communicates ghost atom information to neighboring
processors. It does this by looping over all the atoms owned by a
processor to make lists of those to send to each neighbor processor. On
subsequent timesteps, the lists are used by the ``Comm::forward_comm()``
@ -217,20 +217,21 @@ file, and restart files. See the :doc:`thermo_style <thermo_style>`,
:doc:`dump <dump>`, and :doc:`restart <restart>` commands for more
details.
The flow of control during energy minimization iterations is similar to
that of a molecular dynamics timestep. Forces are computed, neighbor
lists are built as needed, atoms migrate to new processors, and atom
coordinates and forces are communicated to neighboring processors. The
only difference is what Fix class operations are invoked when. Only a
subset of LAMMPS fixes are useful during energy minimization, as
explained in their individual doc pages. The relevant Fix class methods
are ``min_pre_exchange()``, ``min_pre_force()``, and
``min_post_force()``. Each fix is invoked at the appropriate place
within the minimization iteration. For example, the
``min_post_force()`` method is analogous to the ``post_force()`` method
for dynamics; it is used to alter or constrain forces on each atom,
which affects the minimization procedure.
After all iterations are completed, there is a ``cleanup`` step which
calls the ``post_run()`` method of fixes to perform operations only required
at the end of a calculation (like freeing temporary storage or creating
final outputs).
845
doc/src/Developer_grid.rst Normal file
@ -0,0 +1,845 @@
Use of distributed grids within style classes
---------------------------------------------
.. versionadded:: 22Dec2022
The LAMMPS source code includes two classes which facilitate the
creation and use of distributed grids. These are the Grid2d and
Grid3d classes in the src/grid2d.cpp/.h and src/grid3d.cpp/.h files,
respectively. As the names imply, they are used for 2d or 3d
simulations, as defined by the :doc:`dimension <dimension>` command.
The :doc:`Howto_grid <Howto_grid>` page gives an overview of how
distributed grids are defined from a user perspective, lists LAMMPS
commands which use them, and explains how grid cell data is referenced
from an input script. Please read that page first as it motivates the
coding details discussed here.
This doc page is for users who wish to write new styles (input script
commands) which use distributed grids. There are a variety of
material models and analysis methods which use atoms (or
coarse-grained particles) and grids in tandem.
A *distributed* grid means each processor owns a subset of the grid
cells. In LAMMPS, the subset for each processor will be a sub-block
of grid cells with low and high index bounds in each dimension of the
grid. The union of the sub-blocks across all processors is the global
grid.
More specifically, a grid point is defined for each cell (by default
the center point), and a processor owns a grid cell if its point is
within the processor's spatial subdomain. The union of processor
subdomains is the global simulation box. If a grid point is on the
boundary of two subdomains, the lower processor owns the grid cell. A
processor may also store copies of ghost cells which surround its
owned cells.
----------
Style commands
^^^^^^^^^^^^^^
Style commands which can define and use distributed grids include the
:doc:`compute <compute>`, :doc:`fix <fix>`, :doc:`pair <pair_style>`,
and :doc:`kspace <kspace_style>` styles. If you wish grid cell data
to persist across timesteps, then use a fix. If you wish grid cell
data to be accessible by other commands, then use a fix or compute.
Currently in LAMMPS, the :doc:`pair_style amoeba <pair_amoeba>`,
:doc:`kspace_style pppm <kspace_style>`, and :doc:`kspace_style msm
<kspace_style>` commands use distributed grids but do not require
either of these capabilities; they thus create and use distributed
grids internally. Note that a pair style which needs grid cell data
to persist could be coded to work in tandem with a fix style which
provides that capability.
The *size* of a grid is specified by the number of grid cells in each
dimension of the simulation domain. In any dimension the size can be
any value >= 1. Thus a 10x10x1 grid for a 3d simulation is
effectively a 2d grid, where each grid cell spans the entire
z-dimension. A 1x100x1 grid for a 3d simulation is effectively a 1d
grid, where grid cells are a series of thin xz slabs in the
y-dimension. It is even possible to define a 1x1x1 3d grid, though it
may be inefficient to use it in a computational sense.
Note that the choice of grid size is independent of the number of
processors or their layout in a grid of processor subdomains which
overlays the simulation domain. Depending on the distributed grid
size, a single processor may own many thousands of grid cells or none at all.
A command can define multiple grids, each of a different size. Each
grid is an instantiation of the Grid2d or Grid3d class.
The command also defines what data it will store for each grid it
creates and it allocates the multidimensional array(s) needed to
store the data. No grid cell data is stored within the Grid2d or
Grid3d classes.
If a single value per grid cell is needed, the data array will have
the same dimension as the grid, i.e. a 2d array for a 2d grid,
likewise for 3d. If multiple values per grid cell are needed, the
data array will have one more dimension than the grid, i.e. a 3d array
for a 2d grid, or 4d array for a 3d grid. A command can choose to
define multiple data arrays for each grid it defines.
----------
Grid data allocation and access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The simplest way for a command to allocate and access grid cell data
is to use the *create_offset()* methods provided by the Memory class.
Arguments for these methods can be values returned by the
*setup_grid()* method (described below), which define the extent of
the grid cells (owned+ghost) the processor owns. These 4 methods
allocate memory for 2d (first two) and 3d (second two) grid data. The
two methods that end in "_one" allocate an array which stores a single
value per grid cell. The two that end in "_multi" allocate an array
which stores *Nvalues* per grid cell.
.. code-block:: c++
// single value per cell for a 2d grid = 2d array
memory->create2d_offset(data2d_one, nylo_out, nyhi_out,
nxlo_out, nxhi_out, "data2d_one");
// nvalues per cell for a 2d grid = 3d array
memory->create3d_offset_last(data2d_multi, nylo_out, nyhi_out,
nxlo_out, nxhi_out, nvalues, "data2d_multi");
// single value per cell for a 3d grid = 3d array
memory->create3d_offset(data3d_one, nzlo_out, nzhi_out, nylo_out,
nyhi_out, nxlo_out, nxhi_out, "data3d_one");
// nvalues per cell for a 3d grid = 4d array
memory->create4d_offset_last(data3d_multi, nzlo_out, nzhi_out, nylo_out,
nyhi_out, nxlo_out, nxhi_out, nvalues,
"data3d_multi");
Note that these multidimensional arrays are allocated as contiguous
chunks of memory where the x-index of the grid varies fastest, then y,
and the z-index slowest. For multiple values per grid cell, the
Nvalues are contiguous, so their index varies even faster than the
x-index.
The key point is that the "offset" methods create arrays which are
indexed by the range of indices which are the bounds of the sub-block
of the global grid owned by this processor. This means loops like
these can be written in the caller code to loop over owned grid cells,
where the "i" loop bounds are the range of owned grid cells for the
processor. These are the bounds returned by the *setup_grid()*
method:
.. code-block:: c++
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
data2d_one[iy][ix] = 0.0;
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
for (int m = 0; m < nvalues; m++)
data2d_multi[iy][ix][m] = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
data3d_one[iz][iy][ix] = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
for (int m = 0; m < nvalues; m++)
data3d_multi[iz][iy][ix][m] = 0.0;
Simply replacing the "i" bounds with "o" bounds, also returned by the
*setup_grid()* method, would alter this code to loop over owned+ghost
cells (the entire allocated grid).
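For example, a loop over all allocated cells of the 2d array above might look like this (a small sketch using the "o" bounds returned by *setup_grid()*):
.. code-block:: c++
for (int iy = oylo; iy <= oyhi; iy++)
  for (int ix = oxlo; ix <= oxhi; ix++)
    data2d_one[iy][ix] = 0.0;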
----------
Grid class constructors
^^^^^^^^^^^^^^^^^^^^^^^
The following subsections describe the public methods of the Grid3d
class which a style command can invoke. The Grid2d methods are
similar; simply remove arguments which refer to the z-dimension.
There are 2 constructors which can be used. They differ in the extra
i/o xyz lo/hi arguments:
.. code-block:: c++
Grid3d(class LAMMPS *lmp, MPI_Comm gcomm, int gnx, int gny, int gnz)
Grid3d(class LAMMPS *lmp, MPI_Comm gcomm, int gnx, int gny, int gnz,
int ixlo, int ixhi, int iylo, int iyhi, int izlo, int izhi,
int oxlo, int oxhi, int oylo, int oyhi, int ozlo, int ozhi)
Both constructors take the LAMMPS instance pointer and a communicator
over which the grid will be distributed. Typically this is the
*world* communicator the LAMMPS instance is using. The
:doc:`kspace_style msm <kspace_style>` command creates a series of
grids, each of different size, which are partitioned across different
sub-communicators of processors. Both constructors are also passed
the global grid size: *gnx* by *gny* by *gnz*.
The first constructor is used when the caller wants the Grid class to
partition the global grid across processors; the Grid class defines
which grid cells each processor owns and also which it stores as ghost
cells. A subsequent call to *setup_grid()*, discussed below, returns
this info to the caller.
The second constructor allows the caller to define the extent of owned
and ghost cells, and pass them to the Grid class. The 6 arguments
which start with "i" are the inclusive lower and upper index bounds of
the owned (inner) grid cells this processor owns in each of the 3
dimensions within the global grid. Owned grid cells are indexed from
0 to N-1 in each dimension.
The 6 arguments which start with "o" are the inclusive bounds of the
owned+ghost (outer) grid cells it stores. If the ghost cells are on
the other side of a periodic boundary, then these indices may be < 0
or >= N in any dimension, so that oxlo <= ixlo and oxhi >= ixhi is
always the case.
For example, if Nx = 100, then a processor might pass ixlo=50,
ixhi=60, oxlo=48, oxhi=62 to the Grid class. Or ixlo=0, ixhi=10,
oxlo=-2, oxhi=13. If a processor owns no grid cells in a dimension,
then the ihi value should be specified as one less than the ilo value.
Note that the only reason to use the second constructor is if the
logic for assigning ghost cells is too complex for the Grid class to
compute using the various set() methods described next. Currently
only the kspace_style pppm/electrode and kspace_style msm commands use
the second constructor.
----------
Grid class set methods
^^^^^^^^^^^^^^^^^^^^^^
The following methods affect how the Grid class computes which owned
and ghost cells are assigned to each processor. *Set_shift_grid()* is
the only method which influences owned cell assignment; all the rest
influence ghost cell assignment. These methods are only used with the
first constructor; they are ignored if the second constructor is used.
These methods must be called before the *setup_grid()* method is
invoked, because they influence its operation.
.. code-block:: c++
void set_shift_grid(double shift);
void set_distance(double distance);
void set_stencil_atom(int lo, int hi);
void set_shift_atom(double shift_lo, double shift_hi);
void set_stencil_grid(int lo, int hi);
void set_zfactor(double factor);
Processors own a grid cell if a point within the grid cell is inside
the processor's subdomain. By default this is the center point of the
grid cell. The *set_shift_grid()* method can change this. The *shift*
argument is a value from 0.0 to 1.0 (inclusive) which is the offset of
the point within the grid cell in each dimension. The default is 0.5
for the center of the cell. A value of 0.0 is the lower left corner
point; a value of 1.0 is the upper right corner point. There is
typically no need to change the default as it is optimal for
minimizing the number of ghost cells needed.
If a processor maps its particles to grid cells, it needs to allow for
its particles being outside its subdomain between reneighboring. The
*distance* argument of the *set_distance()* method sets the furthest
distance outside a processor's subdomain which a particle can move.
Typically this is half the neighbor skin distance, assuming
reneighboring is done appropriately. This distance is used in
determining how many ghost cells a processor needs to store to enable
its particles to be mapped to grid cells. The default value is 0.0.
Some commands, like the :doc:`kspace_style pppm <kspace_style>`
command, map values (charge in the case of PPPM) to a stencil of grid
cells beyond the grid cell the particle is in. The stencil extent may
be different in the low and high directions. The *set_stencil_atom()*
method defines the maximum values of those 2 extents, assumed to be
the same in each of the 3 dimensions. Both the lo and hi values are
specified as positive integers. The default values are both 0.
Some commands, like the :doc:`kspace_style pppm <kspace_style>`
command, shift the position of an atom when mapping it to a grid cell,
based on the size of the stencil used to map values to the grid
(charge in the case of PPPM). The lo and hi arguments of the
*set_shift_atom()* method are the minimum shift in the low direction
and the maximum shift in the high direction, assumed to be the same in
each of the 3 dimensions. The shifts should be fractions of a grid
cell size with values between 0.0 and 1.0 inclusive. The default
values are both 0.0. See the src/pppm.cpp file for examples of these
lo/hi values for regular and staggered grids.
Some commands, like the :doc:`fix ttm/grid <fix_ttm>` command, perform
finite difference kinds of operations on the grid, to diffuse electron
heat in the case of the two-temperature model (TTM). This operation
uses ghost grid values beyond the owned grid values the processor
updates. The *set_stencil_grid()* method defines the extent of this
stencil in both directions, assumed to be the same in each of the 3
dimensions. Both the lo and hi values are specified as positive
integers. The default values are both 0.
The kspace_style pppm commands allow a grid to be defined which
overlays a volume which extends beyond the simulation box in the z
dimension. This is for the purpose of modeling a 2d-periodic slab
(non-periodic in z) as if it were a larger 3d periodic system,
extended (with empty space) in the z dimension. The
:doc:`kspace_modify slab <kspace_modify>` command is used to specify
the ratio of the larger volume to the simulation volume; a volume
ratio of ~3 is typical. For this kind of model, the PPPM caller sets
the global grid size *gnz* ~3x larger than it would be otherwise.
This same ratio is passed by the PPPM caller as the *factor* argument
to the Grid class via the *set_zfactor()* method (*set_yfactor()* for
2d grids). The Grid class will then assign ownership of the 1/3 of
grid cells that overlay the simulation box to the processors which
also overlay the simulation box. The remaining 2/3 of the grid cells
are assigned to processors whose subdomains are adjacent to the upper
z boundary of the simulation box.
----------
Grid class setup_grid method
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *setup_grid()* method is called after the first constructor
(above) to partition the grid across processors, which determines
which grid cells each processor owns. It also calculates how many
ghost grid cells in each dimension and each direction each processor
needs to store.
Note that this method is NOT called if the second constructor above is
used. In that case, the caller assigns owned and ghost cells to each
processor.
Also note that this method must be invoked after any ``set_*()`` methods have
been used, since they can influence the assignment of owned and ghost
cells.
.. code-block:: c++
void setup_grid(int &ixlo, int &ixhi, int &iylo, int &iyhi, int &izlo, int &izhi,
int &oxlo, int &oxhi, int &oylo, int &oyhi, int &ozlo, int &ozhi)
The 6 return arguments which start with "i" are the inclusive lower
and upper index bounds of the owned (inner) grid cells this processor
owns in each of the 3 dimensions within the global grid. Owned grid
cells are indexed from 0 to N-1 in each dimension.
The 6 return arguments which start with "o" are the inclusive bounds of
the owned+ghost cells it stores. If the ghost cells are on the other
side of a periodic boundary, then these indices may be < 0 or >= N in
any dimension, so that oxlo <= ixlo and oxhi >= ixhi is always the
case.
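Putting these pieces together, a minimal sketch of how a command might create and allocate a 3d grid with the first constructor follows; the grid size, member names, and the set() call are illustrative only:
.. code-block:: c++
// create a global 20x20x20 grid distributed over all processors
grid3d = new Grid3d(lmp, world, 20, 20, 20);
// allow for particles moving up to half the neighbor skin distance
grid3d->set_distance(0.5 * neighbor->skin);
// partition the grid; get owned ("i") and owned+ghost ("o") index bounds
grid3d->setup_grid(ixlo, ixhi, iylo, iyhi, izlo, izhi,
                   oxlo, oxhi, oylo, oyhi, ozlo, ozhi);
// allocate one double per owned+ghost grid cell
memory->create3d_offset(data3d_one, ozlo, ozhi, oylo, oyhi,
                        oxlo, oxhi, "fix:data3d_one");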
----------
More grid class set methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following 2 methods can be used to override settings made by the
constructors above. If used, they must be called before the
*setup_comm()* method is invoked, since it uses the settings that
these methods override. In LAMMPS these methods are called by the
:doc:`kspace_style msm <kspace_style>` command for the grids it
instantiates using the 2nd constructor above.
.. code-block:: c++
void set_proc_neighs(int pxlo, int pxhi, int pylo, int pyhi, int pzlo, int pzhi)
void set_caller_grid(int fxlo, int fxhi, int fylo, int fyhi, int fzlo, int fzhi)
The *set_proc_neighs()* method sets the processor IDs of the 6
neighboring processors for each processor. Normally these would match
the processor grid neighbors which LAMMPS creates to overlay the
simulation box (the default). However, MSM excludes non-participating
processors from coarse grid communication when fewer processors are
used. This method allows MSM to override the default values.
The *set_caller_grid()* method specifies the size of the data arrays the
caller allocates. Normally these would match the extent of the ghost
grid cells (the default). However, the MSM caller allocates a larger
data array (more ghost cells) for its finest-level grid, for use in
other operations besides owned/ghost cell communication. This method
allows MSM to override the default values.
----------
Grid class get methods
^^^^^^^^^^^^^^^^^^^^^^
The following methods allow the caller to query the settings for a
specific grid, whether it created the grid or another command created
it.
.. code-block:: c++
void get_size(int &nxgrid, int &nygrid, int &nzgrid);
void get_bounds_owned(int &xlo, int &xhi, int &ylo, int &yhi, int &zlo, int &zhi)
void get_bounds_ghost(int &xlo, int &xhi, int &ylo, int &yhi, int &zlo, int &zhi)
The *get_size()* method returns the size of the global grid in each dimension.
The *get_bounds_owned()* method returns the inclusive index bounds of
the grid cells this processor owns. The values range from 0 to N-1 in
each dimension. These values are the same as the "i" values returned
by *setup_grid()*.
The *get_bounds_ghost()* method returns the inclusive index bounds of
the owned+ghost grid cells this processor stores. The owned cell
indices range from 0 to N-1, so these indices may be less than 0 or
greater than or equal to N in each dimension. These values are the
same as the "o" values returned by *setup_grid()*.
----------
Grid class owned/ghost communication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If needed by the command, the following methods set up and perform
communication of grid data to/from neighboring processors. The
*forward_comm()* method sends owned grid cell data to the
corresponding ghost grid cells on other processors. The
*reverse_comm()* method sends ghost grid cell data to the
corresponding owned grid cells on another processor. The caller can
choose to sum ghost grid cell data to the owned grid cell or simply
copy it.
.. code-block:: c++
void setup_comm(int &nbuf1, int &nbuf2)
void forward_comm(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype);
void reverse_comm(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype)
int ghost_adjacent();
The *setup_comm()* method must be called one time before performing
*forward* or *reverse* communication (multiple times if needed). It
returns two integers, which should be used to allocate two buffers.
The *nbuf1* and *nbuf2* values are the number of grid cells whose data
will be stored in two buffers by the Grid class when *forward* or
*reverse* communication is performed. The caller should thus allocate
them to a size large enough to hold all the data used in any single
forward or reverse communication operation it performs. Note that the
caller may allocate and communicate multiple data arrays for a grid it
instantiates. This size includes the bytes needed for the data type
of the grid data it stores, e.g. double precision values.
The *forward_comm()* and *reverse_comm()* methods send grid cell data
from owned to ghost cells, or ghost to owned cells, respectively, as
described above. The *caller* argument should be one of these values
-- Grid3d::COMPUTE, Grid3d::FIX, Grid3d::KSPACE, Grid3d::PAIR --
depending on the style of the caller class. The *ptr* argument is the
"this" pointer to the caller class. These 2 arguments are used to
call back to pack()/unpack() functions in the caller class, as
explained below.
The *which* argument is a flag the caller can set which is passed to
the caller's pack()/unpack() methods. This allows a single callback
method to pack/unpack data for several different flavors of
forward/reverse communication, e.g. operating on different grids or
grid data.
The *nper* argument is the number of values per grid cell to be
communicated. The *nbyte* argument is the number of bytes per value,
e.g. 8 for double-precision values. The *buf1* and *buf2* arguments
are the two allocated buffers described above. So long as they are
allocated for the maximum size communication, they can be re-used for
any *forward_comm()/reverse_comm()* call. The *datatype* argument is
the MPI_Datatype setting, which should match the buffer allocation and
the *nbyte* argument. E.g. MPI_DOUBLE for buffers storing double
precision values.
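A sketch of the typical call sequence in a caller with one double per grid cell (the buffer and member names are illustrative):
.. code-block:: c++
// one-time setup: sizes of the two communication buffers (in grid cells)
int nbuf1, nbuf2;
grid3d->setup_comm(nbuf1, nbuf2);
memory->create(buf1, nbuf1, "fix:buf1");   // buf1, buf2 are double * members
memory->create(buf2, nbuf2, "fix:buf2");
// whenever ghost cells need updated copies of owned-cell values
grid3d->forward_comm(Grid3d::FIX, this, 0, 1, sizeof(double),
                     buf1, buf2, MPI_DOUBLE);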
To use the *forward_comm()* method, the caller must provide two
callback functions; likewise for use of the *reverse_comm()* method.
These are the 4 functions; their arguments are all the same:
.. code-block:: c++
void pack_forward_grid(int which, void *vbuf, int nlist, int *list);
void unpack_forward_grid(int which, void *vbuf, int nlist, int *list);
void pack_reverse_grid(int which, void *vbuf, int nlist, int *list);
void unpack_reverse_grid(int which, void *vbuf, int nlist, int *list);
The *which* argument is set to the *which* value of the
*forward_comm()* or *reverse_comm()* calls. It allows the pack/unpack
function to select what data values to pack/unpack. *Vbuf* is the
buffer to pack/unpack the data to/from. It is a void pointer so that
the caller can cast it to whatever data type it chooses, e.g. double
precision values. *Nlist* is the number of grid cells to pack/unpack
and *list* is a vector (nlist in length) of offsets to where the data
for each grid cell resides in the caller's data arrays, which is best
illustrated with an example from the src/EXTRA-FIX/fix_ttm_grid.cpp
class which stores the scalar electron temperature for a 3d system in a
3d grid (one value per grid cell):
.. code-block:: c++
void FixTTMGrid::pack_forward_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
auto buf = (double *) vbuf;
double *src = &T_electron[nzlo_out][nylo_out][nxlo_out];
for (int i = 0; i < nlist; i++) buf[i] = src[list[i]];
}
In this case, the *which* argument is not used, *vbuf* points to a
buffer of doubles, and the electron temperature is stored by the
FixTTMGrid class in a 3d array of owned+ghost cells called T_electron.
That array is allocated by the *memory->create3d_offset()* method
described above so that the first grid cell it stores is indexed as
T_electron[nzlo_out][nylo_out][nxlo_out]. The *nlist* values in
*list* are integer offsets from that first grid cell. Setting *src*
to the address of the first cell allows those offsets to be used to
access the temperatures to pack into the buffer.
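The matching unpack function copies the received values back into the same array; a sketch of how it might look (modeled on the FixTTMGrid code, but not copied verbatim):
.. code-block:: c++
void FixTTMGrid::unpack_forward_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *dest = &T_electron[nzlo_out][nylo_out][nxlo_out];
  for (int i = 0; i < nlist; i++) dest[list[i]] = buf[i];
}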
Here is a similar portion of code from the src/fix_ave_grid.cpp class
which can store two kinds of data, a scalar count of atoms in a grid
cell, and one or more grid-cell-averaged atom properties. The code
from its *unpack_reverse_grid()* function for 2d grids and multiple
per-atom properties per grid cell (*nvalues*) is shown here:
.. code-block:: c++
void FixAveGrid::unpack_reverse_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
auto buf = (double *) vbuf;
int i,j,m;
double *count,*data,*values;
count = &count2d[nylo_out][nxlo_out];
data = &array2d[nylo_out][nxlo_out][0];
m = 0;
for (i = 0; i < nlist; i++) {
count[list[i]] += buf[m++];
values = &data[nvalues*list[i]];
for (j = 0; j < nvalues; j++)
values[j] += buf[m++];
}
}
Both the count and the multiple values per grid cell are communicated
in *vbuf*. Note that *data* is now a pointer to the first value in
the first grid cell, and *values* points to the first value stored in
*data* for grid cell *list[i]*, located at the offset calculated by multiplying
*nvalues* by *list[i]*. Finally, because this is reverse
communication, the communicated buffer values are summed into the caller's
values.
The *ghost_adjacent()* method returns a 1 if every processor can
perform the necessary owned/ghost communication with only its nearest
neighbor processors (4 in 2d, 6 in 3d). It returns a 0 if any
processor's ghost cells extend further than nearest neighbor
processors.
This can be checked by callers who have the option to change the
global grid size to ensure more efficient nearest-neighbor-only
communication if they wish. In this case, they instantiate a grid of
a given size (resolution), then invoke *setup_comm()* followed by
*ghost_adjacent()*. If the ghost cells are not adjacent, they destroy
the grid instance and start over with a higher-resolution grid.
Several of the :doc:`kspace_style pppm <kspace_style>` command
variants have this option.
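A sketch of this pattern (the grid sizes and the adjustment strategy are illustrative; callers choose their own):
.. code-block:: c++
// refine the grid until owned/ghost communication is nearest-neighbor only
int nbuf1, nbuf2;
while (1) {
  grid3d = new Grid3d(lmp, world, nx, ny, nz);
  grid3d->setup_grid(ixlo, ixhi, iylo, iyhi, izlo, izhi,
                     oxlo, oxhi, oylo, oyhi, ozlo, ozhi);
  grid3d->setup_comm(nbuf1, nbuf2);
  if (grid3d->ghost_adjacent()) break;
  delete grid3d;
  nx *= 2; ny *= 2; nz *= 2;   // try a finer grid
}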
----------
Grid class remap methods for load balancing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following methods are used when a load-balancing operation,
triggered by the :doc:`balance <balance>` or :doc:`fix balance
<fix_balance>` commands, changes the partitioning of the simulation
domain into processor subdomains.
In order to work with load-balancing, any style command (compute, fix,
pair, or kspace style) which allocates a grid and stores per-grid data
should define a *reset_grid()* method; it takes no arguments. It will
be called by the two balance commands after they have reset processor
subdomains and migrated atoms (particles) to new owning processors.
The *reset_grid()* method will typically perform some or all of the
following operations. See the src/fix_ave_grid.cpp and
src/EXTRA-FIX/fix_ttm_grid.cpp files for examples of *reset_grid()*
methods, as well as the *pack_remap_grid()* and *unpack_remap_grid()*
functions.
First, the *reset_grid()* method can instantiate new grid(s) of the
same global size, then call *setup_grid()* to partition them via the
new processor subdomains. At this point, it can invoke the
*identical()* method which compares the owned and ghost grid cell
index bounds between two grids, the old grid passed as a pointer
argument, and the new grid whose *identical()* method is being called.
It returns 1 if the indices match on all processors, otherwise 0. If
they all match, then the new grids can be deleted; the command can
continue to use the old grids.
If not, then the command should allocate new grid data array(s) which
depend on the new partitioning. If the command does not need to
persist its grid data from the old partitioning to the new one, then
the command can simply delete the old data array(s) and grid
instance(s). It can then return.
If the grid data does need to persist, then the data for each grid
needs to be "remapped" from the old grid partitioning to the new grid
partitioning. The *setup_remap()* and *remap()* methods are used for
that purpose.
.. code-block:: c++
int identical(Grid3d *old);
void setup_remap(Grid3d *old, int &nremap_buf1, int &nremap_buf2)
void remap(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype)
The arguments to these methods are identical to those for
the *setup_comm()* and *forward_comm()* or *reverse_comm()* methods.
However, the returned *nremap_buf1* and *nremap_buf2* values will be
different than the *nbuf1* and *nbuf2* values. They should be used to
allocate two different remap buffers, separate from the owned/ghost
communication buffers.
To use the *remap()* method, the caller must provide two
callback functions:
.. code-block:: c++
void pack_remap_grid(int which, void *vbuf, int nlist, int *list);
void unpack_remap_grid(int which, void *vbuf, int nlist, int *list);
Their arguments are identical to those for the *pack_forward_grid()*
and *unpack_forward_grid()* callback functions (or the reverse
variants) discussed above. Normally, both these methods pack/unpack
all the data arrays for a given grid. The *which* argument of the
*remap()* method sets the *which* value for the pack/unpack functions.
If the command instantiates multiple grids (of different sizes), it
can be used within the pack/unpack methods to select which grid's data
is being remapped.
Note that the *pack_remap_grid()* function must copy values from the
OLD grid data arrays into the *vbuf* buffer. The *unpack_remap_grid()*
function must copy values from the *vbuf* buffer into the NEW grid
data arrays.
After the remap operation for grid cell data has been performed, the
*reset_grid()* method can deallocate the two remap buffers it created,
and can then exit.
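In outline, a *reset_grid()* method that needs to preserve its data might be structured like this sketch (the class and member names are hypothetical, and the remap details are omitted):
.. code-block:: c++
void FixMyGrid::reset_grid()
{
  // create a new grid of the same global size for the new decomposition
  Grid3d *grid_new = new Grid3d(lmp, world, nx, ny, nz);
  grid_new->setup_grid(ixlo, ixhi, iylo, iyhi, izlo, izhi,
                       oxlo, oxhi, oylo, oyhi, ozlo, ozhi);
  // if the partitioning is unchanged, keep using the old grid
  if (grid_new->identical(grid3d)) {
    delete grid_new;
    return;
  }
  // allocate new data array(s), call setup_remap() and remap() to move
  // the old data over, then free the old data and the old grid instance
  delete grid3d;
  grid3d = grid_new;
}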
----------
Grid class I/O methods
^^^^^^^^^^^^^^^^^^^^^^
There are two I/O methods in the Grid classes which can be used to
read and write grid cell data to files. The caller can decide on the
precise format of each file, e.g. whether header lines are prepended
or comment lines are allowed. Fundamentally, the file should contain
one line per grid cell for the entire global grid. Each line should
contain identifying info as to which grid cell it is, e.g. a unique
grid cell ID or the ix,iy,iz indices of the cell within a 3d grid.
The line should also contain one or more data values which are stored
within the grid data arrays created by the command.
For grid cell IDs, the LAMMPS convention is that the IDs run from 1 to
N, where N = Nx * Ny for 2d grids and N = Nx * Ny * Nz for 3d grids.
The x-index of the grid cell varies fastest, then y, and the z-index
varies slowest. So for a 10x10x10 grid the cell IDs from 901-1000
would be in the top xy layer of the z dimension.
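In code, the 1-based ID of the grid cell with 0-based indices ix,iy,iz in an nx by ny by nz grid thus follows from (a small illustrative computation, not an existing LAMMPS function):
.. code-block:: c++
// x-index varies fastest, z-index slowest
bigint cellid = (bigint) iz*ny*nx + (bigint) iy*nx + ix + 1;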
The *read_file()* method does something simple. It reads a chunk of
consecutive lines from the file and passes them back to the caller to
process. The caller provides an *unpack_read_grid()* function for this
purpose. The function checks the grid cell ID or indices and only
stores grid cell data for the grid cells it owns.
The *write_file()* method does something slightly more complex. Each
processor packs the data for its owned grid cells into a buffer. The
caller provides a *pack_write_grid()* function for this purpose. The
*write_file()* method then loops over all processors and each sends
its buffer one at a time to processor 0, along with the 3d (or 2d)
index bounds of its grid cell data within the global grid. Processor
0 calls back to the *unpack_write_grid()* function provided by the
caller with the buffer. The function writes one line per grid cell to
the file.
See the src/EXTRA-FIX/fix_ttm_grid.cpp file for examples of how both
these methods are used to read/write electron temperature values
from/to a file, as well as for implementations of the pack/unpack
functions described below.
Here are the details of the two I/O methods and the 3 callback
functions. See the src/fix_ave_grid.cpp file for examples of all of
them.
.. code-block:: c++
void read_file(int caller, void *ptr, FILE *fp, int nchunk, int maxline)
void write_file(int caller, void *ptr, int which,
int nper, int nbyte, MPI_Datatype datatype)
The *caller* argument in both methods should be one of these values --
Grid3d::COMPUTE, Grid3d::FIX, Grid3d::KSPACE, Grid3d::PAIR --
depending on the style of the caller class. The *ptr* argument in
both methods is the "this" pointer to the caller class. These 2
arguments are used to call back to pack()/unpack() functions in the
caller class, as explained below.
For the *read_file()* method, the *fp* argument is a file pointer to
the file to be read from, opened on processor 0 by the caller.
*Nchunk* is the number of lines to read per chunk, and *maxline* is
the maximum number of characters per line. The Grid class will
allocate a buffer for storing chunks of lines based on these values.
For the *write_file()* method, the *which* argument is a flag the
caller can set which is passed back to the caller's pack()/unpack()
methods. If the command instantiates multiple grids (of different
sizes), this flag can be used within the pack/unpack methods to select
which grid's data is being written out (presumably to different
files). The *nper* argument is the number of values per grid cell to
be written out. The *nbyte* argument is the number of bytes per
value, e.g. 8 for double-precision values. The *datatype* argument is
the MPI_Datatype setting, which should match the *nbyte* argument.
E.g. MPI_DOUBLE for double precision values.
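As a minimal sketch, assuming a fix which owns a Grid3d instance
*grid3d* and stores one double-precision value per grid cell, the two
methods might be invoked as follows; the chunk size, line length, and
*which* value are arbitrary choices for this example:

.. code-block:: c++

   // read grid values from the file opened by the caller on proc 0,
   // processing up to 16384 lines of at most 256 characters per chunk
   grid3d->read_file(Grid3d::FIX, this, fp, 16384, 256);

   // write one double per grid cell; the which value 0 selects the
   // (single) grid of this fix in the pack/unpack callbacks
   grid3d->write_file(Grid3d::FIX, this, 0, 1, sizeof(double), MPI_DOUBLE);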
To use the *read_file()* method, the caller must provide one callback
function. To use the *write_file()* method, it must provide two
callback functions:
.. code-block:: c++

   int unpack_read_grid(int nlines, char *buffer)
   void pack_write_grid(int which, void *vbuf)
   void unpack_write_grid(int which, void *vbuf, int *bounds)
For *unpack_read_grid()* the *nlines* argument is the number of lines
of character data read from the file and contained in *buffer*. The
lines each include a newline character at the end. When the function
processes the lines, it may choose to skip some of them (header or
comment lines). It returns an integer count of the number of grid
cell lines it processed. This enables the Grid class *read_file()*
method to know when it has read the correct number of lines.
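Here is a minimal sketch of such a callback for a hypothetical
FixMyGrid class which stores one double per cell of a 3d grid. The
member names used here (*nx*, *ny*, the owned bounds *nxlo_in* etc.,
and the *grid_data* array) are assumptions of this sketch. Each data
line is expected to hold a 1-based grid cell ID followed by one value;
lines not starting with a digit are skipped as headers or comments:

.. code-block:: c++

   int FixMyGrid::unpack_read_grid(int nlines, char *buffer)
   {
     int nread = 0;
     char *line = buffer;

     for (int i = 0; i < nlines; i++) {
       char *next = strchr(line, '\n');
       *next = '\0';

       if (isdigit(line[0])) {          // skip header or comment lines
         int id;
         double value;
         sscanf(line, "%d %lg", &id, &value);

         // convert the 1-based ID to ix,iy,iz (x fastest, z slowest) and
         // store the value only if this processor owns the grid cell
         int ix = (id - 1) % nx;
         int iy = ((id - 1) / nx) % ny;
         int iz = (id - 1) / (nx * ny);
         if (ix >= nxlo_in && ix <= nxhi_in &&
             iy >= nylo_in && iy <= nyhi_in &&
             iz >= nzlo_in && iz <= nzhi_in)
           grid_data[iz][iy][ix] = value;
         nread++;
       }

       line = next + 1;
     }
     return nread;
   }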
For *pack_write_grid()* and *unpack_write_grid()*, the *vbuf* argument
is the buffer to pack/unpack data to/from. It is a void pointer so
that the caller can cast it to whatever data type it chooses,
e.g. double-precision values. The *which* argument is set to the
*which* value of the *write_file()* method. It allows the caller to
choose which grid data to operate on.
For *unpack_write_grid()*, the *bounds* argument is a vector of 4 or 6
integer grid indices (4 for 2d, 6 for 3d). They are the
xlo,xhi,ylo,yhi,zlo,zhi index bounds of the portion of the global grid
for which *vbuf* holds owned grid cell data values. The caller
should loop over the values in *vbuf* with a double loop (2d) or
triple loop (3d), similar to the code snippets listed above. The
x-index varies fastest, then y, and the z-index slowest. If there are
multiple values per grid cell, the index for those values varies
fastest of all. The caller can add the x,y,z indices of the grid cell
(or the corresponding grid cell ID) to the data value(s) written as
one line to the output file.
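Here is a minimal sketch of the two write callbacks for the same
hypothetical FixMyGrid class (one double per cell of a 3d grid).
Again, the member names are assumptions of this sketch, including the
*fpout* file pointer which the fix opened on processor 0:

.. code-block:: c++

   void FixMyGrid::pack_write_grid(int which, void *vbuf)
   {
     double *buf = (double *) vbuf;

     // pack owned cells with x varying fastest, z slowest
     int m = 0;
     for (int iz = nzlo_in; iz <= nzhi_in; iz++)
       for (int iy = nylo_in; iy <= nyhi_in; iy++)
         for (int ix = nxlo_in; ix <= nxhi_in; ix++)
           buf[m++] = grid_data[iz][iy][ix];
   }

   void FixMyGrid::unpack_write_grid(int which, void *vbuf, int *bounds)
   {
     double *buf = (double *) vbuf;
     int xlo = bounds[0], xhi = bounds[1];
     int ylo = bounds[2], yhi = bounds[3];
     int zlo = bounds[4], zhi = bounds[5];

     // write one line per grid cell: 1-based cell ID, then the value
     int m = 0;
     for (int iz = zlo; iz <= zhi; iz++)
       for (int iy = ylo; iy <= yhi; iy++)
         for (int ix = xlo; ix <= xhi; ix++) {
           int id = iz*ny*nx + iy*nx + ix + 1;
           fprintf(fpout, "%d %g\n", id, buf[m++]);
         }
   }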
----------
Style class grid access methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A style command can make its grid cell data accessible to other
commands, for example to :doc:`fix ave/grid <fix_ave_grid>`,
:doc:`dump grid <dump>`, or :doc:`dump grid/vtk <dump>`. Those
commands access the grid cell data by using a *grid reference* in
their input script syntax, as described on the :doc:`Howto_grid
<Howto_grid>` doc page. Grid references look like this:
* c_ID:gname:dname
* c_ID:gname:dname[I]
* f_ID:gname:dname
* f_ID:gname:dname[I]
Each grid a command instantiates has a unique *gname*, defined by the
command. Likewise each grid cell data structure (scalar or vector)
associated with a grid has a unique *dname*, also defined by the
command.
To provide access to its grid cell data, a style command needs to
implement the following four methods:
.. code-block:: c++

   int get_grid_by_name(const std::string &name, int &dim);
   void *get_grid_by_index(int index);
   int get_griddata_by_name(int igrid, const std::string &name, int &ncol);
   void *get_griddata_by_index(int index);
Currently, only computes and fixes can implement these methods. A
compute or fix which does so should also set the variable
*pergrid_flag* to 1. See any of the compute or fix commands which set
"pergrid_flag = 1" for examples of how these four methods can be
implemented.
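For a minimal illustration, a hypothetical FixMyGrid class with a
single 3d grid named "demo" and a single scalar-per-cell data set named
"data" might implement the four methods as follows; the *grid3d* and
*grid_data* members are assumptions of this sketch, and the individual
methods are described in detail below:

.. code-block:: c++

   int FixMyGrid::get_grid_by_name(const std::string &name, int &dim)
   {
     if (name == "demo") { dim = 3; return 0; }
     return -1;
   }

   void *FixMyGrid::get_grid_by_index(int index)
   {
     if (index == 0) return grid3d;     // pointer to the Grid3d instance
     return nullptr;
   }

   int FixMyGrid::get_griddata_by_name(int igrid, const std::string &name, int &ncol)
   {
     if (igrid == 0 && name == "data") { ncol = 0; return 0; }   // scalar per cell
     return -1;
   }

   void *FixMyGrid::get_griddata_by_index(int index)
   {
     if (index == 0) return grid_data;  // 3d array, one double per cell
     return nullptr;
   }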
The *get_grid_by_name()* method takes a grid name as input and returns
two values. The *dim* argument is returned as 2 or 3 for the
dimensionality of the grid. The function return is a grid index from
0 to G-1 where *G* is the number of grids the command instantiates. A
value of -1 is returned if the grid name is not recognized.
The *get_grid_by_index()* method is called after the
*get_grid_by_name()* method, using the grid index it returned as its
argument. This method returns a pointer to the Grid2d or Grid3d class
instance. The caller can use it to query grid attributes, such as the
global size of the grid, to ensure it is of the expected size.
The *get_griddata_by_name()* method takes a grid index *igrid* and a
data name as input. It returns two values. The *ncol* argument is
returned as 0 if the grid data is a single value (scalar) per grid
cell, or an integer M > 0 if there are M values (vector) per grid
cell. Note that even if M = 1, it is still a 1-length vector, not a
scalar. The function return is a data index from 0 to D-1 where *D*
is the number of data sets associated with that grid by the command.
A value of -1 is returned if the data name is not recognized.
The *get_griddata_by_index()* method is called after the
*get_griddata_by_name()* method, using the data index it returned as
its argument. This method will return a pointer to the
multidimensional array which stores the requested data.
As in the discussion above of the Memory class *create_offset()*
methods, the dimensionality of the array associated with the returned
pointer depends on whether it is a 2d or 3d grid and whether a single
value or multiple values are stored for each grid cell:
* single value per cell for a 2d grid = 2d array pointer
* multiple values per cell for a 2d grid = 3d array pointer
* single value per cell for a 3d grid = 3d array pointer
* multiple values per cell for a 3d grid = 4d array pointer
The caller will typically access the data by casting the void pointer
to the corresponding array pointer and using nested loops in x,y,z
between owned or ghost index bounds returned by the
*get_bounds_owned()* or *get_bounds_ghost()* methods to index into the
array. Example code snippets with this logic were listed above; a
brief caller-side sketch is also shown below.
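The following sketch shows how a caller might access a scalar-per-cell
3d data set through these methods and sum its owned values. The names
*fix*, *gname*, and *dname* are assumed to be known to the caller, the
*get_bounds_owned()* call is assumed to return the six owned index
bounds by reference, and real code should check for -1 return values:

.. code-block:: c++

   int dim, ncol;
   int igrid = fix->get_grid_by_name(gname, dim);               // expect dim == 3
   Grid3d *grid = (Grid3d *) fix->get_grid_by_index(igrid);
   int idata = fix->get_griddata_by_name(igrid, dname, ncol);   // expect ncol == 0
   double ***data = (double ***) fix->get_griddata_by_index(idata);

   int nxlo, nxhi, nylo, nyhi, nzlo, nzhi;
   grid->get_bounds_owned(nxlo, nxhi, nylo, nyhi, nzlo, nzhi);

   double sum = 0.0;
   for (int iz = nzlo; iz <= nzhi; iz++)
     for (int iy = nylo; iy <= nyhi; iy++)
       for (int ix = nxlo; ix <= nxhi; ix++)
         sum += data[iz][iy][ix];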
----------
Final notes
^^^^^^^^^^^
Finally, here are some additional issues to pay attention to when
writing any style command which uses distributed grids via the Grid2d
or Grid3d classes.
The command destructor should delete all instances of the Grid class,
any buffers it allocated for forward/reverse or remap communication,
and any data arrays it allocated to store grid cell data.
If a command is intended to work for either 2d or 3d simulations, then
it should have logic to instantiate either 2d or 3d grids and their
associated data arrays, depending on the dimension of the simulation
box. The :doc:`fix ave/grid <fix_ave_grid>` command is an example of
such a command.
When a command maps its particles to the grid and updates grid cell
values, it should check that it is not updating or accessing a grid
cell value outside the range of its owned+ghost cells, and generate an
error message if that is the case. This could happen, for example, if
a particle has moved further than half the neighbor skin distance,
because the neighbor list update criteria are not adequate to prevent
it from happening. See the src/KSPACE/pppm.cpp file and its
*particle_map()* method for an example of this kind of error check.
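A minimal version of such a check, assuming *ix*, *iy*, *iz* are the
grid cell indices computed for a particle and the *nxlo_out* etc.
members hold the owned+ghost bounds, might look like this:

.. code-block:: c++

   // flag particles mapped outside this proc's owned+ghost grid cells
   if (ix < nxlo_out || ix > nxhi_out || iy < nylo_out || iy > nyhi_out ||
       iz < nzlo_out || iz > nzhi_out)
     error->one(FLERR, "Particle mapped to grid cell outside owned+ghost range");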
View File
@ -11,6 +11,7 @@ Available topics are:
- `Reading and parsing of text and text files`_ - `Reading and parsing of text and text files`_
- `Requesting and accessing neighbor lists`_ - `Requesting and accessing neighbor lists`_
- `Choosing between a custom atom style, fix property/atom, and fix STORE/ATOM`_
- `Fix contributions to instantaneous energy, virial, and cumulative energy`_ - `Fix contributions to instantaneous energy, virial, and cumulative energy`_
- `KSpace PPPM FFT grids`_ - `KSpace PPPM FFT grids`_
@ -73,6 +74,8 @@ when converting "12.5", while the ValueTokenizer class will throw an
:cpp:func:`ValueTokenizer::next_int() :cpp:func:`ValueTokenizer::next_int()
<LAMMPS_NS::ValueTokenizer::next_int>` is called on the same string. <LAMMPS_NS::ValueTokenizer::next_int>` is called on the same string.
.. _request-neighbor-list:
Requesting and accessing neighbor lists Requesting and accessing neighbor lists
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -102,10 +105,10 @@ build is then :doc:`processed in parallel <Developer_par_neigh>`.
The most commonly required neighbor list is a so-called "half" neighbor The most commonly required neighbor list is a so-called "half" neighbor
list, where each pair of atoms is listed only once (except when the list, where each pair of atoms is listed only once (except when the
:doc:`newton command setting <newton>` for pair is off; in that case :doc:`newton command setting <newton>` for pair is off; in that case
pairs straddling sub-domains or periodic boundaries will be listed twice). pairs straddling subdomains or periodic boundaries will be listed twice).
Thus these are the default settings when a neighbor list request is created in: Thus these are the default settings when a neighbor list request is created in:
.. code-block:: C++ .. code-block:: c++
void Pair::init_style() void Pair::init_style()
{ {
@ -129,7 +132,7 @@ neighbor list request to the specific needs of a style an additional
request flag is needed. The :doc:`tersoff <pair_tersoff>` pair style, request flag is needed. The :doc:`tersoff <pair_tersoff>` pair style,
for example, needs a "full" neighbor list: for example, needs a "full" neighbor list:
.. code-block:: C++ .. code-block:: c++
void PairTersoff::init_style() void PairTersoff::init_style()
{ {
@ -141,7 +144,7 @@ When a pair style supports r-RESPA time integration with different cutoff region
the request flag may depend on the corresponding r-RESPA settings. Here an example the request flag may depend on the corresponding r-RESPA settings. Here an example
from pair style lj/cut: from pair style lj/cut:
.. code-block:: C++ .. code-block:: c++
void PairLJCut::init_style() void PairLJCut::init_style()
{ {
@ -160,7 +163,7 @@ Granular pair styles need neighbor lists based on particle sizes and not cutoff
and also may require to have the list of previous neighbors available ("history"). and also may require to have the list of previous neighbors available ("history").
For example with: For example with:
.. code-block:: C++ .. code-block:: c++
if (use_history) neighbor->add_request(this, NeighConst::REQ_SIZE | NeighConst::REQ_HISTORY); if (use_history) neighbor->add_request(this, NeighConst::REQ_SIZE | NeighConst::REQ_HISTORY);
else neighbor->add_request(this, NeighConst::REQ_SIZE); else neighbor->add_request(this, NeighConst::REQ_SIZE);
@ -170,7 +173,7 @@ settings each request can set an id which is then used in the corresponding
``init_list()`` function to assign it to the suitable pointer variable. This is ``init_list()`` function to assign it to the suitable pointer variable. This is
done for example by the :doc:`pair style meam <pair_meam>`: done for example by the :doc:`pair style meam <pair_meam>`:
.. code-block:: C++ .. code-block:: c++
void PairMEAM::init_style() void PairMEAM::init_style()
{ {
@ -189,7 +192,7 @@ just once) and this can also be indicated by a flag. As an example here
is the request from the ``FixPeriNeigh`` class which is created is the request from the ``FixPeriNeigh`` class which is created
internally by :doc:`Peridynamics pair styles <pair_peri>`: internally by :doc:`Peridynamics pair styles <pair_peri>`:
.. code-block:: C++ .. code-block:: c++
neighbor->add_request(this, NeighConst::REQ_FULL | NeighConst::REQ_OCCASIONAL); neighbor->add_request(this, NeighConst::REQ_FULL | NeighConst::REQ_OCCASIONAL);
@ -198,7 +201,7 @@ than what is usually inferred from the pair style settings (largest cutoff of
all pair styles plus neighbor list skin). The following is used in the all pair styles plus neighbor list skin). The following is used in the
:doc:`compute rdf <compute_rdf>` command implementation: :doc:`compute rdf <compute_rdf>` command implementation:
.. code-block:: C++ .. code-block:: c++
if (cutflag) if (cutflag)
neighbor->add_request(this, NeighConst::REQ_OCCASIONAL)->set_cutoff(mycutneigh); neighbor->add_request(this, NeighConst::REQ_OCCASIONAL)->set_cutoff(mycutneigh);
@ -212,10 +215,34 @@ for printing the neighbor list summary the name of the requesting command
should be set. Below is the request from the :doc:`delete atoms <delete_atoms>` should be set. Below is the request from the :doc:`delete atoms <delete_atoms>`
command: command:
.. code-block:: C++ .. code-block:: c++
neighbor->add_request(this, "delete_atoms", NeighConst::REQ_FULL); neighbor->add_request(this, "delete_atoms", NeighConst::REQ_FULL);
Choosing between a custom atom style, fix property/atom, and fix STORE/ATOM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There are multiple ways to manage per-atom data within LAMMPS. Often
the per-atom storage is only used locally and managed by the class that
uses it. If the data has to persist between multiple time steps and
migrate with atoms when they move from sub-domain to sub-domain or
across periodic boundaries, then using a custom atom style, or :doc:`fix
property/atom <fix_property_atom>`, or the internal fix STORE/ATOM are
possible options.
- Using the atom style is usually the most programming effort and mostly
needed when the per-atom data is an integral part of the model like a
per-atom charge or diameter and thus should be part of the Atoms
section of a :doc:`data file <read_data>`.
- Fix property/atom is useful if the data is optional or should be
entered by the user, or accessed as a (named) custom property. In this
case the fix should be entered as part of the input (and not
internally) which allows to enter and store its content with data files.
- Fix STORE/ATOM should be used when the data should be accessed internally
only and thus the fix can be created internally.
Fix contributions to instantaneous energy, virial, and cumulative energy Fix contributions to instantaneous energy, virial, and cumulative energy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -361,7 +388,7 @@ allocated as a 1d vector or 3d array. Either way, the ordering of
values within contiguous memory x fastest, then y, z slowest. values within contiguous memory x fastest, then y, z slowest.
For the ``3d decomposition`` of the grid, the global grid is For the ``3d decomposition`` of the grid, the global grid is
partitioned into bricks that correspond to the sub-domains of the partitioned into bricks that correspond to the subdomains of the
simulation box that each processor owns. Often, this is a regular 3d simulation box that each processor owns. Often, this is a regular 3d
array (Px by Py by Pz) of bricks, where P = number of processors = array (Px by Py by Pz) of bricks, where P = number of processors =
Px * Py * Pz. More generally it can be a tiled decomposition, where Px * Py * Pz. More generally it can be a tiled decomposition, where
View File
@ -7,7 +7,7 @@ but there are small a number of files in several other languages like C,
Fortran, Shell script, or Python. Fortran, Shell script, or Python.
The core of the code is located in the ``src`` folder and its The core of the code is located in the ``src`` folder and its
sub-directories. A sizable number of these files are in the ``src`` subdirectories. A sizable number of these files are in the ``src``
directory itself, but there are plenty of :doc:`packages <Packages>`, directory itself, but there are plenty of :doc:`packages <Packages>`,
which can be included or excluded when LAMMPS is built. See the which can be included or excluded when LAMMPS is built. See the
:doc:`Include packages in build <Build_package>` section of the manual :doc:`Include packages in build <Build_package>` section of the manual
@ -15,42 +15,42 @@ for more information about that part of the build process. LAMMPS
currently supports building with :doc:`conventional makefiles currently supports building with :doc:`conventional makefiles
<Build_make>` and through :doc:`CMake <Build_cmake>`. Those procedures <Build_make>` and through :doc:`CMake <Build_cmake>`. Those procedures
differ in how packages are enabled or disabled for inclusion into a differ in how packages are enabled or disabled for inclusion into a
LAMMPS binary so they cannot be mixed. The source files for each LAMMPS binary, so they cannot be mixed. The source files for each
package are in all-uppercase sub-directories of the ``src`` folder, for package are in all-uppercase subdirectories of the ``src`` folder, for
example ``src/MOLECULE`` or ``src/EXTRA-MOLECULE``. The ``src/STUBS`` example ``src/MOLECULE`` or ``src/EXTRA-MOLECULE``. The ``src/STUBS``
sub-directory is not a package but contains a dummy MPI library, that is subdirectory is not a package but contains a dummy MPI library, that is
used when building a serial version of the code. The ``src/MAKE`` used when building a serial version of the code. The ``src/MAKE``
directory and its sub-directories contain makefiles with settings and directory and its subdirectories contain makefiles with settings and
flags for a variety of configuration and machines for the build process flags for a variety of configuration and machines for the build process
with traditional makefiles. with traditional makefiles.
The ``lib`` directory contains the source code for several supporting The ``lib`` directory contains the source code for several supporting
libraries or files with configuration settings to use globally installed libraries or files with configuration settings to use globally installed
libraries, that are required by some of the optional packages. They may libraries, that are required by some optional packages. They may
include python scripts that can transparently download additional source include python scripts that can transparently download additional source
code on request. Each sub-directory, like ``lib/poems`` or ``lib/gpu``, code on request. Each subdirectory, like ``lib/poems`` or ``lib/gpu``,
contains the source files, some of which are in different languages such contains the source files, some of which are in different languages such
as Fortran or CUDA. These libraries included in the LAMMPS build, as Fortran or CUDA. These libraries included in the LAMMPS build, if the
if the corresponding package is installed. corresponding package is installed.
LAMMPS C++ source files almost always come in pairs, such as LAMMPS C++ source files almost always come in pairs, such as
``src/run.cpp`` (implementation file) and ``src/run.h`` (header file). ``src/run.cpp`` (implementation file) and ``src/run.h`` (header file).
Each pair of files defines a C++ class, for example the Each pair of files defines a C++ class, for example the
:cpp:class:`LAMMPS_NS::Run` class which contains the code invoked by the :cpp:class:`LAMMPS_NS::Run` class, which contains the code invoked by
:doc:`run <run>` command in a LAMMPS input script. As this example the :doc:`run <run>` command in a LAMMPS input script. As this example
illustrates, source file and class names often have a one-to-one illustrates, source file and class names often have a one-to-one
correspondence with a command used in a LAMMPS input script. Some correspondence with a command used in a LAMMPS input script. Some
source files and classes do not have a corresponding input script source files and classes do not have a corresponding input script
command, e.g. ``src/force.cpp`` and the :cpp:class:`LAMMPS_NS::Force` command, for example ``src/force.cpp`` and the :cpp:class:`LAMMPS_NS::Force`
class. They are discussed in the next section. class. They are discussed in the next section.
The names of all source files are in lower case and may use the The names of all source files are in lower case and may use the
underscore character '_' to separate words. Outside of bundled libraries underscore character '_' to separate words. Apart from bundled,
which may have different conventions, all C and C++ header files have a externally maintained libraries, which may have different conventions,
``.h`` extension, all C++ files have a ``.cpp`` extension, and C files a all C and C++ header files have a ``.h`` extension, all C++ files have a
``.c`` extension. A small number of C++ classes and utility functions ``.cpp`` extension, and C files a ``.c`` extension. A few C++ classes
are implemented with only a ``.h`` file. Examples are the Pointers and and utility functions are implemented with only a ``.h`` file. Examples
Commands classes or the MathVec functions. are the Pointers and Commands classes or the MathVec functions.
Class topology Class topology
-------------- --------------
@ -62,35 +62,36 @@ associated source files in the ``src`` folder, for example the class
:cpp:class:`LAMMPS_NS::Memory` corresponds to the files ``memory.cpp`` :cpp:class:`LAMMPS_NS::Memory` corresponds to the files ``memory.cpp``
and ``memory.h``, or the class :cpp:class:`LAMMPS_NS::AtomVec` and ``memory.h``, or the class :cpp:class:`LAMMPS_NS::AtomVec`
corresponds to the files ``atom_vec.cpp`` and ``atom_vec.h``. Full corresponds to the files ``atom_vec.cpp`` and ``atom_vec.h``. Full
lines in the figure represent compositing: that is the class at the base lines in the figure represent compositing: that is, the class at the
of the arrow holds a pointer to an instance of the class at the tip. base of the arrow holds a pointer to an instance of the class at the
Dashed lines instead represent inheritance: the class to the tip of the tip. Dashed lines instead represent inheritance: the class at the tip
arrow is derived from the class at the base. Classes with a red boundary of the arrow is derived from the class at the base. Classes with a red
are not instantiated directly, but they represent the base classes for boundary are not instantiated directly, but they represent the base
"styles". Those "styles" make up the bulk of the LAMMPS code and only classes for "styles". Those "styles" make up the bulk of the LAMMPS
a few representative examples are included in the figure so it remains code and only a few representative examples are included in the figure,
readable. so it remains readable.
.. _class-topology: .. _class-topology:
.. figure:: JPG/lammps-classes.png .. figure:: JPG/lammps-classes.png
LAMMPS class topology LAMMPS class topology
This figure shows some of the relations of the base classes of the This figure shows relations of base classes of the LAMMPS
LAMMPS simulation package. Full lines indicate that a class holds an simulation package. Full lines indicate that a class holds an
instance of the class it is pointing to; dashed lines point to instance of the class it is pointing to; dashed lines point to
derived classes that are given as examples of what classes may be derived classes that are given as examples of what classes may be
instantiated during a LAMMPS run based on the input commands and instantiated during a LAMMPS run based on the input commands and
accessed through the API define by their respective base classes. At accessed through the API define by their respective base classes.
the core is the :cpp:class:`LAMMPS <LAMMPS_NS::LAMMPS>` class, which At the core is the :cpp:class:`LAMMPS <LAMMPS_NS::LAMMPS>` class,
holds pointers to class instances with specific purposes. Those may which holds pointers to class instances with specific purposes.
hold instances of other classes, sometimes directly, or only Those may hold instances of other classes, sometimes directly, or
temporarily, sometimes as derived classes or derived classes of only temporarily, sometimes as derived classes or derived classes
derived classes, which may also hold instances of other classes. of derived classes, which may also hold instances of other
classes.
The :cpp:class:`LAMMPS_NS::LAMMPS` class is the topmost class and The :cpp:class:`LAMMPS_NS::LAMMPS` class is the topmost class and
represents what is generally referred to an "instance" of LAMMPS. It is represents what is generally referred to as an "instance of LAMMPS". It
a composite holding pointers to instances of other core classes is a composite holding pointers to instances of other core classes
providing the core functionality of the MD engine in LAMMPS and through providing the core functionality of the MD engine in LAMMPS and through
them abstractions of the required operations. The constructor of the them abstractions of the required operations. The constructor of the
LAMMPS class will instantiate those instances, process the command line LAMMPS class will instantiate those instances, process the command line
@ -102,42 +103,44 @@ LAMMPS while passing it the command line flags and input script. It
deletes the LAMMPS instance after the method reading the input returns deletes the LAMMPS instance after the method reading the input returns
and shuts down the MPI environment before it exits the executable. and shuts down the MPI environment before it exits the executable.
The :cpp:class:`LAMMPS_NS::Pointers` is not shown in the The :cpp:class:`LAMMPS_NS::Pointers` class is not shown in the
:ref:`class-topology` figure for clarity. It holds references to many :ref:`class-topology` figure for clarity. It holds references to many
of the members of the `LAMMPS_NS::LAMMPS`, so that all classes derived of the members of the `LAMMPS_NS::LAMMPS`, so that all classes derived
from :cpp:class:`LAMMPS_NS::Pointers` have direct access to those from :cpp:class:`LAMMPS_NS::Pointers` have direct access to those
reference. From the class topology all classes with blue boundary are references. From the class topology all classes with blue boundary are
referenced in the Pointers class and all classes in the second and third referenced in the Pointers class and all classes in the second and third
columns, that are not listed as derived classes are instead derived from columns, that are not listed as derived classes, are instead derived
:cpp:class:`LAMMPS_NS::Pointers`. To initialize the pointer references from :cpp:class:`LAMMPS_NS::Pointers`. To initialize the pointer
in Pointers, a pointer to the LAMMPS class instance needs to be passed references in Pointers, a pointer to the LAMMPS class instance needs to
to the constructor and thus all constructors for classes derived from it be passed to the constructor. All constructors for classes derived from
must do so and pass this pointer to the constructor for Pointers. it, must do so and thus pass that pointer to the constructor for
:cpp:class:`LAMMPS_NS::Pointers`. The default constructor for
:cpp:class:`LAMMPS_NS::Pointers` is disabled to enforce this.
Since all storage is supposed to be encapsulated (there are a few Since all storage is supposed to be encapsulated (there are a few
exceptions), the LAMMPS class can also be instantiated multiple times by exceptions), the LAMMPS class can also be instantiated multiple times by
a calling code. Outside of the aforementioned exceptions, those LAMMPS a calling code. Outside the aforementioned exceptions, those LAMMPS
instances can be used alternately. As of the time of this writing instances can be used alternately. As of the time of this writing
(early 2021) LAMMPS is not yet sufficiently thread-safe for concurrent (early 2023) LAMMPS is not yet sufficiently thread-safe for concurrent
execution. When running in parallel with MPI, care has to be taken, execution. When running in parallel with MPI, care has to be taken,
that suitable copies of communicators are used to not create conflicts that suitable copies of communicators are used to not create conflicts
between different instances. between different instances.
The LAMMPS class currently (early 2021) holds instances of 19 classes The LAMMPS class currently holds instances of 19 classes representing
representing the core functionality. There are a handful of virtual the core functionality. There are a handful of virtual parent classes
parent classes in LAMMPS that define what LAMMPS calls ``styles``. They in LAMMPS that define what LAMMPS calls ``styles``. These are shaded
are shaded red in the :ref:`class-topology` figure. Each of these are red in the :ref:`class-topology` figure. Each of these are parents of a
parents of a number of child classes that implement the interface number of child classes that implement the interface defined by the
defined by the parent class. There are two main categories of these parent class. There are two main categories of these ``styles``: some
``styles``: some may only have one instance active at a time (e.g. atom, may only have one instance active at a time (e.g. atom, pair, bond,
pair, bond, angle, dihedral, improper, kspace, comm) and there is a angle, dihedral, improper, kspace, comm) and there is a dedicated
dedicated pointer variable for each of them in the composite class. pointer variable for each of them in the corresponding composite class.
Setups that require a mix of different such styles have to use a Setups that require a mix of different such styles have to use a
*hybrid* class that takes the place of the one allowed instance and then *hybrid* class instance that acts as a proxy, and manages and forwards
manages and forwards calls to the corresponding sub-styles for the calls to the corresponding sub-style class instances for the designated
designated subset of atoms or data. The composite class may also have subset of atoms or data. The composite class may also have lists of
lists of class instances, e.g. Modify handles lists of compute and fix class instances, e.g. ``Modify`` handles lists of compute and fix
styles, while Output handles a list of dump class instances. styles, while ``Output`` handles a list of dump class instances.
The exception to this scheme are the ``command`` style classes. These The exception to this scheme are the ``command`` style classes. These
implement specific commands that can be invoked before, after, or in implement specific commands that can be invoked before, after, or in
@ -146,19 +149,19 @@ command() method called and then, after completion, the class instance
deleted. Examples for this are the create_box, create_atoms, minimize, deleted. Examples for this are the create_box, create_atoms, minimize,
run, set, or velocity command styles. run, set, or velocity command styles.
For all those ``styles`` certain naming conventions are employed: for For all those ``styles``, certain naming conventions are employed: for
the fix nve command the class is called FixNVE and the source files are the fix nve command the class is called FixNVE and the source files are
``fix_nve.h`` and ``fix_nve.cpp``. Similarly for fix ave/time we have ``fix_nve.h`` and ``fix_nve.cpp``. Similarly, for fix ave/time we have
FixAveTime and ``fix_ave_time.h`` and ``fix_ave_time.cpp``. Style names FixAveTime and ``fix_ave_time.h`` and ``fix_ave_time.cpp``. Style names
are lower case and without spaces or special characters. A suffix or are lower case and without spaces or special characters. A suffix or
words are appended with a forward slash '/' which denotes a variant of words are appended with a forward slash '/' which denotes a variant of
the corresponding class without the suffix. To connect the style name the corresponding class without the suffix. To connect the style name
and the class name, LAMMPS uses macros like: ``AtomStyle()``, and the class name, LAMMPS uses macros like: ``AtomStyle()``,
``PairStyle()``, ``BondStyle()``, ``RegionStyle()``, and so on in the ``PairStyle()``, ``BondStyle()``, ``RegionStyle()``, and so on in the
corresponding header file. During configuration or compilation files corresponding header file. During configuration or compilation, files
with the pattern ``style_<name>.h`` are created that consist of a list with the pattern ``style_<name>.h`` are created that consist of a list
of include statements including all headers of all styles of a given of include statements including all headers of all styles of a given
type that are currently active (or "installed). type that are currently enabled (or "installed").
More details on individual classes in the :ref:`class-topology` are as More details on individual classes in the :ref:`class-topology` are as
@ -172,8 +175,8 @@ follows:
that one or multiple simulations can be run, on the processors that one or multiple simulations can be run, on the processors
allocated for a run, e.g. by the mpirun command. allocated for a run, e.g. by the mpirun command.
- The Input class reads and processes input input strings and files, - The Input class reads and processes input (strings and files), stores
stores variables, and invokes :doc:`commands <Commands_all>`. variables, and invokes :doc:`commands <Commands_all>`.
- Command style classes are derived from the Command class. They provide - Command style classes are derived from the Command class. They provide
input script commands that perform one-time operations input script commands that perform one-time operations
@ -192,7 +195,7 @@ follows:
- The Atom class stores per-atom properties associated with atom styles. - The Atom class stores per-atom properties associated with atom styles.
More precisely, they are allocated and managed by a class derived from More precisely, they are allocated and managed by a class derived from
the AtomVec class, and the Atom class simply stores pointers to them. the AtomVec class, and the Atom class simply stores pointers to them.
The classes derived from AtomVec represent the different atom styles The classes derived from AtomVec represent the different atom styles,
and they are instantiated through the :doc:`atom_style <atom_style>` and they are instantiated through the :doc:`atom_style <atom_style>`
command. command.
@ -207,17 +210,21 @@ follows:
instance is created by pair, fix, or compute styles when they need a instance is created by pair, fix, or compute styles when they need a
particular kind of neighbor list and use the NeighRequest properties particular kind of neighbor list and use the NeighRequest properties
to select the neighbor list settings for the given request. There can to select the neighbor list settings for the given request. There can
be multiple instances of the NeighRequest class and the Neighbor class be multiple instances of the NeighRequest class. The Neighbor class
will try to optimize how they are computed by creating copies or will try to optimize how the requests are processed. Depending on the
sub-lists where possible. NeighRequest properties, neighbor lists are constructed from scratch,
aliased, or constructed by post-processing an existing list into
sub-lists.
- The Comm class performs inter-processor communication, typically of - The Comm class performs inter-processor communication, typically of
ghost atom information. This usually involves MPI message exchanges ghost atom information. This usually involves MPI message exchanges
with 6 neighboring processors in the 3d logical grid of processors with 6 neighboring processors in the 3d logical grid of processors
mapped to the simulation box. There are two :doc:`communication styles mapped to the simulation box. There are two :doc:`communication styles
<comm_style>` enabling different ways to do the domain decomposition. <comm_style>`, enabling different ways to perform the domain
Sometimes the Irregular class is used, when atoms may migrate to decomposition.
arbitrary processors.
- The Irregular class is used, when atoms may migrate to arbitrary
processors.
- The Domain class stores the simulation box geometry, as well as - The Domain class stores the simulation box geometry, as well as
geometric Regions and any user definition of a Lattice. The latter geometric Regions and any user definition of a Lattice. The latter
@ -246,7 +253,7 @@ follows:
file, dump file snapshots, and restart files. These correspond to the file, dump file snapshots, and restart files. These correspond to the
:doc:`Thermo <thermo_style>`, :doc:`Dump <dump>`, and :doc:`Thermo <thermo_style>`, :doc:`Dump <dump>`, and
:doc:`WriteRestart <write_restart>` classes respectively. The Dump :doc:`WriteRestart <write_restart>` classes respectively. The Dump
class is a base class with several derived classes implementing class is a base class, with several derived classes implementing
various dump style variants. various dump style variants.
- The Timer class logs timing information, output at the end - The Timer class logs timing information, output at the end
View File
@ -1,22 +1,22 @@
Communication Communication
^^^^^^^^^^^^^ ^^^^^^^^^^^^^
Following the partitioning scheme in use all per-atom data is Following the selected partitioning scheme, all per-atom data is
distributed across the MPI processes, which allows LAMMPS to handle very distributed across the MPI processes, which allows LAMMPS to handle very
large systems provided it uses a correspondingly large number of MPI large systems provided it uses a correspondingly large number of MPI
processes. Since The per-atom data (atom IDs, positions, velocities, processes. Since The per-atom data (atom IDs, positions, velocities,
types, etc.) To be able to compute the short-range interactions MPI types, etc.) To be able to compute the short-range interactions, MPI
processes need not only access to data of atoms they "own" but also processes need not only access to the data of atoms they "own" but also
information about atoms from neighboring sub-domains, in LAMMPS referred information about atoms from neighboring subdomains, in LAMMPS referred
to as "ghost" atoms. These are copies of atoms storing required to as "ghost" atoms. These are copies of atoms storing required
per-atom data for up to the communication cutoff distance. The green per-atom data for up to the communication cutoff distance. The green
dashed-line boxes in the :ref:`domain-decomposition` figure illustrate dashed-line boxes in the :ref:`domain-decomposition` figure illustrate
the extended ghost-atom sub-domain for one processor. the extended ghost-atom subdomain for one processor.
This approach is also used to implement periodic boundary This approach is also used to implement periodic boundary
conditions: atoms that lie within the cutoff distance across a periodic conditions: atoms that lie within the cutoff distance across a periodic
boundary are also stored as ghost atoms and taken from the periodic boundary are also stored as ghost atoms and taken from the periodic
replication of the sub-domain, which may be the same sub-domain, e.g. if replication of the subdomain, which may be the same subdomain, e.g. if
running in serial. As a consequence of this, force computation in running in serial. As a consequence of this, force computation in
LAMMPS is not subject to minimum image conventions and thus cutoffs may LAMMPS is not subject to minimum image conventions and thus cutoffs may
be larger than half the simulation domain. be larger than half the simulation domain.
@ -28,10 +28,10 @@ be larger than half the simulation domain.
ghost atom communication ghost atom communication
This figure shows the ghost atom communication patterns between This figure shows the ghost atom communication patterns between
sub-domains for "brick" (left) and "tiled" communication styles for subdomains for "brick" (left) and "tiled" communication styles for
2d simulations. The numbers indicate MPI process ranks. Here the 2d simulations. The numbers indicate MPI process ranks. Here the
sub-domains are drawn spatially separated for clarity. The subdomains are drawn spatially separated for clarity. The
dashed-line box is the extended sub-domain of processor 0 which dashed-line box is the extended subdomain of processor 0 which
includes its ghost atoms. The red- and blue-shaded boxes are the includes its ghost atoms. The red- and blue-shaded boxes are the
regions of communicated ghost atoms. regions of communicated ghost atoms.
@ -42,7 +42,7 @@ atom communication is performed in two stages for a 2d simulation (three
in 3d) for both a regular and irregular partitioning of the simulation in 3d) for both a regular and irregular partitioning of the simulation
box. For the regular case (left) atoms are exchanged first in the box. For the regular case (left) atoms are exchanged first in the
*x*-direction, then in *y*, with four neighbors in the grid of processor *x*-direction, then in *y*, with four neighbors in the grid of processor
sub-domains. subdomains.
In the *x* stage, processor ranks 1 and 2 send owned atoms in their In the *x* stage, processor ranks 1 and 2 send owned atoms in their
red-shaded regions to rank 0 (and vice versa). Then in the *y* stage, red-shaded regions to rank 0 (and vice versa). Then in the *y* stage,
@ -55,11 +55,11 @@ For the irregular case (right) the two stages are similar, but a
processor can have more than one neighbor in each direction. In the processor can have more than one neighbor in each direction. In the
*x* stage, MPI ranks 1,2,3 send owned atoms in their red-shaded regions to *x* stage, MPI ranks 1,2,3 send owned atoms in their red-shaded regions to
rank 0 (and vice versa). These include only atoms between the lower rank 0 (and vice versa). These include only atoms between the lower
and upper *y*-boundary of rank 0's sub-domain. In the *y* stage, ranks and upper *y*-boundary of rank 0's subdomain. In the *y* stage, ranks
4,5,6 send atoms in their blue-shaded regions to rank 0. This may 4,5,6 send atoms in their blue-shaded regions to rank 0. This may
include ghost atoms they received in the *x* stage, but only if they include ghost atoms they received in the *x* stage, but only if they
are needed by rank 0 to fill its extended ghost atom regions in the are needed by rank 0 to fill its extended ghost atom regions in the
+/-*y* directions (blue rectangles). Thus in this case, ranks 5 and +/-*y* directions (blue rectangles). Thus, in this case, ranks 5 and
6 do not include ghost atoms they received from each other (in the *x* 6 do not include ghost atoms they received from each other (in the *x*
stage) in the atoms they send to rank 0. The key point is that while stage) in the atoms they send to rank 0. The key point is that while
the pattern of communication is more complex in the irregular the pattern of communication is more complex in the irregular
@ -78,14 +78,14 @@ A "reverse" communication is when computed ghost atom attributes are
sent back to the processor who owns the atom. This is used (for sent back to the processor who owns the atom. This is used (for
example) to sum partial forces on ghost atoms to the complete force on example) to sum partial forces on ghost atoms to the complete force on
owned atoms. The order of the two stages described in the owned atoms. The order of the two stages described in the
:ref:`ghost-atom-comm` figure is inverted and the same lists of atoms :ref:`ghost-atom-comm` figure is inverted, and the same lists of atoms
are used to pack and unpack message buffers with per-atom forces. When are used to pack and unpack message buffers with per-atom forces. When
a received buffer is unpacked, the ghost forces are summed to owned atom a received buffer is unpacked, the ghost forces are summed to owned atom
forces. As in forward communication, forces on atoms in the four blue forces. As in forward communication, forces on atoms in the four blue
corners of the diagrams are sent, received, and summed twice (once at corners of the diagrams are sent, received, and summed twice (once at
each stage) before owning processors have the full force. each stage) before owning processors have the full force.
These two operations are used many places within LAMMPS aside from These two operations are used in many places within LAMMPS aside from
exchange of coordinates and forces, for example by manybody potentials exchange of coordinates and forces, for example by manybody potentials
to share intermediate per-atom values, or by rigid-body integrators to to share intermediate per-atom values, or by rigid-body integrators to
enable each atom in a body to access body properties. Here are enable each atom in a body to access body properties. Here are
@ -105,16 +105,16 @@ performed in LAMMPS:
atom pairs when building neighbor lists or computing forces. atom pairs when building neighbor lists or computing forces.
- The cutoff distance for exchanging ghost atoms is typically equal to - The cutoff distance for exchanging ghost atoms is typically equal to
the neighbor cutoff. But it can also chosen to be longer if needed, the neighbor cutoff. But it can also set to a larger value if needed,
e.g. half the diameter of a rigid body composed of multiple atoms or e.g. half the diameter of a rigid body composed of multiple atoms or
over 3x the length of a stretched bond for dihedral interactions. It over 3x the length of a stretched bond for dihedral interactions. It
can also exceed the periodic box size. For the regular communication can also exceed the periodic box size. For the regular communication
pattern (left), if the cutoff distance extends beyond a neighbor pattern (left), if the cutoff distance extends beyond a neighbor
processor's sub-domain, then multiple exchanges are performed in the processor's subdomain, then multiple exchanges are performed in the
same direction. Each exchange is with the same neighbor processor, same direction. Each exchange is with the same neighbor processor,
but buffers are packed/unpacked using a different list of atoms. For but buffers are packed/unpacked using a different list of atoms. For
forward communication, in the first exchange a processor sends only forward communication, in the first exchange, a processor sends only
owned atoms. In subsequent exchanges, it sends ghost atoms received owned atoms. In subsequent exchanges, it sends ghost atoms received
in previous exchanges. For the irregular pattern (right) overlaps of in previous exchanges. For the irregular pattern (right) overlaps of
a processor's extended ghost-atom sub-domain with all other processors a processor's extended ghost-atom subdomain with all other processors
in each dimension are detected. in each dimension are detected.
Some files were not shown because too many files have changed in this diff.