Merge branch 'doc_fixes' of https://github.com/rbberger/lammps into collected-small-fixes

Axel Kohlmeyer
2024-08-18 16:45:48 -04:00
43 changed files with 1310 additions and 969 deletions


@ -37,8 +37,9 @@ standard. A more detailed discussion of that is below.
.. code-block:: bash
-D BUILD_MPI=value # yes or no, default is yes if CMake finds MPI, else no
-D BUILD_OMP=value # yes or no, default is yes if a compatible compiler is detected
-D BUILD_MPI=value # yes or no, default is yes if CMake finds MPI
-D BUILD_OMP=value # yes or no, default is yes if a compatible
# compiler is detected
-D LAMMPS_MACHINE=name # name = mpi, serial, mybox, titan, laptop, etc
# no default value
@ -74,7 +75,7 @@ standard. A more detailed discussion of that is below.
this is ``-fopenmp``\ , which can be added to the ``CC`` and
``LINK`` makefile variables.
For the serial build the following make variables are set (see src/MAKE/Makefile.serial):
For the serial build the following make variables are set (see ``src/MAKE/Makefile.serial``):
.. code-block:: make
@ -231,24 +232,32 @@ LAMMPS.
.. code-block:: bash
# Building with GNU Compilers:
cmake ../cmake -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_Fortran_COMPILER=gfortran
cmake -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ \
-DCMAKE_Fortran_COMPILER=gfortran ../cmake
# Building with Intel Compilers:
cmake ../cmake -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DCMAKE_Fortran_COMPILER=ifort
cmake -DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc \
-DCMAKE_Fortran_COMPILER=ifort ../cmake
# Building with Intel oneAPI Compilers:
cmake ../cmake -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DCMAKE_Fortran_COMPILER=ifx
cmake -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
-DCMAKE_Fortran_COMPILER=ifx ../cmake
# Building with LLVM/Clang Compilers:
cmake ../cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_Fortran_COMPILER=flang
cmake -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
-DCMAKE_Fortran_COMPILER=flang ../cmake
# Building with PGI/Nvidia Compilers:
cmake ../cmake -DCMAKE_C_COMPILER=pgcc -DCMAKE_CXX_COMPILER=pgc++ -DCMAKE_Fortran_COMPILER=pgfortran
cmake -DCMAKE_C_COMPILER=pgcc -DCMAKE_CXX_COMPILER=pgc++ \
-DCMAKE_Fortran_COMPILER=pgfortran ../cmake
# Building with the NVHPC Compilers:
cmake -DCMAKE_C_COMPILER=nvc -DCMAKE_CXX_COMPILER=nvc++ \
-DCMAKE_Fortran_COMPILER=nvfortran ../cmake
For compiling with the Clang/LLVM compilers a CMake preset is
provided that can be loaded with
`-C ../cmake/presets/clang.cmake`. Similarly,
`-C ../cmake/presets/intel.cmake` should switch the compiler
toolchain to the legacy Intel compilers, `-C ../cmake/presets/oneapi.cmake`
``-C ../cmake/presets/clang.cmake``. Similarly,
``-C ../cmake/presets/intel.cmake`` should switch the compiler
toolchain to the legacy Intel compilers, ``-C ../cmake/presets/oneapi.cmake``
will switch to the LLVM based oneAPI Intel compilers,
and `-C ../cmake/presets/pgi.cmake`
will switch the compiler to the PGI compilers.
``-C ../cmake/presets/pgi.cmake`` will switch the compiler to the PGI compilers,
and ``-C ../cmake/presets/nvhpc.cmake`` will switch to the NVHPC compilers.
Furthermore, you can set ``CMAKE_TUNE_FLAGS`` to specifically add
compiler flags to tune for optimal performance on given hosts.
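As a minimal sketch, a compiler preset and tuning flags can be combined
in a single configuration step (the tuning flags shown are generic
examples, not recommendations):

.. code-block:: bash

   # load the Clang toolchain preset and add host-specific tuning flags
   cmake -C ../cmake/presets/clang.cmake \
         -D CMAKE_TUNE_FLAGS="-march=native -mtune=native" ../cmake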
@ -259,7 +268,7 @@ LAMMPS.
When the cmake command completes, it prints a summary to the
screen showing which compilers it is using and what flags and settings
will be used for the compilation. Note that if the top-level
compiler is mpicxx, it is simply a wrapper on a real compiler.
compiler is ``mpicxx``, it is simply a wrapper around a real compiler.
The underlying compiler info is what CMake will try to
determine and report. You should check to confirm you are
using the compiler and optimization flags you want.
@ -316,10 +325,10 @@ LAMMPS.
there may be specific compiler or linker flags that are either
required or recommended to enable required features and to
achieve optimal performance. You need to include these in the
CCFLAGS and LINKFLAGS settings above. For details, see the
``CCFLAGS`` and ``LINKFLAGS`` settings above. For details, see the
documentation for the individual packages listed on the
:doc:`Speed_packages` page. Or examine these files in the
src/MAKE/OPTIONS directory. They correspond to each of the 5
``src/MAKE/OPTIONS`` directory. They correspond to each of the 5
accelerator packages and their hardware variants:
.. code-block:: bash
@ -388,7 +397,8 @@ running LAMMPS from Python via its library interface.
make machine # build LAMMPS executable lmp_machine
make mode=static machine # same as "make machine"
make mode=shared machine # build LAMMPS shared lib liblammps_machine.so instead
make mode=shared machine # build LAMMPS shared lib liblammps_machine.so
# instead
The "static" build will generate a static library called
``liblammps_machine.a`` and an executable named ``lmp_machine``\ ,
@ -450,7 +460,7 @@ installation.
Including or removing debug support
-----------------------------------
By default the compilation settings will include the *-g* flag which
By default the compilation settings will include the ``-g`` flag which
instructs the compiler to include debug information (e.g. which line of
source code a particular instruction corresponds to). This can be
extremely useful in case LAMMPS crashes and can help to provide crucial
@ -463,7 +473,7 @@ If this is a concern, you can change the compilation settings or remove
the debug information from the LAMMPS executable:
- **Traditional make**: edit your ``Makefile.<machine>`` to remove the
*-g* flag from the ``CCFLAGS`` and ``LINKFLAGS`` definitions
``-g`` flag from the ``CCFLAGS`` and ``LINKFLAGS`` definitions
- **CMake**: use ``-D CMAKE_BUILD_TYPE=Release`` or explicitly reset
the applicable compiler flags (best done using the text mode or
graphical user interface).
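A brief sketch of both routes is shown below; the executable name
``lmp_serial`` is only an example, and the standard ``strip`` utility
removes debug information from an already compiled binary:

.. code-block:: bash

   # CMake: reconfigure and rebuild without debug information
   cmake -D CMAKE_BUILD_TYPE=Release . && cmake --build .

   # alternatively, strip debug symbols from an existing executable
   strip lmp_serial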
@ -488,7 +498,9 @@ using CMake or Make.
.. code-block:: bash
-D BUILD_TOOLS=value # yes or no (default). Build binary2txt, chain.x, micelle2d.x, msi2lmp, phana, stl_bin2txt
-D BUILD_TOOLS=value # yes or no (default). Build binary2txt,
# chain.x, micelle2d.x, msi2lmp, phana,
# stl_bin2txt
-D BUILD_LAMMPS_GUI=value # yes or no (default). Build LAMMPS-GUI
The generated binaries will also become part of the LAMMPS installation


@ -131,20 +131,20 @@ file called ``CMakeLists.txt`` (for LAMMPS it is located in the
configuration step. The cache file contains all current CMake settings.
To modify settings, enable or disable features, you need to set
*variables* with either the *-D* command line flag (``-D
*variables* with either the ``-D`` command line flag (``-D
VARIABLE1_NAME=value``) or change them in the text mode of the graphical
user interface. The *-D* flag can be used several times in one command.
user interface. The ``-D`` flag can be used several times in one command.
For your convenience, we provide :ref:`CMake presets <cmake_presets>`
that combine multiple settings to enable optional LAMMPS packages or use
a different compiler tool chain. Those are loaded with the *-C* flag
a different compiler tool chain. Those are loaded with the ``-C`` flag
(``-C ../cmake/presets/basic.cmake``). This step would only be needed
once, as the settings from the preset files are stored in the
``CMakeCache.txt`` file. It is also possible to customize the build
by adding one or more *-D* flags to the CMake command line.
by adding one or more ``-D`` flags to the CMake command line.
Generating files for alternate build tools (e.g. Ninja) and project files
for IDEs like Eclipse, CodeBlocks, or Kate can be selected using the *-G*
for IDEs like Eclipse, CodeBlocks, or Kate can be selected using the ``-G``
command line flag. A list of available generator settings for your
specific CMake version is given when running ``cmake --help``.
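For example, an alternate build tool can be selected at configuration
time (assuming the chosen tool, here Ninja, is installed):

.. code-block:: bash

   # generate a Ninja-based build system instead of makefiles
   cmake -G Ninja ../cmake
   cmake --build .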
@ -171,7 +171,7 @@ files. E.g. with:
In that case the resulting binaries are not in the build folder directly
but in subdirectories corresponding to the build type (i.e. Release in
the example from above). Similarly, for running unit tests the
configuration is selected with the *-C* flag:
configuration is selected with the ``-C`` flag:
.. code-block:: bash


@ -181,24 +181,24 @@ The output of this command will be looking something like this:
$ ctest
Test project /home/akohlmey/compile/lammps/build-testing
Start 1: RunLammps
1/563 Test #1: RunLammps .......................................... Passed 0.28 sec
1/563 Test #1: RunLammps .................................. Passed 0.28 sec
Start 2: HelpMessage
2/563 Test #2: HelpMessage ........................................ Passed 0.06 sec
2/563 Test #2: HelpMessage ................................ Passed 0.06 sec
Start 3: InvalidFlag
3/563 Test #3: InvalidFlag ........................................ Passed 0.06 sec
3/563 Test #3: InvalidFlag ................................ Passed 0.06 sec
Start 4: Tokenizer
4/563 Test #4: Tokenizer .......................................... Passed 0.05 sec
4/563 Test #4: Tokenizer .................................. Passed 0.05 sec
Start 5: MemPool
5/563 Test #5: MemPool ............................................ Passed 0.05 sec
5/563 Test #5: MemPool .................................... Passed 0.05 sec
Start 6: ArgUtils
6/563 Test #6: ArgUtils ........................................... Passed 0.05 sec
6/563 Test #6: ArgUtils ................................... Passed 0.05 sec
[...]
Start 561: ImproperStyle:zero
561/563 Test #561: ImproperStyle:zero ................................. Passed 0.07 sec
561/563 Test #561: ImproperStyle:zero ......................... Passed 0.07 sec
Start 562: TestMliapPyUnified
562/563 Test #562: TestMliapPyUnified ................................. Passed 0.16 sec
562/563 Test #562: TestMliapPyUnified ......................... Passed 0.16 sec
Start 563: TestPairList
563/563 Test #563: TestPairList ....................................... Passed 0.06 sec
563/563 Test #563: TestPairList ............................... Passed 0.06 sec
100% tests passed, 0 tests failed out of 563
@ -216,21 +216,21 @@ The ``ctest`` command has many options, the most important ones are:
* - Option
- Function
* - -V
* - ``-V``
- verbose output: display output of individual test runs
* - -j <num>
* - ``-j <num>``
- parallel run: run <num> tests in parallel
* - -R <regex>
* - ``-R <regex>``
- run subset of tests matching the regular expression <regex>
* - -E <regex>
* - ``-E <regex>``
- exclude subset of tests matching the regular expression <regex>
* - -L <regex>
* - ``-L <regex>``
- run subset of tests with a label matching the regular expression <regex>
* - -LE <regex>
* - ``-LE <regex>``
- exclude subset of tests with a label matching the regular expression <regex>
* - -N
* - ``-N``
- dry-run: display list of tests without running them
* - -T memcheck
* - ``-T memcheck``
- run tests with valgrind memory checker (if available)
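A few illustrative combinations of these options (the test name
pattern is just an example taken from the output above):

.. code-block:: bash

   ctest -j 8                # run 8 tests in parallel
   ctest -R 'Tokenizer' -V   # run only matching tests with verbose output
   ctest -N                  # list the selected tests without running them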
In its full implementation, the unit test framework will consist of multiple
@ -339,13 +339,13 @@ The force style test programs have a common set of options:
* - Option
- Function
* - -g <newfile>
* - ``-g <newfile>``
- regenerate reference data in new YAML file
* - -u
* - ``-u``
- update reference data in the original YAML file
* - -s
* - ``-s``
- print error statistics for each group of comparisons
* - -v
* - ``-v``
- verbose output: also print the executed LAMMPS commands
The ``ctest`` tool has no mechanism to directly pass flags to the individual
@ -359,10 +359,10 @@ set in an environment variable ``TEST_ARGS``. Example:
To add a test for a style that is not yet covered, it is usually best
to copy a YAML file for a similar style to a new file, edit the details
of the style (how to call it, how to set its coefficients) and then
run test command with either the *-g* and the replace the initial
test file with the regenerated one or the *-u* option. The *-u* option
run the test command with either the ``-g`` option and then replace the initial
test file with the regenerated one, or with the ``-u`` option. The ``-u`` option
will destroy the original file if the generation run does not complete,
so using *-g* is recommended unless the YAML file is fully tested
so using ``-g`` is recommended unless the YAML file is fully tested
and working.
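As a hypothetical example (the YAML file name and test pattern are
placeholders), the flag is passed through the ``TEST_ARGS`` environment
variable mentioned above:

.. code-block:: bash

   # regenerate reference data into a new file for tests matching a pattern
   TEST_ARGS="-g new-reference.yaml" ctest -R 'ImproperStyle:zero' -V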
Some of the force style tests are rather slow to run and some are very
@ -512,7 +512,9 @@ After post-processing with ``gen_coverage_html`` the results are in
a folder ``coverage_html`` and can be viewed with a web browser.
The images below illustrate how the data is presented.
.. list-table::
.. only:: not latex
.. list-table::
* - .. figure:: JPG/coverage-overview-top.png
:scale: 25%
@ -534,6 +536,28 @@ The images below illustrate how the data is presented.
Source page with branches
.. only:: latex
.. figure:: JPG/coverage-overview-top.png
:width: 60%
Top of the overview page
.. figure:: JPG/coverage-overview-manybody.png
:width: 60%
Styles with good coverage
.. figure:: JPG/coverage-file-top.png
:width: 60%
Top of individual source page
.. figure:: JPG/coverage-file-branches.png
:width: 60%
Source page with branches
Coding style utilities
----------------------


@ -14,7 +14,7 @@ in addition to
cmake -D PKG_NAME=yes
- .. code-block:: console
- .. code-block:: bash
make yes-name
@ -73,7 +73,7 @@ COMPRESS package
To build with this package you must have the `zlib compression library
<https://zlib.net>`_ available on your system to build dump styles with
a '/gz' suffix. There are also styles using the
a ``/gz`` suffix. There are also styles using the
`Zstandard <https://facebook.github.io/zstd/>`_ library which have a
a ``/zstd`` suffix. The zstd library version must be at least 1.4. Older
versions use an incompatible API and thus LAMMPS will fail to compile.
@ -95,7 +95,7 @@ versions use an incompatible API and thus LAMMPS will fail to compile.
<https://www.freedesktop.org/wiki/Software/pkg-config/>`_ tool to
identify the necessary flags to compile with this library, so the
corresponding ``libzstandard.pc`` file must be in a folder where
pkg-config can find it, which may require adding it to the
``pkg-config`` can find it, which may require adding it to the
``PKG_CONFIG_PATH`` environment variable.
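If the library is installed in a non-standard location, the required
setup might look like this sketch (the install prefix is an example):

.. code-block:: bash

   # make the Zstandard pkg-config file findable before configuring LAMMPS
   export PKG_CONFIG_PATH=/opt/zstd/lib/pkgconfig:$PKG_CONFIG_PATH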
.. tab:: Traditional make
@ -127,46 +127,53 @@ CMake build
# value = double or mixed (default) or single
-D GPU_ARCH=value # primary GPU hardware choice for GPU_API=cuda
# value = sm_XX (see below, default is sm_50)
-D GPU_DEBUG=value # enable debug code in the GPU package library, mostly useful for developers
-D GPU_DEBUG=value # enable debug code in the GPU package library,
# mostly useful for developers
# value = yes or no (default)
-D HIP_PATH=value # value = path to HIP installation. Must be set if GPU_API=HIP
-D HIP_PATH=value # value = path to HIP installation. Must be set if
# GPU_API=HIP
-D HIP_ARCH=value # primary GPU hardware choice for GPU_API=hip
# value depends on selected HIP_PLATFORM
# default is 'gfx906' for HIP_PLATFORM=amd and 'sm_50' for HIP_PLATFORM=nvcc
# default is 'gfx906' for HIP_PLATFORM=amd and 'sm_50' for
# HIP_PLATFORM=nvcc
-D HIP_USE_DEVICE_SORT=value # enables GPU sorting
# value = yes (default) or no
-D CUDPP_OPT=value # use GPU binning on with CUDA (should be off for modern GPUs)
# enables CUDA Performance Primitives, must be "no" for CUDA_MPS_SUPPORT=yes
-D CUDPP_OPT=value # use GPU binning with CUDA (should be off for modern GPUs)
# enables CUDA Performance Primitives, must be "no" for
# CUDA_MPS_SUPPORT=yes
# value = yes or no (default)
-D CUDA_MPS_SUPPORT=value # enables some tweaks required to run with active nvidia-cuda-mps daemon
-D CUDA_MPS_SUPPORT=value # enables some tweaks required to run with active
# nvidia-cuda-mps daemon
# value = yes or no (default)
-D CUDA_BUILD_MULTIARCH=value # enables building CUDA kernels for all supported GPU architectures
-D CUDA_BUILD_MULTIARCH=value # enables building CUDA kernels for all supported GPU
# architectures
# value = yes (default) or no
-D USE_STATIC_OPENCL_LOADER=value # downloads/includes OpenCL ICD loader library, no local OpenCL headers/libs needed
-D USE_STATIC_OPENCL_LOADER=value # downloads/includes OpenCL ICD loader library,
# no local OpenCL headers/libs needed
# value = yes (default) or no
:code:`GPU_ARCH` settings for different GPU hardware is as follows:
``GPU_ARCH`` settings for different GPU hardware are as follows:
* sm_30 for Kepler (supported since CUDA 5 and until CUDA 10.x)
* sm_35 or sm_37 for Kepler (supported since CUDA 5 and until CUDA 11.x)
* sm_50 or sm_52 for Maxwell (supported since CUDA 6)
* sm_60 or sm_61 for Pascal (supported since CUDA 8)
* sm_70 for Volta (supported since CUDA 9)
* sm_75 for Turing (supported since CUDA 10)
* sm_80 or sm_86 for Ampere (supported since CUDA 11, sm_86 since CUDA 11.1)
* sm_89 for Lovelace (supported since CUDA 11.8)
* sm_90 for Hopper (supported since CUDA 12.0)
* ``sm_30`` for Kepler (supported since CUDA 5 and until CUDA 10.x)
* ``sm_35`` or ``sm_37`` for Kepler (supported since CUDA 5 and until CUDA 11.x)
* ``sm_50`` or ``sm_52`` for Maxwell (supported since CUDA 6)
* ``sm_60`` or ``sm_61`` for Pascal (supported since CUDA 8)
* ``sm_70`` for Volta (supported since CUDA 9)
* ``sm_75`` for Turing (supported since CUDA 10)
* ``sm_80`` or ``sm_86`` for Ampere (supported since CUDA 11, ``sm_86`` since CUDA 11.1)
* ``sm_89`` for Lovelace (supported since CUDA 11.8)
* ``sm_90`` for Hopper (supported since CUDA 12.0)
A more detailed list can be found, for example,
at `Wikipedia's CUDA article <https://en.wikipedia.org/wiki/CUDA#GPUs_supported>`_.
CMake can detect which version of the CUDA toolkit is used and thus will
try to include support for **all** major GPU architectures supported by
this toolkit. Thus the GPU_ARCH setting is merely an optimization, to
this toolkit. Thus the ``GPU_ARCH`` setting is merely an optimization, to
have code for the preferred GPU architecture directly included rather
than having to wait for the JIT compiler of the CUDA driver to translate
it. This behavior can be turned off (e.g. to speed up compilation) by
setting :code:`CUDA_ENABLE_MULTIARCH` to :code:`no`.
setting ``CUDA_ENABLE_MULTIARCH`` to ``no``.
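For instance, configuring the GPU package for CUDA with a preselected
architecture might look like this (a sketch only; the architecture
value depends on your hardware):

.. code-block:: bash

   # GPU package with CUDA, preferring code for an Ampere (sm_80) card
   cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_ARCH=sm_80 ../cmake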
When compiling for CUDA or HIP with CUDA, version 8.0 or later of the
CUDA toolkit is required and a GPU architecture of Kepler or later,
@ -185,21 +192,21 @@ build, and link with a static OpenCL ICD loader library and standard
OpenCL headers. This way no local OpenCL development headers or library
needs to be present and only OpenCL compatible drivers need to be
installed to use OpenCL. If this is not desired, you can set
:code:`USE_STATIC_OPENCL_LOADER` to :code:`no`.
``USE_STATIC_OPENCL_LOADER`` to ``no``.
The GPU library has some multi-thread support using OpenMP. If LAMMPS
is built with ``-D BUILD_OMP=on`` this will also be enabled.
If you are compiling with HIP, note that before running CMake you will
have to set appropriate environment variables. Some variables such as
:code:`HCC_AMDGPU_TARGET` (for ROCm <= 4.0) or :code:`CUDA_PATH` are
necessary for :code:`hipcc` and the linker to work correctly.
``HCC_AMDGPU_TARGET`` (for ROCm <= 4.0) or ``CUDA_PATH`` are
necessary for ``hipcc`` and the linker to work correctly.
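A sketch of a HIP configuration for an AMD GPU (the installation path
and architecture are examples; ``HCC_AMDGPU_TARGET`` is only needed for
ROCm <= 4.0):

.. code-block:: bash

   export HCC_AMDGPU_TARGET=gfx906   # only needed for ROCm <= 4.0
   cmake -D PKG_GPU=on -D GPU_API=hip -D HIP_PATH=/opt/rocm \
         -D HIP_ARCH=gfx906 ../cmake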
.. versionadded:: 3Aug2022
Using the CHIP-SPV implementation of HIP is supported. It allows one to
run HIP code on Intel GPUs via the OpenCL or Level Zero backends. To use
CHIP-SPV, you must set :code:`-DHIP_USE_DEVICE_SORT=OFF` in your CMake
CHIP-SPV, you must set ``-DHIP_USE_DEVICE_SORT=OFF`` in your CMake
command line as CHIP-SPV does not yet support hipCUB. As of Summer 2022,
the use of HIP for Intel GPUs is experimental. You should only use this
option in preparations to run on Aurora system at Argonne.
@ -257,28 +264,35 @@ script with the specified args:
.. code-block:: bash
make lib-gpu # print help message
make lib-gpu args="-b" # build GPU library with default Makefile.linux
make lib-gpu args="-m xk7 -p single -o xk7.single" # create new Makefile.xk7.single, altered for single-precision
make lib-gpu args="-m mpi -a sm_60 -p mixed -b" # build GPU library with mixed precision and P100 using other settings in Makefile.mpi
# print help message
make lib-gpu
# build GPU library with default Makefile.linux
make lib-gpu args="-b"
# create new Makefile.xk7.single, altered for single-precision
make lib-gpu args="-m xk7 -p single -o xk7.single"
# build GPU library with mixed precision and P100 using other settings in Makefile.mpi
make lib-gpu args="-m mpi -a sm_60 -p mixed -b"
Note that this procedure starts with a ``Makefile.machine`` in ``lib/gpu``, as
specified by the "-m" switch. For your convenience, machine makefiles
specified by the ``-m`` switch. For your convenience, machine makefiles
for "mpi" and "serial" are provided, which have the same settings as
the corresponding machine makefiles in the main LAMMPS source
folder. In addition you can alter 4 important settings in the
Makefile.machine you start from via the corresponding -c, -a, -p, -e
Makefile.machine you start from via the corresponding ``-c``, ``-a``, ``-p``, ``-e``
switches (as in the examples above), and also save a copy of the new
Makefile if desired:
* ``CUDA_HOME`` = where NVIDIA CUDA software is installed on your system
* ``CUDA_ARCH`` = sm_XX, what GPU hardware you have, same as CMake GPU_ARCH above
* ``CUDA_ARCH`` = ``sm_XX``, what GPU hardware you have, same as CMake ``GPU_ARCH`` above
* ``CUDA_PRECISION`` = precision (double, mixed, single)
* ``EXTRAMAKE`` = which Makefile.lammps.\* file to copy to Makefile.lammps
* ``EXTRAMAKE`` = which ``Makefile.lammps.*`` file to copy to ``Makefile.lammps``
The file Makefile.cuda is set up to include support for multiple
The file ``Makefile.cuda`` is set up to include support for multiple
GPU architectures as supported by the CUDA toolkit in use. This is done
through using the "--gencode " flag, which can be used multiple times and
through using the ``--gencode`` flag, which can be used multiple times and
thus support all GPU architectures supported by your CUDA compiler.
To enable GPU binning via CUDA performance primitives set the Makefile variable
@ -349,12 +363,16 @@ minutes to hours) to build. Of course you only need to do that once.)
.. code-block:: bash
-D DOWNLOAD_KIM=value # download OpenKIM API v2 for build, value = no (default) or yes
-D LMP_DEBUG_CURL=value # set libcurl verbose mode on/off, value = off (default) or on
-D LMP_NO_SSL_CHECK=value # tell libcurl to not verify the peer, value = no (default) or yes
-D KIM_EXTRA_UNITTESTS=value # enables extra unit tests, value = no (default) or yes
-D DOWNLOAD_KIM=value # download OpenKIM API v2 for build
# value = no (default) or yes
-D LMP_DEBUG_CURL=value # set libcurl verbose mode on/off
# value = off (default) or on
-D LMP_NO_SSL_CHECK=value # tell libcurl to not verify the peer
# value = no (default) or yes
-D KIM_EXTRA_UNITTESTS=value # enables extra unit tests
# value = no (default) or yes
If ``DOWNLOAD_KIM`` is set to *yes* (or *on*), the KIM API library
If ``DOWNLOAD_KIM`` is set to ``yes`` (or ``on``), the KIM API library
will be downloaded and built inside the CMake build directory. If
the KIM library is already installed on your system (in a location
where CMake cannot find it), you may need to set the
@ -362,7 +380,7 @@ minutes to hours) to build. Of course you only need to do that once.)
found, or run the command ``source kim-api-activate``.
Extra unit tests are only available if they are explicitly requested
(``KIM_EXTRA_UNITTESTS`` is set to *yes* (or *on*)) and the prerequisites
(``KIM_EXTRA_UNITTESTS`` is set to ``yes`` (or ``on``)) and the prerequisites
are met. See :ref:`KIM Extra unit tests <kim_extra_unittests>` for
more details on this.
@ -376,15 +394,28 @@ minutes to hours) to build. Of course you only need to do that once.)
.. code-block:: bash
make lib-kim # print help message
make lib-kim args="-b " # (re-)install KIM API lib with only example models
make lib-kim args="-b -a Glue_Ercolessi_Adams_Al__MO_324507536345_001" # ditto plus one model
make lib-kim args="-b -a everything" # install KIM API lib with all models
make lib-kim args="-n -a EAM_Dynamo_Ackland_W__MO_141627196590_002" # add one model or model driver
make lib-kim args="-p /usr/local" # use an existing KIM API installation at the provided location
make lib-kim args="-p /usr/local -a EAM_Dynamo_Ackland_W__MO_141627196590_002" # ditto but add one model or driver
# print help message
make lib-kim
When using the "-b " option, the KIM library is built using its native
# (re-)install KIM API lib with only example models
make lib-kim args="-b"
# ditto plus one model
make lib-kim args="-b -a Glue_Ercolessi_Adams_Al__MO_324507536345_001"
# install KIM API lib with all models
make lib-kim args="-b -a everything"
# add one model or model driver
make lib-kim args="-n -a EAM_Dynamo_Ackland_W__MO_141627196590_002"
# use an existing KIM API installation at the provided location
make lib-kim args="-p <prefix>"
# ditto but add one model or driver
make lib-kim args="-p <prefix> -a EAM_Dynamo_Ackland_W__MO_141627196590_002"
When using the ``-b`` option, the KIM library is built using its native
cmake build system. The ``lib/kim/Install.py`` script supports a
``CMAKE`` environment variable if the cmake executable is named other
than ``cmake`` on your system. Additional environment variables may be
@ -394,7 +425,9 @@ minutes to hours) to build. Of course you only need to do that once.)
.. code-block:: bash
CMAKE=cmake3 CXX=g++-11 CC=gcc-11 FC=gfortran-11 make lib-kim args="-b " # (re-)install KIM API lib using cmake3 and gnu v11 compilers with only example models
# (re-)install KIM API lib using cmake3 and gnu v11 compilers
# with only example models
CMAKE=cmake3 CXX=g++-11 CC=gcc-11 FC=gfortran-11 make lib-kim args="-b"
Settings for debugging OpenKIM web queries discussed below need to
be applied by adding them to the ``LMP_INC`` variable through
@ -434,7 +467,7 @@ KIM Extra unit tests (CMake only)
During development, testing, or debugging, if
:doc:`unit testing <Build_development>` is enabled in LAMMPS, one can also
enable extra tests on :doc:`KIM commands <kim_commands>` by setting the
``KIM_EXTRA_UNITTESTS`` to *yes* (or *on*).
``KIM_EXTRA_UNITTESTS`` to ``yes`` (or ``on``).
Enabling the extra unit tests has some requirements:
@ -449,10 +482,12 @@ Enabling the extra unit tests have some requirements,
*conda-forge* channel as ``conda install kim-property`` if LAMMPS is built in
Conda. More detailed information is available at:
`kim-property installation <https://github.com/openkim/kim-property#installing-kim-property>`_.
* It is also necessary to install
``EAM_Dynamo_MendelevAckland_2007v3_Zr__MO_004835508849_000``,
``EAM_Dynamo_ErcolessiAdams_1994_Al__MO_123629422045_005``, and
``LennardJones612_UniversalShifted__MO_959249795837_003`` KIM models.
* It is also necessary to install the following KIM models:
* ``EAM_Dynamo_MendelevAckland_2007v3_Zr__MO_004835508849_000``
* ``EAM_Dynamo_ErcolessiAdams_1994_Al__MO_123629422045_005``
* ``LennardJones612_UniversalShifted__MO_959249795837_003``
See `Obtaining KIM Models <https://openkim.org/doc/usage/obtaining-models>`_
to learn how to install a pre-built binary of the OpenKIM Repository of
Models or see
@ -729,7 +764,8 @@ This list was last updated for version 4.3.0 of the Kokkos library.
mkdir build-kokkos-cuda
cd build-kokkos-cuda
cmake -C ../cmake/presets/basic.cmake -C ../cmake/presets/kokkos-cuda.cmake ../cmake
cmake -C ../cmake/presets/basic.cmake \
-C ../cmake/presets/kokkos-cuda.cmake ../cmake
cmake --build .
.. tab:: Basic traditional make settings:
@ -757,9 +793,10 @@ This list was last updated for version 4.3.0 of the Kokkos library.
.. code-block:: make
KOKKOS_DEVICES = Cuda
KOKKOS_ARCH = HOSTARCH,GPUARCH # HOSTARCH = HOST from list above that is hosting the GPU
KOKKOS_CUDA_OPTIONS = "enable_lambda"
KOKKOS_ARCH = HOSTARCH,GPUARCH # HOSTARCH = HOST from list above that is
# hosting the GPU
# GPUARCH = GPU from list above
KOKKOS_CUDA_OPTIONS = "enable_lambda"
FFT_INC = -DFFT_CUFFT # enable use of cuFFT (optional)
FFT_LIB = -lcufft # link to cuFFT library
@ -787,7 +824,8 @@ This list was last updated for version 4.3.0 of the Kokkos library.
.. code-block:: make
KOKKOS_DEVICES = HIP
KOKKOS_ARCH = HOSTARCH,GPUARCH # HOSTARCH = HOST from list above that is hosting the GPU
KOKKOS_ARCH = HOSTARCH,GPUARCH # HOSTARCH = HOST from list above that is
# hosting the GPU
# GPUARCH = GPU from list above
FFT_INC = -DFFT_HIPFFT # enable use of hipFFT (optional)
FFT_LIB = -lhipfft # link to hipFFT library
@ -874,11 +912,16 @@ included in the LAMMPS source distribution in the ``lib/lepton`` folder.
.. code-block:: bash
make lib-lepton # print help message
make lib-lepton args="-m serial" # build with GNU g++ compiler (settings as with "make serial")
make lib-lepton args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
# print help message
make lib-lepton
The "machine" argument of the "-m" flag is used to find a
# build with GNU g++ compiler (settings as with "make serial")
make lib-lepton args="-m serial"
# build with default MPI compiler (settings as with "make mpi")
make lib-lepton args="-m mpi"
The "machine" argument of the ``-m`` flag is used to find a
``Makefile.machine`` to use as a build recipe.
The build should produce a ``build`` folder and the library ``lib/lepton/liblmplepton.a``
@ -900,7 +943,8 @@ Eigen3 is a template library, so you do not need to build it.
.. code-block:: bash
-D DOWNLOAD_EIGEN3 # download Eigen3, value = no (default) or yes
-D EIGEN3_INCLUDE_DIR=path # path to Eigen library (only needed if a custom location)
-D EIGEN3_INCLUDE_DIR=path # path to Eigen library (only needed if a
# custom location)
If ``DOWNLOAD_EIGEN3`` is set, the Eigen3 library will be
downloaded and unpacked inside the CMake build directory. If the Eigen3
@ -918,9 +962,14 @@ Eigen3 is a template library, so you do not need to build it.
.. code-block:: bash
make lib-machdyn # print help message
make lib-machdyn args="-b" # download to lib/machdyn/eigen3
make lib-machdyn args="-p /usr/include/eigen3" # use existing Eigen installation in /usr/include/eigen3
# print help message
make lib-machdyn
# download to lib/machdyn/eigen3
make lib-machdyn args="-b"
# use existing Eigen installation in /usr/include/eigen3
make lib-machdyn args="-p /usr/include/eigen3"
Note that a symbolic (soft) link named ``includelink`` is created
in ``lib/machdyn`` to point to the Eigen dir. When LAMMPS builds it
@ -994,7 +1043,7 @@ OPT package
The compiler flag ``-restrict`` must be used to build LAMMPS with
the OPT package when using Intel compilers. It should be added to
the :code:`CCFLAGS` line of your ``Makefile.machine``. See
the ``CCFLAGS`` line of your ``Makefile.machine``. See
``src/MAKE/OPTIONS/Makefile.opt`` for an example.
----------
@ -1021,10 +1070,17 @@ POEMS package
.. code-block:: bash
make lib-poems # print help message
make lib-poems args="-m serial" # build with GNU g++ compiler (settings as with "make serial")
make lib-poems args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-poems args="-m icc" # build with Intel icc compiler
# print help message
make lib-poems
# build with GNU g++ compiler (settings as with "make serial")
make lib-poems args="-m serial"
# build with default MPI C++ compiler (settings as with "make mpi")
make lib-poems args="-m mpi"
# build with Intel icc compiler
make lib-poems args="-m icc"
The build should produce two files: ``lib/poems/libpoems.a`` and
``lib/poems/Makefile.lammps``. The latter is copied from an
@ -1088,9 +1144,12 @@ binary package provided by your operating system.
.. code-block:: bash
-D DOWNLOAD_VORO=value # download Voro++ for build, value = no (default) or yes
-D VORO_LIBRARY=path # Voro++ library file (only needed if at custom location)
-D VORO_INCLUDE_DIR=path # Voro++ include directory (only needed if at custom location)
-D DOWNLOAD_VORO=value # download Voro++ for build
# value = no (default) or yes
-D VORO_LIBRARY=path # Voro++ library file
# (only needed if at custom location)
-D VORO_INCLUDE_DIR=path # Voro++ include directory
# (only needed if at custom location)
If ``DOWNLOAD_VORO`` is set, the Voro++ library will be downloaded
and built inside the CMake build directory. If the Voro++ library
@ -1110,12 +1169,19 @@ binary package provided by your operating system.
.. code-block:: bash
make lib-voronoi # print help message
make lib-voronoi args="-b" # download and build the default version in lib/voronoi/voro++-<version>
make lib-voronoi args="-p $HOME/voro++" # use existing Voro++ installation in $HOME/voro++
make lib-voronoi args="-b -v voro++0.4.6" # download and build the 0.4.6 version in lib/voronoi/voro++-0.4.6
# print help message
make lib-voronoi
Note that 2 symbolic (soft) links, ``includelink`` and
# download and build the default version in lib/voronoi/voro++-<version>
make lib-voronoi args="-b"
# use existing Voro++ installation in $HOME/voro++
make lib-voronoi args="-p $HOME/voro++"
# download and build the 0.4.6 version in lib/voronoi/voro++-0.4.6
make lib-voronoi args="-b -v voro++0.4.6"
Note that two symbolic (soft) links, ``includelink`` and
``liblink``, are created in ``lib/voronoi`` to point to the Voro++
source dir. When LAMMPS builds in ``src`` it will use these
links. You should not need to edit the
@ -1189,10 +1255,17 @@ The ATC package requires the MANYBODY package also be installed.
.. code-block:: bash
make lib-atc # print help message
make lib-atc args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-atc args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-atc args="-m icc" # build with Intel icc compiler
# print help message
make lib-atc
# build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-atc args="-m serial"
# build with default MPI compiler (settings as with "make mpi")
make lib-atc args="-m mpi"
# build with Intel icc compiler
make lib-atc args="-m icc"
The build should produce two files: ``lib/atc/libatc.a`` and
``lib/atc/Makefile.lammps``. The latter is copied from an
@ -1211,10 +1284,17 @@ The ATC package requires the MANYBODY package also be installed.
.. code-block:: bash
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m g++" # build with GNU Fortran compiler
# print help message
make lib-linalg
# build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m serial"
# build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m mpi"
# build with GNU Fortran compiler
make lib-linalg args="-m g++"
----------
@ -1240,10 +1320,17 @@ AWPMD package
.. code-block:: bash
make lib-awpmd # print help message
make lib-awpmd args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-awpmd args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-awpmd args="-m icc" # build with Intel icc compiler
# print help message
make lib-awpmd
# build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-awpmd args="-m serial"
# build with default MPI compiler (settings as with "make mpi")
make lib-awpmd args="-m mpi"
# build with Intel icc compiler
make lib-awpmd args="-m icc"
The build should produce two files: ``lib/awpmd/libawpmd.a`` and
``lib/awpmd/Makefile.lammps``. The latter is copied from an
@ -1262,10 +1349,17 @@ AWPMD package
.. code-block:: bash
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m g++" # build with GNU C++ compiler
# print help message
make lib-linalg
# build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m serial"
# build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m mpi"
# build with GNU C++ compiler
make lib-linalg args="-m g++"
----------
@ -1298,10 +1392,17 @@ module included in the LAMMPS source distribution.
.. code-block:: bash
make lib-colvars # print help message
make lib-colvars args="-m serial" # build with GNU g++ compiler (settings as with "make serial")
make lib-colvars args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-colvars args="-m g++-debug" # build with GNU g++ compiler and colvars debugging enabled
# print help message
make lib-colvars
# build with GNU g++ compiler (settings as with "make serial")
make lib-colvars args="-m serial"
# build with default MPI compiler (settings as with "make mpi")
make lib-colvars args="-m mpi"
# build with GNU g++ compiler and colvars debugging enabled
make lib-colvars args="-m g++-debug"
The "machine" argument of the "-m" flag is used to find a
``Makefile.machine`` file to use as build recipe. If such recipe does
@ -1320,8 +1421,11 @@ module included in the LAMMPS source distribution.
.. code-block:: bash
COLVARS_DEBUG=yes make lib-colvars args="-m machine" # Build with debug code (much slower)
COLVARS_LEPTON=no make lib-colvars args="-m machine" # Build without Lepton (included otherwise)
# Build with debug code (much slower)
COLVARS_DEBUG=yes make lib-colvars args="-m machine"
# Build without Lepton (included otherwise)
COLVARS_LEPTON=no make lib-colvars args="-m machine"
The build should produce two files: the library
``lib/colvars/libcolvars.a`` and the specification file
@ -1368,9 +1472,14 @@ This package depends on the KSPACE package.
.. code-block:: bash
make lib-electrode # print help message
make lib-electrode args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-electrode args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
# print help message
make lib-electrode
# build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-electrode args="-m serial"
# build with default MPI compiler (settings as with "make mpi")
make lib-electrode args="-m mpi"
Note that the ``Makefile.lammps`` file has settings for the BLAS
@ -1381,10 +1490,17 @@ This package depends on the KSPACE package.
.. code-block:: bash
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m g++" # build with GNU C++ compiler
# print help message
make lib-linalg
# build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m serial"
# build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m mpi"
# build with GNU C++ compiler
make lib-linalg args="-m g++"
The package itself is activated with ``make yes-KSPACE`` and
``make yes-ELECTRODE``
@ -1424,8 +1540,11 @@ at: `https://github.com/ICAMS/lammps-user-pace/ <https://github.com/ICAMS/lammps
.. code-block:: bash
make lib-pace # print help message
make lib-pace args="-b" # download and build the default version in lib/pace
# print help message
make lib-pace
# download and build the default version in lib/pace
make lib-pace args="-b"
You should not need to edit the ``lib/pace/Makefile.lammps`` file.
@ -1452,10 +1571,17 @@ ML-POD package
.. code-block:: bash
make lib-mlpod # print help message
make lib-mlpod args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-mlpod args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-mlpod args="-m mpi -e linalg" # same as above but use the bundled linalg lib
# print help message
make lib-mlpod
# build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-mlpod args="-m serial"
# build with default MPI compiler (settings as with "make mpi")
make lib-mlpod args="-m mpi"
# same as above but use the bundled linalg lib
make lib-mlpod args="-m mpi -e linalg"
Note that the ``Makefile.lammps`` file has settings to use the BLAS
and LAPACK linear algebra libraries. These can either exist on
@ -1465,10 +1591,17 @@ ML-POD package
.. code-block:: bash
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m g++" # build with GNU C++ compiler
# print help message
make lib-linalg
# build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m serial"
# build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m mpi"
# build with GNU C++ compiler
make lib-linalg args="-m g++"
The package itself is activated with ``make yes-ML-POD``.
@ -1491,9 +1624,12 @@ within CMake will download the non-commercial use version.
.. code-block:: bash
-D DOWNLOAD_QUIP=value # download QUIP library for build, value = no (default) or yes
-D QUIP_LIBRARY=path # path to libquip.a (only needed if a custom location)
-D USE_INTERNAL_LINALG=value # Use the internal linear algebra library instead of LAPACK
-D DOWNLOAD_QUIP=value # download QUIP library for build
# value = no (default) or yes
-D QUIP_LIBRARY=path # path to libquip.a
# (only needed if a custom location)
-D USE_INTERNAL_LINALG=value # Use the internal linear algebra library
# instead of LAPACK
# value = no (default) or yes
CMake will try to download and build the QUIP library from GitHub,
@ -1578,17 +1714,20 @@ LAMMPS build.
.. code-block:: bash
-D DOWNLOAD_PLUMED=value # download PLUMED for build, value = no (default) or yes
-D PLUMED_MODE=value # Linkage mode for PLUMED, value = static (default), shared, or runtime
-D DOWNLOAD_PLUMED=value # download PLUMED for build
# value = no (default) or yes
-D PLUMED_MODE=value # Linkage mode for PLUMED
# value = static (default), shared,
# or runtime
If DOWNLOAD_PLUMED is set to "yes", the PLUMED library will be
If ``DOWNLOAD_PLUMED`` is set to ``yes``, the PLUMED library will be
downloaded (the version of PLUMED that will be downloaded is
hard-coded to a vetted version of PLUMED, usually a recent stable
release version) and built inside the CMake build directory. If
``DOWNLOAD_PLUMED`` is set to "no" (the default), CMake will try
to detect and link to an installed version of PLUMED. For this to
work, the PLUMED library has to be installed into a location where
the ``pkg-config`` tool can find it or the PKG_CONFIG_PATH
the ``pkg-config`` tool can find it or the ``PKG_CONFIG_PATH``
environment variable has to be set up accordingly. PLUMED should
be installed in such a location if you compile it using the
default make; make install commands.
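As a sketch, pointing ``pkg-config`` at a PLUMED installation under a
custom prefix (the prefix is an example) before configuring LAMMPS
could look like this:

.. code-block:: bash

   export PKG_CONFIG_PATH=$HOME/.local/lib/pkgconfig:$PKG_CONFIG_PATH
   cmake -D PKG_PLUMED=on -D PLUMED_MODE=shared ../cmake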
@ -1617,14 +1756,21 @@ LAMMPS build.
.. code-block:: bash
make lib-plumed # print help message
make lib-plumed args="-b" # download and build PLUMED in lib/plumed/plumed2
make lib-plumed args="-p $HOME/.local" # use existing PLUMED installation in $HOME/.local
make lib-plumed args="-p /usr/local -m shared" # use existing PLUMED installation in
# /usr/local and use shared linkage mode
# print help message
make lib-plumed
Note that 2 symbolic (soft) links, ``includelink`` and ``liblink``
are created in lib/plumed that point to the location of the PLUMED
# download and build PLUMED in lib/plumed/plumed2
make lib-plumed args="-b"
# use existing PLUMED installation in $HOME/.local
make lib-plumed args="-p $HOME/.local"
# use existing PLUMED installation in /usr/local and
# use shared linkage mode
make lib-plumed args="-p /usr/local -m shared"
Note that two symbolic (soft) links, ``includelink`` and ``liblink``,
are created in ``lib/plumed`` that point to the location of the PLUMED
build to use. A new file ``lib/plumed/Makefile.lammps`` is also
created with settings suitable for LAMMPS to compile and link
PLUMED using the desired linkage mode. After this step is
@ -1639,17 +1785,17 @@ LAMMPS build.
Once this compilation completes you should be able to run LAMMPS
in the usual way. For shared linkage mode, ``libplumed.so`` must be
found by the LAMMPS executable, which on many operating systems
means, you have to set the LD_LIBRARY_PATH environment variable
means you have to set the ``LD_LIBRARY_PATH`` environment variable
accordingly.
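For example (assuming PLUMED was installed under ``$HOME/.local``):

.. code-block:: bash

   # make libplumed.so findable at run time for the shared linkage mode
   export LD_LIBRARY_PATH=$HOME/.local/lib:$LD_LIBRARY_PATH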
Support for the different linkage modes in LAMMPS varies for
different operating systems, using the static linkage is expected
to be the most portable, and thus set to be the default.
If you want to change the linkage mode, you have to re-run "make
lib-plumed" with the desired settings **and** do a re-install if
the PLUMED package with "make yes-plumed" to update the
required makefile settings with the changes in the lib/plumed
If you want to change the linkage mode, you have to re-run ``make
lib-plumed`` with the desired settings **and** do a re-install of
the PLUMED package with ``make yes-plumed`` to update the
required makefile settings with the changes in the ``lib/plumed``
folder.
----------
@ -1723,8 +1869,10 @@ details please see ``lib/hdnnp/README`` and the `n2p2 build documentation
.. code-block:: bash
-D DOWNLOAD_N2P2=value # download n2p2 for build, value = no (default) or yes
-D N2P2_DIR=path # n2p2 base directory (only needed if a custom location)
-D DOWNLOAD_N2P2=value # download n2p2 for build
# value = no (default) or yes
-D N2P2_DIR=path # n2p2 base directory
# (only needed if a custom location)
If ``DOWNLOAD_N2P2`` is set, the *n2p2* library will be downloaded and
built inside the CMake build directory. If the *n2p2* library is already
@ -1741,12 +1889,19 @@ details please see ``lib/hdnnp/README`` and the `n2p2 build documentation
.. code-block:: bash
make lib-hdnnp # print help message
make lib-hdnnp args="-b" # download and build in lib/hdnnp/n2p2-...
make lib-hdnnp args="-b -v 2.1.4" # download and build specific version
make lib-hdnnp args="-p /usr/local/n2p2" # use the existing n2p2 installation in /usr/local/n2p2
# print help message
make lib-hdnnp
Note that 3 symbolic (soft) links, ``includelink``, ``liblink`` and
# download and build in lib/hdnnp/n2p2-...
make lib-hdnnp args="-b"
# download and build specific version
make lib-hdnnp args="-b -v 2.1.4"
# use the existing n2p2 installation in /usr/local/n2p2
make lib-hdnnp args="-p /usr/local/n2p2"
Note that three symbolic (soft) links, ``includelink``, ``liblink`` and
``Makefile.lammps``, will be created in ``lib/hdnnp`` to point to
``n2p2/include``, ``n2p2/lib`` and ``n2p2/lib/Makefile.lammps-extra``,
respectively. When LAMMPS is built in ``src`` it will use these links.
@ -1834,7 +1989,8 @@ MDI package
.. code-block:: bash
-D DOWNLOAD_MDI=value # download MDI Library for build, value = no (default) or yes
-D DOWNLOAD_MDI=value # download MDI Library for build
# value = no (default) or yes
.. tab:: Traditional make
@ -1863,7 +2019,8 @@ MOLFILE package
.. code-block:: bash
-D MOLFILE_INCLUDE_DIR=path # (optional) path where VMD molfile plugin headers are installed
-D MOLFILE_INCLUDE_DIR=path # (optional) path where VMD molfile
# plugin headers are installed
-D PKG_MOLFILE=yes
Using ``-D PKG_MOLFILE=yes`` enables the package, and setting
@ -2022,10 +2179,17 @@ verified to work in February 2020 with Quantum Espresso versions 6.3 to
.. code-block:: bash
make lib-qmmm # print help message
make lib-qmmm args="-m serial" # build with GNU Fortran compiler (settings as in "make serial")
make lib-qmmm args="-m mpi" # build with default MPI compiler (settings as in "make mpi")
make lib-qmmm args="-m gfortran" # build with GNU Fortran compiler
# print help message
make lib-qmmm
# build with GNU Fortran compiler (settings as in "make serial")
make lib-qmmm args="-m serial"
# build with default MPI compiler (settings as in "make mpi")
make lib-qmmm args="-m mpi"
# build with GNU Fortran compiler
make lib-qmmm args="-m gfortran"
The build should produce two files: ``lib/qmmm/libqmmm.a`` and
``lib/qmmm/Makefile.lammps``. The latter is copied from an
@ -2038,10 +2202,10 @@ verified to work in February 2020 with Quantum Espresso versions 6.3 to
You can then install QMMM package and build LAMMPS in the usual
manner. After completing the LAMMPS build and compiling Quantum
ESPRESSO with external library support (via "make couple"), go
ESPRESSO with external library support (via ``make couple``), go
back to the ``lib/qmmm`` folder and follow the instructions in the
README file to build the combined LAMMPS/QE QM/MM executable
(pwqmmm.x) in the lib/qmmm folder.
(``pwqmmm.x``) in the ``lib/qmmm`` folder.
----------
@ -2111,11 +2275,16 @@ To build with this package, you must download and build the
.. code-block:: bash
make lib-scafacos # print help message
make lib-scafacos args="-b" # download and build in lib/scafacos/scafacos-<version>
make lib-scafacos args="-p $HOME/scafacos # use existing ScaFaCoS installation in $HOME/scafacos
# print help message
make lib-scafacos
Note that 2 symbolic (soft) links, ``includelink`` and ``liblink``, are
# download and build in lib/scafacos/scafacos-<version>
make lib-scafacos args="-b"
# use existing ScaFaCoS installation in $HOME/scafacos
make lib-scafacos args="-p $HOME/scafacos
Note that two symbolic (soft) links, ``includelink`` and ``liblink``, are
created in ``lib/scafacos`` to point to the ScaFaCoS src dir. When LAMMPS
builds in ``src`` it will use these links. You should not need to edit
the ``lib/scafacos/Makefile.lammps`` file.


@ -37,7 +37,7 @@ executable code from the library is copied into the calling executable.
.. tab:: CMake build
This assumes that LAMMPS has been configured without setting a
``LAMMPS_MACHINE`` name, installed with "make install", and the
``LAMMPS_MACHINE`` name, installed with ``make install``, and the
``PKG_CONFIG_PATH`` environment variable has been updated to
include the ``liblammps.pc`` file installed into the configured
destination folder. The commands to compile and link a coupled
@ -59,10 +59,10 @@ executable code from the library is copied into the calling executable.
mpicc -c -O -I${HOME}/lammps/src caller.c
mpicxx -o caller caller.o -L${HOME}/lammps/src -llammps_mpi
The *-I* argument is the path to the location of the ``library.h``
The ``-I`` argument is the path to the location of the ``library.h``
header file containing the interface to the LAMMPS C-style library
interface. The *-L* argument is the path to where the
``liblammps_mpi.a`` file is located. The *-llammps_mpi* argument
interface. The ``-L`` argument is the path to where the
``liblammps_mpi.a`` file is located. The ``-llammps_mpi`` argument
is shorthand for telling the compiler to link the file
``liblammps_mpi.a``. If LAMMPS has been built as a shared
library, then the linker will use ``liblammps_mpi.so`` instead.
@ -142,7 +142,7 @@ When linking to LAMMPS built as a shared library, the situation becomes
much simpler, as all dependent libraries and objects are either included
in the shared library or registered as a dependent library in the shared
library file. Thus, those libraries need not be specified when linking
the calling executable. Only the *-I* flags are needed. So the example
the calling executable. Only the ``-I`` flags are needed. So the example
case from above of the serial version static LAMMPS library with the
POEMS package installed becomes:


@ -25,7 +25,7 @@ additional tools to be available and functioning.
require adding flags like ``-std=c++11`` to enable the C++11 mode.
* A Bourne shell compatible "Unix" shell program (frequently this is ``bash``)
* A few shell utilities: ``ls``, ``mv``, ``ln``, ``rm``, ``grep``, ``sed``, ``tr``, ``cat``, ``touch``, ``diff``, ``dirname``
* Python (optional, required for ``make lib-<pkg>`` in the src
* Python (optional, required for ``make lib-<pkg>`` in the ``src``
folder). Python scripts are currently tested with python 2.7 and
3.6 to 3.11. The procedure for :doc:`building the documentation
<Build_manual>` *requires* Python 3.5 or later.


@ -172,18 +172,41 @@ make a copy of one of them and modify it to suit your needs.
.. code-block:: bash
cmake -C ../cmake/presets/basic.cmake [OPTIONS] ../cmake # enable just a few core packages
cmake -C ../cmake/presets/most.cmake [OPTIONS] ../cmake # enable most packages
cmake -C ../cmake/presets/download.cmake [OPTIONS] ../cmake # enable packages which download sources or potential files
cmake -C ../cmake/presets/nolib.cmake [OPTIONS] ../cmake # disable packages that do require extra libraries or tools
cmake -C ../cmake/presets/clang.cmake [OPTIONS] ../cmake # change settings to use the Clang compilers by default
cmake -C ../cmake/presets/gcc.cmake [OPTIONS] ../cmake # change settings to use the GNU compilers by default
cmake -C ../cmake/presets/intel.cmake [OPTIONS] ../cmake # change settings to use the Intel compilers by default
cmake -C ../cmake/presets/pgi.cmake [OPTIONS] ../cmake # change settings to use the PGI compilers by default
cmake -C ../cmake/presets/all_on.cmake [OPTIONS] ../cmake # enable all packages
cmake -C ../cmake/presets/all_off.cmake [OPTIONS] ../cmake # disable all packages
mingw64-cmake -C ../cmake/presets/mingw-cross.cmake [OPTIONS] ../cmake # compile with MinGW cross-compilers
cmake -C ../cmake/presets/macos-multiarch.cmake [OPTIONS] ../cmake # compile serial multi-arch binaries on macOS
# enable just a few core packages
cmake -C ../cmake/presets/basic.cmake [OPTIONS] ../cmake
# enable most packages
cmake -C ../cmake/presets/most.cmake [OPTIONS] ../cmake
# enable packages which download sources or potential files
cmake -C ../cmake/presets/download.cmake [OPTIONS] ../cmake
# disable packages that do require extra libraries or tools
cmake -C ../cmake/presets/nolib.cmake [OPTIONS] ../cmake
# change settings to use the Clang compilers by default
cmake -C ../cmake/presets/clang.cmake [OPTIONS] ../cmake
# change settings to use the GNU compilers by default
cmake -C ../cmake/presets/gcc.cmake [OPTIONS] ../cmake
# change settings to use the Intel compilers by default
cmake -C ../cmake/presets/intel.cmake [OPTIONS] ../cmake
# change settings to use the PGI compilers by default
cmake -C ../cmake/presets/pgi.cmake [OPTIONS] ../cmake
# enable all packages
cmake -C ../cmake/presets/all_on.cmake [OPTIONS] ../cmake
# disable all packages
cmake -C ../cmake/presets/all_off.cmake [OPTIONS] ../cmake
# compile with MinGW cross-compilers
mingw64-cmake -C ../cmake/presets/mingw-cross.cmake [OPTIONS] ../cmake
# compile serial multi-arch binaries on macOS
cmake -C ../cmake/presets/macos-multiarch.cmake [OPTIONS] ../cmake
Presets that have names starting with "windows" are specifically for
compiling LAMMPS :doc:`natively on Windows <Build_windows>` and
@ -209,7 +232,8 @@ Example
# GPU package and configure it for using CUDA. You can run.
mkdir build
cd build
cmake -C ../cmake/presets/most.cmake -C ../cmake/presets/nolib.cmake -D PKG_GPU=on -D GPU_API=cuda ../cmake
cmake -C ../cmake/presets/most.cmake -C ../cmake/presets/nolib.cmake \
-D PKG_GPU=on -D GPU_API=cuda ../cmake
# to add another package, say BODY to the previous configuration you can run:
cmake -D PKG_BODY=on .


@ -1,3 +1,7 @@
.. raw:: latex
\clearpage
Optional build settings
=======================
@ -54,7 +58,7 @@ LAMMPS can use them if they are available on your system.
Alternatively, LAMMPS can use the `heFFTe
<https://icl-utk-edu.github.io/heffte/>`_ library for the MPI
communication algorithms, which comes with many optimizations for
special cases, e.g. leveraging available 2D and 3D FFTs in the back end
special cases, e.g. leveraging available 2D and 3D FFTs in the backend
libraries and better pipelining for packing and communication.
.. tabs::
@ -63,8 +67,10 @@ libraries and better pipelining for packing and communication.
.. code-block:: bash
-D FFT=value # FFTW3 or MKL or KISS, default is FFTW3 if found, else KISS
-D FFT_KOKKOS=value # FFTW3 or MKL or KISS or CUFFT or HIPFFT, default is KISS
-D FFT=value # FFTW3 or MKL or KISS, default is FFTW3 if found,
# else KISS
-D FFT_KOKKOS=value # FFTW3 or MKL or KISS or CUFFT or HIPFFT,
# default is KISS
-D FFT_SINGLE=value # yes or no (default), no = double precision
-D FFT_PACK=value # array (default) or pointer or memcpy
-D FFT_USE_HEFFTE=value # yes or no (default), yes links to heFFTe
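As an illustration, enabling heFFTe on top of FFTW3 combines the
settings above like this:

.. code-block:: bash

   cmake -D FFT=FFTW3 -D FFT_USE_HEFFTE=yes ../cmake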
@ -72,11 +78,11 @@ libraries and better pipelining for packing and communication.
.. note::
When the Kokkos variant of a package is compiled and selected at run time,
the FFT library selected by the FFT_KOKKOS variable applies. Otherwise,
the FFT library selected by the ``FFT_KOKKOS`` variable applies. Otherwise,
the FFT library selected by the ``FFT`` variable applies.
The same FFT settings apply to both. FFT_KOKKOS must be compatible with the
Kokkos back end - for example, when using the CUDA back end of Kokkos,
you must use either CUFFT or KISS.
The same FFT settings apply to both. ``FFT_KOKKOS`` must be compatible with the
Kokkos backend - for example, when using the CUDA backend of Kokkos,
you must use either ``CUFFT`` or ``KISS``.
Usually these settings are all that is needed. If FFTW3 is
selected, then CMake will try to detect whether threaded FFTW
@ -94,12 +100,13 @@ libraries and better pipelining for packing and communication.
-D MKL_INCLUDE_DIR=path # ditto for Intel MKL library
-D FFT_MKL_THREADS=on # enable using threaded FFTs with MKL libraries
-D MKL_LIBRARY=path # path to MKL libraries
-D FFT_HEFFTE_BACKEND=value # FFTW or MKL or empty/undefined for the stock heFFTe back end
-D FFT_HEFFTE_BACKEND=value # FFTW or MKL or empty/undefined for the stock
# heFFTe backend
-D Heffte_ROOT=path # path to an existing heFFTe installation
.. note::
heFFTe comes with a builtin (= stock) back end for FFTs, i.e. a
heFFTe comes with a builtin (= stock) backend for FFTs, i.e. a
default internal FFT implementation; however, this stock backend
is intended for testing purposes only and is not optimized
for production runs.
@ -113,10 +120,10 @@ libraries and better pipelining for packing and communication.
.. code-block:: make
FFT_INC = -DFFT_FFTW3 # -DFFT_FFTW3, -DFFT_FFTW (same as -DFFT_FFTW3), -DFFT_MKL, or -DFFT_KISS
# default is KISS if not specified
FFT_INC = -DFFT_KOKKOS_CUFFT # -DFFT_KOKKOS_{FFTW,FFTW3,MKL,CUFFT,HIPFFT,KISS}
# default is KISS if not specified
FFT_INC = -DFFT_<NAME> # where <NAME> is KISS (default), FFTW3,
# FFTW (same as FFTW3), or MKL
FFT_INC = -DFFT_KOKKOS_<NAME> # where <NAME> is KISS (default), FFTW3,
# FFTW (same as FFTW3), MKL, CUFFT, or HIPFFT
FFT_INC = -DFFT_SINGLE # do not specify for double precision
FFT_INC = -DFFT_FFTW_THREADS # enable using threaded FFTW3 libraries
FFT_INC = -DFFT_MKL_THREADS # enable using threaded FFTs with MKL libraries
@ -127,16 +134,36 @@ libraries and better pipelining for packing and communication.
FFT_INC = -I/usr/local/include
FFT_PATH = -L/usr/local/lib
FFT_LIB = -lhipfft # hipFFT either precision
FFT_LIB = -lcufft # cuFFT either precision
FFT_LIB = -lfftw3 # FFTW3 double precision
FFT_LIB = -lfftw3 -lfftw3_omp # FFTW3 double precision with threads (needs -DFFT_FFTW_THREADS)
FFT_LIB = -lfftw3 -lfftw3f # FFTW3 single precision
FFT_LIB = -lmkl_intel_lp64 -lmkl_sequential -lmkl_core # MKL with Intel compiler, serial interface
FFT_LIB = -lmkl_gf_lp64 -lmkl_sequential -lmkl_core # MKL with GNU compiler, serial interface
FFT_LIB = -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core # MKL with Intel compiler, threaded interface
FFT_LIB = -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core # MKL with GNU compiler, threaded interface
FFT_LIB = -lmkl_rt # MKL with automatic runtime selection of interface libs
# hipFFT either precision
FFT_LIB = -lhipfft
# cuFFT either precision
FFT_LIB = -lcufft
# FFTW3 double precision
FFT_LIB = -lfftw3
# FFTW3 double precision with threads (needs -DFFT_FFTW_THREADS)
FFT_LIB = -lfftw3 -lfftw3_omp
# FFTW3 single precision
FFT_LIB = -lfftw3 -lfftw3f
# serial MKL with Intel compiler
FFT_LIB = -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
# serial MKL with GNU compiler
FFT_LIB = -lmkl_gf_lp64 -lmkl_sequential -lmkl_core
# threaded MKL with Intel compiler
FFT_LIB = -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core
# threaded MKL with GNU compiler
FFT_LIB = -lmkl_gf_lp64 -lmkl_gnu_thread -lmkl_core
# MKL with automatic runtime selection of interface libs
FFT_LIB = -lmkl_rt
As with CMake, you do not need to set paths in ``FFT_INC`` or
``FFT_PATH``, if the compiler can find the FFT header and library
@ -152,13 +179,13 @@ libraries and better pipelining for packing and communication.
FFT_PATH =
FFT_LIB = $(heffte_link) $(heffte_libs)
The heFFTe install path will contain `HeffteMakefile.in`.
which will define the `heffte_` include variables needed to link to heFFTe from
The heFFTe install path will contain ``HeffteMakefile.in``, which
defines the ``heffte_`` include variables needed to link to heFFTe from
an external project using traditional make.
The `-DFFT_HEFFTE` is required to switch to using heFFTe, while the optional `-DFFT_HEFFTE_FFTW`
selects the desired heFFTe back end, e.g., `-DFFT_HEFFTE_FFTW` or `-DFFT_HEFFTE_MKL`,
omitting the variable will default to the `stock` back end.
The heFFTe `stock` back end is intended to be used for testing and debugging,
The ``-DFFT_HEFFTE`` define is required to switch to using heFFTe, while an optional
define selects the desired heFFTe backend, e.g., ``-DFFT_HEFFTE_FFTW`` or ``-DFFT_HEFFTE_MKL``;
omitting it will default to the `stock` backend.
The heFFTe `stock` backend is intended to be used for testing and debugging,
but is not performance optimized for large scale production runs.
The `KISS FFT library <https://github.com/mborgerding/kissfft>`_ is
@ -184,7 +211,7 @@ it from `www.fftw.org <https://www.fftw.org>`_. LAMMPS requires version
Building FFTW for your box should be as simple as ``./configure; make;
make install``. The install command typically requires root privileges
(e.g. invoke it via sudo), unless you specify a local directory with
the "--prefix" option of configure. Type ``./configure --help`` to see
the ``--prefix`` option of configure. Type ``./configure --help`` to see
various options.
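
A hedged sketch of such a local installation and how to make it
discoverable by the LAMMPS CMake configuration is shown below; the
FFTW version number and the installation prefix are placeholders:

.. code-block:: bash

   # build and install FFTW3 into $HOME/.local (no root privileges needed)
   tar -xzvf fftw-3.3.10.tar.gz
   cd fftw-3.3.10
   ./configure --prefix=$HOME/.local --enable-shared
   make
   make install

   # let CMake find this installation when configuring LAMMPS
   cmake -D CMAKE_PREFIX_PATH=$HOME/.local -D FFT=FFTW3 ../cmake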
The Intel MKL math library is part of the Intel compiler suite. It
@ -193,7 +220,7 @@ above).
The cuFFT and hipFFT FFT libraries are packaged with NVIDIA's CUDA and
AMD's HIP installations, respectively. These FFT libraries require the
Kokkos acceleration package to be enabled and the Kokkos back end to be
Kokkos acceleration package to be enabled and the Kokkos backend to be
GPU-resident (i.e., HIP or CUDA).
Performing 3d FFTs in parallel can be time-consuming due to data access
@ -220,7 +247,7 @@ produce the additional libraries ``libfftw3f.a`` and/or ``libfftw3f.so``\ .
Performing 3d FFTs requires communication to transpose the 3d FFT
grid. The data packing/unpacking for this can be done in one of 3
modes (ARRAY, POINTER, MEMCPY) as set by the FFT_PACK syntax above.
modes (ARRAY, POINTER, MEMCPY) as set by the ``FFT_PACK`` syntax above.
Depending on the machine, the size of the FFT grid, and the number of
processors used, one option may be slightly faster. The default is
ARRAY mode.
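
If benchmarking suggests that a different packing mode is beneficial
for a given system, it can be selected explicitly.  This is merely an
illustration of the option documented above:

.. code-block:: bash

   # select the POINTER packing mode instead of the ARRAY default
   cmake -D FFT_PACK=pointer ../cmake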
@ -228,7 +255,7 @@ ARRAY mode.
When using ``-DFFT_HEFFTE`` CMake will first look for an existing
install with hints provided by ``-DHeffte_ROOT``, as recommended by the
CMake standard; note that the name is case sensitive. If CMake cannot
find a heFFTe installation with the correct back end (e.g., FFTW or
find a heFFTe installation with the correct backend (e.g., FFTW or
MKL), it will attempt to download and build the library automatically.
In this case, LAMMPS CMake will also accept all heFFTe specific
variables listed in the `heFFTe documentation
@ -237,6 +264,10 @@ and those variables will be passed into the heFFTe build.
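
Putting the heFFTe related options together, a possible configuration
using the FFTW backend and an existing heFFTe installation might look
as follows; the installation path is a placeholder and needs to be
adjusted:

.. code-block:: bash

   # use heFFTe with the FFTW backend; adjust Heffte_ROOT as needed
   cmake -D FFT=FFTW3 -D FFT_USE_HEFFTE=yes \
         -D FFT_HEFFTE_BACKEND=FFTW \
         -D Heffte_ROOT=$HOME/.local ../cmake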
----------
.. raw:: latex
\clearpage
.. _size:
Size of LAMMPS integer types and size limits
@ -363,7 +394,8 @@ requires the following settings:
-D WITH_JPEG=value # yes or no
# default = yes if CMake finds JPEG development files, else no
-D WITH_PNG=value # yes or no
# default = yes if CMake finds PNG and ZLIB development files, else no
# default = yes if CMake finds PNG and ZLIB development files,
# else no
-D WITH_FFMPEG=value # yes or no
# default = yes if CMake can find ffmpeg, else no
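
For example, to request all three output features explicitly rather
than relying on auto-detection, one could configure as sketched below:

.. code-block:: bash

   # explicitly enable JPEG, PNG, and FFMPEG support
   cmake -D WITH_JPEG=yes -D WITH_PNG=yes -D WITH_FFMPEG=yes ../cmake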
@ -387,8 +419,10 @@ requires the following settings:
LMP_INC = -DLAMMPS_JPEG -DLAMMPS_PNG -DLAMMPS_FFMPEG <other LMP_INC settings>
JPG_INC = -I/usr/local/include # path to jpeglib.h, png.h, zlib.h header files if make cannot find them
JPG_PATH = -L/usr/lib # paths to libjpeg.a, libpng.a, libz.a (.so) files if make cannot find them
JPG_INC = -I/usr/local/include # path to jpeglib.h, png.h, zlib.h headers
# if make cannot find them
JPG_PATH = -L/usr/lib # paths to libjpeg.a, libpng.a, libz.a (.so)
# files if make cannot find them
JPG_LIB = -ljpeg -lpng -lz # library names
As with CMake, you do not need to set ``JPG_INC`` or ``JPG_PATH``,
@ -429,7 +463,7 @@ including :doc:`read_data <read_data>`, :doc:`rerun <rerun>`, and
.. code-block:: bash
-D WITH_GZIP=value # yes or no
# default is yes if CMake can find the gzip program, else no
# default is yes if CMake can find the gzip program
.. tab:: Traditional make
@ -504,11 +538,11 @@ LAMMPS is compiled accordingly which needs the following settings:
Memory allocation alignment
---------------------------
This setting enables the use of the "posix_memalign()" call instead of
"malloc()" when LAMMPS allocates large chunks of memory. Vector
This setting enables the use of the ``posix_memalign()`` call instead of
``malloc()`` when LAMMPS allocates large chunks of memory. Vector
instructions on CPUs may become more efficient, if dynamically allocated
memory is aligned on larger-than-default byte boundaries. On most
current operating systems, the "malloc()" implementation returns
current operating systems, the ``malloc()`` implementation returns
pointers that are aligned to 16-byte boundaries. Using SSE vector
instructions efficiently, however, requires memory blocks being aligned
on 64-byte boundaries.
@ -522,9 +556,9 @@ on 64-byte boundaries.
-D LAMMPS_MEMALIGN=value # 0, 8, 16, 32, 64 (default)
Use a ``LAMMPS_MEMALIGN`` value of 0 to disable using
"posix_memalign()" and revert to using the "malloc()" C-library
``posix_memalign()`` and revert to using the ``malloc()`` C-library
function instead. When compiling LAMMPS for Windows systems,
"malloc()" will always be used and this setting is ignored.
``malloc()`` will always be used and this setting is ignored.
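
Both invocations below are only illustrations of the option documented
above: the first keeps the 64-byte default explicit, the second turns
the feature off entirely.

.. code-block:: bash

   # request 64-byte aligned allocations (the default)
   cmake -D LAMMPS_MEMALIGN=64 ../cmake

   # disable posix_memalign() and fall back to plain malloc()
   cmake -D LAMMPS_MEMALIGN=0 ../cmake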
.. tab:: Traditional make
@ -533,7 +567,7 @@ on 64-byte boundaries.
LMP_INC = -DLAMMPS_MEMALIGN=value # 8, 16, 32, 64
Do not set ``-DLAMMPS_MEMALIGN``, if you want to have memory
allocated with the "malloc()" function call
allocated with the ``malloc()`` function call
instead. ``-DLAMMPS_MEMALIGN`` **cannot** be used on Windows, as
Windows uses different function calls with different semantics for
allocating aligned memory that are not compatible with how LAMMPS

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -15,12 +14,14 @@
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
.. raw:: latex
\clearpage
General commands
================
An alphabetic list of general LAMMPS commands. Note that style
commands with many variants, can be more easily accessed via the small
table above.
An alphabetic list of general LAMMPS commands.
.. table_from_list::
:columns: 6

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -17,8 +16,8 @@
.. _bond:
Bond_style potentials
=====================
Bond styles
===========
All LAMMPS :doc:`bond_style <bond_style>` commands. Some styles have
accelerated versions. This is indicated by additional letters in
@ -65,8 +64,8 @@ OPT.
.. _angle:
Angle_style potentials
======================
Angle styles
============
All LAMMPS :doc:`angle_style <angle_style>` commands. Some styles have
accelerated versions. This is indicated by additional letters in
@ -113,8 +112,8 @@ OPT.
.. _dihedral:
Dihedral_style potentials
=========================
Dihedral styles
===============
All LAMMPS :doc:`dihedral_style <dihedral_style>` commands. Some styles
have accelerated versions. This is indicated by additional letters in
@ -153,8 +152,8 @@ OPT.
.. _improper:
Improper_style potentials
=========================
Improper styles
===============
All LAMMPS :doc:`improper_style <improper_style>` commands. Some styles
have accelerated versions. This is indicated by additional letters in

View File

@ -1,3 +1,7 @@
.. raw:: latex
\clearpage
Commands by category
====================
@ -6,8 +10,8 @@ This page lists most of the LAMMPS commands, grouped by category. The
alphabetically. Style options for entries like fix, compute, pair etc.
have their own pages where they are listed alphabetically.
Initialization:
------------------------------
Initialization
--------------
.. table_from_list::
:columns: 5
@ -18,8 +22,8 @@ Initialization:
* :doc:`suffix <suffix>`
* :doc:`units <units>`
Setup simulation box:
------------------------------
Setup simulation box
--------------------
.. table_from_list::
:columns: 4
@ -31,8 +35,8 @@ Setup simulation box:
* :doc:`lattice <lattice>`
* :doc:`region <region>`
Setup atoms:
------------------------------
Setup atoms
-----------
.. table_from_list::
:columns: 4
@ -55,8 +59,8 @@ Setup atoms:
* :doc:`set <set>`
* :doc:`velocity <velocity>`
Force fields:
------------------------------
Force fields
------------
.. table_from_list::
:columns: 4
@ -79,8 +83,8 @@ Force fields:
* :doc:`pair_write <pair_write>`
* :doc:`special_bonds <special_bonds>`
Settings:
------------------------------
Settings
--------
.. table_from_list::
:columns: 4
@ -98,8 +102,8 @@ Settings:
* :doc:`timer <timer>`
* :doc:`timestep <timestep>`
Operations within timestepping (fixes) and diagnostics (computes):
------------------------------------------------------------------------------------------
Operations within timestepping (fixes) and diagnostics (computes)
-----------------------------------------------------------------
.. table_from_list::
:columns: 4
@ -111,8 +115,8 @@ Operations within timestepping (fixes) and diagnostics (computes):
* :doc:`uncompute <uncompute>`
* :doc:`unfix <unfix>`
Output:
------------------------------
Output
------
.. table_from_list::
:columns: 4
@ -131,8 +135,8 @@ Output:
* :doc:`write_dump <write_dump>`
* :doc:`write_restart <write_restart>`
Actions:
------------------------------
Actions
-------
.. table_from_list::
:columns: 6
@ -146,8 +150,8 @@ Actions:
* :doc:`tad <tad>`
* :doc:`temper <temper>`
Input script control:
------------------------------
Input script control
--------------------
.. table_from_list::
:columns: 7

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -15,8 +14,8 @@
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Compute commands
================
Compute styles
==============
An alphabetic list of all LAMMPS :doc:`compute <compute>` commands.
Some styles have accelerated versions. This is indicated by

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -15,8 +14,8 @@
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Dump commands
=============
Dump styles
===========
An alphabetic list of all LAMMPS :doc:`dump <dump>` commands.

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -15,8 +14,8 @@
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Fix commands
============
Fix styles
==========
An alphabetic list of all LAMMPS :doc:`fix <fix>` commands. Some styles
have accelerated versions. This is indicated by additional letters in

View File

@ -10,14 +10,14 @@ for any commands that may be processed later. Commands may set an
internal variable, read in a file, or run a simulation. These actions
can be grouped into three categories:
a) commands that change a global setting (examples: timestep, newton,
echo, log, thermo, restart),
a) commands that change a global setting (examples: :doc:`timestep <timestep>`, :doc:`newton <newton>`,
:doc:`echo <echo>`, :doc:`log <log>`, :doc:`thermo <thermo>`, :doc:`restart <restart>`),
b) commands that add, modify, remove, or replace "styles" that are
executed during a "run" (examples: pair_style, fix, compute, dump,
thermo_style, pair_modify), and
executed during a "run" (examples: :doc:`pair_style <pair_style>`, :doc:`fix <fix>`, :doc:`compute <compute>`, :doc:`dump <dump>`,
:doc:`thermo_style <thermo_style>`, :doc:`pair_modify <pair_modify>`), and
c) commands that execute a "run" or perform some other computation or
operation (examples: print, run, minimize, temper, write_dump, rerun,
read_data, read_restart)
operation (examples: :doc:`print <print>`, :doc:`run <run>`, :doc:`minimize <minimize>`, :doc:`temper <temper>`, :doc:`write_dump <write_dump>`, :doc:`rerun <rerun>`,
:doc:`read_data <read_data>`, :doc:`read_restart <read_restart>`)
Commands in category a) have default settings, which means you only
need to use the command if you wish to change the defaults.
@ -61,7 +61,7 @@ between commands in the c) category. The following rules apply:
<read_data>` command initializes the system by setting up the
simulation box and assigning atoms to processors. If default values
are not desired, the :doc:`processors <processors>` and
:doc:`boundary <boundary>` commands need to be used before read_data
:doc:`boundary <boundary>` commands need to be used before ``read_data``
to tell LAMMPS how to map processors to the simulation box.
Many input script errors are detected by LAMMPS and an ERROR or
@ -70,6 +70,6 @@ more information on what errors mean. The documentation for each
command lists restrictions on how the command can be used.
You can use the :ref:`-skiprun <skiprun>` command line flag
to have LAMMPS skip the execution of any "run", "minimize", or similar
to have LAMMPS skip the execution of any ``run``, ``minimize``, or similar
commands to check the entire input for correct syntax to avoid crashes
on typos or syntax errors in long runs.
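
As a quick illustration (the executable and input file names are
placeholders), such a syntax-only check could be invoked like this:

.. code-block:: bash

   # parse in.polymer completely, but skip any run/minimize commands
   lmp -in in.polymer -skiprun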

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -15,8 +14,8 @@
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
KSpace solvers
==============
KSpace styles
=============
All LAMMPS :doc:`kspace_style <kspace_style>` solvers. Some styles have
accelerated versions. This is indicated by additional letters in

View File

@ -1,4 +1,3 @@
.. only:: html
.. table_from_list::
@ -15,8 +14,8 @@
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Pair_style potentials
======================
Pair styles
===========
All LAMMPS :doc:`pair_style <pair_style>` commands. Some styles have
accelerated versions. This is indicated by additional letters in
@ -38,10 +37,6 @@ OPT.
*
*
*
*
*
*
*
* :doc:`adp (ko) <pair_adp>`
* :doc:`agni (o) <pair_agni>`
* :doc:`aip/water/2dm (t) <pair_aip_water_2dm>`

View File

@ -42,8 +42,8 @@ LAMMPS:
If the $ is followed by text in curly brackets '{}', then the
variable name is the text inside the curly brackets. If no curly
brackets follow the $, then the variable name is the single character
immediately following the $. Thus ${myTemp} and $x refer to variables
named "myTemp" and "x", while "$xx" will be interpreted as a variable
immediately following the $. Thus ``${myTemp}`` and ``$x`` refer to variables
named "myTemp" and "x", while ``$xx`` will be interpreted as a variable
named "x" followed by an "x" character.
How the variable is converted to a text string depends on what style
@ -79,10 +79,10 @@ LAMMPS:
Additionally, the entire "immediate" variable expression may be
followed by a colon, followed by a C-style format string,
e.g. ":%f" or ":%.10g". The format string must be appropriate for
e.g. ``:%f`` or ``:%.10g``. The format string must be appropriate for
a double-precision floating-point value. The format string is used
to output the result of the variable expression evaluation. If a
format string is not specified, a high-precision "%.20g" is used as
format string is not specified, a high-precision ``%.20g`` is used as
the default format.
This can be useful for formatting print output to a desired precision:
@ -101,8 +101,8 @@ LAMMPS:
variable b2 equal 4
print "B2 = ${b$a}"
Nor can you specify an expression like "$($x-1.0)" for an immediate
variable, but you could use $(v_x-1.0), since the latter is valid
Nor can you specify an expression like ``$($x-1.0)`` for an immediate
variable, but you could use ``$(v_x-1.0)``, since the latter is valid
syntax for an :doc:`equal-style variable <variable>`.
See the :doc:`variable <variable>` command for more details of how

View File

@ -73,7 +73,7 @@ with additional switching or shifting functions that ramp the energy
and/or force smoothly to zero between an inner :math:`(a)` and outer
:math:`(b)` cutoff. The older styles with *charmm* (not *charmmfsw* or
*charmmfsh*\ ) in their name compute the LJ and Coulombic interactions
with an energy switching function (esw) S(r) which ramps the energy
with an energy switching function (esw) :math:`S(r)` which ramps the energy
smoothly to zero between the inner and outer cutoff. This can cause
irregularities in pairwise forces (due to the discontinuous second
derivative of energy at the boundaries of the switching region), which

View File

@ -36,7 +36,7 @@ the context of your application.
steps, invoke the command, etc.
In this scenario, the other code can be called as a library, as in
1., or it could be a stand-alone code, invoked by a system() call
1., or it could be a stand-alone code, invoked by a ``system()`` call
made by the command (assuming your parallel machine allows one or
more processors to start up another program). In the latter case the
stand-alone code could communicate with LAMMPS through files that the

View File

@ -1,8 +1,8 @@
Calculate diffusion coefficients
================================
The diffusion coefficient D of a material can be measured in at least
2 ways using various options in LAMMPS. See the examples/DIFFUSE
The diffusion coefficient :math:`D` of a material can be measured in at least
2 ways using various options in LAMMPS. See the ``examples/DIFFUSE``
directory for scripts that implement the 2 methods discussed here for
a simple Lennard-Jones fluid model.
@ -12,7 +12,7 @@ of the MSD versus time is proportional to the diffusion coefficient.
The instantaneous MSD values can be accumulated in a vector via the
:doc:`fix vector <fix_vector>` command, and a line fit to the vector to
compute its slope via the :doc:`variable slope <variable>` function, and
thus extract D.
thus extract :math:`D`.
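
For reference, the slope is related to :math:`D` through the Einstein
relation, written here for a :math:`d`-dimensional system
(:math:`d = 3` in the usual 3d case):

.. math::

   \lim_{t \to \infty} \left\langle \left| \mathbf{r}(t) - \mathbf{r}(0) \right|^2 \right\rangle = 2 d D t

so in 3d the diffusion coefficient is one sixth of the long-time slope
of the MSD versus time.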
The second method is to measure the velocity auto-correlation function
(VACF) of the system, via the :doc:`compute vacf <compute_vacf>`
@ -20,4 +20,4 @@ command. The time-integral of the VACF is proportional to the
diffusion coefficient. The instantaneous VACF values can be
accumulated in a vector via the :doc:`fix vector <fix_vector>` command,
and time integrated via the :doc:`variable trap <variable>` function,
and thus extract D.
and thus extract :math:`D`.
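
The Green-Kubo expression underlying this second method is the time
integral of the VACF, again with :math:`d` the dimensionality:

.. math::

   D = \frac{1}{d} \int_0^\infty \left\langle \mathbf{v}(0) \cdot \mathbf{v}(t) \right\rangle \, dt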

View File

@ -4,21 +4,27 @@ Calculate elastic constants
Elastic constants characterize the stiffness of a material. The formal
definition is provided by the linear relation that holds between the
stress and strain tensors in the limit of infinitesimal deformation.
In tensor notation, this is expressed as s_ij = C_ijkl \* e_kl, where
the repeated indices imply summation. s_ij are the elements of the
symmetric stress tensor. e_kl are the elements of the symmetric strain
tensor. C_ijkl are the elements of the fourth rank tensor of elastic
constants. In three dimensions, this tensor has 3\^4=81 elements. Using
Voigt notation, the tensor can be written as a 6x6 matrix, where C_ij
is now the derivative of s_i w.r.t. e_j. Because s_i is itself a
derivative w.r.t. e_i, it follows that C_ij is also symmetric, with at
most 7\*6/2 = 21 distinct elements.
In tensor notation, this is expressed as
.. math::
s_{ij} = C_{ijkl} e_{kl}
where
the repeated indices imply summation. :math:`s_{ij}` are the elements of the
symmetric stress tensor. :math:`e_{kl}` are the elements of the symmetric strain
tensor. :math:`C_{ijkl}` are the elements of the fourth rank tensor of elastic
constants. In three dimensions, this tensor has :math:`3^4=81` elements. Using
Voigt notation, the tensor can be written as a 6x6 matrix, where :math:`C_{ij}`
is now the derivative of :math:`s_i` w.r.t. :math:`e_j`. Because :math:`s_i` is itself a
derivative w.r.t. :math:`e_i`, it follows that :math:`C_{ij}` is also symmetric, with at
most :math:`\frac{7 \times 6}{2} = 21` distinct elements.
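
For clarity, the Voigt convention referred to above maps the symmetric
index pairs onto single indices as

.. math::

   11 \rightarrow 1, \quad 22 \rightarrow 2, \quad 33 \rightarrow 3, \quad
   23 \rightarrow 4, \quad 13 \rightarrow 5, \quad 12 \rightarrow 6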
At zero temperature, it is easy to estimate these derivatives by
deforming the simulation box in one of the six directions using the
:doc:`change_box <change_box>` command and measuring the change in the
stress tensor. A general-purpose script that does this is given in the
examples/ELASTIC directory described on the :doc:`Examples <Examples>`
``examples/ELASTIC`` directory described on the :doc:`Examples <Examples>`
doc page.
Calculating elastic constants at finite temperature is more
@ -33,7 +39,7 @@ the :doc:`compute born/matrix <compute_born_matrix>` command,
which works for any bonded or non-bonded potential in LAMMPS.
The most expensive part of the calculation is the sampling of
the stress fluctuations. Several examples of this method are
provided in the examples/ELASTIC_T/BORN_MATRIX directory
provided in the ``examples/ELASTIC_T/BORN_MATRIX`` directory
described on the :doc:`Examples <Examples>` doc page.
A second way is to measure
@ -43,7 +49,7 @@ the systematic and statistical errors in this method, the magnitude of
the deformation must be chosen judiciously, and care must be taken to
fully equilibrate the deformed cell before sampling the stress
tensor. An example of this method is provided in the
examples/ELASTIC_T/DEFORMATION directory
``examples/ELASTIC_T/DEFORMATION`` directory
described on the :doc:`Examples <Examples>` doc page.
Another approach is to sample the triclinic cell fluctuations

View File

@ -1,20 +1,22 @@
Calculate thermal conductivity
==============================
The thermal conductivity kappa of a material can be measured in at
least 4 ways using various options in LAMMPS. See the examples/KAPPA
The thermal conductivity :math:`\kappa` of a material can be measured in at
least 4 ways using various options in LAMMPS. See the ``examples/KAPPA``
directory for scripts that implement the 4 methods discussed here for
a simple Lennard-Jones fluid model. Also, see the :doc:`Howto viscosity <Howto_viscosity>` page for an analogous discussion
for viscosity.
The thermal conductivity tensor kappa is a measure of the propensity
The thermal conductivity tensor :math:`\mathbf{\kappa}` is a measure of the propensity
of a material to transmit heat energy in a diffusive manner as given
by Fourier's law
J = -kappa grad(T)
.. math::
where J is the heat flux in units of energy per area per time and
grad(T) is the spatial gradient of temperature. The thermal
J = -\kappa \cdot \text{grad}(T)
where :math:`J` is the heat flux in units of energy per area per time and
:math:`\text{grad}(T)` is the spatial gradient of temperature. The thermal
conductivity thus has units of energy per distance per time per degree
K and is often approximated as an isotropic quantity, i.e. as a
scalar.
@ -49,7 +51,7 @@ details.
The fourth method is based on the Green-Kubo (GK) formula which
relates the ensemble average of the auto-correlation of the heat flux
to kappa. The heat flux can be calculated from the fluctuations of
to :math:`\kappa`. The heat flux can be calculated from the fluctuations of
per-atom potential and kinetic energies and per-atom stress tensor in
a steady-state equilibrated simulation. This is in contrast to the
two preceding non-equilibrium methods, where energy flows continuously
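
In one common convention, with :math:`\mathbf{J}` the heat flux vector,
:math:`V` the system volume, and :math:`T` the temperature, this
Green-Kubo relation can be written as (the exact prefactor depends on
how the heat flux is normalized):

.. math::

   \kappa = \frac{V}{3 k_B T^2} \int_0^\infty \left\langle \mathbf{J}(0) \cdot \mathbf{J}(t) \right\rangle \, dt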

View File

@ -44,7 +44,7 @@ For large numbers of independent simulations, you can use
:doc:`variables <variable>` and the :doc:`next <next>` and
:doc:`jump <jump>` commands to loop over the same input script
multiple times with different settings. For example, this
script, named in.polymer
script, named ``in.polymer``
.. code-block:: LAMMPS
@ -57,7 +57,7 @@ script, named in.polymer
next d
jump in.polymer
would run 8 simulations in different directories, using a data.polymer
would run 8 simulations in different directories, using a ``data.polymer``
file in each directory. The same concept could be used to run the
same system at 8 different temperatures, using a temperature variable
and storing the output in different log and dump files, for example
@ -83,10 +83,10 @@ partition of processors. LAMMPS can be run on multiple partitions via
the :doc:`-partition command-line switch <Run_options>`.
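
A hedged sketch of such a multi-partition invocation (executable name,
processor counts, and input file are placeholders):

.. code-block:: bash

   # run the looping input script on 3 partitions of 4 MPI ranks each
   mpirun -np 12 lmp -partition 3x4 -in in.polymer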
In the last 2 examples, if LAMMPS were run on 3 partitions, the same
scripts could be used if the "index" and "loop" variables were
scripts could be used if the ``index`` and ``loop`` variables were
replaced with *universe*\ -style variables, as described in the
:doc:`variable <variable>` command. Also, the "next t" and "next a"
commands would need to be replaced with a single "next a t" command.
:doc:`variable <variable>` command. Also, the :lammps:`next t` and :lammps:`next a`
commands would need to be replaced with a single :lammps:`next a t` command.
With these modifications, the 8 simulations of each script would run
on the 3 partitions one after the other until all were finished.
Initially, 3 simulations would be started simultaneously, one on each

View File

@ -26,8 +26,8 @@ scripts are based on. If that script had the line
restart 50 tmp.restart
added to it, it would produce 2 binary restart files (tmp.restart.50
and tmp.restart.100) as it ran.
added to it, it would produce two binary restart files (``tmp.restart.50``
and ``tmp.restart.100``) as it ran.
This script could be used to read the first restart file and re-run the
last 50 timesteps:
@ -47,21 +47,21 @@ last 50 timesteps:
run 50
Note that the following commands do not need to be repeated because
their settings are included in the restart file: *units, atom_style,
special_bonds, pair_style, bond_style*. However, these commands do
their settings are included in the restart file: :lammps:`units`, :lammps:`atom_style`,
:lammps:`special_bonds`, :lammps:`pair_style`, :lammps:`bond_style`. However, these commands do
need to be used, since their settings are not in the restart file:
*neighbor, fix, timestep*\ .
:lammps:`neighbor`, :lammps:`fix`, :lammps:`timestep`.
If you actually use this script to perform a restarted run, you will
notice that the thermodynamic data match at step 50 (if you also put a
"thermo 50" command in the original script), but do not match at step
:lammps:`thermo 50` command in the original script), but do not match at step
100. This is because the :doc:`fix langevin <fix_langevin>` command
uses random numbers in a way that does not allow for perfect restarts.
As an alternate approach, the restart file could be converted to a data
file as follows:
.. code-block:: LAMMPS
.. code-block:: bash
lmp_g++ -r tmp.restart.50 tmp.restart.data
@ -89,8 +89,8 @@ Then, this script could be used to re-run the last 50 steps:
reset_timestep 50
run 50
Note that nearly all the settings specified in the original *in.chain*
script must be repeated, except the *pair_coeff* and *bond_coeff*
Note that nearly all the settings specified in the original ``in.chain``
script must be repeated, except the :lammps:`pair_coeff` and :lammps:`bond_coeff`
commands, since the new data file lists the force field coefficients.
Also, the :doc:`reset_timestep <reset_timestep>` command is used to tell
LAMMPS the current timestep. This value is stored in restart files, but

View File

@ -341,7 +341,12 @@ data files and obtain a list of dictionaries.
.. code-block::
[{'timestep': 0, 'pe': -6.773368053259247, 'ke': 4.498875000000003}, {'timestep': 50, 'pe': -4.80824944183232, 'ke': 2.5257981827119798}, {'timestep': 100, 'pe': -4.787560887558151, 'ke': 2.5062598821985103}, {'timestep': 150, 'pe': -4.747103368600548, 'ke': 2.46609592554545}, {'timestep': 200, 'pe': -4.750905285854413, 'ke': 2.4701136792591694}, {'timestep': 250, 'pe': -4.777432735632181, 'ke': 2.4962152903997175}]
[{'timestep': 0, 'pe': -6.773368053259247, 'ke': 4.498875000000003},
{'timestep': 50, 'pe': -4.80824944183232, 'ke': 2.5257981827119798},
{'timestep': 100, 'pe': -4.787560887558151, 'ke': 2.5062598821985103},
{'timestep': 150, 'pe': -4.747103368600548, 'ke': 2.46609592554545},
{'timestep': 200, 'pe': -4.750905285854413, 'ke': 2.4701136792591694},
{'timestep': 250, 'pe': -4.777432735632181, 'ke': 2.4962152903997175}]
Line Delimited JSON (LD-JSON)
-----------------------------
@ -352,7 +357,8 @@ Each line represents one JSON object.
.. code-block:: LAMMPS
fix extra all print 50 """{"timestep": $(step), "pe": $(pe), "ke": $(ke)}""" title "" file output.json screen no
fix extra all print 50 """{"timestep": $(step), "pe": $(pe), "ke": $(ke)}""" &
title "" file output.json screen no
.. code-block:: json
:caption: output.json

View File

@ -1,22 +1,24 @@
Calculate viscosity
===================
The shear viscosity eta of a fluid can be measured in at least 6 ways
using various options in LAMMPS. See the examples/VISCOSITY directory
The shear viscosity :math:`\eta` of a fluid can be measured in at least 6 ways
using various options in LAMMPS. See the ``examples/VISCOSITY`` directory
for scripts that implement the 5 methods discussed here for a simple
Lennard-Jones fluid model and 1 method for the SPC/E water model.
Also, see the :doc:`page on calculating thermal conductivity <Howto_kappa>`
for an analogous discussion for thermal conductivity.
Eta is a measure of the propensity of a fluid to transmit momentum in
:math:`\eta` is a measure of the propensity of a fluid to transmit momentum in
a direction perpendicular to the direction of velocity or momentum
flow. Alternatively it is the resistance the fluid has to being
sheared. It is given by
J = -eta grad(Vstream)
.. math::
where J is the momentum flux in units of momentum per area per time.
and grad(Vstream) is the spatial gradient of the velocity of the fluid
J = -\eta \cdot \text{grad}(V_{\text{stream}})
where :math:`J` is the momentum flux in units of momentum per area per time,
and :math:`\text{grad}(V_{\text{stream}})` is the spatial gradient of the velocity of the fluid
moving in another direction, normal to the area through which the
momentum flows. Viscosity thus has units of pressure-time.
@ -38,11 +40,11 @@ velocity to prevent the fluid from heating up.
In both cases, the velocity profile setup in the fluid by this
procedure can be monitored by the :doc:`fix ave/chunk <fix_ave_chunk>`
command, which determines grad(Vstream) in the equation above.
E.g. the derivative in the y-direction of the Vx component of fluid
motion or grad(Vstream) = dVx/dy. The Pxy off-diagonal component of
command, which determines :math:`\text{grad}(V_{\text{stream}})` in the equation above.
E.g. the derivative in the y-direction of the :math:`V_x` component of fluid
motion or :math:`\text{grad}(V_{\text{stream}}) = \frac{\text{d} V_x}{\text{d} y}`. The :math:`P_{xy}` off-diagonal component of
the pressure or stress tensor, as calculated by the :doc:`compute pressure <compute_pressure>` command, can also be monitored, which
is the J term in the equation above. See the :doc:`Howto nemd <Howto_nemd>` page for details on NEMD simulations.
is the :math:`J` term in the equation above. See the :doc:`Howto nemd <Howto_nemd>` page for details on NEMD simulations.
The third method is to perform a reverse non-equilibrium MD simulation
using the :doc:`fix viscosity <fix_viscosity>` command which implements
@ -55,7 +57,7 @@ See the :doc:`fix viscosity <fix_viscosity>` command for details.
The fourth method is based on the Green-Kubo (GK) formula which
relates the ensemble average of the auto-correlation of the
stress/pressure tensor to eta. This can be done in a fully
stress/pressure tensor to :math:`\eta`. This can be done in a fully
equilibrated simulation which is in contrast to the two preceding
non-equilibrium methods, where momentum flows continuously through the
simulation box.
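
For reference, the corresponding Green-Kubo relation for the shear
viscosity, in terms of an off-diagonal pressure tensor component
:math:`P_{xy}`, is commonly written as

.. math::

   \eta = \frac{V}{k_B T} \int_0^\infty \left\langle P_{xy}(0) \, P_{xy}(t) \right\rangle \, dt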

View File

@ -6,7 +6,7 @@ analyzed in a variety of ways.
LAMMPS snapshots are created by the :doc:`dump <dump>` command, which
can create files in several formats. The native LAMMPS dump format is a
text file (see "dump atom" or "dump custom") which can be visualized by
text file (see :lammps:`dump atom` or :lammps:`dump custom`) which can be visualized by
`several visualization tools <https://www.lammps.org/viz.html>`_ for MD
simulation trajectories. `OVITO <https://www.ovito.org>`_ and `VMD
<https://www.ks.uiuc.edu/Research/vmd>`_ seem to be the most popular

View File

@ -35,35 +35,35 @@ you **must** build LAMMPS from the source code.
These are the files and subdirectories in the LAMMPS distribution:
+------------+---------------------------------------------+
| README | Short description of the LAMMPS package |
+------------+---------------------------------------------+
| LICENSE | GNU General Public License (GPL) |
+------------+---------------------------------------------+
| SECURITY.md| Security policy for the LAMMPS package |
+------------+---------------------------------------------+
| bench | benchmark inputs |
+------------+---------------------------------------------+
| cmake | CMake build files |
+------------+---------------------------------------------+
| doc | documentation and tools to build the manual |
+------------+---------------------------------------------+
| examples | example input files |
+------------+---------------------------------------------+
| fortran | Fortran module for LAMMPS library interface |
+------------+---------------------------------------------+
| lib | additional provided or external libraries |
+------------+---------------------------------------------+
| potentials | selected interatomic potential files |
+------------+---------------------------------------------+
| python | Python module for LAMMPS library interface |
+------------+---------------------------------------------+
| src | LAMMPS source files |
+------------+---------------------------------------------+
| tools | pre- and post-processing tools |
+------------+---------------------------------------------+
| unittest | source code and inputs for testing LAMMPS |
+------------+---------------------------------------------+
+-----------------+---------------------------------------------+
| ``README`` | Short description of the LAMMPS package |
+-----------------+---------------------------------------------+
| ``LICENSE`` | GNU General Public License (GPL) |
+-----------------+---------------------------------------------+
| ``SECURITY.md`` | Security policy for the LAMMPS package |
+-----------------+---------------------------------------------+
| ``bench`` | benchmark inputs |
+-----------------+---------------------------------------------+
| ``cmake`` | CMake build files |
+-----------------+---------------------------------------------+
| ``doc`` | documentation and tools to build the manual |
+-----------------+---------------------------------------------+
| ``examples`` | example input files |
+-----------------+---------------------------------------------+
| ``fortran`` | Fortran module for LAMMPS library interface |
+-----------------+---------------------------------------------+
| ``lib`` | additional provided or external libraries |
+-----------------+---------------------------------------------+
| ``potentials`` | selected interatomic potential files |
+-----------------+---------------------------------------------+
| ``python`` | Python module for LAMMPS library interface |
+-----------------+---------------------------------------------+
| ``src`` | LAMMPS source files |
+-----------------+---------------------------------------------+
| ``tools`` | pre- and post-processing tools |
+-----------------+---------------------------------------------+
| ``unittest`` | source code and inputs for testing LAMMPS |
+-----------------+---------------------------------------------+
You will have all of these if you downloaded the LAMMPS source code.
You will have only some of them if you downloaded executables, as

View File

@ -60,7 +60,7 @@ between them at any time using "git checkout <branch name>".)
files (mostly by accident). If you do not need access to the entire
commit history (most people don't), you can speed up the "cloning"
process and reduce local disk space requirements by using the
*--depth* git command line flag. That will create a "shallow clone"
``--depth`` git command line flag. That will create a "shallow clone"
of the repository, which contains only a subset of the git history.
Using a depth of 1000 is usually sufficient to include the head
commits of the *develop*, the *release*, and the *maintenance*
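
A minimal sketch of such a shallow clone, assuming the public LAMMPS
GitHub repository and the depth of 1000 suggested above (the target
directory name is a placeholder):

.. code-block:: bash

   # create a shallow clone containing only the most recent history
   git clone --depth 1000 https://github.com/lammps/lammps.git mylammps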
@ -122,7 +122,7 @@ changed. How to do this depends on the build system you are using.
.. code-block:: bash
cmake . --build
cmake --build .
CMake should auto-detect whether it needs to re-run the CMake
configuration step and otherwise redo the build for all files

View File

@ -31,7 +31,7 @@ command:
tar -xzvf lammps*.tar.gz
This will create a LAMMPS directory with the version date in its name,
e.g. lammps-28Mar23.
e.g. ``lammps-28Mar23``.
----------

View File

@ -34,7 +34,7 @@ When you download the installer package, you run it on your Windows
machine. It will then prompt you with a dialog, where you can choose
the installation directory, unpack and copy several executables,
potential files, documentation PDFs, selected example files, etc. It
will then update a few system settings (e.g. PATH, LAMMPS_POTENTIALS)
will then update a few system settings (e.g. ``PATH``, ``LAMMPS_POTENTIALS``)
and add an entry into the Start Menu (with references to the
documentation, LAMMPS homepage and more). From that menu, there is
also a link to an uninstaller that removes the files and undoes the

File diff suppressed because it is too large Load Diff

View File

@ -10,8 +10,8 @@ the rest of LAMMPS.
The "Examples" column is a subdirectory in the examples directory of the
distribution which has one or more input scripts that use the package.
E.g. "peptide" refers to the examples/peptide directory; PACKAGES/atc refers
to the examples/PACKAGES/atc directory. The "Lib" column indicates
E.g. ``peptide`` refers to the ``examples/peptide`` directory; ``PACKAGES/atc`` refers
to the ``examples/PACKAGES/atc`` directory. The "Lib" column indicates
whether an extra library is needed to build and use the package:
* no = no library
@ -21,7 +21,7 @@ whether an extra library is needed to build and use the package:
.. list-table::
:header-rows: 1
:widths: auto
:widths: 20 20 30 25 5
* - Package
- Description
@ -31,7 +31,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`ADIOS <PKG-ADIOS>`
- dump output via ADIOS
- :doc:`dump adios <dump_adios>`
- PACKAGES/adios
- ``PACKAGES/adios``
- ext
* - :ref:`AMOEBA <PKG-AMOEBA>`
- AMOEBA and HIPPO force fields
@ -46,17 +46,17 @@ whether an extra library is needed to build and use the package:
* - :ref:`ATC <PKG-ATC>`
- Atom-to-Continuum coupling
- :doc:`fix atc <fix_atc>`
- PACKAGES/atc
- ``PACKAGES/atc``
- int
* - :ref:`AWPMD <PKG-AWPMD>`
- wave packet MD
- :doc:`pair_style awpmd/cut <pair_awpmd>`
- PACKAGES/awpmd
- ``PACKAGES/awpmd``
- int
* - :ref:`BOCS <PKG-BOCS>`
- BOCS bottom up coarse graining
- :doc:`fix bocs <fix_bocs>`
- PACKAGES/bocs
- ``PACKAGES/bocs``
- no
* - :ref:`BODY <PKG-BODY>`
- body-style particles
@ -71,17 +71,17 @@ whether an extra library is needed to build and use the package:
* - :ref:`BROWNIAN <PKG-BROWNIAN>`
- Brownian dynamics, self-propelled particles
- :doc:`fix brownian <fix_brownian>`, :doc:`fix propel/self <fix_propel_self>`
- PACKAGES/brownian
- ``PACKAGES/brownian``
- no
* - :ref:`CG-DNA <PKG-CG-DNA>`
- coarse-grained DNA force fields
- src/CG-DNA/README
- PACKAGES/cgdna
- ``src/CG-DNA/README``
- ``PACKAGES/cgdna``
- no
* - :ref:`CG-SPICA <PKG-CG-SPICA>`
- SPICA (SDK) coarse-graining model
- :doc:`pair_style lj/spica <pair_spica>`
- PACKAGES/cgspica
- ``PACKAGES/cgspica``
- no
* - :ref:`CLASS2 <PKG-CLASS2>`
- class 2 force fields
@ -96,7 +96,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`COLVARS <PKG-COLVARS>`
- `Colvars collective variables library <https://colvars.github.io/>`_
- :doc:`fix colvars <fix_colvars>`
- PACKAGES/colvars
- ``PACKAGES/colvars``
- int
* - :ref:`COMPRESS <PKG-COMPRESS>`
- I/O compression
@ -111,12 +111,12 @@ whether an extra library is needed to build and use the package:
* - :ref:`DIELECTRIC <PKG-DIELECTRIC>`
- dielectric boundary solvers and force styles
- :doc:`compute efield/atom <compute_efield_atom>`
- PACKAGES/dielectric
- ``PACKAGES/dielectric``
- no
* - :ref:`DIFFRACTION <PKG-DIFFRACTION>`
- virtual x-ray and electron diffraction
- :doc:`compute xrd <compute_xrd>`
- PACKAGES/diffraction
- ``PACKAGES/diffraction``
- no
* - :ref:`DIPOLE <PKG-DIPOLE>`
- point dipole particles
@ -126,37 +126,37 @@ whether an extra library is needed to build and use the package:
* - :ref:`DPD-BASIC <PKG-DPD-BASIC>`
- basic DPD models
- :doc:`pair_styles dpd <pair_dpd>` :doc:`dpd/ext <pair_dpd_ext>`
- PACKAGES/dpd-basic
- ``PACKAGES/dpd-basic``
- no
* - :ref:`DPD-MESO <PKG-DPD-MESO>`
- mesoscale DPD models
- :doc:`pair_style edpd <pair_mesodpd>`
- PACKAGES/dpd-meso
- ``PACKAGES/dpd-meso``
- no
* - :ref:`DPD-REACT <PKG-DPD-REACT>`
- reactive dissipative particle dynamics
- src/DPD-REACT/README
- PACKAGES/dpd-react
- ``src/DPD-REACT/README``
- ``PACKAGES/dpd-react``
- no
* - :ref:`DPD-SMOOTH <PKG-DPD-SMOOTH>`
- smoothed dissipative particle dynamics
- src/DPD-SMOOTH/README
- PACKAGES/dpd-smooth
- ``src/DPD-SMOOTH/README``
- ``PACKAGES/dpd-smooth``
- no
* - :ref:`DRUDE <PKG-DRUDE>`
- Drude oscillators
- :doc:`Howto drude <Howto_drude>`
- PACKAGES/drude
- ``PACKAGES/drude``
- no
* - :ref:`EFF <PKG-EFF>`
- electron force field
- :doc:`pair_style eff/cut <pair_eff>`
- PACKAGES/eff
- ``PACKAGES/eff``
- no
* - :ref:`ELECTRODE <PKG-ELECTRODE>`
- electrode charges to match potential
- :doc:`fix electrode/conp <fix_electrode>`
- PACKAGES/electrode
- ``PACKAGES/electrode``
- no
* - :ref:`EXTRA-COMMAND <PKG-EXTRA-COMMAND>`
- additional command styles
@ -191,7 +191,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`FEP <PKG-FEP>`
- free energy perturbation
- :doc:`compute fep <compute_fep>`
- PACKAGES/fep
- ``PACKAGES/fep``
- no
* - :ref:`GPU <PKG-GPU>`
- GPU-enabled styles
@ -216,7 +216,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`INTERLAYER <PKG-INTERLAYER>`
- Inter-layer pair potentials
- :doc:`several pair styles <Commands_pair>`
- PACKAGES/interlayer
- ``PACKAGES/interlayer``
- no
* - :ref:`KIM <PKG-KIM>`
- OpenKIM wrapper
@ -236,22 +236,22 @@ whether an extra library is needed to build and use the package:
* - :ref:`LATBOLTZ <PKG-LATBOLTZ>`
- Lattice Boltzmann fluid
- :doc:`fix lb/fluid <fix_lb_fluid>`
- PACKAGES/latboltz
- ``PACKAGES/latboltz``
- no
* - :ref:`LEPTON <PKG-LEPTON>`
- evaluate strings as potential function
- :doc:`pair_style lepton <pair_lepton>`
- PACKAGES/lepton
- ``PACKAGES/lepton``
- int
* - :ref:`MACHDYN <PKG-MACHDYN>`
- smoothed Mach dynamics
- `SMD User Guide <PDF/MACHDYN_LAMMPS_userguide.pdf>`_
- PACKAGES/machdyn
- ``PACKAGES/machdyn``
- ext
* - :ref:`MANIFOLD <PKG-MANIFOLD>`
- motion on 2d surfaces
- :doc:`fix manifoldforce <fix_manifoldforce>`
- PACKAGES/manifold
- ``PACKAGES/manifold``
- no
* - :ref:`MANYBODY <PKG-MANYBODY>`
- many-body potentials
@ -266,7 +266,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`MDI <PKG-MDI>`
- client-server code coupling
- :doc:`MDI Howto <Howto_mdi>`
- PACKAGES/mdi
- ``PACKAGES/mdi``
- ext
* - :ref:`MEAM <PKG-MEAM>`
- modified EAM potential (C++)
@ -276,12 +276,12 @@ whether an extra library is needed to build and use the package:
* - :ref:`MESONT <PKG-MESONT>`
- mesoscopic tubular potential model
- pair styles :doc:`mesocnt <pair_mesocnt>`
- PACKAGES/mesont
- ``PACKAGES/mesont``
- no
* - :ref:`MGPT <PKG-MGPT>`
- fast MGPT multi-ion potentials
- :doc:`pair_style mgpt <pair_mgpt>`
- PACKAGES/mgpt
- ``PACKAGES/mgpt``
- no
* - :ref:`MISC <PKG-MISC>`
- miscellaneous single-file commands
@ -291,7 +291,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`ML-HDNNP <PKG-ML-HDNNP>`
- High-dimensional neural network potentials
- :doc:`pair_style hdnnp <pair_hdnnp>`
- PACKAGES/hdnnp
- ``PACKAGES/hdnnp``
- ext
* - :ref:`ML-IAP <PKG-ML-IAP>`
- multiple machine learning potentials
@ -301,7 +301,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`ML-PACE <PKG-ML-PACE>`
- Atomic Cluster Expansion potential
- :doc:`pair pace <pair_pace>`
- PACKAGES/pace
- ``PACKAGES/pace``
- ext
* - :ref:`ML-POD <PKG-ML-POD>`
- Proper orthogonal decomposition potentials
@ -311,12 +311,12 @@ whether an extra library is needed to build and use the package:
* - :ref:`ML-QUIP <PKG-ML-QUIP>`
- QUIP/libatoms interface
- :doc:`pair_style quip <pair_quip>`
- PACKAGES/quip
- ``PACKAGES/quip``
- ext
* - :ref:`ML-RANN <PKG-ML-RANN>`
- Pair style for RANN potentials
- :doc:`pair rann <pair_rann>`
- PACKAGES/rann
- ``PACKAGES/rann``
- no
* - :ref:`ML-SNAP <PKG-ML-SNAP>`
- quantum-fitted potential
@ -326,12 +326,12 @@ whether an extra library is needed to build and use the package:
* - :ref:`ML-UF3 <PKG-ML-UF3>`
- quantum-fitted ultra fast potentials
- :doc:`pair_style uf3 <pair_uf3>`
- PACKAGES/uf3
- ``PACKAGES/uf3``
- no
* - :ref:`MOFFF <PKG-MOFFF>`
- styles for `MOF-FF <MOFplus_>`_ force field
- :doc:`pair_style buck6d/coul/gauss <pair_buck6d_coul_gauss>`
- PACKAGES/mofff
- ``PACKAGES/mofff``
- no
* - :ref:`MOLECULE <PKG-MOLECULE>`
- molecular system force fields
@ -361,7 +361,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`ORIENT <PKG-ORIENT>`
- fixes for orientation depended forces
- :doc:`fix orient/* <fix_orient>`
- PACKAGES/orient_eco
- ``PACKAGES/orient_eco``
- no
* - :ref:`PERI <PKG-PERI>`
- Peridynamics models
@ -371,7 +371,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`PHONON <PKG-PHONON>`
- phonon dynamical matrix
- :doc:`fix phonon <fix_phonon>`
- PACKAGES/phonon
- ``PACKAGES/phonon``
- no
* - :ref:`PLUGIN <PKG-PLUGIN>`
- Plugin loader command
@ -381,7 +381,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`PLUMED <PKG-PLUMED>`
- `PLUMED free energy library <https://www.plumed.org>`_
- :doc:`fix plumed <fix_plumed>`
- PACKAGES/plumed
- ``PACKAGES/plumed``
- ext
* - :ref:`POEMS <PKG-POEMS>`
- coupled rigid body motion
@ -406,7 +406,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`QMMM <PKG-QMMM>`
- QM/MM coupling
- :doc:`fix qmmm <fix_qmmm>`
- PACKAGES/qmmm
- ``PACKAGES/qmmm``
- ext
* - :ref:`QTB <PKG-QTB>`
- quantum nuclear effects
@ -421,7 +421,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`REACTION <PKG-REACTION>`
- chemical reactions in classical MD
- :doc:`fix bond/react <fix_bond_react>`
- PACKAGES/reaction
- ``PACKAGES/reaction``
- no
* - :ref:`REAXFF <PKG-REAXFF>`
- ReaxFF potential (C/C++)
@ -441,7 +441,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`SCAFACOS <PKG-SCAFACOS>`
- wrapper for ScaFaCoS Kspace solver
- :doc:`kspace_style scafacos <kspace_style>`
- PACKAGES/scafacos
- ``PACKAGES/scafacos``
- ext
* - :ref:`SHOCK <PKG-SHOCK>`
- shock loading methods
@ -451,12 +451,12 @@ whether an extra library is needed to build and use the package:
* - :ref:`SMTBQ <PKG-SMTBQ>`
- second moment tight binding potentials
- pair styles :doc:`smtbq <pair_smtbq>`, :doc:`smatb <pair_smatb>`
- PACKAGES/smtbq
- ``PACKAGES/smtbq``
- no
* - :ref:`SPH <PKG-SPH>`
- smoothed particle hydrodynamics
- `SPH User Guide <PDF/SPH_LAMMPS_userguide.pdf>`_
- PACKAGES/sph
- ``PACKAGES/sph``
- no
* - :ref:`SPIN <PKG-SPIN>`
- magnetic atomic spin dynamics
@ -471,12 +471,12 @@ whether an extra library is needed to build and use the package:
* - :ref:`TALLY <PKG-TALLY>`
- pairwise tally computes
- :doc:`compute XXX/tally <compute_tally>`
- PACKAGES/tally
- ``PACKAGES/tally``
- no
* - :ref:`UEF <PKG-UEF>`
- extensional flow
- :doc:`fix nvt/uef <fix_nh_uef>`
- PACKAGES/uef
- ``PACKAGES/uef``
- no
* - :ref:`VORONOI <PKG-VORONOI>`
- Voronoi tesselation
@ -491,7 +491,7 @@ whether an extra library is needed to build and use the package:
* - :ref:`YAFF <PKG-YAFF>`
- additional styles implemented in YAFF
- :doc:`angle_style cross <angle_cross>`
- PACKAGES/yaff
- ``PACKAGES/yaff``
- no
.. _MOFplus: https://www.mofplus.org/content/show/MOF-FF

View File

@ -2,8 +2,8 @@ Basics of running LAMMPS
========================
LAMMPS is run from the command line, reading commands from a file via
the -in command line flag, or from standard input. Using the "-in
in.file" variant is recommended (see note below). The name of the
the ``-in`` command line flag, or from standard input. Using the ``-in
in.file`` variant is recommended (see note below). The name of the
LAMMPS executable is either ``lmp`` or ``lmp_<machine>`` with
`<machine>` being the machine string used when compiling LAMMPS. This
is required when compiling LAMMPS with the traditional build system
@ -35,7 +35,7 @@ executable itself can be placed elsewhere.
form is required.
As LAMMPS runs it prints info to the screen and a logfile named
*log.lammps*\ . More info about output is given on the :doc:`screen and
``log.lammps``. More info about output is given on the :doc:`screen and
logfile output <Run_output>` page.
If LAMMPS encounters errors in the input script or while running a
@ -69,12 +69,12 @@ defaults are often adequate.
For example, it is often important to bind MPI tasks (processes) to
physical cores (processor affinity), so that the operating system does
not migrate them during a simulation. If this is not the default
behavior on your machine, the mpirun option "--bind-to core" (OpenMPI)
or "-bind-to core" (MPICH) can be used.
behavior on your machine, the mpirun option ``--bind-to core`` (OpenMPI)
or ``-bind-to core`` (MPICH) can be used.
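
A minimal sketch with OpenMPI (the processor count and input file name
are placeholders):

.. code-block:: bash

   # run on 8 MPI ranks and pin each rank to a physical core (OpenMPI)
   mpirun -np 8 --bind-to core lmp -in in.lj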
If the LAMMPS command(s) you are using support multi-threading, you
can set the number of threads per MPI task via the environment
variable OMP_NUM_THREADS, before you launch LAMMPS:
variable ``OMP_NUM_THREADS``, before you launch LAMMPS:
.. code-block:: bash
@ -91,7 +91,7 @@ packages and which commands support multi-threading.
You can experiment with running LAMMPS using any of the input scripts
provided in the examples or bench directory. Input scripts are named
in.\* and sample outputs are named log.\*.P where P is the number of
``in.*`` and sample outputs are named ``log.*.P`` where P is the number of
processors it was run on.
Some of the examples or benchmarks require LAMMPS to be built with

View File

@ -275,13 +275,13 @@ impact can be significant, especially for large parallel runs.
Invoke the :doc:`package <package>` command with style and args. The
syntax is the same as if the command appeared at the top of the input
script. For example "-package gpu 2" or "-pk gpu 2" is the same as
script. For example ``-package gpu 2`` or ``-pk gpu 2`` is the same as
:doc:`package gpu 2 <package>` in the input script. The possible styles
and args are documented on the :doc:`package <package>` doc page. This
switch can be used multiple times, e.g. to set options for the
INTEL and OPENMP packages which can be used together.
Along with the "-suffix" command-line switch, this is a convenient
Along with the ``-suffix`` command-line switch, this is a convenient
mechanism for invoking accelerator packages and their options without
having to edit an input script.
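
A hedged example combining the two switches (the GPU count, processor
count, and input file are placeholders):

.. code-block:: bash

   # use GPU-accelerated styles where available and configure 2 GPUs
   mpirun -np 8 lmp -suffix gpu -pk gpu 2 -in in.lj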
@ -300,7 +300,7 @@ specify the number of processors in each partition. Arguments of the
form MxN mean M partitions, each with N processors. Arguments of the
form N mean a single partition with N processors. The sum of
processors in all partitions must equal P. Thus the command
"-partition 8x2 4 5" has 10 partitions and runs on a total of 25
``-partition 8x2 4 5`` has 10 partitions and runs on a total of 25
processors.
Running with multiple partitions can be useful for running
@ -378,8 +378,8 @@ processors will be in the first partition, the second set in the second
partition. The ``-reorder`` command-line switch can alter this so that
the first N procs in the first partition and one proc in the second partition
will be ordered consecutively, e.g. as the cores on one physical node.
This can boost performance. For example, if you use "-reorder nth 4"
and "-partition 9 3" and you are running on 12 processors, the
This can boost performance. For example, if you use ``-reorder nth 4``
and ``-partition 9 3`` and you are running on 12 processors, the
processors will be reordered from
.. parsed-literal::
@ -584,11 +584,11 @@ style that accepts arguments. It allows for two packages to be
specified. The first package specified is the default and will be used
if it is available. If no style is available for the first package,
the style for the second package will be used if available. For
example, "-suffix hybrid intel omp" will use styles from the
example, ``-suffix hybrid intel omp`` will use styles from the
INTEL package if they are installed and available, but styles for
the OPENMP package otherwise.
Along with the "-package" command-line switch, this is a convenient
Along with the ``-package`` command-line switch, this is a convenient
mechanism for invoking accelerator packages and their options without
having to edit an input script.
@ -605,30 +605,30 @@ variant version does not exist, the standard version is created.
For the GPU package, using this command-line switch also invokes the
default GPU settings, as if the command "package gpu 1" were used at
the top of your input script. These settings can be changed by using
the "-package gpu" command-line switch or the :doc:`package gpu <package>` command in your script.
the ``-package gpu`` command-line switch or the :doc:`package gpu <package>` command in your script.
For the INTEL package, using this command-line switch also
invokes the default INTEL settings, as if the command "package
intel 1" were used at the top of your input script. These settings
can be changed by using the "-package intel" command-line switch or
can be changed by using the ``-package intel`` command-line switch or
the :doc:`package intel <package>` command in your script. If the
OPENMP package is also installed, the hybrid style with "intel omp"
arguments can be used to make the omp suffix a second choice, if a
requested style is not available in the INTEL package. It will
also invoke the default OPENMP settings, as if the command "package
omp 0" were used at the top of your input script. These settings can
be changed by using the "-package omp" command-line switch or the
be changed by using the ``-package omp`` command-line switch or the
:doc:`package omp <package>` command in your script.
For the KOKKOS package, using this command-line switch also invokes
the default KOKKOS settings, as if the command "package kokkos" were
used at the top of your input script. These settings can be changed
by using the "-package kokkos" command-line switch or the :doc:`package kokkos <package>` command in your script.
by using the ``-package kokkos`` command-line switch or the :doc:`package kokkos <package>` command in your script.
For the OMP package, using this command-line switch also invokes the
default OMP settings, as if the command "package omp 0" were used at
the top of your input script. These settings can be changed by using
the "-package omp" command-line switch or the :doc:`package omp <package>` command in your script.
the ``-package omp`` command-line switch or the :doc:`package omp <package>` command in your script.
The :doc:`suffix <suffix>` command can also be used within an input
script to set a suffix, or to turn off or back on any suffix setting
@ -15,7 +15,7 @@ The 5 standard problems are as follow:
#. LJ = atomic fluid, Lennard-Jones potential with 2.5 sigma cutoff (55
neighbors per atom), NVE integration
#. Chain = bead-spring polymer melt of 100-mer chains, FENE bonds and LJ
pairwise interactions with a 2\^(1/6) sigma cutoff (5 neighbors per
pairwise interactions with a :math:`2^{\frac{1}{6}}` sigma cutoff (5 neighbors per
atom), NVE integration
#. EAM = metallic solid, Cu EAM potential with 4.95 Angstrom cutoff (45
neighbors per atom), NVE integration
@ -29,19 +29,19 @@ The 5 standard problems are as follow:
Input files for these 5 problems are provided in the bench directory
of the LAMMPS distribution. Each has 32,000 atoms and runs for 100
timesteps. The size of the problem (number of atoms) can be varied
using command-line switches as described in the bench/README file.
using command-line switches as described in the ``bench/README`` file.
This is an easy way to test performance and either strong or weak
scalability on your machine.
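
For example (a sketch, assuming the ``-var`` size switches described in
``bench/README``):

.. code-block:: bash

   # run the LJ benchmark doubled in each dimension (8x 32,000 atoms) on 4 MPI tasks
   mpirun -np 4 lmp_mpi -var x 2 -var y 2 -var z 2 -in bench/in.lj
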
The bench directory includes a few log.\* files that show performance
of these 5 problems on 1 or 4 cores of Linux desktop. The bench/FERMI
and bench/KEPLER directories have input files and scripts and instructions
The bench directory includes a few ``log.*`` files that show performance
of these 5 problems on 1 or 4 cores of a Linux desktop. The ``bench/FERMI``
and ``bench/KEPLER`` directories have input files and scripts and instructions
for running the same (or similar) problems using OpenMP or GPU or Xeon
Phi acceleration options. See the README files in those directories and the
Phi acceleration options. See the ``README`` files in those directories and the
:doc:`Accelerator packages <Speed_packages>` pages for instructions on how
to build LAMMPS and run on that kind of hardware.
The bench/POTENTIALS directory has input files which correspond to the
The ``bench/POTENTIALS`` directory has input files which correspond to the
table of results on the
`Potentials <https://www.lammps.org/bench.html#potentials>`_ section of
the Benchmarks web page. So you can also run those test problems on
@ -50,7 +50,7 @@ your machine.
The `billion-atom <https://www.lammps.org/bench.html#billion>`_ section
of the Benchmarks web page has performance data for very large
benchmark runs of simple Lennard-Jones (LJ) models, which use the
bench/in.lj input script.
``bench/in.lj`` input script.
----------
@ -38,10 +38,10 @@ to have an NVIDIA GPU and install the corresponding NVIDIA CUDA
toolkit software on your system (this is only tested on Linux
and unsupported on Windows):
* Check if you have an NVIDIA GPU: cat /proc/driver/nvidia/gpus/\*/information
* Check if you have an NVIDIA GPU: ``cat /proc/driver/nvidia/gpus/*/information``
* Go to https://developer.nvidia.com/cuda-downloads
* Install a driver and toolkit appropriate for your system (SDK is not necessary)
* Run lammps/lib/gpu/nvc_get_devices (after building the GPU library, see below) to
* Run ``lammps/lib/gpu/nvc_get_devices`` (after building the GPU library, see below) to
list supported devices and properties
To compile and use this package in OpenCL mode, you currently need
@ -51,7 +51,7 @@ installed. There can be multiple of them for the same or different hardware
(GPUs, CPUs, Accelerators) installed at the same time. OpenCL refers to those
as 'platforms'. The GPU library will try to auto-select the most suitable platform,
but this can be overridden using the platform option of the :doc:`package <package>`
command. run lammps/lib/gpu/ocl_get_devices to get a list of available
command. Run ``lammps/lib/gpu/ocl_get_devices`` to get a list of available
platforms and devices with a suitable ICD available.
To compile and use this package for Intel GPUs, OpenCL or the Intel oneAPI
@ -63,7 +63,7 @@ provides optimized C++, MPI, and many other libraries and tools. See:
If you do not have a discrete GPU card installed, this package can still provide
significant speedups on some CPUs that include integrated GPUs. Additionally, for
many Macs, OpenCL is already included with the OS and Makefiles are available
in the lib/gpu directory.
in the ``lib/gpu`` directory.
To compile and use this package in HIP mode, you have to have the AMD ROCm
software installed. Versions of ROCm older than 3.5 are currently deprecated
@ -94,31 +94,36 @@ shared by 4 MPI tasks.
The GPU package also has limited support for OpenMP for both
multi-threading and vectorization of routines that are run on the CPUs.
This requires that the GPU library and LAMMPS are built with flags to
enable OpenMP support (e.g. -fopenmp). Some styles for time integration
enable OpenMP support (e.g. ``-fopenmp``). Some styles for time integration
are also available in the GPU package. These run completely on the CPUs
in full double precision, but exploit multi-threading and vectorization
for faster performance.
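
For example, a CMake build that includes the GPU package and enables
OpenMP support might be configured like this (a sketch using the standard
LAMMPS CMake options):

.. code-block:: bash

   cmake -D PKG_GPU=on -D BUILD_OMP=yes ../cmake
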
Use the "-sf gpu" :doc:`command-line switch <Run_options>`, which will
automatically append "gpu" to styles that support it. Use the "-pk
gpu Ng" :doc:`command-line switch <Run_options>` to set Ng = # of
GPUs/node to use. If Ng is 0, the number is selected automatically as
Use the ``-sf gpu`` :doc:`command-line switch <Run_options>`, which will
automatically append "gpu" to styles that support it. Use the ``-pk
gpu Ng`` :doc:`command-line switch <Run_options>` to set ``Ng`` = # of
GPUs/node to use. If ``Ng`` is 0, the number is selected automatically as
the number of matching GPUs that have the highest number of compute
cores.
.. code-block:: bash
lmp_machine -sf gpu -pk gpu 1 -in in.script # 1 MPI task uses 1 GPU
mpirun -np 12 lmp_machine -sf gpu -pk gpu 2 -in in.script # 12 MPI tasks share 2 GPUs on a single 16-core (or whatever) node
mpirun -np 48 -ppn 12 lmp_machine -sf gpu -pk gpu 2 -in in.script # ditto on 4 16-core nodes
# 1 MPI task uses 1 GPU
lmp_machine -sf gpu -pk gpu 1 -in in.script
Note that if the "-sf gpu" switch is used, it also issues a default
# 12 MPI tasks share 2 GPUs on a single 16-core (or whatever) node
mpirun -np 12 lmp_machine -sf gpu -pk gpu 2 -in in.script
# ditto on 4 16-core nodes
mpirun -np 48 -ppn 12 lmp_machine -sf gpu -pk gpu 2 -in in.script
Note that if the ``-sf gpu`` switch is used, it also issues a default
:doc:`package gpu 0 <package>` command, which will result in
automatic selection of the number of GPUs to use.
Using the "-pk" switch explicitly allows for setting of the number of
Using the ``-pk`` switch explicitly allows for setting of the number of
GPUs/node to use and additional options. Its syntax is the same as
the "package gpu" command. See the :doc:`package <package>`
the ``package gpu`` command. See the :doc:`package <package>`
command page for details, including the default values used for
all its options if it is not specified.
@ -141,7 +146,7 @@ Use the :doc:`suffix gpu <suffix>` command, or you can explicitly add an
pair_style lj/cut/gpu 2.5
You must also use the :doc:`package gpu <package>` command to enable the
GPU package, unless the "-sf gpu" or "-pk gpu" :doc:`command-line switches <Run_options>` were used. It specifies the number of
GPU package, unless the ``-sf gpu`` or ``-pk gpu`` :doc:`command-line switches <Run_options>` were used. It specifies the number of
GPUs/node to use, as well as other options.
**Speed-ups to expect:**
@ -41,7 +41,7 @@ precision mode. Performance improvements are shown compared to
LAMMPS *without using other acceleration packages* as these are
under active development (and subject to performance changes). The
measurements were performed using the input files available in
the src/INTEL/TEST directory with the provided run script.
the ``src/INTEL/TEST`` directory with the provided run script.
These are scalable in size; the results given are with 512K
particles (524K for Liquid Crystal). Most of the simulations are
standard LAMMPS benchmarks (indicated by the filename extension in
@ -56,7 +56,7 @@ Results are speedups obtained on Intel Xeon E5-2697v4 processors
Knights Landing), and Intel Xeon Gold 6148 processors (code-named
Skylake) with "June 2017" LAMMPS built with Intel Parallel Studio
2017 update 2. Results are with 1 MPI task per physical core. See
*src/INTEL/TEST/README* for the raw simulation rates and
``src/INTEL/TEST/README`` for the raw simulation rates and
instructions to reproduce.
----------
@ -82,9 +82,9 @@ order of operations compared to LAMMPS without acceleration:
* The *newton* setting applies to all atoms, not just atoms shared
between MPI tasks
* Vectorization can change the order for adding pairwise forces
* When using the -DLMP_USE_MKL_RNG define (all included intel optimized
* When using the ``-DLMP_USE_MKL_RNG`` define (all included Intel-optimized
makefiles do) at build time, the random number generator for
dissipative particle dynamics (pair style dpd/intel) uses the Mersenne
dissipative particle dynamics (``pair style dpd/intel``) uses the Mersenne
Twister generator included in the Intel MKL library (that should be
more robust than the default Marsaglia random number generator)
@ -106,36 +106,36 @@ LAMMPS should be built with the INTEL package installed.
Simulations should be run with 1 MPI task per physical *core*,
not *hardware thread*\ .
* Edit src/MAKE/OPTIONS/Makefile.intel_cpu_intelmpi as necessary.
* Set the environment variable KMP_BLOCKTIME=0
* "-pk intel 0 omp $t -sf intel" added to LAMMPS command-line
* $t should be 2 for Intel Xeon CPUs and 2 or 4 for Intel Xeon Phi
* Edit ``src/MAKE/OPTIONS/Makefile.intel_cpu_intelmpi`` as necessary.
* Set the environment variable ``KMP_BLOCKTIME=0``
* ``-pk intel 0 omp $t -sf intel`` added to LAMMPS command-line
* ``$t`` should be 2 for Intel Xeon CPUs and 2 or 4 for Intel Xeon Phi
* For some of the simple 2-body potentials without long-range
electrostatics, performance and scalability can be better with
the "newton off" setting added to the input script
* For simulations on higher node counts, add "processors \* \* \* grid
numa" to the beginning of the input script for better scalability
* If using *kspace_style pppm* in the input script, add
"kspace_modify diff ad" for better performance
the ``newton off`` setting added to the input script
* For simulations on higher node counts, add ``processors * * * grid
numa`` to the beginning of the input script for better scalability
* If using ``kspace_style pppm`` in the input script, add
``kspace_modify diff ad`` for better performance
For Intel Xeon Phi CPUs:
* Runs should be performed using MCDRAM.
For simulations using *kspace_style pppm* on Intel CPUs supporting
For simulations using ``kspace_style pppm`` on Intel CPUs supporting
AVX-512:
* Add "kspace_modify diff ad" to the input script
* Add ``kspace_modify diff ad`` to the input script
* The command-line option should be changed to
"-pk intel 0 omp $r lrt yes -sf intel" where $r is the number of
``-pk intel 0 omp $r lrt yes -sf intel`` where ``$r`` is the number of
threads minus 1.
* Do not use thread affinity (set KMP_AFFINITY=none)
* The "newton off" setting may provide better scalability
* Do not use thread affinity (set ``KMP_AFFINITY=none``)
* The ``newton off`` setting may provide better scalability
For Intel Xeon Phi co-processors (Offload):
* Edit src/MAKE/OPTIONS/Makefile.intel_co-processor as necessary
* "-pk intel N omp 1" added to command-line where N is the number of
* Edit ``src/MAKE/OPTIONS/Makefile.intel_co-processor`` as necessary
* ``-pk intel N omp 1`` added to command-line where ``N`` is the number of
co-processors per node.
----------
@ -209,7 +209,7 @@ See the :ref:`Build extras <intel>` page for
instructions. Some additional details are covered here.
For building with make, several example Makefiles for building with
the Intel compiler are included with LAMMPS in the src/MAKE/OPTIONS/
the Intel compiler are included with LAMMPS in the ``src/MAKE/OPTIONS/``
directory:
.. code-block:: bash
@ -239,35 +239,35 @@ However, if you do not have co-processors on your system, building
without offload support will produce a smaller binary.
The general requirements for Makefiles with the INTEL package
are as follows. When using Intel compilers, "-restrict" is required
and "-qopenmp" is highly recommended for CCFLAGS and LINKFLAGS.
CCFLAGS should include "-DLMP_INTEL_USELRT" (unless POSIX Threads
are not supported in the build environment) and "-DLMP_USE_MKL_RNG"
are as follows. When using Intel compilers, ``-restrict`` is required
and ``-qopenmp`` is highly recommended for ``CCFLAGS`` and ``LINKFLAGS``.
``CCFLAGS`` should include ``-DLMP_INTEL_USELRT`` (unless POSIX Threads
are not supported in the build environment) and ``-DLMP_USE_MKL_RNG``
(unless Intel Math Kernel Library (MKL) is not available in the build
environment). For Intel compilers, LIB should include "-ltbbmalloc"
or if the library is not available, "-DLMP_INTEL_NO_TBB" can be added
to CCFLAGS. For builds supporting offload, "-DLMP_INTEL_OFFLOAD" is
required for CCFLAGS and "-qoffload" is required for LINKFLAGS. Other
recommended CCFLAG options for best performance are "-O2 -fno-alias
-ansi-alias -qoverride-limits fp-model fast=2 -no-prec-div".
environment). For Intel compilers, ``LIB`` should include ``-ltbbmalloc``
or if the library is not available, ``-DLMP_INTEL_NO_TBB`` can be added
to ``CCFLAGS``. For builds supporting offload, ``-DLMP_INTEL_OFFLOAD`` is
required for ``CCFLAGS`` and ``-qoffload`` is required for ``LINKFLAGS``. Other
recommended ``CCFLAGS`` options for best performance are ``-O2 -fno-alias
-ansi-alias -qoverride-limits -fp-model fast=2 -no-prec-div``.
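
Collected into a single build command, these settings might look as
follows; this is only a sketch (the provided
``src/MAKE/OPTIONS/Makefile.intel_cpu_intelmpi`` already contains a tuned
set of flags):

.. code-block:: bash

   cd src
   make intel_cpu_intelmpi \
     CCFLAGS="-O2 -fno-alias -ansi-alias -qoverride-limits -fp-model fast=2 -no-prec-div \
              -qopenmp -restrict -DLMP_INTEL_USELRT -DLMP_USE_MKL_RNG" \
     LINKFLAGS="-qopenmp" LIB="-ltbbmalloc"
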
.. note::
See the src/INTEL/README file for additional flags that
See the ``src/INTEL/README`` file for additional flags that
might be needed for best performance on Intel server processors
code-named "Skylake".
.. note::
The vectorization and math capabilities can differ depending on
the CPU. For Intel compilers, the "-x" flag specifies the type of
processor for which to optimize. "-xHost" specifies that the compiler
the CPU. For Intel compilers, the ``-x`` flag specifies the type of
processor for which to optimize. ``-xHost`` specifies that the compiler
should build for the processor used for compiling. For Intel Xeon Phi
x200 series processors, this option is "-xMIC-AVX512". For fourth
generation Intel Xeon (v4/Broadwell) processors, "-xCORE-AVX2" should
be used. For older Intel Xeon processors, "-xAVX" will perform best
x200 series processors, this option is ``-xMIC-AVX512``. For fourth
generation Intel Xeon (v4/Broadwell) processors, ``-xCORE-AVX2`` should
be used. For older Intel Xeon processors, ``-xAVX`` will perform best
in general for the different simulations in LAMMPS. The default
in most of the example Makefiles is to use "-xHost", however this
in most of the example Makefiles is to use ``-xHost``; however, this
should not be used when cross-compiling.
Running LAMMPS with the INTEL package
@ -304,11 +304,11 @@ almost all cases.
uniform. Unless disabled at build time, affinity for MPI tasks and
OpenMP threads on the host (CPU) will be set by default on the host
*when using offload to a co-processor*\ . In this case, it is unnecessary
to use other methods to control affinity (e.g. taskset, numactl,
I_MPI_PIN_DOMAIN, etc.). This can be disabled with the *no_affinity*
to use other methods to control affinity (e.g. ``taskset``, ``numactl``,
``I_MPI_PIN_DOMAIN``, etc.). This can be disabled with the *no_affinity*
option to the :doc:`package intel <package>` command or by disabling the
option at build time (by adding -DINTEL_OFFLOAD_NOAFFINITY to the
CCFLAGS line of your Makefile). Disabling this option is not
option at build time (by adding ``-DINTEL_OFFLOAD_NOAFFINITY`` to the
``CCFLAGS`` line of your Makefile). Disabling this option is not
recommended, especially when running on a machine with Intel
Hyper-Threading technology disabled.
@ -316,7 +316,7 @@ Run with the INTEL package from the command line
"""""""""""""""""""""""""""""""""""""""""""""""""""""
To enable INTEL optimizations for all available styles used in
the input script, the "-sf intel" :doc:`command-line switch <Run_options>` can be used without any requirement for
the input script, the ``-sf intel`` :doc:`command-line switch <Run_options>` can be used without any requirement for
editing the input script. This switch will automatically append
"intel" to styles that support it. It also invokes a default command:
:doc:`package intel 1 <package>`. This package command is used to set
@ -329,15 +329,15 @@ will be used with automatic balancing of work between the CPU and the
co-processor.
You can specify different options for the INTEL package by using
the "-pk intel Nphi" :doc:`command-line switch <Run_options>` with
keyword/value pairs as specified in the documentation. Here, Nphi = #
the ``-pk intel Nphi`` :doc:`command-line switch <Run_options>` with
keyword/value pairs as specified in the documentation. Here, ``Nphi`` = #
of Xeon Phi co-processors/node (ignored without offload
support). Common options to the INTEL package include *omp* to
override any OMP_NUM_THREADS setting and specify the number of OpenMP
override any ``OMP_NUM_THREADS`` setting and specify the number of OpenMP
threads, *mode* to set the floating-point precision mode, and *lrt* to
enable Long-Range Thread mode as described below. See the :doc:`package intel <package>` command for details, including the default values
used for all its options if not specified, and how to set the number
of OpenMP threads via the OMP_NUM_THREADS environment variable if
of OpenMP threads via the ``OMP_NUM_THREADS`` environment variable if
desired.
Examples (see documentation for your MPI/Machine for differences in
@ -345,8 +345,13 @@ launching MPI applications):
.. code-block:: bash
mpirun -np 72 -ppn 36 lmp_machine -sf intel -in in.script # 2 nodes, 36 MPI tasks/node, $OMP_NUM_THREADS OpenMP Threads
mpirun -np 72 -ppn 36 lmp_machine -sf intel -in in.script -pk intel 0 omp 2 mode double # Don't use any co-processors that might be available, use 2 OpenMP threads for each task, use double precision
# 2 nodes, 36 MPI tasks/node, $OMP_NUM_THREADS OpenMP Threads
mpirun -np 72 -ppn 36 lmp_machine -sf intel -in in.script
# Don't use any co-processors that might be available,
# use 2 OpenMP threads for each task, use double precision
mpirun -np 72 -ppn 36 lmp_machine -sf intel -in in.script \
-pk intel 0 omp 2 mode double
Or run with the INTEL package by editing an input script
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
@ -386,19 +391,19 @@ Long-Range Thread (LRT) mode is an option to the :doc:`package intel <package>`
with SMT. It generates an extra pthread for each MPI task. The thread
is dedicated to performing some of the PPPM calculations and MPI
communications. This feature requires setting the pre-processor flag
-DLMP_INTEL_USELRT in the makefile when compiling LAMMPS. It is unset
in the default makefiles (\ *Makefile.mpi* and *Makefile.serial*\ ) but
``-DLMP_INTEL_USELRT`` in the makefile when compiling LAMMPS. It is unset
in the default makefiles (``Makefile.mpi`` and ``Makefile.serial``) but
it is set in all makefiles tuned for the INTEL package. On Intel
Xeon Phi x200 series CPUs, the LRT feature will likely improve
performance, even on a single node. On Intel Xeon processors, using
this mode might result in better performance when using multiple nodes,
depending on the specific machine configuration. To enable LRT mode,
specify that the number of OpenMP threads is one less than would
normally be used for the run and add the "lrt yes" option to the "-pk"
normally be used for the run and add the ``lrt yes`` option to the ``-pk``
command-line switch or ``package intel`` command. For example, if a run
would normally perform best with ``-pk intel 0 omp 4``, instead use
"-pk intel 0 omp 3 lrt yes". When using LRT, you should set the
environment variable "KMP_AFFINITY=none". LRT mode is not supported
``-pk intel 0 omp 3 lrt yes``. When using LRT, you should set the
environment variable ``KMP_AFFINITY=none``. LRT mode is not supported
when using offload.
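
A hypothetical launch line with LRT enabled might look like this (the task
count, executable, and input script names are placeholders):

.. code-block:: bash

   export KMP_AFFINITY=none
   # 3 OpenMP threads per MPI task plus the extra LRT thread, instead of "omp 4"
   mpirun -np 36 lmp_machine -sf intel -pk intel 0 omp 3 lrt yes -in in.script
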
.. note::
@ -411,12 +416,12 @@ Not all styles are supported in the INTEL package. You can mix
the INTEL package with styles from the :doc:`OPT <Speed_opt>`
package or the :doc:`OPENMP package <Speed_omp>`. Of course, this
requires that these packages were installed at build time. This can be
performed automatically by using "-sf hybrid intel opt" or "-sf hybrid
intel omp" command-line options. Alternatively, the "opt" and "omp"
performed automatically by using ``-sf hybrid intel opt`` or ``-sf hybrid
intel omp`` command-line options. Alternatively, the "opt" and "omp"
suffixes can be appended manually in the input script. For the latter,
the :doc:`package omp <package>` command must be in the input script or
the "-pk omp Nt" :doc:`command-line switch <Run_options>` must be used
where Nt is the number of OpenMP threads. The number of OpenMP threads
the ``-pk omp Nt`` :doc:`command-line switch <Run_options>` must be used
where ``Nt`` is the number of OpenMP threads. The number of OpenMP threads
should not be set differently for the different packages. Note that
the :doc:`suffix hybrid intel omp <suffix>` command can also be used
within the input script to automatically append the "omp" suffix to
@ -436,7 +441,7 @@ alternative to LRT mode and the two cannot be used together.
Currently, when using Intel MPI with Intel Xeon Phi x200 series
CPUs, better performance might be obtained by setting the
environment variable "I_MPI_SHM_LMT=shm" for Linux kernels that do
environment variable ``I_MPI_SHM_LMT=shm`` for Linux kernels that do
not yet have full support for AVX-512. Runs on Intel Xeon Phi x200
series processors will always perform better using MCDRAM. Please
consult your system documentation for the best approach to specify
@ -515,7 +520,7 @@ per MPI task. Additionally, an offload timing summary is printed at
the end of each run. When offloading, the frequency for :doc:`atom sorting <atom_modify>` is changed to 1 so that the per-atom data is
effectively sorted at every rebuild of the neighbor lists. All the
available co-processor threads on each Phi will be divided among MPI
tasks, unless the *tptask* option of the "-pk intel" :doc:`command-line switch <Run_options>` is used to limit the co-processor threads per
tasks, unless the ``tptask`` option of the ``-pk intel`` :doc:`command-line switch <Run_options>` is used to limit the co-processor threads per
MPI task.
Restrictions
@ -48,7 +48,7 @@ version 23 November 2023 and Kokkos version 4.2.
Kokkos requires using a compiler that supports the C++17 standard. For
some compilers, it may be necessary to add a flag to enable C++17 support.
For example, the GNU compiler uses the -std=c++17 flag. For a list of
For example, the GNU compiler uses the ``-std=c++17`` flag. For a list of
compilers that have been tested with the Kokkos library, see the
`requirements document of the Kokkos Wiki
<https://kokkos.github.io/kokkos-core-wiki/requirements.html>`_.
@ -111,14 +111,21 @@ for CPU acceleration, assuming one or more 16-core nodes.
.. code-block:: bash
mpirun -np 16 lmp_kokkos_mpi_only -k on -sf kk -in in.lj # 1 node, 16 MPI tasks/node, no multi-threading
mpirun -np 2 -ppn 1 lmp_kokkos_omp -k on t 16 -sf kk -in in.lj # 2 nodes, 1 MPI task/node, 16 threads/task
mpirun -np 2 lmp_kokkos_omp -k on t 8 -sf kk -in in.lj # 1 node, 2 MPI tasks/node, 8 threads/task
mpirun -np 32 -ppn 4 lmp_kokkos_omp -k on t 4 -sf kk -in in.lj # 8 nodes, 4 MPI tasks/node, 4 threads/task
# 1 node, 16 MPI tasks/node, no multi-threading
mpirun -np 16 lmp_kokkos_mpi_only -k on -sf kk -in in.lj
To run using the KOKKOS package, use the "-k on", "-sf kk" and "-pk
kokkos" :doc:`command-line switches <Run_options>` in your mpirun
command. You must use the "-k on" :doc:`command-line switch <Run_options>` to enable the KOKKOS package. It takes
# 2 nodes, 1 MPI task/node, 16 threads/task
mpirun -np 2 -ppn 1 lmp_kokkos_omp -k on t 16 -sf kk -in in.lj
# 1 node, 2 MPI tasks/node, 8 threads/task
mpirun -np 2 lmp_kokkos_omp -k on t 8 -sf kk -in in.lj
# 8 nodes, 4 MPI tasks/node, 4 threads/task
mpirun -np 32 -ppn 4 lmp_kokkos_omp -k on t 4 -sf kk -in in.lj
To run using the KOKKOS package, use the ``-k on``, ``-sf kk`` and ``-pk
kokkos`` :doc:`command-line switches <Run_options>` in your ``mpirun``
command. You must use the ``-k on`` :doc:`command-line switch <Run_options>` to enable the KOKKOS package. It takes
additional arguments for hardware settings appropriate to your system.
For OpenMP use:
@ -126,18 +133,18 @@ For OpenMP use:
-k on t Nt
The "t Nt" option specifies how many OpenMP threads per MPI task to
use with a node. The default is Nt = 1, which is MPI-only mode. Note
The ``t Nt`` option specifies how many OpenMP threads per MPI task to
use with a node. The default is ``Nt`` = 1, which is MPI-only mode. Note
that the product of MPI tasks \* OpenMP threads/task should not exceed
the physical number of cores (on a node), otherwise performance will
suffer. If Hyper-Threading (HT) is enabled, then the product of MPI
tasks \* OpenMP threads/task should not exceed the physical number of
cores \* hardware threads. The "-k on" switch also issues a
"package kokkos" command (with no additional arguments) which sets
cores \* hardware threads. The ``-k on`` switch also issues a
``package kokkos`` command (with no additional arguments) which sets
various KOKKOS options to default values, as discussed on the
:doc:`package <package>` command doc page.
The "-sf kk" :doc:`command-line switch <Run_options>` will automatically
The ``-sf kk`` :doc:`command-line switch <Run_options>` will automatically
append the "/kk" suffix to styles that support it. In this manner no
modification to the input script is needed. Alternatively, one can run
with the KOKKOS package by editing the input script as described
@ -146,20 +153,22 @@ below.
.. note::
When using a single OpenMP thread, the Kokkos Serial back end (i.e.
Makefile.kokkos_mpi_only) will give better performance than the OpenMP
back end (i.e. Makefile.kokkos_omp) because some of the overhead to make
``Makefile.kokkos_mpi_only``) will give better performance than the OpenMP
back end (i.e. ``Makefile.kokkos_omp``) because some of the overhead to make
the code thread-safe is removed.
.. note::
Use the "-pk kokkos" :doc:`command-line switch <Run_options>` to
Use the ``-pk kokkos`` :doc:`command-line switch <Run_options>` to
change the default :doc:`package kokkos <package>` options. See its doc
page for details and default settings. Experimenting with its options
can provide a speed-up for specific calculations. For example:
.. code-block:: bash
mpirun -np 16 lmp_kokkos_mpi_only -k on -sf kk -pk kokkos newton on neigh half comm no -in in.lj # Newton on, Half neighbor list, non-threaded comm
# Newton on, Half neighbor list, non-threaded comm
mpirun -np 16 lmp_kokkos_mpi_only -k on -sf kk \
-pk kokkos newton on neigh half comm no -in in.lj
If the :doc:`newton <newton>` command is used in the input
script, it can also override the Newton flag defaults.
@ -172,7 +181,7 @@ small numbers of threads (i.e. 8 or less) but does increase memory
footprint and is not scalable to large numbers of threads. An
alternative to data duplication is to use thread-level atomic operations
which do not require data duplication. The use of atomic operations can
be enforced by compiling LAMMPS with the "-DLMP_KOKKOS_USE_ATOMICS"
be enforced by compiling LAMMPS with the ``-DLMP_KOKKOS_USE_ATOMICS``
pre-processor flag. Most but not all Kokkos-enabled pair_styles support
data duplication. Alternatively, full neighbor lists avoid the need for
duplication or atomic operations but require more compute operations per
@ -190,10 +199,13 @@ they do not migrate during a simulation.
If you are not certain MPI tasks are being bound (check the defaults
for your MPI installation), binding can be forced with these flags:
.. parsed-literal::
.. code-block:: bash
OpenMPI 1.8: mpirun -np 2 --bind-to socket --map-by socket ./lmp_openmpi ...
Mvapich2 2.0: mpiexec -np 2 --bind-to socket --map-by socket ./lmp_mvapich ...
# OpenMPI 1.8
mpirun -np 2 --bind-to socket --map-by socket ./lmp_openmpi ...
# Mvapich2 2.0
mpiexec -np 2 --bind-to socket --map-by socket ./lmp_mvapich ...
For binding threads with KOKKOS OpenMP, use thread affinity environment
variables to force binding. With OpenMP 3.1 (gcc 4.7 or later, intel 12
@ -222,15 +234,24 @@ Examples of mpirun commands that follow these rules are shown below.
.. code-block:: bash
# Running on an Intel KNL node with 68 cores (272 threads/node via 4x hardware threading):
mpirun -np 64 lmp_kokkos_phi -k on t 4 -sf kk -in in.lj # 1 node, 64 MPI tasks/node, 4 threads/task
mpirun -np 66 lmp_kokkos_phi -k on t 4 -sf kk -in in.lj # 1 node, 66 MPI tasks/node, 4 threads/task
mpirun -np 32 lmp_kokkos_phi -k on t 8 -sf kk -in in.lj # 1 node, 32 MPI tasks/node, 8 threads/task
mpirun -np 512 -ppn 64 lmp_kokkos_phi -k on t 4 -sf kk -in in.lj # 8 nodes, 64 MPI tasks/node, 4 threads/task
# Running on an Intel KNL node with 68 cores
# (272 threads/node via 4x hardware threading):

# 1 node, 64 MPI tasks/node, 4 threads/task
mpirun -np 64 lmp_kokkos_phi -k on t 4 -sf kk -in in.lj

# 1 node, 66 MPI tasks/node, 4 threads/task
mpirun -np 66 lmp_kokkos_phi -k on t 4 -sf kk -in in.lj

# 1 node, 32 MPI tasks/node, 8 threads/task
mpirun -np 32 lmp_kokkos_phi -k on t 8 -sf kk -in in.lj

# 8 nodes, 64 MPI tasks/node, 4 threads/task
mpirun -np 512 -ppn 64 lmp_kokkos_phi -k on t 4 -sf kk -in in.lj

The -np setting of the mpirun command sets the number of MPI
tasks/node. The "-k on t Nt" command-line switch sets the number of
threads/task as Nt. The product of these two values should be N, i.e.
The ``-np`` setting of the mpirun command sets the number of MPI
tasks/node. The ``-k on t Nt`` command-line switch sets the number of
threads/task as ``Nt``. The product of these two values should be N, i.e.
256 or 264.
.. note::
@ -240,7 +261,7 @@ threads/task as Nt. The product of these two values should be N, i.e.
flag to "on" for both pairwise and bonded interactions. This will
typically be best for many-body potentials. For simpler pairwise
potentials, it may be faster to use a "full" neighbor list with
Newton flag to "off". Use the "-pk kokkos" :doc:`command-line switch
Newton flag to "off". Use the ``-pk kokkos`` :doc:`command-line switch
<Run_options>` to change the default :doc:`package kokkos <package>`
options. See its documentation page for details and default
settings. Experimenting with its options can provide a speed-up for
@ -248,8 +269,12 @@ threads/task as Nt. The product of these two values should be N, i.e.
.. code-block:: bash
mpirun -np 64 lmp_kokkos_phi -k on t 4 -sf kk -pk kokkos comm host -in in.reax # Newton on, half neighbor list, threaded comm
mpirun -np 64 lmp_kokkos_phi -k on t 4 -sf kk -pk kokkos newton off neigh full comm no -in in.lj # Newton off, full neighbor list, non-threaded comm
# Newton on, half neighbor list, threaded comm
mpirun -np 64 lmp_kokkos_phi -k on t 4 -sf kk -pk kokkos comm host -in in.reax
# Newton off, full neighbor list, non-threaded comm
mpirun -np 64 lmp_kokkos_phi -k on t 4 -sf kk \
-pk kokkos newton off neigh full comm no -in in.lj
.. note::
@ -266,8 +291,8 @@ threads/task as Nt. The product of these two values should be N, i.e.
Running on GPUs
^^^^^^^^^^^^^^^
Use the "-k" :doc:`command-line switch <Run_options>` to specify the
number of GPUs per node. Typically the -np setting of the mpirun command
Use the ``-k`` :doc:`command-line switch <Run_options>` to specify the
number of GPUs per node. Typically the ``-np`` setting of the ``mpirun`` command
should set the number of MPI tasks/node to be equal to the number of
physical GPUs on the node. You can assign multiple MPI tasks to the same
GPU with the KOKKOS package, but this is usually only faster if some
@ -290,8 +315,11 @@ one or more nodes, each with two GPUs:
.. code-block:: bash
mpirun -np 2 lmp_kokkos_cuda_openmpi -k on g 2 -sf kk -in in.lj # 1 node, 2 MPI tasks/node, 2 GPUs/node
mpirun -np 32 -ppn 2 lmp_kokkos_cuda_openmpi -k on g 2 -sf kk -in in.lj # 16 nodes, 2 MPI tasks/node, 2 GPUs/node (32 GPUs total)
# 1 node, 2 MPI tasks/node, 2 GPUs/node
mpirun -np 2 lmp_kokkos_cuda_openmpi -k on g 2 -sf kk -in in.lj
# 16 nodes, 2 MPI tasks/node, 2 GPUs/node (32 GPUs total)
mpirun -np 32 -ppn 2 lmp_kokkos_cuda_openmpi -k on g 2 -sf kk -in in.lj
.. note::
@ -303,7 +331,7 @@ one or more nodes, each with two GPUs:
neighbor lists and setting the Newton flag to "on" may be faster. For
many pair styles, setting the neighbor binsize equal to twice the CPU
default value will give speedup, which is the default when running on
GPUs. Use the "-pk kokkos" :doc:`command-line switch <Run_options>`
GPUs. Use the ``-pk kokkos`` :doc:`command-line switch <Run_options>`
to change the default :doc:`package kokkos <package>` options. See
its documentation page for details and default
settings. Experimenting with its options can provide a speed-up for
@ -311,7 +339,9 @@ one or more nodes, each with two GPUs:
.. code-block:: bash
mpirun -np 2 lmp_kokkos_cuda_openmpi -k on g 2 -sf kk -pk kokkos newton on neigh half binsize 2.8 -in in.lj # Newton on, half neighbor list, set binsize = neighbor ghost cutoff
# Newton on, half neighbor list, set binsize = neighbor ghost cutoff
mpirun -np 2 lmp_kokkos_cuda_openmpi -k on g 2 -sf kk \
-pk kokkos newton on neigh half binsize 2.8 -in in.lj
.. note::
@ -329,7 +359,7 @@ one or more nodes, each with two GPUs:
more), the creation of the atom map (required for molecular systems)
on the GPU can become very slow or run out of GPU memory, and
thus slow down the whole calculation or cause a crash. You can use
the "-pk kokkos atom/map no" :doc:`command-line switch <Run_options>`
the ``-pk kokkos atom/map no`` :doc:`command-line switch <Run_options>`
or the :doc:`package kokkos atom/map no <package>` command to create
the atom map on the CPU instead.
@ -346,20 +376,20 @@ one or more nodes, each with two GPUs:
.. note::
To get an accurate timing breakdown between time spent in pair,
kspace, etc., you must set the environment variable CUDA_LAUNCH_BLOCKING=1.
kspace, etc., you must set the environment variable ``CUDA_LAUNCH_BLOCKING=1``.
However, this will reduce performance and is not recommended for production runs.
Run with the KOKKOS package by editing an input script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Alternatively the effect of the "-sf" or "-pk" switches can be
Alternatively the effect of the ``-sf`` or ``-pk`` switches can be
duplicated by adding the :doc:`package kokkos <package>` or :doc:`suffix kk <suffix>` commands to your input script.
The discussion above for building LAMMPS with the KOKKOS package, the
``mpirun`` or ``mpiexec`` command, and setting appropriate thread
properties are the same.
You must still use the "-k on" :doc:`command-line switch <Run_options>`
You must still use the ``-k on`` :doc:`command-line switch <Run_options>`
to enable the KOKKOS package, and specify its additional arguments for
hardware options appropriate to your system, as documented above.
@ -378,7 +408,7 @@ wish to change any of its option defaults, as set by the "-k on"
With the KOKKOS package, both OpenMP multi-threading and GPUs can be
compiled and used together in a few special cases. In the makefile for
the conventional build, the KOKKOS_DEVICES variable must include both,
the conventional build, the ``KOKKOS_DEVICES`` variable must include both
"Cuda" and "OpenMP", as is the case for ``/src/MAKE/OPTIONS/Makefile.kokkos_cuda_mpi``.
.. code-block:: bash
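
   # (illustrative) the corresponding setting in the machine makefile:
   KOKKOS_DEVICES = Cuda,OpenMP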
@ -390,14 +420,14 @@ in the ``kokkos-cuda.cmake`` CMake preset file.
.. code-block:: bash
cmake ../cmake -DKokkos_ENABLE_CUDA=yes -DKokkos_ENABLE_OPENMP=yes
cmake -DKokkos_ENABLE_CUDA=yes -DKokkos_ENABLE_OPENMP=yes ../cmake
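
Alternatively, the preset file mentioned above can be loaded directly (a
sketch; the preset may set additional options such as the GPU architecture):

.. code-block:: bash

   cmake -C ../cmake/presets/kokkos-cuda.cmake ../cmake
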
The suffix "/kk" is equivalent to "/kk/device", and for Kokkos CUDA,
using the "-sf kk" in the command line gives the default CUDA version
using the ``-sf kk`` in the command line gives the default CUDA version
everywhere. However, if the "/kk/host" suffix is added to a specific
style in the input script, the Kokkos OpenMP (CPU) version of that
specific style will be used instead. Set the number of OpenMP threads
as "t Nt" and the number of GPUs as "g Ng"
as ``t Nt`` and the number of GPUs as ``g Ng``
.. parsed-literal::
@ -409,7 +439,7 @@ For example, the command to run with 1 GPU and 8 OpenMP threads is then:
mpiexec -np 1 lmp_kokkos_cuda_openmpi -in in.lj -k on g 1 t 8 -sf kk
Conversely, if the "-sf kk/host" is used in the command line and then
Conversely, if the ``-sf kk/host`` is used in the command line and then
the "/kk" or "/kk/device" suffix is added to a specific style in your
input script, then only that specific style will run on the GPU while
everything else will run on the CPU in OpenMP mode. Note that the
@ -418,11 +448,11 @@ special case:
A kspace style and/or molecular topology (bonds, angles, etc.) running
on the host CPU can overlap with a pair style running on the
GPU. First compile with "--default-stream per-thread" added to CCFLAGS
GPU. First compile with ``--default-stream per-thread`` added to ``CCFLAGS``
in the Kokkos CUDA Makefile. Then explicitly use the "/kk/host"
suffix for kspace and bonds, angles, etc. in the input file and the
"kk" suffix (equal to "kk/device") on the command line. Also make
sure the environment variable CUDA_LAUNCH_BLOCKING is not set to "1"
sure the environment variable ``CUDA_LAUNCH_BLOCKING`` is not set to "1"
so CPU/GPU overlap can occur.
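
A sketch of this special case (all names are placeholders; the input
script would carry the "/kk/host" suffix on the kspace style, e.g.
``kspace_style pppm/kk/host 1.0e-4``):

.. code-block:: bash

   # pair style runs on the GPU; PPPM and bonded terms overlap on the host CPU
   unset CUDA_LAUNCH_BLOCKING
   mpiexec -np 1 lmp_kokkos_cuda_openmpi -in in.script -k on g 1 t 8 -sf kk
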
Performance to expect
@ -28,32 +28,39 @@ These examples assume one or more 16-core nodes.
.. code-block:: bash
env OMP_NUM_THREADS=16 lmp_omp -sf omp -in in.script # 1 MPI task, 16 threads according to OMP_NUM_THREADS
lmp_mpi -sf omp -in in.script # 1 MPI task, no threads, optimized kernels
mpirun -np 4 lmp_omp -sf omp -pk omp 4 -in in.script # 4 MPI tasks, 4 threads/task
mpirun -np 32 -ppn 4 lmp_omp -sf omp -pk omp 4 -in in.script # 8 nodes, 4 MPI tasks/node, 4 threads/task
# 1 MPI task, 16 threads according to OMP_NUM_THREADS
env OMP_NUM_THREADS=16 lmp_omp -sf omp -in in.script
# 1 MPI task, no threads, optimized kernels
lmp_mpi -sf omp -in in.script
# 4 MPI tasks, 4 threads/task
mpirun -np 4 lmp_omp -sf omp -pk omp 4 -in in.script
# 8 nodes, 4 MPI tasks/node, 4 threads/task
mpirun -np 32 -ppn 4 lmp_omp -sf omp -pk omp 4 -in in.script
The ``mpirun`` or ``mpiexec`` command sets the total number of MPI tasks
used by LAMMPS (one or multiple per compute node) and the number of MPI
tasks used per node. E.g. the mpirun command in MPICH does this via
its -np and -ppn switches. Ditto for OpenMPI via -np and -npernode.
its ``-np`` and ``-ppn`` switches. Ditto for OpenMPI via ``-np`` and ``-npernode``.
You need to choose how many OpenMP threads per MPI task will be used
by the OPENMP package. Note that the product of MPI tasks \*
threads/task should not exceed the physical number of cores (on a
node), otherwise performance will suffer.
As in the lines above, use the "-sf omp" :doc:`command-line switch <Run_options>`, which will automatically append "omp" to
styles that support it. The "-sf omp" switch also issues a default
As in the lines above, use the ``-sf omp`` :doc:`command-line switch <Run_options>`, which will automatically append "omp" to
styles that support it. The ``-sf omp`` switch also issues a default
:doc:`package omp 0 <package>` command, which will set the number of
threads per MPI task via the OMP_NUM_THREADS environment variable.
threads per MPI task via the ``OMP_NUM_THREADS`` environment variable.
You can also use the "-pk omp Nt" :doc:`command-line switch <Run_options>`, to explicitly set Nt = # of OpenMP threads
You can also use the ``-pk omp Nt`` :doc:`command-line switch <Run_options>`, to explicitly set ``Nt`` = # of OpenMP threads
per MPI task to use, as well as additional options. Its syntax is the
same as the :doc:`package omp <package>` command whose page gives
details, including the default values used if it is not specified. It
also gives more details on how to set the number of threads via the
OMP_NUM_THREADS environment variable.
``OMP_NUM_THREADS`` environment variable.
Or run with the OPENMP package by editing an input script
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
@ -71,7 +78,7 @@ Use the :doc:`suffix omp <suffix>` command, or you can explicitly add an
You must also use the :doc:`package omp <package>` command to enable the
OPENMP package. When you do this you also specify how many threads
per MPI task to use. The command page explains other options and
how to set the number of threads via the OMP_NUM_THREADS environment
how to set the number of threads via the ``OMP_NUM_THREADS`` environment
variable.
Speed-up to expect
@ -80,23 +80,30 @@ it provides, follow these general steps. Details vary from package to
package and are explained in the individual accelerator doc pages,
listed above:
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
+-----------------------------------------------------------+---------------------------------------------+
| build the accelerator library | only for GPU package |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| install the accelerator package | make yes-opt, make yes-intel, etc |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| add compile/link flags to Makefile.machine in src/MAKE | only for INTEL, KOKKOS, OPENMP, OPT packages |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| re-build LAMMPS | make machine |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| prepare and test a regular LAMMPS simulation | lmp_machine -in in.script; mpirun -np 32 lmp_machine -in in.script |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| enable specific accelerator support via '-k on' :doc:`command-line switch <Run_options>`, | only needed for KOKKOS package |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| set any needed options for the package via "-pk" :doc:`command-line switch <Run_options>` or :doc:`package <package>` command, | only if defaults need to be changed |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
| use accelerated styles in your input via "-sf" :doc:`command-line switch <Run_options>` or :doc:`suffix <suffix>` command | lmp_machine -in in.script -sf gpu |
+--------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------+
+-----------------------------------------------------------+---------------------------------------------+
| install the accelerator package | ``make yes-opt``, ``make yes-intel``, etc |
+-----------------------------------------------------------+---------------------------------------------+
| add compile/link flags to ``Makefile.machine`` | only for INTEL, KOKKOS, OPENMP, |
| in ``src/MAKE`` | OPT packages |
+-----------------------------------------------------------+---------------------------------------------+
| re-build LAMMPS | ``make machine`` |
+-----------------------------------------------------------+---------------------------------------------+
| prepare and test a regular LAMMPS simulation | ``lmp_machine -in in.script;`` |
| | ``mpirun -np 32 lmp_machine -in in.script`` |
+-----------------------------------------------------------+---------------------------------------------+
| enable specific accelerator support via ``-k on`` | only needed for KOKKOS package |
| :doc:`command-line switch <Run_options>` | |
+-----------------------------------------------------------+---------------------------------------------+
| set any needed options for the package via ``-pk`` | only if defaults need to be changed |
| :doc:`command-line switch <Run_options>` or | |
| :doc:`package <package>` command | |
+-----------------------------------------------------------+---------------------------------------------+
| use accelerated styles in your input via ``-sf`` | ``lmp_machine -in in.script -sf gpu`` |
| :doc:`command-line switch <Run_options>` or | |
| :doc:`suffix <suffix>` command | |
+-----------------------------------------------------------+---------------------------------------------+
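
As an illustration, the steps in the table might look like this for the
OPENMP package (the makefile target, executable, and input script names
are placeholders):

.. code-block:: bash

   cd src
   make yes-openmp                            # install the accelerator package
   make omp                                   # re-build LAMMPS
   mpirun -np 4 ./lmp_omp -sf omp -pk omp 4 -in in.script
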
Note that the first 4 steps can be done as a single command with
suitable make command invocations. This is discussed on the
@ -275,12 +275,17 @@ else:
mathjax_path = 'mathjax/es5/tex-mml-chtml.js'
# hack to enable use of \AA in :math:
# add :lammps: role for inline LAMMPS code highlight
rst_prolog = r"""
.. only:: html
:math:`\renewcommand{\AA}{\text{Å}}`
.. role:: lammps(code)
:language: LAMMPS
:class: highlight
"""
# -- Options for LaTeX output ---------------------------------------------
@ -294,6 +299,8 @@ latex_elements = {
# Additional stuff for the LaTeX preamble.
'preamble': r'''
\usepackage{afterpage}
\usepackage{xcolor}
\setcounter{tocdepth}{2}
\renewcommand{\rmdefault}{ptm} % Use Times New Roman font for \textrm
\renewcommand{\sfdefault}{phv} % Use Helvetica font for \textsf
@ -339,7 +346,13 @@ latex_elements = {
\renewcommand*\l@subsection{\@dottedtocline{2}{4.6em}{4.5em}}
}
\makeatother
'''
''',
'maketitle': r'''
\pagecolor{black}
\color{white}
\afterpage{\nopagecolor\color{black}}
\sphinxmaketitle
''',
}
# copy custom style file for tweaking index layout