Compare commits


342 Commits

Author SHA1 Message Date
83ba1c9d20 Merge pull request #3645 from akohlmey/more-backports-to-stable
More backports of fixes to stable release
2023-02-17 16:27:13 -05:00
ce10614cab backport region check move to init() function for fix gcmc and fix widom 2023-02-17 12:44:58 -05:00
facbeac052 move definition of MAXBIGINT_DOUBLE to variable.cpp 2023-02-17 12:29:17 -05:00
188ee5af15 use MAXBIGINT_DOUBLE which does not overflow when casting back to bigint 2023-02-12 04:08:11 -05:00
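For context, a minimal standalone sketch (not the LAMMPS source) of the overflow
such a constant avoids: casting the largest 64-bit integer to double rounds it up
to 2^63, which no longer fits when converted back.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    int main() {
      // (double) INT64_MAX rounds UP to 2^63 exactly, since doubles near
      // 2^63 are spaced 1024 apart; casting that back to int64_t overflows.
      const double unsafe = static_cast<double>(INT64_MAX);
      std::printf("unsafe == 2^63: %d\n", unsafe == std::ldexp(1.0, 63));
      // a safe cap in the spirit of MAXBIGINT_DOUBLE (exact value assumed):
      // the largest double strictly below 2^63 round-trips without overflow
      const double safe = std::nextafter(unsafe, 0.0);  // 2^63 - 1024
      std::printf("safe round trip: %lld\n",
                  static_cast<long long>(static_cast<int64_t>(safe)));
      return 0;
    }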
f176b8b14c consistently support special_bonds settings in pair style gauss 2023-02-10 05:09:58 -05:00
2396b2feea Fixed bugs with gauss/gpu in bonded systems, including factor_lj in forces and energies 2023-02-10 05:02:43 -05:00
4399c1b6c1 Merge pull request #3593 from akohlmey/maintenance-2022-06-23
Third round of maintenance fixes and backports for the stable release
2023-02-09 22:53:11 -05:00
fd046c8fd8 Merge branch 'maintenance' into maintenance-2022-06-23 2023-02-09 20:17:06 -05:00
09b7694601 Merge pull request #3595 from akohlmey/maintenance-many-files
Additional non-functional maintenance changes for the stable version
2023-02-09 20:09:28 -05:00
df20503434 make fallback url function available to plugin compilations 2023-02-09 08:14:23 -05:00
f4aa24a36a roll back changes for vec3_scale() and vec3_scaleadd() and use temporary vector 2023-02-08 20:33:38 -05:00
007c04bc97 correct preprocessor logic for non-Linux machines 2023-02-08 16:45:48 -05:00
418d1e16e1 recover compilation of tersoff kernels with CUDA 2023-02-08 11:17:09 -05:00
6471d781d0 recover kernel failure for tersoff with mixed and single precision 2023-02-08 09:14:37 -05:00
97ddc5917c another OpenCL bugfix attempt from Trung 2023-02-08 08:26:22 -05:00
a95ff20647 swap nvcc default arch from Maxwell to Pascal
This is to avoid deprecation warnings with CUDA 11.6 and later
2023-02-07 08:34:01 -05:00
9e0a9e2601 correct logic 2023-02-07 00:00:17 -05:00
8b34d65970 add download fallback handling 2023-02-07 00:00:07 -05:00
0a1c2bcccc fix failing unit tests with OpenCL 2023-02-06 18:40:07 -05:00
c9442c591c re-enable new neighbor lists for CUDA 12.0 and later 2023-02-05 03:01:46 -05:00
b7d316031d nullify freed pointers in list of dump data 2023-02-03 20:39:47 -05:00
361e9f3ea5 avoid illegal memory access in destructor after variables have been deleted 2023-02-03 20:26:42 -05:00
28120793b8 backport PR #3631 2023-02-02 22:21:15 -05:00
f32ce8377e change default arch in nvcc_wrapper, so it can still run with cuda 12 2023-02-01 11:35:59 -05:00
9021b8bc6a implement download fallback for traditional make build 2023-02-01 06:53:53 -05:00
838fe3020d add support for building a static lammps-shell executable with Linux/MUSL 2023-01-31 22:23:41 -05:00
b4d4dcbcbc simplify 2023-01-31 20:35:18 -05:00
52a892ec46 simplify 2023-01-31 20:32:41 -05:00
0ee3d9da5d port triclinic region vs box check from fix gcmc to fix widom 2023-01-31 20:29:18 -05:00
50afb292b0 compare region extent with box bounds for triclinic 2023-01-31 20:28:25 -05:00
275ef9da17 update n2p2 lib version for traditional make, too. 2023-01-31 20:28:15 -05:00
b6a87390a3 revert MD5 hash to current value after GitHub reversed its change 2023-01-31 20:28:04 -05:00
72178631c5 update N2P2 library to version 2.2.0 2023-01-31 20:27:57 -05:00
f8859c5fca implement download fallback URLs pointing to download.lammps.org for CMake 2023-01-31 20:22:06 -05:00
979119a29b backport fixes to KOKKOS and REAXFF from PR #3621 2023-01-31 20:18:38 -05:00
bc66572275 Fix out of bounds access in pair_vashishta_kokkos with skip list 2023-01-31 20:00:45 -05:00
609231675f Allow neighbor class to set newton flag in Kokkos neigh list 2023-01-31 19:55:07 -05:00
d9675b5da4 Fix QUIP compilation with Intel compilers. 2023-01-30 08:11:52 -05:00
7d32b4f42a make Kokkos lib compatible with musl-libc
Note: this was adapted from https://github.com/kokkos/kokkos/pull/5678
to be usable without requiring C++17
2023-01-27 12:21:39 -05:00
697e5b15ec forcibly disable COMPRESS package if zlib is not found 2023-01-27 07:29:25 -05:00
ade0718c11 make compatible with non-glibc Linux 2023-01-27 07:26:23 -05:00
31033ff6e0 must initialize "np" in constructor 2023-01-26 18:34:21 -05:00
9a598ba5a8 backport fix pimd bugfix from develop 2023-01-26 15:59:26 -05:00
ff20448b1d add image to the cover page of the PDF version of the manual 2023-01-26 11:23:46 -05:00
af5229ba58 swap constexpr back to const 2023-01-26 09:58:26 -05:00
b180200c48 check if variable value is a valid number before converting it 2023-01-26 07:10:50 -05:00
27441cf2ea update developer contact info in a few more files 2023-01-25 22:24:22 -05:00
db61bf609b plug memory leaks in couple examples 2023-01-25 21:48:29 -05:00
015fa4cb0a update embedded docs 2023-01-25 21:44:04 -05:00
62f6f91146 minor typo and rewording 2023-01-25 21:42:37 -05:00
e163b0b1d7 portability improvements for Solaris/OpenIndiana 2023-01-25 21:40:23 -05:00
169a886898 cannot test PYTHON package if it is not installed 2023-01-25 21:37:22 -05:00
cbd276c49d correct prototype for documentation 2023-01-25 21:32:03 -05:00
183c6c06ff small tweaks to the "breadcrumbs" part of the theme to avoid double inserting a separation character 2023-01-25 21:28:18 -05:00
93a46da58e add image to the cover page of the PDF version of the manual 2023-01-25 21:24:27 -05:00
6b6a47bd3c Small tweaks 2023-01-25 21:21:08 -05:00
4a0a98a0fd Small bugfixes for Kokkos 2023-01-25 21:20:59 -05:00
369ea4fd26 Add this 2023-01-25 21:17:30 -05:00
d63c002bf5 Use group for Kokkos nvt temp compute 2023-01-25 21:17:22 -05:00
e931d3153b small improvements from upstream 2023-01-13 17:52:28 -05:00
2913c063d4 whitespace 2023-01-13 14:51:21 -05:00
5606b57646 Update SECURITY.md
I found the overlapping meanings of release/update/patch a bit confusing, especially since the terms sometimes refer to a branch name and sometimes serve as a general description. So I reworked the text, trying to preserve its meaning. I deleted the last sentence because I did not understand it; it may need to be added back.
2023-01-13 11:30:07 -07:00
0fafe34008 import updates to library plugin loader from upstream 2023-01-13 05:21:33 -05:00
a9a1640d67 reorder 2023-01-12 18:28:17 -05:00
812363fb99 lammpsplugin bugfix from Stan 2023-01-12 18:24:04 -05:00
b40e0be1c9 reset to current state of the library interface and remove parts from upstream that have crept in 2023-01-12 12:08:00 -05:00
1be973da07 update from upstream 2023-01-11 22:31:06 -05:00
aca2c52795 update LAMMPS developer contact info 2023-01-11 22:25:25 -05:00
536b2ab7e5 restore accidentally deleted file 2023-01-11 22:16:31 -05:00
ccef293161 remove obsolete comment 2023-01-11 22:11:53 -05:00
4b0de87813 silence compiler warning 2023-01-11 21:59:35 -05:00
fa22aef31b Fix obscure bug in Kokkos neigh list build 2023-01-11 21:53:16 -05:00
cb7544a615 import modernization from upstream 2023-01-11 21:41:58 -05:00
a9be4906b7 import safer ghost cutoff determination for manybody GPU styles from upstream 2023-01-11 21:41:43 -05:00
6f36d21a04 GPU library updates 2023-01-11 21:34:42 -05:00
c55a15c4dc make AWPMD compatible with MSVC and c++-linalg on Windows 2023-01-11 21:23:03 -05:00
8f01dad1a9 add tools/tabulate 2023-01-11 21:21:51 -05:00
db6e1aa20d some more documentation updates 2023-01-11 21:21:03 -05:00
3cee69a077 correct Kokkos device/arch info output in CMake summary 2023-01-11 18:15:56 -05:00
69ffe71595 update unit tests for code corrections 2023-01-11 07:45:50 -05:00
16fa033111 fix issues with bundled meam/spline potentials 2023-01-11 06:40:54 -05:00
8e494aa771 updates and bugfixes for liblammpsplugin plugin loader for LAMMPS shared lib 2023-01-11 06:11:46 -05:00
d203cce8b5 documentation updates from upstream 2023-01-11 06:07:19 -05:00
f8de1b1a75 use official API for utils::logmesg(), stricter/consistent checking for integer and floats 2023-01-11 05:54:35 -05:00
de89a25a25 final CMake sync with upstream 2023-01-11 05:03:00 -05:00
f982e95267 update developer info in unittest tree 2023-01-11 01:28:52 -05:00
293d0cdb58 fix typo 2023-01-11 01:26:54 -05:00
011f2651ee update 2023-01-11 01:26:48 -05:00
a8d3c43a77 update version 2023-01-11 01:26:35 -05:00
c19641f8b3 synchronize CMake scripting with upstream 2023-01-11 01:04:32 -05:00
6596b343ff sync docs with fire minimizer code features 2023-01-10 21:55:56 -05:00
b6dbb0330c update list of commands in pygments LAMMPS lexer 2023-01-10 21:55:56 -05:00
0dd138666a update for accelerated versions 2023-01-10 21:55:56 -05:00
33b9fec150 synchronize sphinx configuration with upstream 2023-01-10 21:55:56 -05:00
32b020a165 Increase communication cutoff for TIP4P pair styles, if needed
This avoids the "H atom not found" error when the O atom is a ghost.
2023-01-10 21:55:56 -05:00
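A sketch of the idea behind this fix; the cutoff formula and names are
assumptions, not the LAMMPS code:

    #include <algorithm>
    #include <cstdio>

    // grow (never shrink) the ghost-atom communication cutoff so the bonded
    // H partners of a ghost O atom are guaranteed to be communicated
    double grow_comm_cutoff(double comm_cutoff, double pair_cutoff, double max_oh) {
      const double needed = pair_cutoff + 2.0 * max_oh;  // formula assumed
      return std::max(comm_cutoff, needed);              // increase only if needed
    }

    int main() {
      std::printf("comm cutoff: %g\n", grow_comm_cutoff(10.0, 10.0, 0.9572));
      return 0;
    }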
c1db230331 Fix bug in Kokkos ReaxFF on GPUs 2023-01-10 21:55:56 -05:00
254c052ecc Fix GPU tag issues in other Kokkos styles 2023-01-10 21:55:56 -05:00
8e889dfa7c offset is not used (by construction of the potential) 2023-01-10 21:55:55 -05:00
5b6a52a646 correct suffix handling with compute fep 2023-01-10 21:55:55 -05:00
55f56deb63 bugfix for minimization with KOKKOS when using fix box/relax 2023-01-10 21:55:55 -05:00
bfe127a720 cosmetic 2023-01-10 21:55:55 -05:00
d95c8911a3 tweak intel compiler settings 2023-01-10 21:55:55 -05:00
0380f9d854 consistently prefix deep_copy() with Kokkos:: 2023-01-10 21:55:55 -05:00
71b1d60363 bugfix for gaussian bond/angle styles to avoid premature truncation of potential 2023-01-10 21:55:55 -05:00
8b1f92fabd better error handling when reading table files 2023-01-10 21:55:55 -05:00
419af0cf28 dead code removal 2023-01-10 21:55:45 -05:00
9030c59932 bugfix for nm/cut argument parsing 2023-01-10 21:55:21 -05:00
ee88078150 bugfix for DPD with exclusions other than 0.0 or 1.0 2023-01-10 21:55:21 -05:00
04451f6072 recover compilation 2023-01-10 21:55:21 -05:00
2364f7f08b bugfix for incorrect stress tally in dihedral style table 2023-01-10 21:55:21 -05:00
7f82a58f51 auto loop optimizations 2023-01-10 21:55:21 -05:00
1caf074ba1 avoid excess string copy in auto loops 2023-01-10 21:55:20 -05:00
34677f78c2 initialize ADIOS dumps only the first time when used in multiple runs 2023-01-10 21:55:20 -05:00
e095609ac6 update lammps theme base theme from read-the-docs version 1.0.0 to 1.1.1 2023-01-10 21:54:35 -05:00
1122408957 dynamic cast whitespace 2023-01-10 21:53:53 -05:00
5f9b78ca01 update developer reference text 2023-01-10 21:53:09 -05:00
fe138fc75c add support for building/using the ADIOS package without MPI
This requires the ADIOS2 installation to be configured accordingly.
2023-01-10 12:38:20 -05:00
31c324ff61 remove references to long obsolete .d dependency files 2023-01-10 12:32:22 -05:00
30564ed8b7 import traditional build system updates and fixes from develop branch 2023-01-10 12:16:59 -05:00
f05bfe45a8 Synchronize GitHub related files and settings with develop branch 2023-01-10 11:50:49 -05:00
88c8b6ec6f Merge pull request #3460 from akohlmey/maintenance-2022-06-23
Second round of maintenance fixes and backports for the stable release
2022-11-03 12:21:59 -04:00
f01e28f574 add missing parts to ELECTRODE package docs for traditional make. sync with upstream. 2022-10-27 16:29:28 -04:00
96627d27b1 add support to detect the BuildID of Windows 10 22H2 2022-10-27 12:56:30 -04:00
b3fc574a6a use googletest aliased targets consistently 2022-10-26 22:46:31 -04:00
8a3f7560c9 drop special OpenMP flags from presets. Will be detected by FindOpenMP. 2022-10-26 22:46:21 -04:00
8406e92a9a downgrade KOKKOS OpenMP check to version 3.1
need to apply a special exception for NVHPC/PGI compilers
2022-10-26 22:46:13 -04:00
3b376b4448 modernize OpenMP detection and check for omp.h in CMake 2022-10-26 22:46:03 -04:00
ca3b7be623 add compatibility to VTK version 9.0 and above 2022-10-24 16:25:25 -04:00
c825c52d2f update required version 2022-10-23 03:45:57 -04:00
0ea0e4ce59 modernize calls to access the list of fixes in the Modify class 2022-10-23 03:16:26 -04:00
d53d4b4d99 use inline insertion sort for short array 2022-10-23 03:16:13 -04:00
b37cd14dd1 avoid superfluous calls to utils::strdup and improve error messages 2022-10-23 03:15:58 -04:00
a921a6bdc1 silence compiler warning about not copying the final null byte 2022-10-23 03:15:47 -04:00
51a0345941 Update fix_bond_react.rst 2022-10-23 03:15:35 -04:00
8d70960e2d bond/react: create atoms error check
check that post-reaction template has 'Coords' section if it has 'CreateIDs' section
2022-10-23 03:15:12 -04:00
5661703b30 Update pair_threebody_table.cpp
Correct for the hard-coded ntheta = 79 in the extreme case that theta is exactly equal to 180.0 degrees.
2022-10-23 03:13:50 -04:00
bc30304f72 update plumed package version to 2.8.1 2022-10-22 23:01:47 -04:00
c76da483fb must bootstrap centos 7 from dockerhub now 2022-10-22 22:59:52 -04:00
036a1e47d2 replace one more suffix 2022-10-22 22:28:35 -04:00
5430c3b592 add workaround for missing links to fortran functions in sphinx output 2022-10-21 19:01:31 -04:00
9b7cb8200c small sphinx tweaks. require sphinx 5.2 or later. 2022-10-21 19:01:24 -04:00
550eedbb1f make Linux behavior default for loading Python shared lib
This adds portability to platforms like FreeBSD
2022-10-21 15:52:26 -04:00
3a058f278d Python support in ML-IAP requires NumPy. Check for it if CMake supports it. 2022-10-21 15:50:08 -04:00
0f7f0b5f86 find cythonize executable on recent FreeBSD versions 2022-10-21 11:39:02 -04:00
3de7534b84 try to make more portable (in case this ever gets ported to windows) 2022-10-21 11:38:50 -04:00
7065462faf add md5sums for plumed 2.7.5 and 2.8.1, update default version to 2.8.1 2022-10-21 11:38:40 -04:00
2e9d8e1ccb preserve pair/only package setting during clear command 2022-10-19 14:50:27 -04:00
19b84f7cbd delete atomfile variables when using the clear command 2022-10-19 14:44:10 -04:00
9b7c445a15 include non-buffered flag 2022-10-19 14:44:04 -04:00
91e56444ce add CMake check that will refuse compilation of unit tests or skip tests
This is mainly because the default compilers on RHEL/CentOS 7.x are
not sufficient to compile googletest. Also, some Fortran module tests
require working F90 module support, and others need a more recent Fortran compiler.
2022-10-17 18:12:21 -04:00
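A rough illustration of such a compiler gate (the version threshold is an
assumption, not taken from the actual CMake script):

    #include <cstdio>

    int main() {
      // the default RHEL/CentOS 7.x compiler is GCC 4.8.x, too old for googletest
    #if defined(__GNUC__) && !defined(__clang__) && (__GNUC__ < 5)
      std::puts("compiler too old for googletest: skipping unit tests");
      return 1;
    #else
      std::puts("compiler is sufficient to build the unit tests");
      return 0;
    #endif
    }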
9b3c8c36bd update version 2022-10-14 21:35:16 -04:00
3403520967 Fix issue with KSpace slab correction energy with non-neutral systems 2022-10-11 16:37:45 -04:00
d8f969f1df update python package requirements for building the manual 2022-09-30 20:18:05 -04:00
3487deccb6 update broken URLs 2022-09-27 08:03:11 -04:00
0926fc627d step update counter 2022-09-25 09:04:45 -04:00
7999778d94 initialize sllod fixes consistently 2022-09-25 07:02:35 -04:00
b4ef4c1ff2 correct indentation 2022-09-25 07:02:35 -04:00
72b08e4b87 backport dump fixes from develop 2022-09-25 07:02:28 -04:00
faa64a84e8 bugfixes and updates to the DIELECTRIC package from upstream 2022-09-09 19:42:01 -04:00
32b67fff2b print an error if the filename before '*' is too long for the regex matcher 2022-09-07 21:06:19 -04:00
f3dbf4122d extend the length to which the regex matcher checks strings to 256 chars. 2022-09-07 20:47:16 -04:00
e25ac786da must apply bond/angle offsets when determining shake bond/angle types 2022-09-05 10:52:06 -04:00
f30fba0061 support paths with blanks and avoid race condition when updating potentials 2022-09-02 21:33:30 -04:00
03f319604f recover dump_modify every behavior 2022-08-31 17:26:09 -04:00
0782dab1ec properly initialize result storage for per-chunk arrays 2022-08-29 13:04:40 -04:00
c43cce54ab re-initialize neighbor lists at end to clear out the occasional list entry 2022-08-28 11:47:27 -04:00
281a368702 correct pair coeff mixing diagnostic for CLASS2 pair styles 2022-08-28 05:51:38 -04:00
f28d69b429 bugfix for writing data files with atom style dielectric 2022-08-19 16:18:38 -04:00
e674e0c927 correctly handle the case where there are no atoms in the fix group 2022-08-14 03:53:02 -04:00
eebabf99b8 adjust location of local ref targets for recent sphinx versions 2022-08-05 22:09:01 -04:00
23a19f4431 need new CSS hack to hide duplicate headers derived from the navigation bar 2022-08-05 21:46:38 -04:00
d618b0ffc0 Merge pull request #3324 from akohlmey/maintenance-2022-06-23
First round of maintenance fixes for the stable release
2022-08-05 16:57:43 -04:00
ffc71b8733 energy is not an array 2022-08-05 08:23:23 -04:00
564df78698 fix typo 2022-08-05 08:22:59 -04:00
8db0b5ca39 fix index copy-n-paste error 2022-08-05 08:22:09 -04:00
79e26fe829 correct bond style bpm/rotational example 2022-08-05 03:24:29 -04:00
523d4b0242 correct issues in fix adapt and fix adapt/fep related to using fix STORE 2022-08-04 10:19:26 -04:00
fe39a3e581 Documentation updates for simulations including dipoles 2022-08-03 16:47:29 -04:00
081cc1f992 clarification on what constitutes single, double, and triple quotes. 2022-08-03 01:51:43 -04:00
53c80c2c00 match pow(0,0) = 1.0 behavior in powint() 2022-07-31 18:52:08 -04:00
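A standalone sketch of the corrected convention, with a stand-in for the real
powint() implementation: a zero exponent now returns 1.0 even for a zero base,
matching std::pow.

    #include <cassert>

    // integer power by repeated squaring; n == 0 yields 1.0 like std::pow(0.0, 0)
    static double powint(double x, int n) {
      if (n == 0) return 1.0;
      double w = (n > 0) ? x : 1.0 / x;
      double y = 1.0;
      for (int m = (n > 0) ? n : -n; m != 0; m >>= 1, w *= w)
        if (m & 1) y *= w;
      return y;
    }

    int main() {
      assert(powint(0.0, 0) == 1.0);    // the corner case this commit fixes
      assert(powint(2.0, 10) == 1024.0);
      return 0;
    }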
554b64a147 avoid deprecation warning and update PyPy package requirements 2022-07-30 17:37:35 -04:00
dc08dba592 update embedded search box 2022-07-28 18:58:58 -04:00
0eaa2775cd document missing call 2022-07-27 22:13:33 -04:00
852673ce41 fix off-by-one bug 2022-07-27 21:44:22 -04:00
8c711e405a correct make command line example 2022-07-27 08:38:37 -04:00
25b9f95061 add check on extracting elements twice from the library to avoid opaque error later 2022-07-26 15:01:03 -04:00
ee66a6f8c1 correct formatting 2022-07-26 12:34:05 -04:00
b694a5f582 add reference 2022-07-26 12:33:57 -04:00
7ab3fce93f correct typos 2022-07-26 12:33:48 -04:00
1f9509cb6f strip off -pedantic-errors flag when compiling with nvcc_wrapper to fix error compiling ML-PACE 2022-07-18 14:00:53 -04:00
cad1d8ece4 correct unit tests for dump local 2022-07-17 12:16:01 -04:00
b709d75f80 add support for dump_modify colname to dump local 2022-07-17 11:52:15 -04:00
5839909061 fix cut-n-paste error and improve error message 2022-07-17 11:46:51 -04:00
30f374de58 clarify 2022-07-16 06:42:19 -04:00
0f9fec05fb disallow use of variable functions vdisplace(), swiggle(), and cwiggle() with fix dt/reset 2022-07-16 06:42:11 -04:00
972a86f0ec fix cut-n-paste typo 2022-07-15 19:06:14 -04:00
7338ebfc94 Update Errors_warnings.rst 2022-07-15 12:28:07 -04:00
7132152693 Update Errors_messages.rst 2022-07-15 12:27:57 -04:00
c9925f64f7 cosmetic changes, silence warnings, avoid temporary char buffers 2022-07-15 12:27:48 -04:00
6da523c8b8 very-small-templates bugfix 2022-07-15 12:27:36 -04:00
0522284589 bugfix: specials update corner case 2022-07-15 12:27:26 -04:00
e10a66dabc allow ramp(x,y) to be used in between runs (returning x) and avoid division by zero on run 0 2022-07-15 05:41:12 -04:00
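The underlying arithmetic as a hedged sketch (the guard is inferred from the
message above, not copied from variable.cpp): ramp() interpolates over the
length of the current run, which divides by zero when that length is zero.

    #include <cstdio>

    double ramp(double x, double y, long step, long firststep, long laststep) {
      if (laststep <= firststep) return x;   // between runs or a zero-step run
      const double frac =
          static_cast<double>(step - firststep) / (laststep - firststep);
      return x + frac * (y - x);
    }

    int main() {
      std::printf("%g\n", ramp(1.0, 2.0, 50, 0, 100));  // 1.5 halfway through
      return 0;
    }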
51dd631a76 Fix bug in vtk dump 2022-07-15 04:29:54 -04:00
d37249787e work around issues with Intel compilers compiling the GPU package 2022-07-12 00:38:51 -04:00
f44841de69 update unit test 2022-07-07 10:32:47 -04:00
54c5337d2d apply clang-format 2022-07-07 10:32:32 -04:00
efb0e63bf6 correct force and energy for excluded pairs 2022-07-07 10:32:20 -04:00
13d78c3afa Update Kokkos version in CMake 2022-07-04 10:49:03 -04:00
f2910b1d9c Update Kokkos library in LAMMPS to v3.6.1 2022-07-04 10:48:51 -04:00
78b22a64aa formatting corrections and minor tweaks to the Argon viscosity howto 2022-07-01 09:27:43 -04:00
8bb1880c9d Fixed temperature in argon GK example 2022-07-01 09:27:36 -04:00
e7b36c7b90 make certain to switch to the expected source folder when building n2p2 lib 2022-07-01 05:49:07 -04:00
d7804e3770 MPI may need to include multiple folders (e.g. on Ubuntu with OpenMPI) 2022-06-30 23:53:57 -04:00
8d0f9695d2 update googletest to version 1.12.1 2022-06-30 14:57:22 -04:00
52b2e4f364 add Update 1 string to version info 2022-06-29 17:44:29 -04:00
41140149ea whitespace 2022-06-29 17:06:11 -04:00
85e556ac8f add more unit tests for boolean expressions 2022-06-29 17:05:37 -04:00
cd5437a7e2 fix bug in recent bugfix 2022-06-29 17:05:27 -04:00
00cc82ac94 update and expand unit tests for if() command boolean evaluation 2022-06-29 17:04:49 -04:00
20f87e3f1d change boolean = single string to an error 2022-06-29 17:04:34 -04:00
97e34f0667 better error strings 2022-06-29 17:04:23 -04:00
3e5da9b09a more consistency checks 2022-06-29 17:04:12 -04:00
a62fcca7a4 Boolean expression corner case 2022-06-29 17:04:01 -04:00
778d59fa6b whitespace 2022-06-29 05:19:10 -04:00
3833a85d7a Add missing grow to Kokkos unpack_exchange 2022-06-29 05:17:55 -04:00
6d961ab29f Fix small memory leak in SNAP 2022-06-29 05:17:46 -04:00
001824e0f6 Small tweaks 2022-06-29 05:17:36 -04:00
953d32f9b3 Prevent view bounds error when a proc has no atoms 2022-06-29 05:17:26 -04:00
edba922665 Add missing GPU <--> CPU data transfer in minimize Kokkos 2022-06-29 05:17:17 -04:00
53806d4601 Add more missing Kokkos data movement 2022-06-29 05:17:06 -04:00
67597722d5 integrate references to dump cfg/uef into the dump command docs 2022-06-25 06:19:04 -04:00
337794a9e9 add crosscompiling with MPI support to plugins package 2022-06-24 06:52:08 -04:00
5f5fb895ff add "package" target to support building a windows installer with NSIS 2022-06-24 01:25:54 -04:00
0302d03bc6 must set thirdparty download URL variable for downloading MPICH4Win 2022-06-23 23:20:49 -04:00
0a4fef369f may check for MPI library Fortran support only if MPI is enabled 2022-06-23 15:57:54 -04:00
7d5fc356fe Merge pull request #3311 from akohlmey/next-stable-release
Update stable branch to next stable release
2022-06-22 17:33:34 -04:00
8103e5a18f Merge branch 'release' into next-stable-release 2022-06-22 16:29:19 -04:00
e5b56b67fe Merge branch 'next_patch_release' into next-stable-release 2022-06-21 09:00:40 -04:00
8ffb7e5f89 Merge branch 'collected-small-fixes' into next-stable-release 2022-06-21 09:00:31 -04:00
cb9ab48ce7 Merge branch 'develop' into next-stable-release 2022-06-21 09:00:12 -04:00
1ebb1cee40 Merge branch 'release' into next-stable-release 2022-06-02 21:49:47 -04:00
f0e7101bd2 Merge branch 'develop' into next-stable-release 2022-05-18 06:35:57 -04:00
6fd8b2b177 Merge pull request #3122 from akohlmey/maintenance-2021-09-29
Third round of maintenance fixes for the stable release
2022-03-24 14:20:52 -04:00
6edaf42b3d fix temperature initialization bug in KOKKOS nose-hoover code 2022-03-24 11:44:24 -04:00
79c047487d fix parallel execution bug for shell command 2022-03-24 07:38:44 -04:00
ac5acb9abf update threebody example 2022-03-24 07:31:02 -04:00
87fbbd3b13 small kokkos fixes from upstream 2022-03-24 07:18:24 -04:00
8ac0ec6473 Changes needed to compile LAMMPS with latest Kokkos develop 2022-03-24 06:09:03 -04:00
8acba74c4d correct input to load potential file from local folder 2022-03-22 22:32:39 -04:00
34bcbdf41d update extep potential file 2022-03-22 22:31:48 -04:00
d519ca0213 add missing reaxff files to purge list 2022-03-21 14:34:14 -04:00
a392e8dc09 accept infile with 0 lines, so we can create a template from the restart 2022-03-21 00:33:40 -04:00
a4d4f77bc2 run setup_bodies_dynamic() before processing the infile in case it does not reset all data 2022-03-21 00:32:49 -04:00
83a8f72d83 fix off-by-one bug when writing restart files for rigid bodies 2022-03-20 19:14:13 -04:00
3c54b56cfe update overlooked date stamp 2022-03-19 21:00:14 -04:00
ff1a08f148 fixes to CMake build for ML-QUIP package from upstream 2022-03-17 18:07:12 -04:00
5a53b0fc03 import python3 compatibility changes to tools/python from upstream 2022-03-16 13:24:53 -04:00
e550600ebe Error fixed. Epsilon and sigma must also be symmetric 2022-03-16 09:09:52 -04:00
7cb13be52a fix bug where it was not possible to use an absolute path for write_coeff 2022-03-16 09:08:47 -04:00
ab56d7ecd7 augment cmake library search path to include the CUDA stubs library folder
this will help configuring and compiling LAMMPS with CUDA support on
machines where there is no CUDA driver installed
2022-03-10 23:02:57 -05:00
bd6ac3ee6d for 2d systems, rigid bodies always have a moment of inertia and no DOFs need to be subtracted 2022-03-02 16:41:35 -05:00
27ca0a8f41 trigger building an "intel" style neighbor list so that buffers are allocated 2022-02-27 14:50:48 -05:00
f688b9b6b5 use consistent names, avoid memory leaks, fix off-by-1 error in fourier dihedral 2022-02-27 12:25:32 -05:00
16c61b3cc0 add support for plumed 2.6.5, 2.6.6, 2.7.3, 2.7.4, and 2.8.0 (default 2.7.4) 2022-02-25 16:37:00 -05:00
fb480f22fc make cythonize detection compatible with /bin/dash on ubuntu 2022-02-24 21:24:04 -05:00
d0507559a4 when updating ML-IAP due to adding/removing PYTHON we need to delete and re-add cythonize support 2022-02-24 20:40:55 -05:00
58eb331b08 Python 3 compatibility for log commands in tools/python 2022-02-23 10:22:29 -05:00
c68015ca87 Bug fix for Intel package skip lists with multiple runs. 2022-02-18 05:11:34 -05:00
583c22d6e0 update tools/eam_database from upstream 2022-02-16 11:46:11 -05:00
58a4694d92 Remove incorrect error check in ReaxFF 2022-02-11 16:19:00 -05:00
97cf345528 don't allow exceptions to "escape" a destructor 2022-02-10 21:13:26 -05:00
0658abbdd4 silence possible warnings about missing files on "make clean-all" 2022-02-10 21:10:34 -05:00
72026a58bf make certain that "offset" is always initialized 2022-02-10 21:05:12 -05:00
7152231a10 plug memory leak 2022-02-10 20:56:51 -05:00
8fe8a667b6 update create.f with changes from NIST database
also add parameters for Cr, document them in the README file, and change
the code to create output files with the .eam.alloy extension
2022-02-10 20:45:16 -05:00
560c543e69 add extra communication of special neighbors when using angle constraints 2022-02-10 20:44:39 -05:00
c5e6650924 import bugfixes for crashes and memory leaks in MSM kspace style from develop 2022-02-10 20:36:35 -05:00
10373ea5c9 avoid failures with "most" presets 2022-02-10 20:11:00 -05:00
992b1cf582 label as update #3 2022-01-25 07:42:00 -05:00
1505f3de06 fix tag caching issue in INTEL package 2022-01-25 07:41:37 -05:00
566efe04f2 always fall back to using the .so extension if available in the LAMMPS module folder 2022-01-19 10:12:50 -05:00
7586adbb6a Merge pull request #3029 from akohlmey/maintenance-2021-09-29
Second round of maintenance fixes for the stable release
2022-01-06 19:58:51 -05:00
69d6ddccc5 create missing de,df table elements from linear extrapolation 2022-01-05 15:34:30 -05:00
5ae496dcef backport array dimension bugfix for NETCDF package in simplified form 2022-01-03 19:55:23 -05:00
bc5d742623 explain that the computed force in python pair is force/r, the same as in Pair::single() 2022-01-03 10:12:38 -05:00
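To make that convention concrete, a small sketch with hypothetical values: the
returned quantity is F(r)/r, so the Cartesian components follow by multiplying
with the distance vector, exactly as for Pair::single().

    #include <cstdio>

    int main() {
      const double dx = 1.0, dy = 2.0, dz = 2.0;   // distance vector, |r| = 3
      const double fpair = 0.5;                    // F(r)/r as returned
      // force vector = fpair * (dx, dy, dz) = F(r) * unit vector
      std::printf("force: %g %g %g\n", dx * fpair, dy * fpair, dz * fpair);
      return 0;
    }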
882e699163 Incorporate bugfixes from issue #3074, a few additional cleanups 2022-01-03 10:11:18 -05:00
9c725d79d6 correct code example for current code 2022-01-01 16:42:28 -05:00
79fbf437a3 correct format string for Error::one() 2021-12-29 16:19:10 -05:00
d130aa4289 address segfault issue with fix nve/gpu when group is not "all" 2021-12-29 14:06:52 -05:00
5d8b83a251 backport GPU package build system updates from upstream 2021-12-27 20:30:43 -05:00
5a2548a83d have internal fix/compute ids include the fix id for fix reaxff/species
this allows using the fix multiple times
also remove the code and warning that checked for multiple fix instances

# Conflicts:
#	src/REAXFF/fix_reaxff_species.cpp
2021-12-23 11:36:28 -05:00
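A sketch of the naming scheme this implies; the prefix and format are
assumptions chosen for illustration:

    #include <cstdio>
    #include <string>

    // embedding the parent fix ID in internally created fix/compute IDs keeps
    // them unique, so two instances of the fix no longer clash
    std::string internal_id(const std::string &fix_id, const char *suffix) {
      return fix_id + "_SPECIES_" + suffix;
    }

    int main() {
      std::printf("%s\n", internal_id("spec1", "atom").c_str());  // spec1_SPECIES_atom
      return 0;
    }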
a85b310e1f add missing fclose() 2021-12-23 11:28:24 -05:00
e51fd40547 correct names of the pack/unpack routines for forward communication 2021-12-09 18:33:13 -05:00
62f271658b correct setting forward/reverse buffer size info 2021-12-08 13:58:12 -05:00
0aa742934f correct docs for pair style local/density 2021-12-08 00:51:52 -05:00
a26a709a7b correct handling of data packing for forward and reverse communication 2021-12-08 00:51:52 -05:00
027293d285 whitespace 2021-11-24 15:47:05 -05:00
f7d049ac2d generate atom tags for newly created atoms, if tags are enabled. triclinic support. 2021-11-24 15:36:16 -05:00
ea0ff1c8f7 Update CMake utility function get_lammps_version()
With the introduction of LAMMPS_UPDATE, version.h is no longer a single line
file. With this change the CMake utility will only process the LAMMPS_VERSION
line. Fixes issue #3038
2021-11-23 10:44:40 -05:00
5c1bb5f13a Write dump header after sort to fix incorrect atom count for multiproc 2021-11-22 15:52:27 -05:00
24d9b4b611 Update lebedeva potential file and docs based on email on mailing list
https://matsci.org/t/lammps-users-webpage-and-parameter-file-for-the-lebedeva-potential/39059
2021-11-17 08:45:55 -05:00
a0e75c9006 correct unit description of eta_n0 parameters. fixes #3016 2021-11-17 08:38:09 -05:00
2435b953e1 increment update counter 2021-11-17 07:04:44 -05:00
c042e12323 clarifications and corrections for the discussion of the main git branches 2021-11-17 07:04:13 -05:00
e9efe46db9 update branch names 2021-11-17 07:03:56 -05:00
ecc14b7308 update documentation to refer to the new branch names (develop, release) 2021-11-17 07:03:27 -05:00
0152fe5cdf fix segfault when using atom style smd as part of a hybrid style
also remove redundant clearing
2021-11-16 21:49:56 -05:00
892d17af22 plug memory leaks 2021-11-16 21:49:41 -05:00
2cca00203e Avoid file name collisions in dump unit tests
# Conflicts:
#	unittest/formats/test_dump_atom.cpp
2021-11-16 15:08:27 -05:00
9f4626a62a correct uninitialized data access bug due to shadowing of a base class member 2021-11-16 10:51:46 -05:00
e890a0b45e Merge pull request #2999 from akohlmey/maintenance-2021-09-29
Maintenance fixes for the stable release
2021-11-09 15:11:19 -05:00
68223f0385 mention that dump sorting is limited to less than 2 billion atoms 2021-11-07 08:31:15 -05:00
1291a88bff skip MPI tests if they would be oversubscribing the available processors 2021-11-07 08:30:19 -05:00
d9b687450a account for increased floating point errors when summing numbers to zero 2021-11-07 08:30:04 -05:00
bd950b37d7 change git:// protocol for accessing github to https:// protocol
https://github.blog/2021-09-01-improving-git-protocol-security-github/
2021-11-02 15:30:27 -04:00
21fcdf8c56 Fix bug in Kokkos neighborlist where stencil wasn't updated for occasional list 2021-11-02 13:17:28 -04:00
6b400fb4bf fix indexing bug 2021-10-31 16:19:17 -04:00
d982298ab2 update new LAMMPS paper citation info 2021-10-28 10:09:01 -04:00
765fd7f763 Use correct sizeof in memset 2021-10-27 17:46:37 -04:00
0325047c01 update a few GPU kernels so they can be compiled on GPUs without double precisions support 2021-10-21 07:34:05 -04:00
2dce8923ee more direct version of clearing out loaded plugins 2021-10-19 08:28:19 -04:00
8d1ba074be wipe out all loaded plugins before destroying the LAMMPS instance 2021-10-18 18:06:09 -04:00
4675a3b560 Only check for GPU double precision support if a GPU is present 2021-10-18 13:44:37 -04:00
8999b1f69f add a LAMMPS_UPDATE string define to signal updates to stable releases 2021-10-17 18:06:04 -04:00
6c2b19c11b Add support for an "Update #" appendix to the version string
This is for informative output only, so that any code depending
on the LAMMPS_VERSION define will not have to be changed and no
warnings will be printed etc.
2021-10-17 18:05:29 -04:00
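A compile-time sketch of attaching such an appendix; the exact formatting is an
assumption:

    #include <cstdio>

    #define LAMMPS_VERSION "29 Sep 2021"
    #define LAMMPS_UPDATE  "Update 3"   // informational only

    int main() {
      // code checking LAMMPS_VERSION is unaffected; the update tag is only
      // appended to human-readable output
      std::printf("LAMMPS (%s - %s)\n", LAMMPS_VERSION, LAMMPS_UPDATE);
      return 0;
    }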
a425334928 port dump vtk to correctly support custom per-atom arrays and fix some bugs 2021-10-17 11:00:33 -04:00
db2faf2789 fix bugs related to custom per-atom properties in dump style custom 2021-10-17 11:00:21 -04:00
fdbb7d0da4 Report only compatible GPUs, i.e. no GPU if mixed/double precision is requested but the hardware does not support it 2021-10-15 20:26:47 -04:00
52cd99918f pppm kspace styles also require -DFFT_SINGLE when using GPUs in single precision 2021-10-15 20:24:47 -04:00
a3e6a95ffb allow single precision FFT introspection 2021-10-15 20:24:47 -04:00
5b65169997 correct expansion of fix/compute/variable arguments to avoid bogus thermo output 2021-10-15 20:23:57 -04:00
5f3bf69e30 plug memory leaks 2021-10-15 17:00:46 -04:00
507c02b9af must set define to "see" the lammps_open() library function 2021-10-09 10:21:31 -04:00
b7fe47ba48 Fix bugs and compilation issues in KOKKOS 2021-10-08 09:39:53 -04:00
7dfd11da4b re-freeze Sphinx and other pip installed packages for doc build
The change relative to the stable release fixes a bug with python 3.10 support
2021-10-05 10:52:34 -04:00
97ba95f30e fix a couple more bugs like in 5246cedda6 2021-10-05 10:39:03 -04:00
c1945b4ec9 Fix misplaced MPI calls bug in pair style drip 2021-10-04 07:12:50 -04:00
c4291a4b8e unfreeze versions of python packages used to build the documentation 2021-10-02 23:57:23 -04:00
5b5dfa86c5 also update eigen download for traditional build 2021-10-02 23:56:28 -04:00
3ca3f6959f update eigen3 to the latest release and move download to our own server 2021-10-02 22:55:06 -04:00
f7b7bfa406 Avoid assertions in PythonCapabilities check when using external KOKKOS 2021-10-01 12:05:59 -04:00
3d2f29c92d fix memory allocation bug causing memory corruption on 32-bit arches 2021-10-01 01:16:45 -04:00
4044 changed files with 251837 additions and 880819 deletions

.github/CODEOWNERS

@@ -18,7 +18,7 @@ src/AMOEBA/* @sjplimp
src/BPM/* @jtclemm
src/BROWNIAN/* @samueljmcameron
src/CG-DNA/* @ohenrich
src/CG-SPICA/* @yskmiyazaki
src/CG-SDK/* @yskmiyazaki
src/COLVARS/* @giacomofiorin
src/COMPRESS/* @rbberger
src/DIELECTRIC/* @ndtrung81
@@ -37,7 +37,6 @@ src/MESONT/* @iafoss
src/ML-HDNNP/* @singraber
src/ML-IAP/* @athomps
src/ML-PACE/* @yury-lysogorskiy
src/ML-POD/* @exapde @rohskopf
src/MOFFF/* @hheenen
src/MOLFILE/* @akohlmey
src/NETCDF/* @pastewka
@@ -63,10 +62,7 @@ src/MANYBODY/pair_vashishta_table.* @andeplane
src/MANYBODY/pair_atm.* @sergeylishchuk
src/REPLICA/*_grem.* @dstelter92
src/EXTRA-COMPUTE/compute_stress_mop*.* @RomainVermorel
src/EXTRA-COMPUTE/compute_born_matrix.* @Bibobu @athomps
src/MISC/*_tracker.* @jtclemm
src/MC/fix_gcmc.* @athomps
src/MC/fix_sgcmc.* @athomps
# core LAMMPS classes
src/lammps.* @sjplimp


@@ -49,9 +49,7 @@ jobs:
shell: bash
working-directory: build
run: |
cmake -C ../cmake/presets/most.cmake \
-D DOWNLOAD_POTENTIALS=off \
../cmake
cmake -C ../cmake/presets/most.cmake ../cmake
cmake --build . --parallel 2
- name: Perform CodeQL Analysis


@@ -36,7 +36,6 @@ jobs:
nuget install MSMPIsdk
nuget install MSMPIDIST
cmake -C cmake/presets/windows.cmake \
-D DOWNLOAD_POTENTIALS=off \
-D PKG_PYTHON=on \
-D WITH_PNG=off \
-D WITH_JPEG=off \
@@ -44,7 +43,7 @@
-D BUILD_SHARED_LIBS=on \
-D LAMMPS_EXCEPTIONS=on \
-D ENABLE_TESTING=on
cmake --build build --config Release --parallel 2
cmake --build build --config Release
- name: Run LAMMPS executable
shell: bash


@@ -1,103 +0,0 @@
name: "Run Coverity Scan"
on:
schedule:
- cron: "0 0 * * FRI"
workflow_dispatch:
jobs:
analyze:
name: Analyze
if: ${{ github.repository == 'lammps/lammps' }}
runs-on: ubuntu-latest
container:
image: lammps/buildenv:ubuntu20.04
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Create Build and Download Folder
run: mkdir build download
- name: Cache Coverity
id: cache-coverity
uses: actions/cache@v3
with:
path: ./download/
key: ${{ runner.os }}-download-${{ hashFiles('**/coverity_tool.*') }}
- name: Download Coverity if necessary
if: steps.cache-coverity.outputs.cache-hit != 'true'
working-directory: download
run: |
wget -nv https://scan.coverity.com/download/linux64 --post-data "token=${{ secrets.COVERITY_TOKEN }}&project=LAMMPS" -O coverity_tool.tgz
wget -nv https://scan.coverity.com/download/linux64 --post-data "token=${{ secrets.COVERITY_TOKEN }}&project=LAMMPS&md5=1" -O coverity_tool.md5
echo " coverity_tool.tgz" >> coverity_tool.md5
md5sum -c coverity_tool.md5
- name: Setup Coverity
run: |
tar xzf download/coverity_tool.tgz
ln -s cov-analysis-linux64-* coverity
- name: Configure LAMMPS via CMake
shell: bash
working-directory: build
run: |
cmake \
-C ../cmake/presets/clang.cmake \
-C ../cmake/presets/most.cmake \
-C ../cmake/presets/kokkos-openmp.cmake \
-D CMAKE_BUILD_TYPE="RelWithDebug" \
-D CMAKE_TUNE_FLAGS="-Wall -Wextra -Wno-unused-result" \
-D BUILD_MPI=on \
-D BUILD_OMP=on \
-D BUILD_SHARED_LIBS=on \
-D LAMMPS_SIZES=SMALLBIG \
-D LAMMPS_EXCEPTIONS=off \
-D PKG_MESSAGE=on \
-D PKG_MPIIO=on \
-D PKG_ATC=on \
-D PKG_AWPMD=on \
-D PKG_BOCS=on \
-D PKG_EFF=on \
-D PKG_H5MD=on \
-D PKG_INTEL=on \
-D PKG_LATBOLTZ=on \
-D PKG_MANIFOLD=on \
-D PKG_MGPT=on \
-D PKG_ML-PACE=on \
-D PKG_ML-RANN=on \
-D PKG_MOLFILE=on \
-D PKG_NETCDF=on \
-D PKG_PTM=on \
-D PKG_QTB=on \
-D PKG_SMTBQ=on \
-D PKG_TALLY=on \
../cmake
- name: Run Coverity Scan
shell: bash
working-directory: build
run: |
export PATH=$GITHUB_WORKSPACE/coverity/bin:$PATH
cov-build --dir cov-int cmake --build . --parallel 2
- name: Create tarball with scan results
shell: bash
working-directory: build
run: tar czf lammps.tgz cov-int
- name: Upload scan result to Coverity
shell: bash
run: |
curl --form token=${{ secrets.COVERITY_TOKEN }} \
--form email=${{ secrets.COVERITY_EMAIL }} \
--form file=@build/lammps.tgz \
--form version=${{ github.sha }} \
--form description="LAMMPS automated build" \
https://scan.coverity.com/builds?project=LAMMPS


@@ -47,7 +47,6 @@ jobs:
python3 -m pip install pyyaml
cmake -C ../cmake/presets/clang.cmake \
-C ../cmake/presets/most.cmake \
-D DOWNLOAD_POTENTIALS=off \
-D CMAKE_CXX_COMPILER_LAUNCHER=ccache \
-D CMAKE_C_COMPILER_LAUNCHER=ccache \
-D ENABLE_TESTING=on \

.gitignore

@@ -55,5 +55,3 @@ out/RelWithDebInfo
out/Release
out/x86
out/x64
src/Makefile.package-e
src/Makefile.package.settings-e


@@ -221,7 +221,6 @@ option(CMAKE_VERBOSE_MAKEFILE "Generate verbose Makefiles" OFF)
set(STANDARD_PACKAGES
ADIOS
AMOEBA
ASPHERE
ATC
AWPMD
@@ -230,7 +229,7 @@ set(STANDARD_PACKAGES
BPM
BROWNIAN
CG-DNA
CG-SPICA
CG-SDK
CLASS2
COLLOID
COLVARS
@@ -258,7 +257,6 @@ set(STANDARD_PACKAGES
KSPACE
LATBOLTZ
LATTE
LEPTON
MACHDYN
MANIFOLD
MANYBODY
@@ -274,7 +272,6 @@ set(STANDARD_PACKAGES
ML-QUIP
ML-RANN
ML-SNAP
ML-POD
MOFFF
MOLECULE
MOLFILE
@@ -397,7 +394,6 @@ pkg_depends(MPIIO MPI)
pkg_depends(ATC MANYBODY)
pkg_depends(LATBOLTZ MPI)
pkg_depends(SCAFACOS MPI)
pkg_depends(AMOEBA KSPACE)
pkg_depends(DIELECTRIC KSPACE)
pkg_depends(DIELECTRIC EXTRA-PAIR)
pkg_depends(CG-DNA MOLECULE)
@@ -441,19 +437,21 @@ if(BUILD_OMP)
target_link_libraries(lmp PRIVATE OpenMP::OpenMP_CXX)
endif()
if(PKG_MSCG OR PKG_ATC OR PKG_AWPMD OR PKG_ML-QUIP OR PKG_ML-POD OR PKG_LATTE OR PKG_ELECTRODE)
if(PKG_MSCG OR PKG_ATC OR PKG_AWPMD OR PKG_ML-QUIP OR PKG_LATTE OR PKG_ELECTRODE)
enable_language(C)
if (NOT USE_INTERNAL_LINALG)
find_package(LAPACK)
find_package(BLAS)
endif()
if(NOT LAPACK_FOUND OR NOT BLAS_FOUND OR USE_INTERNAL_LINALG)
file(GLOB LINALG_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/linalg/[^.]*.cpp)
find_package(LAPACK)
find_package(BLAS)
if(NOT LAPACK_FOUND OR NOT BLAS_FOUND)
include(CheckGeneratorSupport)
if(NOT CMAKE_GENERATOR_SUPPORT_FORTRAN)
message(FATAL_ERROR "Cannot build internal linear algebra library as CMake build tool lacks Fortran support")
endif()
enable_language(Fortran)
file(GLOB LINALG_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/linalg/[^.]*.[fF])
add_library(linalg STATIC ${LINALG_SOURCES})
set_target_properties(linalg PROPERTIES OUTPUT_NAME lammps_linalg${LAMMPS_MACHINE})
set(BLAS_LIBRARIES "$<TARGET_FILE:linalg>")
set(LAPACK_LIBRARIES "$<TARGET_FILE:linalg>")
target_link_libraries(lammps PRIVATE linalg)
else()
list(APPEND LAPACK_LIBRARIES ${BLAS_LIBRARIES})
endif()
@@ -521,7 +519,7 @@ else()
endif()
foreach(PKG_WITH_INCL KSPACE PYTHON ML-IAP VORONOI COLVARS ML-HDNNP MDI MOLFILE NETCDF
PLUMED QMMM ML-QUIP SCAFACOS MACHDYN VTK KIM LATTE MSCG COMPRESS ML-PACE LEPTON)
PLUMED QMMM ML-QUIP SCAFACOS MACHDYN VTK KIM LATTE MSCG COMPRESS ML-PACE)
if(PKG_${PKG_WITH_INCL})
include(Packages/${PKG_WITH_INCL})
endif()
@@ -566,8 +564,6 @@ RegisterStyles(${LAMMPS_SOURCE_DIR})
########################################################
# Fetch missing external files and archives for packages
########################################################
option(DOWNLOAD_POTENTIALS "Automatically download large potential files" ON)
mark_as_advanced(DOWNLOAD_POTENTIALS)
foreach(PKG ${STANDARD_PACKAGES} ${SUFFIX_PACKAGES})
if(PKG_${PKG})
FetchPotentials(${LAMMPS_SOURCE_DIR}/${PKG} ${LAMMPS_POTENTIALS_DIR})
@@ -620,11 +616,18 @@ endforeach()
##############################################
# add lib sources of (simple) enabled packages
############################################
foreach(PKG_LIB POEMS ATC AWPMD H5MD)
foreach(PKG_LIB POEMS ATC AWPMD H5MD MESONT)
if(PKG_${PKG_LIB})
string(TOLOWER "${PKG_LIB}" PKG_LIB)
file(GLOB_RECURSE ${PKG_LIB}_SOURCES ${CONFIGURE_DEPENDS}
${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.c ${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.cpp)
if(PKG_LIB STREQUAL "mesont")
enable_language(Fortran)
file(GLOB_RECURSE ${PKG_LIB}_SOURCES ${CONFIGURE_DEPENDS}
${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.f90)
else()
file(GLOB_RECURSE ${PKG_LIB}_SOURCES ${CONFIGURE_DEPENDS}
${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.c
${LAMMPS_LIB_SOURCE_DIR}/${PKG_LIB}/[^.]*.cpp)
endif()
add_library(${PKG_LIB} STATIC ${${PKG_LIB}_SOURCES})
set_target_properties(${PKG_LIB} PROPERTIES OUTPUT_NAME lammps_${PKG_LIB}${LAMMPS_MACHINE})
target_link_libraries(lammps PRIVATE ${PKG_LIB})
@@ -638,7 +641,7 @@ foreach(PKG_LIB POEMS ATC AWPMD H5MD)
endif()
endforeach()
if(PKG_ELECTRODE OR PKG_ML-POD)
if(PKG_ELECTRODE)
target_link_libraries(lammps PRIVATE ${LAPACK_LIBRARIES})
endif()
@@ -667,7 +670,7 @@ endif()
# packages which selectively include variants based on enabled styles
# e.g. accelerator packages
######################################################################
foreach(PKG_WITH_INCL CORESHELL DPD-SMOOTH MC MISC PHONON QEQ OPENMP KOKKOS OPT INTEL GPU)
foreach(PKG_WITH_INCL CORESHELL DPD-SMOOTH PHONON QEQ OPENMP KOKKOS OPT INTEL GPU)
if(PKG_${PKG_WITH_INCL})
include(Packages/${PKG_WITH_INCL})
endif()


@@ -88,6 +88,18 @@ function(get_lammps_version version_header variable)
set(${variable} "${date}" PARENT_SCOPE)
endfunction()
# determine canonical URL for downloading backup copy from download.lammps.org/thirdparty
function(GetFallbackURL input output)
string(REPLACE "_URL" "" _tmp ${input})
string(TOLOWER ${_tmp} libname)
string(REGEX REPLACE "^https://.*/([^/]+gz)" "${LAMMPS_THIRDPARTY_URL}/${libname}-\\1" newurl "${${input}}")
if ("${newurl}" STREQUAL "${${input}}")
set(${output} "" PARENT_SCOPE)
else()
set(${output} ${newurl} PARENT_SCOPE)
endif()
endfunction(GetFallbackURL)
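# Usage sketch (values hypothetical; the call matches its use for KOKKOS below):
#   set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/3.6.01.tar.gz")
#   GetFallbackURL(KOKKOS_URL KOKKOS_FALLBACK)
#   # KOKKOS_FALLBACK becomes ${LAMMPS_THIRDPARTY_URL}/kokkos-3.6.01.tar.gz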
#################################################################################
# LAMMPS C++ interface. We only need the header related parts except on windows.
add_library(lammps INTERFACE)


@@ -99,15 +99,8 @@ function(check_for_autogen_files source_dir)
endfunction()
macro(pkg_depends PKG1 PKG2)
if(DEFINED BUILD_${PKG2})
if(PKG_${PKG1} AND NOT BUILD_${PKG2})
message(FATAL_ERROR "The ${PKG1} package needs LAMMPS to be built with -D BUILD_${PKG2}=ON")
endif()
elseif(DEFINED PKG_${PKG2})
if(PKG_${PKG1} AND NOT PKG_${PKG2})
message(WARNING "The ${PKG1} package depends on the ${PKG2} package. Enabling it.")
set(PKG_${PKG2} ON CACHE BOOL "" FORCE)
endif()
if(PKG_${PKG1} AND NOT (PKG_${PKG2} OR BUILD_${PKG2}))
message(FATAL_ERROR "The ${PKG1} package needs LAMMPS to be built with the ${PKG2} package")
endif()
endmacro()
@@ -125,27 +118,25 @@ endfunction(GenerateBinaryHeader)
# fetch missing potential files
function(FetchPotentials pkgfolder potfolder)
if(DOWNLOAD_POTENTIALS)
if(EXISTS "${pkgfolder}/potentials.txt")
file(STRINGS "${pkgfolder}/potentials.txt" linelist REGEX "^[^#].")
foreach(line ${linelist})
string(FIND ${line} " " blank)
math(EXPR plusone "${blank}+1")
string(SUBSTRING ${line} 0 ${blank} pot)
string(SUBSTRING ${line} ${plusone} -1 sum)
if(EXISTS "${LAMMPS_POTENTIALS_DIR}/${pot}")
file(MD5 "${LAMMPS_POTENTIALS_DIR}/${pot}" oldsum)
endif()
if(NOT sum STREQUAL oldsum)
message(STATUS "Downloading external potential ${pot} from ${LAMMPS_POTENTIALS_URL}")
string(RANDOM LENGTH 10 TMP_EXT)
file(DOWNLOAD "${LAMMPS_POTENTIALS_URL}/${pot}.${sum}" "${CMAKE_BINARY_DIR}/${pot}.${TMP_EXT}"
EXPECTED_HASH MD5=${sum} SHOW_PROGRESS)
file(COPY "${CMAKE_BINARY_DIR}/${pot}.${TMP_EXT}" DESTINATION "${LAMMPS_POTENTIALS_DIR}")
file(RENAME "${LAMMPS_POTENTIALS_DIR}/${pot}.${TMP_EXT}" "${LAMMPS_POTENTIALS_DIR}/${pot}")
endif()
endforeach()
endif()
if(EXISTS "${pkgfolder}/potentials.txt")
file(STRINGS "${pkgfolder}/potentials.txt" linelist REGEX "^[^#].")
foreach(line ${linelist})
string(FIND ${line} " " blank)
math(EXPR plusone "${blank}+1")
string(SUBSTRING ${line} 0 ${blank} pot)
string(SUBSTRING ${line} ${plusone} -1 sum)
if(EXISTS "${LAMMPS_POTENTIALS_DIR}/${pot}")
file(MD5 "${LAMMPS_POTENTIALS_DIR}/${pot}" oldsum)
endif()
if(NOT sum STREQUAL oldsum)
message(STATUS "Downloading external potential ${pot} from ${LAMMPS_POTENTIALS_URL}")
string(MD5 TMP_EXT "${CMAKE_BINARY_DIR}")
file(DOWNLOAD "${LAMMPS_POTENTIALS_URL}/${pot}.${sum}" "${CMAKE_BINARY_DIR}/${pot}.${TMP_EXT}"
EXPECTED_HASH MD5=${sum} SHOW_PROGRESS)
file(COPY "${CMAKE_BINARY_DIR}/${pot}.${TMP_EXT}" DESTINATION "${LAMMPS_POTENTIALS_DIR}")
file(RENAME "${LAMMPS_POTENTIALS_DIR}/${pot}.${TMP_EXT}" "${LAMMPS_POTENTIALS_DIR}/${pot}")
endif()
endforeach()
endif()
endfunction(FetchPotentials)
@@ -159,6 +150,7 @@ if((CMAKE_SYSTEM_NAME STREQUAL "Linux") AND (EXISTS /etc/os-release))
set(CMAKE_DISTRO_VERSION ${disversion})
endif()
# determine canonical URL for downloading backup copy from download.lammps.org/thirdparty
function(GetFallbackURL input output)
string(REPLACE "_URL" "" _tmp ${input})
string(TOLOWER ${_tmp} libname)


@@ -2,14 +2,19 @@ set(COLVARS_SOURCE_DIR ${LAMMPS_LIB_SOURCE_DIR}/colvars)
file(GLOB COLVARS_SOURCES ${CONFIGURE_DEPENDS} ${COLVARS_SOURCE_DIR}/[^.]*.cpp)
option(COLVARS_DEBUG "Enable debugging messages for Colvars (quite verbose)" OFF)
option(COLVARS_DEBUG "Debugging messages for Colvars (quite verbose)" OFF)
option(COLVARS_LEPTON "Use the Lepton library for custom expressions" ON)
# Build Lepton by default
option(COLVARS_LEPTON "Build and link the Lepton library" ON)
if(COLVARS_LEPTON)
if(NOT LEPTON_SOURCE_DIR)
include(Packages/LEPTON)
endif()
set(LEPTON_DIR ${LAMMPS_LIB_SOURCE_DIR}/colvars/lepton)
file(GLOB LEPTON_SOURCES ${CONFIGURE_DEPENDS} ${LEPTON_DIR}/src/[^.]*.cpp)
add_library(lepton STATIC ${LEPTON_SOURCES})
# Change the define below to LEPTON_BUILDING_SHARED_LIBRARY when linking Lepton as a DLL with MSVC
target_compile_definitions(lepton PRIVATE -DLEPTON_BUILDING_STATIC_LIBRARY)
set_target_properties(lepton PROPERTIES OUTPUT_NAME lammps_lepton${LAMMPS_MACHINE})
target_include_directories(lepton PRIVATE ${LEPTON_DIR}/include)
endif()
add_library(colvars STATIC ${COLVARS_SOURCES})
@@ -25,11 +30,14 @@ target_include_directories(colvars PRIVATE ${LAMMPS_SOURCE_DIR})
target_link_libraries(lammps PRIVATE colvars)
if(COLVARS_DEBUG)
# Need to export the define publicly to be valid in interface code
# Need to export the macro publicly to also affect the proxy
target_compile_definitions(colvars PUBLIC -DCOLVARS_DEBUG)
endif()
if(COLVARS_LEPTON)
target_link_libraries(lammps PRIVATE lepton)
target_compile_definitions(colvars PRIVATE -DLEPTON)
target_link_libraries(colvars PUBLIC lepton)
# Disable the line below when linking Lepton as a DLL with MSVC
target_compile_definitions(colvars PRIVATE -DLEPTON_USE_STATIC_LIBRARIES)
target_include_directories(colvars PUBLIC ${LEPTON_DIR}/include)
endif()


@@ -26,19 +26,6 @@ elseif(GPU_PREC STREQUAL "SINGLE")
set(GPU_PREC_SETTING "SINGLE_SINGLE")
endif()
option(GPU_DEBUG "Enable debugging code of the GPU package" OFF)
mark_as_advanced(GPU_DEBUG)
if(PKG_AMOEBA AND FFT_SINGLE)
message(FATAL_ERROR "GPU acceleration of AMOEBA is not (yet) compatible with single precision FFT")
endif()
if (PKG_AMOEBA)
list(APPEND GPU_SOURCES
${GPU_SOURCES_DIR}/amoeba_convolution_gpu.h
${GPU_SOURCES_DIR}/amoeba_convolution_gpu.cpp)
endif()
file(GLOB GPU_LIB_SOURCES ${CONFIGURE_DEPENDS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/[^.]*.cpp)
file(MAKE_DIRECTORY ${LAMMPS_LIB_BINARY_DIR}/gpu)
@@ -164,12 +151,7 @@ if(GPU_API STREQUAL "CUDA")
add_library(gpu STATIC ${GPU_LIB_SOURCES} ${GPU_LIB_CUDPP_SOURCES} ${GPU_OBJS})
target_link_libraries(gpu PRIVATE ${CUDA_LIBRARIES} ${CUDA_CUDA_LIBRARY})
target_include_directories(gpu PRIVATE ${LAMMPS_LIB_BINARY_DIR}/gpu ${CUDA_INCLUDE_DIRS})
target_compile_definitions(gpu PRIVATE -DUSE_CUDA -D_${GPU_PREC_SETTING} ${GPU_CUDA_MPS_FLAGS})
if(GPU_DEBUG)
target_compile_definitions(gpu PRIVATE -DUCL_DEBUG -DGERYON_KERNEL_DUMP)
else()
target_compile_definitions(gpu PRIVATE -DMPI_GERYON -DUCL_NO_EXIT)
endif()
target_compile_definitions(gpu PRIVATE -DUSE_CUDA -D_${GPU_PREC_SETTING} -DMPI_GERYON -DUCL_NO_EXIT ${GPU_CUDA_MPS_FLAGS})
if(CUDPP_OPT)
target_include_directories(gpu PRIVATE ${LAMMPS_LIB_SOURCE_DIR}/gpu/cudpp_mini)
target_compile_definitions(gpu PRIVATE -DUSE_CUDPP)
@@ -210,7 +192,6 @@ elseif(GPU_API STREQUAL "OPENCL")
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod.cu
${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_hippo.cu
)
foreach(GPU_KERNEL ${GPU_LIB_CU})
@@ -227,7 +208,6 @@ elseif(GPU_API STREQUAL "OPENCL")
GenerateOpenCLHeader(tersoff ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff.cu)
GenerateOpenCLHeader(tersoff_zbl ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_zbl_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_zbl.cu)
GenerateOpenCLHeader(tersoff_mod ${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_mod_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_tersoff_mod.cu)
GenerateOpenCLHeader(hippo ${CMAKE_CURRENT_BINARY_DIR}/gpu/hippo_cl.h ${OCL_COMMON_HEADERS} ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_hippo_extra.h ${LAMMPS_LIB_SOURCE_DIR}/gpu/lal_hippo.cu)
list(APPEND GPU_LIB_SOURCES
${CMAKE_CURRENT_BINARY_DIR}/gpu/gayberne_cl.h
@@ -237,18 +217,14 @@ elseif(GPU_API STREQUAL "OPENCL")
${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_cl.h
${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_zbl_cl.h
${CMAKE_CURRENT_BINARY_DIR}/gpu/tersoff_mod_cl.h
${CMAKE_CURRENT_BINARY_DIR}/gpu/hippo_cl.h
)
add_library(gpu STATIC ${GPU_LIB_SOURCES})
target_link_libraries(gpu PRIVATE OpenCL::OpenCL)
target_include_directories(gpu PRIVATE ${CMAKE_CURRENT_BINARY_DIR}/gpu)
target_compile_definitions(gpu PRIVATE -DUSE_OPENCL -D_${GPU_PREC_SETTING})
if(GPU_DEBUG)
target_compile_definitions(gpu PRIVATE -DUCL_DEBUG -DGERYON_KERNEL_DUMP)
else()
target_compile_definitions(gpu PRIVATE -DMPI_GERYON -DGERYON_NUMA_FISSION -DUCL_NO_EXIT)
endif()
target_compile_definitions(gpu PRIVATE -D_${GPU_PREC_SETTING} -DMPI_GERYON -DGERYON_NUMA_FISSION -DUCL_NO_EXIT)
target_compile_definitions(gpu PRIVATE -DUSE_OPENCL)
target_link_libraries(lammps PRIVATE gpu)
add_executable(ocl_get_devices ${LAMMPS_LIB_SOURCE_DIR}/gpu/geryon/ucl_get_devices.cpp)
@@ -398,12 +374,8 @@ elseif(GPU_API STREQUAL "HIP")
add_library(gpu STATIC ${GPU_LIB_SOURCES})
target_include_directories(gpu PRIVATE ${LAMMPS_LIB_BINARY_DIR}/gpu)
target_compile_definitions(gpu PRIVATE -DUSE_HIP -D_${GPU_PREC_SETTING})
if(GPU_DEBUG)
target_compile_definitions(gpu PRIVATE -DUCL_DEBUG -DGERYON_KERNEL_DUMP)
else()
target_compile_definitions(gpu PRIVATE -DMPI_GERYON -DUCL_NO_EXIT)
endif()
target_compile_definitions(gpu PRIVATE -D_${GPU_PREC_SETTING} -DMPI_GERYON -DUCL_NO_EXIT)
target_compile_definitions(gpu PRIVATE -DUSE_HIP)
target_link_libraries(gpu PRIVATE hip::host)
if(HIP_USE_DEVICE_SORT)


@@ -112,5 +112,9 @@ if(PKG_KSPACE)
RegisterIntegrateStyle(${INTEL_SOURCES_DIR}/verlet_lrt_intel.h)
endif()
if(PKG_ELECTRODE)
list(APPEND INTEL_SOURCES ${INTEL_SOURCES_DIR}/electrode_accel_intel.cpp)
endif()
target_sources(lammps PRIVATE ${INTEL_SOURCES})
target_include_directories(lammps PRIVATE ${INTEL_SOURCES_DIR})


@@ -49,8 +49,8 @@ if(DOWNLOAD_KOKKOS)
list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_CXX_EXTENSIONS=${CMAKE_CXX_EXTENSIONS}")
list(APPEND KOKKOS_LIB_BUILD_ARGS "-DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE}")
include(ExternalProject)
set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/3.7.01.tar.gz" CACHE STRING "URL for KOKKOS tarball")
set(KOKKOS_MD5 "f140e02b826223b1045207d9bc10d404" CACHE STRING "MD5 checksum of KOKKOS tarball")
set(KOKKOS_URL "https://github.com/kokkos/kokkos/archive/3.6.01.tar.gz" CACHE STRING "URL for KOKKOS tarball")
set(KOKKOS_MD5 "0ec97fc0c356dd65bd2487defe81a7bf" CACHE STRING "MD5 checksum of KOKKOS tarball")
mark_as_advanced(KOKKOS_URL)
mark_as_advanced(KOKKOS_MD5)
GetFallbackURL(KOKKOS_URL KOKKOS_FALLBACK)
@@ -76,7 +76,7 @@ if(DOWNLOAD_KOKKOS)
add_dependencies(LAMMPS::KOKKOSCORE kokkos_build)
add_dependencies(LAMMPS::KOKKOSCONTAINERS kokkos_build)
elseif(EXTERNAL_KOKKOS)
find_package(Kokkos 3.7.01 REQUIRED CONFIG)
find_package(Kokkos 3.6.01 REQUIRED CONFIG)
target_link_libraries(lammps PRIVATE Kokkos::kokkos)
target_link_libraries(lmp PRIVATE Kokkos::kokkos)
else()
@@ -92,6 +92,7 @@ else()
endif()
add_subdirectory(${LAMMPS_LIB_KOKKOS_SRC_DIR} ${LAMMPS_LIB_KOKKOS_BIN_DIR})
set(Kokkos_INCLUDE_DIRS ${LAMMPS_LIB_KOKKOS_SRC_DIR}/core/src
${LAMMPS_LIB_KOKKOS_SRC_DIR}/containers/src
${LAMMPS_LIB_KOKKOS_SRC_DIR}/algorithms/src
@@ -126,7 +127,7 @@ set(KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/kokkos.cpp
if(PKG_KSPACE)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/fft3d_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/grid3d_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/gridcomm_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/remap_kokkos.cpp)
if(Kokkos_ENABLE_CUDA)
if(NOT (FFT STREQUAL "KISS"))
@@ -141,29 +142,6 @@ if(PKG_KSPACE)
endif()
endif()
if(PKG_ML-IAP)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/mliap_data_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_descriptor_so3_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_model_linear_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_model_python_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_unified_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/mliap_so3_kokkos.cpp)
# Add KOKKOS version of ML-IAP Python coupling if non-KOKKOS version is included
if(MLIAP_ENABLE_PYTHON AND Cythonize_EXECUTABLE)
file(GLOB MLIAP_KOKKOS_CYTHON_SRC ${CONFIGURE_DEPENDS} ${LAMMPS_SOURCE_DIR}/KOKKOS/*.pyx)
foreach(MLIAP_CYTHON_FILE ${MLIAP_KOKKOS_CYTHON_SRC})
get_filename_component(MLIAP_CYTHON_BASE ${MLIAP_CYTHON_FILE} NAME_WE)
add_custom_command(OUTPUT ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.h
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MLIAP_CYTHON_FILE} ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
COMMAND ${Cythonize_EXECUTABLE} -3 ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
WORKING_DIRECTORY ${MLIAP_BINARY_DIR}
MAIN_DEPENDENCY ${MLIAP_CYTHON_FILE}
COMMENT "Generating C++ sources with cythonize...")
list(APPEND KOKKOS_PKG_SOURCES ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp)
endforeach()
endif()
endif()
if(PKG_PHONON)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/dynamical_matrix_kokkos.cpp)


@@ -1,35 +0,0 @@
# avoid including this file twice
if(LEPTON_SOURCE_DIR)
return()
endif()
set(LEPTON_SOURCE_DIR ${LAMMPS_LIB_SOURCE_DIR}/lepton)
file(GLOB LEPTON_SOURCES ${CONFIGURE_DEPENDS} ${LEPTON_SOURCE_DIR}/src/[^.]*.cpp)
if((CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "amd64") OR
(CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "AMD64") OR
(CMAKE_HOST_SYSTEM_PROCESSOR STREQUAL "x86_64"))
option(LEPTON_ENABLE_JIT "Enable Just-In-Time compiler for Lepton" ON)
else()
option(LEPTON_ENABLE_JIT "Enable Just-In-Time compiler for Lepton" OFF)
endif()
if(LEPTON_ENABLE_JIT)
file(GLOB ASMJIT_SOURCES ${CONFIGURE_DEPENDS} ${LEPTON_SOURCE_DIR}/asmjit/*/[^.]*.cpp)
endif()
add_library(lepton STATIC ${LEPTON_SOURCES} ${ASMJIT_SOURCES})
set_target_properties(lepton PROPERTIES OUTPUT_NAME lammps_lepton${LAMMPS_MACHINE})
target_compile_definitions(lepton PUBLIC LEPTON_BUILDING_STATIC_LIBRARY=1)
target_include_directories(lepton PUBLIC ${LEPTON_SOURCE_DIR}/include)
if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
find_library(LIB_RT rt QUIET)
target_link_libraries(lepton PUBLIC ${LIB_RT})
endif()
if(LEPTON_ENABLE_JIT)
target_compile_definitions(lepton PUBLIC "LEPTON_USE_JIT=1;ASMJIT_BUILD_X86=1;ASMJIT_STATIC=1;ASMJIT_BUILD_RELEASE=1")
target_include_directories(lepton PUBLIC ${LEPTON_SOURCE_DIR})
endif()
target_link_libraries(lammps PRIVATE lepton)


@@ -1,9 +0,0 @@
# fix sgcmc may only be installed if also the EAM pair style from MANYBODY is installed
if(NOT PKG_MANYBODY)
get_property(LAMMPS_FIX_HEADERS GLOBAL PROPERTY FIX)
list(REMOVE_ITEM LAMMPS_FIX_HEADERS ${LAMMPS_SOURCE_DIR}/MC/fix_sgcmc.h)
set_property(GLOBAL PROPERTY FIX "${LAMMPS_FIX_HEADERS}")
get_target_property(LAMMPS_SOURCES lammps SOURCES)
list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MC/fix_sgcmc.cpp)
set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}")
endif()


@@ -8,8 +8,8 @@ option(DOWNLOAD_MDI "Download and compile the MDI library instead of using an al
if(DOWNLOAD_MDI)
message(STATUS "MDI download requested - we will build our own")
set(MDI_URL "https://github.com/MolSSI-MDI/MDI_Library/archive/v1.4.12.tar.gz" CACHE STRING "URL for MDI tarball")
set(MDI_MD5 "7a222353ae8e03961d5365e6cd48baee" CACHE STRING "MD5 checksum for MDI tarball")
set(MDI_URL "https://github.com/MolSSI-MDI/MDI_Library/archive/v1.3.2.tar.gz" CACHE STRING "URL for MDI tarball")
set(MDI_MD5 "836f5da400d8cff0f0e4435640f9454f" CACHE STRING "MD5 checksum for MDI tarball")
mark_as_advanced(MDI_URL)
mark_as_advanced(MDI_MD5)
GetFallbackURL(MDI_URL MDI_FALLBACK)
@ -27,21 +27,8 @@ if(DOWNLOAD_MDI)
# detect if we have python development support and thus can enable python plugins
set(MDI_USE_PYTHON_PLUGINS OFF)
if(CMAKE_VERSION VERSION_LESS 3.12)
if(NOT PYTHON_VERSION_STRING)
set(Python_ADDITIONAL_VERSIONS 3.12 3.11 3.10 3.9 3.8 3.7 3.6)
# search for interpreter first, so we have a consistent library
find_package(PythonInterp) # Deprecated since version 3.12
if(PYTHONINTERP_FOUND)
set(Python_EXECUTABLE ${PYTHON_EXECUTABLE})
endif()
endif()
# search for the library matching the selected interpreter
set(Python_ADDITIONAL_VERSIONS ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR})
find_package(PythonLibs QUIET) # Deprecated since version 3.12
if(PYTHONLIBS_FOUND)
if(NOT (PYTHON_VERSION_STRING STREQUAL PYTHONLIBS_VERSION_STRING))
message(FATAL_ERROR "Python Library version ${PYTHONLIBS_VERSION_STRING} does not match Interpreter version ${PYTHON_VERSION_STRING}")
endif()
set(MDI_USE_PYTHON_PLUGINS ON)
endif()
else()
@ -50,14 +37,6 @@ if(DOWNLOAD_MDI)
set(MDI_USE_PYTHON_PLUGINS ON)
endif()
endif()
# python plugins are not supported and thus must be always off on Windows
if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
unset(Python_Development_FOUND)
set(MDI_USE_PYTHON_PLUGINS OFF)
if(CMAKE_CROSSCOMPILING)
set(CMAKE_INSTALL_LIBDIR lib)
endif()
endif()
# download/ build MDI library
# always build static library with -fpic
@ -66,9 +45,8 @@ if(DOWNLOAD_MDI)
ExternalProject_Add(mdi_build
URL ${MDI_URL} ${MDI_FALLBACK}
URL_MD5 ${MDI_MD5}
PREFIX ${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext
CMAKE_ARGS
-DCMAKE_INSTALL_PREFIX=${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext
CMAKE_ARGS ${CMAKE_REQUEST_PIC}
-DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>
-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}
-DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}
@ -77,26 +55,25 @@ if(DOWNLOAD_MDI)
-Dlanguage=C
-Dlibtype=STATIC
-Dmpi=${MDI_USE_MPI}
-Dplugins=ON
-Dpython_plugins=${MDI_USE_PYTHON_PLUGINS}
UPDATE_COMMAND ""
INSTALL_COMMAND ${CMAKE_COMMAND} --build ${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext/src/mdi_build-build --target install
BUILD_BYPRODUCTS "${CMAKE_CURRENT_BINARY_DIR}/mdi_build_ext/${CMAKE_INSTALL_LIBDIR}/mdi/${CMAKE_STATIC_LIBRARY_PREFIX}mdi${CMAKE_STATIC_LIBRARY_SUFFIX}"
INSTALL_COMMAND ""
BUILD_BYPRODUCTS "<BINARY_DIR>/MDI_Library/libmdi.a"
)
# where is the compiled library?
ExternalProject_get_property(mdi_build PREFIX)
ExternalProject_get_property(mdi_build BINARY_DIR)
set(MDI_BINARY_DIR "${BINARY_DIR}/MDI_Library")
# workaround for older CMake versions
file(MAKE_DIRECTORY ${PREFIX}/${CMAKE_INSTALL_LIBDIR}/mdi)
file(MAKE_DIRECTORY ${PREFIX}/include/mdi)
file(MAKE_DIRECTORY ${MDI_BINARY_DIR})
# create imported target for the MDI library
add_library(LAMMPS::MDI UNKNOWN IMPORTED)
add_dependencies(LAMMPS::MDI mdi_build)
set_target_properties(LAMMPS::MDI PROPERTIES
IMPORTED_LOCATION "${PREFIX}/${CMAKE_INSTALL_LIBDIR}/mdi/${CMAKE_STATIC_LIBRARY_PREFIX}mdi${CMAKE_STATIC_LIBRARY_SUFFIX}"
INTERFACE_INCLUDE_DIRECTORIES ${PREFIX}/include/mdi
)
IMPORTED_LOCATION "${MDI_BINARY_DIR}/libmdi.a"
INTERFACE_INCLUDE_DIRECTORIES ${MDI_BINARY_DIR}
)
set(MDI_DEP_LIBS "")
# if compiling with python plugins we need
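Taken together, a hedged sketch of exercising the download path above (``DOWNLOAD_MDI`` and the URL/MD5 cache variables are defined earlier in this file; the build directory layout is an assumption):
    cmake -D PKG_MDI=yes -D DOWNLOAD_MDI=yes ../cmake
    cmake --build .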

View File

@ -1,13 +0,0 @@
# pair style and fix srp/react depend on the fixes bond/break and bond/create from the MC package
if(NOT PKG_MC)
get_property(LAMMPS_FIX_HEADERS GLOBAL PROPERTY FIX)
list(REMOVE_ITEM LAMMPS_FIX_HEADERS ${LAMMPS_SOURCE_DIR}/MISC/fix_srp_react.h)
set_property(GLOBAL PROPERTY FIX "${LAMMPS_FIX_HEADERS}")
get_property(LAMMPS_PAIR_HEADERS GLOBAL PROPERTY PAIR)
list(REMOVE_ITEM LAMMPS_PAIR_HEADERS ${LAMMPS_SOURCE_DIR}/MISC/pair_srp_react.h)
set_property(GLOBAL PROPERTY PAIR "${LAMMPS_PAIR_HEADERS}")
get_target_property(LAMMPS_SOURCES lammps SOURCES)
list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MISC/fix_srp_react.cpp)
list(REMOVE_ITEM LAMMPS_SOURCES ${LAMMPS_SOURCE_DIR}/MISC/pair_srp_react.cpp)
set_property(TARGET lammps PROPERTY SOURCES "${LAMMPS_SOURCES}")
endif()
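Analogous to the sgcmc case above, a sketch of a configuration that retains pair and fix srp/react would enable MC together with MISC:
    cmake -D PKG_MISC=yes -D PKG_MC=yes ../cmake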

View File

@ -34,18 +34,16 @@ if(MLIAP_ENABLE_PYTHON)
endif()
set(MLIAP_BINARY_DIR ${CMAKE_BINARY_DIR}/cython)
file(GLOB MLIAP_CYTHON_SRC ${CONFIGURE_DEPENDS} ${LAMMPS_SOURCE_DIR}/ML-IAP/*.pyx)
set(MLIAP_CYTHON_SRC ${LAMMPS_SOURCE_DIR}/ML-IAP/mliap_model_python_couple.pyx)
get_filename_component(MLIAP_CYTHON_BASE ${MLIAP_CYTHON_SRC} NAME_WE)
file(MAKE_DIRECTORY ${MLIAP_BINARY_DIR})
foreach(MLIAP_CYTHON_FILE ${MLIAP_CYTHON_SRC})
get_filename_component(MLIAP_CYTHON_BASE ${MLIAP_CYTHON_FILE} NAME_WE)
add_custom_command(OUTPUT ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.h
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MLIAP_CYTHON_FILE} ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
COMMAND ${Cythonize_EXECUTABLE} -3 ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
WORKING_DIRECTORY ${MLIAP_BINARY_DIR}
MAIN_DEPENDENCY ${MLIAP_CYTHON_FILE}
COMMENT "Generating C++ sources with cythonize...")
target_sources(lammps PRIVATE ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp)
endforeach()
add_custom_command(OUTPUT ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.h
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MLIAP_CYTHON_SRC} ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
COMMAND ${Cythonize_EXECUTABLE} -3 ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.pyx
WORKING_DIRECTORY ${MLIAP_BINARY_DIR}
MAIN_DEPENDENCY ${MLIAP_CYTHON_SRC}
COMMENT "Generating C++ sources with cythonize...")
target_compile_definitions(lammps PRIVATE -DMLIAP_PYTHON)
target_sources(lammps PRIVATE ${MLIAP_BINARY_DIR}/${MLIAP_CYTHON_BASE}.cpp)
target_include_directories(lammps PRIVATE ${MLIAP_BINARY_DIR})
endif()
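A minimal sketch exercising the cythonize branch above (assuming a ``cythonize`` executable is found in the PATH; a full ML-IAP build also needs the ML-SNAP package):
    cmake -D PKG_ML-IAP=yes -D PKG_ML-SNAP=yes -D MLIAP_ENABLE_PYTHON=yes ../cmake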

View File

@ -1,6 +1,6 @@
set(PACELIB_URL "https://github.com/ICAMS/lammps-user-pace/archive/refs/tags/v.2023.01.3.fix.tar.gz" CACHE STRING "URL for PACE evaluator library sources")
set(PACELIB_URL "https://github.com/ICAMS/lammps-user-pace/archive/refs/tags/v.2021.10.25.fix2.tar.gz" CACHE STRING "URL for PACE evaluator library sources")
set(PACELIB_MD5 "4f0b3b5b14456fe9a73b447de3765caa" CACHE STRING "MD5 checksum of PACE evaluator library tarball")
set(PACELIB_MD5 "32394d799bc282bb57696c78c456e64f" CACHE STRING "MD5 checksum of PACE evaluator library tarball")
mark_as_advanced(PACELIB_URL)
mark_as_advanced(PACELIB_MD5)
GetFallbackURL(PACELIB_URL PACELIB_FALLBACK)
@ -29,9 +29,23 @@ execute_process(
)
get_newest_file(${CMAKE_BINARY_DIR}/lammps-user-pace-* lib-pace)
add_subdirectory(${lib-pace} build-pace)
set_target_properties(pace PROPERTIES CXX_EXTENSIONS ON OUTPUT_NAME lammps_pace${LAMMPS_MACHINE})
# enforce building libyaml-cpp as static library and turn off optional features
set(YAML_BUILD_SHARED_LIBS OFF)
set(YAML_CPP_BUILD_CONTRIB OFF)
set(YAML_CPP_BUILD_TOOLS OFF)
add_subdirectory(${lib-pace}/yaml-cpp build-yaml-cpp)
set(YAML_CPP_INCLUDE_DIR ${lib-pace}/yaml-cpp/include)
file(GLOB PACE_EVALUATOR_INCLUDE_DIR ${CONFIGURE_DEPENDS} ${lib-pace}/ML-PACE)
file(GLOB PACE_EVALUATOR_SOURCES ${CONFIGURE_DEPENDS} ${lib-pace}/ML-PACE/*.cpp)
list(FILTER PACE_EVALUATOR_SOURCES EXCLUDE REGEX pair_pace.cpp)
add_library(pace STATIC ${PACE_EVALUATOR_SOURCES})
set_target_properties(pace PROPERTIES CXX_EXTENSIONS ON OUTPUT_NAME lammps_pace${LAMMPS_MACHINE})
target_include_directories(pace PUBLIC ${PACE_EVALUATOR_INCLUDE_DIR} ${YAML_CPP_INCLUDE_DIR})
target_link_libraries(pace PRIVATE yaml-cpp-pace)
if(CMAKE_PROJECT_NAME STREQUAL "lammps")
target_link_libraries(lammps PRIVATE pace)
endif()
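A hedged usage sketch; the tarball location and checksum can be overridden through the ``PACELIB_URL`` and ``PACELIB_MD5`` cache variables set at the top of this file:
    cmake -D PKG_ML-PACE=yes ../cmake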

View File

@ -3,7 +3,6 @@
set(ALL_PACKAGES
ADIOS
AMOEBA
ASPHERE
ATC
AWPMD
@ -12,7 +11,7 @@ set(ALL_PACKAGES
BPM
BROWNIAN
CG-DNA
CG-SPICA
CG-SDK
CLASS2
COLLOID
COLVARS
@ -44,7 +43,6 @@ set(ALL_PACKAGES
KSPACE
LATBOLTZ
LATTE
LEPTON
MACHDYN
MANIFOLD
MANYBODY
@ -57,7 +55,6 @@ set(ALL_PACKAGES
ML-HDNNP
ML-IAP
ML-PACE
ML-POD
ML-QUIP
ML-RANN
ML-SNAP

View File

@ -5,7 +5,6 @@
set(ALL_PACKAGES
ADIOS
AMOEBA
ASPHERE
ATC
AWPMD
@ -14,7 +13,7 @@ set(ALL_PACKAGES
BPM
BROWNIAN
CG-DNA
CG-SPICA
CG-SDK
CLASS2
COLLOID
COLVARS
@ -46,7 +45,6 @@ set(ALL_PACKAGES
KSPACE
LATBOLTZ
LATTE
LEPTON
MACHDYN
MANIFOLD
MANYBODY
@ -59,7 +57,6 @@ set(ALL_PACKAGES
ML-HDNNP
ML-IAP
ML-PACE
ML-POD
ML-QUIP
ML-RANN
ML-SNAP

View File

@ -1,5 +1,4 @@
set(WIN_PACKAGES
AMOEBA
ASPHERE
ATC
AWPMD
@ -8,7 +7,7 @@ set(WIN_PACKAGES
BPM
BROWNIAN
CG-DNA
CG-SPICA
CG-SDK
CLASS2
COLLOID
COLVARS
@ -36,7 +35,6 @@ set(WIN_PACKAGES
INTERLAYER
KSPACE
LATTE
LEPTON
MACHDYN
MANIFOLD
MANYBODY
@ -48,7 +46,6 @@ set(WIN_PACKAGES
MISC
ML-HDNNP
ML-IAP
ML-POD
ML-RANN
ML-SNAP
MOFFF

View File

@ -3,14 +3,13 @@
# are removed. The resulting binary should be able to run most inputs.
set(ALL_PACKAGES
AMOEBA
ASPHERE
BOCS
BODY
BPM
BROWNIAN
CG-DNA
CG-SPICA
CG-SDK
CLASS2
COLLOID
COLVARS
@ -35,15 +34,12 @@ set(ALL_PACKAGES
GRANULAR
INTERLAYER
KSPACE
LEPTON
MACHDYN
MANYBODY
MC
MEAM
MESONT
MISC
ML-IAP
ML-POD
ML-SNAP
MOFFF
MOLECULE

View File

@ -13,9 +13,9 @@ set(PACKAGES_WITH_LIB
KOKKOS
LATBOLTZ
LATTE
LEPTON
MACHDYN
MDI
MESONT
ML-HDNNP
ML-PACE
ML-QUIP

View File

@ -1,13 +1,11 @@
set(WIN_PACKAGES
AMOEBA
ASPHERE
AWPMD
BOCS
BODY
BPM
BROWNIAN
CG-DNA
CG-SPICA
CG-SDK
CLASS2
COLLOID
COLVARS
@ -21,7 +19,6 @@ set(WIN_PACKAGES
DPD-SMOOTH
DRUDE
EFF
ELECTRODE
EXTRA-COMPUTE
EXTRA-DUMP
EXTRA-FIX
@ -31,15 +28,12 @@ set(WIN_PACKAGES
GRANULAR
INTERLAYER
KSPACE
LEPTON
MANIFOLD
MANYBODY
MC
MEAM
MESONT
MISC
ML-IAP
ML-POD
ML-SNAP
MOFFF
MOLECULE

View File

@ -86,13 +86,13 @@ html: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
@if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi
@$(MAKE) $(MFLAGS) -C graphviz all
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \
sphinx-build -E $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
touch $(RSTDIR)/Fortran.rst ;\
sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) html ;\
ln -sf Manual.html html/index.html;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
echo "############################################" ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
echo "############################################" ;\
rst_anchor_check src/*.rst ;\
python $(BUILDDIR)/utils/check-packages.py -s ../src -d src ;\
env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\
@ -114,9 +114,9 @@ fasthtml: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
@$(MAKE) $(MFLAGS) -C graphviz all
@mkdir -p fasthtml
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \
sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/fasthtml/doctrees $(RSTDIR) fasthtml ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
touch $(RSTDIR)/Fortran.rst ;\
sphinx-build $(SPHINXEXTRA) -b html -c $(SPHINXCONFIG) -d $(BUILDDIR)/fasthtml/doctrees $(RSTDIR) fasthtml ;\
deactivate ;\
)
@ -131,8 +131,8 @@ fasthtml: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
spelling: xmlgen $(SPHINXCONFIG)/conf.py $(VENV) $(SPHINXCONFIG)/false_positives.txt
@if [ "$(HAS_BASH)" == "NO" ] ; then echo "bash was not found at $(OSHELL)! Please use: $(MAKE) SHELL=/path/to/bash" 1>&2; exit 1; fi
@(\
. $(VENV)/bin/activate ; \
cp $(SPHINXCONFIG)/false_positives.txt $(RSTDIR)/; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \
cp $(SPHINXCONFIG)/false_positives.txt $(RSTDIR)/ ; env PYTHONWARNINGS= \
sphinx-build -b spelling -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) spelling ;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
deactivate ;\
@ -146,9 +146,9 @@ epub: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
@rm -f LAMMPS.epub
@cp src/JPG/*.* epub/JPG
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ;\
sphinx-build -E $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
touch $(RSTDIR)/Fortran.rst ;\
sphinx-build $(SPHINXEXTRA) -b epub -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
deactivate ;\
@ -167,12 +167,12 @@ pdf: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
@$(MAKE) $(MFLAGS) -C graphviz all
@if [ "$(HAS_PDFLATEX)" == "NO" ] ; then echo "PDFLaTeX or latexmk were not found! Please check README for further instructions" 1>&2; exit 1; fi
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ; env PYTHONWARNINGS= \
sphinx-build -E $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\
touch $(RSTDIR)/Fortran.rst ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
touch $(RSTDIR)/Fortran.rst ;\
sphinx-build $(SPHINXEXTRA) -b latex -c $(SPHINXCONFIG) -d $(BUILDDIR)/doctrees $(RSTDIR) latex ;\
rm -f $(BUILDDIR)/doxygen/xml/run.stamp;\
echo "############################################" ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
echo "############################################" ;\
rst_anchor_check src/*.rst ;\
python utils/check-packages.py -s ../src -d src ;\
env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\
@ -200,21 +200,21 @@ pdf: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
anchor_check : $(ANCHORCHECK)
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ;\
rst_anchor_check src/*.rst ;\
deactivate ;\
)
style_check : $(VENV)
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ;\
python utils/check-styles.py -s ../src -d src ;\
deactivate ;\
)
package_check : $(VENV)
@(\
. $(VENV)/bin/activate ; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate ;\
python utils/check-packages.py -s ../src -d src ;\
deactivate ;\
)
@ -252,7 +252,7 @@ $(MATHJAX):
$(ANCHORCHECK): $(VENV)
@( \
. $(VENV)/bin/activate; env PYTHONWARNINGS= PYTHONDONTWRITEBYTECODE=1 \
. $(VENV)/bin/activate; \
pip $(PIP_OPTIONS) install -e utils/converters;\
deactivate;\
)
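As a usage sketch, the targets patched above are invoked from the documentation folder (directory name is an assumption) like:
    make html           # build the HTML manual
    make spelling       # run the Sphinx spelling checker
    make anchor_check   # verify anchors in src/*.rst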

View File

@ -1,7 +1,7 @@
.TH LAMMPS "1" "8 February 2023" "2023-02-08"
.TH LAMMPS "1" "23 June 2022 - Update 3" "2022-6-23"
.SH NAME
.B LAMMPS
\- Molecular Dynamics Simulator. Version 8 February 2023
\- Molecular Dynamics Simulator. Version 23 June 2022 - Update 3
.SH SYNOPSIS
.B lmp

View File

@ -63,7 +63,7 @@ In the src directory, there is one top-level Makefile and several
low-level machine-specific files named Makefile.xxx where xxx = the
machine name. If a low-level Makefile exists for your platform, you do
not need to edit the top-level Makefile. However you should check the
system-specific section of the low-level Makefile to ensure the
system-specific section of the low-level Makefile to insure the
various paths are correct for your environment. If a low-level
Makefile does not exist for your platform, you will need to add a
suitable target to the top-level Makefile. You will also need to

View File

@ -1206,7 +1206,7 @@ this command is not typically needed if the "nonbond style" and "
an exception to this is if a short cutoff is used initially,
but a longer cutoff will be used for a subsequent run (in the same
input script), in this case the "maximum cutoff" command should be
used to ensure enough memory is allocated for the later run
used to insure enough memory is allocated for the later run
note that a restart file contains nonbond cutoffs (so it is not necessary
to use a "nonbond style" command before "read restart"), but LAMMPS
still needs to know what the maximum cutoff will be before the

View File

@ -6,9 +6,9 @@ either traditional makefiles for use with GNU make (which may require
manual editing), or using a build environment generated by CMake (Unix
Makefiles, Ninja, Xcode, Visual Studio, KDevelop, CodeBlocks and more).
As an alternative, you can download a package with pre-built executables
or automated build trees, as described in the :doc:`Install <Install>`
section of the manual.
As an alternative you can download a package with pre-built executables
or automated build trees as described on the :doc:`Install <Install>`
page.
.. toctree::
:maxdepth: 1

View File

@ -44,7 +44,7 @@ standard. A more detailed discussion of that is below.
The executable created by CMake (after running make) is named
``lmp`` unless the ``LAMMPS_MACHINE`` option is set. When setting
``LAMMPS_MACHINE=name``, the executable will be called
``LAMMPS_MACHINE=name`` the executable will be called
``lmp_name``. Using ``BUILD_MPI=no`` will enforce building a
serial executable using the MPI STUBS library.
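A short sketch of the naming behavior described above (the chosen machine names are examples only):

.. code-block:: bash

   cmake -D LAMMPS_MACHINE=serial -D BUILD_MPI=no ../cmake   # produces lmp_serial
   cmake -D LAMMPS_MACHINE=gpu ../cmake                      # produces lmp_gpu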
@ -60,7 +60,7 @@ standard. A more detailed discussion of that is below.
Any ``make machine`` command will look up the make settings from a
file ``Makefile.machine`` in the folder ``src/MAKE`` or one of its
subdirectories ``MINE``, ``MACHINES``, or ``OPTIONS``, create a
sub-directories ``MINE``, ``MACHINES``, or ``OPTIONS``, create a
folder ``Obj_machine`` with all objects and generated files and an
executable called ``lmp_machine``\ . The standard parallel build
with ``make mpi`` assumes a standard MPI installation with MPI
@ -107,9 +107,9 @@ MPI and OpenMP support in LAMMPS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are installing MPI yourself to build a parallel LAMMPS
executable, we recommend either MPICH or OpenMPI, which are regularly
executable, we recommend either MPICH or OpenMPI which are regularly
used and tested with LAMMPS by the LAMMPS developers. MPICH can be
downloaded from the `MPICH home page <https://www.mpich.org>`_, and
downloaded from the `MPICH home page <https://www.mpich.org>`_ and
OpenMPI can be downloaded correspondingly from the `OpenMPI home page
<https://www.open-mpi.org>`_. Other MPI packages should also work. No
specific vendor provided and standard compliant MPI library is currently
@ -130,12 +130,12 @@ package can be compiled to include OpenMP threading.
In addition, there are a few commands in LAMMPS that have native OpenMP
support included as well. These are commands in the ``MPIIO``,
``ML-SNAP``, ``DIFFRACTION``, and ``DPD-REACT`` packages. Furthermore,
``ML-SNAP``, ``DIFFRACTION``, and ``DPD-REACT`` packages. In addition
some packages support OpenMP threading indirectly through the libraries
they interface to: e.g. ``LATTE``, ``KSPACE``, and ``COLVARS``. See the
:doc:`Packages details <Packages_details>` page for more info on these
packages, and the pages for their respective commands for OpenMP
threading info.
they interface to: e.g. ``LATTE``, ``KSPACE``, and ``COLVARS``.
See the :doc:`Packages details <Packages_details>` page for more
info on these packages and the pages for their respective commands
for OpenMP threading info.
For CMake, if you use ``BUILD_OMP=yes``, you can use these packages
and turn on their native OpenMP support and turn on their native OpenMP
@ -144,9 +144,9 @@ variable before you launch LAMMPS.
For building via conventional make, the ``CCFLAGS`` and ``LINKFLAGS``
variables in Makefile.machine need to include the compiler flag that
enables OpenMP. For the GNU compilers or Clang, it is ``-fopenmp``\ .
For (recent) Intel compilers, it is ``-qopenmp``\ . If you are using a
different compiler, please refer to its documentation.
enables OpenMP. For GNU compilers it is ``-fopenmp``\ . For (recent) Intel
compilers it is ``-qopenmp``\ . If you are using a different compiler,
please refer to its documentation.
.. _default-none-issues:
@ -174,16 +174,15 @@ Choice of compiler and compile/link options
The choice of compiler and compiler flags can be important for maximum
performance. Vendor provided compilers for a specific hardware can
produce faster code than open-source compilers like the GNU compilers.
On the most common x86 hardware, the most popular C++ compilers are
quite similar in their ability to optimize regular C/C++ source code at
high optimization levels. When using the ``INTEL`` package, there is a
distinct advantage in using the `Intel C++ compiler <intel_>`_ due to
much improved vectorization through SSE and AVX instructions on
compatible hardware. The source code in that package conditionally
includes compiler specific directives to enable these high degrees of
vectorization. This may change over time as equivalent vectorization
directives are included into the OpenMP standard and other compilers
adopt them.
On the most common x86 hardware most popular C++ compilers are quite
similar in performance of C/C++ code at high optimization levels. When
using the ``INTEL`` package, there is a distinct advantage in using
the `Intel C++ compiler <intel_>`_ due to much improved vectorization
through SSE and AVX instructions on compatible hardware as the source
code includes changes and Intel compiler specific directives to enable
high degrees of vectorization. This may change over time as equivalent
vectorization directives are included into OpenMP standard revisions and
other compilers adopt them.
.. _intel: https://software.intel.com/en-us/intel-compilers
@ -197,7 +196,7 @@ LAMMPS.
.. tab:: CMake build
By default CMake will use the compiler it finds according to
internal preferences, and it will add optimization flags
internal preferences and it will add optimization flags
appropriate to that compiler and any :doc:`accelerator packages
<Speed_packages>` you have included in the build. CMake will
check if the detected or selected compiler is compatible with the
@ -251,9 +250,9 @@ LAMMPS.
and `-C ../cmake/presets/pgi.cmake`
will switch the compiler to the PGI compilers.
Furthermore, you can set ``CMAKE_TUNE_FLAGS`` to specifically add
compiler flags to tune for optimal performance on given hosts.
This variable is empty by default.
In addition you can set ``CMAKE_TUNE_FLAGS`` to specifically add
compiler flags to tune for optimal performance on given hosts. By
default this variable is empty.
.. note::
@ -369,10 +368,10 @@ running LAMMPS from Python via its library interface.
# no default value
The compilation will always produce a LAMMPS library and an
executable linked to it. By default, this will be a static
library named ``liblammps.a`` and an executable named ``lmp``.
Setting ``BUILD_SHARED_LIBS=yes`` will instead produce a shared
library called ``liblammps.so`` (or ``liblammps.dylib`` or
executable linked to it. By default this will be a static library
named ``liblammps.a`` and an executable named ``lmp``. Setting
``BUILD_SHARED_LIBS=yes`` will instead produce a shared library
called ``liblammps.so`` (or ``liblammps.dylib`` or
``liblammps.dll`` depending on the platform). If
``LAMMPS_MACHINE=name`` is set in addition, the name of the
generated libraries will be changed to either ``liblammps_name.a``
@ -430,7 +429,7 @@ You may need to use ``sudo make install`` in place of the last line if
you do not have write privileges for ``/usr/local/lib`` or use the
``--prefix`` configuration option to select an installation folder,
where you do have write access. The end result should be the file
``/usr/local/lib/libmpich.so``. On many Linux installations, the folder
``/usr/local/lib/libmpich.so``. On many Linux installations the folder
``${HOME}/.local`` is an alternative to using ``/usr/local`` and does
not require superuser or sudo access. In that case the configuration
step becomes:
@ -439,10 +438,9 @@ step becomes:
./configure --enable-shared --prefix=${HOME}/.local
Avoiding the use of "sudo" for custom software installation (i.e. from
source and not through a package manager tool provided by the OS) is
generally recommended to ensure the integrity of the system software
installation.
Avoiding to use "sudo" for custom software installation (i.e. from source
and not through a package manager tool provided by the OS) is generally
recommended to ensure the integrity of the system software installation.
----------
@ -516,11 +514,11 @@ using CMake or Make.
Install LAMMPS after a build
------------------------------------------
After building LAMMPS, you may wish to copy the LAMMPS executable or
library, along with other LAMMPS files (library header, doc files), to a
globally visible place on your system, for others to access. Note that
you may need super-user privileges (e.g. sudo) if the directory you want
to copy files to is protected.
After building LAMMPS, you may wish to copy the LAMMPS executable of
library, along with other LAMMPS files (library header, doc files) to
a globally visible place on your system, for others to access. Note
that you may need super-user privileges (e.g. sudo) if the directory
you want to copy files to is protected.
.. tabs::
@ -538,7 +536,7 @@ to copy files to is protected.
environment variable, if you are installing LAMMPS into a non-system
location and/or are linking to libraries in a non-system location that
depend on such runtime path settings.
As an alternative, you may set the CMake variable ``LAMMPS_INSTALL_RPATH``
As an alternative you may set the CMake variable ``LAMMPS_INSTALL_RPATH``
to ``on`` and then the runtime paths for any linked shared libraries
and the library installation folder for the LAMMPS library will be
embedded and thus the requirement to set environment variables is avoided.

View File

@ -9,10 +9,10 @@ page.
The following text assumes some familiarity with CMake and focuses on
using the command line tool ``cmake`` and what settings are supported
for building LAMMPS. A more detailed tutorial on how to use CMake
itself, the text mode or graphical user interface, to change the
generated output files for different build tools and development
environments is on a :doc:`separate page <Howto_cmake>`.
for building LAMMPS. A more detailed tutorial on how to use ``cmake``
itself, the text mode or graphical user interface, change the generated
output files for different build tools and development environments is
on a :doc:`separate page <Howto_cmake>`.
.. note::
@ -22,12 +22,12 @@ environments is on a :doc:`separate page <Howto_cmake>`.
.. warning::
You must not mix the :doc:`traditional make based <Build_make>`
LAMMPS build procedure with using CMake. No packages may be
LAMMPS build procedure with using CMake. Thus no packages may be
installed or a build been previously attempted in the LAMMPS source
directory by using ``make <machine>``. CMake will detect if this is
the case and generate an error. To remove conflicting files from the
``src`` you can use the command ``make no-all purge`` which will
uninstall all packages and delete all auto-generated files.
un-install all packages and delete all auto-generated files.
Advantages of using CMake
@ -44,9 +44,9 @@ software or for people that want to modify or extend LAMMPS.
and adapt the LAMMPS default build configuration accordingly.
- CMake can generate files for different build tools and integrated
development environments (IDE).
- CMake supports customization of settings with a command line, text
mode, or graphical user interface. No knowledge of file formats or
complex command line syntax is required.
- CMake supports customization of settings with a text mode or graphical
user interface. No knowledge of file formats or and complex command
line syntax required.
- All enabled components are compiled in a single build operation.
- Automated dependency tracking for all files and configuration options.
- Support for true out-of-source compilation. Multiple configurations
@ -55,23 +55,23 @@ software or for people that want to modify or extend LAMMPS.
source tree.
- Simplified packaging of LAMMPS for Linux distributions, environment
modules, or automated build tools like `Homebrew <https://brew.sh/>`_.
- Integration of automated unit and regression testing (the LAMMPS side
of this is still under active development).
- Integration of automated regression testing (the LAMMPS side for that
is still under development).
.. _cmake_build:
Getting started
^^^^^^^^^^^^^^^
Building LAMMPS with CMake is a two-step process. In the first step,
you use CMake to generate a build environment in a new directory. For
that purpose you can use either the command-line utility ``cmake`` (or
``cmake3``), the text-mode UI utility ``ccmake`` (or ``ccmake3``) or the
graphical utility ``cmake-gui``, or use them interchangeably. The
second step is then the compilation and linking of all objects,
libraries, and executables using the selected build tool. Here is a
minimal example using the command line version of CMake to build LAMMPS
with no add-on packages enabled and no customization:
Building LAMMPS with CMake is a two-step process. First you use CMake
to generate a build environment in a new directory. For that purpose
you can use either the command-line utility ``cmake`` (or ``cmake3``),
the text-mode UI utility ``ccmake`` (or ``ccmake3``) or the graphical
utility ``cmake-gui``, or use them interchangeably. The second step is
then the compilation and linking of all objects, libraries, and
executables. Here is a minimal example using the command line version of
CMake to build LAMMPS with no add-on packages enabled and no
customization:
.. code-block:: bash
@ -96,17 +96,17 @@ Compilation can take a long time, since LAMMPS is a large project with
many features. If your machine has multiple CPU cores (most do these
days), you can speed this up by compiling sources in parallel with
``make -j N`` (with N being the maximum number of concurrently executed
tasks). Installation of the `ccache <https://ccache.dev/>`_ (= Compiler
Cache) software may speed up repeated compilation even more, e.g. during
code development, especially when repeatedly switching between branches.
tasks). Also installation of the `ccache <https://ccache.dev/>`_ (=
Compiler Cache) software may speed up repeated compilation even more,
e.g. during code development.
After the initial build, whenever you edit LAMMPS source files, enable
or disable packages, change compiler flags or build options, you must
re-compile and relink the LAMMPS executable with ``cmake --build .`` (or
``make``). If the compilation fails for some reason, try running
``cmake .`` and then compile again. The included dependency tracking
should make certain that only the necessary subset of files is
re-compiled. You can also delete compiled objects, libraries, and
should make certain that only the necessary subset of files are
re-compiled. You can also delete compiled objects, libraries and
executables with ``cmake --build . --target clean`` (or ``make clean``).
After compilation, you may optionally install the LAMMPS executable into
@ -132,12 +132,12 @@ file called ``CMakeLists.txt`` (for LAMMPS it is located in the
``CMakeCache.txt``, which is generated at the end of the CMake
configuration step. The cache file contains all current CMake settings.
To modify settings, enable or disable features, you need to set
*variables* with either the *-D* command line flag (``-D
VARIABLE1_NAME=value``) or change them in the text mode of the graphical
user interface. The *-D* flag can be used several times in one command.
To modify settings, enable or disable features, you need to set *variables*
with either the *-D* command line flag (``-D VARIABLE1_NAME=value``) or
change them in the text mode of graphical user interface. The *-D* flag
can be used several times in one command.
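As a hedged illustration of combining several *-D* flags in one command (the package choices are placeholders):

.. code-block:: bash

   cmake -D PKG_MOLECULE=yes -D PKG_KSPACE=yes -D CMAKE_BUILD_TYPE=Release ../cmake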
For your convenience, we provide :ref:`CMake presets <cmake_presets>`
For your convenience we provide :ref:`CMake presets <cmake_presets>`
that combine multiple settings to enable optional LAMMPS packages or use
a different compiler tool chain. Those are loaded with the *-C* flag
(``-C ../cmake/presets/basic.cmake``). This step would only be needed
@ -155,23 +155,22 @@ specific CMake version is given when running ``cmake --help``.
Multi-configuration build systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Throughout this manual, it is mostly assumed that LAMMPS is being built
Throughout this manual it is mostly assumed that LAMMPS is being built
on a Unix-like operating system with "make" as the underlying "builder",
since this is the most common case. In this case the build
"configuration" is chose using ``-D CMAKE_BUILD_TYPE=<configuration>``
with ``<configuration>`` being one of "Release", "Debug",
"RelWithDebInfo", or "MinSizeRel". Some build tools, however, can also
use or even require having a so-called multi-configuration build system
setup. For a multi-configuration build, the build type (or
configuration) is selected at compile time using the same build
files. E.g. with:
since this is the most common case. In this case the build "configuration"
is chosen using ``-D CMAKE_BUILD_TYPE=<configuration>`` with ``<configuration>``
being one of "Release", "Debug", "RelWithDebInfo", or "MinSizeRel".
Some build tools, however, can also use or even require to have a so-called
multi-configuration build system setup. For those the build type (or
configuration) is chosen at compile time using the same build files. E.g.
with:
.. code-block:: bash
cmake --build build-multi --config Release
In that case the resulting binaries are not in the build folder directly
but in subdirectories corresponding to the build type (i.e. Release in
but in sub-directories corresponding to the build type (i.e. Release in
the example from above). Similarly, for running unit tests the
configuration is selected with the *-C* flag:
@ -185,7 +184,7 @@ particular applicable to compiling packages that require additional libraries
that would be downloaded and compiled by CMake. The "windows" preset file
tries to keep track of which packages can be compiled natively with the
MSVC compilers out-of-the box. Not all of those external libraries are
portable to Windows, either.
portable to Windows either.
Installing CMake

View File

@ -46,7 +46,7 @@ It can be enabled for all C++ code with the following CMake flag
With this flag enabled all source files will be processed twice, first to
be compiled and then to be analyzed. Please note that the analysis can be
significantly more time-consuming than the compilation itself.
significantly more time consuming than the compilation itself.
----------

View File

@ -44,13 +44,12 @@ This is the list of packages that may require additional steps.
* :ref:`KIM <kim>`
* :ref:`KOKKOS <kokkos>`
* :ref:`LATTE <latte>`
* :ref:`LEPTON <lepton>`
* :ref:`MACHDYN <machdyn>`
* :ref:`MDI <mdi>`
* :ref:`MESONT <mesont>`
* :ref:`ML-HDNNP <ml-hdnnp>`
* :ref:`ML-IAP <mliap>`
* :ref:`ML-PACE <ml-pace>`
* :ref:`ML-POD <ml-pod>`
* :ref:`ML-QUIP <ml-quip>`
* :ref:`MOLFILE <molfile>`
* :ref:`MSCG <mscg>`
@ -126,11 +125,10 @@ CMake build
-D GPU_API=value # value = opencl (default) or cuda or hip
-D GPU_PREC=value # precision setting
# value = double or mixed (default) or single
-D HIP_PATH # path to HIP installation. Must be set if GPU_API=HIP
-D GPU_ARCH=value # primary GPU hardware choice for GPU_API=cuda
# value = sm_XX (see below, default is sm_50)
-D GPU_DEBUG=value # enable debug code in the GPU package library, mostly useful for developers
# value = yes or no (default)
-D HIP_PATH=value # value = path to HIP installation. Must be set if GPU_API=HIP
# value = sm_XX, see below
# default is sm_50
-D HIP_ARCH=value # primary GPU hardware choice for GPU_API=hip
# value depends on selected HIP_PLATFORM
# default is 'gfx906' for HIP_PLATFORM=amd and 'sm_50' for HIP_PLATFORM=nvcc
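A minimal CUDA-target sketch using the flags listed above (the architecture value is an example):

.. code:: bash

   cmake -D PKG_GPU=on -D GPU_API=cuda -D GPU_ARCH=sm_80 ..
   make -j 4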
@ -186,17 +184,10 @@ set appropriate environment variables. Some variables such as
:code:`HCC_AMDGPU_TARGET` (for ROCm <= 4.0) or :code:`CUDA_PATH` are necessary for :code:`hipcc`
and the linker to work correctly.
Using CHIP-SPV implementation of HIP is now supported. It allows one to run HIP
code on Intel GPUs via the OpenCL or Level Zero backends. To use CHIP-SPV, you must
set :code:`-DHIP_USE_DEVICE_SORT=OFF` in your CMake command line as CHIP-SPV does not
yet support hipCUB. The use of HIP for Intel GPUs is still experimental, so you
should only use this option in preparation for running on the Aurora system at ANL.
.. code:: bash
# AMDGPU target (ROCm <= 4.0)
export HIP_PLATFORM=hcc
export HIP_PATH=/path/to/HIP/install
export HCC_AMDGPU_TARGET=gfx906
cmake -D PKG_GPU=on -D GPU_API=HIP -D HIP_ARCH=gfx906 -D CMAKE_CXX_COMPILER=hipcc ..
make -j 4
@ -205,7 +196,6 @@ should only use this option in preparations to run on Aurora system at ANL.
# AMDGPU target (ROCm >= 4.1)
export HIP_PLATFORM=amd
export HIP_PATH=/path/to/HIP/install
cmake -D PKG_GPU=on -D GPU_API=HIP -D HIP_ARCH=gfx906 -D CMAKE_CXX_COMPILER=hipcc ..
make -j 4
@ -214,20 +204,10 @@ should only use this option in preparations to run on Aurora system at ANL.
# CUDA target (not recommended, use GPU_ARCH=cuda)
# !!! DO NOT set CMAKE_CXX_COMPILER !!!
export HIP_PLATFORM=nvcc
export HIP_PATH=/path/to/HIP/install
export CUDA_PATH=/usr/local/cuda
cmake -D PKG_GPU=on -D GPU_API=HIP -D HIP_ARCH=sm_70 ..
make -j 4
.. code:: bash
# SPIR-V target (Intel GPUs)
export HIP_PLATFORM=spirv
export HIP_PATH=/path/to/HIP/install
export CMAKE_CXX_COMPILER=<hipcc/clang++>
cmake -D PKG_GPU=on -D GPU_API=HIP ..
make -j 4
Traditional make
^^^^^^^^^^^^^^^^
@ -288,7 +268,7 @@ your machine are not correct, the LAMMPS build will fail, and
.. note::
If you re-build the GPU library in ``lib/gpu``, you should always
uninstall the GPU package in ``lammps/src``, then re-install it and
un-install the GPU package in ``lammps/src``, then re-install it and
re-build LAMMPS. This is because the compilation of files in the GPU
package uses the library settings from the ``lib/gpu/Makefile.machine``
used to build the GPU library.
@ -489,9 +469,6 @@ They must be specified in uppercase.
* - **Arch-ID**
- **HOST or GPU**
- **Description**
* - NATIVE
- HOST
- Local machine
* - AMDAVX
- HOST
- AMD 64-bit x86 CPU (AVX 1)
@ -531,21 +508,9 @@ They must be specified in uppercase.
* - BDW
- HOST
- Intel Broadwell Xeon E-class CPU (AVX 2 + transactional mem)
* - SKL
- HOST
- Intel Skylake Client CPU
* - SKX
- HOST
- Intel Skylake Xeon Server CPU (AVX512)
* - ICL
- HOST
- Intel Ice Lake Client CPU (AVX512)
* - ICX
- HOST
- Intel Ice Lake Xeon Server CPU (AVX512)
* - SPR
- HOST
- Intel Sapphire Rapids Xeon Server CPU (AVX512)
- Intel Sky Lake Xeon E-class HPC CPU (AVX512 + transactional mem)
* - KNC
- HOST
- Intel Knights Corner Xeon Phi
@ -606,12 +571,6 @@ They must be specified in uppercase.
* - AMPERE86
- GPU
- NVIDIA Ampere generation CC 8.6 GPU
* - ADA89
- GPU
- NVIDIA Ada Lovelace generation CC 8.9 GPU
* - HOPPER90
- GPU
- NVIDIA Hopper generation CC 9.0 GPU
* - VEGA900
- GPU
- AMD GPU MI25 GFX900
@ -623,10 +582,7 @@ They must be specified in uppercase.
- AMD GPU MI100 GFX908
* - VEGA90A
- GPU
- AMD GPU MI200 GFX90A
* - INTEL_GEN
- GPU
- SPIR64-based devices, e.g. Intel GPUs, using JIT
- AMD GPU
* - INTEL_DG1
- GPU
- Intel Iris XeMAX GPU
@ -641,12 +597,9 @@ They must be specified in uppercase.
- Intel GPU Gen12LP
* - INTEL_XEHP
- GPU
- Intel GPU Xe-HP
* - INTEL_PVC
- GPU
- Intel GPU Ponte Vecchio
- Intel GPUs Xe-HP
This list was last updated for version 3.7.1 of the Kokkos library.
This list was last updated for version 3.5.0 of the Kokkos library.
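As a sketch, an arch ID from the table above is passed as a CMake variable in uppercase (the chosen ID is an example):

.. code-block:: bash

   cmake -D PKG_KOKKOS=yes -D Kokkos_ENABLE_CUDA=yes -D Kokkos_ARCH_AMPERE86=yes ../cmake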
.. tabs::
@ -840,10 +793,8 @@ library.
.. code-block:: bash
-D DOWNLOAD_LATTE=value # download LATTE for build, value = no (default) or yes
-D LATTE_LIBRARY=path # LATTE library file (only needed if a custom location)
-D USE_INTERNAL_LINALG=value # Use the internal linear algebra library instead of LAPACK
# value = no (default) or yes
-D DOWNLOAD_LATTE=value # download LATTE for build, value = no (default) or yes
-D LATTE_LIBRARY=path # LATTE library file (only needed if a custom location)
If ``DOWNLOAD_LATTE`` is set, the LATTE library will be downloaded
and built inside the CMake build directory. If the LATTE library
@ -851,13 +802,6 @@ library.
``LATTE_LIBRARY`` is the filename (plus path) of the LATTE library
file, not the directory the library file is in.
The LATTE library requires LAPACK (and BLAS) and CMake can identify
their locations and pass that info to the LATTE build script. But
on some systems this triggers a (current) limitation of CMake and
the configuration will fail. Try enabling ``USE_INTERNAL_LINALG`` in
those cases to use the bundled linear algebra library and work around
the limitation.
.. tab:: Traditional make
You can download and build the LATTE library manually if you
@ -883,55 +827,6 @@ library.
----------
.. _lepton:
LEPTON package
--------------
To build with this package, you must build the Lepton library which is
included in the LAMMPS source distribution in the ``lib/lepton`` folder.
.. tabs::
.. tab:: CMake build
This is the recommended build procedure for using Lepton in
LAMMPS. No additional settings are normally needed besides
``-D PKG_LEPTON=yes``.
On x86 hardware the Lepton library will also include a just-in-time
compiler for faster execution. This is auto detected but can
be explicitly disabled by setting ``-D LEPTON_ENABLE_JIT=no``
(or enabled by setting it to yes).
.. tab:: Traditional make
Before building LAMMPS, one must build the Lepton library in lib/lepton.
This can be done manually in the same folder by using or adapting
one of the provided Makefiles: for example, ``Makefile.serial`` for
the GNU C++ compiler, or ``Makefile.mpi`` for the MPI compiler wrapper.
The Lepton library is written in C++11 and thus the C++ compiler
may need to be instructed to enable support for that.
In general, it is safer to use build settings consistent with the
rest of LAMMPS. This is best carried out from the LAMMPS src
directory using a command like these, which simply invokes the
``lib/lepton/Install.py`` script with the specified args:
.. code-block:: bash
make lib-lepton # print help message
make lib-lepton args="-m serial" # build with GNU g++ compiler (settings as with "make serial")
make lib-lepton args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
The "machine" argument of the "-m" flag is used to find a
Makefile.machine to use as build recipe.
The build should produce a ``build`` folder and the library ``lib/lepton/liblmplepton.a``
----------
.. _mliap:
ML-IAP package
@ -1344,13 +1239,17 @@ module included in the LAMMPS source distribution.
.. tab:: Traditional make
As with other libraries distributed with LAMMPS, the Colvars library
needs to be built before building the LAMMPS program with the COLVARS
package enabled.
Before building LAMMPS, one must build the Colvars library in lib/colvars.
From the LAMMPS ``src`` directory, this is most easily and safely done
via one of the following commands, which implicitly rely on the
``lib/colvars/Install.py`` script with optional arguments:
This can be done manually in the same folder by using or adapting
one of the provided Makefiles: for example, ``Makefile.g++`` for
the GNU C++ compiler. C++11 compatibility may need to be enabled
for some older compilers (as is done in the example makefile).
In general, it is safer to use build setting consistent with the
rest of LAMMPS. This is best carried out from the LAMMPS src
directory using a command like these, which simply invokes the
``lib/colvars/Install.py`` script with the specified args:
.. code-block:: bash
@ -1360,17 +1259,10 @@ module included in the LAMMPS source distribution.
make lib-colvars args="-m g++-debug" # build with GNU g++ compiler and colvars debugging enabled
The "machine" argument of the "-m" flag is used to find a
``Makefile.machine`` file to use as build recipe. If such recipe does
not already exist in ``lib/colvars``, suitable settings will be
auto-generated consistent with those used in the core LAMMPS makefiles.
.. versionchanged:: 8Feb2023
Please note that Colvars uses the Lepton library, which is now
included with the LEPTON package; if you use anything other than
the ``make lib-colvars`` command, please make sure to :ref:`build
Lepton beforehand <lepton>`.
Makefile.machine to use as build recipe. If it does not already
exist in ``lib/colvars``, it will be auto-generated by using
compiler flags consistent with those parsed from the core LAMMPS
makefiles.
Optional flags may be specified as environment variables:
@ -1379,10 +1271,10 @@ module included in the LAMMPS source distribution.
COLVARS_DEBUG=yes make lib-colvars args="-m machine" # Build with debug code (much slower)
COLVARS_LEPTON=no make lib-colvars args="-m machine" # Build without Lepton (included otherwise)
The build should produce two files: the library
``lib/colvars/libcolvars.a`` and the specification file
``lib/colvars/Makefile.lammps``. The latter is auto-generated,
and normally does not need to be edited.
The build should produce two files: the library ``lib/colvars/libcolvars.a``
(which also includes Lepton objects if enabled) and the specification file
``lib/colvars/Makefile.lammps``. The latter is auto-generated, and normally does
not need to be edited.
----------
@ -1397,21 +1289,8 @@ This package depends on the KSPACE package.
.. tab:: CMake build
.. code-block:: bash
-D PKG_ELECTRODE=yes # enable the package itself
-D PKG_KSPACE=yes # the ELECTRODE package requires KSPACE
-D USE_INTERNAL_LINALG=value # value = no (default) or yes
Features in the ELECTRODE package are dependent on code in the
KSPACE package so the latter one *must* be enabled.
The ELECTRODE package also requires LAPACK (and BLAS) and CMake
can identify their locations and pass that info to the LATTE build
script. But on some systems this may cause problems when linking
or the dependency is not desired. Try enabling
``USE_INTERNAL_LINALG`` in those cases to use the bundled linear
algebra library and work around the limitation.
No additional settings are needed besides ``-D PKG_KSPACE=yes`` and
``-D PKG_ELECTRODE=yes``.
.. tab:: Traditional make
@ -1437,10 +1316,10 @@ This package depends on the KSPACE package.
.. code-block:: bash
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m g++" # build with GNU C++ compiler
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU Fortran compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI Fortran compiler (settings as with "make mpi")
make lib-linalg args="-m gfortran" # build with GNU Fortran compiler
The package itself is activated with ``make yes-KSPACE`` and
``make yes-ELECTRODE``
@ -1487,49 +1366,6 @@ at: `https://github.com/ICAMS/lammps-user-pace/ <https://github.com/ICAMS/lammps
----------
.. _ml-pod:
ML-POD package
-----------------------------
.. tabs::
.. tab:: CMake build
No additional settings are needed besides ``-D PKG_ML-POD=yes``.
.. tab:: Traditional make
Before building LAMMPS, you must configure the ML-POD support
settings in ``lib/mlpod``. You can do this manually, if you
prefer, or do it in one step from the ``lammps/src`` dir, using a
command like the following, which simply invoke the
``lib/mlpod/Install.py`` script with the specified args:
.. code-block:: bash
make lib-mlpod # print help message
make lib-mlpod args="-m serial" # build with GNU g++ compiler and MPI STUBS (settings as with "make serial")
make lib-mlpod args="-m mpi" # build with default MPI compiler (settings as with "make mpi")
make lib-mlpod args="-m mpi -e linalg" # same as above but use the bundled linalg lib
Note that the ``Makefile.lammps`` file has settings to use the BLAS
and LAPACK linear algebra libraries. These can either exist on
your system, or you can use the files provided in ``lib/linalg``.
In the latter case you also need to build the library in
``lib/linalg`` with a command like these:
.. code-block:: bash
make lib-linalg # print help message
make lib-linalg args="-m serial" # build with GNU C++ compiler (settings as with "make serial")
make lib-linalg args="-m mpi" # build with default MPI C++ compiler (settings as with "make mpi")
make lib-linalg args="-m g++" # build with GNU C++ compiler
The package itself is activated with ``make yes-ML-POD``.
----------
.. _plumed:
PLUMED package
@ -1858,6 +1694,48 @@ MDI package
----------
.. _mesont:
MESONT package
-------------------------
This package includes a library written in Fortran 90 in the
``lib/mesont`` folder, so a working Fortran 90 compiler is required to
compile it. Also, the files with the force field data for running the
bundled examples are not included in the source distribution. Instead
they will be downloaded the first time this package is installed.
.. tabs::
.. tab:: CMake build
No additional settings are needed besides ``-D PKG_MESONT=yes``
.. tab:: Traditional make
Before building LAMMPS, you must build the *mesont* library in
``lib/mesont``\ . You can also do it in one step from the
``lammps/src`` dir, using a command like these, which simply
invokes the ``lib/mesont/Install.py`` script with the specified
args:
.. code-block:: bash
make lib-mesont # print help message
make lib-mesont args="-m gfortran" # build with GNU g++ compiler (settings as with "make serial")
make lib-mesont args="-m ifort" # build with Intel icc compiler
The build should produce two files: ``lib/mesont/libmesont.a`` and
``lib/mesont/Makefile.lammps``\ . The latter is copied from an
existing ``Makefile.lammps.\*`` and has settings needed to build
LAMMPS with the *mesont* library (though typically the settings
contain only the Fortran runtime library). If necessary, you can
edit/create a new ``lib/mesont/Makefile.machine`` file for your
system, which should define an ``EXTRAMAKE`` variable to specify a
corresponding ``Makefile.lammps.machine`` file.
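Put together, a hedged end-to-end sketch for a traditional make build with this package would be:

.. code-block:: bash

   cd lammps/src
   make lib-mesont args="-m gfortran"   # build lib/mesont/libmesont.a
   make yes-MESONT                      # install the package
   make mpi                             # build LAMMPS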
----------
.. _molfile:
MOLFILE package
@ -2070,23 +1948,12 @@ within CMake will download the non-commercial use version.
-D DOWNLOAD_QUIP=value # download QUIP library for build, value = no (default) or yes
-D QUIP_LIBRARY=path # path to libquip.a (only needed if a custom location)
-D USE_INTERNAL_LINALG=value # Use the internal linear algebra library instead of LAPACK
# value = no (default) or yes
CMake will try to download and build the QUIP library from GitHub,
if it is not found on the local machine. This requires git to be
installed. It will use the same compilers and flags as used for
compiling LAMMPS. Currently this is only supported for the GNU
and the Intel compilers. Set the ``QUIP_LIBRARY`` variable if you
want to use a previously compiled and installed QUIP library and
CMake cannot find it.
The QUIP library requires LAPACK (and BLAS) and CMake can identify
their locations and pass that info to the QUIP build script. But
on some systems this triggers a (current) limitation of CMake and
the configuration will fail. Try enabling ``USE_INTERNAL_LINALG`` in
those cases to use the bundled linear algebra library and work around
the limitation.
CMake will try to download and build the QUIP library from GitHub, if it is not
found on the local machine. This requires to have git installed. It will use the same compilers
and flags as used for compiling LAMMPS. Currently this is only supported for the GNU and the
Intel compilers. Set the ``QUIP_LIBRARY`` variable if you want to use a previously compiled
and installed QUIP library and CMake cannot find it.
.. tab:: Traditional make

View File

@ -1,32 +1,33 @@
Link LAMMPS as a library to another code
========================================
LAMMPS is designed as a library of C++ objects that can be integrated
into other applications, including Python scripts. The files
``src/library.cpp`` and ``src/library.h`` define a C-style API for using
LAMMPS as a library. See the :doc:`Howto_library` page for a
description of the interface and how to use it for your needs.
LAMMPS is designed as a library of C++ objects that can be
integrated into other applications including Python scripts.
The files ``src/library.cpp`` and ``src/library.h`` define a
C-style API for using LAMMPS as a library. See the
:doc:`Howto_library` page
for a description of the interface and how to use it for your needs.
The :doc:`Build_basics` page explains how to build LAMMPS as either a
shared or static library. This results in a file in the compilation
folder called ``liblammps.a`` or ``liblammps_<name>.a`` in case of
building a static library. In case of a shared library, the name is the
same only that the suffix is going to be either ``.so`` or ``.dylib`` or
``.dll`` instead of ``.a`` depending on the OS. In some cases, the
``.so`` file may be a symbolic link to a file with the suffix ``.so.0``
(or some other number).
The :doc:`Build_basics` page explains how to build
LAMMPS as either a shared or static library. This results in a file
in the compilation folder called ``liblammps.a`` or ``liblammps_<name>.a``
in case of building a static library. In case of a shared library
the name is the same only that the suffix is going to be either ``.so``
or ``.dylib`` or ``.dll`` instead of ``.a`` depending on the OS.
In some cases the ``.so`` file may be a symbolic link to a file with
the suffix ``.so.0`` (or some other number).
.. note::
Care should be taken to use the same MPI library for the calling code
and the LAMMPS library, unless LAMMPS is to be compiled without (real)
MPI support using the included STUBS MPI library.
and the LAMMPS library unless LAMMPS is to be compiled without (real)
MPI support using the include STUBS MPI library.
Link with LAMMPS as a static library
------------------------------------
The calling application can link to LAMMPS as a static library with
compilation and link commands, as in the examples shown below. These
compilation and link commands as in the examples shown below. These
are examples for a code written in C in the file ``caller.c``.
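A hedged sketch of such a pair of compile and link commands (the paths and the library suffix are placeholders for what your build actually produced):

.. code-block:: bash

   mpicc -c -O -I${HOME}/lammps/src caller.c
   mpicc caller.o -L${HOME}/lammps/src -llammps_mpi -o caller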
The benefit of linking to a static library is that the resulting
executable is independent of that library since all required
@ -141,10 +142,10 @@ Link with LAMMPS as a shared library
When linking to LAMMPS built as a shared library, the situation becomes
much simpler, as all dependent libraries and objects are either included
in the shared library or registered as a dependent library in the shared
library file. Thus, those libraries need not be specified when linking
the calling executable. Only the *-I* flags are needed. So the example
case from above of the serial version static LAMMPS library with the
POEMS package installed becomes:
library file. Thus those libraries need not to be specified when
linking the calling executable. Only the *-I* flags are needed. So the
example case from above of the serial version static LAMMPS library with
the POEMS package installed becomes:
.. tabs::

View File

@ -20,22 +20,21 @@ with :doc:`CMake <Build_cmake>`. The makefiles of the traditional
make based build process and the scripts they are calling expect a few
additional tools to be available and functioning.
* A working C/C++ compiler toolchain supporting the C++11 standard; on
Linux, these are often the GNU compilers. Some older compiler versions
* a working C/C++ compiler toolchain supporting the C++11 standard; on
Linux these are often the GNU compilers. Some older compilers
require adding flags like ``-std=c++11`` to enable the C++11 mode.
* A Bourne shell compatible "Unix" shell program (frequently this is ``bash``)
* A few shell utilities: ``ls``, ``mv``, ``ln``, ``rm``, ``grep``, ``sed``, ``tr``, ``cat``, ``touch``, ``diff``, ``dirname``
* Python (optional, required for ``make lib-<pkg>`` in the src
folder). Python scripts are currently tested with python 2.7 and
3.6 to 3.11. The procedure for :doc:`building the documentation
<Build_manual>` *requires* Python 3.5 or later.
* a Bourne shell compatible "Unix" shell program (often this is ``bash``)
* a few shell utilities: ``ls``, ``mv``, ``ln``, ``rm``, ``grep``, ``sed``, ``tr``, ``cat``, ``touch``, ``diff``, ``dirname``
* python (optional, required for ``make lib-<pkg>`` in the src folder).
python scripts are currently tested with python 2.7 and 3.6. The procedure
for :doc:`building the documentation <Build_manual>` requires python 3.5 or later.
Getting started
^^^^^^^^^^^^^^^
To include LAMMPS packages (i.e. optional commands and styles) you must
enable (or "install") them first, as discussed on the :doc:`Build
package <Build_package>` page. If a package requires (provided or
package <Build_package>` page. If a packages requires (provided or
external) libraries, you must configure and build those libraries
**before** building LAMMPS itself and especially **before** enabling
such a package with ``make yes-<package>``. :doc:`Building LAMMPS with
@ -57,36 +56,36 @@ Compilation can take a long time, since LAMMPS is a large project with
many features. If your machine has multiple CPU cores (most do these
days), you can speed this up by compiling sources in parallel with
``make -j N`` (with N being the maximum number of concurrently executed
tasks). Installation of the `ccache <https://ccache.dev/>`_ (= Compiler
Cache) software may speed up repeated compilation even more, e.g. during
code development, especially when repeatedly switching between branches.
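For example (the task count is arbitrary; pick something close to the
number of available CPU cores):

.. code-block:: bash

   cd lammps/src
   make -j 8 mpi    # compile up to 8 files concurrently, then link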
After the initial build, whenever you edit LAMMPS source files, or add
files to or remove files from the source directory (e.g. by installing or
uninstalling packages), you must re-compile and relink the LAMMPS
executable with the same ``make <machine>`` command. The makefile's
dependency tracking should ensure that only the necessary subset of
files is re-compiled. If you change settings in the makefile, you have
to recompile *everything*. To delete all objects, you can use ``make
clean-<machine>``.
.. note::
Before the actual compilation starts, LAMMPS will perform several
steps to collect information from the configuration and setup that is
then embedded into the executable. When you build LAMMPS for the
first time, it will also compile a tool to quickly determine a list
of dependencies. Those are required for the make program to
correctly detect which files need to be recompiled or relinked
after changes were made to the sources.
Customized builds and alternate makefiles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``src/MAKE`` directory tree contains the ``Makefile.<machine>``
files included in the LAMMPS distribution. Typing ``make example`` uses
``Makefile.example`` from one of those folders, if available. The
``make serial`` and ``make mpi`` lines above, for example, use
``src/MAKE/Makefile.serial`` and ``src/MAKE/Makefile.mpi``,
respectively. Other makefiles are in these directories:
@ -107,13 +106,12 @@ a new name, please edit the first line with the description and machine
name, so you will not confuse yourself when looking at the machine
summary.
Makefiles you may wish to try out include those listed below (some
require a package to be installed first). Many of these include specific
compiler flags for optimized performance. Please note, however, that
some of these customized machine Makefiles are contributed by users, and
thus may have modifications specific to the systems of those users.
Since compilers, OS configurations, and LAMMPS itself keep changing,
their settings may become outdated, too:
.. code-block:: bash

View File

@ -2,7 +2,7 @@ Build the LAMMPS documentation
==============================
Depending on how you obtained LAMMPS and whether you have built the
manual yourself, this directory has a number of subdirectories and
files. Here is a list with descriptions:
.. code-block:: bash

View File

@ -4,14 +4,13 @@ Include packages in build
In LAMMPS, a package is a group of files that enable a specific set of
features. For example, force fields for molecular systems or
rigid-body constraints are in packages. In the src directory, each
package is a subdirectory with the package name in capital letters.
An overview of packages is given on the :doc:`Packages <Packages>` doc
page. Brief overviews of each package are on the :doc:`Packages details
<Packages_details>` page.
When building LAMMPS, you can choose to include or exclude each
package. Generally, there is no need to include a package if you
never plan to use its features.
If you get a run-time error that a LAMMPS command or style is
@ -47,13 +46,12 @@ packages:
* :ref:`KIM <kim>`
* :ref:`KOKKOS <kokkos>`
* :ref:`LATTE <latte>`
* :ref:`LEPTON <lepton>`
* :ref:`MACHDYN <machdyn>`
* :ref:`MDI <mdi>`
* :ref:`MESONT <mesont>`
* :ref:`ML-HDNNP <ml-hdnnp>`
* :ref:`ML-IAP <mliap>`
* :ref:`ML-PACE <ml-pace>`
* :ref:`ML-POD <ml-pod>`
* :ref:`ML-QUIP <ml-quip>`
* :ref:`MOLFILE <molfile>`
* :ref:`MSCG <mscg>`
@ -94,7 +92,7 @@ versus make.
If you switch between building with CMake and make builds, no
packages in the src directory can be installed when you invoke
``cmake``. CMake will give an error if that is not the case,
indicating how you can uninstall all packages in the src dir.
.. tab:: Traditional make
@ -103,7 +101,7 @@ versus make.
cd lammps/src
make ps # check which packages are currently installed
make yes-name # install a package with name
make no-name # uninstall a package with name
make mpi # build LAMMPS with whatever packages are now installed
Examples:
@ -119,13 +117,13 @@ versus make.
.. note::
You must always re-build LAMMPS (via make) after installing or
uninstalling a package, for the action to take effect. The
included dependency tracking will make certain that only the files
that need to be rebuilt are recompiled.
.. note::
You cannot install or uninstall packages and build LAMMPS in a
single make command with multiple targets, e.g. ``make
yes-colloid mpi``. This is because the make procedure creates
a list of source files that will be out-of-date for the build
@ -150,7 +148,7 @@ other files dependent on that package are also excluded.
if you downloaded a tarball, 3 packages (KSPACE, MANYBODY, MOLECULE)
were pre-installed via the traditional make procedure in the ``src``
directory. That is no longer the case, so CMake will build
as-is without needing to uninstall those packages.
----------
@ -167,9 +165,9 @@ control flow constructs for more complex operations.
LAMMPS includes several of these files to define configuration
"presets", similar to the options that exist for the Make based
system. Using these files, you can enable/disable portions of the
available packages in LAMMPS. If you need a custom preset, you can
make a copy of one of them and modify it to suit your needs.
.. code-block:: bash
@ -183,7 +181,7 @@ make a copy of one of them and modify it to suit your needs.
cmake -C ../cmake/presets/pgi.cmake [OPTIONS] ../cmake # change settings to use the PGI compilers by default
cmake -C ../cmake/presets/all_on.cmake [OPTIONS] ../cmake # enable all packages
cmake -C ../cmake/presets/all_off.cmake [OPTIONS] ../cmake # disable all packages
mingw64-cmake -C ../cmake/presets/mingw-cross.cmake [OPTIONS] ../cmake # compile with MinGW cross-compilers
Presets that have names starting with "windows" are specifically for
compiling LAMMPS :doc:`natively on Windows <Build_windows>` and
@ -227,7 +225,7 @@ The following commands are useful for managing package source files
and their installation when building LAMMPS via traditional make.
Just type ``make`` in lammps/src to see a one-line summary.
These commands install/uninstall sets of packages:
.. code-block:: bash
@ -243,40 +241,40 @@ These commands install/uninstall sets of packages:
make yes-ext # install packages that require external libraries
make no-ext # uninstall packages that require external libraries
which install/uninstall various sets of packages. Typing ``make
package`` will list all of these commands.
.. note::
Installing or uninstalling a package for the make based build process
works by simply copying files back and forth between the main source
directory src and the subdirectories with the package name (e.g.
src/KSPACE, src/ATC), so that the files are included or excluded
when LAMMPS is built. Only source files in the src folder will be
compiled.
The following make commands help manage files that exist in both the
src directory and in package subdirectories. You do not normally
need to use these commands unless you are editing LAMMPS files or are
updating LAMMPS via git.
Type ``make package-status`` or ``make ps`` to show which packages are
currently installed. For those that are installed, it will list any
files that are different in the src directory and package
subdirectory.
Type ``make package-installed`` or ``make pi`` to show which packages are
currently installed, without listing the status of packages that are
not installed.
Type ``make package-update`` or ``make pu`` to overwrite src files with
files from the package subdirectories if the package is installed. It
should be used after the checkout has been :doc:`updated or changed
with git <Install_git>`; this will only update the files in the package
subdirectories, but not the copies in the src folder.
Type ``make package-overwrite`` to overwrite files in the package
subdirectories with src files.
Type ``make package-diff`` to list all differences between pairs of
files in both the source directory and the package directory.

View File

@ -1,8 +1,8 @@
Optional build settings
=======================
LAMMPS can be built with several optional settings. Each subsection
explains how to do this for building both with CMake and make.
* `C++11 standard compliance`_ when building all of LAMMPS
* `FFT library`_ for use with the :doc:`kspace_style pppm <kspace_style>` command
@ -41,7 +41,7 @@ FFT library
When the KSPACE package is included in a LAMMPS build, the
:doc:`kspace_style pppm <kspace_style>` command performs 3d FFTs which
require use of an FFT library to compute 1d FFTs. The KISS FFT
library is included with LAMMPS, but other libraries can be faster.
LAMMPS can use them if they are available on your system.
.. tabs::
@ -63,9 +63,9 @@ LAMMPS can use them if they are available on your system.
Usually these settings are all that is needed. If FFTW3 is
selected, then CMake will try to detect whether threaded FFTW
libraries are available and enable them by default. This setting
is independent of whether OpenMP threads are enabled and a package
like KOKKOS or OPENMP is used. If CMake cannot detect the FFT
library, you can set these variables to assist:
.. code-block:: bash
@ -141,18 +141,18 @@ The Intel MKL math library is part of the Intel compiler suite. It
can be used with the Intel or GNU compiler (see the ``FFT_LIB`` setting
above).
Performing 3d FFTs in parallel can be time-consuming due to data access
and required communication. This cost can be reduced by performing
single-precision FFTs instead of double precision. Single precision
means the real and imaginary parts of a complex datum are 4-byte floats.
Double precision means they are 8-byte doubles. Note that Fourier
transform and related PPPM operations are somewhat less sensitive to
floating point truncation errors, and thus the resulting error is
generally less than the difference in precision. Using the
``-DFFT_SINGLE`` setting trades off a little accuracy for reduced memory
use and parallel communication costs for transposing 3d FFT data.
When using ``-DFFT_SINGLE`` with FFTW3, you may need to build the FFTW
library a second time with support for single-precision.
For FFTW3, do the following, which should produce the additional
@ -177,11 +177,11 @@ ARRAY mode.
Size of LAMMPS integer types and size limits
--------------------------------------------
LAMMPS uses a few custom integer data types, which can be defined as
either 4-byte (= 32-bit) or 8-byte (= 64-bit) integers at compile time.
This has an impact on the size of a system that can be simulated, or how
large counters can become before "rolling over". The default setting of
"smallbig" is almost always adequate.
.. tabs::
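As a brief sketch of how this setting is commonly selected (the option
and define names below follow the LAMMPS build documentation; verify
them against your version):

.. code-block:: bash

   # CMake build: choose the integer size setting explicitly
   cmake -D LAMMPS_SIZES=bigbig ../cmake
   # traditional make: add the corresponding define to LMP_INC
   # in the machine makefile, e.g. -DLAMMPS_BIGBIG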
@ -254,7 +254,7 @@ topology information, though IDs are enabled by default. The
:doc:`atom_modify id no <atom_modify>` command will turn them off. Atom
IDs are required for molecular systems with bond topology (bonds,
angles, dihedrals, etc). Similarly, some force or compute or fix styles
require atom IDs. Thus, if you model a molecular system or use one of
those styles with more than 2 billion atoms, you need the "bigbig"
setting.
@ -264,7 +264,7 @@ systems and 500 million for systems with bonds (the additional
restriction is due to using the 2 upper bits of the local atom index
in neighbor lists for storing special bonds info).
Image flags store 3 values per atom in a single integer, which count the
number of times an atom has moved through the periodic box in each
dimension. See the :doc:`dump <dump>` manual page for a discussion. If
an atom moves through the periodic box more than this limit, the value
@ -285,7 +285,7 @@ Output of JPG, PNG, and movie files
--------------------------------------------------
The :doc:`dump image <dump_image>` command has options to output JPEG or
PNG image files. Likewise, the :doc:`dump movie <dump_image>` command
outputs movie files in a variety of movie formats. Using these options
requires the following settings:
@ -354,7 +354,7 @@ Read or write compressed files
If this option is enabled, large files can be read or written with
compression by ``gzip`` or similar tools by several LAMMPS commands,
including :doc:`read_data <read_data>`, :doc:`rerun <rerun>`, and
:doc:`dump <dump>`. Supported compression tools are currently
``gzip``, ``bzip2``, ``zstd``, and ``lzma``.
.. tabs::
@ -394,7 +394,7 @@ Memory allocation alignment
---------------------------------------
This setting enables the use of the "posix_memalign()" call instead of
"malloc()" when LAMMPS allocates large chunks of memory. Vector
"malloc()" when LAMMPS allocates large chunks or memory. Vector
instructions on CPUs may become more efficient if dynamically allocated
memory is aligned on larger-than-default byte boundaries. On most
current operating systems, the "malloc()" implementation returns
@ -496,7 +496,7 @@ Trigger selected floating-point exceptions
------------------------------------------
Many kinds of CPUs have the capability to detect when a calculation
results in an invalid math operation, like a division by zero or calling
the square root with a negative argument. The default behavior on
most operating systems is to continue and have values for ``NaN`` (= not
a number) or ``Inf`` (= infinity). This allows software to detect and

View File

@ -21,7 +21,6 @@ commands in it are used to define a LAMMPS simulation.
Commands_pair
Commands_bond
Commands_kspace
Commands_dump
.. toctree::
:maxdepth: 1

View File

@ -10,7 +10,6 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
General commands
================
@ -24,7 +23,6 @@ table above.
* :doc:`angle_coeff <angle_coeff>`
* :doc:`angle_style <angle_style>`
* :doc:`angle_write <angle_write>`
* :doc:`atom_modify <atom_modify>`
* :doc:`atom_style <atom_style>`
* :doc:`balance <balance>`
@ -32,6 +30,7 @@ table above.
* :doc:`bond_style <bond_style>`
* :doc:`bond_write <bond_write>`
* :doc:`boundary <boundary>`
* :doc:`box <box>`
* :doc:`change_box <change_box>`
* :doc:`clear <clear>`
* :doc:`comm_modify <comm_modify>`
@ -46,7 +45,6 @@ table above.
* :doc:`dielectric <dielectric>`
* :doc:`dihedral_coeff <dihedral_coeff>`
* :doc:`dihedral_style <dihedral_style>`
* :doc:`dihedral_write <dihedral_write>`
* :doc:`dimension <dimension>`
* :doc:`displace_atoms <displace_atoms>`
* :doc:`dump <dump>`
@ -64,7 +62,6 @@ table above.
* :doc:`kspace_modify <kspace_modify>`
* :doc:`kspace_style <kspace_style>`
* :doc:`label <label>`
* :doc:`labelmap <labelmap>`
* :doc:`lattice <lattice>`
* :doc:`log <log>`
* :doc:`mass <mass>`
@ -91,7 +88,8 @@ table above.
* :doc:`region <region>`
* :doc:`replicate <replicate>`
* :doc:`rerun <rerun>`
* :doc:`reset_atoms <reset_atoms>`
* :doc:`reset_timestep <reset_timestep>`
* :doc:`restart <restart>`
* :doc:`run <run>`
@ -127,7 +125,6 @@ additional letter in parenthesis: k = KOKKOS.
* :doc:`group2ndx <group2ndx>`
* :doc:`hyper <hyper>`
* :doc:`kim <kim_commands>`
* :doc:`fitpod <fitpod_command>`
* :doc:`mdi <mdi>`
* :doc:`ndx2group <group2ndx>`
* :doc:`neb <neb>`

View File

@ -10,7 +10,6 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
.. _bond:
@ -44,8 +43,6 @@ OPT.
* :doc:`harmonic (iko) <bond_harmonic>`
* :doc:`harmonic/shift (o) <bond_harmonic_shift>`
* :doc:`harmonic/shift/cut (o) <bond_harmonic_shift_cut>`
* :doc:`lepton (o) <bond_lepton>`
* :doc:`mesocnt <bond_mesocnt>`
* :doc:`mm3 <bond_mm3>`
* :doc:`morse (o) <bond_morse>`
* :doc:`nonlinear (o) <bond_nonlinear>`
@ -77,7 +74,6 @@ OPT.
*
*
*
* :doc:`amoeba <angle_amoeba>`
* :doc:`charmm (iko) <angle_charmm>`
* :doc:`class2 (ko) <angle_class2>`
* :doc:`class2/p6 <angle_class2>`
@ -94,11 +90,9 @@ OPT.
* :doc:`fourier/simple (o) <angle_fourier_simple>`
* :doc:`gaussian <angle_gaussian>`
* :doc:`harmonic (iko) <angle_harmonic>`
* :doc:`lepton (o) <angle_lepton>`
* :doc:`mesocnt <angle_mesocnt>`
* :doc:`mm3 <angle_mm3>`
* :doc:`quartic (o) <angle_quartic>`
* :doc:`spica (o) <angle_spica>`
* :doc:`table (o) <angle_table>`
.. _dihedral:
@ -129,7 +123,6 @@ OPT.
* :doc:`fourier (io) <dihedral_fourier>`
* :doc:`harmonic (iko) <dihedral_harmonic>`
* :doc:`helix (o) <dihedral_helix>`
* :doc:`lepton (o) <dihedral_lepton>`
* :doc:`multi/harmonic (o) <dihedral_multi_harmonic>`
* :doc:`nharmonic (o) <dihedral_nharmonic>`
* :doc:`opls (iko) <dihedral_opls>`
@ -159,7 +152,6 @@ OPT.
*
*
*
* :doc:`amoeba <improper_amoeba>`
* :doc:`class2 (ko) <improper_class2>`
* :doc:`cossq (o) <improper_cossq>`
* :doc:`cvff (io) <improper_cvff>`

View File

@ -25,6 +25,7 @@ Setup simulation box:
:columns: 4
* :doc:`boundary <boundary>`
* :doc:`box <box>`
* :doc:`change_box <change_box>`
* :doc:`create_box <create_box>`
* :doc:`dimension <dimension>`

View File

@ -10,7 +10,6 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Compute commands
================
@ -57,7 +56,6 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`dpd/atom <compute_dpd_atom>`
* :doc:`edpd/temp/atom <compute_edpd_temp_atom>`
* :doc:`efield/atom <compute_efield_atom>`
* :doc:`efield/wolf/atom <compute_efield_wolf_atom>`
* :doc:`entropy/atom <compute_entropy_atom>`
* :doc:`erotate/asphere <compute_erotate_asphere>`
* :doc:`erotate/rigid <compute_erotate_rigid>`
@ -88,6 +86,7 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`ke/atom/eff <compute_ke_atom_eff>`
* :doc:`ke/eff <compute_ke_eff>`
* :doc:`ke/rigid <compute_ke_rigid>`
* :doc:`mesont <compute_mesont>`
* :doc:`mliap <compute_mliap>`
* :doc:`momentum <compute_momentum>`
* :doc:`msd <compute_msd>`
@ -107,7 +106,6 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`pressure/uef <compute_pressure_uef>`
* :doc:`property/atom <compute_property_atom>`
* :doc:`property/chunk <compute_property_chunk>`
* :doc:`property/grid <compute_property_grid>`
* :doc:`property/local <compute_property_local>`
* :doc:`ptm/atom <compute_ptm_atom>`
* :doc:`rdf <compute_rdf>`
@ -140,8 +138,6 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`smd/vol <compute_smd_vol>`
* :doc:`snap <compute_sna_atom>`
* :doc:`sna/atom <compute_sna_atom>`
* :doc:`sna/grid <compute_sna_atom>`
* :doc:`sna/grid/local <compute_sna_atom>`
* :doc:`snad/atom <compute_sna_atom>`
* :doc:`snav/atom <compute_sna_atom>`
* :doc:`sph/e/atom <compute_sph_e_atom>`

View File

@ -1,57 +0,0 @@
.. table_from_list::
:columns: 3
* :doc:`General commands <Commands_all>`
* :doc:`Fix styles <Commands_fix>`
* :doc:`Compute styles <Commands_compute>`
* :doc:`Pair styles <Commands_pair>`
* :ref:`Bond styles <bond>`
* :ref:`Angle styles <angle>`
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Dump commands
=============
An alphabetic list of all LAMMPS :doc:`dump <dump>` commands.
.. table_from_list::
:columns: 5
* :doc:`atom <dump>`
* :doc:`atom/adios <dump_adios>`
* :doc:`atom/gz <dump>`
* :doc:`atom/mpiio <dump>`
* :doc:`atom/zstd <dump>`
* :doc:`cfg <dump>`
* :doc:`cfg/gz <dump>`
* :doc:`cfg/mpiio <dump>`
* :doc:`cfg/uef <dump_cfg_uef>`
* :doc:`cfg/zstd <dump>`
* :doc:`custom <dump>`
* :doc:`custom/adios <dump_adios>`
* :doc:`custom/gz <dump>`
* :doc:`custom/mpiio <dump>`
* :doc:`custom/zstd <dump>`
* :doc:`dcd <dump>`
* :doc:`grid <dump>`
* :doc:`grid/vtk <dump>`
* :doc:`h5md <dump_h5md>`
* :doc:`image <dump_image>`
* :doc:`local <dump>`
* :doc:`local/gz <dump>`
* :doc:`local/zstd <dump>`
* :doc:`molfile <dump_molfile>`
* :doc:`movie <dump_image>`
* :doc:`netcdf <dump_netcdf>`
* :doc:`netcdf/mpiio <dump>`
* :doc:`vtk <dump_vtk>`
* :doc:`xtc <dump>`
* :doc:`xyz <dump>`
* :doc:`xyz/gz <dump>`
* :doc:`xyz/mpiio <dump>`
* :doc:`xyz/zstd <dump>`
* :doc:`yaml <dump>`

View File

@ -10,7 +10,6 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Fix commands
============
@ -29,8 +28,6 @@ OPT.
* :doc:`adapt/fep <fix_adapt_fep>`
* :doc:`addforce <fix_addforce>`
* :doc:`addtorque <fix_addtorque>`
* :doc:`amoeba/bitorsion <fix_amoeba_bitorsion>`
* :doc:`amoeba/pitorsion <fix_amoeba_pitorsion>`
* :doc:`append/atoms <fix_append_atoms>`
* :doc:`atc <fix_atc>`
* :doc:`atom/swap <fix_atom_swap>`
@ -38,7 +35,6 @@ OPT.
* :doc:`ave/chunk <fix_ave_chunk>`
* :doc:`ave/correlate <fix_ave_correlate>`
* :doc:`ave/correlate/long <fix_ave_correlate_long>`
* :doc:`ave/grid <fix_ave_grid>`
* :doc:`ave/histo <fix_ave_histo>`
* :doc:`ave/histo/weight <fix_ave_histo>`
* :doc:`ave/time <fix_ave_time>`
@ -66,13 +62,13 @@ OPT.
* :doc:`drude <fix_drude>`
* :doc:`drude/transform/direct <fix_drude_transform>`
* :doc:`drude/transform/inverse <fix_drude_transform>`
* :doc:`dt/reset (k) <fix_dt_reset>`
* :doc:`edpd/source <fix_dpd_source>`
* :doc:`efield <fix_efield>`
* :doc:`ehex <fix_ehex>`
* :doc:`electrode/conp (i) <fix_electrode>`
* :doc:`electrode/conq (i) <fix_electrode>`
* :doc:`electrode/thermo (i) <fix_electrode>`
* :doc:`electron/stopping <fix_electron_stopping>`
* :doc:`electron/stopping/fit <fix_electron_stopping>`
* :doc:`enforce2d (k) <fix_enforce2d>`
@ -107,7 +103,7 @@ OPT.
* :doc:`lb/viscous <fix_lb_viscous>`
* :doc:`lineforce <fix_lineforce>`
* :doc:`manifoldforce <fix_manifoldforce>`
* :doc:`mdi/qm <fix_mdi_qm>`
* :doc:`meso/move <fix_meso_move>`
* :doc:`mol/swap <fix_mol_swap>`
* :doc:`momentum (k) <fix_momentum>`
@ -166,7 +162,6 @@ OPT.
* :doc:`orient/fcc <fix_orient>`
* :doc:`orient/eco <fix_orient_eco>`
* :doc:`pafi <fix_pafi>`
* :doc:`pair <fix_pair>`
* :doc:`phonon <fix_phonon>`
* :doc:`pimd <fix_pimd>`
* :doc:`planeforce <fix_planeforce>`
@ -214,7 +209,6 @@ OPT.
* :doc:`saed/vtk <fix_saed_vtk>`
* :doc:`setforce (k) <fix_setforce>`
* :doc:`setforce/spin <fix_setforce>`
* :doc:`sgcmc <fix_sgcmc>`
* :doc:`shake (k) <fix_shake>`
* :doc:`shardlow (k) <fix_shardlow>`
* :doc:`smd <fix_smd>`
@ -251,7 +245,7 @@ OPT.
* :doc:`tune/kspace <fix_tune_kspace>`
* :doc:`vector <fix_vector>`
* :doc:`viscosity <fix_viscosity>`
* :doc:`viscous (k) <fix_viscous>`
* :doc:`viscous/sphere <fix_viscous_sphere>`
* :doc:`wall/body/polygon <fix_wall_body_polygon>`
* :doc:`wall/body/polyhedron <fix_wall_body_polyhedron>`

View File

@ -10,7 +10,6 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
KSpace solvers
==============

View File

@ -10,7 +10,6 @@
* :ref:`Dihedral styles <dihedral>`
* :ref:`Improper styles <improper>`
* :doc:`KSpace styles <Commands_kspace>`
* :doc:`Dump styles <Commands_dump>`
Pair_style potentials
======================
@ -39,7 +38,6 @@ OPT.
* :doc:`agni (o) <pair_agni>`
* :doc:`airebo (io) <pair_airebo>`
* :doc:`airebo/morse (io) <pair_airebo>`
* :doc:`amoeba (g) <pair_amoeba>`
* :doc:`atm <pair_atm>`
* :doc:`awpmd/cut <pair_awpmd>`
* :doc:`beck (go) <pair_beck>`
@ -126,7 +124,6 @@ OPT.
* :doc:`hbond/dreiding/lj (o) <pair_hbond_dreiding>`
* :doc:`hbond/dreiding/morse (o) <pair_hbond_dreiding>`
* :doc:`hdnnp <pair_hdnnp>`
* :doc:`hippo (g) <pair_amoeba>`
* :doc:`ilp/graphene/hbn (t) <pair_ilp_graphene_hbn>`
* :doc:`ilp/tmd (t) <pair_ilp_tmd>`
* :doc:`kolmogorov/crespi/full <pair_kolmogorov_crespi_full>`
@ -134,8 +131,6 @@ OPT.
* :doc:`lcbop <pair_lcbop>`
* :doc:`lebedeva/z <pair_lebedeva_z>`
* :doc:`lennard/mdf <pair_mdf>`
* :doc:`lepton (o) <pair_lepton>`
* :doc:`lepton/coul (o) <pair_lepton>`
* :doc:`line/lj <pair_line_lj>`
* :doc:`lj/charmm/coul/charmm (giko) <pair_charmm>`
* :doc:`lj/charmm/coul/charmm/implicit (ko) <pair_charmm>`
@ -166,7 +161,7 @@ OPT.
* :doc:`lj/cut/coul/msm (go) <pair_lj_cut_coul>`
* :doc:`lj/cut/coul/msm/dielectric <pair_dielectric>`
* :doc:`lj/cut/coul/wolf (o) <pair_lj_cut_coul>`
* :doc:`lj/cut/dipole/cut (gko) <pair_dipole>`
* :doc:`lj/cut/dipole/long (g) <pair_dipole>`
* :doc:`lj/cut/dipole/sf (go) <pair_dipole>`
* :doc:`lj/cut/soft (o) <pair_fep_soft>`
@ -175,7 +170,7 @@ OPT.
* :doc:`lj/cut/tip4p/long (got) <pair_lj_cut_tip4p>`
* :doc:`lj/cut/tip4p/long/soft (o) <pair_fep_soft>`
* :doc:`lj/expand (gko) <pair_lj_expand>`
* :doc:`lj/expand/coul/long (gk) <pair_lj_expand>`
* :doc:`lj/gromacs (gko) <pair_gromacs>`
* :doc:`lj/gromacs/coul/gromacs (ko) <pair_gromacs>`
* :doc:`lj/long/coul/long (iot) <pair_lj_long>`
@ -184,9 +179,9 @@ OPT.
* :doc:`lj/long/tip4p/long (o) <pair_lj_long>`
* :doc:`lj/mdf <pair_mdf>`
* :doc:`lj/relres (o) <pair_lj_relres>`
* :doc:`lj/spica (gko) <pair_spica>`
* :doc:`lj/spica/coul/long (go) <pair_spica>`
* :doc:`lj/spica/coul/msm (o) <pair_spica>`
* :doc:`lj/sf/dipole/sf (go) <pair_dipole>`
* :doc:`lj/smooth (go) <pair_lj_smooth>`
* :doc:`lj/smooth/linear (o) <pair_lj_smooth_linear>`
@ -199,15 +194,14 @@ OPT.
* :doc:`lubricateU/poly <pair_lubricateU>`
* :doc:`mdpd <pair_mesodpd>`
* :doc:`mdpd/rhosum <pair_mesodpd>`
* :doc:`meam (k) <pair_meam>`
* :doc:`meam/ms (k) <pair_meam>`
* :doc:`meam/spline (o) <pair_meam_spline>`
* :doc:`meam/sw/spline <pair_meam_sw_spline>`
* :doc:`mesocnt <pair_mesocnt>`
* :doc:`mesocnt/viscous <pair_mesocnt>`
* :doc:`mesont/tpm <pair_mesont_tpm>`
* :doc:`mgpt <pair_mgpt>`
* :doc:`mie/cut (g) <pair_mie>`
* :doc:`mliap (k) <pair_mliap>`
* :doc:`mm3/switch3/coulgauss/long <pair_lj_switch3_coulgauss_long>`
* :doc:`momb <pair_momb>`
* :doc:`morse (gkot) <pair_morse>`
@ -238,8 +232,6 @@ OPT.
* :doc:`oxrna2/xstk <pair_oxrna2>`
* :doc:`oxrna2/coaxstk <pair_oxrna2>`
* :doc:`pace (k) <pair_pace>`
* :doc:`pace/extrapolation (k) <pair_pace>`
* :doc:`pod <pair_pod>`
* :doc:`peri/eps <pair_peri>`
* :doc:`peri/lps (o) <pair_peri>`
* :doc:`peri/pmb (o) <pair_peri>`
@ -276,7 +268,6 @@ OPT.
* :doc:`spin/magelec <pair_spin_magelec>`
* :doc:`spin/neel <pair_spin_neel>`
* :doc:`srp <pair_srp>`
* :doc:`srp/react <pair_srp>`
* :doc:`sw (giko) <pair_sw>`
* :doc:`sw/angle/table <pair_sw_angle_table>`
* :doc:`sw/mod (o) <pair_sw>`
@ -298,7 +289,6 @@ OPT.
* :doc:`vashishta (gko) <pair_vashishta>`
* :doc:`vashishta/table (o) <pair_vashishta>`
* :doc:`wf/cut <pair_wf_cut>`
* :doc:`ylz <pair_ylz>`
* :doc:`yukawa (gko) <pair_yukawa>`
* :doc:`yukawa/colloid (go) <pair_yukawa_colloid>`
* :doc:`zbl (gko) <pair_zbl>`

View File

@ -20,23 +20,10 @@ ways through the :doc:`compute chunk/atom <compute_chunk_atom>` command
and then averaging is done using :doc:`fix ave/chunk <fix_ave_chunk>`.
Please refer to the :doc:`chunk HOWTO <Howto_chunk>` section for an overview.
Box command
-----------
.. deprecated:: 22Dec2022
The *box* command has been removed and the LAMMPS code changed so it won't
be needed. If present, LAMMPS will ignore the command and print a warning.
Reset_ids, reset_atom_ids, reset_mol_ids commands
-------------------------------------------------
.. deprecated:: 22Dec2022
The *reset_ids*, *reset_atom_ids*, and *reset_mol_ids* commands have
been folded into the :doc:`reset_atoms <reset_atoms>` command. If
present, LAMMPS will replace the commands accordingly and print a
warning.
MEAM package
------------
@ -50,27 +37,6 @@ for some optimizations leading to better performance. The pair style
period the C++ version of MEAM was called USER-MEAMC so it could
coexist with the Fortran version.
Minimize style fire/old
-----------------------
.. deprecated:: 8Feb2023
Minimize style *fire/old* has been removed. Its functionality can be
reproduced with *fire* with specific options. Please see the
:doc:`min_modify command <min_modify>` documentation for details.
Pair style mesont/tpm, compute style mesont, atom style mesont
--------------------------------------------------------------
.. deprecated:: 8Feb2023
Pair style *mesont/tpm*, compute style *mesont*, and atom style
*mesont* have been removed from the :ref:`MESONT package <PKG-MESONT>`.
The same functionality is available through
:doc:`pair style mesocnt <pair_mesocnt>`,
:doc:`bond style mesocnt <bond_mesocnt>` and
:doc:`angle style mesocnt <angle_mesocnt>`.
REAX package
------------

View File

@ -23,4 +23,3 @@ of time and requests from the LAMMPS user community.
Classes
Developer_platform
Developer_utils
Developer_grid

View File

@ -1,53 +1,52 @@
Code design
-----------
This section explains some code design choices in LAMMPS with the goal
of helping developers write new code similar to the existing code.
Please see the section on :doc:`Requirements for contributed code
<Modify_style>` for more specific recommendations and guidelines. While
that section is organized more in the form of a checklist for code
contributors, the focus here is on overall code design strategy, choices
made between possible alternatives, and discussing some relevant C++
programming language constructs.
Historically, the basic design philosophy of the LAMMPS C++ code was a
"C with classes" style. The motivation was to make it easy to modify
LAMMPS for people without significant training in C++ programming. Data
structures and code constructs were used that resemble the previous
implementation(s) in Fortran. A contributing factor to this choice was
that at the time, C++ compilers were often not mature and some advanced
features contained bugs or did not function as the standard required.
There were also disagreements between compiler vendors as to how to
interpret the C++ standard documents.
However, C++ compilers and the C++ programming language have advanced
significantly. In 2020, the LAMMPS developers decided to require the
C++11 standard as the minimum C++ language standard for LAMMPS. Since
then, we have begun to replace C-style constructs with equivalent C++
functionality. This was taken either from the C++ standard library or
implemented as custom classes or functions. The goal is to improve
readability of the code and to increase code reuse through abstraction
of commonly used functionality.
.. note::
Please note that as of spring 2023 there is still a sizable chunk of
legacy code in LAMMPS that has not yet been refactored to reflect
these style conventions in full. LAMMPS has a large code base and
many contributors. There is also a hierarchy of precedence in which
the code is adapted. Highest priority has been the code in the
``src`` folder, followed by code in packages in order of their
popularity and complexity (simpler code gets adapted sooner), followed
by code in the ``lib`` folder. Source code that is downloaded from
external packages or libraries during compilation is not subject to
the conventions discussed here.
Object-oriented code
^^^^^^^^^^^^^^^^^^^^
LAMMPS is designed to be an object-oriented code. Each simulation is
represented by an instance of the LAMMPS class. When running in
parallel, each MPI process creates such an instance. This can be seen
in the ``main.cpp`` file where the core steps of running a LAMMPS
simulation are the following 3 lines of code:
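Paraphrased (a slightly simplified sketch of what ``main.cpp`` does,
omitting the MPI setup and error handling around it):

.. code-block:: c++

   LAMMPS *lammps = new LAMMPS(argc, argv, MPI_COMM_WORLD); // create a LAMMPS instance
   lammps->input->file();                                   // process the input script
   delete lammps;                                           // shut the instance down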
@ -68,14 +67,14 @@ other special features.
The basic LAMMPS class hierarchy which is created by the LAMMPS class
constructor is shown in :ref:`class-topology`. When input commands
are processed, additional class instances are created, or deleted, or
replaced. Likewise, specific member functions of specific classes are
called to trigger actions such as creating atoms, computing forces,
computing properties, time-propagating the system, or writing output.
Compositing and Inheritance
===========================
LAMMPS makes extensive use of the object-oriented programming (OOP)
principles of *compositing* and *inheritance*. Classes like the
``LAMMPS`` class are a **composite** containing pointers to instances
of other classes like ``Atom``, ``Comm``, ``Force``, ``Neighbor``,
@ -84,7 +83,7 @@ functionality by storing and manipulating data related to the
simulation and providing member functions that trigger certain
actions. Some of those classes like ``Force`` are themselves
composites, containing instances of classes describing different force
interactions. Similarly, the ``Modify`` class contains a list of
``Fix`` and ``Compute`` classes. If the input commands that
correspond to these classes include the word *style*, then LAMMPS
stores only a single instance of that class. E.g. *atom_style*,
@ -101,18 +100,19 @@ derived class variant was instantiated. In LAMMPS these derived
classes are often referred to as "styles", e.g. pair styles, fix
styles, atom styles and so on.
This is the origin of the flexibility of LAMMPS. For example, pair
styles implement a variety of different non-bonded interatomic
potential functions. All details for the implementation of a
potential are stored and executed in a single class.
As mentioned above, there can be multiple instances of classes derived
from the ``Fix`` or ``Compute`` base classes. They represent a
different facet of LAMMPS' flexibility, as they provide methods which
can be called at different points within a timestep, as explained in
`Developer_flow`. This allows the input script to tailor how a specific
simulation is run, what diagnostic computations are performed, and how
the output of those computations is further processed or output.
Additional code sharing is possible by creating derived classes from the
derived classes (e.g., to implement an accelerated version of a pair
@ -164,15 +164,15 @@ The difference in behavior of the ``normal()`` and the ``poly()`` member
functions is which of the two member functions is called when executing
`base1->call()` versus `base2->call()`. Without polymorphism, a
function within the base class can only call member functions within the
same scope: that is, ``Base::call()`` will always call
``Base::normal()``. But for the `base2->call()` case, the call of the
virtual member function will be dispatched to ``Derived::poly()``
instead. This mechanism results in calling functions that are within
the scope of the class that was used to *create* the instance, even if
they are assigned to a pointer for their base class. This is the
desired behavior, and this way LAMMPS can even use styles that are loaded
at runtime from a shared object file with the :doc:`plugin command
<plugin>`.
A special case of virtual functions are so-called pure functions. These
are virtual functions that are initialized to 0 in the class declaration
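A minimal illustration, reusing the ``Base`` class names from the
example above (a generic sketch, not copied from a LAMMPS header):

.. code-block:: c++

   class Base {
    public:
     virtual void normal();    // regular virtual function with an implementation
     virtual void poly() = 0;  // pure virtual function: no implementation here,
                               // so derived classes *must* implement poly()
   };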
@ -189,12 +189,12 @@ This has the effect that an instance of the base class cannot be
created and that derived classes **must** implement these functions.
Many of the functions listed with the various class styles in the
section :doc:`Modify` are pure functions. The motivation for this is
to define the interface or API of the functions, but defer their
implementation to the derived classes.
However, there are downsides to this. For example, calls to virtual
functions from within a constructor will *not* be in the scope of the
derived class, and thus it is good practice to either avoid calling them
or to provide an explicit scope such as ``Base::poly()`` or
``Derived::poly()``. Furthermore, any destructors in classes containing
virtual functions should be declared virtual too, so they will be
@ -208,8 +208,8 @@ dispatch.
that are intended to replace a virtual or pure function use the
``override`` property keyword. For the same reason, the use of
overloads or default arguments for virtual functions should be
avoided, as they lead to confusion over which function is supposed to
override which, and which arguments need to be declared.
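Continuing the ``Base``/``Derived`` sketch from above, the recommended
pattern looks like this:

.. code-block:: c++

   class Derived : public Base {
    public:
     // 'override' makes the compiler verify that a matching
     // virtual function exists in the base class
     void poly() override;
     // destructors of classes with virtual functions should be virtual, too
     ~Derived() override;
   };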
Style Factories
===============
@ -219,10 +219,10 @@ uses a programming pattern called `Factory`. Those are functions that
create an instance of a specific derived class, say ``PairLJCut`` and
return a pointer to the type of the common base class of that style,
``Pair`` in this case. To associate the factory function with the
style keyword, a ``std::map`` class is used with function pointers
indexed by their keyword (for example "lj/cut" for ``PairLJCut`` and
"morse" for ``PairMorse``). A couple of typedefs help keep the code
readable, and a template function is used to implement the actual
factory functions for the individual classes. Below is an example
of such a factory function from the ``Force`` class as declared in
``force.h`` and implemented in ``force.cpp``. The file ``style_pair.h``
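In outline, such a factory follows the pattern below (a condensed
sketch; the actual declarations in ``force.h`` and ``force.cpp`` differ
in details):

.. code-block:: c++

   typedef Pair *(*PairCreator)(LAMMPS *);
   typedef std::map<std::string, PairCreator> PairCreatorMap;

   // one template instantiation serves as factory function per pair style class
   template <typename T> static Pair *pair_creator(LAMMPS *lmp)
   {
     return new T(lmp);
   }

   // styles are then registered in the map under their keyword, e.g.:
   // (*pair_map)["lj/cut"] = &pair_creator<PairLJCut>;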
@ -279,26 +279,26 @@ from and writing to files and console instead of C++ "iostreams".
This is mainly motivated by better performance, better control over
formatting, and less effort to achieve specific formatting.
Since mixing "stdio" and "iostreams" can lead to unexpected behavior,
use of the latter is strongly discouraged. Output to the screen should
*not* use the predefined ``stdout`` FILE pointer, but rather the
``screen`` and ``logfile`` FILE pointers managed by the LAMMPS class.
Furthermore, output should generally only be done by MPI rank 0
(``comm->me == 0``). Output that is sent to both ``screen`` and
``logfile`` should use the :cpp:func:`utils::logmesg() convenience
function <LAMMPS_NS::utils::logmesg>`.
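For example (an illustrative call with made-up message text and
variable name):

.. code-block:: c++

   // write to both screen and logfile, from MPI rank 0 only
   if (comm->me == 0)
     utils::logmesg(lmp, "Created {} atoms\n", natoms);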
We discourage the use of stringstreams because the bundled {fmt} library
and the customized tokenizer classes provide the same functionality in a
cleaner way with better performance. This also helps maintain a
consistent programming syntax with code from many different
contributors.
Formatting with the {fmt} library
===================================
The LAMMPS source code includes a copy of the `{fmt} library
<https://fmt.dev>`_, which is preferred over formatting with the
"printf()" family of functions. The primary reason is that it allows
a typesafe default format for any type of supported data. This is
particularly useful for formatting integers of a given size (32-bit or
@ -313,16 +313,17 @@ been included into the C++20 language standard, so changes to adopt it
are future-proof.
Formatted strings are frequently created by calling the
``fmt::format()`` function, which will return a string as a
``std::string`` class instance. In contrast to the ``%`` placeholder in
``printf()``, the {fmt} library uses ``{}`` to embed format descriptors.
In the simplest case, no additional characters are needed, as {fmt} will
choose the default format based on the data type of the argument.
Otherwise, the ``fmt::print()`` function may be used instead of
``printf()`` or ``fprintf()``. In addition, several LAMMPS output
functions that originally accepted a single string as an argument have
been overloaded to accept a format string with optional arguments as
well (e.g., ``Error::all()``, ``Error::one()``, ``utils::logmesg()``).
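An illustrative call (the variable names are hypothetical):

.. code-block:: c++

   // each {} picks a typesafe default format based on the argument type
   std::string mesg = fmt::format("step {}: total energy = {:.6g}\n", step, etotal);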
Summary of the {fmt} format syntax
==================================
@ -331,11 +332,10 @@ The syntax of the format string is "{[<argument id>][:<format spec>]}",
where either the argument id or the format spec (separated by a colon
':') is optional. The argument id is usually a number starting from 0
that is the index to the arguments following the format string. By
default, these are assigned in order (i.e. 0, 1, 2, 3, 4 etc.). The
most common case for using argument id would be to use the same argument
in multiple places in the format string without having to provide it as
an argument multiple times. The argument id is rarely used in the LAMMPS
source code.
More common is the use of a format specifier, which starts with a colon.
This may optionally be followed by a fill character (default is ' '). If
@ -347,19 +347,18 @@ width, which may be followed by a dot '.' and a precision for floating
point numbers. The final character in the format string would be an
indicator for the "presentation", i.e. 'd' for decimal presentation of
integers, 'x' for hexadecimal, 'o' for octal, 'c' for character etc.
This mostly follows the "printf()" scheme, but without requiring an
additional length parameter to distinguish between different integer
widths. The {fmt} library will detect those and adapt the formatting
accordingly. For floating point numbers there are, correspondingly, 'g'
for generic presentation, 'e' for exponential presentation, and 'f' for
fixed point presentation.
The format string "{:8}" would thus represent *any* type argument and be
replaced by at least 8 characters; "{:<8}" would do this as left
aligned, "{:^8}" as centered, "{:>8}" as right aligned. If a specific
presentation is selected, the argument type must be compatible or else
the {fmt} formatting code will throw an exception. Some format string
examples are given below:
.. code-block:: c++
@ -393,12 +392,12 @@ documentation <https://fmt.dev/latest/syntax.html>`_ website.
Memory management
^^^^^^^^^^^^^^^^^
Dynamical allocation of small data and objects can be done with the C++
commands "new" and "delete/delete[]". Large data should use the member
functions of the ``Memory`` class, most commonly, ``Memory::create()``,
``Memory::grow()``, and ``Memory::destroy()``, which provide variants
for vectors, 2d arrays, 3d arrays, etc. These can also be used for
small data.
The use of ``malloc()``, ``calloc()``, ``realloc()`` and ``free()``
directly is strongly discouraged. To simplify adapting legacy code
@ -409,24 +408,26 @@ perform additional error checks for safety.
Use of these custom memory allocation functions is motivated by the
following considerations:
- Memory allocation failures on *any* MPI rank during a parallel run
will trigger an immediate abort of the entire parallel calculation.
- A failing "new" will trigger an exception, which is also captured by
LAMMPS and triggers a global abort.
- Allocation of multidimensional arrays will be done in a C compatible
fashion, but such that the storage of the actual data is stored in one
large contiguous block. Thus, when MPI communication is needed,
the data can be communicated directly (similar to Fortran arrays).
- The "destroy()" and "sfree()" functions may safely be called on NULL
pointers.
- The "destroy()" functions will nullify the pointer variables, thus
making "use after free" errors easy to detect.
- It is possible to use a larger than default memory alignment (not on
- the "destroy()" and "sfree()" functions may safely be called on NULL
pointers
- the "destroy()" functions will nullify the pointer variables making
"use after free" errors easy to detect
- it is possible to use a larger than default memory alignment (not on
all operating systems, since the allocated storage pointers must be
compatible with ``free()`` for technical reasons).
In the practical implementation of code this means that any pointer
variables that are class members should be initialized to a ``nullptr``
value in their respective constructors. That way, it is safe to call
``Memory::destroy()`` or ``delete[]`` on them before *any* allocation
outside the constructor. This helps prevent memory leaks.
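A compact sketch of this pattern (the member, size, and label names are
made up for illustration):

.. code-block:: c++

   double **data = nullptr;   // class member, set to nullptr in the constructor

   memory->create(data, nmax, 3, "mystyle:data");   // allocate an nmax x 3 array
   memory->grow(data, 2*nmax, 3, "mystyle:data");   // enlarge the allocation
   memory->destroy(data);     // free the memory and reset the pointer to nullptr;
                              // safe to call even while data is still nullptr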

View File

@ -14,8 +14,8 @@ Owned and ghost atoms
As described on the :doc:`parallel partitioning algorithms
<Developer_par_part>` page, LAMMPS spatially decomposes the simulation
domain, either in a *brick* or *tiled* manner. Each processor (MPI
task) owns atoms within its subdomain and additionally stores ghost
atoms within a cutoff distance of its subdomain.
Forward and reverse communication
=================================
@ -28,7 +28,7 @@ The need to do this communication arises when data from the owned atoms
is updated (e.g. their positions) and this updated information needs to
be **copied** to the corresponding ghost atoms.
And second, *reverse communication*, which sends ghost atom information
from each processor to the owning processor to **accumulate** (sum)
the values with the corresponding owned atoms. The need for this
arises when data is computed and also stored with ghost atoms
@ -58,7 +58,7 @@ embedded-atom method (EAM) which compute intermediate values in the
first part of the compute() function that need to be stored by both
owned and ghost atoms for the second part of the force computation.
The *Comm* class methods perform the MPI communication for buffers of
per-atom data. They "call back" to the *Pair* class, so it can *pack*
per-atom data. They "call back" to the *Pair* class so it can *pack*
or *unpack* the buffer with data the *Pair* class owns. There are 4
such methods that the *Pair* class must define, assuming it uses both
forward and reverse communication:
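A sketch of these 4 methods with their typical signatures (as declared
in pair.h of recent LAMMPS versions; check the actual header before
relying on them):
.. code-block:: c++
int pack_forward_comm(int n, int *list, double *buf, int pbc_flag, int *pbc);
void unpack_forward_comm(int n, int first, double *buf);
int pack_reverse_comm(int n, int first, double *buf);
void unpack_reverse_comm(int n, int *list, double *buf);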
@ -70,7 +70,7 @@ forward and reverse communication:
The arguments to these methods include the buffer and a list of atoms
to pack or unpack. The *Pair* class also must set the *comm_forward*
and *comm_reverse* variables, which store the number of values stored
in the communication buffers for each operation. This means, if
desired, it can choose to store multiple per-atom values in the
buffer, and they will be communicated together to minimize
@ -81,11 +81,11 @@ containing ``double`` values. To correctly store integers that may be
The *Fix*, *Compute*, and *Dump* classes can also invoke the same kind
of forward and reverse communication operations using the same *Comm*
class methods. Likewise, the same pack/unpack methods and
comm_forward/comm_reverse variables must be defined by the calling
*Fix*, *Compute*, or *Dump* class.
For *Fix* classes, there is an optional second argument to the
*forward_comm()* and *reverse_comm()* call which can be used when the
fix performs multiple modes of communication, with different numbers
of values per atom. The fix should set the *comm_forward* and
@ -150,7 +150,7 @@ latter case, when the *ring* operation is complete, each processor can
examine its original buffer to extract modified values.
Note that the *ring* operation is similar to an MPI_Alltoall()
operation, where every processor effectively sends and receives data to
every other processor. The difference is that the *ring* operation
does it one step at a time, so the total volume of data does not need
to be stored by every processor. However, the *ring* operation is
@ -184,8 +184,8 @@ The *exchange_data()* method triggers the communication to be
performed. Each processor provides the vector of *N* datums to send,
and the size of each datum. All datums must be the same size.
The *create_atom()* and *exchange_atom()* methods are similar, except
that the size of each datum can be different. Typically, this is used
to communicate atoms, each with a variable amount of per-atom data, to
other processors.

View File

@ -45,9 +45,9 @@ other methods in the class.
zero before each timestep, so that forces (torques, etc) can be
accumulated.
Now for the ``Verlet::run()`` method. Its basic structure in high-level
pseudocode is shown below. In the actual code in ``src/verlet.cpp``
some of these operations are conditionally invoked.
.. code-block:: python
@ -105,17 +105,17 @@ need it. These flags are passed to the various methods that compute
particle interactions, so that they either compute and tally the
corresponding data or can skip the extra calculations if the energy and
virial are not needed. See the comments for the ``Integrate::ev_set()``
method, which document the flag values.
At various points of the timestep, fixes are invoked,
e.g. ``fix->initial_integrate()``. In the code, this is actually done
via the Modify class, which stores all the Fix objects and lists of which
should be invoked at what point in the timestep. Fixes are the LAMMPS
mechanism for tailoring the operations of a timestep for a particular
simulation. As described elsewhere, each fix has one or more methods,
each of which is invoked at a specific stage of the timestep, as shown
in the timestep pseudocode. All the active fixes defined in an input
script that are flagged to have an ``initial_integrate()`` method are
invoked at the beginning of each timestep. Examples are :doc:`fix nve
<fix_nve>` or :doc:`fix nvt or fix npt <fix_nh>` which perform the
start-of-timestep velocity-Verlet integration operations to update
@ -131,15 +131,15 @@ can be changed using the :doc:`neigh_modify every/delay/check
<neigh_modify>` command. If not, coordinates of ghost atoms are
acquired by each processor via the ``forward_comm()`` method of the Comm
class. If neighbor lists need to be built, several operations within
the inner if clause of the pseudocode are first invoked. The
``pre_exchange()`` method of any defined fixes is invoked first.
Typically, this inserts or deletes particles from the system.
Periodic boundary conditions are then applied by the Domain class via
its ``pbc()`` method to remap particles that have moved outside the
simulation box back into the box. Note that this is not done every
timestep, but only when neighbor lists are rebuilt. This is so that
each processor's subdomain will have consistent (nearby) atom
coordinates for its owned and ghost atoms. It is also why dumped atom
coordinates may be slightly outside the simulation box if not dumped
on a step where the neighbor lists are rebuilt.
@ -148,15 +148,15 @@ The box boundaries are then reset (if needed) via the ``reset_box()``
method of the Domain class, e.g. if box boundaries are shrink-wrapped to
current particle coordinates. A change in the box size or shape
requires internal information for communicating ghost atoms (Comm class)
and neighbor list bins (Neighbor class) to be updated. The ``setup()``
method of the Comm class and ``setup_bins()`` method of the Neighbor
class perform the update.
The code is now ready to migrate atoms that have left a processor's
geometric subdomain to new processors. The ``exchange()`` method of
the Comm class performs this operation. The ``borders()`` method of the
Comm class then identifies ghost atoms surrounding each processor's
subdomain and communicates ghost atom information to neighboring
processors. It does this by looping over all the atoms owned by a
processor to make lists of those to send to each neighbor processor. On
subsequent timesteps, the lists are used by the ``Comm::forward_comm()``
@ -217,21 +217,20 @@ file, and restart files. See the :doc:`thermo_style <thermo_style>`,
:doc:`dump <dump>`, and :doc:`restart <restart>` commands for more
details.
The flow of control during energy minimization iterations is similar to
that of a molecular dynamics timestep. Forces are computed, neighbor
lists are built as needed, atoms migrate to new processors, and atom
coordinates and forces are communicated to neighboring processors. The
only difference is what Fix class operations are invoked when. Only a
subset of LAMMPS fixes are useful during energy minimization, as
explained in their individual doc pages. The relevant Fix class methods
are ``min_pre_exchange()``, ``min_pre_force()``, and
``min_post_force()``. Each fix is invoked at the appropriate place
within the minimization iteration. For example, the
``min_post_force()`` method is analogous to the ``post_force()`` method
for dynamics; it is used to alter or constrain forces on each atom,
which affects the minimization procedure.
After all iterations are completed, there is a ``cleanup`` step which
calls the ``post_run()`` method of fixes to perform operations only required
at the end of a calculation (like freeing temporary storage or creating
final outputs).

View File

@ -1,845 +0,0 @@
Use of distributed grids within style classes
---------------------------------------------
.. versionadded:: 22Dec2022
The LAMMPS source code includes two classes which facilitate the
creation and use of distributed grids. These are the Grid2d and
Grid3d classes in the src/grid2d.cpp/.h and src/grid3d.cpp/.h files
respectively. As the names imply, they are used for 2d or 3d
simulations, as defined by the :doc:`dimension <dimension>` command.
The :doc:`Howto_grid <Howto_grid>` page gives an overview of how
distributed grids are defined from a user perspective, lists LAMMPS
commands which use them, and explains how grid cell data is referenced
from an input script. Please read that page first as it motivates the
coding details discussed here.
This doc page is for users who wish to write new styles (input script
commands) which use distributed grids. There are a variety of
material models and analysis methods which use atoms (or
coarse-grained particles) and grids in tandem.
A *distributed* grid means each processor owns a subset of the grid
cells. In LAMMPS, the subset for each processor will be a sub-block
of grid cells with low and high index bounds in each dimension of the
grid. The union of the sub-blocks across all processors is the global
grid.
More specifically, a grid point is defined for each cell (by default
the center point), and a processor owns a grid cell if its point is
within the processor's spatial subdomain. The union of processor
subdomains is the global simulation box. If a grid point is on the
boundary of two subdomains, the lower processor owns the grid cell. A
processor may also store copies of ghost cells which surround its
owned cells.
----------
Style commands
^^^^^^^^^^^^^^
Style commands which can define and use distributed grids include the
:doc:`compute <compute>`, :doc:`fix <fix>`, :doc:`pair <pair_style>`,
and :doc:`kspace <kspace_style>` styles. If you wish grid cell data
to persist across timesteps, then use a fix. If you wish grid cell
data to be accessible by other commands, then use a fix or compute.
Currently in LAMMPS, the :doc:`pair_style amoeba <pair_amoeba>`,
:doc:`kspace_style pppm <kspace_style>`, and :doc:`kspace_style msm
<kspace_style>` commands use distributed grids but do not require
either of these capabilities; they thus create and use distributed
grids internally. Note that a pair style which needs grid cell data
to persist could be coded to work in tandem with a fix style which
provides that capability.
The *size* of a grid is specified by the number of grid cells in each
dimension of the simulation domain. In any dimension the size can be
any value >= 1. Thus a 10x10x1 grid for a 3d simulation is
effectively a 2d grid, where each grid cell spans the entire
z-dimension. A 1x100x1 grid for a 3d simulation is effectively a 1d
grid, where grid cells are a series of thin xz slabs in the
y-dimension. It is even possible to define a 1x1x1 3d grid, though it
may be inefficient to use it in a computational sense.
Note that the choice of grid size is independent of the number of
processors or their layout in a grid of processor subdomains which
overlays the simulation domain. Depending on the distributed grid
size, a single processor may own many thousands of grid cells or none
at all.
A command can define multiple grids, each of a different size. Each
grid is an instantiation of the Grid2d or Grid3d class.
The command also defines what data it will store for each grid it
creates, and it allocates the multidimensional array(s) needed to
store the data. No grid cell data is stored within the Grid2d or
Grid3d classes.
If a single value per grid cell is needed, the data array will have
the same dimension as the grid, i.e. a 2d array for a 2d grid,
likewise for 3d. If multiple values per grid cell are needed, the
data array will have one more dimension than the grid, i.e. a 3d array
for a 2d grid, or 4d array for a 3d grid. A command can choose to
define multiple data arrays for each grid it defines.
----------
Grid data allocation and access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The simplest way for a command to allocate and access grid cell data
is to use the *create_offset()* methods provided by the Memory class.
Arguments for these methods can be values returned by the
*setup_grid()* method (described below), which define the extent of
the grid cells (owned+ghost) the processor owns. These 4 methods
allocate memory for 2d (first two) and 3d (second two) grid data. The
two methods that end in "_one" allocate an array which stores a single
value per grid cell. The two that end in "_multi" allocate an array
which stores *Nvalues* per grid cell.
.. code-block:: c++
// single value per cell for a 2d grid = 2d array
memory->create2d_offset(data2d_one, nylo_out, nyhi_out,
nxlo_out, nxhi_out, "data2d_one");
// nvalues per cell for a 2d grid = 3d array
memory->create3d_offset_last(data2d_multi, nylo_out, nyhi_out,
nxlo_out, nxhi_out, nvalues, "data2d_multi");
// single value per cell for a 3d grid = 3d array
memory->create3d_offset(data3d_one, nzlo_out, nzhi_out, nylo_out,
nyhi_out, nxlo_out, nxhi_out, "data3d_one");
// nvalues per cell for a 3d grid = 4d array
memory->create4d_offset_last(data3d_multi, nzlo_out, nzhi_out, nylo_out,
nyhi_out, nxlo_out, nxhi_out, nvalues,
"data3d_multi");
Note that these multidimensional arrays are allocated as contiguous
chunks of memory where the x-index of the grid varies fastest, then y,
and the z-index slowest. For multiple values per grid cell, the
Nvalues are contiguous, so their index varies even faster than the
x-index.
The key point is that the "offset" methods create arrays which are
indexed by the range of indices which are the bounds of the sub-block
of the global grid owned by this processor. This means loops like
these can be written in the caller code to loop over owned grid cells,
where the "i" loop bounds are the range of owned grid cells for the
processor. These are the bounds returned by the *setup_grid()*
method:
.. code-block:: c++
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
data2d_one[iy][ix] = 0.0;
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
for (int m = 0; m < nvalues; m++)
data2d_multi[iy][ix][m] = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
data3d_one[iz][iy][ix] = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
for (int m = 0; m < nvalues; m++)
data3d_multi[iz][iy][ix][m] = 0.0;
Simply replacing the "i" bounds with "o" bounds, also returned by the
*setup_grid()* method, would alter this code to loop over owned+ghost
cells (the entire allocated grid).
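For example, the first loop above becomes the following sketch when
iterating over the entire allocated grid:
.. code-block:: c++
// loop over owned+ghost grid cells using the "o" bounds
for (int iy = oylo; iy <= oyhi; iy++)
  for (int ix = oxlo; ix <= oxhi; ix++)
    data2d_one[iy][ix] = 0.0;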
----------
Grid class constructors
^^^^^^^^^^^^^^^^^^^^^^^
The following subsections describe the public methods of the Grid3d
class which a style command can invoke. The Grid2d methods are
similar; simply remove arguments which refer to the z-dimension.
There are 2 constructors which can be used. They differ in the extra
i/o xyz lo/hi arguments:
.. code-block:: c++
Grid3d(class LAMMPS *lmp, MPI_Comm gcomm, int gnx, int gny, int gnz)
Grid3d(class LAMMPS *lmp, MPI_Comm gcomm, int gnx, int gny, int gnz,
int ixlo, int ixhi, int iylo, int iyhi, int izlo, int izhi,
int oxlo, int oxhi, int oylo, int oyhi, int ozlo, int ozhi)
Both constructors take the LAMMPS instance pointer and a communicator
over which the grid will be distributed. Typically this is the
*world* communicator the LAMMPS instance is using. The
:doc:`kspace_style msm <kspace_style>` command creates a series of
grids, each of different size, which are partitioned across different
sub-communicators of processors. Both constructors are also passed
the global grid size: *gnx* by *gny* by *gnz*.
The first constructor is used when the caller wants the Grid class to
partition the global grid across processors; the Grid class defines
which grid cells each processor owns and also which it stores as ghost
cells. A subsequent call to *setup_grid()*, discussed below, returns
this info to the caller.
The second constructor allows the caller to define the extent of owned
and ghost cells, and pass them to the Grid class. The 6 arguments
which start with "i" are the inclusive lower and upper index bounds of
the owned (inner) grid cells this processor owns in each of the 3
dimensions within the global grid. Owned grid cells are indexed from
0 to N-1 in each dimension.
The 6 arguments which start with "o" are the inclusive bounds of the
owned+ghost (outer) grid cells it stores. If the ghost cells are on
the other side of a periodic boundary, then these indices may be < 0
or >= N in any dimension, so that oxlo <= ixlo and oxhi >= ixhi is
always the case.
For example, if Nx = 100, then a processor might pass ixlo=50,
ixhi=60, oxlo=48, oxhi=62 to the Grid class. Or ixlo=0, ixhi=10,
oxlo=-2, oxhi=13. If a processor owns no grid cells in a dimension,
then the ihi value should be specified as one less than the ilo value.
Note that the only reason to use the second constructor is if the
logic for assigning ghost cells is too complex for the Grid class to
compute using the various set() methods described next. Currently
only the kspace_style pppm/electrode and kspace_style msm commands use
the second constructor.
----------
Grid class set methods
^^^^^^^^^^^^^^^^^^^^^^
The following methods affect how the Grid class computes which owned
and ghost cells are assigned to each processor. *Set_shift_grid()* is
the only method which influences owned cell assignment; all the rest
influence ghost cell assignment. These methods are only used with the
first constructor; they are ignored if the second constructor is used.
These methods must be called before the *setup_grid()* method is
invoked, because they influence its operation.
.. code-block:: c++
void set_shift_grid(double shift);
void set_distance(double distance);
void set_stencil_atom(int lo, int hi);
void set_shift_atom(double shift_lo, double shift_hi);
void set_stencil_grid(int lo, int hi);
void set_zfactor(double factor);
Processors own a grid cell if a point within the grid cell is inside
the processor's subdomain. By default this is the center point of the
grid cell. The *set_shift_grid()* method can change this. The *shift*
argument is a value from 0.0 to 1.0 (inclusive) which is the offset of
the point within the grid cell in each dimension. The default is 0.5
for the center of the cell. A value of 0.0 is the lower left corner
point; a value of 1.0 is the upper right corner point. There is
typically no need to change the default as it is optimal for
minimizing the number of ghost cells needed.
If a processor maps its particles to grid cells, it needs to allow for
its particles being outside its subdomain between reneighboring. The
*distance* argument of the *set_distance()* method sets the furthest
distance outside a processor's subdomain which a particle can move.
Typically this is half the neighbor skin distance, assuming
reneighboring is done appropriately. This distance is used in
determining how many ghost cells a processor needs to store to enable
its particles to be mapped to grid cells. The default value is 0.0.
Some commands, like the :doc:`kspace_style pppm <kspace_style>`
command, map values (charge in the case of PPPM) to a stencil of grid
cells beyond the grid cell the particle is in. The stencil extent may
be different in the low and high directions. The *set_stencil_atom()*
method defines the maximum values of those 2 extents, assumed to be
the same in each of the 3 dimensions. Both the lo and hi values are
specified as positive integers. The default values are both 0.
Some commands, like the :doc:`kspace_style pppm <kspace_style>`
command, shift the position of an atom when mapping it to a grid cell,
based on the size of the stencil used to map values to the grid
(charge in the case of PPPM). The lo and hi arguments of the
*set_shift_atom()* method are the minimum shift in the low direction
and the maximum shift in the high direction, assumed to be the same in
each of the 3 dimensions. The shifts should be fractions of a grid
cell size with values between 0.0 and 1.0 inclusive. The default
values are both 0.0. See the src/pppm.cpp file for examples of these
lo/hi values for regular and staggered grids.
Some commands, like the :doc:`fix ttm/grid <fix_ttm>` command, perform
finite difference kinds of operations on the grid, to diffuse electron
heat in the case of the two-temperature model (TTM). This operation
uses ghost grid values beyond the owned grid values the processor
updates. The *set_stencil_grid()* method defines the extent of this
stencil in both directions, assumed to be the same in each of the 3
dimensions. Both the lo and hi values are specified as positive
integers. The default values are both 0.
The kspace_style pppm commands allow a grid to be defined which
overlays a volume which extends beyond the simulation box in the z
dimension. This is for the purpose of modeling a 2d-periodic slab
(non-periodic in z) as if it were a larger 3d periodic system,
extended (with empty space) in the z dimension. The
:doc:`kspace_modify slab <kspace_modify>` command is used to specify
the ratio of the larger volume to the simulation volume; a volume
ratio of ~3 is typical. For this kind of model, the PPPM caller sets
the global grid size *gnz* ~3x larger than it would be otherwise.
This same ratio is passed by the PPPM caller as the *factor* argument
to the Grid class via the *set_zfactor()* method (*set_yfactor()* for
2d grids). The Grid class will then assign ownership of the 1/3 of
grid cells that overlay the simulation box to the processors which
also overlay the simulation box. The remaining 2/3 of the grid cells
are assigned to processors whose subdomains are adjacent to the upper
z boundary of the simulation box.
----------
Grid class setup_grid method
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *setup_grid()* method is called after the first constructor
(above) to partition the grid across processors, which determines
which grid cells each processor owns. It also calculates how many
ghost grid cells in each dimension and each direction each processor
needs to store.
Note that this method is NOT called if the second constructor above is
used. In that case, the caller assigns owned and ghost cells to each
processor.
Also note that this method must be invoked after any of the ``set_*()`` methods have
been used, since they can influence the assignment of owned and ghost
cells.
.. code-block:: c++
void setup_grid(int &ixlo, int &ixhi, int &iylo, int &iyhi, int &izlo, int &izhi,
int &oxlo, int &oxhi, int &oylo, int &oyhi, int &ozlo, int &ozhi)
The 6 return arguments which start with "i" are the inclusive lower
and upper index bounds of the owned (inner) grid cells this processor
owns in each of the 3 dimensions within the global grid. Owned grid
cells are indexed from 0 to N-1 in each dimension.
The 6 return arguments which start with "o" are the inclusive bounds of
the owned+ghost cells it owns. If the ghost cells are on the other
side of a periodic boundary, then these indices may be < 0 or >= N in
any dimension, so that oxlo <= ixlo and ixhi >= ixhi is always the
case.
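Putting the constructor, the set() methods, and *setup_grid()*
together, a minimal usage sketch looks like this (the variable names
are illustrative, not taken from a specific LAMMPS class):
.. code-block:: c++
// create a 3d grid over the world communicator; Grid3d partitions it
grid3d = new Grid3d(lmp, world, nxgrid, nygrid, nzgrid);
// allow for particles drifting up to half the neighbor skin distance
grid3d->set_distance(0.5 * neighbor->skin);
grid3d->setup_grid(nxlo_in, nxhi_in, nylo_in, nyhi_in, nzlo_in, nzhi_in,
                   nxlo_out, nxhi_out, nylo_out, nyhi_out, nzlo_out, nzhi_out);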
----------
More grid class set methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following 2 methods can be used to override settings made by the
constructors above. If used, they must be called before the
*setup_comm()* method is invoked, since it uses the settings that
these methods override. In LAMMPS these methods are called by the
:doc:`kspace_style msm <kspace_style>` command for the grids it
instantiates using the 2nd constructor above.
.. code-block:: c++
void set_proc_neighs(int pxlo, int pxhi, int pylo, int pyhi, int pzlo, int pzhi)
void set_caller_grid(int fxlo, int fxhi, int fylo, int fyhi, int fzlo, int fzhi)
The *set_proc_neighs()* method sets the processor IDs of the 6
neighboring processors for each processor. Normally these would match
the processor grid neighbors which LAMMPS creates to overlay the
simulation box (the default). However, MSM excludes non-participating
processors from coarse grid communication when fewer processors are
used. This method allows MSM to override the default values.
The *set_caller_grid()* method specifies the size of the data arrays the
caller allocates. Normally these would match the extent of the ghost
grid cells (the default). However, the MSM caller allocates a larger
data array (more ghost cells) for its finest-level grid, for use in
other operations besides owned/ghost cell communication. This method
allows MSM to override the default values.
----------
Grid class get methods
^^^^^^^^^^^^^^^^^^^^^^
The following methods allow the caller to query the settings for a
specific grid, whether it created the grid or another command created
it.
.. code-block:: c++
void get_size(int &nxgrid, int &nygrid, int &nzgrid);
void get_bounds_owned(int &xlo, int &xhi, int &ylo, int &yhi, int &zlo, int &zhi)
void get_bounds_ghost(int &xlo, int &xhi, int &ylo, int &yhi, int &zlo, int &zhi)
The *get_size()* method returns the size of the global grid in each dimension.
The *get_bounds_owned()* method returns the inclusive index bounds of
the grid cells this processor owns. The values range from 0 to N-1 in
each dimension. These values are the same as the "i" values returned
by *setup_grid()*.
The *get_bounds_ghost()* method returns the inclusive index bounds of
the owned+ghost grid cells this processor stores. The owned cell
indices range from 0 to N-1, so these indices may be less than 0 or
greater than or equal to N in each dimension. These values are the
same as the "o" values returned by *setup_grid()*.
----------
Grid class owned/ghost communication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If needed by the command, the following methods set up and perform
communication of grid data to/from neighboring processors. The
*forward_comm()* method sends owned grid cell data to the
corresponding ghost grid cells on other processors. The
*reverse_comm()* method sends ghost grid cell data to the
corresponding owned grid cells on another processor. The caller can
choose to sum ghost grid cell data to the owned grid cell or simply
copy it.
.. code-block:: c++
void setup_comm(int &nbuf1, int &nbuf2)
void forward_comm(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype);
void reverse_comm(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype)
int ghost_adjacent();
The *setup_comm()* method must be called once before performing
*forward* or *reverse* communication (which can then be performed as
many times as needed). It returns two integers, which should be used
to allocate two buffers.
The *nbuf1* and *nbuf2* values are the number of grid cells whose data
will be stored in two buffers by the Grid class when *forward* or
*reverse* communication is performed. The caller should thus allocate
them to a size large enough to hold all the data used in any single
forward or reverse communication operation it performs. Note that the
caller may allocate and communicate multiple data arrays for a grid it
instantiates. This size includes the bytes needed for the data type
of the grid data it stores, e.g. double precision values.
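A sketch of the buffer allocation this implies, using the Memory class
(*grid_buf1* and *grid_buf2* are hypothetical member names of type
``double *``):
.. code-block:: c++
int nbuf1, nbuf2;
grid3d->setup_comm(nbuf1, nbuf2);
// size for the largest communication: nper values per grid cell
memory->create(grid_buf1, nbuf1*nper, "example:grid_buf1");
memory->create(grid_buf2, nbuf2*nper, "example:grid_buf2");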
The *forward_comm()* and *reverse_comm()* methods send grid cell data
from owned to ghost cells, or ghost to owned cells, respectively, as
described above. The *caller* argument should be one of these values
-- Grid3d::COMPUTE, Grid3d::FIX, Grid3d::KSPACE, Grid3d::PAIR --
depending on the style of the caller class. The *ptr* argument is the
"this" pointer to the caller class. These 2 arguments are used to
call back to pack()/unpack() functions in the caller class, as
explained below.
The *which* argument is a flag the caller can set which is passed to
the caller's pack()/unpack() methods. This allows a single callback
method to pack/unpack data for several different flavors of
forward/reverse communication, e.g. operating on different grids or
grid data.
The *nper* argument is the number of values per grid cell to be
communicated. The *nbyte* argument is the number of bytes per value,
e.g. 8 for double-precision values. The *buf1* and *buf2* arguments
are the two allocated buffers described above. So long as they are
allocated for the maximum size communication, they can be re-used for
any *forward_comm()/reverse_comm()* call. The *datatype* argument is
the MPI_Datatype setting, which should match the buffer allocation and
the *nbyte* argument. E.g. MPI_DOUBLE for buffers storing double
precision values.
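For example, a fix communicating a single double-precision value per
grid cell might invoke (a sketch, re-using the hypothetical buffers
from above):
.. code-block:: c++
grid3d->forward_comm(Grid3d::FIX, this, 0, 1, sizeof(double),
                     grid_buf1, grid_buf2, MPI_DOUBLE);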
To use the *forward_comm()* method, the caller must provide two
callback functions; likewise for use of the *reverse_comm()* method.
These are the 4 functions; their arguments are all the same.
.. code-block:: c++
void pack_forward_grid(int which, void *vbuf, int nlist, int *list);
void unpack_forward_grid(int which, void *vbuf, int nlist, int *list);
void pack_reverse_grid(int which, void *vbuf, int nlist, int *list);
void unpack_reverse_grid(int which, void *vbuf, int nlist, int *list);
The *which* argument is set to the *which* value of the
*forward_comm()* or *reverse_comm()* calls. It allows the pack/unpack
function to select what data values to pack/unpack. *Vbuf* is the
buffer to pack/unpack the data to/from. It is a void pointer so that
the caller can cast it to whatever data type it chooses, e.g. double
precision values. *Nlist* is the number of grid cells to pack/unpack
and *list* is a vector (nlist in length) of offsets to where the data
for each grid cell resides in the caller's data arrays, which is best
illustrated with an example from the src/EXTRA-FIX/fix_ttm_grid.cpp
class which stores the scalar electron temperature for a 3d system in a
3d grid (one value per grid cell):
.. code-block:: c++
void FixTTMGrid::pack_forward_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *src = &T_electron[nzlo_out][nylo_out][nxlo_out];
  for (int i = 0; i < nlist; i++) buf[i] = src[list[i]];
}
In this case, the *which* argument is not used, *vbuf* points to a
buffer of doubles, and the electron temperature is stored by the
FixTTMGrid class in a 3d array of owned+ghost cells called T_electron.
That array is allocated by the *memory->create3d_offset()* method
described above so that the first grid cell it stores is indexed as
T_electron[nzlo_out][nylo_out][nxlo_out]. The *nlist* values in
*list* are integer offsets from that first grid cell. Setting *src*
to the address of the first cell allows those offsets to be used to
access the temperatures to pack into the buffer.
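The matching unpack callback mirrors the pack operation; a sketch
modeled on the FixTTMGrid example would be:
.. code-block:: c++
void FixTTMGrid::unpack_forward_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *dest = &T_electron[nzlo_out][nylo_out][nxlo_out];
  // copy (not sum): forward communication overwrites ghost cell values
  for (int i = 0; i < nlist; i++) dest[list[i]] = buf[i];
}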
Here is a similar portion of code from the src/fix_ave_grid.cpp class
which can store two kinds of data, a scalar count of atoms in a grid
cell, and one or more grid-cell-averaged atom properties. The code
from its *unpack_reverse_grid()* function for 2d grids and multiple
per-atom properties per grid cell (*nvalues*) is shown here:
.. code-block:: c++
void FixAveGrid::unpack_reverse_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *count, *data, *values;
  int i, j, m;

  count = &count2d[nylo_out][nxlo_out];
  data = &array2d[nylo_out][nxlo_out][0];
  m = 0;
  for (i = 0; i < nlist; i++) {
    count[list[i]] += buf[m++];
    values = &data[nvalues*list[i]];
    for (j = 0; j < nvalues; j++)
      values[j] += buf[m++];
  }
}
Both the count and the multiple values per grid cell are communicated
in *vbuf*. Note that *data* is now a pointer to the first value in
the first grid cell. And *values* points to where the values for grid
cell *list[i]* begin within *data*, calculated by multiplying
*nvalues* by *list[i]*. Finally, because this is reverse
communication, the communicated buffer values are summed to the caller
values.
The *ghost_adjacent()* method returns a 1 if every processor can
perform the necessary owned/ghost communication with only its nearest
neighbor processors (4 in 2d, 6 in 3d). It returns a 0 if any
processor's ghost cells extend further than nearest neighbor
processors.
This can be checked by callers who have the option to change the
global grid size to ensure more efficient nearest-neighbor-only
communication if they wish. In this case, they instantiate a grid of
a given size (resolution), then invoke *setup_comm()* followed by
*ghost_adjacent()*. If the ghost cells are not adjacent, they destroy
the grid instance and start over with a higher-resolution grid.
Several of the :doc:`kspace_style pppm <kspace_style>` command
variants have this option.
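A condensed sketch of this retry pattern (the refinement policy shown
is illustrative):
.. code-block:: c++
while (true) {
  grid3d = new Grid3d(lmp, world, nxgrid, nygrid, nzgrid);
  grid3d->setup_grid(nxlo_in, nxhi_in, nylo_in, nyhi_in, nzlo_in, nzhi_in,
                     nxlo_out, nxhi_out, nylo_out, nyhi_out, nzlo_out, nzhi_out);
  grid3d->setup_comm(nbuf1, nbuf2);
  if (grid3d->ghost_adjacent()) break;
  delete grid3d;     // ghost cells reach beyond nearest neighbors:
  nxgrid *= 2;       // start over with a higher-resolution grid
  nygrid *= 2;
  nzgrid *= 2;
}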
----------
Grid class remap methods for load balancing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following methods are used when a load-balancing operation,
triggered by the :doc:`balance <balance>` or :doc:`fix balance
<fix_balance>` commands, changes the partitioning of the simulation
domain into processor subdomains.
In order to work with load-balancing, any style command (compute, fix,
pair, or kspace style) which allocates a grid and stores per-grid data
should define a *reset_grid()* method; it takes no arguments. It will
be called by the two balance commands after they have reset processor
subdomains and migrated atoms (particles) to new owning processors.
The *reset_grid()* method will typically perform some or all of the
following operations. See the src/fix_ave_grid.cpp and
src/EXTRA-FIX/fix_ttm_grid.cpp files for examples of *reset_grid()*
methods, as well as the *pack_remap_grid()* and *unpack_remap_grid()*
functions.
First, the *reset_grid()* method can instantiate new grid(s) of the
same global size, then call *setup_grid()* to partition them via the
new processor subdomains. At this point, it can invoke the
*identical()* method which compares the owned and ghost grid cell
index bounds between two grids, the old grid passed as a pointer
argument, and the new grid whose *identical()* method is being called.
It returns 1 if the indices match on all processors, otherwise 0. If
they all match, then the new grids can be deleted; the command can
continue to use the old grids.
If not, then the command should allocate new grid data array(s) which
depend on the new partitioning. If the command does not need to
persist its grid data from the old partitioning to the new one, then
the command can simply delete the old data array(s) and grid
instance(s). It can then return.
If the grid data does need to persist, then the data for each grid
needs to be "remapped" from the old grid partitioning to the new grid
partitioning. The *setup_remap()* and *remap()* methods are used for
that purpose.
.. code-block:: c++
int identical(Grid3d *old);
void setup_remap(Grid3d *old, int &nremap_buf1, int &nremap_buf2)
void remap(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype)
The arguments to these methods are identical to those for
the *setup_comm()* and *forward_comm()* or *reverse_comm()* methods.
However, the returned *nremap_buf1* and *nremap_buf2* values will be
different than the *nbuf1* and *nbuf2* values. They should be used to
allocate two different remap buffers, separate from the owned/ghost
communication buffers.
To use the *remap()* method, the caller must provide two
callback functions:
.. code-block:: c++
void pack_remap_grid(int which, void *vbuf, int nlist, int *list);
void unpack_remap_grid(int which, void *vbuf, int nlist, int *list);
Their arguments are identical to those for the *pack_forward_grid()*
and *unpack_forward_grid()* callback functions (or the reverse
variants) discussed above. Normally, both these methods pack/unpack
all the data arrays for a given grid. The *which* argument of the
*remap()* method sets the *which* value for the pack/unpack functions.
If the command instantiates multiple grids (of different sizes), it
can be used within the pack/unpack methods to select which grid's data
is being remapped.
Note that the *pack_remap_grid()* function must copy values from the
OLD grid data arrays into the *vbuf* buffer. The *unpack_remap_grid()*
function must copy values from the *vbuf* buffer into the NEW grid
data arrays.
After the remap operation for grid cell data has been performed, the
*reset_grid()* method can deallocate the two remap buffers it created,
and can then exit.
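A condensed *reset_grid()* sketch following the steps above (member
and class names are illustrative):
.. code-block:: c++
void FixExample::reset_grid()
{
  int ixlo, ixhi, iylo, iyhi, izlo, izhi;
  int oxlo, oxhi, oylo, oyhi, ozlo, ozhi;

  auto newgrid = new Grid3d(lmp, world, nxgrid, nygrid, nzgrid);
  newgrid->setup_grid(ixlo, ixhi, iylo, iyhi, izlo, izhi,
                      oxlo, oxhi, oylo, oyhi, ozlo, ozhi);

  // same decomposition as before: keep the old grid and data
  if (newgrid->identical(grid3d)) {
    delete newgrid;
    return;
  }

  // otherwise allocate new data arrays and remap persistent data
  int nremap_buf1, nremap_buf2;
  newgrid->setup_remap(grid3d, nremap_buf1, nremap_buf2);
  // ... allocate remap buffers and new data arrays, call remap(),
  //     then free the old arrays, buffers, and grid instance ...
}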
----------
Grid class I/O methods
^^^^^^^^^^^^^^^^^^^^^^
There are two I/O methods in the Grid classes which can be used to
read and write grid cell data to files. The caller can decide on the
precise format of each file, e.g. whether header lines are prepended
or comment lines are allowed. Fundamentally, the file should contain
one line per grid cell for the entire global grid. Each line should
contain identifying info as to which grid cell it is, e.g. a unique
grid cell ID or the ix,iy,iz indices of the cell within a 3d grid.
The line should also contain one or more data values which are stored
within the grid data arrays created by the command.
For grid cell IDs, the LAMMPS convention is that the IDs run from 1 to
N, where N = Nx * Ny for 2d grids and N = Nx * Ny * Nz for 3d grids.
The x-index of the grid cell varies fastest, then y, and the z-index
varies slowest. So for a 10x10x10 grid the cell IDs from 901-1000
would be in the top xy layer of the z dimension.
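In code, this convention amounts to the following one-liner for a 3d
grid with 0-based cell indices ix,iy,iz (a sketch):
.. code-block:: c++
// 1-based grid cell ID; the x-index varies fastest
bigint id = (bigint) iz*nxgrid*nygrid + (bigint) iy*nxgrid + ix + 1;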
The *read_file()* method does something simple. It reads a chunk of
consecutive lines from the file and passes them back to the caller to
process. The caller provides an *unpack_read_grid()* function for this
purpose. The function checks the grid cell ID or indices and only
stores grid cell data for the grid cells it owns.
The *write_file()* method does something slightly more complex. Each
processor packs the data for its owned grid cells into a buffer. The
caller provides a *pack_write_grid()* function for this purpose. The
*write_file()* method then loops over all processors and each sends
its buffer one at a time to processor 0, along with the 3d (or 2d)
index bounds of its grid cell data within the global grid. Processor
0 calls back to the *unpack_write_grid()* function provided by the
caller with the buffer. The function writes one line per grid cell to
the file.
See the src/EXTRA-FIX/fix_ttm_grid.cpp file for examples of how both
these methods are used to read/write electron temperature values
from/to a file, as well as for implementations of the pack/unpack
functions described below.
Here are the details of the two I/O methods and the 3 callback
functions. See the src/fix_ave_grid.cpp file for examples of all of
them.
.. code-block:: c++
void read_file(int caller, void *ptr, FILE *fp, int nchunk, int maxline)
void write_file(int caller, void *ptr, int which,
                int nper, int nbyte, MPI_Datatype datatype)
The *caller* argument in both methods should be one of these values --
Grid3d::COMPUTE, Grid3d::FIX, Grid3d::KSPACE, Grid3d::PAIR --
depending on the style of the caller class. The *ptr* argument in
both methods is the "this" pointer to the caller class. These 2
arguments are used to call back to pack()/unpack() functions in the
caller class, as explained below.
For the *read_file()* method, the *fp* argument is a file pointer to
the file to be read from, opened on processor 0 by the caller.
*Nchunk* is the number of lines to read per chunk, and *maxline* is
the maximum number of characters per line. The Grid class will
allocate a buffer for storing chunks of lines based on these values.
For the *write_file()* method, the *which* argument is a flag the
caller can set which is passed back to the caller's pack()/unpack()
methods. If the command instantiates multiple grids (of different
sizes), this flag can be used within the pack/unpack methods to select
which grid's data is being written out (presumably to different
files). The *nper* argument is the number of values per grid cell to
be written out. The *nbyte* argument is the number of bytes per
value, e.g. 8 for double-precision values. The *datatype* argument is
the MPI_Datatype setting, which should match the *nbyte* argument.
E.g. MPI_DOUBLE for double precision values.
To use the *read_file()* method, the caller must provide one callback
function. To use the *write_file()* method, it provides two callback
functions:
.. code-block:: c++
int unpack_read_grid(int nlines, char *buffer)
void pack_write_grid(int which, void *vbuf)
void unpack_write_grid(int which, void *vbuf, int *bounds)
For *unpack_read_grid()* the *nlines* argument is the number of lines
of character data read from the file and contained in *buffer*. The
lines each include a newline character at the end. When the function
processes the lines, it may choose to skip some of them (header or
comment lines). It returns an integer count of the number of grid
cell lines it processed. This enables the Grid class *read_file()*
method to know when it has read the correct number of lines.
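A sketch of such a callback; the line format (ix,iy,iz indices plus
one value) is an assumption, since each command defines its own file
format:
.. code-block:: c++
int FixExample::unpack_read_grid(int nlines, char *buffer)
{
  int nread = 0;
  char *line = strtok(buffer, "\n");
  while (line) {
    int ix, iy, iz;
    double value;
    // lines not matching the expected format (headers, comments) are skipped
    if (sscanf(line, "%d %d %d %lg", &ix, &iy, &iz, &value) == 4) {
      // store values only for grid cells this processor owns
      if (ix >= nxlo_in && ix <= nxhi_in && iy >= nylo_in &&
          iy <= nyhi_in && iz >= nzlo_in && iz <= nzhi_in)
        data3d_one[iz][iy][ix] = value;
      nread++;
    }
    line = strtok(nullptr, "\n");
  }
  return nread;
}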
For *pack_write_grid()* and *unpack_write_grid()*, the *vbuf* argument
is the buffer to pack/unpack data to/from. It is a void pointer so
that the caller can cast it to whatever data type it chooses,
e.g. double precision values. The *which* argument is set to the
*which* value of the *write_file()* method. It allows the caller to
choose which grid data to operate on.
For *unpack_write_grid()*, the *bounds* argument is a vector of 4 or 6
integer grid indices (4 for 2d, 6 for 3d). They are the
xlo,xhi,ylo,yhi,zlo,zhi index bounds of the portion of the global grid
which the *vbuf* holds owned grid cell data values for. The caller
should loop over the values in *vbuf* with a double loop (2d) or
triple loop (3d), similar to the code snippets listed above. The
x-index varies fastest, then y, and the z-index slowest. If there are
multiple values per grid cell, the index for those values varies
fastest of all. The caller can add the x,y,z indices of the grid cell
(or the corresponding grid cell ID) to the data value(s) written as
one line to the output file.
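A sketch for a 3d grid with one value per cell (the file pointer *fp*
and the choice of writing 1-based indices are assumptions):
.. code-block:: c++
void FixExample::unpack_write_grid(int /*which*/, void *vbuf, int *bounds)
{
  auto buf = (double *) vbuf;
  int m = 0;
  // bounds = xlo,xhi,ylo,yhi,zlo,zhi; x varies fastest in vbuf
  for (int iz = bounds[4]; iz <= bounds[5]; iz++)
    for (int iy = bounds[2]; iy <= bounds[3]; iy++)
      for (int ix = bounds[0]; ix <= bounds[1]; ix++)
        fprintf(fp, "%d %d %d %g\n", ix+1, iy+1, iz+1, buf[m++]);
}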
----------
Style class grid access methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A style command can enable its grid cell data to be accessible from
other commands. For example :doc:`fix ave/grid <fix_ave_grid>` or
:doc:`dump grid <dump>` or :doc:`dump grid/vtk <dump>`. Those
commands access the grid cell data by using a *grid reference* in
their input script syntax, as described on the :doc:`Howto_grid
<Howto_grid>` doc page. They look like this:
* c_ID:gname:dname
* c_ID:gname:dname[I]
* f_ID:gname:dname
* f_ID:gname:dname[I]
Each grid a command instantiates has a unique *gname*, defined by the
command. Likewise each grid cell data structure (scalar or vector)
associated with a grid has a unique *dname*, also defined by the
command.
To provide access to its grid cell data, a style command needs to
implement the following 4 methods:
.. code-block:: c++
int get_grid_by_name(const std::string &name, int &dim);
void *get_grid_by_index(int index);
int get_griddata_by_name(int igrid, const std::string &name, int &ncol);
void *get_griddata_by_index(int index);
Currently only computes and fixes can implement these methods. If they
do so, the compute or fix should also set the variable
*pergrid_flag* to 1. See any of the compute or fix commands which set
"pergrid_flag = 1" for examples of how these 4 functions can be
implemented.
The *get_grid_by_name()* method takes a grid name as input and returns
two values. The *dim* argument is returned as 2 or 3 for the
dimensionality of the grid. The function return is a grid index from
0 to G-1 where *G* is the number of grids the command instantiates. A
value of -1 is returned if the grid name is not recognized.
The *get_grid_by_index()* method is called after the
*get_grid_by_name()* method, using the grid index it returned as its
argument. This method will return a pointer to the Grid2d or Grid3d
class. The caller can use this to query grid attributes, such as the
global size of the grid, to ensure it is of the expected size.
The *get_griddata_by_name()* method takes a grid index *igrid* and a
data name as input. It returns two values. The *ncol* argument is
returned as a 0 if the grid data is a single value (scalar) per grid
cell, or an integer M > 0 if there are M values (vector) per grid
cell. Note that even if M = 1, it is still a 1-length vector, not a
scalar. The function return is a data index from 0 to D-1 where *D*
is the number of data sets associated with that grid by the command.
A value of -1 is returned if the data name is not recognized.
The *get_griddata_by_index()* method is called after the
*get_griddata_by_name()* method, using the data index it returned as
its argument. This method will return a pointer to the
multidimensional array which stores the requested data.
As in the discussion above of the Memory class *create_offset()*
methods, the dimensionality of the array associated with the returned
pointer depends on whether it is a 2d or 3d grid and whether there is
a single or multiple values stored for each grid cell:
* single value per cell for a 2d grid = 2d array pointer
* multiple values per cell for a 2d grid = 3d array pointer
* single value per cell for a 3d grid = 3d array pointer
* multiple values per cell for a 3d grid = 4d array pointer
The caller will typically access the data by casting the void pointer
to the corresponding array pointer and using nested loops in x,y,z
between owned or ghost index bounds returned by the
*get_bounds_owned()* or *get_bounds_ghost()* methods to index into the
array. Example code snippets with this logic were listed above.
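As an illustration, code in another command accessing a fix's per-grid
data might look like this sketch (the fix ID and the *gname*/*dname*
strings are placeholders):
.. code-block:: c++
int dim, ncol;
Fix *fix = modify->get_fix_by_id("ID");
int igrid = fix->get_grid_by_name("grid", dim);
auto grid = (Grid3d *) fix->get_grid_by_index(igrid);       // assuming dim == 3
int idata = fix->get_griddata_by_name(igrid, "data", ncol);
// ncol == 0: one value per cell of a 3d grid -> 3d array pointer
auto data = (double ***) fix->get_griddata_by_index(idata);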
----------
Final notes
^^^^^^^^^^^
Finally, here are some additional issues to pay attention to for
writing any style command which uses distributed grids via the Grid2d
or Grid3d class.
The command destructor should delete all instances of the Grid class,
any buffers it allocated for forward/reverse or remap communication,
and any data arrays it allocated to store grid cell data.
If a command is intended to work for either 2d or 3d simulations, then
it should have logic to instantiate either 2d or 3d grids and their
associated data arrays, depending on the dimension of the simulation
box. The :doc:`fix ave/grid <fix_ave_grid>` command is an example of
such a command.
When a command maps its particles to the grid and updates grid cell
values, it should check that it is not updating or accessing a grid
cell value outside the range of its owned+ghost cells, and generate an
error message if that is the case. This could happen, for example, if
a particle has moved further than half the neighbor skin distance,
because the neighbor list update criteria are not adequate to prevent
it from happening. See the src/KSPACE/pppm.cpp file and its
*particle_map()* method for an example of this kind of error check.

View File

@ -102,7 +102,7 @@ build is then :doc:`processed in parallel <Developer_par_neigh>`.
The most commonly required neighbor list is a so-called "half" neighbor
list, where each pair of atoms is listed only once (except when the
:doc:`newton command setting <newton>` for pair is off; in that case
pairs straddling subdomains or periodic boundaries will be listed twice).
Thus these are the default settings when a neighbor list request is created in:
.. code-block:: c++
@ -361,7 +361,7 @@ allocated as a 1d vector or 3d array. Either way, the ordering of
values within contiguous memory is x fastest, then y, z slowest.
For the ``3d decomposition`` of the grid, the global grid is
partitioned into bricks that correspond to the subdomains of the
simulation box that each processor owns. Often, this is a regular 3d
array (Px by Py by Pz) of bricks, where P = number of processors =
Px * Py * Pz. More generally it can be a tiled decomposition, where

View File

@ -7,7 +7,7 @@ but there are a small number of files in several other languages like C,
Fortran, Shell script, or Python.
The core of the code is located in the ``src`` folder and its
subdirectories. A sizable number of these files are in the ``src``
directory itself, but there are plenty of :doc:`packages <Packages>`,
which can be included or excluded when LAMMPS is built. See the
:doc:`Include packages in build <Build_package>` section of the manual
@ -15,42 +15,42 @@ for more information about that part of the build process. LAMMPS
currently supports building with :doc:`conventional makefiles
<Build_make>` and through :doc:`CMake <Build_cmake>`. Those procedures
differ in how packages are enabled or disabled for inclusion into a
LAMMPS binary, so they cannot be mixed. The source files for each
package are in all-uppercase subdirectories of the ``src`` folder, for
example ``src/MOLECULE`` or ``src/EXTRA-MOLECULE``. The ``src/STUBS``
subdirectory is not a package but contains a dummy MPI library that is
used when building a serial version of the code. The ``src/MAKE``
directory and its subdirectories contain makefiles with settings and
flags for a variety of configuration and machines for the build process
with traditional makefiles.
The ``lib`` directory contains the source code for several supporting
libraries or files with configuration settings to use globally installed
libraries that are required by some optional packages. They may
include python scripts that can transparently download additional source
code on request. Each subdirectory, like ``lib/poems`` or ``lib/gpu``,
contains the source files, some of which are in different languages such
as Fortran or CUDA. These libraries are included in the LAMMPS build
if the corresponding package is installed.
LAMMPS C++ source files almost always come in pairs, such as
``src/run.cpp`` (implementation file) and ``src/run.h`` (header file).
Each pair of files defines a C++ class, for example the
:cpp:class:`LAMMPS_NS::Run` class, which contains the code invoked by
the :doc:`run <run>` command in a LAMMPS input script. As this example
illustrates, source file and class names often have a one-to-one
correspondence with a command used in a LAMMPS input script. Some
source files and classes do not have a corresponding input script
command, for example ``src/force.cpp`` and the :cpp:class:`LAMMPS_NS::Force`
class. They are discussed in the next section.
The names of all source files are in lower case and may use the
underscore character '_' to separate words. Apart from bundled,
externally maintained libraries, which may have different conventions,
all C and C++ header files have a ``.h`` extension, all C++ files have a
``.cpp`` extension, and C files a ``.c`` extension. A few C++ classes
and utility functions are implemented with only a ``.h`` file. Examples
are the Pointers and Commands classes or the MathVec functions.
Class topology
--------------
@ -62,36 +62,35 @@ associated source files in the ``src`` folder, for example the class
:cpp:class:`LAMMPS_NS::Memory` corresponds to the files ``memory.cpp``
and ``memory.h``, or the class :cpp:class:`LAMMPS_NS::AtomVec`
corresponds to the files ``atom_vec.cpp`` and ``atom_vec.h``. Full
lines in the figure represent compositing: that is, the class at the
base of the arrow holds a pointer to an instance of the class at the
tip. Dashed lines instead represent inheritance: the class at the tip
of the arrow is derived from the class at the base. Classes with a red
boundary are not instantiated directly, but they represent the base
classes for "styles". Those "styles" make up the bulk of the LAMMPS
code and only a few representative examples are included in the figure,
so it remains readable.
.. _class-topology:
.. figure:: JPG/lammps-classes.png
LAMMPS class topology
This figure shows relations of base classes of the LAMMPS
simulation package. Full lines indicate that a class holds an
instance of the class it is pointing to; dashed lines point to
derived classes that are given as examples of what classes may be
instantiated during a LAMMPS run based on the input commands and
accessed through the API defined by their respective base classes.
At the core is the :cpp:class:`LAMMPS <LAMMPS_NS::LAMMPS>` class,
which holds pointers to class instances with specific purposes.
Those may hold instances of other classes, sometimes directly, or
only temporarily, sometimes as derived classes or derived classes
of derived classes, which may also hold instances of other
classes.
The :cpp:class:`LAMMPS_NS::LAMMPS` class is the topmost class and
represents what is generally referred to as an "instance of LAMMPS". It
is a composite holding pointers to instances of other core classes
providing the core functionality of the MD engine in LAMMPS and through
them abstractions of the required operations. The constructor of the
LAMMPS class will instantiate those instances, process the command line
@ -103,44 +102,42 @@ LAMMPS while passing it the command line flags and input script. It
deletes the LAMMPS instance after the method reading the input returns
and shuts down the MPI environment before it exits the executable.
The :cpp:class:`LAMMPS_NS::Pointers` class is not shown in the
:ref:`class-topology` figure for clarity. It holds references to many
of the members of the :cpp:class:`LAMMPS_NS::LAMMPS` class, so that all
classes derived from :cpp:class:`LAMMPS_NS::Pointers` have direct
access to those references. In the class topology, all classes with a
blue boundary are referenced in the Pointers class, and all classes in
the second and third columns that are not listed as derived classes are
instead derived from :cpp:class:`LAMMPS_NS::Pointers`. To initialize
the pointer references in Pointers, a pointer to the LAMMPS class
instance needs to be passed to the constructor. All constructors for
classes derived from it must do so and thus pass that pointer to the
constructor for :cpp:class:`LAMMPS_NS::Pointers`; the default
constructor for :cpp:class:`LAMMPS_NS::Pointers` is disabled to enforce
this.
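As an illustration, here is a minimal sketch of how a (hypothetical)
class would hook into this scheme; the class name ``MyTool`` and its
method are invented for the example:

.. code-block:: c++

   // sketch: deriving from Pointers provides initialized references
   // like "atom", "force", or "comm" of the given LAMMPS instance
   #include "pointers.h"

   namespace LAMMPS_NS {

   class MyTool : protected Pointers {
    public:
     // the LAMMPS instance pointer must be forwarded to Pointers
     MyTool(LAMMPS *lmp) : Pointers(lmp) {}

     // example use of such a reference: x coordinate of first local atom
     double first_x() const { return atom->x[0][0]; }
   };

   }    // namespace LAMMPS_NS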
Since all storage is supposed to be encapsulated (there are a few
exceptions), the LAMMPS class can also be instantiated multiple times by
a calling code. Outside the aforementioned exceptions, those LAMMPS
a calling code. Outside of the aforementioned exceptions, those LAMMPS
instances can be used alternately. As of the time of this writing
(early 2023) LAMMPS is not yet sufficiently thread-safe for concurrent
(early 2021) LAMMPS is not yet sufficiently thread-safe for concurrent
execution. When running in parallel with MPI, care has to be taken,
that suitable copies of communicators are used to not create conflicts
between different instances.
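For example, a driver code could create and use two instances on
duplicated communicators roughly as follows (a sketch, not a complete
program):

.. code-block:: c++

   // sketch: two LAMMPS instances in one driver; each instance gets its
   // own duplicate of the world communicator to avoid MPI conflicts
   #include "lammps.h"
   #include <mpi.h>

   using namespace LAMMPS_NS;

   int main(int argc, char **argv)
   {
     MPI_Init(&argc, &argv);
     MPI_Comm comm1, comm2;
     MPI_Comm_dup(MPI_COMM_WORLD, &comm1);
     MPI_Comm_dup(MPI_COMM_WORLD, &comm2);

     auto *one = new LAMMPS(argc, argv, comm1);
     auto *two = new LAMMPS(argc, argv, comm2);

     // ... use the two instances alternately (not concurrently) ...

     delete one;
     delete two;
     MPI_Comm_free(&comm1);
     MPI_Comm_free(&comm2);
     MPI_Finalize();
     return 0;
   }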
The LAMMPS class currently holds instances of 19 classes representing
the core functionality. There are a handful of virtual parent classes
in LAMMPS that define what LAMMPS calls ``styles``. These are shaded
red in the :ref:`class-topology` figure. Each of these is the parent
of a number of child classes that implement the interface defined by
the parent class. There are two main categories of these ``styles``:
some may only have one instance active at a time (e.g. atom, pair,
bond, angle, dihedral, improper, kspace, comm) and there is a
dedicated pointer variable for each of them in the corresponding
composite class.
Setups that require a mix of different such styles have to use a
*hybrid* class instance that acts as a proxy, and manages and forwards
calls to the corresponding sub-style class instances for the designated
subset of atoms or data. The composite class may also have lists of
class instances, e.g. ``Modify`` handles lists of compute and fix
styles, while ``Output`` handles a list of dump class instances.
The exceptions to this scheme are the ``command`` style classes. These
implement specific commands that can be invoked before, after, or in
@ -149,19 +146,19 @@ command() method called and then, after completion, the class instance
deleted. Examples for this are the create_box, create_atoms, minimize,
run, set, or velocity command styles.
For all those ``styles``, certain naming conventions are employed: for
the fix nve command the class is called FixNVE and the source files are
``fix_nve.h`` and ``fix_nve.cpp``. Similarly, for fix ave/time we have
FixAveTime and ``fix_ave_time.h`` and ``fix_ave_time.cpp``. Style names
are lower case and without spaces or special characters. A suffix or
additional words are appended with a forward slash '/', which denotes a
variant of the corresponding class without the suffix. To connect the
style name and the class name, LAMMPS uses macros like ``AtomStyle()``,
``PairStyle()``, ``BondStyle()``, ``RegionStyle()``, and so on in the
corresponding header file. During configuration or compilation, files
with the pattern ``style_<name>.h`` are created that consist of a list
of include statements including all headers of all styles of a given
type that are currently enabled (or "installed").
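For illustration, here is a condensed sketch of the top of
``fix_nve.h`` showing this convention (class body abbreviated):

.. code-block:: c++

   // when the generated style_fix.h is included with FIX_CLASS defined,
   // the FixStyle() macro maps the style name "nve" to the class FixNVE;
   // otherwise the regular class declaration is compiled
   #ifdef FIX_CLASS
   FixStyle(nve,FixNVE);
   #else

   #ifndef LMP_FIX_NVE_H
   #define LMP_FIX_NVE_H

   #include "fix.h"

   namespace LAMMPS_NS {

   class FixNVE : public Fix {
    public:
     FixNVE(class LAMMPS *, int, char **);
     // ...
   };

   }    // namespace LAMMPS_NS

   #endif
   #endif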
More details on individual classes in the :ref:`class-topology` are as
@ -175,8 +172,8 @@ follows:
that one or multiple simulations can be run, on the processors
allocated for a run, e.g. by the mpirun command.
- The Input class reads and processes input (strings and files), stores
variables, and invokes :doc:`commands <Commands_all>`.
- Command style classes are derived from the Command class. They provide
input script commands that perform one-time operations
@ -195,7 +192,7 @@ follows:
- The Atom class stores per-atom properties associated with atom styles.
More precisely, they are allocated and managed by a class derived from
the AtomVec class, and the Atom class simply stores pointers to them.
The classes derived from AtomVec represent the different atom styles,
and they are instantiated through the :doc:`atom_style <atom_style>`
command.
@ -209,22 +206,18 @@ follows:
class stores a single list (for all atoms). A NeighRequest class
instance is created by pair, fix, or compute styles when they need a
particular kind of neighbor list and use the NeighRequest properties
to select the neighbor list settings for the given request. There can
be multiple instances of the NeighRequest class. The Neighbor class
will try to optimize how the requests are processed. Depending on the
NeighRequest properties, neighbor lists are constructed from scratch,
aliased, or constructed by post-processing an existing list into
sub-lists. A sketch of how a style files such a request follows below.
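As an example, with the simplified request API of recent LAMMPS
versions, a (hypothetical) pair style ``PairMyModel`` would file its
request in its ``init_style()`` method:

.. code-block:: c++

   // sketch: a pair style requesting a neighbor list; the Neighbor
   // class decides whether the list is built from scratch or derived
   // from another compatible request
   void PairMyModel::init_style()
   {
     neighbor->add_request(this);   // default: a "half" neighbor list
     // or, for a list with all neighbors of each atom:
     // neighbor->add_request(this, NeighConst::REQ_FULL);
   }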
- The Comm class performs inter-processor communication, typically of
ghost atom information. This usually involves MPI message exchanges
with 6 neighboring processors in the 3d logical grid of processors
mapped to the simulation box. There are two :doc:`communication styles
<comm_style>`, enabling different ways to perform the domain
decomposition.
- The Irregular class is used when atoms may migrate to arbitrary
processors.
- The Domain class stores the simulation box geometry, as well as
geometric Regions and any user definition of a Lattice. The latter
@ -253,7 +246,7 @@ follows:
file, dump file snapshots, and restart files. These correspond to the
:doc:`Thermo <thermo_style>`, :doc:`Dump <dump>`, and
:doc:`WriteRestart <write_restart>` classes respectively. The Dump
class is a base class, with several derived classes implementing
various dump style variants.
- The Timer class logs timing information, output at the end

View File

@ -1,22 +1,22 @@
Communication
^^^^^^^^^^^^^
Following the selected partitioning scheme, all per-atom data (atom
IDs, positions, velocities, types, etc.) is distributed across the MPI
processes, which allows LAMMPS to handle very large systems, provided
it uses a correspondingly large number of MPI processes. To be able to
compute the short-range interactions, MPI processes need not only
access to the data of atoms they "own" but also information about
atoms from neighboring subdomains, in LAMMPS referred to as "ghost"
atoms. These are copies of atoms storing required per-atom data for up
to the communication cutoff distance. The green dashed-line boxes in
the :ref:`domain-decomposition` figure illustrate the extended
ghost-atom subdomain for one processor.
This approach is also used to implement periodic boundary
conditions: atoms that lie within the cutoff distance across a periodic
boundary are also stored as ghost atoms and taken from the periodic
replication of the subdomain, which may be the same subdomain, e.g. if
running in serial. As a consequence of this, force computation in
LAMMPS is not subject to minimum image conventions and thus cutoffs may
be larger than half the simulation domain.
@ -28,10 +28,10 @@ be larger than half the simulation domain.
ghost atom communication
This figure shows the ghost atom communication patterns between
subdomains for "brick" (left) and "tiled" communication styles for
sub-domains for "brick" (left) and "tiled" communication styles for
2d simulations. The numbers indicate MPI process ranks. Here the
subdomains are drawn spatially separated for clarity. The
dashed-line box is the extended subdomain of processor 0 which
sub-domains are drawn spatially separated for clarity. The
dashed-line box is the extended sub-domain of processor 0 which
includes its ghost atoms. The red- and blue-shaded boxes are the
regions of communicated ghost atoms.
@ -42,7 +42,7 @@ atom communication is performed in two stages for a 2d simulation (three
in 3d) for both a regular and irregular partitioning of the simulation
box. For the regular case (left) atoms are exchanged first in the
*x*-direction, then in *y*, with four neighbors in the grid of processor
subdomains.
In the *x* stage, processor ranks 1 and 2 send owned atoms in their
red-shaded regions to rank 0 (and vice versa). Then in the *y* stage,
@ -55,11 +55,11 @@ For the irregular case (right) the two stages are similar, but a
processor can have more than one neighbor in each direction. In the
*x* stage, MPI ranks 1,2,3 send owned atoms in their red-shaded regions to
rank 0 (and vice versa). These include only atoms between the lower
and upper *y*-boundary of rank 0's subdomain. In the *y* stage, ranks
4,5,6 send atoms in their blue-shaded regions to rank 0. This may
include ghost atoms they received in the *x* stage, but only if they
are needed by rank 0 to fill its extended ghost atom regions in the
+/-*y* directions (blue rectangles). Thus, in this case, ranks 5 and
6 do not include ghost atoms they received from each other (in the *x*
stage) in the atoms they send to rank 0. The key point is that while
the pattern of communication is more complex in the irregular
@ -78,14 +78,14 @@ A "reverse" communication is when computed ghost atom attributes are
sent back to the processor who owns the atom. This is used (for
example) to sum partial forces on ghost atoms to the complete force on
owned atoms. The order of the two stages described in the
:ref:`ghost-atom-comm` figure is inverted, and the same lists of atoms
are used to pack and unpack message buffers with per-atom forces. When
a received buffer is unpacked, the ghost forces are summed to owned atom
forces. As in forward communication, forces on atoms in the four blue
corners of the diagrams are sent, received, and summed twice (once at
each stage) before owning processors have the full force.
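As an illustration, a fix that needs updated values of a custom
per-atom array on the ghost atoms would trigger a forward
communication and supply pack/unpack callbacks roughly like this
(``FixMyProp`` and ``myprop`` are invented for the sketch, and the
callback signatures vary somewhat between LAMMPS versions):

.. code-block:: c++

   // sketch: forward communication of one custom per-atom value
   // ("myprop") from owned atoms to their ghost atom copies
   void FixMyProp::push_to_ghosts()
   {
     comm_forward = 1;             // values per atom in the buffer
     comm->forward_comm(this);     // invokes the callbacks below
   }

   int FixMyProp::pack_forward_comm(int n, int *list, double *buf,
                                    int /*pbc_flag*/, int * /*pbc*/)
   {
     for (int i = 0; i < n; i++) buf[i] = myprop[list[i]];    // owned atoms
     return n;
   }

   void FixMyProp::unpack_forward_comm(int n, int first, double *buf)
   {
     for (int i = 0; i < n; i++) myprop[first + i] = buf[i];  // ghost atoms
   }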
These two operations are used in many places within LAMMPS aside from
exchange of coordinates and forces, for example by manybody potentials
to share intermediate per-atom values, or by rigid-body integrators to
enable each atom in a body to access body properties. Here are
@ -105,16 +105,16 @@ performed in LAMMPS:
atom pairs when building neighbor lists or computing forces.
- The cutoff distance for exchanging ghost atoms is typically equal to
the neighbor cutoff. But it can also be set to a larger value if needed,
e.g. half the diameter of a rigid body composed of multiple atoms or
over 3x the length of a stretched bond for dihedral interactions. It
can also exceed the periodic box size. For the regular communication
pattern (left), if the cutoff distance extends beyond a neighbor
processor's subdomain, then multiple exchanges are performed in the
same direction. Each exchange is with the same neighbor processor,
but buffers are packed/unpacked using a different list of atoms. For
forward communication, in the first exchange, a processor sends only
owned atoms. In subsequent exchanges, it sends ghost atoms received
in previous exchanges. For the irregular pattern (right) overlaps of
a processor's extended ghost-atom subdomain with all other processors
in each dimension are detected.

View File

@ -20,7 +20,7 @@ e) electric field values from grid points near each atom are interpolated to com
For any of the spatial-decomposition partitioning schemes, each
processor owns the brick-shaped portion of FFT grid points contained
within its subdomain. The two interpolation operations use a stencil of grid
points surrounding each atom. To accommodate the stencil size, each
processor also stores a few layers of ghost grid points surrounding its
brick. Forward and reverse communication of grid point values is
@ -40,31 +40,31 @@ orthogonal boxes.
.. _fft-parallel:
.. figure:: img/fft-decomp-parallel.png
:align: center
Parallel FFT in PPPM
Stages of a parallel FFT for a simulation domain overlaid with an
8x8x8 3d FFT grid, partitioned across 64 processors. Within each
of the 4 diagrams, grid cells of the same color are owned by a
single processor; for simplicity, only cells owned by 4 or 8 of
the 64 processors are colored. The two images on the left
illustrate brick-to-pencil communication. The two images on the
right illustrate pencil-to-pencil communication, which in this
case transposes the *y* and *z* dimensions of the grid.
Parallel 3d FFTs require substantial communication relative to their
computational cost. A 3d FFT is implemented by a series of 1d FFTs
along the *x-*, *y-*, and *z-*\ direction of the FFT grid. Thus, the
FFT grid cannot be decomposed like atoms into 3 dimensions for parallel
processing of the FFTs, but only in 1 (as planes) or 2 (as pencils)
dimensions, and in between the steps the grid needs to be transposed so
that the FFT grid portion "owned" by each MPI process is complete in
the direction of the 1d FFTs it has to perform. LAMMPS uses the
pencil-decomposition algorithm as shown in the :ref:`fft-parallel`
figure.
Initially (far left), each processor owns a brick of same-color grid
cells (actually grid points) contained within its subdomain. A
brick-to-pencil communication operation converts this layout to 1d
pencils in the *x*-dimension (center left). Again, cells of the same
color are owned by the same processor. Each processor can then compute
@ -97,7 +97,7 @@ across all $P$ processors with a single call to ``MPI_Alltoall()``, but
this is typically much slower. However, for the specialized brick and
pencil tiling illustrated in :ref:`fft-parallel` figure, collective
communication across the entire MPI communicator is not required. In
the example, an :math:`8^3` grid with 512 grid cells is partitioned
across 64 processors; each processor owns a 2x2x2 3d brick of grid
cells. The initial brick-to-pencil communication (upper left to upper
right) only requires collective communication within subgroups of 4
@ -132,7 +132,7 @@ grid/particle operations that LAMMPS supports:
- The fftMPI library allows each grid dimension to be a multiple of
small prime factors (2,3,5), and allows any number of processors to
perform the FFT. The resulting brick and pencil decompositions are
thus not always as well-aligned, but the size of subgroups of
processors for the two modes of communication (brick/pencil and
pencil/pencil) still scale as :math:`O(P^{\frac{1}{3}})` and
:math:`O(P^{\frac{1}{2}})`.
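These estimates can be checked with a quick standalone calculation
(for the idealized cubic processor grids discussed above): for
:math:`P = 64` the subgroup sizes come out as 4 and 8, matching the
:ref:`fft-parallel` figure.

.. code-block:: c++

   // standalone check of the subgroup-size scaling quoted above
   #include <cmath>
   #include <cstdio>

   int main()
   {
     for (int p : {64, 512, 4096})
       std::printf("P=%5d  brick/pencil ~%3.0f  pencil/pencil ~%3.0f\n",
                   p, std::cbrt((double) p), std::sqrt((double) p));
     return 0;
   }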
@ -143,28 +143,26 @@ grid/particle operations that LAMMPS supports:
in memory. This reordering can be done during the packing or
unpacking of buffers for MPI communication.
- For large systems and particularly many MPI processes, the dominant
cost for parallel FFTs is often the communication, not the computation
of 1d FFTs, even though the latter scales as :math:`N \log(N)` in the
number of grid points *N* per grid direction. This is due to the fact
that only a 2d decomposition into pencils is possible while atom data
(and their corresponding short-range force and energy computations)
can be decomposed efficiently in 3d.
Reducing the number of MPI processes involved in the MPI communication
will reduce this kind of overhead. By using a :doc:`hybrid MPI +
OpenMP parallelization <Speed_omp>` it is still possible to use all
processors for parallel computation: OpenMP parallelization is applied
inside the MPI domains. While that may have a lower parallel
efficiency for some parts of the computation, the loss can be smaller
than the communication overhead in the 3d FFTs.
As an alternative, it is also possible to start a :ref:`multi-partition
<partition>` calculation and then use the :doc:`verlet/split
integrator <run_style>` to perform the PPPM computation on a
dedicated, separate partition of MPI processes. This uses an integer
"1:*p*" mapping of *p* subdomains of the atom decomposition to one
subdomain of the FFT grid decomposition, where pairwise non-bonded
and bonded forces and energies are computed on the larger partition
and the PPPM kspace computation concurrently on the smaller partition.
@ -174,10 +172,10 @@ grid/particle operations that LAMMPS supports:
- LAMMPS implements a ``GridComm`` class which overlays the simulation
domain with a regular grid, partitions it across processors in a
manner consistent with processor subdomains, and provides methods for
forward and reverse communication of owned and ghost grid point
values. It is used for PPPM as an FFT grid (as outlined above) and
also for the MSM algorithm, which uses a cascade of grid sizes from
fine to coarse to compute long-range Coulombic forces. The GridComm
class is also useful for models where continuum fields interact with
particles. For example, the two-temperature model (TTM) defines heat

View File

@ -3,12 +3,12 @@ Neighbor lists
To compute forces efficiently, each processor creates a Verlet-style
neighbor list which enumerates all pairs of atoms *i,j* (*i* = owned,
*j* = owned or ghost) with separation less than the applicable neighbor
list cutoff distance. In LAMMPS, the neighbor lists are stored in a
multiple-page data structure; each page is a contiguous chunk of memory
which stores vectors of neighbor atoms *j* for many *i* atoms. This
allows pages to be incrementally allocated or deallocated in blocks as
needed. Neighbor lists typically consume the most memory of any data
structure in LAMMPS. The neighbor list is rebuilt (from scratch) once
every few timesteps, then used repeatedly each step for force or other
computations. The neighbor cutoff distance is :math:`R_n = R_f +
@ -16,20 +16,20 @@ computations. The neighbor cutoff distance is :math:`R_n = R_f +
the interatomic potential for computing short-range pairwise or manybody
forces and :math:`\Delta_s` is a "skin" distance that allows the list to
be used for multiple steps assuming that atoms do not move very far
between consecutive time steps. Typically, the code triggers
reneighboring when any atom has moved half the skin distance since the
last reneighboring; this and other options of the neighbor list rebuild
can be adjusted with the :doc:`neigh_modify <neigh_modify>` command.
On steps when reneighboring is performed, atoms which have moved outside
their owning processor's subdomain are first migrated to new processors
via communication. Periodic boundary conditions are also (only)
enforced on these steps to ensure each atom is re-assigned to the
correct processor. After migration, the atoms owned by each processor
are stored in a contiguous vector. Periodically, each processor
spatially sorts owned atoms within its vector to reorder it for improved
cache efficiency in force computations and neighbor list building. For
that, atoms are spatially binned and then reordered so that atoms in the
same bin are adjacent in the vector. Atom sorting can be disabled or
its settings modified with the :doc:`atom_modify <atom_modify>` command.
@ -39,12 +39,12 @@ its settings modified with the :doc:`atom_modify <atom_modify>` command.
neighbor list stencils
A 2d simulation subdomain (thick black line) and the corresponding
ghost atom cutoff region (dashed blue line) for both orthogonal
(left) and triclinic (right) domains. A regular grid of neighbor
bins (thin lines) overlays the entire simulation domain and need not
align with subdomain boundaries; only the portion overlapping the
augmented subdomain is shown. In the triclinic case, it overlaps the
bounding box of the tilted rectangle. The blue- and red-shaded bins
represent a stencil of bins searched to find neighbors of a particular
atom (black dot).
@ -52,13 +52,13 @@ its settings modified with the :doc:`atom_modify <atom_modify>` command.
To build a local neighbor list in linear time, the simulation domain is
overlaid (conceptually) with a regular 3d (or 2d) grid of neighbor bins,
as shown in the :ref:`neighbor-stencil` figure for 2d models and a
single MPI processor's subdomain. Each processor stores a set of
neighbor bins which overlap its subdomain, extended by the neighbor
cutoff distance :math:`R_n`. As illustrated, the bins need not align
with processor boundaries; an integer number in each dimension is fit to
the size of the entire simulation box.
Most often, LAMMPS builds what is called a "half" neighbor list where
each *i,j* neighbor pair is stored only once, with either atom *i* or
*j* as the central atom. The build can be done efficiently by using a
pre-computed "stencil" of bins around a central origin bin which
@ -67,18 +67,18 @@ is simply a list of integer offsets in *x,y,z* of nearby bins
surrounding the origin bin which are close enough to contain any
neighbor atom *j* within a distance :math:`R_n` from any atom *i* in the
origin bin. Note that for a half neighbor list, the stencil can be
asymmetric, since each atom only needs to store half its nearby
neighbors.
These stencils are illustrated in the figure for a half list and a bin
size of :math:`\frac{1}{2} R_n`. There are 13 red+blue stencil bins in
2d (for the orthogonal case, 15 for triclinic). In 3d there would be
63, 13 in the plane of bins that contain the origin bin and 25 in each
of the two planes above it in the *z* direction (75 for triclinic). The
triclinic stencil has extra bins because the bins tile the bounding box
of the entire triclinic domain, and thus are not periodic with respect
to the simulation box itself. The stencil and logic for determining
which *i,j* pairs to include in the neighbor list are altered slightly
to account for this.
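Those counts (13 bins in 2d, 63 in 3d for the orthogonal case) can be
reproduced with a short standalone enumeration (a sketch, not LAMMPS
code), assuming a bin size of :math:`\frac{1}{2} R_n`: a bin at
integer offset *(i,j,k)* belongs to the half stencil if it does not
come "before" the origin bin and if its closest point lies within
:math:`R_n` of the origin bin.

.. code-block:: c++

   // standalone sketch: count half-stencil bins for a bin size of Rn/2
   // in 2d and 3d (orthogonal case); expected output: 13 and 63
   #include <cmath>
   #include <cstdio>

   int main()
   {
     const double rn = 1.0, bin = 0.5 * rn;   // stencil reach = 2 bins
     const int s = 2;
     auto gap = [bin](int d) { return (d > 0) ? (d - 1) * bin : 0.0; };

     for (int dim : {2, 3}) {
       int count = 0;
       for (int k = 0; k <= ((dim == 3) ? s : 0); ++k)
         for (int j = -s; j <= s; ++j)
           for (int i = -s; i <= s; ++i) {
             // half stencil: skip bins "before" the origin in (k,j,i) order
             if (k == 0 && (j < 0 || (j == 0 && i < 0))) continue;
             // closest distance between the origin bin and the offset bin
             const double x = gap(std::abs(i)), y = gap(std::abs(j)),
                          z = gap(std::abs(k));
             if (std::sqrt(x * x + y * y + z * z) < rn) ++count;
           }
       std::printf("%dd half stencil: %d bins\n", dim, count);
     }
     return 0;
   }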
To build a neighbor list, a processor first loops over its "owned" plus
"ghost" atoms and assigns each to a neighbor bin. This uses an integer
@ -95,7 +95,7 @@ supports:
been found to be optimal for many typical cases. Smaller bins incur
additional overhead to loop over; larger bins require more distance
calculations. Note that for smaller bin sizes, the 2d stencil in the
figure would be of a more semicircular shape (hemispherical in 3d),
with bins near the corners of the square eliminated due to their
distance from the origin bin.
@ -111,8 +111,8 @@ supports:
symmetric stencil. It also includes lists with partial enumeration of
ghost atom neighbors. The full and ghost-atom lists are used by
various manybody interatomic potentials. Lists may also use different
criteria for inclusion of a pairwise interaction. Typically, this
depends only on the distance between two atoms and the cutoff
distance. But for finite-size coarse-grained particles with
individual diameters (e.g. polydisperse granular particles), it can
also depend on the diameters of the two particles.
@ -121,11 +121,11 @@ supports:
of the master neighbor list for the full system need to be generated,
one for each sub-style, which contains only the *i,j* pairs needed to
compute interactions between subsets of atoms for the corresponding
potential. This means that not all *i* or *j* atoms owned by a processor
are included in a particular sub-list.
- Some models use different cutoff lengths for pairwise interactions
between different kinds of particles, which are stored in a single
neighbor list. One example is a solvated colloidal system with large
colloidal particles where colloid/colloid, colloid/solvent, and
solvent/solvent interaction cutoffs can be dramatically different.
@ -144,7 +144,7 @@ supports:
- For small and sparse systems and as a fallback method, LAMMPS also
supports neighbor list construction without binning by using a full
:math:`O(N^2)` loop over all *i,j* atom pairs in a subdomain when
using the :doc:`neighbor nsq <neighbor>` command.
- Dependent on the "pair" setting of the :doc:`newton <newton>` command,
@ -153,7 +153,7 @@ supports:
For the newton pair *on* setting, atom *j* is only added to the list
if its *z* coordinate is larger; or, if that is equal, if the *y*
coordinate is larger; and, if that is also equal, if the *x*
coordinate is larger. For homogeneously dense systems, that will
result in picking neighbors from a same-size sector in always the same
direction relative to the "owned" atom, and thus it should lead to
similar-length neighbor lists and reduce the chance of a load
imbalance. A standalone sketch of this ordering test is shown below.
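.. code-block:: c++

   // sketch of the tie-break for the newton pair "on" setting: ghost
   // atom j enters owned atom i's half list only if j is "above" i,
   // compared first by z, then y, then x coordinate
   static bool include_pair(const double *xi, const double *xj)
   {
     if (xj[2] != xi[2]) return xj[2] > xi[2];
     if (xj[1] != xi[1]) return xj[1] > xi[1];
     return xj[0] > xi[0];
   }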

View File

@ -6,7 +6,7 @@ thread parallelism to predominantly distribute loops over local data
and thus follow an orthogonal parallelization strategy to the
decomposition into spatial domains used by the :doc:`MPI partitioning
<Developer_par_part>`. For clarity, this section discusses only the
implementation in the OPENMP package, as it is the simplest. The INTEL
and KOKKOS packages offer additional options and are more complex since
they support more features and different hardware like co-processors
or GPUs.
@ -14,7 +14,7 @@ or GPUs.
One of the key decisions when implementing the OPENMP package was to
keep the changes to the source code small, so that it would be easier to
maintain the code and keep it in sync with the non-threaded standard
implementation. This is achieved by a) making the OPENMP version a
derived class from the regular version (e.g. ``PairLJCutOMP`` from
``PairLJCut``) and overriding only methods that are multi-threaded or
need to be modified to support multi-threading (similar to what was done
@ -26,13 +26,13 @@ into three separate classes ``ThrOMP``, ``ThrData``, and ``FixOMP``.
available in the corresponding base class (e.g. ``Pair`` for
``PairLJCutOMP``) like multi-thread aware variants of the "tally"
functions. Those functions are made available through multiple
inheritance, so those new functions have to have unique names to avoid
ambiguities; typically ``_thr`` is appended to the name of the function.
``ThrData`` is a class that manages per-thread data structures. It is
used instead of extending the corresponding storage to per-thread arrays
to avoid slowdowns due to "false sharing" when multiple threads update
adjacent elements in an array and thus force the CPU cache lines to be
reset and re-fetched. ``FixOMP`` finally manages the "multi-thread
state" like settings and access to per-thread storage; it is activated
by the :doc:`package omp <package>` command.
@ -46,24 +46,24 @@ involve multiple atoms and thus there are race conditions when multiple
threads want to update per-atom data of the same atoms. Five possible
strategies have been considered to avoid this:
1. Restructure the code so that there is no overlapping access possible
when computing in parallel, e.g. by breaking lists into multiple
parts and synchronizing threads in between.
2. Have each thread be "responsible" for a specific group of atoms and
compute these interactions multiple times, once on each thread that
is responsible for a given atom, and then have each thread only update
the properties of this atom.
3. Use mutexes around functions and regions of code where the data race
could happen.
4. Use atomic operations when updating per-atom properties.
5. Use replicated per-thread data structures to accumulate data without
conflicts and then use a reduction to combine those results into the
data structures used by the regular style.
Option 5 was chosen for the OPENMP package because it would retain the
performance for the case of a single thread and the code would be more
maintainable. Option 1 would require extensive code changes,
particularly to the neighbor list code; option 2 would have incurred a
2x or more performance penalty for the serial case; option 3 causes
significant overhead and would enforce serialization of operations in
inner loops and thus defeat the purpose of multi-threading; option 4
@ -80,7 +80,7 @@ equivalent to the number of CPU cores per CPU socket on high-end
supercomputers.
Thus arrays like the force array are dimensioned to the number of atoms
times the number of threads when enabling OpenMP support, and inside the
compute functions a pointer to a different chunk is obtained by each thread.
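A standalone sketch of this replicate-and-reduce scheme (one force
component per atom for brevity; this is not actual OPENMP package
code):

.. code-block:: c++

   // sketch of option 5: each thread accumulates into its own chunk of
   // a replicated array; afterwards the chunks are summed back into the
   // master array, so no thread ever writes to shared per-atom data
   #include <omp.h>
   #include <vector>

   void accumulate_and_reduce(int natoms, std::vector<double> &f)
   {
     const int nthreads = omp_get_max_threads();
     std::vector<double> fthr((std::size_t) nthreads * natoms, 0.0);

   #pragma omp parallel
     {
       double *mine = &fthr[(std::size_t) omp_get_thread_num() * natoms];

   #pragma omp for
       for (int i = 0; i < natoms; ++i) {
         // stand-in for a force kernel; a real kernel also updates
         // mine[j] for neighbors j, which is safe in a private chunk
         mine[i] += 1.0;
       }

       // reduction: sum all per-thread chunks into the master array
   #pragma omp for
       for (int i = 0; i < natoms; ++i)
         for (int t = 0; t < nthreads; ++t)
           f[i] += fthr[(std::size_t) t * natoms + i];
     }
   }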
Similarly, accumulators like potential energy or virial are kept in
per-thread instances of the ``ThrData`` class and then only reduced and
@ -91,7 +91,7 @@ Loop scheduling
"""""""""""""""
Multi-thread parallelization is applied by distributing (outer) loops
statically across threads. Typically, this would be the loop over local
atoms *i* when processing *i,j* pairs of atoms from a neighbor list.
The design of the neighbor list code results in atoms having a similar
number of neighbors for homogeneous systems and thus load imbalances

View File

@ -7,39 +7,39 @@ distributed-memory parallelism is set with the :doc:`comm_style command
.. _domain-decomposition:
.. figure:: img/domain-decomp.png
:align: center
Domain decomposition schemes
This figure shows the different kinds of domain decomposition used
for MPI parallelization: a "brick" decomposition for an orthogonal
(left) and a triclinic (middle) simulation domain, and a "tiled"
decomposition (right). The black lines show the division into
subdomains, and the contained atoms are "owned" by the
corresponding MPI process. The green dashed lines indicate how
subdomains are extended with "ghost" atoms up to the communication
cutoff distance.
The LAMMPS simulation box is a 3d or 2d volume, which can be of
orthogonal or triclinic shape, as illustrated in the
:ref:`domain-decomposition` figure for the 2d case. Orthogonal means
the box edges are aligned with the *x*, *y*, *z* Cartesian axes, and the
box faces are thus all rectangular. Triclinic allows for a more general
parallelepiped shape in which edges are aligned with three arbitrary
vectors and the box faces are parallelograms. In each dimension, box
faces can be periodic, or non-periodic with fixed or shrink-wrapped
boundaries. In the fixed case, atoms which move outside the face are
deleted; shrink-wrapped means the position of the box face adjusts
continuously to enclose all the atoms.
For distributed-memory MPI parallelism, the simulation box is spatially
decomposed (partitioned) into non-overlapping subdomains which fill the
box. The default partitioning, "brick", is most suitable when atom
density is roughly uniform, as shown in the left-side images of the
:ref:`domain-decomposition` figure. The subdomains comprise a regular
grid, and all subdomains are identical in size and shape. Both the
orthogonal and triclinic boxes can deform continuously during a
simulation, e.g. to compress a solid or shear a liquid, in which case
the processor subdomains likewise deform.
For models with non-uniform density, the number of particles per
@ -50,14 +50,14 @@ load. For such models, LAMMPS supports multiple strategies to reduce
the load imbalance:
- The processor grid decomposition is by default based on the simulation
cell volume and tries to optimize the volume to surface ratio for the
subdomains.
This can be changed with the :doc:`processors command <processors>`.
- The parallel planes defining the size of the subdomains can be shifted
with the :doc:`balance command <balance>`, which can be done in
addition to choosing a more optimal processor grid.
- The recursive bisectioning algorithm in combination with the "tiled"
communication style can produce a partitioning with equal numbers of
particles in each subdomain.
.. |decomp1| image:: img/decomp-regular.png
@ -76,14 +76,14 @@ the load imbalance:
The pictures above demonstrate different decompositions for a 2d system
with 12 MPI ranks. The atom colors indicate the load imbalance of each
subdomain, with green being optimal and red the least optimal.
Due to the vacuum in the system, the default decomposition is
unbalanced, with several MPI ranks without atoms (left). By forcing a
1x12x1 processor grid, every MPI rank now does computations, but the
number of atoms per subdomain is still uneven, and the thin slice shape
increases the amount of communication between subdomains (center
left). With a 2x6x1 processor grid and shifting the subdomain
divisions, the load imbalance is further reduced and the amount of
communication required between subdomains is less (center right).
Using recursive bisectioning leads to a further improved decomposition
(right).

View File

@ -7,7 +7,7 @@ decomposition. The parallelization aims to be efficient, and resulting
in good strong scaling (= good speedup for the same system) and good
weak scaling (= the computational cost of enlarging the system is
proportional to the system size). Additional parallelization using GPUs
or OpenMP can also be applied within the subdomain assigned to an MPI
process. For clarity, most of the following illustrations show the 2d
simulation case. The underlying algorithms in those cases, however,
apply to both 2d and 3d cases equally well.

View File

@ -226,9 +226,9 @@ The following test programs are currently available:
* - ``test_kim_commands.cpp``
- KimCommands
- Tests for several commands from the :ref:`KIM package <PKG-KIM>`
* - ``test_reset_atoms.cpp``
- ResetAtoms
- Tests to validate the :doc:`reset_atoms <reset_atoms>` sub-commands
Tests for the C-style library interface

View File

@ -25,7 +25,6 @@ Available topics in mostly chronological order are:
- `Simplified and more compact neighbor list requests`_
- `Split of fix STORE into fix STORE/GLOBAL and fix STORE/PERATOM`_
- `Use Output::get_dump_by_id() instead of Output::find_dump()`_
- `Refactored grid communication using Grid3d/Grid2d classes instead of GridComm`_
----
@ -424,56 +423,3 @@ New:
if (dumpflag) for (auto idump : dumplist) idump->write();
This change is **required** or else the code will not compile.
Refactored grid communication using Grid3d/Grid2d classes instead of GridComm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. versionchanged:: 22Dec2022
The ``GridComm`` class for creating and communicating distributed
grids was replaced by the ``Grid3d`` class with added functionality.
A ``Grid2d`` class was also added for additional flexibility.
The new functionality and commands using the two grid classes are
discussed on the following documentation pages:
- :doc:`Howto_grid`
- :doc:`Developer_grid`
If you have custom LAMMPS code that uses the GridComm class, here are
some notes on how to adapt it for using the Grid3d class.
(1) The constructor has changed to allow the ``Grid3d`` / ``Grid2d``
classes to partition the global grid across processors, both for
owned and ghost grid cells. Previously any class which called
``GridComm`` performed the partitioning itself and that information
was passed in the ``GridComm::GridComm()`` constructor. There are
several "set" functions which can be called to alter how ``Grid3d``
/ ``Grid2d`` perform the partitioning. They should be sufficient
for most use cases of the grid classes.
(2) The partitioning is triggered by the ``setup_grid()`` method.
(3) The ``setup()`` method of the ``GridComm`` class has been replaced
by the ``setup_comm()`` method in the new grid classes. The syntax
for the ``forward_comm()`` and ``reverse_comm()`` methods is
slightly altered as is the syntax of the associated pack/unpack
callback methods. But the functionality of these operations is the
same as before.
(4) The new ``Grid3d`` / ``Grid2d`` classes have additional
functionality for dynamic load-balancing of grids and their
associated data across processors. This did not exist in the
``GridComm`` class.
This and more is explained in detail on the :doc:`Developer_grid` page.
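The resulting call sequence then looks roughly like the sketch below;
the argument lists are abbreviated (the exact signatures are documented
on the :doc:`Developer_grid` page) and the grid dimensions are
placeholders:

.. code-block:: c++

   // rough sketch of the new usage pattern with Grid3d
   grid = new Grid3d(lmp, world, nxgrid, nygrid, nzgrid);
   // optionally call "set" functions to alter the default partitioning
   grid->setup_grid(/* ... owned and ghost grid-cell bounds ... */);
   grid->setup_comm(/* ... sizes of the two communication buffers ... */);

   // whenever grid data must be exchanged, e.g. every timestep; these
   // calls invoke the caller's pack/unpack callback methods:
   grid->forward_comm(/* ... */);   // owned -> ghost cell values
   grid->reverse_comm(/* ... */);   // ghost -> owned accumulation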
The following LAMMPS source files can be used as illustrative examples
for how the new grid classes are used by computes, fixes, and various
KSpace solvers which use distributed FFT grids:
- ``src/fix_ave_grid.cpp``
- ``src/compute_property_grid.cpp``
- ``src/EXTRA-FIX/fix_ttm_grid.cpp``
- ``src/KSPACE/pppm.cpp``
This change is **required** or else the code will not compile.

View File

@ -133,9 +133,6 @@ and parsing files or arguments.
.. doxygenfunction:: trim_comment
:project: progguide
.. doxygenfunction:: strip_style_suffix
:project: progguide
.. doxygenfunction:: star_subst
:project: progguide
@ -157,9 +154,6 @@ and parsing files or arguments.
.. doxygenfunction:: trim_and_count_words
:project: progguide
.. doxygenfunction:: join_words
:project: progguide
.. doxygenfunction:: split_words
:project: progguide
@ -178,12 +172,6 @@ and parsing files or arguments.
.. doxygenfunction:: is_double
:project: progguide
.. doxygenfunction:: is_id
:project: progguide
.. doxygenfunction:: is_type
:project: progguide
Potential file functions
^^^^^^^^^^^^^^^^^^^^^^^^
@ -214,12 +202,6 @@ Argument processing
.. doxygenfunction:: expand_args
:project: progguide
.. doxygenfunction:: parse_grid_id
:project: progguide
.. doxygenfunction:: expand_type
:project: progguide
Convenience functions
^^^^^^^^^^^^^^^^^^^^^
@ -647,7 +629,7 @@ Communication buffer coding with *ubuf*
---------------------------------------
LAMMPS uses communication buffers where it collects data from various
class instances and then exchanges the data with neighboring subdomains.
For simplicity those buffers are defined as ``double`` buffers and
used for doubles and integer numbers. This presents a unique problem
when 64-bit integers are used. While the storage needed for a ``double``

View File

@ -113,7 +113,7 @@ LAMMPS output, something is wrong with your simulation. If you
suspect this is happening, it is a good idea to print out
thermodynamic info frequently (e.g. every timestep) via the
:doc:`thermo <thermo>` so you can monitor what is happening.
Visualizing the atom movement is also a good idea to ensure your model
is behaving as you expect.
In parallel, one way LAMMPS can hang is due to how different MPI

View File

@ -1232,7 +1232,7 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
*Cannot use chosen neighbor list style with lj/gromacs/kk*
Self-explanatory.
*Cannot use chosen neighbor list style with lj/spica/kk*
That style is not supported by Kokkos.
*Cannot use chosen neighbor list style with pair eam/kk*
@ -1600,10 +1600,10 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
*Cannot use newton pair with lj/gromacs/gpu pair style*
Self-explanatory.
*Cannot use newton pair with lj/spica/coul/long/gpu pair style*
Self-explanatory.
*Cannot use newton pair with lj/spica/gpu pair style*
Self-explanatory.
*Cannot use newton pair with lj96/cut/gpu pair style*
@ -5453,11 +5453,6 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
Mass command must set a type from 1-N where N is the number of atom
types.
*Invalid label2type() function syntax in variable formula*
The first argument must be a label map kind (atom, bond, angle,
dihedral, or improper) and the second argument must be a valid type
label that has been assigned to a numeric type.
*Invalid use of library file() function*
This function is called through the library interface. This
error should not occur. Contact the developers if it does.
@ -5590,18 +5585,9 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
*LJ6 off not supported in pair_style buck/long/coul/long*
Self-explanatory.
*Label map is incomplete: all types must be assigned a unique type label*
For a given type-kind (atom types, bond types, etc.) to be written to
the data file, all associated types must be assigned a type label, and
each type label can be assigned to only one numeric type.
*Label wasn't found in input script*
Self-explanatory.
*Labelmap command before simulation box is defined*
The labelmap command cannot be used before a read_data,
read_restart, or create_box command.
*Lattice orient vectors are not orthogonal*
The three specified lattice orientation vectors must be mutually
orthogonal.
@ -5635,7 +5621,7 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
Lost atoms are checked for each time thermo output is done. See the
thermo_modify lost command for options. Lost atoms usually indicate
bad dynamics, e.g. atoms have been blown far out of the simulation
box, or moved further than one processor's subdomain away before
reneighboring.
*MEAM library error %d*
@ -5877,12 +5863,6 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
*Must not have multiple fixes change box parameter ...*
Self-explanatory.
*Must read Angle Type Labels before Angles*
An Angle Type Labels section of a data file must come before the Angles section.
*Must read Atom Type Labels before Atoms*
An Atom Type Labels section of a data file must come before the Atoms section.
*Must read Atoms before Angles*
The Atoms section of a data file must come before an Angles section.
@ -5913,15 +5893,6 @@ Doc page with :doc:`WARNING messages <Errors_warnings>`
The Atoms section of a data file must come before a Velocities
section.
*Must read Bond Type Labels before Bonds*
A Bond Type Labels section of a data file must come before the Bonds section.
*Must read Dihedral Type Labels before Dihedrals*
A Dihedral Type Labels section of a data file must come before the Dihedrals section.
*Must read Improper Type Labels before Impropers*
An Improper Type Labels section of a data file must come before the Impropers section.
*Must re-specify non-restarted pair style (xxx) after read_restart*
For pair styles that do not store their settings in a restart file,
the pair style must be defined with a new 'pair_style' command after
read_restart.
@ -6092,7 +6063,7 @@ keyword to allow for additional bonds to be formed
after a read_data, read_restart, or create_box command.
*Next command must list all universe and uloop variables*
This is to ensure they stay in sync.
*No Kspace style defined for compute group/group*
Self-explanatory.
@ -6266,14 +6237,14 @@ keyword to allow for additional bonds to be formed
One or more atoms are attempting to map their charge to a MSM grid point
that is not owned by a processor. This is likely for one of two
reasons, both of them bad. First, it may mean that an atom near the
boundary of a processor's subdomain has moved more than 1/2 the
:doc:`neighbor skin distance <neighbor>` without neighbor lists being
rebuilt and atoms being migrated to new processors. This also means
you may be missing pairwise interactions that need to be computed.
The solution is to change the re-neighboring criteria via the
:doc:`neigh_modify <neigh_modify>` command. The safest settings are
"delay 0 every 1 check yes". Second, it may mean that an atom has
moved far outside a processor's subdomain or even the entire
simulation box. This indicates bad physics, e.g. due to highly
overlapping atoms, too large a timestep, etc.
@ -6281,14 +6252,14 @@ keyword to allow for additional bonds to be formed
One or more atoms are attempting to map their charge to a PPPM grid
point that is not owned by a processor. This is likely for one of two
reasons, both of them bad. First, it may mean that an atom near the
boundary of a processor's subdomain has moved more than 1/2 the
:doc:`neighbor skin distance <neighbor>` without neighbor lists being
rebuilt and atoms being migrated to new processors. This also means
you may be missing pairwise interactions that need to be computed.
The solution is to change the re-neighboring criteria via the
:doc:`neigh_modify <neigh_modify>` command. The safest settings are
"delay 0 every 1 check yes". Second, it may mean that an atom has
moved far outside a processor's subdomain or even the entire
simulation box. This indicates bad physics, e.g. due to highly
overlapping atoms, too large a timestep, etc.
@ -6296,14 +6267,14 @@ keyword to allow for additional bonds to be formed
One or more atoms are attempting to map their charge to a PPPM grid
point that is not owned by a processor. This is likely for one of two
reasons, both of them bad. First, it may mean that an atom near the
boundary of a processor's subdomain has moved more than 1/2 the
:doc:`neighbor skin distance <neighbor>` without neighbor lists being
rebuilt and atoms being migrated to new processors. This also means
you may be missing pairwise interactions that need to be computed.
The solution is to change the re-neighboring criteria via the
:doc:`neigh_modify <neigh_modify>` command. The safest settings are
"delay 0 every 1 check yes". Second, it may mean that an atom has
moved far outside a processor's subdomain or even the entire
simulation box. This indicates bad physics, e.g. due to highly
overlapping atoms, too large a timestep, etc.
@ -6811,7 +6782,7 @@ keyword to allow for additional bonds to be formed
This is because the computation of constraint forces within a water
molecule adds forces to atoms owned by other processors.
*Pair style lj/spica/coul/long/gpu requires atom attribute q*
The atom style defined does not have this attribute.
*Pair style nb3b/harmonic requires atom IDs*
@ -7231,7 +7202,7 @@ keyword to allow for additional bonds to be formed
*Replacing a fix, but new style != old style*
A fix ID can be used a second time, but only if the style matches the
previous fix. In this case it is assumed you want to reset a fix's
parameters. This error may mean you are mistakenly re-using a fix ID
when you do not intend to.
@ -7337,7 +7308,7 @@ keyword to allow for additional bonds to be formed
*Rigid body atoms %d %d missing on proc %d at step %ld*
This means that an atom cannot find the atom that owns the rigid body
it is part of, or vice versa. The solution is to use the communicate
cutoff command to ensure ghost atoms are acquired from far enough away
to encompass the max distance printed when the fix rigid/small command
was invoked.
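As a sketch, assuming the fix rigid/small printout reported a maximum
distance of 12.5 distance units (a hypothetical value), the ghost-atom
cutoff could be extended with what is now the comm_modify command:

.. code-block:: LAMMPS

   # acquire ghost atoms out to 12.5 distance units (hypothetical value,
   # taken from the max distance printed by fix rigid/small)
   comm_modify cutoff 12.5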
@ -7878,10 +7849,6 @@ keyword to allow for additional bonds to be formed
Number of local atoms times number of columns must fit in a 32-bit
integer for dump.
*Topology type exceeds system topology type*
The number of bond, angle, etc. types exceeds the system setting. See
the create_box or read_data command for how to specify these values.
*Tree structure in joint connections*
Fix poems cannot (yet) work with coupled bodies whose joints connect
the bodies in a tree structure.
@ -7906,13 +7873,6 @@ keyword to allow for additional bonds to be formed
*Two groups cannot be the same in fix spring couple*
Self-explanatory.
*The %s type label %s is already in use for type %s*
For a given type-kind (atom types, bond types, etc.), a given type
label can be assigned to only one numeric type.
*Type label string %s for %s type %s is invalid*
See the labelmap command documentation for valid type labels.
*Unable to initialize accelerator for use*
There was a problem initializing an accelerator for the gpu package


@ -109,9 +109,9 @@ Doc page with :doc:`ERROR messages <Errors_messages>`
*Communication cutoff is shorter than a bond length based estimate. This may lead to errors.*
Since LAMMPS stores topology data with individual atoms, all atoms
comprising a bond, angle, dihedral or improper must be present on any
subdomain that "owns" the atom with the information, either as a
local or a ghost atom. The communication cutoff is what determines up
to what distance from a subdomain boundary ghost atoms are created.
The communication cutoff is by default the largest non-bonded cutoff
plus the neighbor skin distance, but for short or non-bonded cutoffs
and/or long bonds, this may not be sufficient. This warning indicates
@ -351,7 +351,7 @@ This will most likely cause errors in kinetic fluctuations.
Self-explanatory.
*Kspace_modify slab param < 2.0 may cause unphysical behavior*
The kspace_modify slab parameter should be larger to ensure periodic
grids padded with empty space do not overlap.
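A minimal sketch of the corresponding setting, using the commonly
recommended factor of 3.0:

.. code-block:: LAMMPS

   # insert empty volume above/below the slab; 3.0 satisfies the
   # >= 2.0 check and is the usual recommendation
   kspace_modify slab 3.0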
*Less insertions than requested*
@ -398,7 +398,7 @@ This will most likely cause errors in kinetic fluctuations.
Lost atoms are checked for each time thermo output is done. See the
thermo_modify lost command for options. Lost atoms usually indicate
bad dynamics, e.g. atoms have been blown far out of the simulation
box, or moved further than one processor's subdomain away before
reneighboring.
*MSM mesh too small, increasing to 2 points in each direction*
@ -470,12 +470,6 @@ This will most likely cause errors in kinetic fluctuations.
*More than one compute sna/atom*
Self-explanatory.
*More than one compute sna/grid*
Self-explanatory.
*More than one compute sna/grid/local*
Self-explanatory.
*More than one compute snad/atom*
Self-explanatory.
@ -491,7 +485,7 @@ This will most likely cause errors in kinetic fluctuations.
*Neighbor exclusions used with KSpace solver may give inconsistent Coulombic energies*
This is because excluding specific pair interactions also excludes
them from long-range interactions which may not be the desired effect.
The special_bonds command handles this consistently by ensuring
excluded (or weighted) 1-2, 1-3, 1-4 interactions are treated
consistently by both the short-range pair style and the long-range
solver. This is not done for exclusions of charged atom pairs via the
@ -545,7 +539,7 @@ This will most likely cause errors in kinetic fluctuations.
If there are other fixes that act immediately after the initial stage
of time integration within a timestep (i.e. after atoms move), then
the command that sets up the dynamic group should appear after those
fixes. This will ensure that dynamic group assignments are made
after all atoms have moved.
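A minimal sketch of this ordering (the group name *hot* and region ID
*upper* are illustrative):

.. code-block:: LAMMPS

   fix 1 all nve                               # time integration moves atoms
   group hot dynamic all region upper every 1  # membership updated afterwards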
*One or more respa levels compute no forces*
@ -582,13 +576,13 @@ This will most likely cause errors in kinetic fluctuations.
needed. The requested volume fraction may be too high, or other atoms
may be in the insertion region.
*Proc subdomain size < neighbor skin, could lead to lost atoms*
The decomposition of the physical domain (likely due to load
balancing) has led to a processor's subdomain being smaller than the
neighbor skin in one or more dimensions. Since reneighboring is
triggered by atoms moving the skin distance, this may lead to lost
atoms, if an atom moves all the way across a neighboring processor's
subdomain before reneighboring is triggered.
*Reducing PPPM order b/c stencil extends beyond nearest neighbor processor*
This may lead to a larger grid than desired. See the kspace_modify overlap


@ -1,7 +1,7 @@
Example scripts
===============
The LAMMPS distribution includes an examples subdirectory with many
sample problems. Many are 2d models that run quickly and are
straightforward to visualize, requiring at most a couple of minutes to
run on a desktop machine. Each problem has an input script (in.\*) and
@ -29,7 +29,7 @@ be quickly post-processed into a movie using commands described on the
Animations of many of the examples can be viewed on the Movies section
of the `LAMMPS website <https://www.lammps.org/movies.html>`_.
There are two kinds of subdirectories in the examples folder. Lower
case named directories contain one or a few simple, quick-to-run
problems. Upper case named directories contain up to several complex
scripts that illustrate a particular kind of simulation method or
@ -221,10 +221,10 @@ Uppercase directories
Nearly all of these directories have README files which give more
details on how to understand and use their contents.
The PACKAGES directory has a large number of subdirectories which
correspond by name to specific packages. They contain scripts that
illustrate how to use the command(s) provided in those packages. Many
of the subdirectories have their own README files which give further
instructions. See the :doc:`Packages_details <Packages_details>` doc
page for more info on specific packages.

File diff suppressed because it is too large


@ -4,10 +4,10 @@ Howto discussions
These doc pages describe how to perform various tasks with LAMMPS,
both for users and developers. The
`glossary <https://www.lammps.org/glossary.html>`_ website page also lists MD
terminology, with links to corresponding LAMMPS manual pages. The
example input scripts included in the ``examples`` directory of the LAMMPS
source code distribution and highlighted on the :doc:`Examples` page
also show how to set up and run various kinds of simulations.
General howto
=============
@ -34,7 +34,6 @@ Settings howto
:maxdepth: 1
Howto_2d
Howto_type_labels
Howto_triclinic
Howto_thermostat
Howto_barostat
@ -51,7 +50,6 @@ Analysis howto
Howto_output
Howto_chunk
Howto_grid
Howto_temperature
Howto_elastic
Howto_kappa
@ -67,7 +65,6 @@ Force fields howto
:maxdepth: 1
Howto_bioFF
Howto_amoeba
Howto_tip3p
Howto_tip4p
Howto_spc


@ -22,7 +22,7 @@ atom in the file, assign a z coordinate so it falls inside the
z-boundaries of the box - e.g. 0.0.
Use the :doc:`fix enforce2d <fix_enforce2d>` command as the last
defined fix to ensure that the z-components of velocities and forces
are zeroed out every timestep. The reason to make it the last fix is
so that any forces induced by other fixes will be zeroed out.
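A minimal sketch of such an input (fix IDs and the thermostat
parameters are illustrative):

.. code-block:: LAMMPS

   fix 1 all nve
   fix 2 all langevin 1.0 1.0 1.0 699483   # force-producing fixes first
   fix 3 all enforce2d                     # last, so z-forces are zeroed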


@ -1,324 +0,0 @@
AMOEBA and HIPPO force fields
=============================
The AMOEBA and HIPPO polarizable force fields were developed by Jay
Ponder's group at Washington University in St. Louis. The LAMMPS
implementation is based on Fortran 90 code provided by the Ponder
group in their `Tinker MD software <https://dasher.wustl.edu/tinker/>`_.
The current implementation (July 2022) of AMOEBA in LAMMPS matches the
version discussed in :ref:`(Ponder) <amoeba-Ponder>`, :ref:`(Ren)
<amoeba-Ren>`, and :ref:`(Shi) <amoeba-Shi>`. Likewise the current
implementation of HIPPO in LAMMPS matches the version discussed in
:ref:`(Rackers) <amoeba-Rackers>`.
These force fields can be used when polarization effects are desired
in simulations of water, organic molecules, and biomolecules including
proteins, provided that parameterizations (Tinker PRM force field
files) are available for the systems you are interested in. Files in
the LAMMPS potentials directory with a "amoeba" or "hippo" suffix can
be used. The Tinker distribution and website have additional force
field files as well:
`https://github.com/TinkerTools/tinker/tree/release/params
<https://github.com/TinkerTools/tinker/tree/release/params>`_.
Note that currently, HIPPO can only be used for water systems, but
HIPPO files for a variety of small organic and biomolecules are in
preparation by the Ponder group. Those force field files will be
included in the LAMMPS distribution when available.
To use the AMOEBA or HIPPO force fields, a simulation must be 3d, and
fully periodic or fully non-periodic, and use an orthogonal (not
triclinic) simulation box.
----------
The AMOEBA and HIPPO force fields contain the following terms in their
energy (U) computation. Further details for AMOEBA equations are in
:ref:`(Ponder) <amoeba-Ponder>`, further details for the HIPPO
equations are in :ref:`(Rackers) <amoeba-Rackers>`.
.. math::
U & = U_{intermolecular} + U_{intramolecular} \\
U_{intermolecular} & = U_{hal} + U_{repulsion} + U_{dispersion} + U_{multipole} + U_{polar} + U_{qxfer} \\
U_{intramolecular} & = U_{bond} + U_{angle} + U_{torsion} + U_{oop} + U_{b\theta} + U_{UB} + U_{pitorsion} + U_{bitorsion}
For intermolecular terms, the AMOEBA force field includes only the
:math:`U_{hal}`, :math:`U_{multipole}`, :math:`U_{polar}` terms. The
HIPPO force field includes all but the :math:`U_{hal}` term. In
LAMMPS, these are all computed by the :doc:`pair_style amoeba or hippo
<pair_style>` command. Note that the :math:`U_{multipole}` and
:math:`U_{polar}` terms in this formula are not the same for the
AMOEBA and HIPPO force fields.
For intramolecular terms, the :math:`U_{bond}`, :math:`U_{angle}`,
:math:`U_{torsion}`, :math:`U_{oop}` terms are computed by the
:doc:`bond_style class2 <bond_class2>` :doc:`angle_style amoeba
<angle_amoeba>`, :doc:`dihedral_style fourier <dihedral_fourier>`, and
:doc:`improper_style amoeba <improper_amoeba>` commands respectively.
The :doc:`angle_style amoeba <angle_amoeba>` command includes the
:math:`U_{b\theta}` bond-angle cross term, and the :math:`U_{UB}` term
for a Urey-Bradley bond contribution between the I,K atoms in the IJK
angle.
The :math:`U_{pitorsion}` term is computed by the :doc:`fix
amoeba/pitorsion <fix_amoeba_pitorsion>` command. It computes a 6-body
interaction between a pair of bonded atoms, each of which has two
additional bond partners.
The :math:`U_{bitorsion}` term is computed by the :doc:`fix
amoeba/bitorsion <fix_amoeba_bitorsion>` command. It computes a 5-body
interaction between two 4-body torsions (dihedrals) which overlap,
having 3 atoms in common.
These command doc pages have additional details on the terms they
compute:
* :doc:`pair_style amoeba or hippo <pair_amoeba>`
* :doc:`bond_style class2 <bond_class2>`
* :doc:`angle_style amoeba <angle_amoeba>`
* :doc:`dihedral_style fourier <dihedral_fourier>`
* :doc:`improper_style amoeba <improper_amoeba>`
* :doc:`fix amoeba/pitorsion <fix_amoeba_pitorsion>`
* :doc:`fix amoeba/bitorsion <fix_amoeba_bitorsion>`
----------
To use the AMOEBA or HIPPO force fields in LAMMPS, use commands like
the following appropriately in your input script. The only change
needed for AMOEBA vs HIPPO simulation is for the :doc:`pair_style
<pair_style>` and :doc:`pair_coeff <pair_coeff>` commands, as shown
below. See examples/amoeba for example input scripts for both AMOEBA
and HIPPO.
.. code-block:: LAMMPS
units real # required
atom_style amoeba
bond_style class2 # CLASS2 package
angle_style amoeba
dihedral_style fourier # EXTRA-MOLECULE package
improper_style amoeba
# required per-atom data
fix amtype all property/atom i_amtype ghost yes
fix extra all property/atom &
i_amgroup i_ired i_xaxis i_yaxis i_zaxis d_pval ghost yes
fix polaxe all property/atom i_polaxe
fix pit all amoeba/pitorsion # PiTorsion terms in FF
fix_modify pit energy yes
# Bitorsion terms in FF
fix bit all amoeba/bitorsion bitorsion.ubiquitin.data
fix_modify bit energy yes
read_data data.ubiquitin fix amtype NULL "Tinker Types" &
fix pit "pitorsion types" "PiTorsion Coeffs" &
fix pit pitorsions PiTorsions &
fix bit bitorsions BiTorsions
pair_style amoeba # AMOEBA FF
pair_coeff * * amoeba_ubiquitin.prm amoeba_ubiquitin.key
pair_style hippo # HIPPO FF
pair_coeff * * hippo_water.prm hippo_water.key
special_bonds lj/coul 0.5 0.5 0.5 one/five yes # 1-5 neighbors
The data file read by the :doc:`read_data <read_data>` command should
be created by the tools/tinker/tinker2lmp.py conversion program
described below. It will create a section in the data file with the
header "Tinker Types". A :doc:`fix property/atom <fix_property_atom>`
command for the data must be specified before the read_data command.
In the example above the fix ID is *amtype*.
Similarly, if the system you are simulating defines AMOEBA/HIPPO
pitorsion or bitorsion interactions, there will be entries in the data
file for those interactions. They require a :doc:`fix
amoeba/pitorsion <fix_amoeba_pitorsion>` and :doc:`fix
amoeba/bitorsion <fix_amoeba_bitorsion>` command to be defined. In the
example above, the IDs for these two fixes are *pit* and *bit*.
Of course, if the system being modeled does not have one or more of
the following -- bond, angle, dihedral, improper, pitorsion,
bitorsion interactions -- then the corresponding style and fix
commands above do not need to be used. See the example scripts in
examples/amoeba for water systems as examples; they are simpler than
what is listed above.
The two :doc:`fix property/atom <fix_property_atom>` commands with IDs
(in the example above) *extra* and *polaxe* are also needed to define
internal per-atom quantities used by the AMOEBA and HIPPO force
fields.
The :doc:`pair_coeff <pair_coeff>` command used for either the AMOEBA
or HIPPO force field takes two arguments for Tinker force field files,
namely a PRM and KEY file. The keyfile can be specified as NULL, in
which case default values for various settings are used. Note that these two
files are meant to allow use of native Tinker files as-is. However
LAMMPS does not support all the options which can be included
in a Tinker PRM or KEY file. See specifics below.
A :doc:`special_bonds <special_bonds>` command with the *one/five*
option is required, since the AMOEBA/HIPPO force fields define
weighting factors for not only 1-2, 1-3, 1-4 interactions, but also
1-5 interactions. This command will trigger a per-atom list of 1-5
neighbors to be generated. The AMOEBA and HIPPO force fields define
their own custom weighting factors for all the 1-2, 1-3, 1-4, 1-5
terms, which are in the Tinker PRM and KEY files; they can be different for
different terms in the force field.
In addition to the list above, these command doc pages have additional
details:
* :doc:`atom_style amoeba <atom_style>`
* :doc:`fix property/atom <fix_property_atom>`
* :doc:`special_bonds <special_bonds>`
----------
Tinker PRM and KEY files
A Tinker PRM file is composed of sections, each of which has multiple
lines. This is the list of PRM sections LAMMPS knows how to parse and
use. Any other sections are skipped:
* Angle Bending Parameters
* Atom Type Definitions
* Atomic Multipole Parameters
* Bond Stretching Parameters
* Charge Penetration Parameters
* Charge Transfer Parameters
* Dipole Polarizability Parameters
* Dispersion Parameters
* Force Field Definition
* Literature References
* Out-of-Plane Bend Parameters
* Pauli Repulsion Parameters
* Pi-Torsion Parameters
* Stretch-Bend Parameters
* Torsion-Torsion Parameters
* Torsional Parameters
* Urey-Bradley Parameters
* Van der Waals Pair Parameters
* Van der Waals Parameters
A Tinker KEY file is composed of lines, each of which has a keyword
followed by zero or more parameters. This is the list of keywords
LAMMPS knows how to parse and use in the same manner Tinker does. Any
other keywords are skipped. The value in parentheses is the default
value for the keyword if it is not specified, or if the keyfile in the
:doc:`pair_coeff <pair_coeff>` command is specified as NULL:
* a-axis (0.0)
* b-axis (0.0)
* c-axis (0.0)
* ctrn-cutoff (6.0)
* ctrn-taper (0.9 * ctrn-cutoff)
* cutoff
* delta-halgren (0.07)
* dewald (no long-range dispersion unless specified)
* dewald-alpha (0.4)
* dewald-cutoff (7.0)
* dispersion-cutoff (9.0)
* dispersion-taper (9.0 * dispersion-cutoff)
* dpme-grid
* dpme-order (4)
* ewald (no long-range electrostatics unless specified)
* ewald-alpha (0.4)
* ewald-cutoff (7.0)
* gamma-halgren (0.12)
* mpole-cutoff (9.0)
* mpole-taper (0.65 * mpole-cutoff)
* pcg-guess (enabled by default)
* pcg-noguess (disable pcg-guess if specified)
* pcg-noprecond (disable pcg-precond if specified)
* pcg-peek (1.0)
* pcg-precond (enabled by default)
* pewald-alpha (0.4)
* pme-grid
* pme-order (5)
* polar-eps (1.0e-6)
* polar-iter (100)
* polar-predict (no prediction operation unless specified)
* ppme-order (5)
* repulsion-cutoff (6.0)
* repulsion-taper (0.9 * repulsion-cutoff)
* taper
* usolve-cutoff (4.5)
* usolve-diag (2.0)
* vdw-cutoff (9.0)
* vdw-taper (0.9 * vdw-cutoff)
----------
Tinker2lmp.py tool
This conversion tool is found in the tools/tinker directory.
As shown in examples/amoeba/README, these commands produce
the data files found in examples/amoeba, and also illustrate
all the options available to use with the tinker2lmp.py script:
.. code-block:: bash
python tinker2lmp.py -xyz water_dimer.xyz -amoeba amoeba_water.prm -data data.water_dimer.amoeba # AMOEBA non-periodic system
python tinker2lmp.py -xyz water_dimer.xyz -hippo hippo_water.prm -data data.water_dimer.hippo # HIPPO non-periodic system
python tinker2lmp.py -xyz water_box.xyz -amoeba amoeba_water.prm -data data.water_box.amoeba -pbc 18.643 18.643 18.643 # AMOEBA periodic system
python tinker2lmp.py -xyz water_box.xyz -hippo hippo_water.prm -data data.water_box.hippo -pbc 18.643 18.643 18.643 # HIPPO periodic system
python tinker2lmp.py -xyz ubiquitin.xyz -amoeba amoeba_ubiquitin.prm -data data.ubiquitin.new -pbc 54.99 41.91 41.91 -bitorsion bitorsion.ubiquitin.data.new # system with bitorsions
Switches and their arguments may be specified in any order.
The -xyz switch is required and specifies an input XYZ file as an
argument. The format of this file is an extended XYZ format defined
and used by Tinker for its input. Example \*.xyz files are in the
examples/amoeba directory. The file lists the atoms in the system.
Each atom has the following information: Tinker species name (ignored
by LAMMPS), xyz coordinates, Tinker numeric type, and a list of atom
IDs the atom is bonded to.
Here is more information about the extended XYZ format defined and
used by Tinker, and links to programs that convert standard PDB files
to the extended XYZ format:
* `https://openbabel.org/docs/current/FileFormats/Tinker_XYZ_format.html <https://openbabel.org/docs/current/FileFormats/Tinker_XYZ_format.html>`_
* `https://github.com/emleddin/pdbxyz-xyzpdb <https://github.com/emleddin/pdbxyz-xyzpdb>`_
* `https://github.com/TinkerTools/tinker/blob/release/source/pdbxyz.f <https://github.com/TinkerTools/tinker/blob/release/source/pdbxyz.f>`_
The -amoeba or -hippo switch is required. It specifies an input
AMOEBA or HIPPO PRM force field file as an argument. This should be
the same file used by the :doc:`pair_style <pair_style>` command in
the input script.
The -data switch is required. It specifies an output file name for
the LAMMPS data file that will be produced.
For periodic systems, the -pbc switch is required. It specifies the
periodic box size for each dimension (x,y,z). For a Tinker simulation
these are specified in the KEY file.
The -bitorsion switch is only needed if the system contains Tinker
bitorsion interactions. The data for each type of bitorsion
interaction will be written to the specified file, and read by the
:doc:`fix amoeba/bitorsion <fix_amoeba_bitorsion>` command. The data
includes 2d arrays of values to which splines are fit, and thus is not
compatible with the LAMMPS data file format.
----------
.. _amoeba-Ponder:

**(Ponder)** Ponder, Wu, Ren, Pande, Chodera, Schnieders, Haque, Mobley, Lambrecht, DiStasio Jr, M. Head-Gordon, Clark, Johnson, T. Head-Gordon, J Phys Chem B, 114, 2549-2564 (2010).

.. _amoeba-Rackers:

**(Rackers)** Rackers, Silva, Wang, Ponder, J Chem Theory Comput, 17, 7056-7084 (2021).

.. _amoeba-Ren:

**(Ren)** Ren and Ponder, J Phys Chem B, 107, 5933 (2003).

.. _amoeba-Shi:

**(Shi)** Shi, Xia, Zhang, Best, Wu, Ponder, Ren, J Chem Theory Comput, 9, 4046 (2013).


@ -3,16 +3,16 @@ Bonded particle models
The BPM package implements bonded particle models which can be used to
simulate mesoscale solids. Solids are constructed as a collection of
particles, which each represent a coarse-grained region of space much
larger than the atomistic scale. Particles within a solid region are
then connected by a network of bonds to provide solid elasticity.
Unlike traditional bonds in molecular dynamics, the equilibrium bond
length can vary between bonds. Bonds store the reference state. This
includes setting the equilibrium length equal to the initial distance
between the two particles, but can also include data on the bond
orientation for rotational models. This produces a stress-free initial
state. Furthermore, bonds are allowed to break under large strains,
producing fracture. The examples/bpm directory has sample input scripts
for simulations of the fragmentation of an impacted plate and the
pouring of extended, elastic bodies.
@ -22,8 +22,8 @@ pouring of extended, elastic bodies.
Bonds can be created using a :doc:`read data <read_data>` or
:doc:`create bonds <create_bonds>` command. Alternatively, a
:doc:`molecule <molecule>` template with bonds can be used with
:doc:`fix deposit <fix_deposit>` or :doc:`fix pour <fix_pour>` to create
solid grains.
In this implementation, bonds store their reference state when they are
first computed in the setup of the first simulation run. Data is then
@ -33,97 +33,77 @@ reference state of a bond. Bonds that are created midway into a run,
such as those created by pouring grains using :doc:`fix pour
<fix_pour>`, are initialized on that timestep.
----------
Currently, there are two types of bonds included in the BPM package. The
first bond style, :doc:`bond bpm/spring <bond_bpm_spring>`, only applies
pairwise, central body forces. Point particles must have :doc:`bond atom
style <atom_style>` and may be thought of as nodes in a spring
network. Alternatively, the second bond style, :doc:`bond bpm/rotational
<bond_bpm_rotational>`, resolves tangential forces and torques arising
with the shearing, bending, and twisting of the bond due to rotation or
displacement of particles. Particles are similar to those used in the
:doc:`granular package <Howto_granular>`, :doc:`atom style sphere
<atom_style>`. However, they must also track the current orientation of
particles and store bonds, and therefore use a :doc:`bpm/sphere atom
style <atom_style>`. This also requires a unique integrator :doc:`fix
nve/bpm/sphere <fix_nve_bpm_sphere>` which numerically integrates
orientation similar to :doc:`fix nve/asphere <fix_nve_asphere>`.

In addition to bond styles, a new pair style :doc:`pair bpm/spring
<pair_bpm_spring>` was added to accompany the bpm/spring bond
style. This pair style is simply a hookean repulsion with similar
velocity damping as its sister bond style.

----------

Bond data can be output using a combination of standard LAMMPS commands.
A list of IDs for bonded atoms can be generated using the
:doc:`compute property/local <compute_property_local>` command.
Various properties of bonds can be computed using the
:doc:`compute bond/local <compute_bond_local>` command. This
command allows one to access data saved to the bond's history,
such as the reference length of the bond. More information on
bond history data can be found on the documentation pages for the specific
BPM bond styles. Finally, this data can be output using a :doc:`dump local <dump>`
command. As one may output many columns from the same compute, the
:doc:`dump modify <dump_modify>` *colname* option may be used to provide
more helpful column names. An example of this procedure is found in
``examples/bpm/pour``. External software, such as OVITO, can read these
dump files to render bond data.

----------

As bonds can be broken between neighbor list builds, the
:doc:`special_bonds <special_bonds>` command works differently for BPM
bond styles. There are two possible settings which determine how pair
interactions work between bonded particles. First, one can overlay
pair forces with bond forces such that all bonded particles also
feel pair interactions. This can be accomplished by using the *overlay/pair*
keyword present in all bpm bond styles and by using the following special
bond settings

.. code-block:: LAMMPS

   special_bonds lj/coul 1 1 1

Alternatively, one can turn off all pair interactions between bonded
particles. Unlike :doc:`bond quartic <bond_quartic>`, this is not done
by subtracting pair forces during the bond computation, but rather by
dynamically updating the special bond list. This is the default behavior
of BPM bond styles and is done by updating the 1-2 special bond list as
bonds break. To do this, LAMMPS requires :doc:`newton <newton>` bond off
such that all processors containing an atom know when a bond breaks.
Additionally, one must use the following special bond settings

.. code-block:: LAMMPS

   special_bonds lj 0 1 1 coul 1 1 1

These settings accomplish two goals. First, they turn off 1-3 and 1-4
special bond lists, which are not currently supported for BPMs. As
BPMs often have dense bond networks, generating 1-3 and 1-4 special
bond lists is expensive. By setting the lj weight for 1-2 bonds to
zero, this turns off pairwise interactions. Even though there are no
charges in BPM models, setting a nonzero coul weight for 1-2 bonds
ensures all bonded neighbors are still included in the neighbor list
in case bonds break between neighbor list builds.

To monitor the fracture of bonds in the system, all BPM bond styles
have the ability to record instances of bond breakage to output using
the :doc:`dump local <dump>` command. Since one may frequently output
a list of broken bonds and the time they broke, the
:doc:`dump modify <dump_modify>` option *header no* may be useful to
avoid repeatedly printing the header of the dump file. An example of
this procedure is found in ``examples/bpm/impact``. Additionally,
one can use :doc:`compute nbond/atom <compute_nbond_atom>` to tally the
current number of bonds per atom.

See the :doc:`Howto <Howto_broken_bonds>` page on broken bonds for
more information.
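As a rough sketch of the per-atom bond tally mentioned above (the
compute and dump IDs and the file name are illustrative):

.. code-block:: LAMMPS

   # count intact bonds per atom and write them with each snapshot
   compute nb all nbond/atom
   dump d1 all custom 100 dump.bpm id type x y z c_nb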
----------
While LAMMPS has many utilities to create and delete bonds, *only*
the following are currently compatible with BPM bond styles:
@ -133,10 +113,7 @@ the following are currently compatible with BPM bond styles:
* :doc:`fix bond/break <fix_bond_break>`
* :doc:`fix bond/swap <fix_bond_swap>`
.. note::

   The :doc:`create_bonds <create_bonds>` command requires certain
   :doc:`special_bonds <special_bonds>` settings. To subtract pair
   interactions, one will need to switch between different *special_bonds*
   settings in the input script. An example is found in
   ``examples/bpm/impact``.


@ -1,15 +1,15 @@
Broken Bonds
============
Typically, bond interactions persist for the duration of a simulation in
LAMMPS. However, there are some exceptions that allow for bonds to
break, including the :doc:`quartic bond style <bond_quartic>` and the
bond styles in the :doc:`BPM package <Howto_bpm>` which contains the
:doc:`bpm/spring <bond_bpm_spring>` and :doc:`bpm/rotational
<bond_bpm_rotational>` bond styles. In these cases, a bond can be broken
if it is stretched beyond a user-defined threshold. LAMMPS accomplishes
this by setting the bond type to 0, such that the bond force is no
longer computed.
Users are normally able to weight the contribution of pair forces to atoms
that are bonded using the :doc:`special_bonds command <special_bonds>`.


@ -89,7 +89,7 @@ different options (``build-parallel``, ``build-serial``) or with
different compilers (``build-gnu``, ``build-clang``, ``build-intel``)
and so on. All the auxiliary files created by one build process
(executable, object files, log files, etc) are stored in this directory
or subdirectories within it that CMake creates.
Running CMake


@ -111,7 +111,7 @@ Therefore it is typically desirable to decouple the relative motion of
the core/shell pair, which is an imaginary degree of freedom, from the
real physical system. To do that, the :doc:`compute temp/cs <compute_temp_cs>` command can be used, in conjunction with
any of the thermostat fixes, such as :doc:`fix nvt <fix_nh>` or :doc:`fix langevin <fix_langevin>`. This compute uses the center-of-mass velocity
of the core/shell pairs to calculate a temperature, and ensures that
velocity is what is rescaled for thermostatting purposes. This
compute also works for a system with both core/shell pairs and
non-polarized ions (ions without an attached satellite particle). The


@ -1,27 +1,27 @@
Coupling LAMMPS to other codes
==============================
LAMMPS is designed to support being coupled to other codes. For
example, a quantum mechanics code might compute forces on a subset of
atoms and pass those forces to LAMMPS. Or a continuum finite element
(FE) simulation might use atom positions as boundary conditions on FE
nodal points, compute a FE solution, and return interpolated forces on
MD atoms.
LAMMPS can be coupled to other codes in at least 4 different ways. Each
has advantages and disadvantages, which you will have to think about in
the context of your application.
1. Define a new :doc:`fix <fix>` command that calls the other code. In
this scenario, LAMMPS is the driver code. During timestepping, the
fix is invoked, and can make library calls to the other code, which
has been linked to LAMMPS as a library. This is the way the
:ref:`LATTE <PKG-LATTE>` package, which performs density-functional
tight-binding calculations using the `LATTE software
<https://github.com/lanl/LATTE>`_ to compute forces, is interfaced to
<https://github.com/lanl/LATTE>`_ to compute forces, is hooked to
LAMMPS. See the :doc:`fix latte <fix_latte>` command for more
details. Also see the :doc:`Modify <Modify>` pages for information
on how to add a new fix to LAMMPS.
details. Also see the :doc:`Modify <Modify>` doc pages for info on
how to add a new fix to LAMMPS.
.. spacer
@ -42,26 +42,28 @@ the context of your application.
stand-alone code could communicate with LAMMPS through files that the
command writes and reads.
See the :doc:`Modify command <Modify_command>` page for information
on how to add a new command to LAMMPS.
.. spacer
3. Use LAMMPS as a library called by another code. In this case, the
other code is the driver and calls LAMMPS as needed. Alternately, a
wrapper code could link and call both LAMMPS and another code as
libraries. Again, the :doc:`run <run>` command has options that
allow it to be invoked with minimal overhead (no setup or clean-up)
if you wish to do multiple short runs, driven by another program.
Details about using the library interface are given in the
:doc:`library API <Library>` documentation.
.. spacer
4. Couple LAMMPS with another code in a client/server fashion, using the
`MDI Library <https://molssi-mdi.github.io/MDI_Library/html/index.html>`_
developed by the `Molecular Sciences Software Institute (MolSSI)
<https://molssi.org>`_ to run LAMMPS as either an MDI driver (client)
or an MDI engine (server). The MDI driver issues commands to the MDI
server to exchange data between them. See the :doc:`Howto_mdi` page for
more information about how LAMMPS can operate in either of these modes.


@ -315,7 +315,7 @@ add changes. Please watch the comments to the pull requests. The two
"test" labels are used to trigger extended tests before the code is
merged. This is sometimes done by LAMMPS developers, if they suspect
that there may be some subtle side effects from your changes. It is not
done by default, because those tests are very time-consuming. The
*ready_for_merge* label is usually attached when the LAMMPS developer
assigned to the pull request considers this request complete and to
trigger a final full test evaluation.


@ -1,102 +0,0 @@
Using distributed grids
=======================
.. versionadded:: 22Dec2022
LAMMPS has internal capabilities to create uniformly spaced grids
which overlay the simulation domain. For 2d and 3d simulations these
are 2d and 3d grids respectively. Conceptually a grid can be thought
of as a collection of grid cells. Each grid cell can store one or
more values (data).
The grid cells and data they store are distributed across processors.
Each processor owns the grid cells (and data) whose center points lie
within the spatial subdomain of the processor. If needed for its
computations, a processor may also store ghost grid cells with their
data.
Distributed grids can overlay orthogonal or triclinic simulation
boxes; see the :doc:`Howto triclinic <Howto_triclinic>` doc page for
an explanation of the latter. For a triclinic box, the grid cell
shape conforms to the shape of the simulation domain,
e.g. parallelograms instead of rectangles in 2d.
If the box size or shape changes during a simulation, the grid changes
with it, so that it always overlays the entire simulation domain. For
non-periodic dimensions, the grid size in that dimension matches the
box size, as set by the :doc:`boundary <boundary>` command for fixed
or shrink-wrapped boundaries.
If load-balancing is invoked by the :doc:`balance <balance>` or
:doc:`fix balance <fix_balance>` commands, then the subdomain owned
by a processor can change, which may also change which grid cells it
owns.
Post-processing and visualization of grid cell data can be enabled by
the :doc:`dump grid <dump>`, :doc:`dump grid/vtk <dump>`, and
:doc:`dump image <dump_image>` commands. The latter has an optional
*grid* keyword. The `OVITO visualization tool
<https://www.ovito.org>`_ also plans (as of Nov 2022) to add support
for visualizing grid cell data (along with atoms) using :doc:`dump
grid <dump>` output files as input.
.. note::
For developers, distributed grids are implemented within the code via
two classes: Grid2d and Grid3d. These partition the grid across
processors and have methods which allow forward and reverse
communication of ghost grid data as well as load balancing. If you
write a new compute or fix which needs a distributed grid, these are
the classes to look at. A new pair style could use a distributed
grid by having a fix define it. Please see the section on
:doc:`using distributed grids within style classes <Developer_grid>`
for a detailed description.
----------
These are the commands which currently define or use distributed
grids:
* :doc:`fix ttm/grid <fix_ttm>` - store electron temperature on grid
* :doc:`fix ave/grid <fix_ave_grid>` - time average per-atom or per-grid values
* :doc:`compute property/grid <compute_property_grid>` - generate grid IDs and coords
* :doc:`dump grid <dump>` - output per-grid values in LAMMPS format
* :doc:`dump grid/vtk <dump>` - output per-grid values in VTK format
* :doc:`dump image grid <dump_image>` - include colored grid in output images
* :doc:`pair_style amoeba <pair_amoeba>` - FFT grids
* :doc:`kspace_style pppm <kspace_style>` (and variants) - FFT grids
* :doc:`kspace_style msm <kspace_style>` (and variants) - MSM grids
The grids used by the :doc:`kspace_style <kspace_style>` commands cannot be
referenced by an input script. However, the grids and data created and
used by the other commands can be.
A compute or fix command may create one or more grids (of different
sizes). Each grid can store one or more data fields. A data field
can be a single value per grid point (per-grid vector) or multiple
values per grid point (per-grid array). See the :doc:`Howto output
<Howto_output>` doc page for an explanation of how per-grid data can
be generated by some commands and used by other commands.
A command accesses grid data from a compute or fix using a *grid
reference* with the following syntax:
* c_ID:gname:dname
* c_ID:gname:dname[I]
* f_ID:gname:dname
* f_ID:gname:dname[I]
The prefix "c\_" or "f\_" refers to the ID of the compute or fix; gname is
the name of the grid, which is assigned by the compute or fix; dname is
the name of the data field, which is also assigned by the compute or
fix.
If the data field is a per-grid vector (one value per grid point),
then no brackets are used to access the values. If the data field is
a per-grid array (multiple values per grid point), then brackets are
used to specify the column I of the array. I ranges from 1 to Ncol
inclusive, where Ncol is the number of columns in the array and is
defined by the compute or fix.
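As a hedged sketch of this referencing syntax (the grid name *grid*
and data name *data* are assumptions about what fix ave/grid assigns;
check its doc page for the actual names):

.. code-block:: LAMMPS

   # average per-atom velocities onto a 10x10x10 grid and dump the
   # first column of the resulting per-grid array
   fix ave all ave/grid 10 10 100 10 10 10 vx vy vz
   dump g1 all grid 100 tmp.grid f_ave:grid:data[1]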
Currently, there are no per-grid variables implemented in LAMMPS. We
may add this feature at some point.


@ -6,22 +6,22 @@ can be built as a static or shared library, so that it can be called by
another code, used in a :doc:`coupled manner <Howto_couple>` with other
codes, or driven through a :doc:`Python interface <Python_head>`.
At the core of LAMMPS is the ``LAMMPS`` class, which encapsulates the
state of the simulation program through the state of the various class
instances that it is composed of. So a calculation using LAMMPS
requires creating an instance of the ``LAMMPS`` class and then sending it
(text) commands, either individually or from a file, or perform other
operations that modify the state stored inside that instance or drive
simulations. This is essentially what the ``src/main.cpp`` file does as
well for the standalone LAMMPS executable, reading commands either from
an input file or the standard input.
Creating a LAMMPS instance can be done by using C++ code directly or
through a C-style interface library to LAMMPS that is provided in the
files ``src/library.cpp`` and ``src/library.h``. This :ref:`C language
API <lammps_c_api>` can be used from C and C++, and is also the basis
for the :doc:`Python <Python_module>` and :doc:`Fortran <Fortran>`
interfaces or the :ref:`SWIG based wrappers <swig>` included in the
LAMMPS source code.
The ``examples/COUPLE`` and ``python/examples`` directories contain some


@ -5,18 +5,18 @@ Client/server coupling of two (or more) codes is where one code is the
"client" and sends request messages (data) to one (or more) "server"
code(s). A server responds to each request with a reply message
(data). This enables two (or more) codes to work in tandem to perform
a simulation. In this context, LAMMPS can act as either a client or
server code. It does this by using the `MolSSI Driver Interface (MDI)
library <https://molssi-mdi.github.io/MDI_Library/html/index.html>`_,
developed by the `Molecular Sciences Software Institute (MolSSI)
<https://molssi.org>`_, which is supported by the :ref:`MDI <PKG-MDI>`
package.
Alternate methods for coupling codes with LAMMPS are described on the
:doc:`Howto_couple` page.
Some advantages of client/server coupling are that the codes can run
as stand-alone executables; they need not be linked together. Thus,
neither code needs to have a library interface. This also makes it
easy to run the two codes on different numbers of processors. If a
message protocol (format and content) is defined for a particular kind
@ -41,7 +41,7 @@ within that sub-communicator exchange messages with the corresponding
engine instance, and can also send MPI messages to other processors in
the driver. The driver code can also destroy engine instances and
re-instantiate them. LAMMPS can operate as either a stand-alone or
plugin MDI engine. When it operates as a driver, it can use either
stand-alone or plugin MDI engines.
The way in which an MDI driver communicates with an MDI engine is by
@ -50,83 +50,64 @@ to MPI_Send() and MPI_Recv() calls. Each send or receive operation
uses a string to identify the command name, and optionally some data,
which can be a single value or vector of values of any data type.
Inside the MDI library, data is exchanged between the driver and
engine via MPI calls or sockets. This is a run-time choice by the user.
----------
The :ref:`MDI <PKG-MDI>` package provides a :doc:`mdi engine <mdi>`
command, which enables LAMMPS to operate as an MDI engine. Its doc
page explains the variety of standard and custom MDI commands which
the LAMMPS engine recognizes and can respond to.
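In practice, an engine input script performs its usual setup and then
ends with the mdi engine command, after which LAMMPS waits for
commands from the driver; a minimal sketch:

.. code-block:: LAMMPS

   # ... usual setup: units, read_data, pair_style, etc. ...
   mdi engine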
The package also provides a :doc:`mdi plugin <mdi>` command, which
enables LAMMPS to operate as an MDI driver and load an MDI engine as a
plugin library.
The package furthermore includes a :doc:`fix mdi/qm <fix_mdi_qm>` command, in
which LAMMPS operates as an MDI driver in conjunction with a quantum
mechanics code as an MDI engine. The post_force() method of the
``fix_mdi_qm.cpp`` file shows how a driver issues MDI commands to another
code. This command can be used to couple to an MDI engine, which is
either a stand-alone code or a plugin library.
As explained in the :doc:`fix mdi/qm <fix_mdi_qm>` command documentation, it
can be used to perform *ab initio* MD simulations or energy
minimizations, or to evaluate the quantum energy and forces for a series
of independent systems. The ``examples/mdi`` directory has example
input scripts for all of these use cases.
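A minimal driver-side sketch, assuming default keyword settings (the
fix ID is illustrative):

.. code-block:: LAMMPS

   # LAMMPS drives an external QM engine, which returns energy and
   # forces for the current snapshot
   fix qm all mdi/qm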
----------
The examples/mdi directory contains Python scripts and LAMMPS input
scripts which use LAMMPS as either an MDI driver or engine, or both.
Currently, 5 example use cases are provided:
* Run ab initio MD (AIMD) using 2 instances of LAMMPS. As a driver,
LAMMPS performs the timestepping in either NVE or NPT mode. As an
engine, LAMMPS computes forces and is a surrogate for a quantum
code.
* LAMMPS runs an MD simulation as a driver. Every N steps it passes the
current snapshot to an MDI engine to evaluate the energy, virial, and
peratom forces. As the engine, LAMMPS is a surrogate for a quantum
code.
* LAMMPS loops over a series of data files and passes the configuration
to an MDI engine to evaluate the energy, virial, and peratom forces
and thus acts as a simulation driver. As the engine, LAMMPS is used
as a surrogate for a quantum code.
* A Python script driver invokes a sequence of unrelated LAMMPS
calculations. Calculations can be single-point energy/force
evaluations, MD runs, or energy minimizations.
* Run AIMD with a Python driver code and 2 LAMMPS instances as engines.
The first LAMMPS instance performs MD timestepping. The second LAMMPS
instance acts as a surrogate QM code to compute forces.
.. note::

   In any of these examples where LAMMPS is used as an engine, an
   actual QM code (which supports MDI) could be used in its place,
   without modifying other code or scripts, except to specify the name
   of the QM code.
The ``examples/mdi/Run.sh`` file illustrates how to launch both driver
and engine codes so that they communicate using the MDI library via
either MPI or sockets, or using the engine as a stand-alone code, or
as a plugin library.
-------------
Currently, there are at least two quantum DFT codes which have direct MDI
support, `Quantum ESPRESSO (QE) <https://www.quantum-espresso.org/>`_
and `INQ <https://qsg.llnl.gov/node/101.html>`_. There are also several
QM codes which have indirect support through QCEngine or i-PI. The
former means they require a wrapper program (QCEngine) with MDI support
which writes/reads files to pass data to the quantum code itself. The
list of QCEngine-supported and i-PI-supported quantum codes is on the
`MDI webpage
<https://molssi-mdi.github.io/MDI_Library/html/index.html>`_.
Here is how to build QE as a stand-alone ``pw.x`` file which can be
@ -134,30 +115,30 @@ used in stand-alone mode:
.. code-block:: bash
git clone --branch mdi_plugin https://github.com/MolSSI-MDI/q-e.git <base_path>/q-e
build the executable pw.x, following the `QE build guide <https://gitlab.com/QEF/q-e/-/wikis/Developers/CMake-build-system>`_
Here is how to build QE as a shared library which can be used in plugin mode,
which results in a ``libqemdi.so`` file in ``<base_path>/q-e/MDI/src``:
.. code-block:: bash
git clone --branch mdi_plugin https://github.com/MolSSI-MDI/q-e.git <base_path>/q-e
cd <base_path>/q-e
./configure --enable-parallel --enable-openmp --enable-shared FFLAGS="-fPIC" FCFLAGS="-fPIC" CFLAGS="-fPIC" foxflags="-fPIC" try_foxflags="-fPIC"
make -j 4 mdi
INQ cannot be built as a stand-alone code; it is by design a library.
Here is how to build INQ as a shared library which can be used in
plugin mode, which results in a ``libinqmdi.so`` file in
``<base_path>/inq/build/examples``:
.. code-block:: bash
git clone --branch mdi --recurse-submodules https://gitlab.com/taylor-a-barnes/inq.git <base_path>/inq
cd <base_path>/inq
mkdir -p build
cd build
../configure --prefix=<install_path>/install
make -j 4
make install


@ -4,7 +4,7 @@ Run multiple simulations from one input script
This can be done in several ways. See the documentation for
individual commands for more details on how these examples work.
If "multiple simulations" means to continue a previous simulation for
If "multiple simulations" means continue a previous simulation for
more timesteps, then you simply use the :doc:`run <run>` command
multiple times. For example, this script
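The script itself is truncated in this view; a minimal sketch of the
pattern, assuming a hypothetical data file that carries the force field
settings, would be:

.. code-block:: LAMMPS

   units lj
   read_data data.polymer   # hypothetical data file
   run 10000
   run 10000                # continues the same simulation for 10000 more steps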

View File

@ -26,13 +26,11 @@ discussion, note that users can also :doc:`add their own computes and
fixes to LAMMPS <Modify>` which can generate values that can then
be output with these commands.
The following subsections discuss different LAMMPS commands related
to output and the kind of data they operate on and produce:
* :ref:`Global/per-atom/local/per-grid data <global>`
* :ref:`Scalar/vector/array data <scalar>`
* :ref:`Per-grid data <grid>`
* :ref:`Disambiguation <disambiguation>`
* :ref:`Thermodynamic output <thermo>`
* :ref:`Dump file output <dump>`
* :ref:`Fixes that write output files <fixoutput>`
@ -45,32 +43,27 @@ to output and the kind of data they operate on and produce:
.. _global:
Global/per-atom/local/per-grid data
-----------------------------------
Various output-related commands work with four different styles of
data: global, per-atom, local, and per-grid. A global datum is one or
more system-wide values, e.g. the temperature of the system. A
per-atom datum is one or more values per atom, e.g. the kinetic energy
of each atom. Local datums are calculated by each processor based on
the atoms it owns, but there may be zero or more per atom, e.g. a list
of bond distances.
A per-grid datum is one or more values per grid cell, for a grid which
overlays the simulation domain. The grid cells and the data they
store are distributed across processors; each processor owns the grid
cells whose center point falls within its subdomain.
.. _scalar:
Scalar/vector/array data
------------------------
Global, per-atom, and local datums can come in three kinds: a single
scalar value, a vector of values, or a 2d array of values. The doc
page for a "compute" or "fix" or "variable" that generates data will
specify both the style and kind of data it produces, e.g. a per-atom
vector.
When a quantity is accessed, as in many of the output commands
discussed below, it can be referenced via the following bracket
@ -91,18 +84,6 @@ the dimension twice (array -> scalar). Thus a command that uses
scalar values as input can typically also process elements of a vector
or array.
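For instance (``myT`` is a hypothetical compute ID), an equal-style
variable can reference either the whole datum or a single element of it:

.. code-block:: LAMMPS

   compute myT all temp          # produces a global scalar and a global vector
   variable t equal c_myT        # the global scalar
   variable txx equal c_myT[1]   # one element of the vector (vector -> scalar)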
.. _grid:
Per-grid data
------------------------
Per-grid data can come in two kinds: a vector of values (one per grid
cell), or a 2d array of values (multiple values per grid cell). The
doc page for a "compute" or "fix" that generates data will specify
names for both the grid(s) and datum(s) it produces, e.g. per-grid
vectors or arrays, which can be referenced by other commands. See the
:doc:`Howto grid <Howto_grid>` doc page for more details.
.. _disambiguation:
Disambiguation
@ -110,8 +91,8 @@ Disambiguation
Some computes and fixes produce data in multiple styles, e.g. a global
scalar and a per-atom vector. Usually the context in which the input
script references the data determines which style is meant. Example:
if a compute provides both a global scalar and a per-atom vector, the
former will be accessed by using ``c_ID`` in an equal-style variable,
while the latter will be accessed by using ``c_ID`` in an atom-style
variable. Note that atom-style variable formulas can also access
@ -127,14 +108,15 @@ Thermodynamic output
The frequency and format of thermodynamic output is set by the
:doc:`thermo <thermo>`, :doc:`thermo_style <thermo_style>`, and
:doc:`thermo_modify <thermo_modify>` commands. The :doc:`thermo_style
<thermo_style>` command also specifies what values are calculated and
written out. Pre-defined keywords can be specified (e.g. press, etotal,
etc.). Three additional kinds of keywords can also be specified (c_ID,
f_ID, v_name), where a :doc:`compute <compute>` or :doc:`fix <fix>` or
:doc:`variable <variable>` provides the value to be output. In each
case, the compute, fix, or variable must generate global values for
input to the :doc:`thermo_style custom <thermo_style>` command.
Note that thermodynamic output values can be "extensive" or
"intensive". The former scale with the number of atoms in the system
@ -174,10 +156,6 @@ provides the values to be output. In each case, the compute or fix
must generate local values for input to the :doc:`dump local <dump>`
command.
There is also a :doc:`dump grid <dump>` format where the user
specifies what per-grid values to output from computes or fixes that
generate per-grid data.
.. _fixoutput:
Fixes that write output files
@ -212,12 +190,6 @@ velocity, force. They can also be per-atom quantities calculated by a
:doc:`variable <variable>`. The chunk-averaged output of this fix is
global and can also be used as input to other output commands.
Note that the :doc:`fix ave/grid <fix_ave_grid>` command can also
average the same per-atom quantities within spatial bins, but it does
this for a distributed grid whose grid cells are owned by different
processors. It outputs per-grid data, not global data, so it is more
efficient for large numbers of averaging bins.
The :doc:`fix ave/histo <fix_ave_histo>` command enables direct output
to a file of histogrammed quantities, which can be global or per-atom
or local quantities. The histogram output of this fix can also be
@ -263,23 +235,11 @@ as output values which can be used as input to other output commands.
The list of atom attributes is the same as for the :doc:`dump custom
<dump>` command.
The :doc:`compute property/local <compute_property_local>` command
takes a list of one or more pre-defined local attributes (bond info,
angle info, etc) and stores the values in a local vector or array.
These are produced as output values which can be used as input to
other output commands.
The :doc:`compute property/grid <compute_property_grid>` command takes
a list of one or more pre-defined per-grid attributes (id, grid cell
coords, etc) and stores the values in a per-grid vector or array.
These are produced as output values which can be used as input to the
:doc:`dump grid <dump>` command.
The :doc:`compute property/chunk <compute_property_chunk>` command
takes a list of one or more pre-defined chunk attributes (id, count,
coords for spatial bins) and stores the values in a global vector or
array. These are produced as output values which can be used as input
to other output commands.
.. _fixprocoutput:
@ -293,42 +253,18 @@ a time.
The :doc:`fix ave/atom <fix_ave_atom>` command performs time-averaging
of per-atom vectors. The per-atom quantities can be atom attributes
such as position, velocity, force. They can also be per-atom
quantities calculated by a :doc:`compute <compute>`, by a :doc:`fix
<fix>`, or by an atom-style :doc:`variable <variable>`. The
time-averaged per-atom output of this fix can be used as input to
other output commands.
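A minimal sketch (IDs are hypothetical): every 10 steps a sample is
taken, 20 samples are accumulated, and an average is output every 200
steps:

.. code-block:: LAMMPS

   compute ke all ke/atom
   fix 1 all ave/atom 10 20 200 vx vy vz c_ke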
The :doc:`fix store/state <fix_store_state>` command can archive one
or more per-atom attributes at a particular time, so that the old
values can be used in a future calculation or output. The list of
atom attributes is the same as for the :doc:`dump custom <dump>`
command, including per-atom quantities calculated by a :doc:`compute
<compute>`, by a :doc:`fix <fix>`, or by an atom-style :doc:`variable
<variable>`. The output of this fix can be used as input to other
output commands.
The :doc:`fix ave/grid <fix_ave_grid>` command performs time-averaging
of either per-atom or per-grid data.
For per-atom data it performs averaging for the atoms within each grid
cell, similar to the :doc:`fix ave/chunk <fix_ave_chunk>` command when
its chunks are defined as regular 2d or 3d bins. The per-atom
quantities can be atom density (mass or number) or atom attributes
such as position, velocity, force. They can also be per-atom
quantities calculated by a :doc:`compute <compute>`, by a :doc:`fix
<fix>`, or by an atom-style :doc:`variable <variable>`.
The chief difference between the :doc:`fix ave/grid <fix_ave_grid>`
and :doc:`fix ave/chunk <fix_ave_chunk>` commands when used in this
context is that the former uses a distributed grid, while the latter
uses a global grid. Distributed means that each processor owns the
subset of grid cells within its subdomain. Global means that each
processor owns a copy of the entire grid. The :doc:`fix ave/grid
<fix_ave_grid>` command is thus more efficient for large grids.
For per-grid data, the :doc:`fix ave/grid <fix_ave_grid>` command
takes inputs for grid data produced by other computes or fixes and
averages the values for each grid point over time.
.. _compute:
@ -336,25 +272,24 @@ Computes that generate values to output
---------------------------------------
Every :doc:`compute <compute>` in LAMMPS produces either global or
per-atom or local or per-grid values. The values can be scalars or
vectors or arrays of data. These values can be output using the other
commands described in this section. The page for each compute command
describes what it produces. Computes that produce per-atom or local
or per-grid values have the word "atom" or "local" or "grid" as the
last word in their style name. Computes without the word "atom" or
"local" or "grid" produce global values.
.. _fix:
Fixes that generate values to output
------------------------------------
Some :doc:`fixes <fix>` in LAMMPS produce either global or per-atom
or local or per-grid values which can be accessed by other commands.
The values can be scalars or vectors or arrays of data. These values
can be output using the other commands described in this section. The
page for each fix command tells whether it produces any output
quantities and describes them.
.. _variable:
@ -371,8 +306,6 @@ computes, fixes, and other variables. The values generated by
variables can be used as input to and thus output by the other
commands described in this section.
Per-grid variables have not (yet) been implemented.
.. _table:
Summary table of output options and data flow between commands
@ -392,52 +325,44 @@ Also note that, as described above, when a command takes a scalar as
input, that could be an element of a vector or array. Likewise a
vector input could be a column of an array.
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| Command | Input | Output |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`thermo_style custom <thermo_style>` | global scalars | screen, log file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`dump custom <dump>` | per-atom vectors | dump file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`dump local <dump>` | local vectors | dump file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`dump grid <dump>` | per-grid vectors | dump file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix print <fix_print>` | global scalar from variable | screen, file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`print <print>` | global scalar from variable | screen |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`computes <compute>` | N/A | global/per-atom/local/per-grid scalar/vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fixes <fix>` | N/A | global/per-atom/local/per-grid scalar/vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`variables <variable>` | global scalars and vectors, per-atom vectors | global scalar and vector, per-atom vector |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute reduce <compute_reduce>` | per-atom/local vectors | global scalar/vector |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute slice <compute_slice>` | global vectors/arrays | global vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/atom <compute_property_atom>` | N/A | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/local <compute_property_local>` | N/A | local vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/grid <compute_property_grid>` | N/A | per-grid vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/chunk <compute_property_chunk>` | N/A | global vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix vector <fix_vector>` | global scalars | global vector |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/atom <fix_ave_atom>` | per-atom vectors | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/time <fix_ave_time>` | global scalars/vectors | global scalar/vector/array, file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/chunk <fix_ave_chunk>` | per-atom vectors | global array, file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/grid <fix_ave_grid>` | per-atom vectors or per-grid vectors | per-grid vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/histo <fix_ave_histo>` | global/per-atom/local scalars and vectors | global array, file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/correlate <fix_ave_correlate>` | global scalars | global array, file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix store/state <fix_store_state>` | per-atom vectors | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+

View File

@ -783,19 +783,19 @@ Pitfalls
**Parallel Scalability**
LAMMPS operates in parallel in a :doc:`spatial-decomposition mode
<Developer_par_part>`, where each processor owns a spatial subdomain of
the overall simulation domain and communicates with its neighboring
processors via distributed-memory message passing (MPI) to acquire ghost
atom information to allow forces on the atoms it owns to be
computed. LAMMPS also uses Verlet neighbor lists which are recomputed
every few timesteps as particles move. On these timesteps, particles
also migrate to new processors as needed. LAMMPS decomposes the overall
simulation domain so that spatial subdomains of nearly equal volume are
assigned to each processor. When each subdomain contains nearly the
same number of particles, this results in a reasonable load balance
among all processors. As is more typical with some peridynamic
simulations, some subdomains may contain many particles while other
subdomains contain few particles, resulting in a load imbalance that
impacts parallel scalability.
**Setting the "skin" distance**

View File

@ -392,7 +392,7 @@ IPyLammps Examples
------------------
Examples of IPython notebooks can be found in the python/examples/pylammps
subdirectory. To open these notebooks launch *jupyter notebook* inside this
directory and navigate to one of them. If you compiled and installed
a LAMMPS shared library with exceptions, PNG, JPEG and FFMPEG support
you should be able to rerun all of these notebooks.

View File

@ -47,9 +47,9 @@ script which is required when running in multi-replica mode.
Also note that with MPI installed on a machine (e.g. your desktop), you
can run on more (virtual) processors than you have physical processors.
Thus, the above commands could be run on a single-processor (or
few-processor) desktop so that you can run a multi-replica simulation on
more replicas than you have physical processors. This is useful for
testing and debugging, since with most modern processors and MPI
libraries, the efficiency of a calculation can severely diminish when
oversubscribing processors.

View File

@ -7,9 +7,8 @@ run will continue from where the previous run left off. Or binary
restart files can be saved to disk using the :doc:`restart <restart>`
command. At a later time, these binary files can be read via a
:doc:`read_restart <read_restart>` command in a new script. Or they can
be converted to text data files using the :doc:`-r command-line switch
<Run_options>` and read by a :doc:`read_data <read_data>` command in a
new script.
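For example, the conversion could look like this (hypothetical file
names; see the :doc:`Run_options <Run_options>` page for the exact
switch syntax):

.. code-block:: bash

   lmp -r file.restart file.data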
Here we give examples of 2 scripts that read either a binary restart
file or a converted data file and then issue a new run command to
@ -48,7 +47,7 @@ last 50 timesteps:
Note that the following commands do not need to be repeated because
their settings are included in the restart file: *units, atom_style,
special_bonds, pair_style, bond_style*. However, these commands do
need to be used, since their settings are not in the restart file:
*neighbor, fix, timestep*\ .
@ -91,7 +90,7 @@ Then, this script could be used to re-run the last 50 steps:
Note that nearly all the settings specified in the original *in.chain*
script must be repeated, except the *pair_coeff* and *bond_coeff*
commands, since the new data file lists the force field coefficients.
Also, the :doc:`reset_timestep <reset_timestep>` command is used to tell
LAMMPS the current timestep. This value is stored in restart files, but
not in data files.

View File

@ -93,7 +93,7 @@ Some of the pair styles used to compute pairwise interactions between
finite-size particles also compute the correct interaction with point
particles as well, e.g. the interaction between a point particle and a
finite-size particle or between two point particles. If necessary,
:doc:`pair_style hybrid <pair_hybrid>` can be used to ensure the correct
interactions are computed for the appropriate style of interactions.
Likewise, using groups to partition particles (ellipsoids versus
spheres versus point particles) will allow you to use the appropriate

View File

@ -144,13 +144,18 @@ does not change the atom positions due to non-periodicity. In this
mode, if you tilt the system to extreme angles, the simulation will
simply become inefficient, due to the highly skewed simulation box.
The restriction that a new simulation box may not have a tilt factor
skewing the box more than half the distance of the parallel box length
can be overridden via the :doc:`box <box>` command: setting the *tilt*
keyword to *large* allows any tilt factors to be specified.
Box flips that may occur using the :doc:`fix deform <fix_deform>` or
:doc:`fix npt <fix_nh>` commands can be turned off using the *flip no*
option with either of the commands.
Note that if a simulation box has a large tilt factor, LAMMPS will run
less efficiently, due to the large volume of communication needed to
acquire ghost atoms around a processor's irregular-shaped subdomain.
For extreme values of tilt, LAMMPS may also lose atoms and generate an
error.

View File

@ -1,126 +0,0 @@
Type labels
===========
.. versionadded:: 15Sep2022
Each atom in LAMMPS has an associated numeric atom type. Similarly,
each bond, angle, dihedral, and improper is assigned a bond type,
angle type, and so on. The primary use of these types is to map
potential (force field) parameters to the interactions of the atom,
bond, angle, dihedral, and improper.
By default, type values are entered as integers from 1 to Ntypes
wherever they appear in LAMMPS input or output files. The total number
Ntypes for each interaction is "locked in" when the simulation box
is created.
A recent addition to LAMMPS is the option to use strings, referred
to as type labels, as an alternative. Using type labels instead of
numeric types can be advantageous in various scenarios. For example,
type labels can make inputs more readable and generic (i.e. usable
through the :doc:`include command <include>` for different systems
with different numerical values assigned to types). This generality
also applies to
other inputs like data files read by :doc:`read_data <read_data>` or
molecule template files read by the :doc:`molecule <molecule>`
command. See below for a list of other commands that can use
type labels in different ways.
LAMMPS will *internally* continue to use numeric types, which means
that many previous restrictions still apply. For example, the total
number of types is locked in when creating the simulation box, and
potential parameters for each type must be provided even if not used
by any interactions.
A collection of type labels for all type-kinds (atom types, bond types,
etc.) is stored as a "label map" which is simply a list of numeric types
and their associated type labels. Within a type-kind, each type label
must be unique. It can be assigned to only one numeric type. To read
and write type labels to data files for a given type-kind, *all*
associated numeric types must have a type label assigned. Partial
maps can be saved with the :doc:`labelmap write <labelmap>` command
and read back with the :doc:`include <include>` command.
Valid type labels can contain most ASCII characters, but cannot start
with a number, a '#', or a '*'. Also, labels must not contain whitespace
characters. When using the :doc:`labelmap command <labelmap>` in the
LAMMPS input, if certain characters appear in the type label, such as
the single (') or double (") quote or the '#' character, the label
must be put in either double, single, or triple (""") quotes. Triple
quotes allow for the most generic type label strings, but they require
a leading and trailing blank space. When defining type labels, these
blanks are ignored. Example:
.. code-block:: LAMMPS
labelmap angle 1 """ C1'-C2"-C3# """
This command will map the string ```C1'-C2"-C3#``` to the angle type 1.
There are two ways to define label maps. One is via the :doc:`labelmap
<labelmap>` command. The other is via the :doc:`read_data <read_data>`
command. A data file can have sections such as *Atom Type Labels*, *Bond
Type Labels*, etc., which assign type labels to numeric types. The
label map can be written out to data files by the :doc:`write_data
<write_data>` command. This map is also written to and read from
restart files, by the :doc:`write_restart <write_restart>` and
:doc:`read_restart <read_restart>` commands.
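For example, a data file could assign atom type labels with a section
like the following minimal sketch:

.. code-block:: LAMMPS

   Atom Type Labels

   1 C
   2 H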
----------
Use of type labels in LAMMPS input or output
""""""""""""""""""""""""""""""""""""""""""""
Many LAMMPS input script commands that take a numeric type as an
argument can use the associated type label instead. If a type label
is not defined for a particular numeric type, only its numeric type
can be used.
This example assigns labels to the atom types, and then uses the type
labels to redefine the pair coefficients.
.. code-block:: LAMMPS
pair_coeff 1 2 1.0 1.0 # numeric types
labelmap atom 1 C 2 H
pair_coeff C H 1.0 1.0 # type labels
Adding support for type labels to various commands is an ongoing
project. If an input script command (or a section in a file read by a
command) allows substituting a type label for a numeric type argument,
it will be explicitly mentioned in that command's documentation page.
As a temporary measure, input script commands can take advantage of
variables and how they can be expanded during processing of the input.
The variables can use functions that will translate type label strings
to their respective number as defined in the current label map. See the
:doc:`variable <variable>` command for details.
For example, here is how the pair_coeff command could be used with
type labels if it did not yet support them, either with an explicit
variable command or an implicit variable used in the pair_coeff
command.
.. code-block:: LAMMPS
labelmap atom 1 C 2 H
variable atom1 equal label2type(atom,C)
variable atom2 equal label2type(atom,H)
pair_coeff ${atom1} ${atom2} 1.0 1.0
.. code-block:: LAMMPS
labelmap atom 1 C 2 H
pair_coeff $(label2type(atom,C)) $(label2type(atom,H)) 80.0 1.2
----------
Commands that can use label types
"""""""""""""""""""""""""""""""""
Any workflow that involves reading multiple data files, molecule
templates or a combination of the two can be streamlined by using type
labels instead of numeric types, because types are automatically synced
between the files. The creation of simulation-ready reaction templates
for :doc:`fix bond/react <fix_bond_react>` is much simpler when using
type labels, and results in templates that can be used without
modification in multiple simulations or different systems.

View File

@ -1,21 +1,33 @@
Visualize LAMMPS snapshots
==========================
Snapshots from LAMMPS simulations can be viewed, visualized, and
analyzed in a variety of ways.
LAMMPS snapshots are created by the :doc:`dump <dump>` command, which
can create files in several formats. The native LAMMPS dump format is a
text file (see "dump atom" or "dump custom") which can be visualized by
`several visualization tools <https://www.lammps.org/viz.html>`_ for MD
simulation trajectories. `OVITO <https://www.ovito.org>`_ and `VMD
<https://www.ks.uiuc.edu/Research/vmd>`_ seem to be the most popular
choices among them.
The :doc:`dump image <dump_image>` and :doc:`dump movie <dump_image>`
styles can output internally rendered images or convert them to a movie
during the MD run.
Programs included with LAMMPS as auxiliary tools can convert
between LAMMPS format files and other formats. See the :doc:`Tools
<Tools>` page for details. These are rarely needed these days.
A Python-based toolkit distributed by our group can read native LAMMPS
dump files, including custom dump files with additional columns of
user-specified atom information, and convert them to various formats or
pipe them into visualization software directly. See the `Pizza.py WWW
site <pizza_>`_ for details. Specifically, Pizza.py can convert LAMMPS
dump files into PDB, XYZ, `EnSight <ensight_>`_, and VTK formats.
Pizza.py can pipe LAMMPS dump files directly into the Raster3d and
RasMol visualization programs. Pizza.py has tools that do interactive
3d OpenGL visualization and one that creates SVG images of dump file
snapshots.
.. _pizza: https://lammps.github.io/pizza
.. _ensight: https://www.ansys.com/products/fluids/ansys-ensight

View File

@ -18,8 +18,6 @@ you **must** build LAMMPS from the source code.
developers have no control over their choices of how they configure
and build their packages and when they update them.
----
.. toctree::
:maxdepth: 1
@ -31,40 +29,38 @@ you **must** build LAMMPS from the source code.
Install_tarball
Install_git
----
These are the files and subdirectories in the LAMMPS distribution:
+------------+---------------------------------------------+
| README | Short description of the LAMMPS package |
+------------+---------------------------------------------+
| LICENSE | GNU General Public License (GPL) |
+------------+---------------------------------------------+
| SECURITY.md| Security policy for the LAMMPS package |
+------------+---------------------------------------------+
| bench | benchmark inputs |
+------------+---------------------------------------------+
| cmake | CMake build files |
+------------+---------------------------------------------+
| doc | documentation and tools to build the manual |
+------------+---------------------------------------------+
| examples | example input files |
+------------+---------------------------------------------+
| fortran | Fortran module for LAMMPS library interface |
+------------+---------------------------------------------+
| lib | additional provided or external libraries |
+------------+---------------------------------------------+
| potentials | selected interatomic potential files |
+------------+---------------------------------------------+
| python | Python module for LAMMPS library interface |
+------------+---------------------------------------------+
| src | LAMMPS source files |
+------------+---------------------------------------------+
| tools | pre- and post-processing tools |
+------------+---------------------------------------------+
| unittest | source code and inputs for testing LAMMPS |
+------------+---------------------------------------------+
You will have all of these if you downloaded the LAMMPS source code.
You will have only some of them if you downloaded executables, as
explained on the pages listed above.

View File

@ -1,10 +1,10 @@
Download an executable for Linux or Mac via Conda
-------------------------------------------------
Pre-compiled LAMMPS binaries are available for macOS and Linux via the
`Conda <conda_>`_ package management system.
First, you must set up the Conda package manager on your system. Follow the
instructions to install `Miniconda <mini_conda_install_>`_, then create a conda
environment (named `my-lammps-env` or whatever you prefer) for your LAMMPS
install:
@ -21,19 +21,19 @@ Then, you can install LAMMPS on your system with the following command:
conda activate my-lammps-env
conda install lammps
The LAMMPS binary is built with the :ref:`KIM package <kim>`, which
results in Conda also installing the `kim-api` binaries when LAMMPS is
installed. In order to use potentials from `openkim.org <openkim_>`_,
you can install the `openkim-models` package
.. code-block:: bash
conda install openkim-models
If you have problems with the installation, you can post issues to `this
link <conda_forge_lammps_>`_. Thanks to Jan Janssen
(Max-Planck-Institut fuer Eisenforschung) for setting up the Conda
capability.
.. _conda_forge_lammps: https://github.com/conda-forge/lammps-feedstock/issues
.. _openkim: https://openkim.org

View File

@ -1,16 +1,16 @@
Download the LAMMPS source with git
-----------------------------------
LAMMPS development is coordinated through the "LAMMPS GitHub site".
Cloning the LAMMPS repository onto your local machine has several
advantages:
* You can stay current with changes to LAMMPS with a single git
command.
* You can create your own development branches to add code to LAMMPS.
* You can submit your new features back to GitHub for inclusion in
LAMMPS. For that, you should first create your own :doc:`fork on
GitHub <Howto_github>`.
You must have `git <git_>`_ installed on your system to use the
commands explained below to communicate with the git servers on
@ -31,8 +31,8 @@ You can follow the LAMMPS development on 3 different git branches:
* **stable** : this branch is updated from the *release* branch with
every stable release version and also has selected bug fixes and updates
back-ported from the *develop* branch
* **release** : this branch is updated with every patch or feature release;
updates are always "fast-forward" merges from *develop*
* **develop** : this branch follows the ongoing development and
is updated with every merge commit of a pull request
@ -58,12 +58,12 @@ between them at any time using "git checkout <branch name>".)
commit history (most people don't), you can speed up the "cloning"
process and reduce local disk space requirements by using the
*--depth* git command line flag. That will create a "shallow clone"
of the repository, which contains only a subset of the git history.
Using a depth of 1000 is usually sufficient to include the head
commits of the *develop* and the *release* branches. To include the
head commit of the *stable* branch you may need a depth of up
to 10000. If you later need more of the git history, you can always
convert the shallow clone into a "full clone".
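As an illustration, a shallow clone of the *release* branch (the
directory name "mylammps" matches the usage below):

.. code-block:: bash

   git clone --depth 1000 --branch release https://github.com/lammps/lammps.git mylammps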
Once the command completes, your directory will contain the same files
as if you unpacked a current LAMMPS tarball, with the exception that
@ -84,11 +84,11 @@ from within the "mylammps" directory:
git pull
Doing a "pull" will not change any files you have added to the LAMMPS
directory structure. It will also not change any existing LAMMPS files
you have edited, unless those files have changed in the repository. In
that case, git will attempt to merge the changes from the repository
file with your version of the file and tell you if there are any
conflicts. See the git documentation for details.
If you want to access a particular previous release version of LAMMPS,
you can instead "check out" any version with a published tag. See the
@ -146,13 +146,13 @@ changed. How to do this depends on the build system you are using.
and package directories. This is OK to do even if you don't
use any packages. The ``make purge`` command removes any deprecated
src files if they were removed by the patch from a package
subdirectory.
.. warning::
If you wish to edit/change a src file that is from a package,
you should edit the version of the file inside the package
subdirectory within src, then re-install the package. The
version in the source directory is merely a copy and will be
wiped out if you type "make package-update".
@ -164,5 +164,5 @@ changed. How to do this depends on the build system you are using.
GitHub account, you may also use SSH protocol with the URL
"git@github.com:lammps/lammps.git".
The LAMMPS GitHub project is currently overseen by Axel Kohlmeyer
(Temple U, akohlmey at gmail.com).

View File

@ -3,7 +3,7 @@ Download an executable for Linux
Binaries are available for different versions of Linux:
- :ref:`Pre-built Ubuntu and Debian Linux executables <ubuntu>`
- :ref:`Pre-built Fedora Linux executables <fedora>`
- :ref:`Pre-built EPEL Linux executables (RHEL, CentOS) <epel>`
- :ref:`Pre-built OpenSuse Linux executables <opensuse>`
@ -14,23 +14,21 @@ Binaries are available for different versions of Linux:
If you have questions about these pre-compiled LAMMPS executables,
you need to contact the people preparing those packages. The LAMMPS
developers have no control over how they configure and build their
packages and when they update them. They may only provide packages
for stable release versions and not always update the packages in a
timely fashion after a new LAMMPS release is made.
----------
.. _ubuntu:
Pre-built Ubuntu and Debian Linux executables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A pre-built LAMMPS executable, suitable for running on the latest Ubuntu
and Debian Linux versions, can be downloaded as a Debian package. This
allows you to install LAMMPS with a single command, and stay (mostly)
up-to-date with the current stable version of LAMMPS by simply updating
your operating system.
To install LAMMPS do the following once:
@ -40,22 +38,8 @@ To install LAMMPS do the following once:
This downloads an executable named ``lmp`` to your box and multiple
packages with supporting data, examples and libraries as well as any
missing dependencies. For example, the LAMMPS binary in this package is
built with the :ref:`KIM package <kim>` enabled, which results in the
above command also installing the ``kim-api`` binaries when LAMMPS is
installed, unless they were installed already. In order to use
potentials from `openkim.org <openkim_>`_, you can also install the
``openkim-models`` package:
.. code-block:: bash
sudo apt-get install openkim-models
Or use the `KIM-API commands <https://openkim.org/doc/usage/obtaining-models/#installing_api>`_
to download and install individual models.
This LAMMPS executable can then be used in the usual way to run input
scripts:
.. code-block:: bash
@ -67,9 +51,20 @@ To update LAMMPS to the latest packaged version, do the following:
sudo apt-get update
This will also update other packages on your system.
To uninstall LAMMPS, do the following:
.. code-block:: bash
@ -88,9 +83,8 @@ Ubuntu package capability.
Pre-built Fedora Linux executables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Pre-built `LAMMPS packages for stable releases
<https://packages.fedoraproject.org/pkgs/lammps/>`_ are available in the
Fedora Linux distribution since Fedora version 28. The packages can be
installed via the dnf package manager. There are 3 basic varieties
(lammps = no MPI, lammps-mpich = MPICH MPI library, lammps-openmpi =
OpenMPI MPI library) and for each support for linking to the C library
@ -113,9 +107,9 @@ To install LAMMPS with OpenMPI and run an input ``in.lj`` with 2 CPUs do:
module load mpi/openmpi-x86_64
mpirun -np 2 lmp -in in.lj
The ``dnf install`` command is needed only once. In case of a new LAMMPS
stable release, ``dnf update`` will automatically update to the newer
version as soon as the RPM files are built and uploaded to the download
mirrors. The ``module load`` command is needed once per (shell) session
or shell terminal instance, unless it is automatically loaded from the
shell profile.
@ -171,7 +165,7 @@ in OpenSuse as of Leap 15.0. You can install the package with:
zypper install lammps
This includes support for OpenMPI. The name of the LAMMPS executable
is ``lmp``. To run an input in parallel on 2 CPUs you would do:
.. code-block:: bash
@ -198,18 +192,17 @@ Thanks to Christoph Junghans (LANL) for making LAMMPS available in OpenSuse.
Gentoo Linux executable
^^^^^^^^^^^^^^^^^^^^^^^
LAMMPS is part of `Gentoo's main package tree
<https://packages.gentoo.org/packages/sci-physics/lammps>`_ and can be
installed by typing:
.. code-block:: bash
emerge --ask lammps
Note that in Gentoo the LAMMPS source code is downloaded and the package is
then compiled and installed on your machine.
Certain LAMMPS packages can be enabled via USE flags, type
.. code-block:: bash
@ -228,11 +221,8 @@ Archlinux build-script
^^^^^^^^^^^^^^^^^^^^^^
LAMMPS is available via the unofficial Arch User Repository (AUR).
There are three scripts available, named `lammps
<https://aur.archlinux.org/packages/lammps>`_, `lammps-beta
<https://aur.archlinux.org/packages/lammps>`_ and `lammps-git
<https://aur.archlinux.org/packages/lammps>`_. They respectively
package the stable, patch and git releases.
To install, you will need to have the git package installed. You may use
any of the above names in place of lammps.
@ -244,9 +234,9 @@ any of the above names in-place of lammps.
makepkg -s
makepkg -i
To update LAMMPS, you may repeat the above, or change into the cloned
directory, and execute the following, after which, if there are any
changes, you may use makepkg as above.
.. code-block:: bash
@ -254,5 +244,12 @@ changes, you may use makepkg as above.
Alternatively, you may use an AUR helper to install these packages.
Note that the AUR provides build-scripts that download the source code
and then build and install the package on your machine.
.. note::
As of this writing, the Arch Linux AUR build scripts for LAMMPS
have not been updated since the 29 October 2020 version. You may want
to consider installing a more current version of LAMMPS from source
directly.

View File

@ -2,11 +2,10 @@ Download an executable for Mac
------------------------------
LAMMPS can be downloaded, built, and configured for OS X on a Mac with
`Homebrew <homebrew_>`_. (Alternatively, see the installation
instructions for :doc:`downloading an executable via Conda
<Install_conda>`.) The following LAMMPS packages are unavailable at
this time because of additional requirements not yet met: GPU, KOKKOS,
LATTE, MSCG, MPIIO, POEMS, VORONOI.
After installing Homebrew, you can install LAMMPS on your system with
the following commands:
@ -25,16 +24,16 @@ Lennard-Jones benchmark file:
.. code-block:: bash

   brew test lammps -v
The LAMMPS binary is built with the :ref:`KIM package <kim>`, which
results in Homebrew also installing the `kim-api` binaries when LAMMPS
is installed. In order to use potentials from `openkim.org
<openkim_>`_, you can install the `openkim-models` package
.. code-block:: bash

   brew install openkim-models
If you have problems with the installation, you can post issues to
`this link <https://github.com/Homebrew/homebrew-core/issues>`_.
.. _homebrew: https://brew.sh
@@ -9,19 +9,17 @@ of the `LAMMPS website <lws_>`_.
.. _older: https://download.lammps.org/tars/
.. _lws: https://www.lammps.org
You have two choices of tarballs, either the most recent stable release
or the most current patch or feature release. Stable releases occur a
few times per year, and undergo more testing before release. Feature
releases occur every 4 to 8 weeks. The new contents in all feature
releases are listed on the `bug and feature page <bug_>`_ of the LAMMPS
homepage.
Both tarballs include LAMMPS documentation (HTML and PDF files)
corresponding to that version. The download page also has an option
to download the current-version LAMMPS documentation by itself.
Tarballs of older LAMMPS versions can also be downloaded from `this page
<older_>`_.
Once you have a tarball, unzip and untar it with the following
command:
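A minimal sketch of that command, assuming a gzipped tarball whose name
begins with lammps-:

.. code-block:: bash

   # unpack the downloaded tarball in one step
   tar -xzvf lammps*.tar.gz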
@@ -38,13 +36,14 @@ in its name, e.g. lammps-23Jun18.
You can also download compressed tar or zip archives from the
"Assets" sections of the `LAMMPS GitHub releases site <git_>`_.
The file name will be lammps-<version>.zip, which can be unzipped
with the following command to create a lammps-<version> directory:
.. code-block:: bash

   unzip lammps*.zip
This version corresponds to the selected LAMMPS patch or stable release.
.. _git: https://github.com/lammps/lammps/releases
@@ -33,7 +33,7 @@ Windows, once it is installed, in both serial and parallel.
When you download the installer package, you run it on your Windows
machine. It will then prompt you with a dialog, where you can choose
the installation directory, unpack and copy several executables,
potential files, documentation PDFs, selected example files, etc. It
will then update a few system settings (e.g. PATH, LAMMPS_POTENTIALS)
and add an entry into the Start Menu (with references to the
documentation, LAMMPS homepage and more). From that menu, there is
@@ -41,10 +41,10 @@ also a link to an uninstaller that removes the files and undoes the
environment manipulations.
Note that to update to a newer version of LAMMPS, you should typically
uninstall the version you currently have, download a new installer, and
go through the installation procedure described above, i.e. the same
procedure as for installing or updating most Windows programs. You can
install multiple versions of LAMMPS (in different directories), but only
the executable for the last-installed package will be found
automatically, so this should only be done for debugging purposes.
@@ -10,7 +10,6 @@ These pages provide a brief introduction to LAMMPS.
Manual_version
Intro_features
Intro_nonfeatures
Intro_portability
Intro_opensource
Intro_authors
Intro_citing