Compare commits

...

319 Commits
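Every commit message below carries a git-svn-id trailer, which indicates this history was mirrored from a Subversion repository via git-svn: the trailer records the upstream SVN URL, the SVN revision (e.g. trunk@16053), and the repository UUID. As a minimal illustration (assuming a local clone of the mirror), the SVN revision behind a commit can be recovered from its message; the SHA used here is simply the first one in the list:

# print the full commit message, then pull the SVN revision out of its git-svn-id trailer
git show -s --format=%B 78533e25dc | grep -o 'trunk@[0-9]*'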

Author SHA1 Message Date
78533e25dc git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16053 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-26 21:05:57 +00:00
be3cacddef git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16052 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-26 21:05:55 +00:00
5d3e441e59 sync with latest GHub bug fixes
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16051 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-26 20:54:58 +00:00
43e2d2443f Added validated parameter file for 2NN Tungsten potential
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16050 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-26 02:07:37 +00:00
406a4da000 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16049 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-23 23:04:40 +00:00
841cae3682 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16048 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-23 22:40:21 +00:00
28af591168 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16046 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-20 20:23:11 +00:00
20805d47b3 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16045 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-20 20:23:08 +00:00
4008b967ee git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16044 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-20 20:21:13 +00:00
c79a21970b sync latest bug fixes from GHub
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16043 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-20 20:20:31 +00:00
c771e00a1c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16042 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-19 20:47:38 +00:00
507b038f41 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16041 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-19 15:50:30 +00:00
bd4d5bdcac git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16040 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-19 15:50:18 +00:00
e0d0ef12cc git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16039 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-19 15:14:59 +00:00
43370b75a1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16038 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-19 15:11:33 +00:00
60f2b25b3f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16037 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-19 15:11:17 +00:00
9a3d05a86a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16036 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 23:40:20 +00:00
88eca7c181 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16035 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 23:32:26 +00:00
298e62ae70 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16034 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 23:32:09 +00:00
6ac456e751 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16033 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 23:15:15 +00:00
02b6519599 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16032 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 22:08:26 +00:00
b471be9638 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16031 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 20:07:25 +00:00
019d28ae7d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16030 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 20:04:19 +00:00
062450abc8 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16029 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 18:36:08 +00:00
e13633b881 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16028 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-18 18:34:39 +00:00
52c45f67f3 sync with GHub and new OXDNA user package
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16027 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-18 18:33:29 +00:00
1f0e32e0ae git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16024 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 17:06:43 +00:00
465f33d3f4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16023 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 17:06:41 +00:00
fdef2e7011 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16022 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 17:06:28 +00:00
e878b8fd52 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16021 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 16:53:53 +00:00
460202c149 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16020 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 16:53:04 +00:00
e6adb5c2a1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16019 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 16:51:53 +00:00
9b01275837 neighbor list bug fixes, new compute coord/atom option
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16018 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-17 16:44:40 +00:00
23cfb88bb9 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16017 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-17 16:00:09 +00:00
645d30dfa4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16016 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-11 20:57:49 +00:00
6dc24ea90d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16015 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-11 17:25:33 +00:00
1820b6785f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16014 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-10 23:31:32 +00:00
9c01b1b75f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16013 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-10 21:33:27 +00:00
9619521426 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16011 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:31:56 +00:00
f5b8906eb6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16010 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:31:54 +00:00
eb79a5f03c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16009 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:30:53 +00:00
9daf579909 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16008 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:30:12 +00:00
515a68d663 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16007 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:29:27 +00:00
2bf46e0c11 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16006 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:20:07 +00:00
de83ad9df1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16003 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 20:11:26 +00:00
27805f36b2 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16002 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 19:56:54 +00:00
f9f2c96d17 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16001 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 18:26:52 +00:00
c093ec15a5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@16000 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 18:15:09 +00:00
663f6403ef git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15999 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 18:15:00 +00:00
f22fcaed9f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15998 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 17:47:31 +00:00
fd2bdcd5d5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15997 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 17:46:25 +00:00
f8ee20372b git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15996 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 17:39:37 +00:00
3e5991f7da git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15995 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 17:38:53 +00:00
8423271025 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15994 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-09 16:17:20 +00:00
77339b61b7 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15992 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-06 18:12:13 +00:00
72c5cf7045 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15991 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-06 18:12:10 +00:00
fd8876234a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15990 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-06 17:24:38 +00:00
2b77cb5c5d sync with GHub
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15989 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-06 17:16:03 +00:00
a56413c0da git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15988 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 23:38:44 +00:00
8b3c8341e1 Updating modify_kokkos to match modify
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15987 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-05 22:50:55 +00:00
6e26482003 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15986 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 22:08:14 +00:00
9e91ee9ffc Updating modify_kokkos to match modify
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15985 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2017-01-05 21:31:06 +00:00
171530acc1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15984 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:55:13 +00:00
58fb78379d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15983 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:13:13 +00:00
102f30005c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15982 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:12:52 +00:00
f7bd264706 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15981 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:11:35 +00:00
35a929015e git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15980 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:10:41 +00:00
13a8dbca4a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15979 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:09:54 +00:00
5a46527886 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15978 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:08:12 +00:00
c0165e1261 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15977 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 20:07:11 +00:00
f55a51e1b5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15976 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 19:57:44 +00:00
b597aa6dac git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15975 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 19:56:46 +00:00
702b480cc0 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15974 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 19:56:42 +00:00
07c0fccf7b git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15973 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 19:56:37 +00:00
d85648ae2d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15972 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-05 19:56:10 +00:00
9c1de594e8 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15971 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-04 23:26:22 +00:00
139a159a5d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15970 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-04 23:25:56 +00:00
2854350708 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15969 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-04 19:55:04 +00:00
d289d195e9 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15968 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-04 16:12:26 +00:00
ac342f3687 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15967 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-04 15:52:53 +00:00
0f819c1e25 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15966 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-03 23:41:02 +00:00
c28560301d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15965 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-03 23:36:00 +00:00
2449e14f6d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15964 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2017-01-03 23:31:04 +00:00
8486258c73 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15959 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-21 16:53:12 +00:00
e1b30b2787 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15958 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-21 16:53:09 +00:00
a47b59c303 sync with GHub
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15957 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-12-21 16:51:39 +00:00
4732f90521 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15956 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-21 16:44:36 +00:00
7339480095 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15952 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-17 00:46:28 +00:00
68a358a0f4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15951 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-17 00:43:52 +00:00
34216ead1f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15950 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 20:08:46 +00:00
0bb23c5810 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15948 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 18:36:32 +00:00
f9f487f5ca git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15947 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 18:36:30 +00:00
44fd05c97d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15946 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 18:32:09 +00:00
4b8b9b97cc git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15944 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 17:29:46 +00:00
fbc8fa111a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15943 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 17:29:44 +00:00
c71bba1980 sync with GHub
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15942 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-12-16 17:26:10 +00:00
47a6449148 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15941 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 17:22:59 +00:00
e72aa59d83 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15940 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 17:22:46 +00:00
1b7e8eb7aa git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15939 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-16 16:24:33 +00:00
bee06997fb git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15938 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 16:11:08 +00:00
60e08ad7b7 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15936 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 00:10:56 +00:00
104ad18e0c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15935 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 00:10:53 +00:00
155dccacda git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15933 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 00:04:50 +00:00
35f8a9009d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15932 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 00:04:35 +00:00
5f04559071 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15931 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 00:00:20 +00:00
89719fb171 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15930 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-14 00:00:07 +00:00
6963dd2d83 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15929 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-13 23:58:45 +00:00
11e436ab43 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15928 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-13 23:44:51 +00:00
b0d24754a3 changes to all neighbor classes
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15927 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-12-13 23:27:33 +00:00
8320f9dcee git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15926 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-13 23:20:43 +00:00
45715f993c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15925 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-13 22:56:08 +00:00
abab6e8d99 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15924 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-13 22:46:40 +00:00
3846395e09 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15923 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-12-09 15:31:23 +00:00
c24d10ad7c Fixing bug in ewald disp
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15922 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-12-08 20:03:23 +00:00
e14a2bf12d Tweaking ewald disp error estimator
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15921 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-12-08 16:54:30 +00:00
2d36ae2f8d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15920 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 23:04:18 +00:00
0d64dd3eea git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15919 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 22:49:05 +00:00
8bd4c37e0e git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15918 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 22:45:53 +00:00
a70e2f6db4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15916 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 21:04:12 +00:00
8d7ba77ab2 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15915 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 21:04:08 +00:00
745050a374 sync with GHub
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15914 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-30 21:03:35 +00:00
c2b852f940 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15913 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 20:59:16 +00:00
489272ed91 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15912 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 20:57:03 +00:00
a5ee9da9c5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15911 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 20:56:54 +00:00
7a3103c911 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15910 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 20:10:37 +00:00
ecfa2d85f5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15908 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-30 20:06:19 +00:00
9b9291b417 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15906 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 20:18:28 +00:00
fa304895ea git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15905 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 20:17:01 +00:00
64c021824a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15904 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 18:17:52 +00:00
6a5a95d0b0 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15902 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 16:21:57 +00:00
810a7bca52 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15901 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 16:21:55 +00:00
09a388e5d4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15897 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 16:10:37 +00:00
09eb377cb8 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15896 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:58:36 +00:00
a70d5f71b9 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15895 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:55:07 +00:00
d692a47d73 sync with recent GHub PRs
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15894 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-22 15:52:42 +00:00
40762e69ce git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15893 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:47:10 +00:00
3856965055 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15892 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:43:59 +00:00
a4eaf200b5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15891 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:43:28 +00:00
1a3a1b1e72 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15890 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:20:24 +00:00
da9bea2355 new temper_grem command
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15889 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-22 15:16:29 +00:00
98b025d053 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15888 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-22 15:14:36 +00:00
2af2091bd2 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15886 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 23:13:20 +00:00
6471c2750b git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15885 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 23:13:17 +00:00
76182cb892 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15884 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 23:10:42 +00:00
dad749b37f Updated explanation of how virial is computed
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15883 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-17 19:36:20 +00:00
0701201e03 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15882 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 17:37:27 +00:00
80d6518602 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15880 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 17:21:17 +00:00
e81c5e3fdf git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15879 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 16:23:37 +00:00
47be003191 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15878 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-17 16:15:56 +00:00
41745a3b90 pair vashishta/kk, pair tersoff/mod/c, pair agni
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15877 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-17 16:06:01 +00:00
5692ea7977 Added note on pressure for periodic systems
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15876 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-16 19:53:05 +00:00
597f874f3d Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15875 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-15 19:37:05 +00:00
2b82e83d13 Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15874 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-15 19:33:36 +00:00
23b468e74f Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15873 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-15 17:41:16 +00:00
16efa68d35 Fixing clang compile error
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15872 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-15 17:07:06 +00:00
fa8d7c1d6e Adding missing Kokkos dependency
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15871 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-15 15:42:30 +00:00
846f11db5c Fixing bug with Kokkos/CUDA
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15870 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-15 15:28:36 +00:00
1ee5247500 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15869 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-14 21:49:29 +00:00
1d8db38a75 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15868 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-11 15:59:32 +00:00
f378934817 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15866 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:47:02 +00:00
aa8cce5b06 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15865 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:46:59 +00:00
57c0d77c71 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15863 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:42:46 +00:00
b1f7de2776 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15862 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:41:53 +00:00
ebe6ee813c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15861 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:41:17 +00:00
b222f8b946 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15860 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:40:33 +00:00
6b0a8628f2 sync pointer changes with GHub and 2 new pair styles
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15859 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-09 22:37:03 +00:00
5c141edca7 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15858 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 22:35:41 +00:00
3a2cea52d8 Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15857 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-09 20:00:27 +00:00
45f2940225 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15856 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-09 18:45:21 +00:00
07bb6fe443 Adding support for CommTiledKokkos
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15854 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-08 16:23:24 +00:00
b6b7c3ad67 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15852 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-08 16:04:44 +00:00
55fa0f2e8a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15851 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-08 16:04:40 +00:00
c770e270f2 Adding support for CommTiledKokkos
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15848 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-08 15:43:15 +00:00
d077a8b024 Adding support for CommTiledKokkos
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15847 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-08 15:30:12 +00:00
e147701e87 Updating Kokkos phi Makefile
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15846 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-08 15:26:08 +00:00
cc0be86470 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15841 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-07 15:23:10 +00:00
34966b3a38 Added 4-stage version of coord2ssaAIR
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15840 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-05 12:48:53 +00:00
9197eea89b Fixed a few errors and updated citations
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15838 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-04 17:10:33 +00:00
b682c8d98a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15837 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-04 17:08:10 +00:00
c7d3af81f1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15836 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-04 17:07:58 +00:00
8ded262792 sync with GHub
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15835 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-04 17:01:14 +00:00
7830537091 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15834 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-11-04 16:54:31 +00:00
e24fff05b3 Fixed a few things I forgot
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15833 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-03 00:26:41 +00:00
30e14c7f37 Added threebody tests for sw, tersoff, vashishta
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15832 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-03 00:16:18 +00:00
5ffdbc1a97 Edited some of the comments in the file headers
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15831 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-02 23:31:41 +00:00
639b22cd56 Updating docs for Kokkos
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15830 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-01 15:22:44 +00:00
8e0b69478a Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15829 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-11-01 00:22:36 +00:00
dd296bf237 Improving performance of Kokkos ReaxFF
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15828 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-31 22:00:06 +00:00
8de4680898 Adding short neighbor lists
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15827 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-28 18:42:43 +00:00
ef4dc21c15 Adding short neighbor list to tersoff Kokkos from C. Trott
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15826 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-27 21:55:25 +00:00
ceff3565d6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15825 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-27 17:09:15 +00:00
41f666db52 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15823 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-27 15:32:57 +00:00
f2df16e0f0 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15822 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-27 15:32:53 +00:00
4475897049 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15821 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-27 15:32:32 +00:00
02ae428e37 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15820 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-27 15:28:59 +00:00
21887831ff git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15819 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-27 14:29:02 +00:00
7a13d54a0d Fixed typo in temperature formula
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15818 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-26 22:28:56 +00:00
01209d450c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15817 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-26 22:24:50 +00:00
bc250ab7b9 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15816 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-26 22:24:31 +00:00
0270a33ab4 Fixing clang compile error
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15815 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-26 20:28:23 +00:00
287c57daf4 Adding Kokkos error check
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15814 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-26 17:36:13 +00:00
7d3d315753 Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15813 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-26 16:46:30 +00:00
77fa5ee08d Fixing Kokkos bug
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15812 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-26 16:41:57 +00:00
0fd26f7b9d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15811 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 22:49:56 +00:00
f092df34d4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15810 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 22:49:20 +00:00
e517e5a5a5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15809 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 22:48:44 +00:00
79250a7916 Adding support for FixReaxCBonds and FixReaxCSpecies to the Kokkos ReaxFF
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15808 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-25 22:31:05 +00:00
3de6f5b9c3 Adding support for FixReaxCBonds and FixReaxCSpecies to the Kokkos ReaxFF
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15807 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-25 22:15:00 +00:00
b42db824da Adding support for FixReaxCBonds and FixReaxCBonds to the Kokkos ReaxFF
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15806 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-25 20:47:40 +00:00
c587a3106f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15805 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 19:26:35 +00:00
d7304c5843 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15804 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 16:55:25 +00:00
8ed519045f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15803 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 16:55:11 +00:00
18b452c9c2 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15802 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 16:55:06 +00:00
8770adf78a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15801 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 16:54:48 +00:00
2a07f06924 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15800 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-25 16:53:20 +00:00
bb78ea0248 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15799 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-25 16:33:34 +00:00
bfdaa09a72 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15798 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-25 15:47:44 +00:00
a1cb91486b git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15796 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-24 14:19:10 +00:00
b9fc540733 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15795 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-24 14:16:13 +00:00
c0b98f5299 Recommitting reverted change
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15794 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-19 19:02:08 +00:00
5d076bafea git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15792 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 17:03:39 +00:00
51e7c77aec git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15791 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 17:03:33 +00:00
8fa049edda git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15790 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 16:29:44 +00:00
218ab76d0b git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15789 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 16:27:17 +00:00
09a3a259c2 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15788 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 16:27:13 +00:00
aab7de9579 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15787 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-19 14:58:39 +00:00
616724091e git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15786 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 14:50:50 +00:00
252c52b9b8 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15785 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 14:43:54 +00:00
3089edfce1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15784 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 14:42:40 +00:00
82badf85a4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15783 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-19 14:42:24 +00:00
6d759f1b6f sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15782 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-19 14:39:50 +00:00
2babec1b38 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15780 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 19:09:14 +00:00
15dbceee76 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15779 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 19:09:11 +00:00
49f6e138e6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15778 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 19:04:11 +00:00
773aec0f1c sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15777 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-18 18:59:37 +00:00
a9b065ca3a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15776 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 17:14:54 +00:00
bc43acd4e9 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15775 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 17:10:29 +00:00
95ed575b66 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15774 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 15:40:09 +00:00
4f1ea743bd git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15773 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-18 15:39:51 +00:00
9a6dc87fa6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15772 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-17 15:41:42 +00:00
daf719470f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15771 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-14 14:36:10 +00:00
fdd61cf314 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15769 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-13 23:03:30 +00:00
3593ca7f48 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15768 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-13 23:03:27 +00:00
d58e86625b sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15767 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-13 23:01:03 +00:00
06fa6ce105 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15766 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-13 19:58:01 +00:00
c3c2587fef Added fix for problem with energy_full and shake
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15765 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-13 19:11:48 +00:00
115d67c1a0 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15764 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-13 19:07:38 +00:00
011568fae3 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15763 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-13 19:07:22 +00:00
0f1c56d0fc git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15762 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-13 19:04:21 +00:00
2f98f4ad98 Added fix for problem with energy_full and shake
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15761 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-13 18:56:05 +00:00
0145275cd2 Added fix for problem with energy_full and shake
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15760 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-13 18:52:26 +00:00
1ce8f1479e git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15759 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 23:39:29 +00:00
5661aea6d5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15758 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 23:39:17 +00:00
6ec1550081 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15757 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 21:13:30 +00:00
c660a813e4 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15756 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 20:52:53 +00:00
96eaa5d59f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15754 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 13:35:01 +00:00
409fe28ee9 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15753 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 13:34:58 +00:00
ab2998e4dd git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15752 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 13:34:48 +00:00
fb4cbf1a4a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15751 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-12 13:34:37 +00:00
1d501f05e4 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15750 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-12 13:32:56 +00:00
a6ceebf5b1 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15749 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 20:33:10 +00:00
338f6ae70a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15748 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 19:51:46 +00:00
7e37c5aecb sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15747 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-11 19:42:15 +00:00
e710053de6 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15746 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-11 18:43:51 +00:00
7a4da54a71 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15744 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 18:34:01 +00:00
d1145f14ee git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15743 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 18:33:59 +00:00
b195d32105 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15742 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 18:23:52 +00:00
66b073415b git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15741 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 18:23:09 +00:00
6888a80d7d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15740 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-11 18:20:53 +00:00
59215db1a3 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15739 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-10 20:02:41 +00:00
dcdb53cc79 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15737 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-10 19:40:11 +00:00
b31b4093ca Fixing Kokkos compile error
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15736 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-10 18:17:51 +00:00
c4ab7c8245 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15735 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-10 15:16:43 +00:00
c35d0d77e0 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15734 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 22:46:55 +00:00
fda969f1c9 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15733 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-06 21:50:57 +00:00
50ea9d151f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15731 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 21:41:42 +00:00
325aa50c67 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15730 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 21:41:40 +00:00
3b67310233 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15729 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 21:41:21 +00:00
5c8fb1d55c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15728 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 21:41:07 +00:00
94ebde04e3 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15722 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 21:06:14 +00:00
720c352a08 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15721 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 20:03:35 +00:00
65585e69a6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15720 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 20:03:13 +00:00
cd8d18dc71 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15719 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 20:01:30 +00:00
5bc562b095 Fixing Kokkos bugs
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15718 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-06 19:48:28 +00:00
2a52034786 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15717 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 19:17:50 +00:00
b35352153c sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15716 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-06 19:05:57 +00:00
4f01a3055a git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15715 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:59:16 +00:00
44ef94958c git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15714 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:46:54 +00:00
54413ce1b7 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15713 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:46:21 +00:00
2d6f118846 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15712 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:44:52 +00:00
47b3de2554 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15711 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:44:36 +00:00
e51650664f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15710 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:43:25 +00:00
df0694e4e5 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15709 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:43:06 +00:00
a227a63ddb git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15708 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:42:51 +00:00
3f7821ba1f git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15707 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:09:53 +00:00
2a93bca2a6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15706 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:09:45 +00:00
f9ff3bd0bd git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15705 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:08:43 +00:00
9327eb756d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15704 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-06 18:03:27 +00:00
8a8c9fa8e8 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15701 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 21:41:11 +00:00
f4948ad5ff git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15700 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 21:03:06 +00:00
f86f711115 python lib callback issue fixed
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15699 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-05 16:55:23 +00:00
26da91a157 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15698 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 16:32:33 +00:00
82cac1a0e6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15697 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 15:43:36 +00:00
ce665801ea git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15696 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 15:43:33 +00:00
28f88a6085 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15695 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 15:39:31 +00:00
44a8d082e8 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15694 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 15:32:07 +00:00
998c5b7d2d git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15693 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-05 15:07:50 +00:00
05c027fcaf git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15692 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-04 15:11:27 +00:00
57dfa51b97 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15691 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-10-04 00:00:38 +00:00
dc2bd269d6 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15690 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-03 23:56:44 +00:00
d86416aee3 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15689 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-03 23:56:06 +00:00
58f1297b61 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15688 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-10-03 22:23:47 +00:00
87540fbac0 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15684 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-09-30 15:25:09 +00:00
0311121190 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15683 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-09-30 15:25:07 +00:00
49e66858ab sync with GH: colvars update, add forgotten CMAP potential files
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15682 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-09-30 15:22:12 +00:00
40ec180798 git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15681 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-09-30 15:16:16 +00:00
bcd4dad2f1 sync with GH
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15680 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-09-29 21:52:57 +00:00
f60331a5fb git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15678 f3b2605a-c512-4ea7-a41b-209d697bcdaa 2016-09-29 20:32:11 +00:00
d7bb53e4d2 Fixing Kokkos bug and adding host version of CommTiled
git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@15676 f3b2605a-c512-4ea7-a41b-209d697bcdaa
2016-09-29 20:21:39 +00:00
2524 changed files with 361910 additions and 189947 deletions
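For reference, a summary like the one above can be regenerated from a local clone. The sketch below is only illustrative: BASE and HEAD are hypothetical placeholders for the two ends of this compare range (the actual refs are not shown here), and it assumes BASE is an ancestor of HEAD on the mirrored trunk.

# list the commits in the compare range, oldest first (BASE/HEAD are placeholders)
git log --oneline --reverse BASE..HEAD

# count the commits in the range (cf. the commit count at the top of the page)
git rev-list --count BASE..HEAD

# summarize changed files, insertions, and deletions across the range
git diff --shortstat BASE..HEAD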

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# FENE beadspring benchmark
units lj
@@ -43,25 +43,25 @@ Neighbor list info ...
master list distance cutoff = 1.52
ghost atom cutoff = 1.52
binsize = 0.76 -> bins = 45 45 45
Memory usage per processor = 11.5189 Mbytes
Memory usage per processor = 12.0423 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0.97029772 0.44484087 20.494523 22.394765 4.6721833
100 0.9729966 0.4361122 20.507698 22.40326 4.6548819
Loop time of 0.978585 on 1 procs for 100 steps with 32000 atoms
Loop time of 0.977647 on 1 procs for 100 steps with 32000 atoms
Performance: 105948.895 tau/day, 102.188 timesteps/s
100.0% CPU use with 1 MPI tasks x no OpenMP threads
Performance: 106050.541 tau/day, 102.286 timesteps/s
99.9% CPU use with 1 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.19562 | 0.19562 | 0.19562 | 0.0 | 19.99
Bond | 0.087475 | 0.087475 | 0.087475 | 0.0 | 8.94
Neigh | 0.44861 | 0.44861 | 0.44861 | 0.0 | 45.84
Comm | 0.032932 | 0.032932 | 0.032932 | 0.0 | 3.37
Output | 0.00010395 | 0.00010395 | 0.00010395 | 0.0 | 0.01
Modify | 0.19413 | 0.19413 | 0.19413 | 0.0 | 19.84
Other | | 0.01972 | | | 2.02
Pair | 0.19421 | 0.19421 | 0.19421 | 0.0 | 19.86
Bond | 0.08741 | 0.08741 | 0.08741 | 0.0 | 8.94
Neigh | 0.45791 | 0.45791 | 0.45791 | 0.0 | 46.84
Comm | 0.032649 | 0.032649 | 0.032649 | 0.0 | 3.34
Output | 0.00012207 | 0.00012207 | 0.00012207 | 0.0 | 0.01
Modify | 0.18071 | 0.18071 | 0.18071 | 0.0 | 18.48
Other | | 0.02464 | | | 2.52
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# FENE beadspring benchmark
units lj
@@ -43,25 +43,25 @@ Neighbor list info ...
master list distance cutoff = 1.52
ghost atom cutoff = 1.52
binsize = 0.76 -> bins = 45 45 45
Memory usage per processor = 3.91518 Mbytes
Memory usage per processor = 4.14663 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0.97029772 0.44484087 20.494523 22.394765 4.6721833
100 0.97145835 0.43803883 20.502691 22.397872 4.626988
Loop time of 0.271187 on 4 procs for 100 steps with 32000 atoms
Loop time of 0.269205 on 4 procs for 100 steps with 32000 atoms
Performance: 382319.453 tau/day, 368.749 timesteps/s
99.6% CPU use with 4 MPI tasks x no OpenMP threads
Performance: 385133.446 tau/day, 371.464 timesteps/s
99.8% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.048621 | 0.050076 | 0.051229 | 0.4 | 18.47
Bond | 0.022254 | 0.022942 | 0.023567 | 0.3 | 8.46
Neigh | 0.11873 | 0.11881 | 0.11887 | 0.0 | 43.81
Comm | 0.019066 | 0.021357 | 0.024297 | 1.3 | 7.88
Output | 5.0068e-05 | 5.5015e-05 | 6.1035e-05 | 0.1 | 0.02
Modify | 0.048737 | 0.050198 | 0.051231 | 0.4 | 18.51
Other | | 0.007751 | | | 2.86
Pair | 0.049383 | 0.049756 | 0.049988 | 0.1 | 18.48
Bond | 0.022701 | 0.022813 | 0.022872 | 0.0 | 8.47
Neigh | 0.11982 | 0.12002 | 0.12018 | 0.0 | 44.58
Comm | 0.020274 | 0.021077 | 0.022348 | 0.5 | 7.83
Output | 5.3167e-05 | 5.6148e-05 | 6.3181e-05 | 0.1 | 0.02
Modify | 0.046276 | 0.046809 | 0.047016 | 0.1 | 17.39
Other | | 0.008669 | | | 3.22
Nlocal: 8000 ave 8030 max 7974 min
Histogram: 1 0 0 1 0 1 0 0 0 1

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# FENE beadspring benchmark
variable x index 1
@@ -59,25 +59,25 @@ Neighbor list info ...
master list distance cutoff = 1.52
ghost atom cutoff = 1.52
binsize = 0.76 -> bins = 89 89 45
Memory usage per processor = 12.8735 Mbytes
Memory usage per processor = 13.2993 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0.97027498 0.44484087 20.494523 22.394765 4.6721833
100 0.97682955 0.44239968 20.500229 22.407862 4.6527025
Loop time of 1.20889 on 4 procs for 100 steps with 128000 atoms
Loop time of 1.14845 on 4 procs for 100 steps with 128000 atoms
Performance: 85764.410 tau/day, 82.720 timesteps/s
99.8% CPU use with 4 MPI tasks x no OpenMP threads
Performance: 90277.919 tau/day, 87.074 timesteps/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.21738 | 0.23306 | 0.23926 | 1.9 | 19.28
Bond | 0.094536 | 0.10196 | 0.10534 | 1.4 | 8.43
Neigh | 0.52311 | 0.52392 | 0.52519 | 0.1 | 43.34
Comm | 0.090161 | 0.10022 | 0.12557 | 4.7 | 8.29
Output | 0.00012207 | 0.00017327 | 0.00019598 | 0.2 | 0.01
Modify | 0.19662 | 0.20262 | 0.20672 | 0.8 | 16.76
Other | | 0.04694 | | | 3.88
Pair | 0.2203 | 0.22207 | 0.22386 | 0.3 | 19.34
Bond | 0.094861 | 0.095302 | 0.095988 | 0.1 | 8.30
Neigh | 0.52127 | 0.5216 | 0.52189 | 0.0 | 45.42
Comm | 0.079585 | 0.082159 | 0.084366 | 0.7 | 7.15
Output | 0.00013304 | 0.00015306 | 0.00018501 | 0.2 | 0.01
Modify | 0.18351 | 0.18419 | 0.1856 | 0.2 | 16.04
Other | | 0.04298 | | | 3.74
Nlocal: 32000 ave 32015 max 31983 min
Histogram: 1 0 1 0 0 0 0 0 1 1

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# LAMMPS benchmark of granular flow
# chute flow of 32000 atoms with frozen base at 26 degrees
@@ -47,24 +47,24 @@ Neighbor list info ...
master list distance cutoff = 1.1
ghost atom cutoff = 1.1
binsize = 0.55 -> bins = 73 37 68
Memory usage per processor = 15.567 Mbytes
Step Atoms KinEng 1 Volume
Memory usage per processor = 16.0904 Mbytes
Step Atoms KinEng c_1 Volume
0 32000 784139.13 1601.1263 29833.783
100 32000 784292.08 1571.0968 29834.707
Loop time of 0.550482 on 1 procs for 100 steps with 32000 atoms
Loop time of 0.534174 on 1 procs for 100 steps with 32000 atoms
Performance: 1569.534 tau/day, 181.659 timesteps/s
100.1% CPU use with 1 MPI tasks x no OpenMP threads
Performance: 1617.451 tau/day, 187.205 timesteps/s
99.8% CPU use with 1 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.33849 | 0.33849 | 0.33849 | 0.0 | 61.49
Neigh | 0.040353 | 0.040353 | 0.040353 | 0.0 | 7.33
Comm | 0.018023 | 0.018023 | 0.018023 | 0.0 | 3.27
Output | 0.00020385 | 0.00020385 | 0.00020385 | 0.0 | 0.04
Modify | 0.13155 | 0.13155 | 0.13155 | 0.0 | 23.90
Other | | 0.02186 | | | 3.97
Pair | 0.33346 | 0.33346 | 0.33346 | 0.0 | 62.43
Neigh | 0.043902 | 0.043902 | 0.043902 | 0.0 | 8.22
Comm | 0.018391 | 0.018391 | 0.018391 | 0.0 | 3.44
Output | 0.00022411 | 0.00022411 | 0.00022411 | 0.0 | 0.04
Modify | 0.11666 | 0.11666 | 0.11666 | 0.0 | 21.84
Other | | 0.02153 | | | 4.03
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# LAMMPS benchmark of granular flow
# chute flow of 32000 atoms with frozen base at 26 degrees
@@ -47,24 +47,24 @@ Neighbor list info ...
master list distance cutoff = 1.1
ghost atom cutoff = 1.1
binsize = 0.55 -> bins = 73 37 68
Memory usage per processor = 6.81783 Mbytes
Step Atoms KinEng 1 Volume
Memory usage per processor = 7.04927 Mbytes
Step Atoms KinEng c_1 Volume
0 32000 784139.13 1601.1263 29833.783
100 32000 784292.08 1571.0968 29834.707
Loop time of 0.13141 on 4 procs for 100 steps with 32000 atoms
Loop time of 0.171815 on 4 procs for 100 steps with 32000 atoms
Performance: 6574.833 tau/day, 760.976 timesteps/s
99.3% CPU use with 4 MPI tasks x no OpenMP threads
Performance: 5028.653 tau/day, 582.020 timesteps/s
99.7% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.062505 | 0.067 | 0.07152 | 1.5 | 50.99
Neigh | 0.010041 | 0.0101 | 0.010178 | 0.1 | 7.69
Comm | 0.012347 | 0.012895 | 0.013444 | 0.5 | 9.81
Output | 6.3896e-05 | 0.00010294 | 0.00014091 | 0.3 | 0.08
Modify | 0.031802 | 0.032348 | 0.032897 | 0.3 | 24.62
Other | | 0.008965 | | | 6.82
Pair | 0.093691 | 0.096898 | 0.10005 | 0.8 | 56.40
Neigh | 0.011976 | 0.012059 | 0.012146 | 0.1 | 7.02
Comm | 0.016384 | 0.017418 | 0.018465 | 0.8 | 10.14
Output | 7.7963e-05 | 0.00010747 | 0.00013304 | 0.2 | 0.06
Modify | 0.031744 | 0.031943 | 0.032167 | 0.1 | 18.59
Other | | 0.01339 | | | 7.79
Nlocal: 8000 ave 8008 max 7992 min
Histogram: 2 0 0 0 0 0 0 0 0 2

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# LAMMPS benchmark of granular flow
# chute flow of 32000 atoms with frozen base at 26 degrees
@@ -57,24 +57,24 @@ Neighbor list info ...
master list distance cutoff = 1.1
ghost atom cutoff = 1.1
binsize = 0.55 -> bins = 146 73 68
Memory usage per processor = 15.7007 Mbytes
Step Atoms KinEng 1 Volume
Memory usage per processor = 16.1265 Mbytes
Step Atoms KinEng c_1 Volume
0 128000 3136556.5 6404.5051 119335.13
100 128000 3137168.3 6284.3873 119338.83
Loop time of 0.906913 on 4 procs for 100 steps with 128000 atoms
Loop time of 0.832365 on 4 procs for 100 steps with 128000 atoms
Performance: 952.683 tau/day, 110.264 timesteps/s
99.7% CPU use with 4 MPI tasks x no OpenMP threads
Performance: 1038.006 tau/day, 120.140 timesteps/s
99.8% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.51454 | 0.53094 | 0.55381 | 2.0 | 58.54
Neigh | 0.042597 | 0.043726 | 0.045801 | 0.6 | 4.82
Comm | 0.063027 | 0.064657 | 0.067367 | 0.7 | 7.13
Output | 0.00024891 | 0.00059718 | 0.00086498 | 1.0 | 0.07
Modify | 0.16508 | 0.17656 | 0.1925 | 2.6 | 19.47
Other | | 0.09043 | | | 9.97
Pair | 0.5178 | 0.52208 | 0.52793 | 0.5 | 62.72
Neigh | 0.047003 | 0.047113 | 0.047224 | 0.0 | 5.66
Comm | 0.05233 | 0.052988 | 0.053722 | 0.2 | 6.37
Output | 0.00024986 | 0.00032717 | 0.00036693 | 0.3 | 0.04
Modify | 0.15517 | 0.15627 | 0.15808 | 0.3 | 18.77
Other | | 0.0536 | | | 6.44
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 4 0 0 0 0 0 0 0 0 0
@@ -87,4 +87,4 @@ Total # of neighbors = 460532
Ave neighs/atom = 3.59791
Neighbor list builds = 2
Dangerous builds = 0
Total wall time: 0:00:01
Total wall time: 0:00:00

@@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# bulk Cu lattice
variable x index 1
@@ -49,25 +49,25 @@ Neighbor list info ...
master list distance cutoff = 5.95
ghost atom cutoff = 5.95
binsize = 2.975 -> bins = 25 25 25
Memory usage per processor = 10.2238 Mbytes
Memory usage per processor = 11.2238 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -113280 0 -106662.09 18703.573
50 781.69049 -109873.35 0 -106640.13 52273.088
100 801.832 -109957.3 0 -106640.77 51322.821
Loop time of 5.90097 on 1 procs for 100 steps with 32000 atoms
Loop time of 5.96529 on 1 procs for 100 steps with 32000 atoms
Performance: 7.321 ns/day, 3.278 hours/ns, 16.946 timesteps/s
Performance: 7.242 ns/day, 3.314 hours/ns, 16.764 timesteps/s
99.9% CPU use with 1 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.2121 | 5.2121 | 5.2121 | 0.0 | 88.33
Neigh | 0.58212 | 0.58212 | 0.58212 | 0.0 | 9.86
Comm | 0.030392 | 0.030392 | 0.030392 | 0.0 | 0.52
Output | 0.00023389 | 0.00023389 | 0.00023389 | 0.0 | 0.00
Modify | 0.060871 | 0.060871 | 0.060871 | 0.0 | 1.03
Other | | 0.01527 | | | 0.26
Pair | 5.2743 | 5.2743 | 5.2743 | 0.0 | 88.42
Neigh | 0.59212 | 0.59212 | 0.59212 | 0.0 | 9.93
Comm | 0.030399 | 0.030399 | 0.030399 | 0.0 | 0.51
Output | 0.00026202 | 0.00026202 | 0.00026202 | 0.0 | 0.00
Modify | 0.050487 | 0.050487 | 0.050487 | 0.0 | 0.85
Other | | 0.01776 | | | 0.30
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# bulk Cu lattice
variable x index 1
@ -49,25 +49,25 @@ Neighbor list info ...
master list distance cutoff = 5.95
ghost atom cutoff = 5.95
binsize = 2.975 -> bins = 25 25 25
Memory usage per processor = 5.09629 Mbytes
Memory usage per processor = 5.59629 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -113280 0 -106662.09 18703.573
50 781.69049 -109873.35 0 -106640.13 52273.088
100 801.832 -109957.3 0 -106640.77 51322.821
Loop time of 1.58019 on 4 procs for 100 steps with 32000 atoms
Loop time of 1.64562 on 4 procs for 100 steps with 32000 atoms
Performance: 27.338 ns/day, 0.878 hours/ns, 63.284 timesteps/s
Performance: 26.252 ns/day, 0.914 hours/ns, 60.767 timesteps/s
99.8% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.3617 | 1.366 | 1.3723 | 0.4 | 86.45
Neigh | 0.15123 | 0.15232 | 0.15374 | 0.2 | 9.64
Comm | 0.033429 | 0.041275 | 0.047066 | 2.7 | 2.61
Output | 0.00011301 | 0.0001573 | 0.000211 | 0.3 | 0.01
Modify | 0.014694 | 0.015085 | 0.015421 | 0.2 | 0.95
Other | | 0.005342 | | | 0.34
Pair | 1.408 | 1.4175 | 1.4341 | 0.9 | 86.14
Neigh | 0.15512 | 0.15722 | 0.16112 | 0.6 | 9.55
Comm | 0.029105 | 0.049986 | 0.061822 | 5.8 | 3.04
Output | 0.00010991 | 0.00011539 | 0.00012302 | 0.0 | 0.01
Modify | 0.013383 | 0.013573 | 0.013883 | 0.2 | 0.82
Other | | 0.007264 | | | 0.44
Nlocal: 8000 ave 8008 max 7993 min
Histogram: 2 0 0 0 0 0 0 0 1 1

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# bulk Cu lattice
variable x index 1
@ -49,25 +49,25 @@ Neighbor list info ...
master list distance cutoff = 5.95
ghost atom cutoff = 5.95
binsize = 2.975 -> bins = 49 49 25
Memory usage per processor = 10.1402 Mbytes
Memory usage per processor = 11.1402 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1600 -453120 0 -426647.73 18704.012
50 779.50001 -439457.02 0 -426560.06 52355.276
100 797.97828 -439764.76 0 -426562.07 51474.74
Loop time of 6.46849 on 4 procs for 100 steps with 128000 atoms
Loop time of 6.60121 on 4 procs for 100 steps with 128000 atoms
Performance: 6.679 ns/day, 3.594 hours/ns, 15.460 timesteps/s
Performance: 6.544 ns/day, 3.667 hours/ns, 15.149 timesteps/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.581 | 5.5997 | 5.6265 | 0.8 | 86.57
Neigh | 0.65287 | 0.658 | 0.66374 | 0.5 | 10.17
Comm | 0.075706 | 0.11015 | 0.13655 | 7.2 | 1.70
Output | 0.00026488 | 0.00028312 | 0.00029302 | 0.1 | 0.00
Modify | 0.069607 | 0.072407 | 0.074555 | 0.7 | 1.12
Other | | 0.02794 | | | 0.43
Pair | 5.6676 | 5.7011 | 5.7469 | 1.3 | 86.36
Neigh | 0.66423 | 0.67119 | 0.68082 | 0.7 | 10.17
Comm | 0.079367 | 0.13668 | 0.1791 | 10.5 | 2.07
Output | 0.00026989 | 0.00028622 | 0.00031209 | 0.1 | 0.00
Modify | 0.060046 | 0.062203 | 0.065009 | 0.9 | 0.94
Other | | 0.02974 | | | 0.45
Nlocal: 32000 ave 32092 max 31914 min
Histogram: 1 0 0 1 0 1 0 0 0 1

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# 3d Lennard-Jones melt
variable x index 1
@ -50,20 +50,20 @@ Memory usage per processor = 8.21387 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
Loop time of 2.26309 on 1 procs for 100 steps with 32000 atoms
Loop time of 2.26185 on 1 procs for 100 steps with 32000 atoms
Performance: 19088.920 tau/day, 44.187 timesteps/s
Performance: 19099.377 tau/day, 44.212 timesteps/s
99.9% CPU use with 1 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.9341 | 1.9341 | 1.9341 | 0.0 | 85.46
Neigh | 0.2442 | 0.2442 | 0.2442 | 0.0 | 10.79
Comm | 0.024158 | 0.024158 | 0.024158 | 0.0 | 1.07
Output | 0.00011611 | 0.00011611 | 0.00011611 | 0.0 | 0.01
Modify | 0.053222 | 0.053222 | 0.053222 | 0.0 | 2.35
Other | | 0.007258 | | | 0.32
Pair | 1.9328 | 1.9328 | 1.9328 | 0.0 | 85.45
Neigh | 0.2558 | 0.2558 | 0.2558 | 0.0 | 11.31
Comm | 0.024061 | 0.024061 | 0.024061 | 0.0 | 1.06
Output | 0.00012612 | 0.00012612 | 0.00012612 | 0.0 | 0.01
Modify | 0.040887 | 0.040887 | 0.040887 | 0.0 | 1.81
Other | | 0.008214 | | | 0.36
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# 3d Lennard-Jones melt
variable x index 1
@ -50,20 +50,20 @@ Memory usage per processor = 4.09506 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
Loop time of 0.640733 on 4 procs for 100 steps with 32000 atoms
Loop time of 0.635957 on 4 procs for 100 steps with 32000 atoms
Performance: 67422.779 tau/day, 156.071 timesteps/s
99.7% CPU use with 4 MPI tasks x no OpenMP threads
Performance: 67929.172 tau/day, 157.243 timesteps/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.49487 | 0.51733 | 0.5322 | 1.9 | 80.74
Neigh | 0.061131 | 0.063685 | 0.065433 | 0.6 | 9.94
Comm | 0.02457 | 0.042349 | 0.069598 | 8.1 | 6.61
Output | 5.9843e-05 | 6.3181e-05 | 6.6996e-05 | 0.0 | 0.01
Modify | 0.012961 | 0.013863 | 0.014491 | 0.5 | 2.16
Other | | 0.003448 | | | 0.54
Pair | 0.51335 | 0.51822 | 0.52569 | 0.7 | 81.49
Neigh | 0.063695 | 0.064309 | 0.065397 | 0.3 | 10.11
Comm | 0.027525 | 0.03629 | 0.041959 | 3.1 | 5.71
Output | 6.3896e-05 | 6.6698e-05 | 7.081e-05 | 0.0 | 0.01
Modify | 0.012472 | 0.01254 | 0.012618 | 0.1 | 1.97
Other | | 0.004529 | | | 0.71
Nlocal: 8000 ave 8037 max 7964 min
Histogram: 2 0 0 0 0 0 0 0 1 1

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# 3d Lennard-Jones melt
variable x index 1
@ -50,20 +50,20 @@ Memory usage per processor = 8.13678 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6133849 -5.0196788
100 0.75841891 -5.759957 0 -4.6223375 0.20008866
Loop time of 2.57914 on 4 procs for 100 steps with 128000 atoms
Loop time of 2.55762 on 4 procs for 100 steps with 128000 atoms
Performance: 16749.768 tau/day, 38.773 timesteps/s
Performance: 16890.677 tau/day, 39.099 timesteps/s
99.8% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 2.042 | 2.1092 | 2.1668 | 3.1 | 81.78
Neigh | 0.23982 | 0.24551 | 0.25233 | 1.0 | 9.52
Comm | 0.067088 | 0.13887 | 0.22681 | 15.7 | 5.38
Output | 0.00013185 | 0.00021666 | 0.00027108 | 0.4 | 0.01
Modify | 0.060348 | 0.071269 | 0.077063 | 2.5 | 2.76
Other | | 0.01403 | | | 0.54
Pair | 2.0583 | 2.0988 | 2.1594 | 2.6 | 82.06
Neigh | 0.24411 | 0.24838 | 0.25585 | 0.9 | 9.71
Comm | 0.066397 | 0.13872 | 0.1863 | 11.9 | 5.42
Output | 0.00012994 | 0.00021023 | 0.00025702 | 0.3 | 0.01
Modify | 0.055533 | 0.058343 | 0.061791 | 1.2 | 2.28
Other | | 0.0132 | | | 0.52
Nlocal: 32000 ave 32060 max 31939 min
Histogram: 1 0 1 0 0 0 0 1 0 1

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# Rhodopsin model
units real
@ -56,6 +56,7 @@ timestep 2.0
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:316)
G vector (1/distance) = 0.248835
grid = 25 32 32
stencil order = 5
@ -70,41 +71,41 @@ Neighbor list info ...
master list distance cutoff = 12
ghost atom cutoff = 12
binsize = 6 -> bins = 10 13 13
Memory usage per processor = 91.7487 Mbytes
Memory usage per processor = 93.2721 Mbytes
---------------- Step 0 ----- CPU = 0.0000 (sec) ----------------
TotEng = -25356.2064 KinEng = 21444.8313 Temp = 299.0397
PotEng = -46801.0377 E_bond = 2537.9940 E_angle = 10921.3742
E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634
E_coul = 207025.8927 E_long = -270403.7333 Press = -142.6035
E_coul = 207025.8927 E_long = -270403.7333 Press = -149.3301
Volume = 307995.0335
---------------- Step 50 ----- CPU = 17.6362 (sec) ----------------
TotEng = -25330.0828 KinEng = 21501.0029 Temp = 299.8230
PotEng = -46831.0857 E_bond = 2471.7004 E_angle = 10836.4975
E_dihed = 5239.6299 E_impro = 227.1218 E_vdwl = -1993.2754
E_coul = 206797.6331 E_long = -270410.3930 Press = 237.6701
Volume = 308031.5639
---------------- Step 100 ----- CPU = 35.9089 (sec) ----------------
TotEng = -25290.7593 KinEng = 21592.0117 Temp = 301.0920
PotEng = -46882.7709 E_bond = 2567.9807 E_angle = 10781.9408
E_dihed = 5198.7432 E_impro = 216.7834 E_vdwl = -1902.4783
E_coul = 206659.2326 E_long = -270404.9733 Press = 6.9960
Volume = 308133.9888
Loop time of 35.9089 on 1 procs for 100 steps with 32000 atoms
---------------- Step 50 ----- CPU = 17.2007 (sec) ----------------
TotEng = -25330.0321 KinEng = 21501.0036 Temp = 299.8230
PotEng = -46831.0357 E_bond = 2471.7033 E_angle = 10836.5108
E_dihed = 5239.6316 E_impro = 227.1219 E_vdwl = -1993.2763
E_coul = 206797.6655 E_long = -270410.3927 Press = 237.6866
Volume = 308031.5640
---------------- Step 100 ----- CPU = 35.0315 (sec) ----------------
TotEng = -25290.7387 KinEng = 21591.9096 Temp = 301.0906
PotEng = -46882.6484 E_bond = 2567.9789 E_angle = 10781.9556
E_dihed = 5198.7493 E_impro = 216.7863 E_vdwl = -1902.6458
E_coul = 206659.5006 E_long = -270404.9733 Press = 6.7898
Volume = 308133.9933
Loop time of 35.0316 on 1 procs for 100 steps with 32000 atoms
Performance: 0.481 ns/day, 49.874 hours/ns, 2.785 timesteps/s
Performance: 0.493 ns/day, 48.655 hours/ns, 2.855 timesteps/s
99.9% CPU use with 1 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 25.731 | 25.731 | 25.731 | 0.0 | 71.66
Bond | 1.2771 | 1.2771 | 1.2771 | 0.0 | 3.56
Kspace | 3.2094 | 3.2094 | 3.2094 | 0.0 | 8.94
Neigh | 4.4538 | 4.4538 | 4.4538 | 0.0 | 12.40
Comm | 0.068507 | 0.068507 | 0.068507 | 0.0 | 0.19
Output | 0.00025916 | 0.00025916 | 0.00025916 | 0.0 | 0.00
Modify | 1.1417 | 1.1417 | 1.1417 | 0.0 | 3.18
Other | | 0.027 | | | 0.08
Pair | 25.021 | 25.021 | 25.021 | 0.0 | 71.42
Bond | 1.2834 | 1.2834 | 1.2834 | 0.0 | 3.66
Kspace | 3.2116 | 3.2116 | 3.2116 | 0.0 | 9.17
Neigh | 4.2767 | 4.2767 | 4.2767 | 0.0 | 12.21
Comm | 0.069283 | 0.069283 | 0.069283 | 0.0 | 0.20
Output | 0.00028205 | 0.00028205 | 0.00028205 | 0.0 | 0.00
Modify | 1.14 | 1.14 | 1.14 | 0.0 | 3.25
Other | | 0.02938 | | | 0.08
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
@ -113,9 +114,9 @@ Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 1.20281e+07 ave 1.20281e+07 max 1.20281e+07 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 12028107
Total # of neighbors = 12028098
Ave neighs/atom = 375.878
Ave special neighs/atom = 7.43187
Neighbor list builds = 11
Dangerous builds = 0
Total wall time: 0:00:37
Total wall time: 0:00:36

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# Rhodopsin model
units real
@ -56,6 +56,7 @@ timestep 2.0
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:316)
G vector (1/distance) = 0.248835
grid = 25 32 32
stencil order = 5
@ -70,52 +71,52 @@ Neighbor list info ...
master list distance cutoff = 12
ghost atom cutoff = 12
binsize = 6 -> bins = 10 13 13
Memory usage per processor = 36.629 Mbytes
Memory usage per processor = 37.3604 Mbytes
---------------- Step 0 ----- CPU = 0.0000 (sec) ----------------
TotEng = -25356.2064 KinEng = 21444.8313 Temp = 299.0397
PotEng = -46801.0377 E_bond = 2537.9940 E_angle = 10921.3742
E_dihed = 5211.7865 E_impro = 213.5116 E_vdwl = -2307.8634
E_coul = 207025.8927 E_long = -270403.7333 Press = -142.6035
E_coul = 207025.8927 E_long = -270403.7333 Press = -149.3301
Volume = 307995.0335
---------------- Step 50 ----- CPU = 4.7461 (sec) ----------------
TotEng = -25330.0828 KinEng = 21501.0029 Temp = 299.8230
PotEng = -46831.0857 E_bond = 2471.7004 E_angle = 10836.4975
E_dihed = 5239.6299 E_impro = 227.1218 E_vdwl = -1993.2754
E_coul = 206797.6331 E_long = -270410.3930 Press = 237.6701
Volume = 308031.5639
---------------- Step 100 ----- CPU = 9.6332 (sec) ----------------
TotEng = -25290.7591 KinEng = 21592.0117 Temp = 301.0920
PotEng = -46882.7708 E_bond = 2567.9807 E_angle = 10781.9408
E_dihed = 5198.7432 E_impro = 216.7834 E_vdwl = -1902.4783
E_coul = 206659.2327 E_long = -270404.9733 Press = 6.9960
Volume = 308133.9888
Loop time of 9.63322 on 4 procs for 100 steps with 32000 atoms
---------------- Step 50 ----- CPU = 4.6056 (sec) ----------------
TotEng = -25330.0321 KinEng = 21501.0036 Temp = 299.8230
PotEng = -46831.0357 E_bond = 2471.7033 E_angle = 10836.5108
E_dihed = 5239.6316 E_impro = 227.1219 E_vdwl = -1993.2763
E_coul = 206797.6655 E_long = -270410.3927 Press = 237.6866
Volume = 308031.5640
---------------- Step 100 ----- CPU = 9.3910 (sec) ----------------
TotEng = -25290.7386 KinEng = 21591.9096 Temp = 301.0906
PotEng = -46882.6482 E_bond = 2567.9789 E_angle = 10781.9556
E_dihed = 5198.7493 E_impro = 216.7863 E_vdwl = -1902.6458
E_coul = 206659.5007 E_long = -270404.9733 Press = 6.7898
Volume = 308133.9933
Loop time of 9.39107 on 4 procs for 100 steps with 32000 atoms
Performance: 1.794 ns/day, 13.379 hours/ns, 10.381 timesteps/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads
Performance: 1.840 ns/day, 13.043 hours/ns, 10.648 timesteps/s
99.8% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 6.4364 | 6.5993 | 6.7208 | 4.7 | 68.51
Bond | 0.30755 | 0.32435 | 0.35704 | 3.4 | 3.37
Kspace | 0.92248 | 1.0782 | 1.2597 | 13.0 | 11.19
Neigh | 1.1669 | 1.1672 | 1.1675 | 0.0 | 12.12
Comm | 0.094674 | 0.098065 | 0.10543 | 1.4 | 1.02
Output | 0.00015521 | 0.00016224 | 0.00018215 | 0.1 | 0.00
Modify | 0.32982 | 0.34654 | 0.35365 | 1.6 | 3.60
Other | | 0.01943 | | | 0.20
Pair | 6.2189 | 6.3266 | 6.6072 | 6.5 | 67.37
Bond | 0.30793 | 0.32122 | 0.3414 | 2.4 | 3.42
Kspace | 0.87994 | 1.1644 | 1.2855 | 15.3 | 12.40
Neigh | 1.1358 | 1.136 | 1.1362 | 0.0 | 12.10
Comm | 0.08292 | 0.084935 | 0.087077 | 0.5 | 0.90
Output | 0.00015712 | 0.00016558 | 0.00018501 | 0.1 | 0.00
Modify | 0.33717 | 0.34246 | 0.34794 | 0.7 | 3.65
Other | | 0.01526 | | | 0.16
Nlocal: 8000 ave 8143 max 7933 min
Histogram: 1 2 0 0 0 0 0 0 0 1
Nghost: 22733.5 ave 22769 max 22693 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Neighs: 3.00703e+06 ave 3.0975e+06 max 2.96493e+06 min
Neighs: 3.00702e+06 ave 3.0975e+06 max 2.96492e+06 min
Histogram: 1 2 0 0 0 0 0 0 0 1
Total # of neighbors = 12028107
Total # of neighbors = 12028098
Ave neighs/atom = 375.878
Ave special neighs/atom = 7.43187
Neighbor list builds = 11
Dangerous builds = 0
Total wall time: 0:00:10
Total wall time: 0:00:09

View File

@ -1,4 +1,4 @@
LAMMPS (15 Feb 2016)
LAMMPS (6 Oct 2016)
# Rhodopsin model
variable x index 1
@ -77,6 +77,7 @@ timestep 2.0
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:316)
G vector (1/distance) = 0.248593
grid = 48 60 36
stencil order = 5
@ -91,52 +92,52 @@ Neighbor list info ...
master list distance cutoff = 12
ghost atom cutoff = 12
binsize = 6 -> bins = 19 26 13
Memory usage per processor = 95.5339 Mbytes
Memory usage per processor = 96.9597 Mbytes
---------------- Step 0 ----- CPU = 0.0000 (sec) ----------------
TotEng = -101425.4887 KinEng = 85779.3251 Temp = 299.0304
PotEng = -187204.8138 E_bond = 10151.9760 E_angle = 43685.4968
E_dihed = 20847.1460 E_impro = 854.0463 E_vdwl = -9231.4537
E_coul = 827053.5824 E_long = -1080565.6077 Press = -142.3092
E_coul = 827053.5824 E_long = -1080565.6077 Press = -149.0358
Volume = 1231980.1340
---------------- Step 50 ----- CPU = 18.7806 (sec) ----------------
TotEng = -101320.2677 KinEng = 86003.4837 Temp = 299.8118
PotEng = -187323.7514 E_bond = 9887.1072 E_angle = 43346.7922
E_dihed = 20958.7032 E_impro = 908.4715 E_vdwl = -7973.4457
E_coul = 826141.3831 E_long = -1080592.7629 Press = 238.0161
Volume = 1232126.1855
---------------- Step 100 ----- CPU = 38.3684 (sec) ----------------
TotEng = -101158.1849 KinEng = 86355.6149 Temp = 301.0393
PotEng = -187513.7998 E_bond = 10272.0693 E_angle = 43128.6454
E_dihed = 20793.9759 E_impro = 867.0826 E_vdwl = -7586.7186
E_coul = 825583.7122 E_long = -1080572.5667 Press = 15.2151
Volume = 1232535.8423
Loop time of 38.3684 on 4 procs for 100 steps with 128000 atoms
---------------- Step 50 ----- CPU = 18.1689 (sec) ----------------
TotEng = -101320.0211 KinEng = 86003.4933 Temp = 299.8118
PotEng = -187323.5144 E_bond = 9887.1189 E_angle = 43346.8448
E_dihed = 20958.7108 E_impro = 908.4721 E_vdwl = -7973.4486
E_coul = 826141.5493 E_long = -1080592.7617 Press = 238.0404
Volume = 1232126.1814
---------------- Step 100 ----- CPU = 37.2027 (sec) ----------------
TotEng = -101157.9546 KinEng = 86355.7413 Temp = 301.0398
PotEng = -187513.6959 E_bond = 10272.0456 E_angle = 43128.7018
E_dihed = 20794.0107 E_impro = 867.0928 E_vdwl = -7587.2409
E_coul = 825584.2416 E_long = -1080572.5474 Press = 15.1729
Volume = 1232535.8440
Loop time of 37.2028 on 4 procs for 100 steps with 128000 atoms
Performance: 0.450 ns/day, 53.289 hours/ns, 2.606 timesteps/s
Performance: 0.464 ns/day, 51.671 hours/ns, 2.688 timesteps/s
99.9% CPU use with 4 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 26.205 | 26.538 | 26.911 | 5.0 | 69.17
Bond | 1.298 | 1.3125 | 1.3277 | 1.0 | 3.42
Kspace | 3.7099 | 4.0992 | 4.4422 | 13.3 | 10.68
Neigh | 4.6137 | 4.6144 | 4.615 | 0.0 | 12.03
Comm | 0.21398 | 0.21992 | 0.22886 | 1.2 | 0.57
Output | 0.00030518 | 0.00031543 | 0.00033307 | 0.1 | 0.00
Modify | 1.5066 | 1.5232 | 1.5388 | 1.0 | 3.97
Other | | 0.06051 | | | 0.16
Pair | 25.431 | 25.738 | 25.984 | 4.0 | 69.18
Bond | 1.2966 | 1.3131 | 1.3226 | 0.9 | 3.53
Kspace | 3.7563 | 4.0123 | 4.3127 | 10.0 | 10.79
Neigh | 4.3778 | 4.378 | 4.3782 | 0.0 | 11.77
Comm | 0.1903 | 0.19549 | 0.20485 | 1.3 | 0.53
Output | 0.00031805 | 0.00037521 | 0.00039601 | 0.2 | 0.00
Modify | 1.4861 | 1.5051 | 1.5122 | 0.9 | 4.05
Other | | 0.05992 | | | 0.16
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 47957 ave 47957 max 47957 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 1.20281e+07 ave 1.20572e+07 max 1.1999e+07 min
Neighs: 1.20281e+07 ave 1.20572e+07 max 1.19991e+07 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Total # of neighbors = 48112472
Total # of neighbors = 48112540
Ave neighs/atom = 375.879
Ave special neighs/atom = 7.43187
Neighbor list builds = 11
Dangerous builds = 0
Total wall time: 0:00:39
Total wall time: 0:00:38

View File

@ -8,19 +8,21 @@ VENV = $(BUILDDIR)/docenv
TXT2RST = $(VENV)/bin/txt2rst
PYTHON = $(shell which python3)
HAS_PYTHON3 = NO
HAS_VIRTUALENV = NO
ifeq ($(shell which python3 >/dev/null 2>&1; echo $$?), 1)
$(error Python3 was not found! Please check README.md for further instructions)
ifeq ($(shell which python3 >/dev/null 2>&1; echo $$?), 0)
HAS_PYTHON3 = YES
endif
ifeq ($(shell which virtualenv >/dev/null 2>&1; echo $$?), 1)
$(error virtualenv was not found! Please check README.md for further instructions)
ifeq ($(shell which virtualenv >/dev/null 2>&1; echo $$?), 0)
HAS_VIRTUALENV = YES
endif
SOURCES=$(wildcard src/*.txt)
OBJECTS=$(SOURCES:src/%.txt=$(RSTDIR)/%.rst)
.PHONY: help clean-all clean html pdf old venv
.PHONY: help clean-all clean epub html pdf old venv
# ------------------------------------------
@ -30,6 +32,7 @@ help:
@echo " pdf create Manual.pdf and Developer.pdf in this dir"
@echo " old create old-style HTML doc pages in old dir"
@echo " fetch fetch HTML and PDF files from LAMMPS web site"
@echo " epub create ePUB format manual for e-book readers"
@echo " clean remove all intermediate RST files"
@echo " clean-all reset the entire build environment"
@echo " txt2html build txt2html tool"
@ -40,7 +43,7 @@ clean-all:
rm -rf $(BUILDDIR)/* utils/txt2html/txt2html.exe
clean:
rm -rf $(RSTDIR)
rm -rf $(RSTDIR) html
html: $(OBJECTS)
@(\
@ -61,6 +64,20 @@ html: $(OBJECTS)
@rm -rf html/USER/*/*.[sg]*
@echo "Build finished. The HTML pages are in doc/html."
epub: $(OBJECTS)
@mkdir -p epub
@rm -f LAMMPS.epub
@cp src/JPG/lammps-logo.png epub/
@(\
. $(VENV)/bin/activate ;\
cp -r src/* $(RSTDIR)/ ;\
sphinx-build -j 8 -b epub -c utils/sphinx-config -d $(BUILDDIR)/doctrees $(RSTDIR) epub ;\
deactivate ;\
)
@mv epub/LAMMPS.epub .
@rm -rf epub
@echo "Build finished. The ePUB manual file is created."
pdf: utils/txt2html/txt2html.exe
@(\
cd src; \
@ -109,6 +126,8 @@ $(RSTDIR)/%.rst : src/%.txt $(TXT2RST)
)
$(VENV):
@if [ "$(HAS_PYTHON3)" == "NO" ] ; then echo "Python3 was not found! Please check README.md for further instructions" 1>&2; exit 1; fi
@if [ "$(HAS_VIRTUALENV)" == "NO" ] ; then echo "virtualenv was not found! Please check README.md for further instructions" 1>&2; exit 1; fi
@( \
virtualenv -p $(PYTHON) $(VENV); \
. $(VENV)/bin/activate; \

View File

@ -1,13 +1,14 @@
LAMMPS Documentation
Depending on how you obtained LAMMPS, this directory has 2 or 3
sub-directories and optionally 2 PDF files:
sub-directories and optionally 2 PDF files and an ePUB file:
src content files for LAMMPS documentation
html HTML version of the LAMMPS manual (see html/Manual.html)
tools tools and settings for building the documentation
Manual.pdf large PDF version of entire manual
Developer.pdf small PDF with info about how LAMMPS is structured
LAMMPS.epub Manual in ePUB format
If you downloaded LAMMPS as a tarball from the web site, all these
directories and files should be included.
@ -49,6 +50,7 @@ make pdf # generate 2 PDF files (Manual.pdf,Developer.pdf)
make old # generate old-style HTML pages in old dir via txt2html
make fetch # fetch HTML doc pages and 2 PDF files from web site
# as a tarball and unpack into html dir and 2 PDFs
make epub # generate LAMMPS.epub in ePUB format using Sphinx
make clean # remove intermediate RST files created by HTML build
make clean-all # remove entire build folder and any cached data
@ -92,5 +94,22 @@ This will install virtualenv from the Python Package Index.
Installing prerequisites for PDF build
[TBA]
----------------
Installing prerequisites for epub build
## ePUB
Same as for HTML. This uses the same tools and configuration
files as the HTML tree.
For converting the generated ePUB file to a mobi format file
(for e-book readers like Kindle, that cannot read ePUB), you
also need to have the 'ebook-convert' tool from the "calibre"
software installed. http://calibre-ebook.com/
You first create the ePUB file with 'make epub' and then do:
ebook-convert LAMMPS.epub LAMMPS.mobi

Binary file not shown (new image, 3.8 KiB)

View File

@ -0,0 +1,10 @@
\documentclass[12pt]{article}
\pagestyle{empty}
\begin{document}
$$
E = - \frac{\epsilon}{2} \ln \left[ 1 - \left(\frac{r-r0}{\Delta}\right)^2\right]
$$
\end{document}

BIN doc/src/Eqs/fix_grem.jpg Normal file (binary file not shown; new image, 6.1 KiB)

doc/src/Eqs/fix_grem.tex Normal file
View File

@ -0,0 +1,9 @@
\documentclass[12pt]{article}
\begin{document}
$$
T_{eff} = \lambda + \eta (H - H_0)
$$
\end{document}

BIN doc/src/Eqs/pair_agni.jpg Normal file (binary file not shown; new image, 15 KiB)

Binary file not shown (deleted image, 1.6 KiB)
View File

@ -1,9 +0,0 @@
\documentclass[12pt]{article}
\pagestyle{empty}
\begin{document}
$$
F^C = A \omega_{ij} \qquad \qquad r_{ij} < r_c
$$
\end{document}

Binary file not shown (new image, 25 KiB)
View File

@ -0,0 +1,12 @@
\documentclass[12pt]{article}
\pagestyle{empty}
\begin{document}
\begin{eqnarray*}
du_{i}^{cond} & = & \kappa_{ij}(\frac{1}{\theta_{i}}-\frac{1}{\theta_{j}})\omega_{ij}^{2} + \alpha_{ij}\omega_{ij}\zeta_{ij}^{q}(\Delta{t})^{-1/2} \\
du_{i}^{mech} & = & -\frac{1}{2}\gamma_{ij}\omega_{ij}^{2}(\frac{\vec{r_{ij}}}{r_{ij}}\bullet\vec{v_{ij}})^{2} -
\frac{\sigma^{2}_{ij}}{4}(\frac{1}{m_{i}}+\frac{1}{m_{j}})\omega_{ij}^{2} -
\frac{1}{2}\sigma_{ij}\omega_{ij}(\frac{\vec{r_{ij}}}{r_{ij}}\bullet\vec{v_{ij}})\zeta_{ij}(\Delta{t})^{-1/2} \\
\end{eqnarray*}
\end{document}

Binary file not shown (new image, 8.7 KiB)
View File

@ -0,0 +1,11 @@
\documentclass[12pt]{article}
\pagestyle{empty}
\begin{document}
\begin{eqnarray*}
\alpha_{ij}^{2} & = & 2k_{B}\kappa_{ij} \\
\sigma^{2}_{ij} & = & 2\gamma_{ij}k_{B}\Theta_{ij} \\
\Theta_{ij}^{-1} & = & \frac{1}{2}(\frac{1}{\theta_{i}}+\frac{1}{\theta_{j}}) \\
\end{eqnarray*}
\end{document}

Binary file not shown (new image, 4.1 KiB)
View File

@ -0,0 +1,10 @@
\documentclass[12pt]{article}
\pagestyle{empty}
\begin{document}
\begin{eqnarray*}
V_{ij} & = & f_C(r_{ij}) \left[ f_R(r_{ij}) + b_{ij} f_A(r_{ij}) + c_0 \right]
\end{eqnarray*}
\end{document}

Binary file not shown (image updated, 4.0 KiB -> 4.2 KiB)
View File

@ -3,7 +3,7 @@
\begin{document}
$$
P = \frac{N k_B T}{V} + \frac{\sum_{i}^{N} r_i \bullet f_i}{dV}
P = \frac{N k_B T}{V} + \frac{\sum_{i}^{N'} r_i \bullet f_i}{dV}
$$
\end{document}

Binary file not shown (image updated, 4.9 KiB -> 5.3 KiB)
View File

@ -4,7 +4,7 @@
$$
P_{IJ} = \frac{\sum_{k}^{N} m_k v_{k_I} v_{k_J}}{V} +
\frac{\sum_{k}^{N} r_{k_I} f_{k_J}}{V}
\frac{\sum_{k}^{N'} r_{k_I} f_{k_J}}{V}
$$
\end{document}

BIN doc/src/JPG/gran_funnel.png Normal file
BIN doc/src/JPG/gran_mixer.png Normal file
BIN doc/src/JPG/lammps-logo.png Normal file
(Numerous additional image files under doc/src/JPG were added or updated; binary files not shown.)

View File

@ -1,7 +1,7 @@
<!-- HTML_ONLY -->
<HEAD>
<TITLE>LAMMPS Users Manual</TITLE>
<META NAME="docnumber" CONTENT="29 Sep 2016 version">
<META NAME="docnumber" CONTENT="26 Jan 2017 version">
<META NAME="author" CONTENT="http://lammps.sandia.gov - Sandia National Laboratories">
<META NAME="copyright" CONTENT="Copyright (2003) Sandia Corporation. This software and manual is distributed under the GNU General Public License.">
</HEAD>
@ -21,7 +21,7 @@
<H1></H1>
LAMMPS Documentation :c,h3
29 Sep 2016 version :c,h4
26 Jan 2017 version :c,h4
Version info: :h4
@ -109,7 +109,7 @@ it gives quick access to documentation for all LAMMPS commands.
:caption: User Documentation
:name: userdoc
:includehidden:
Section_intro
Section_start
Section_commands
@ -144,7 +144,7 @@ Indices and tables
* :ref:`genindex`
* :ref:`search`
END_RST -->
<!-- HTML_ONLY -->

Binary file not shown.

View File

@ -117,7 +117,7 @@ PPPM. However, 2-FFT PPPM also requires a slightly larger mesh size to
achieve the same accuracy as 4-FFT PPPM. For problems where the FFT
cost is the performance bottleneck (typically large problems running
on many processors), 2-FFT PPPM may be faster than 4-FFT PPPM.
Staggered PPPM performs calculations using two different meshes, one
shifted slightly with respect to the other. This can reduce force
aliasing errors and increase the accuracy of the method, but also

View File

@ -37,14 +37,14 @@ simulation with all the settings. Rather, the input script is read
one line at a time and each command takes effect when it is read.
Thus this sequence of commands:
timestep 0.5
run 100
timestep 0.5
run 100
run 100 :pre
does something different than this sequence:
run 100
timestep 0.5
run 100
timestep 0.5
run 100 :pre
In the first case, the specified timestep (0.5 fmsec) is used for two
@ -97,7 +97,7 @@ single leading "#" will comment out the entire command.
(3) The line is searched repeatedly for $ characters, which indicate
variables that are replaced with a text string. See an exception in
(6).
(6).
If the $ is followed by curly brackets, then the variable name is the
text inside the curly brackets. If no curly brackets follow the $,
@ -106,7 +106,7 @@ the $. Thus $\{myTemp\} and $x refer to variable names "myTemp" and
"x".
How the variable is converted to a text string depends on what style
of variable it is; see the "variable"_variable doc page for details.
of variable it is; see the "variable"_variable.html doc page for details.
It can be a variable that stores multiple text strings, and return one
of them. The returned text string can be multiple "words" (space
separated) which will then be interpreted as multiple arguments in the
@ -123,7 +123,7 @@ variable X equal (xlo+xhi)/2+sqrt(v_area)
region 1 block $X 2 INF INF EDGE EDGE
variable X delete :pre
can be replaced by
can be replaced by
region 1 block $((xlo+xhi)/2+sqrt(v_area)) 2 INF INF EDGE EDGE :pre
@ -282,78 +282,135 @@ the "minimize"_minimize.html command. A parallel tempering
3.4 Commands listed by category :link(cmd_4),h4
This section lists all LAMMPS commands, grouped by category. The
"next section"_#cmd_5 lists the same commands alphabetically. Note
that some style options for some commands are part of specific LAMMPS
packages, which means they cannot be used unless the package was
included when LAMMPS was built. Not all packages are included in a
default LAMMPS build. These dependencies are listed as Restrictions
in the command's documentation.
"next section"_#cmd_5 lists the same commands alphabetically. The
next section also includes (long) lists of style options for entries
that appear in the following categories as a single command (fix,
compute, pair, etc). Commands that are added by user packages are not
included in these categories, but they are in the next section.
Initialization:
"atom_modify"_atom_modify.html, "atom_style"_atom_style.html,
"boundary"_boundary.html, "dimension"_dimension.html,
"newton"_newton.html, "processors"_processors.html, "units"_units.html
"newton"_newton.html,
"package"_package.html,
"processors"_processors.html,
"suffix"_suffix.html,
"units"_units.html
Atom definition:
Setup simulation box:
"create_atoms"_create_atoms.html, "create_box"_create_box.html,
"lattice"_lattice.html, "read_data"_read_data.html,
"read_dump"_read_dump.html, "read_restart"_read_restart.html,
"region"_region.html, "replicate"_replicate.html
"boundary"_boundary.html,
"box"_box.html,
"change_box"_change_box.html,
"create_box"_create_box.html,
"dimension"_dimension.html,
"lattice"_lattice.html,
"region"_region.html
Setup atoms:
"atom_modify"_atom_modify.html,
"atom_style"_atom_style.html,
"balance"_balance.html,
"create_atoms"_create_atoms.html,
"create_bonds"_create_bonds.html,
"delete_atoms"_delete_atoms.html,
"delete_bonds"_delete_bonds.html,
"displace_atoms"_displace_atoms.html,
"group"_group.html,
"mass"_mass.html,
"molecule"_molecule.html,
"read_data"_read_data.html,
"read_dump"_read_dump.html,
"read_restart"_read_restart.html,
"replicate"_replicate.html,
"set"_set.html,
"velocity"_velocity.html
Force fields:
"angle_coeff"_angle_coeff.html, "angle_style"_angle_style.html,
"bond_coeff"_bond_coeff.html, "bond_style"_bond_style.html,
"dielectric"_dielectric.html, "dihedral_coeff"_dihedral_coeff.html,
"angle_coeff"_angle_coeff.html,
"angle_style"_angle_style.html,
"bond_coeff"_bond_coeff.html,
"bond_style"_bond_style.html,
"bond_write"_bond_write.html,
"dielectric"_dielectric.html,
"dihedral_coeff"_dihedral_coeff.html,
"dihedral_style"_dihedral_style.html,
"improper_coeff"_improper_coeff.html,
"improper_style"_improper_style.html,
"kspace_modify"_kspace_modify.html, "kspace_style"_kspace_style.html,
"pair_coeff"_pair_coeff.html, "pair_modify"_pair_modify.html,
"pair_style"_pair_style.html, "pair_write"_pair_write.html,
"kspace_modify"_kspace_modify.html,
"kspace_style"_kspace_style.html,
"pair_coeff"_pair_coeff.html,
"pair_modify"_pair_modify.html,
"pair_style"_pair_style.html,
"pair_write"_pair_write.html,
"special_bonds"_special_bonds.html
Settings:
"comm_style"_comm_style.html, "group"_group.html, "mass"_mass.html,
"min_modify"_min_modify.html, "min_style"_min_style.html,
"neigh_modify"_neigh_modify.html, "neighbor"_neighbor.html,
"reset_timestep"_reset_timestep.html, "run_style"_run_style.html,
"set"_set.html, "timestep"_timestep.html, "velocity"_velocity.html
"comm_modify"_comm_modify.html,
"comm_style"_comm_style.html,
"info"_info.html,
"min_modify"_min_modify.html,
"min_style"_min_style.html,
"neigh_modify"_neigh_modify.html,
"neighbor"_neighbor.html,
"partition"_partition.html,
"reset_timestep"_reset_timestep.html,
"run_style"_run_style.html,
"timer"_timer.html,
"timestep"_timestep.html
Fixes:
Operations within timestepping (fixes) and diagnostics (computes):
"fix"_fix.html, "fix_modify"_fix_modify.html, "unfix"_unfix.html
Computes:
"compute"_compute.html, "compute_modify"_compute_modify.html,
"uncompute"_uncompute.html
"compute"_compute.html,
"compute_modify"_compute_modify.html,
"fix"_fix.html,
"fix_modify"_fix_modify.html,
"uncompute"_uncompute.html,
"unfix"_unfix.html
Output:
"dump"_dump.html, "dump image"_dump_image.html,
"dump_modify"_dump_modify.html, "dump movie"_dump_image.html,
"restart"_restart.html, "thermo"_thermo.html,
"thermo_modify"_thermo_modify.html, "thermo_style"_thermo_style.html,
"undump"_undump.html, "write_data"_write_data.html,
"write_dump"_write_dump.html, "write_restart"_write_restart.html
"dump image"_dump_image.html,
"dump movie"_dump_image.html,
"dump"_dump.html,
"dump_modify"_dump_modify.html,
"restart"_restart.html,
"thermo"_thermo.html,
"thermo_modify"_thermo_modify.html,
"thermo_style"_thermo_style.html,
"undump"_undump.html,
"write_coeff"_write_coeff.html,
"write_data"_write_data.html,
"write_dump"_write_dump.html,
"write_restart"_write_restart.html
Actions:
"delete_atoms"_delete_atoms.html, "delete_bonds"_delete_bonds.html,
"displace_atoms"_displace_atoms.html, "change_box"_change_box.html,
"minimize"_minimize.html, "neb"_neb.html "prd"_prd.html,
"rerun"_rerun.html, "run"_run.html, "temper"_temper.html
"minimize"_minimize.html,
"neb"_neb.html,
"prd"_prd.html,
"rerun"_rerun.html,
"run"_run.html,
"tad"_tad.html,
"temper"_temper.html
Miscellaneous:
Input script control:
"clear"_clear.html, "echo"_echo.html, "if"_if.html,
"include"_include.html, "jump"_jump.html, "label"_label.html,
"log"_log.html, "next"_next.html, "print"_print.html,
"shell"_shell.html, "variable"_variable.html
"clear"_clear.html,
"echo"_echo.html,
"if"_if.html,
"include"_include.html,
"jump"_jump.html,
"label"_label.html,
"log"_log.html,
"next"_next.html,
"print"_print.html,
"python"_python.html,
"quit"_quit.html,
"shell"_shell.html,
"variable"_variable.html
:line
@ -471,8 +528,11 @@ These are additional commands in USER packages, which can be used if
package"_Section_start.html#start_3.
"dump custom/vtk"_dump_custom_vtk.html,
"dump nc"_dump_nc.html,
"dump nc/mpiio"_dump_nc.html,
"group2ndx"_group2ndx.html,
"ndx2group"_group2ndx.html :tb(c=3,ea=c)
"ndx2group"_group2ndx.html,
"temper/grem"_temper_grem.html :tb(c=3,ea=c)
:line
@ -516,12 +576,14 @@ USER-INTEL, k = KOKKOS, o = USER-OMP, t = OPT.
"gcmc"_fix_gcmc.html,
"gld"_fix_gld.html,
"gravity (o)"_fix_gravity.html,
"halt"_fix_halt.html,
"heat"_fix_heat.html,
"indent"_fix_indent.html,
"langevin (k)"_fix_langevin.html,
"lineforce"_fix_lineforce.html,
"momentum"_fix_momentum.html,
"momentum (k)"_fix_momentum.html,
"move"_fix_move.html,
"mscg"_fix_mscg.html,
"msst"_fix_msst.html,
"neb"_fix_neb.html,
"nph (ko)"_fix_nh.html,
@ -572,10 +634,10 @@ USER-INTEL, k = KOKKOS, o = USER-OMP, t = OPT.
"rigid/nve (o)"_fix_rigid.html,
"rigid/nvt (o)"_fix_rigid.html,
"rigid/small (o)"_fix_rigid.html,
"rigid/small/nph"_fix_rigid.html,
"rigid/small/npt"_fix_rigid.html,
"rigid/small/nve"_fix_rigid.html,
"rigid/small/nvt"_fix_rigid.html,
"rigid/small/nph (o)"_fix_rigid.html,
"rigid/small/npt (o)"_fix_rigid.html,
"rigid/small/nve (o)"_fix_rigid.html,
"rigid/small/nvt (o)"_fix_rigid.html,
"setforce (k)"_fix_setforce.html,
"shake"_fix_shake.html,
"spring"_fix_spring.html,
@ -599,6 +661,7 @@ USER-INTEL, k = KOKKOS, o = USER-OMP, t = OPT.
"viscous"_fix_viscous.html,
"wall/colloid"_fix_wall.html,
"wall/gran"_fix_wall_gran.html,
"wall/gran/region"_fix_wall_gran_region.html,
"wall/harmonic"_fix_wall.html,
"wall/lj1043"_fix_wall.html,
"wall/lj126"_fix_wall.html,
@ -617,6 +680,7 @@ package"_Section_start.html#start_3.
"atc"_fix_atc.html,
"ave/correlate/long"_fix_ave_correlate_long.html,
"colvars"_fix_colvars.html,
"dpd/energy"_fix_dpd_energy.html,
"drude"_fix_drude.html,
"drude/transform/direct"_fix_drude_transform.html,
"drude/transform/reverse"_fix_drude_transform.html,
@ -625,6 +689,7 @@ package"_Section_start.html#start_3.
"eos/table/rx"_fix_eos_table_rx.html,
"flow/gauss"_fix_flow_gauss.html,
"gle"_fix_gle.html,
"grem"_fix_grem.html,
"imd"_fix_imd.html,
"ipi"_fix_ipi.html,
"langevin/drude"_fix_langevin_drude.html,
@ -637,7 +702,10 @@ package"_Section_start.html#start_3.
"meso"_fix_meso.html,
"manifoldforce"_fix_manifoldforce.html,
"meso/stationary"_fix_meso_stationary.html,
"nve/dot"_fix_nve_dot.html,
"nve/dotc/langevin"_fix_nve_dotc_langevin.html,
"nve/manifold/rattle"_fix_nve_manifold_rattle.html,
"nvk"_fix_nvk.html,
"nvt/manifold/rattle"_fix_nvt_manifold_rattle.html,
"nph/eff"_fix_nh_eff.html,
"npt/eff"_fix_nh_eff.html,
@ -703,6 +771,7 @@ KOKKOS, o = USER-OMP, t = OPT.
"erotate/sphere"_compute_erotate_sphere.html,
"erotate/sphere/atom"_compute_erotate_sphere_atom.html,
"event/displace"_compute_event_displace.html,
"global/atom"_compute_global_atom.html,
"group/group"_compute_group_group.html,
"gyration"_compute_gyration.html,
"gyration/chunk"_compute_gyration_chunk.html,
@ -824,6 +893,8 @@ KOKKOS, o = USER-OMP, t = OPT.
"body"_pair_body.html,
"bop"_pair_bop.html,
"born (go)"_pair_born.html,
"born/coul/dsf"_pair_born.html,
"born/coul/dsf/cs"_pair_born.html,
"born/coul/long (go)"_pair_born.html,
"born/coul/long/cs"_pair_born.html,
"born/coul/msm (o)"_pair_born.html,
@ -847,10 +918,10 @@ KOKKOS, o = USER-OMP, t = OPT.
"coul/msm"_pair_coul.html,
"coul/streitz"_pair_coul.html,
"coul/wolf (ko)"_pair_coul.html,
"dpd (o)"_pair_dpd.html,
"dpd/tstat (o)"_pair_dpd.html,
"dpd (go)"_pair_dpd.html,
"dpd/tstat (go)"_pair_dpd.html,
"dsmc"_pair_dsmc.html,
"eam (gkot)"_pair_eam.html,
"eam (gkiot)"_pair_eam.html,
"eam/alloy (gkot)"_pair_eam.html,
"eam/fs (gkot)"_pair_eam.html,
"eim (o)"_pair_eim.html,
@ -896,7 +967,7 @@ KOKKOS, o = USER-OMP, t = OPT.
"lubricate/poly (o)"_pair_lubricate.html,
"lubricateU"_pair_lubricateU.html,
"lubricateU/poly"_pair_lubricateU.html,
"meam (o)"_pair_meam.html,
"meam"_pair_meam.html,
"mie/cut (o)"_pair_mie.html,
"morse (got)"_pair_morse.html,
"nb3b/harmonic (o)"_pair_nb3b_harmonic.html,
@ -917,11 +988,13 @@ KOKKOS, o = USER-OMP, t = OPT.
"table (gko)"_pair_table.html,
"tersoff (gkio)"_pair_tersoff.html,
"tersoff/mod (gko)"_pair_tersoff_mod.html,
"tersoff/mod/c (o)"_pair_tersoff_mod.html,
"tersoff/zbl (gko)"_pair_tersoff_zbl.html,
"tip4p/cut (o)"_pair_coul.html,
"tip4p/long (o)"_pair_coul.html,
"tri/lj"_pair_tri_lj.html,
"vashishta (o)"_pair_vashishta.html,
"vashishta (ko)"_pair_vashishta.html,
"vashishta/table (o)"_pair_vashishta.html,
"yukawa (go)"_pair_yukawa.html,
"yukawa/colloid (go)"_pair_yukawa_colloid.html,
"zbl (go)"_pair_zbl.html :tb(c=4,ea=c)
@ -930,6 +1003,7 @@ These are additional pair styles in USER packages, which can be used
if "LAMMPS is built with the appropriate
package"_Section_start.html#start_3.
"agni (o)"_pair_agni.html,
"awpmd/cut"_pair_awpmd.html,
"buck/mdf"_pair_mdf.html,
"coul/cut/soft (o)"_pair_lj_soft.html,
@ -956,13 +1030,18 @@ package"_Section_start.html#start_3.
"lj/sdk/coul/long (go)"_pair_sdk.html,
"lj/sdk/coul/msm (o)"_pair_sdk.html,
"lj/sf (o)"_pair_lj_sf.html,
"meam/spline"_pair_meam_spline.html,
"meam/spline (o)"_pair_meam_spline.html,
"meam/sw/spline"_pair_meam_sw_spline.html,
"mgpt"_pair_mgpt.html,
"morse/smooth/linear"_pair_morse.html,
"morse/soft"_pair_morse.html,
"multi/lucy"_pair_multi_lucy.html,
"multi/lucy/rx"_pair_multi_lucy_rx.html,
"oxdna/coaxstk"_pair_oxdna.html,
"oxdna/excv"_pair_oxdna.html,
"oxdna/hbond"_pair_oxdna.html,
"oxdna/stk"_pair_oxdna.html,
"oxdna/xstk"_pair_oxdna.html,
"quip"_pair_quip.html,
"reax/c (k)"_pair_reax_c.html,
"smd/hertz"_pair_smd_hertz.html,
@ -1011,7 +1090,8 @@ if "LAMMPS is built with the appropriate
package"_Section_start.html#start_3.
"harmonic/shift (o)"_bond_harmonic_shift.html,
"harmonic/shift/cut (o)"_bond_harmonic_shift_cut.html :tb(c=4,ea=c)
"harmonic/shift/cut (o)"_bond_harmonic_shift_cut.html,
"oxdna/fene"_bond_oxdna_fene.html :tb(c=4,ea=c)
:line

View File

@ -55,12 +55,13 @@ LAMMPS errors are detected at setup time; others like a bond
stretching too far may not occur until the middle of a run.
LAMMPS tries to flag errors and print informative error messages so
you can fix the problem. Of course, LAMMPS cannot figure out your
physics or numerical mistakes, like choosing too big a timestep,
specifying erroneous force field coefficients, or putting 2 atoms on
top of each other! If you run into errors that LAMMPS doesn't catch
that you think it should flag, please send an email to the
"developers"_http://lammps.sandia.gov/authors.html.
you can fix the problem. For most errors it will also print the last
input script command that it was processing. Of course, LAMMPS cannot
figure out your physics or numerical mistakes, like choosing too big a
timestep, specifying erroneous force field coefficients, or putting 2
atoms on top of each other! If you run into errors that LAMMPS
doesn't catch that you think it should flag, please send an email to
the "developers"_http://lammps.sandia.gov/authors.html.
If you get an error message about an invalid command in your input
script, you can determine what command is causing the problem by
@ -159,7 +160,7 @@ As a last resort, you can send an email directly to the
These are two alphabetic lists of the "ERROR"_#error and
"WARNING"_#warn messages LAMMPS prints out and the reason why. If the
explanation here is not sufficient, the documentation for the
offending command may help.
offending command may help.
Error and warning messages also list the source file and line number
where the error was generated. For example, this message
@ -8116,11 +8117,11 @@ boundary of a processor's sub-domain has moved more than 1/2 the
rebuilt and atoms being migrated to new processors. This also means
you may be missing pairwise interactions that need to be computed.
The solution is to change the re-neighboring criteria via the
"neigh_modify"_neigh_modify command. The safest settings are "delay 0
every 1 check yes". Second, it may mean that an atom has moved far
outside a processor's sub-domain or even the entire simulation box.
This indicates bad physics, e.g. due to highly overlapping atoms, too
large a timestep, etc. :dd
"neigh_modify"_neigh_modify.html command. The safest settings are
"delay 0 every 1 check yes". Second, it may mean that an atom has
moved far outside a processor's sub-domain or even the entire
simulation box. This indicates bad physics, e.g. due to highly
overlapping atoms, too large a timestep, etc. :dd
{Out of range atoms - cannot compute PPPM} :dt
@ -8132,11 +8133,11 @@ boundary of a processor's sub-domain has moved more than 1/2 the
rebuilt and atoms being migrated to new processors. This also means
you may be missing pairwise interactions that need to be computed.
The solution is to change the re-neighboring criteria via the
"neigh_modify"_neigh_modify command. The safest settings are "delay 0
every 1 check yes". Second, it may mean that an atom has moved far
outside a processor's sub-domain or even the entire simulation box.
This indicates bad physics, e.g. due to highly overlapping atoms, too
large a timestep, etc. :dd
"neigh_modify"_neigh_modify.html command. The safest settings are
"delay 0 every 1 check yes". Second, it may mean that an atom has
moved far outside a processor's sub-domain or even the entire
simulation box. This indicates bad physics, e.g. due to highly
overlapping atoms, too large a timestep, etc. :dd
{Out of range atoms - cannot compute PPPMDisp} :dt
@ -8148,11 +8149,11 @@ boundary of a processor's sub-domain has moved more than 1/2 the
rebuilt and atoms being migrated to new processors. This also means
you may be missing pairwise interactions that need to be computed.
The solution is to change the re-neighboring criteria via the
"neigh_modify"_neigh_modify command. The safest settings are "delay 0
every 1 check yes". Second, it may mean that an atom has moved far
outside a processor's sub-domain or even the entire simulation box.
This indicates bad physics, e.g. due to highly overlapping atoms, too
large a timestep, etc. :dd
"neigh_modify"_neigh_modify.html command. The safest settings are
"delay 0 every 1 check yes". Second, it may mean that an atom has
moved far outside a processor's sub-domain or even the entire
simulation box. This indicates bad physics, e.g. due to highly
overlapping atoms, too large a timestep, etc. :dd
{Overflow of allocated fix vector storage} :dt

View File

@ -54,30 +54,30 @@ accelerate: run with various acceleration options (OpenMP, GPU, Phi)
balance: dynamic load balancing, 2d system
body: body particles, 2d system
colloid: big colloid particles in a small particle solvent, 2d system
comb: models using the COMB potential
comb: models using the COMB potential
coreshell: core/shell model using CORESHELL package
crack: crack propagation in a 2d solid
crack: crack propagation in a 2d solid
deposit: deposit atoms and molecules on a surface
dipole: point dipolar particles, 2d system
dreiding: methanol via Dreiding FF
eim: NaCl using the EIM potential
ellipse: ellipsoidal particles in spherical solvent, 2d system
flow: Couette and Poiseuille flow in a 2d channel
flow: Couette and Poiseuille flow in a 2d channel
friction: frictional contact of spherical asperities between 2d surfaces
hugoniostat: Hugoniostat shock dynamics
indent: spherical indenter into a 2d solid
indent: spherical indenter into a 2d solid
kim: use of potentials in Knowledge Base for Interatomic Models (KIM)
meam: MEAM test for SiC and shear (same as shear examples)
melt: rapid melt of 3d LJ system
meam: MEAM test for SiC and shear (same as shear examples)
melt: rapid melt of 3d LJ system
micelle: self-assembly of small lipid-like molecules into 2d bilayers
min: energy minimization of 2d LJ melt
msst: MSST shock dynamics
min: energy minimization of 2d LJ melt
msst: MSST shock dynamics
nb3b: use of nonbonded 3-body harmonic pair style
neb: nudged elastic band (NEB) calculation for barrier finding
nemd: non-equilibrium MD of 2d sheared system
neb: nudged elastic band (NEB) calculation for barrier finding
nemd: non-equilibrium MD of 2d sheared system
obstacle: flow around two voids in a 2d channel
peptide: dynamics of a small solvated peptide chain (5-mer)
peri: Peridynamic model of cylinder impacted by indenter
peri: Peridynamic model of cylinder impacted by indenter
pour: pouring of granular particles into a 3d box, then chute flow
prd: parallel replica dynamics of vacancy diffusion in bulk Si
python: using embedded Python in a LAMMPS input script
@ -120,7 +120,7 @@ browser.
Uppercase directories :h4
ASPHERE: various aspherical particle models, using ellipsoids, rigid bodies, line/triangle particles, etc
COUPLE: examples of how to use LAMMPS as a library
COUPLE: examples of how to use LAMMPS as a library
DIFFUSE: compute diffusion coefficients via several methods
ELASTIC: compute elastic constants at zero temperature
ELASTIC_T: compute elastic constants at finite temperature

View File

@ -37,7 +37,7 @@ pitfalls or alternatives.
Please see some of the closed issues for examples of how to
suggest code enhancements, submit proposed changes, or report
elated issues and how they are resoved.
possible bugs and how they are resolved.
As an alternative to using GitHub, you may e-mail the
"core developers"_http://lammps.sandia.gov/authors.html or send
@ -71,7 +71,7 @@ a parallel framework similar to LAMMPS. Most notably, these have
included many-body potentials - Stillinger-Weber, Tersoff, ReaxFF -
and the associated charge-equilibration routines needed for ReaxFF.
The "History link"_http://lammps.sandia.gov/history.html on the
The "History link"_http://lammps.sandia.gov/history.html on the
LAMMPS WWW page gives a timeline of features added to the
C++ open-source version of LAMMPS over the last several years.
@ -80,7 +80,7 @@ site"_lws, except for Warp & GranFlow which were primarily used
internally. A brief listing of their features is given here.
LAMMPS 2001
F90 + MPI
dynamic memory
spatial-decomposition parallelism
@ -96,7 +96,7 @@ LAMMPS 2001
user-defined diagnostics :ul
LAMMPS 99
F77 + MPI
static memory allocation
spatial-decomposition parallelism

View File

@ -4,7 +4,7 @@
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)
:line
:line
6. How-to discussions :h3
@ -68,7 +68,7 @@ Look at the {in.chain} input script provided in the {bench} directory
of the LAMMPS distribution to see the original script that these 2
scripts are based on. If that script had the line
restart 50 tmp.restart :pre
restart 50 tmp.restart :pre
added to it, it would produce 2 binary restart files (tmp.restart.50
and tmp.restart.100) as it ran.
@ -76,17 +76,17 @@ and tmp.restart.100) as it ran.
This script could be used to read the 1st restart file and re-run the
last 50 timesteps:
read_restart tmp.restart.50 :pre
read_restart tmp.restart.50 :pre
neighbor 0.4 bin
neigh_modify every 1 delay 1 :pre
neighbor 0.4 bin
neigh_modify every 1 delay 1 :pre
fix 1 all nve
fix 2 all langevin 1.0 1.0 10.0 904297 :pre
fix 1 all nve
fix 2 all langevin 1.0 1.0 10.0 904297 :pre
timestep 0.012 :pre
timestep 0.012 :pre
run 50 :pre
run 50 :pre
Note that the following commands do not need to be repeated because
their settings are included in the restart file: {units, atom_style,
@ -107,25 +107,25 @@ lmp_g++ -r tmp.restart.50 tmp.restart.data :pre
Then, this script could be used to re-run the last 50 steps:
units lj
atom_style bond
pair_style lj/cut 1.12
pair_modify shift yes
bond_style fene
units lj
atom_style bond
pair_style lj/cut 1.12
pair_modify shift yes
bond_style fene
special_bonds 0.0 1.0 1.0 :pre
read_data tmp.restart.data :pre
read_data tmp.restart.data :pre
neighbor 0.4 bin
neigh_modify every 1 delay 1 :pre
neighbor 0.4 bin
neigh_modify every 1 delay 1 :pre
fix 1 all nve
fix 2 all langevin 1.0 1.0 10.0 904297 :pre
fix 1 all nve
fix 2 all langevin 1.0 1.0 10.0 904297 :pre
timestep 0.012 :pre
timestep 0.012 :pre
reset_timestep 50
run 50 :pre
reset_timestep 50
run 50 :pre
Note that nearly all the settings specified in the original {in.chain}
script must be repeated, except the {pair_coeff} and {bond_coeff}
@ -522,7 +522,7 @@ H mass = 1.008
O charge = -1.040
H charge = 0.520
r0 of OH bond = 0.9572
theta of HOH angle = 104.52
theta of HOH angle = 104.52
OM distance = 0.15
LJ epsilon of O-O = 0.1550
LJ sigma of O-O = 3.1536
@ -629,7 +629,7 @@ the SPC and SPC/E models.
Wikipedia also has a nice article on "water
models"_http://en.wikipedia.org/wiki/Water_model.
:line
:line
6.10 Coupling LAMMPS to other codes :link(howto_10),h4
@ -729,7 +729,7 @@ LAMMPS and half to the other code and run both codes simultaneously
before syncing them up periodically. Or it might instantiate multiple
instances of LAMMPS to perform different calculations.
:line
:line
6.11 Visualizing LAMMPS snapshots :link(howto_11),h4
@ -832,7 +832,7 @@ rotation of [A], [B], and [C] and can be computed as follows:
where A = | [A] | indicates the scalar length of [A]. The hat symbol (^)
indicates the corresponding unit vector. {beta} and {gamma} are angles
between the vectors described below. Note that by construction,
between the vectors described below. Note that by construction,
[a], [b], and [c] have strictly positive x, y, and z components, respectively.
If it should happen that
[A], [B], and [C] form a left-handed basis, then the above equations
@ -841,17 +841,17 @@ to first apply an inversion. This can be achieved
by interchanging two basis vectors or by changing the sign of one of them.
For consistency, the same rotation/inversion applied to the basis vectors
must also be applied to atom positions, velocities,
must also be applied to atom positions, velocities,
and any other vector quantities.
This can be conveniently achieved by first converting to
This can be conveniently achieved by first converting to
fractional coordinates in the
old basis and then converting to distance coordinates in the new basis.
The transformation is given by the following equation:
:c,image(Eqs/rotate.jpg)
where {V} is the volume of the box, [X] is the original vector quantity and
[x] is the vector in the LAMMPS basis.
where {V} is the volume of the box, [X] is the original vector quantity and
[x] is the vector in the LAMMPS basis.
There is no requirement that a triclinic box be periodic in any
dimension, though it typically should be in at least the 2nd dimension
@ -938,17 +938,17 @@ defined above. The relationship between these 6 quantities
(a,b,c,alpha,beta,gamma) and the LAMMPS box sizes (lx,ly,lz) =
(xhi-xlo,yhi-ylo,zhi-zlo) and tilt factors (xy,xz,yz) is as follows:
:c,image(Eqs/box.jpg)
:c,image(Eqs/box.jpg)
The inverse relationship can be written as follows:
:c,image(Eqs/box_inverse.jpg)
:c,image(Eqs/box_inverse.jpg)
The values of {a}, {b}, {c} , {alpha}, {beta} , and {gamma} can be printed
out or accessed by computes using the
"thermo_style custom"_thermo_style.html keywords
The values of {a}, {b}, {c} , {alpha}, {beta} , and {gamma} can be printed
out or accessed by computes using the
"thermo_style custom"_thermo_style.html keywords
{cella}, {cellb}, {cellc}, {cellalpha}, {cellbeta}, {cellgamma},
respectively.
respectively.
As discussed on the "dump"_dump.html command doc page, when the BOX
BOUNDS for a snapshot is written to a dump file for a triclinic box,
@ -1854,13 +1854,19 @@ internal LAMMPS operations. Note that LAMMPS classes are defined
within a LAMMPS namespace (LAMMPS_NS) if you use them from another C++
application.
Library.cpp contains these 5 basic functions:
Library.cpp contains these functions for creating and destroying an
instance of LAMMPS and sending it commands to execute. See the
documentation in the src/library.cpp file for details:
void lammps_open(int, char **, MPI_Comm, void **)
void lammps_open_no_mpi(int, char **, void **)
void lammps_close(void *)
int lammps_version(void *)
void lammps_file(void *, char *)
char *lammps_command(void *, char *)
void lammps_commands_list(void *, int, char **)
void lammps_commands_string(void *, char *)
void lammps_free(void *) :pre
The lammps_open() function is used to initialize LAMMPS, passing in a
list of strings as if they were "command-line
@ -1880,6 +1886,10 @@ half to the other code and run both codes simultaneously before
syncing them up periodically. Or it might instantiate multiple
instances of LAMMPS to perform different calculations.
The lammps_open_no_mpi() function is similar except that no MPI
communicator is passed from the caller. Instead, MPI_COMM_WORLD is
used to instantiate LAMMPS, and MPI is initialized if necessary.
The lammps_close() function is used to shut down an instance of LAMMPS
and free all its memory.
@ -1891,44 +1901,106 @@ changes to the LAMMPS command syntax between versions. The returned
LAMMPS version code is an integer (e.g. 2 Sep 2015 results in
20150902) that grows with every new LAMMPS version.
The lammps_file(), lammps_command(), lammps_commands_list(), and
lammps_commands_string() functions are used to pass one or more
commands to LAMMPS to execute, the same as if they were coming from an
input script.
Via these functions, the calling code can read or generate a series of
LAMMPS commands, one or several at a time, and pass them through the
library interface to set up a problem and then run it in stages. The caller
can interleave the command function calls with operations it performs,
calls to extract information from or set information within LAMMPS, or
calls to another code's library.
The lammps_file() function passes the filename of an input script.
The lammps_command() function passes a single command as a string.
The lammps_commands_list() function passes multiple commands in a
char** list. In both lammps_command() and lammps_commands_list(),
individual commands may or may not have a trailing newline. The
lammps_commands_string() function passes multiple commands
concatenated into one long string, separated by newline characters.
In both lammps_commands_list() and lammps_commands_string(), a single
command can be spread across multiple lines, if the last printable
character of all but the last line is "&", the same as if the lines
appeared in an input script.
The lammps_free() function is a clean-up function to free memory that
the library allocated previously via other function calls. See
comments in src/library.cpp file for which other functions need this
clean-up.
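Putting these creation and execution functions together, a complete
(if trivial) driver can be written in a few lines of C.  The sketch
below is not taken from examples/COUPLE; the input script name and the
commands are placeholders, and it assumes src/library.h is on the
include path and that the executable is linked against the LAMMPS
library (and MPI, which the header requires):

/* Minimal driver sketch (serial, running on MPI_COMM_WORLD).  The
   script name "in.setup" and the run commands are placeholders. */
#include <stdio.h>
#include "library.h"                       /* the LAMMPS C API, src/library.h */
int main(void)
{
  void *lmp = NULL;
  char *args[] = {"liblammps", "-log", "none"};       /* acts like argv */
  lammps_open_no_mpi(3, args, &lmp);
  printf("LAMMPS version %d\n", lammps_version(lmp));
  lammps_file(lmp, "in.setup");                 /* whole input script */
  lammps_command(lmp, "run 100");               /* one command */
  lammps_commands_string(lmp, "run 100\nrun 100");   /* several at once */
  lammps_close(lmp);
  return 0;
} :pre

A parallel driver would instead call lammps_open() and pass the MPI
communicator (or sub-communicator) on which that LAMMPS instance
should run.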
Library.cpp also contains these functions for extracting information
from LAMMPS and setting values within LAMMPS. Again, see the
documentation in the src/library.cpp file for details, including
which quantities can be queried by name:
void *lammps_extract_global(void *, char *)
void lammps_extract_box(void *, double *, double *,
double *, double *, double *, int *, int *)
void *lammps_extract_atom(void *, char *)
void *lammps_extract_compute(void *, char *, int, int)
void *lammps_extract_fix(void *, char *, int, int, int, int)
void *lammps_extract_variable(void *, char *, char *) :pre
void lammps_reset_box(void *, double *, double *, double, double, double)
int lammps_set_variable(void *, char *, char *) :pre
double lammps_get_thermo(void *, char *)
int lammps_get_natoms(void *)
void lammps_gather_atoms(void *, double *)
void lammps_scatter_atoms(void *, double *) :pre
void lammps_create_atoms(void *, int, tagint *, int *, double *, double *,
imageint *, int) :pre
The extract functions return a pointer to various global or per-atom
quantities stored in LAMMPS or to values calculated by a compute, fix,
or variable. The pointer returned by the extract_global() function
can be used as a permanent reference to a value which may change. For
the other extract functions, the underlying storage may be reallocated
as LAMMPS runs, so you need to re-call the function to assure a
current pointer or returned value(s).
The lammps_reset_box() function resets the size and shape of the
simulation box, e.g. as part of restoring a previously extracted and
saved state of a simulation.
The lammps_set_variable() function can set an existing string-style
variable to a new string value, so that subsequent LAMMPS commands can
access the variable.
The lammps_get_thermo() function returns the current value of a thermo
keyword as a double precision value.
The lammps_get_natoms() function returns the total number of atoms in
the system and can be used by the caller to allocate space for the
lammps_gather_atoms() and lammps_scatter_atoms() functions. The
gather function collects atom info of the requested type (atom coords,
types, forces, etc) from all processors, orders them by atom ID, and
returns a full list to each calling processor. The scatter function
does the inverse. It distributes the same kinds of values,
passed by the caller, to each atom owned by individual processors.
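The sketch below illustrates how a caller might combine several of the
query and set functions just described, once a system has been set up
in the instance lmp.  It is an assumption-laden example, not code from
LAMMPS: the thermo keyword "pe", the global name "dt", and the
string-style variable "nsteps" (which must already have been defined
in the input script) are placeholders; per-atom data would be moved
with lammps_gather_atoms() and lammps_scatter_atoms() in the same
spirit:

/* Hedged sketch of the query/set functions described above; call it
   after the system has been set up in the LAMMPS instance "lmp".
   The names "pe", "dt", and "nsteps" are examples only, and the
   string-style variable "nsteps" must already exist in the input. */
#include <stdio.h>
#include "library.h"
void report_and_run(void *lmp)
{
  int natoms = lammps_get_natoms(lmp);
  double pe = lammps_get_thermo(lmp, "pe");            /* thermo keyword */
  double *dt = (double *) lammps_extract_global(lmp, "dt");
  printf("natoms %d  pe %g  dt %g\n", natoms, pe, dt ? *dt : 0.0);
  lammps_set_variable(lmp, "nsteps", "1000");  /* existing string variable */
  lammps_command(lmp, "run ${nsteps}");
} :pre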
The lammps_create_atoms() function takes a list of N atoms as input
with atom types and coords (required), and optionally atom IDs and
velocities and image flags. It uses the coords of each atom to assign
it as a new atom to the processor that owns it. This function is
useful to add atoms to a simulation or (in tandem with
lammps_reset_box()) to restore a previously extracted and saved state
of a simulation. Additional properties for the new atoms can then be
assigned via the lammps_scatter_atoms() or lammps_extract_atom()
functions.
The examples/COUPLE and python directories have example C++ and C and
Python codes which show how a driver code can link to LAMMPS as a
library, run LAMMPS on a subset of processors, grab data from LAMMPS,
change it, and put it back into LAMMPS.
NOTE: You can write code for additional functions as needed to define
how your code talks to LAMMPS and add them to src/library.cpp and
src/library.h, as well as to the "Python
interface"_Section_python.html. The added functions can access or
change any LAMMPS data you wish.
:line
@ -2092,11 +2164,11 @@ lattice fcc 5.376 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1
region box block 0 4 0 4 0 4
create_box 1 box
create_atoms 1 box
mass 1 39.948
pair_style lj/cut 13.0
pair_coeff * * 0.2381 3.405
timestep ${dt}
thermo $d :pre
# equilibration and thermalization :pre
@ -2130,7 +2202,7 @@ but uses the Einstein formulation, analogous to the Einstein
mean-square-displacement formulation for self-diffusivity. The
time-integrated momentum fluxes play the role of Cartesian
coordinates, whose mean-square displacement increases linearly
with time at sufficiently long times.
:line
@ -2510,8 +2582,8 @@ the electrostatic environment inducing polarizability.
Technically, shells are attached to the cores by a spring force f =
k*r where k is a parametrized spring constant and r is the distance
between the core and the shell. The charges of the core and the shell
add up to the ion charge, thus q(ion) = q(core) + q(shell). This
setup introduces the ion polarizability (alpha) given by
alpha = q(shell)^2 / k. In a
similar fashion the mass of the ion is distributed on the core and the
shell with the core having the larger mass.
@ -2526,7 +2598,7 @@ for NaCl, as found in examples/coreshell, has this format:
432 atoms # core and shell atoms
216 bonds # number of core/shell springs :pre
4 atom types # 2 cores and 2 shells for Na and Cl
2 bond types :pre
0.0 24.09597 xlo xhi
@ -2545,19 +2617,19 @@ Atoms :pre
1 1 2 1.5005 0.00000000 0.00000000 0.00000000 # core of core/shell pair 1
2 1 4 -2.5005 0.00000000 0.00000000 0.00000000 # shell of core/shell pair 1
3 2 1 1.5056 4.01599500 4.01599500 4.01599500 # core of core/shell pair 2
4 2 3 -0.5056 4.01599500 4.01599500 4.01599500 # shell of core/shell pair 2
(...) :pre
Bonds # Bond topology for spring forces :pre
1 2 1 2 # spring for core/shell pair 1
2 2 3 4 # spring for core/shell pair 2
(...) :pre
Non-Coulombic (e.g. Lennard-Jones) pairwise interactions are only
defined between the shells. Coulombic interactions are defined
between all cores and shells. If desired, additional bonds can be
specified between cores.
The "special_bonds"_special_bonds.html command should be used to
turn off the Coulombic interaction within core/shell pairs, since that
@ -2620,7 +2692,7 @@ Note that to perform thermostatting using this definition of
temperature, the "fix modify temp"_fix_modify.html command should be
used to assign the compute to the thermostat fix. Likewise the
"thermo_modify temp"_thermo_modify.html command can be used to make
this temperature be output for the overall system.
For the NaCl example, this can be done as follows:
@ -2632,13 +2704,13 @@ fix thermostatequ all nve # integrator as needed f
fix_modify thermoberendsen temp CSequ
thermo_modify temp CSequ # output of center-of-mass derived temperature :pre
If "compute temp/cs"_compute_temp_cs.html is used, the decoupled
relative motion of the core and the shell should in theory be
If "compute temp/cs"_compute_temp_cs.html is used, the decoupled
relative motion of the core and the shell should in theory be
stable. However, numerical fluctuations can introduce a small
momentum to the system, which is noticeable over long trajectories.
Therefore it is recommended to use the "fix
momentum"_fix_momentum.html command in combination with "compute
temp/cs"_compute_temp_cs.html when equilibrating the system to
prevent any drift.
When initializing the velocities of a system with core/shell pairs, it
@ -2661,17 +2733,17 @@ to the electrostatic environment. This fast movement also limits the
timestep size that can be used.
The primary literature of the adiabatic core/shell model suggests that
the fast relative motion of the core/shell pairs only allows negligible
energy transfer to the environment. Therefore it is not intended to
decouple the core/shell degree of freedom from the physical system
during production runs. In other words, the "compute
temp/cs"_compute_temp_cs.html command should not be used during
production runs and is only required during equilibration. This way one
is consistent with literature (based on the code packages DL_POLY or
GULP for instance).
The mentioned energy transfer will typically lead to a small drift
in total energy over time. This internal energy can be monitored
using the "compute chunk/atom"_compute_chunk_atom.html and "compute
temp/chunk"_compute_temp_chunk.html commands. The internal kinetic
energies of each core/shell pair can then be summed using the sum()
@ -2702,14 +2774,14 @@ The additional section in the data file would be formatted like this:
CS-Info # header of additional section :pre
1 1 # column 1 = atom ID, column 2 = core/shell ID
2 1
3 2
4 2
5 3
6 3
7 4
8 4
(...) :pre
:line
@ -2771,7 +2843,7 @@ temp/drude"_compute_temp_drude.html. This requires also to use the
command {comm_modify vel yes}.
Short-range damping of the induced dipole interactions can be achieved
using Thole functions through the "pair style
thole"_pair_thole.html in "pair_style hybrid/overlay"_pair_hybrid.html
with a Coulomb pair style. It may be useful to use {coul/long/cs} or
similar from the CORESHELL package if the core and Drude particle come

View File

@ -181,7 +181,7 @@ Atom creation :h5
displace atoms :ul
Ensembles, constraints, and boundary conditions :h5
("fix"_fix.html command)
("fix"_fix.html command)
2d or 3d systems
orthogonal or non-orthogonal (triclinic symmetry) simulation domains
@ -199,7 +199,7 @@ Ensembles, constraints, and boundary conditions :h5
variety of additional boundary conditions and constraints :ul
Integrators :h5
("run"_run.html, "run_style"_run_style.html, "minimize"_minimize.html commands)
("run"_run.html, "run_style"_run_style.html, "minimize"_minimize.html commands)
velocity-Verlet integrator
Brownian dynamics
@ -213,7 +213,7 @@ Diagnostics :h5
see the various flavors of the "fix"_fix.html and "compute"_compute.html commands :ul
Output :h5
("dump"_dump.html, "restart"_restart.html commands)
("dump"_dump.html, "restart"_restart.html commands)
log file of thermodynamic info
text dump files of atom coords, velocities, other per-atom quantities
@ -366,11 +366,11 @@ complementary modeling tasks.
"DL_POLY"_dlpoly
"Tinker"_tinker :ul
:link(charmm,http://www.charmm.org)
:link(amber,http://ambermd.org)
:link(namd,http://www.ks.uiuc.edu/Research/namd/)
:link(nwchem,http://www.emsl.pnl.gov/docs/nwchem/nwchem.html)
:link(dlpoly,http://www.ccp5.ac.uk/DL_POLY_CLASSIC)
:link(tinker,http://dasher.wustl.edu/tinker)
CHARMM, AMBER, NAMD, NWCHEM, and Tinker are designed primarily for

View File

@ -84,7 +84,7 @@ Package, Description, Author(s), Doc page, Example, Library
"PERI"_#PERI, Peridynamics models, Mike Parks (Sandia), "pair_style peri"_pair_peri.html, peri, -
"POEMS"_#POEMS, coupled rigid body motion, Rudra Mukherjee (JPL), "fix poems"_fix_poems.html, rigid, lib/poems
"PYTHON"_#PYTHON, embed Python code in an input script, -, "python"_python.html, python, lib/python
"REAX"_#REAX, ReaxFF potential, Aidan Thompson (Sandia), "pair_style reax"_pair_reax.html, reax, lib/reax
"REAX"_#REAX, ReaxFF potential, Aidan Thompson (Sandia), "pair_style reax"_pair_reax.html, reax, lib/reax
"REPLICA"_#REPLICA, multi-replica methods, -, "Section 6.6.5"_Section_howto.html#howto_5, tad, -
"RIGID"_#RIGID, rigid bodies, -, "fix rigid"_fix_rigid.html, rigid, -
"SHOCK"_#SHOCK, shock loading methods, -, "fix msst"_fix_msst.html, -, -
@ -182,7 +182,7 @@ Supporting info: "atom_style body"_atom_style.html, "body"_body.html,
"pair_style body"_pair_body.html, examples/body
:line
CLASS2 package :link(CLASS2),h5
Contents: Bond, angle, dihedral, improper, and pair styles for the
@ -206,9 +206,9 @@ Supporting info: "bond_style class2"_bond_class2.html, "angle_style
class2"_angle_class2.html, "dihedral_style
class2"_dihedral_class2.html, "improper_style
class2"_improper_class2.html, "pair_style lj/class2"_pair_class2.html
:line
COLLOID package :link(COLLOID),h5
Contents: Support for coarse-grained colloidal particles. Wall fix
@ -239,9 +239,9 @@ lubricate"_pair_lubricate.html, "pair_style
lubricateU"_pair_lubricateU.html, examples/colloid, examples/srd
:line
COMPRESS package :link(COMPRESS),h5
Contents: Support for compressed output of dump files via the zlib
compression library, using dump styles with a "gz" in their style
name.
@ -271,7 +271,7 @@ atom/gz"_dump.html, "dump cfg/gz"_dump.html, "dump
custom/gz"_dump.html, "dump xyz/gz"_dump.html
:line
CORESHELL package :link(CORESHELL),h5
Contents: Compute and pair styles that implement the adiabatic
@ -302,7 +302,7 @@ buck/coul/long/cs"_pair_cs.html, pair_style
lj/cut/coul/long/cs"_pair_lj.html, examples/coreshell
:line
DIPOLE package :link(DIPOLE),h5
Contents: An atom style and several pair styles to support point
@ -326,9 +326,9 @@ Supporting info: "atom_style dipole"_atom_style.html, "pair_style
lj/cut/dipole/cut"_pair_dipole.html, "pair_style
lj/cut/dipole/long"_pair_dipole.html, "pair_style
lj/long/dipole/long"_pair_dipole.html, examples/dipole
:line
GPU package :link(GPU),h5
Contents: Dozens of pair styles and a version of the PPPM long-range
@ -385,9 +385,9 @@ Pair Styles section of "Section 3.5"_Section_commands.html#cmd_5
for any pair style listed with a (g),
"kspace_style"_kspace_style.html, "package gpu"_package.html,
examples/accelerate, bench/FERMI, bench/KEPLER
:line
GRANULAR package :link(GRANULAR),h5
Contents: Fixes and pair styles that support models of finite-size
@ -412,9 +412,9 @@ Supporting info: "Section 6.6"_Section_howto.html#howto_6, "fix
pour"_fix_pour.html, "fix wall/gran"_fix_wall_gran.html, "pair_style
gran/hooke"_pair_gran.html, "pair_style
gran/hertz/history"_pair_gran.html, examples/pour, bench/in.chute
:line
KIM package :link(KIM),h5
Contents: A pair style that interfaces to the Knowledge Base for
@ -443,9 +443,9 @@ Make.py -p ^kim -a machine :pre
Supporting info: src/KIM/README, lib/kim/README, "pair_style
kim"_pair_kim.html, examples/kim
:line
KOKKOS package :link(KOKKOS),h5
Contents: Dozens of atom, pair, bond, angle, dihedral, improper styles
@ -501,7 +501,7 @@ for any pair style listed with a (k), "package kokkos"_package.html,
examples/accelerate, bench/FERMI, bench/KEPLER
:line
KSPACE package :link(KSPACE),h5
Contents: A variety of long-range Coulombic solvers, and pair styles
@ -543,7 +543,7 @@ which have "long" or "msm" in their style name,
examples/peptide, bench/in.rhodo
:line
MANYBODY package :link(MANYBODY),h5
Contents: A variety of many-body and bond-order potentials. These
@ -565,14 +565,14 @@ make machine :pre
Make.py -p ^manybody -a machine :pre
Supporting info:
Examples: Pair Styles section of "Section
3.5"_Section_commands.html#cmd_5, examples/comb, examples/eim,
examples/nb3d, examples/vashishta
:line
MC package :link(MC),h5
Contents: Several fixes and a pair style that have Monte Carlo (MC) or
@ -598,9 +598,9 @@ Supporting info: "fix atom/swap"_fix_atom_swap.html, "fix
bond/break"_fix_bond_break.html, "fix
bond/create"_fix_bond_create.html, "fix bond/swap"_fix_bond_swap.html,
"fix gcmc"_fix_gcmc.html, "pair_style dsmc"_pair_dsmc.html
:line
MEAM package :link(MEAM),h5
Contents: A pair style for the modified embedded atom (MEAM)
@ -644,9 +644,9 @@ Make.py -p ^meam -a machine :pre
Supporting info: lib/meam/README, "pair_style meam"_pair_meam.html,
examples/meam
:line
MISC package :link(MISC),h5
Contents: A variety of computes, fixes, and pair styles that are not
@ -670,9 +670,9 @@ Make.py -p ^misc -a machine :pre
Supporting info: "compute ti"_compute_ti.html, "fix
evaporate"_fix_evaporate.html, "fix tmm"_fix_ttm.html, "fix
viscosity"_fix_viscosity.html, examples/misc
:line
MOLECULE package :link(MOLECULE),h5
Contents: A large number of atom, pair, bond, angle, dihedral,
@ -704,7 +704,7 @@ lj/charmm/coul/charmm"_pair_charmm.html,
examples/micelle, examples/peptide, bench/in.chain, bench/in.rhodo
:line
MPIIO package :link(MPIIO),h5
Contents: Support for parallel output/input of dump and restart files
@ -729,9 +729,9 @@ Make.py -p ^mpiio -a machine :pre
Supporting info: "dump"_dump.html, "restart"_restart.html,
"write_restart"_write_restart.html, "read_restart"_read_restart.html
:line
OPT package :link(OPT),h5
Contents: A handful of pair styles with an "opt" in their style name
@ -768,7 +768,7 @@ Supporting info: "Section 5.3"_Section_accelerate.html#acc_3,
listed with an (t), examples/accelerate, bench/KEPLER
:line
PERI package :link(PERI),h5
Contents: Support for the Peridynamics method, a particle-based
@ -796,9 +796,9 @@ Supporting info:
"doc/PDF/PDLammps_VES.pdf"_PDF/PDLammps_VES.pdf, "atom_style
peri"_atom_style.html, "compute damage/atom"_compute_damage_atom.html,
"pair_style peri/pmb"_pair_peri.html, examples/peri
:line
POEMS package :link(POEMS),h5
Contents: A fix that wraps the Parallelizable Open source Efficient
@ -839,7 +839,7 @@ Supporting info: src/POEMS/README, lib/poems/README,
"fix poems"_fix_poems.html, examples/rigid
:line
PYTHON package :link(PYTHON),h5
Contents: A "python"_python.html command which allow you to execute
@ -873,9 +873,9 @@ make machine :pre
Make.py -p ^python -a machine :pre
Supporting info: examples/python
:line
QEQ package :link(QEQ),h5
Contents: Several fixes for performing charge equilibration (QEq) via
@ -897,9 +897,9 @@ make machine :pre
Make.py -p ^qeq -a machine :pre
Supporting info: "fix qeq/*"_fix_qeq.html, examples/qeq
:line
REAX package :link(REAX),h5
Contents: A pair style for the ReaxFF potential, a universal reactive
@ -941,9 +941,9 @@ Make.py -p ^reax -a machine :pre
Supporting info: lib/reax/README, "pair_style reax"_pair_reax.html,
"fix reax/bonds"_fix_reax_bonds.html, examples/reax
:line
REPLICA package :link(REPLICA),h5
Contents: A collection of multi-replica methods that are used by
@ -978,7 +978,7 @@ Supporting info: "Section 6.5"_Section_howto.html#howto_5,
examples/tad
:line
RIGID package :link(RIGID),h5
Contents: A collection of computes and fixes which enforce rigid
@ -1005,7 +1005,7 @@ Supporting info: "compute erotate/rigid"_compute_erotate_rigid.html,
rigid/*"_fix_rigid.html, examples/ASPHERE, examples/rigid
:line
SHOCK package :link(SHOCK),h5
Contents: A small number of fixes useful for running impact
@ -1028,15 +1028,15 @@ Make.py -p ^shock -a machine :pre
Supporting info: "fix append/atoms"_fix_append_atoms.html, "fix
msst"_fix_msst.html, "fix nphug"_fix_nphug.html, "fix
wall/piston"_fix_wall_piston.html, examples/hugoniostat, examples/msst
:line
SNAP package :link(SNAP),h5
Contents: A pair style for the spectral neighbor analysis potential
(SNAP), which is an empirical potential which can be quantum accurate
when fit to an archive of DFT data. Computes useful for analyzing
properties of the potential are also included.
To install via make or Make.py:
@ -1055,9 +1055,9 @@ Make.py -p ^snap -a machine :pre
Supporting info: "pair snap"_pair_snap.html, "compute
sna/atom"_compute_sna_atom.html, "compute snad/atom"_compute_sna_atom.html,
"compute snav/atom"_compute_sna_atom.html, examples/snap
:line
SRD package :link(SRD),h5
Contents: Two fixes which implement the Stochastic Rotation Dynamics
@ -1080,9 +1080,9 @@ Make.py -p ^srd -a machine :pre
Supporting info: "fix srd"_fix_srd.html, "fix
wall/srd"_fix_wall_srd.html, examples/srd, examples/ASPHERE
:line
VORONOI package :link(VORONOI),h5
Contents: A "compute voronoi/atom"_compute_voronoi_atom.html command
@ -1129,9 +1129,9 @@ Make.py -p ^voronoi -a machine :pre
Supporting info: src/VORONOI/README, lib/voronoi/README, "compute
voronoi/atom"_compute_voronoi_atom.html, examples/voronoi
:line
4.2 User packages :h4,link(pkg_2)
The current list of user-contributed packages is as follows:
@ -1140,6 +1140,7 @@ Package, Description, Author(s), Doc page, Example, Pic/movie, Library
"USER-ATC"_#USER-ATC, atom-to-continuum coupling, Jones & Templeton & Zimmerman (1), "fix atc"_fix_atc.html, USER/atc, "atc"_atc, lib/atc
"USER-AWPMD"_#USER-AWPMD, wave-packet MD, Ilya Valuev (JIHT), "pair_style awpmd/cut"_pair_awpmd.html, USER/awpmd, -, lib/awpmd
"USER-CG-CMM"_#USER-CG-CMM, coarse-graining model, Axel Kohlmeyer (Temple U), "pair_style lj/sdk"_pair_sdk.html, USER/cg-cmm, "cg"_cg, -
"USER-CGDNA"_#USER-CGDNA, coarse-grained DNA force fields, Oliver Henrich (U Edinburgh), src/USER-CGDNA/README, USER/cgdna, -, -
"USER-COLVARS"_#USER-COLVARS, collective variables, Fiorin & Henin & Kohlmeyer (2), "fix colvars"_fix_colvars.html, USER/colvars, "colvars"_colvars, lib/colvars
"USER-DIFFRACTION"_#USER-DIFFRACTION, virutal x-ray and electron diffraction, Shawn Coleman (ARL),"compute xrd"_compute_xrd.html, USER/diffraction, -, -
"USER-DPD"_#USER-DPD, reactive dissipative particle dynamics (DPD), Larentzos & Mattox & Brennan (5), src/USER-DPD/README, USER/dpd, -, -
@ -1153,6 +1154,7 @@ Package, Description, Author(s), Doc page, Example, Pic/movie, Library
"USER-MISC"_#USER-MISC, single-file contributions, USER-MISC/README, USER-MISC/README, -, -, -
"USER-MANIFOLD"_#USER-MANIFOLD, motion on 2d surface, Stefan Paquay (Eindhoven U of Technology), "fix manifoldforce"_fix_manifoldforce.html, USER/manifold, "manifold"_manifold, -
"USER-MOLFILE"_#USER-MOLFILE, "VMD"_VMD molfile plug-ins, Axel Kohlmeyer (Temple U), "dump molfile"_dump_molfile.html, -, -, VMD-MOLFILE
"USER-NC-DUMP"_#USER-NC-DUMP, dump output via NetCDF, Lars Pastewka (Karlsruhe Institute of Technology, KIT), "dump nc / dump nc/mpiio"_dump_nc.html, -, -, lib/netcdf
"USER-OMP"_#USER-OMP, OpenMP threaded styles, Axel Kohlmeyer (Temple U), "Section 5.3.4"_accelerate_omp.html, -, -, -
"USER-PHONON"_#USER-PHONON, phonon dynamical matrix, Ling-Ti Kong (Shanghai Jiao Tong U), "fix phonon"_fix_phonon.html, USER/phonon, -, -
"USER-QMMM"_#USER-QMMM, QM/MM coupling, Axel Kohlmeyer (Temple U), "fix qmmm"_fix_qmmm.html, USER/qmmm, -, lib/qmmm
@ -1283,6 +1285,31 @@ him directly if you have questions.
:line
USER-CGDNA package :link(USER-CGDNA),h5
Contents: The CGDNA package implements coarse-grained force fields for
single- and double-stranded DNA. This is at the moment mainly the
oxDNA model, developed by Doye, Louis and Ouldridge at the University
of Oxford. The package also contains Langevin-type rigid-body
integrators with improved stability.
See these doc pages to get started:
"bond_style oxdna_fene"_bond_oxdna_fene.html
"pair_style oxdna_excv"_pair_oxdna_excv.html
"fix nve/dotc/langevin"_fix_nve_dotc_langevin.html :ul
Supporting info: /src/USER-CGDNA/README, "bond_style
oxdna_fene"_bond_oxdna_fene.html, "pair_style
oxdna_excv"_pair_oxdna_excv.html, "fix
nve/dotc/langevin"_fix_nve_dotc_langevin.html
Author: Oliver Henrich at the University of Edinburgh, UK (o.henrich
at epcc.ed.ac.uk or ohenrich at ph.ed.ac.uk). Contact him directly if
you have any questions.
:line
USER-COLVARS package :link(USER-COLVARS),h5
Contents: COLVARS stands for collective variables which can be used to
@ -1302,7 +1329,7 @@ fix. The COLVARS library itself is written and maintained by Giacomo
Fiorin (ICMS, Temple University, Philadelphia, PA, USA) and Jerome
Henin (LISM, CNRS, Marseille, France). Contact them directly if you
have questions.
:line
USER-DIFFRACTION package :link(USER-DIFFRACTION),h5
@ -1380,7 +1407,7 @@ in 2007. See src/USER-EFF/README for more details. There are
auxiliary tools for using this package in tools/eff; see its README
file.
Supporting info:
Author: Andres Jaramillo-Botero at CalTech (ajaramil at
wag.caltech.edu). Contact him directly if you have questions.
@ -1456,21 +1483,21 @@ LINKFLAGS: add -fopenmp :ul
For Phi mode add the following in addition to the CPU mode flags:
CCFLAGS: add -DLMP_INTEL_OFFLOAD and
LINKFLAGS: add -offload :ul
And also add this to CCFLAGS:
-offload-option,mic,compiler,"-fp-model fast=2 -mGLOB_default_function_attrs=\"gather_scatter_loop_unroll=4\"" :pre
Examples:
:line
USER-LB package :link(USER-LB),h5
Supporting info:
This package contains a LAMMPS implementation of a background
Lattice-Boltzmann fluid, which can be used to model MD particles
influenced by hydrodynamic forces.
@ -1489,8 +1516,8 @@ Examples: examples/USER/lb
USER-MGPT package :link(USER-MGPT),h5
Supporting info:
This package contains a fast implementation for LAMMPS of
quantum-based MGPT multi-ion potentials. The MGPT or model GPT method
derives from first-principles DFT-based generalized pseudopotential
@ -1521,8 +1548,8 @@ Examples: examples/USER/mgpt
USER-MISC package :link(USER-MISC),h5
Supporting info:
The files in this package are a potpourri of (mostly) unrelated
features contributed to LAMMPS by users. Each feature is a single
pair of files (*.cpp and *.h).
@ -1548,8 +1575,8 @@ Examples: examples/USER/misc
USER-MANIFOLD package :link(USER-MANIFOLD),h5
Supporting info:
This package contains a dump molfile command which uses molfile
plugins that are bundled with the
"VMD"_http://www.ks.uiuc.edu/Research/vmd molecular visualization and
@ -1574,8 +1601,8 @@ Contact him directly if you have questions.
USER-MOLFILE package :link(USER-MOLFILE),h5
Supporting info:
This package contains a dump molfile command which uses molfile
plugins that are bundled with the
"VMD"_http://www.ks.uiuc.edu/Research/vmd molecular visualization and
@ -1598,14 +1625,38 @@ The person who created this package is Axel Kohlmeyer at Temple U
:line
USER-NC-DUMP package :link(USER-NC-DUMP),h5
Contents: Dump styles for writing NetCDF format files. NetCDF is a binary,
portable, self-describing file format on top of HDF5. The file format
contents follow the AMBER NetCDF trajectory conventions
(http://ambermd.org/netcdf/nctraj.xhtml), but include extensions to this
convention. This package implements a "dump nc"_dump_nc.html command
and a "dump nc/mpiio"_dump_nc.html command to output LAMMPS snapshots
in this format. See src/USER-NC-DUMP/README for more details.
NetCDF files can be directly visualized with the following tools:
Ovito (http://www.ovito.org/). Ovito supports the AMBER convention
and all of the above extensions. :ulb,l
VMD (http://www.ks.uiuc.edu/Research/vmd/) :l
AtomEye (http://www.libatoms.org/). The libAtoms version of AtomEye contains
a NetCDF reader that is not present in the standard distribution of AtomEye :l,ule
The person who created these files is Lars Pastewka at
Karlsruhe Institute of Technology (lars.pastewka at kit.edu).
Contact him directly if you have questions.
:line
USER-OMP package :link(USER-OMP),h5
Supporting info:
This package provides OpenMP multi-threading support and
other optimizations of various LAMMPS pair styles, dihedral
styles, and fix styles.
See this section of the manual to get started:
"Section 5.3"_Section_accelerate.html#acc_3
@ -1643,8 +1694,8 @@ Examples: examples/USER/phonon
USER-QMMM package :link(USER-QMMM),h5
Supporting info:
This package provides a fix qmmm command which allows LAMMPS to be
used in a QM/MM simulation, currently only in combination with pw.x
code from the "Quantum ESPRESSO"_espresso package.
@ -1667,11 +1718,11 @@ The person who created this package is Axel Kohlmeyer at Temple U
(akohlmey at gmail.com). Contact him directly if you have questions.
:line
USER-QTB package :link(USER-QTB),h5
Supporting info:
This package provides a self-consistent quantum treatment of the
vibrational modes in a classical molecular dynamics simulation. By
coupling the MD simulation to a colored thermostat, it introduces zero
@ -1701,16 +1752,16 @@ Examples: examples/USER/qtb
USER-QUIP package :link(USER-QUIP),h5
Supporting info:
Examples: examples/USER/quip
:line
USER-REAXC package :link(USER-REAXC),h5
Supporting info:
This package contains an implementation for LAMMPS of the ReaxFF force
field. ReaxFF uses distance-dependent bond-order functions to
represent the contributions of chemical bonding to the potential
@ -1748,24 +1799,24 @@ Examples: examples/reax
USER-SMD package :link(USER-SMD),h5
Supporting info:
This package implements smoothed Mach dynamics (SMD) in
LAMMPS. Currently, the package has the following features:
* Does liquids via traditional Smooth Particle Hydrodynamics (SPH)
* Also solves solids mechanics problems via a state of the art
stabilized meshless method with hourglass control.
* Can specify hydrostatic interactions independently from material
strength models, i.e. pressure and deviatoric stresses are separated.
* Many material models available (Johnson-Cook, plasticity with
hardening, Mie-Grueneisen, Polynomial EOS). Easy to add new
material models.
* Rigid boundary conditions (walls) can be loaded as surface geometries
from *.STL files.
See the file doc/PDF/SMD_LAMMPS_userguide.pdf to get started.
@ -1783,8 +1834,8 @@ Examples: examples/USER/smd
USER-SMTBQ package :link(USER-SMTBQ),h5
Supporting info:
This package implements the Second Moment Tight Binding - QEq (SMTB-Q)
potential for the description of ionocovalent bonds in oxides.
@ -1806,22 +1857,22 @@ Examples: examples/USER/smtbq
USER-SPH package :link(USER-SPH),h5
Supporting info:
This package implements smoothed particle hydrodynamics (SPH) in
LAMMPS. Currently, the package has the following features:
* Tait, ideal gas, Lennard-Jones equations of state, full support for
complete (i.e. internal-energy dependent) equations of state
* Plain or Monaghan's XSPH integration of the equations of motion
* Density continuity or density summation to propagate the density field
* Commands to set internal energy and density of particles from the
input script
* Output commands to access internal energy and density for dumping and
thermo output
See the file doc/PDF/SPH_LAMMPS_userguide.pdf to get started.
@ -1839,7 +1890,7 @@ Examples: examples/USER/sph
USER-TALLY package :link(USER-TALLY),h5
Supporting info:
Examples: examples/USER/tally

View File

@ -8,28 +8,36 @@
11. Python interface to LAMMPS :h3
LAMMPS can work together with Python in three ways. First, Python can
wrap LAMMPS through the "LAMMPS library
interface"_Section_howto.html#howto_19, so that a Python script can
create one or more instances of LAMMPS and launch one or more
simulations. In Python lingo, this is "extending" Python with LAMMPS.
Second, the low-level Python interface can be used indirectly through the
PyLammps and IPyLammps wrapper classes in Python. These wrappers try to
simplify the usage of LAMMPS in Python by providing an object-based interface
to common LAMMPS functionality. They also reduce the amount of code necessary to
parameterize LAMMPS scripts through Python and make variables and computes
directly accessible. See "PyLammps interface"_#py_9 for more details.
Third, LAMMPS can use the Python interpreter, so that a LAMMPS input
script can invoke Python code, and pass information back-and-forth
between the input script and Python functions you write. The Python
code can also callback to LAMMPS to query or change its attributes.
In Python lingo, this is "embedding" Python in LAMMPS.
This section describes how to use these three approaches.
11.1 "Overview of running LAMMPS from Python"_#py_1
11.2 "Overview of using Python from a LAMMPS script"_#py_2
11.2 "Overview of using Python from a LAMMPS script"_#py_2
11.3 "Building LAMMPS as a shared library"_#py_3
11.4 "Installing the Python wrapper into Python"_#py_4
11.5 "Extending Python with MPI to run in parallel"_#py_5
11.6 "Testing the Python-LAMMPS interface"_#py_6
11.7 "Using LAMMPS from Python"_#py_7
11.8 "Example Python scripts that use LAMMPS"_#py_8 :ul
11.8 "Example Python scripts that use LAMMPS"_#py_8
11.9 "PyLammps interface"_#py_9 :ul
If you are not familiar with it, "Python"_http://www.python.org is a
powerful scripting and programming language which can essentially do
@ -503,7 +511,7 @@ one of several ways:
The last command requires that the first line of the script be
something like this:
#!/usr/local/bin/python
#!/usr/local/bin/python -i :pre
where the path points to where you have Python installed, and that you
@ -534,10 +542,11 @@ from lammps import lammps :pre
These are the methods defined by the lammps module. If you look at
the files src/library.cpp and src/library.h you will see that they
correspond one-to-one with calls you can make to the LAMMPS library
from a C++ or C or Fortran program, and which are described in
"Section 6.19"_Section_howto.html#howto_19 of the manual.
lmp = lammps() # create a LAMMPS object using the default liblammps.so library
# 4 optional args are allowed: name, cmdargs, ptr, comm
lmp = lammps(ptr=lmpptr) # use lmpptr as previously created LAMMPS object
lmp = lammps(comm=split) # create a LAMMPS object with a custom communicator, requires mpi4py 2.0.0 or later
lmp = lammps(name="g++") # create a LAMMPS object using the liblammps_g++.so library
@ -549,37 +558,41 @@ version = lmp.version() # return the numerical version id, e.g. LAMMPS 2 Sep 20
lmp.file(file) # run an entire input script, file = "in.lj"
lmp.command(cmd) # invoke a single LAMMPS command, cmd = "run 100" :pre
lmp.commands_list(cmdlist) # invoke commands in cmdlist = ["run 10", "run 20"]
lmp.commands_string(multicmd) # invoke commands in multicmd = "run 10\nrun 20"
xlo = lmp.extract_global(name,type) # extract a global quantity
# name = "boxxlo", "nlocal", etc
# type = 0 = int
# 1 = double :pre
coords = lmp.extract_atom(name,type) # extract a per-atom quantity
# name = "x", "type", etc
# type = 0 = vector of ints
# 1 = array of ints
# 2 = vector of doubles
# 3 = array of doubles :pre
eng = lmp.extract_compute(id,style,type) # extract value(s) from a compute
v3 = lmp.extract_fix(id,style,type,i,j) # extract value(s) from a fix
# id = ID of compute or fix
# style = 0 = global data
# 1 = per-atom data
# 2 = local data
# type = 0 = scalar
# 1 = vector
# 2 = array
# i,j = indices of value in global vector or array :pre
var = lmp.extract_variable(name,group,flag) # extract value(s) from a variable
# name = name of variable
# group = group ID (ignored for equal-style variables)
# flag = 0 = equal-style variable
# 1 = atom-style variable :pre
flag = lmp.set_variable(name,value) # set existing named string-style variable to value, flag = 0 if successful
value = lmp.get_thermo(name) # return current value of a thermo keyword
natoms = lmp.get_natoms() # total # of atoms as int
data = lmp.gather_atoms(name,type,count) # return atom attribute of all atoms gathered into data, ordered by atom ID
# name = "x", "charge", "type", etc
@ -599,9 +612,10 @@ create an instance of LAMMPS, wrapped in a Python class by the lammps
Python module, and return an instance of the Python class as lmp. It
is used to make all subsequent calls to the LAMMPS library.
Additional arguments to lammps() can be used to tell Python the name
of the shared library to load or to pass arguments to the LAMMPS
instance, the same as if LAMMPS were launched from a command-line
prompt.
If the ptr argument is set like this:
@ -626,8 +640,9 @@ lmp2 = lammps()
lmp1.file("in.file1")
lmp2.file("in.file2") :pre
The file(), command(), commands_list(), and commands_string() methods
allow an input script, a single command, or multiple commands to be
invoked.
The extract_global(), extract_atom(), extract_compute(),
extract_fix(), and extract_variable() methods return values or
@ -724,7 +739,7 @@ lmp.scatter_coords("x",1,3,x) :pre
Alternatively, you can just change values in the vector returned by
gather_atoms("x",1,3), since it is a ctypes vector of doubles.
:line
As noted above, these Python class methods correspond one-to-one with
the functions in the LAMMPS library interface in src/library.cpp and
@ -767,7 +782,7 @@ vizplotgui_tool.py, combination of viz_tool.py and plot.py and gui.py :tb(c=2)
For the viz_tool.py and vizplotgui_tool.py commands, replace "tool"
with "gl" or "atomeye" or "pymol" or "vmd", depending on what
visualization package you have installed.
Note that for GL, you need to be able to run the Pizza.py GL tool,
which is included in the pizza sub-directory. See the "Pizza.py doc
@ -817,3 +832,7 @@ different visualization package options. Click to see larger images:
:image(JPG/screenshot_atomeye_small.jpg,JPG/screenshot_atomeye.jpg)
:image(JPG/screenshot_pymol_small.jpg,JPG/screenshot_pymol.jpg)
:image(JPG/screenshot_vmd_small.jpg,JPG/screenshot_vmd.jpg)
11.9 PyLammps interface :link(py_9),h4
Please see the "PyLammps Tutorial"_tutorial_pylammps.html.

View File

@ -33,7 +33,7 @@ tar -xzvf lammps*.tar.gz :pre
This will create a LAMMPS directory containing two files and several
sub-directories:
README: text file
LICENSE: the GNU General Public License (GPL)
bench: benchmark problems
@ -600,10 +600,10 @@ LAMMPS will generate a run-time error. As far as we know, the
settings defined in src/lmptype.h are portable and work on every
current system.
In all cases, the size of problem that can be run on a per-processor
basis is limited by 4-byte integer storage to 2^31 atoms per processor
(about 2 billion). This should not normally be a limitation since such
a problem would have a huge per-processor memory footprint due to
neighbor lists and would run very slowly in terms of CPU secs/timestep.
:line
@ -706,7 +706,7 @@ future changes to LAMMPS.
User packages, such as user-atc or user-omp, have been contributed by
users, and always begin with the user prefix. If they are a single
command (single file), they are typically in the user-misc package.
Otherwise, they are a set of files grouped together which add a
specific functionality to the code.
User packages don't necessarily meet the requirements of the standard
@ -841,7 +841,7 @@ libpackage.a
Makefile.lammps :pre
The Makefile.lammps file will typically be a copy of one of the
Makefile.lammps.* files in the library directory.
Note that you must insure that the settings in Makefile.lammps are
appropriate for your system. If they are not, the LAMMPS build may
@ -883,7 +883,7 @@ A few packages require specific settings in Makefile.machine, to
either build or use the package effectively. These are the
USER-INTEL, KOKKOS, USER-OMP, and OPT packages, used for accelerating
code performance on CPUs or other hardware, as discussed in "Section
5.3"_Section_accelerate.html#acc_3.
5.3"_Section_accelerate.html#acc_3.
A summary of what Makefile.machine changes are needed for each of
these packages is given in "Section 4"_Section_packages.html.
@ -1199,7 +1199,7 @@ installer package from "here"_http://rpm.lammps.org/windows.html
For running the non-MPI executable, follow these steps:
Get a command prompt by going to Start->Run... ,
then typing "cmd". :ulb,l
Move to the directory where you have your input, e.g. a copy of
@ -1209,7 +1209,7 @@ At the command prompt, type "lmp_serial -in in.lj", replacing [in.lj]
with the name of your LAMMPS input script. :l
:ule
For the MPI version, which allows you to run LAMMPS under Windows on
multiple processors, follow these steps:
Download and install
@ -1224,7 +1224,7 @@ For this you need to start a Command Prompt in {Administrator Mode}
installation directory, then into the subdirectory [bin] and execute
[smpd.exe -install]. Exit the command window.
Get a new, regular command prompt by going to Start->Run... ,
then typing "cmd". :l
Move to the directory where you have your input file
@ -1488,7 +1488,7 @@ of the manual. World- and universe-style "variables"_variable.html
are useful in this context.
-plog file :pre
Specify the base name for the partition log files, so partition N
writes log information to file.N. If file is none, then no partition
log files are created. This overrides the filename specified in the
@ -1499,7 +1499,7 @@ replica_files/log.lammps) If this option is not used the log file for
partition N is log.lammps.N or whatever is specified by the -log
command-line option.
-pscreen file :pre
Specify the base name for the partition screen file, so partition N
writes screen information to file.N. If file is none, then no
@ -1511,7 +1511,7 @@ sub-directory (-pscreen replica_files/screen). If this option is not
used the screen file for partition N is screen.N or whatever is
specified by the -screen command-line option.
-restart restartfile {remap} datafile keyword value ... :pre
Convert the restart file into a data file and immediately exit. This
is the same operation as if the following 2-line input script were
@ -1572,7 +1572,7 @@ to
so that the processors in each partition will be
0 1 2 4 5 6 8 9 10
3 7 11 :pre
See the "processors" command for how to insure processors from each
@ -1601,9 +1601,9 @@ implementations, either by environment variables that specify how to
order physical processors, or by config files that specify what
physical processors to assign to each MPI rank. The -reorder switch
simply gives you a portable way to do this without relying on MPI
itself. See the "processors out"_processors command for how to output
info on the final assignment of physical processors to the LAMMPS
simulation domain.
itself. See the "processors out"_processors.html command for how
to output info on the final assignment of physical processors to
the LAMMPS simulation domain.
-screen file :pre
@ -1663,12 +1663,12 @@ invokes the default USER-INTEL settings, as if the command "package
intel 1" were used at the top of your input script. These settings
can be changed by using the "-package intel" command-line switch or
the "package intel"_package.html command in your script. If the
USER-OMP package is also installed, the hybrid style with "intel omp"
arguments can be used to make the omp suffix a second choice, if a
requested style is not available in the USER-INTEL package. It will
also invoke the default USER-OMP settings, as if the command "package
omp 0" were used at the top of your input script. These settings can
be changed by using the "-package omp" command-line switch or the
"package omp"_package.html command in your script.
For the KOKKOS package, using this command-line switch also invokes
@ -1727,7 +1727,7 @@ thermodynamic state and a total run time for the simulation. It then
appends statistics about the CPU time and storage requirements for the
simulation. An example set of statistics is shown here:
Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms :pre
Performance: 18.436 ns/day 1.302 hours/ns 106.689 timesteps/s
97.0% CPU use with 4 MPI tasks x no OpenMP threads :pre
@ -1757,14 +1757,14 @@ Ave special neighs/atom = 2.34032
Neighbor list builds = 26
Dangerous builds = 0 :pre
The first section provides a global loop timing summary. The {loop time}
is the total wall time for the section. The {Performance} line is
provided for convenience to help predict the number of loop
continuations required and for comparing performance with other,
similar MD codes. The {CPU use} line provides the CPU utilization per
MPI task; it should be close to 100% times the number of OpenMP
threads (or 1 if no OpenMP). Lower numbers correspond to delays due
to file I/O or insufficient thread utilization.
The MPI task section gives the breakdown of the CPU run time (in
seconds) into major categories:
@ -1791,7 +1791,7 @@ is present that also prints the CPU utilization in percent. In
addition, when using {timer full} and the "package omp"_package.html
command are active, a similar timing summary of time spent in threaded
regions to monitor thread utilization and load balance is provided. A
new entry is the {Reduce} section, which lists the time spent in
reducing the per-thread data elements to the storage for non-threaded
computation. These thread timings are taken from the first MPI rank
only and thus, as the breakdown for MPI tasks can change from MPI
@ -1833,7 +1833,7 @@ e.g.
Minimization stats:
Stopping criterion = linesearch alpha is zero
Energy initial, next-to-last, final =
-6372.3765206 -8328.46998942 -8328.46998942
Force two-norm initial, final = 1059.36 5.36874
Force max component initial, final = 58.6026 1.46872

View File

@ -104,7 +104,7 @@ since binary files are not compatible across all platforms.
ch2lmp tool :h4,link(charmm)
The ch2lmp sub-directory contains tools for converting files
back-and-forth between the CHARMM MD code and LAMMPS.
They are intended to make it easy to use CHARMM as a builder and as a
post-processor for LAMMPS. Using charmm2lammps.pl, you can convert a

View File

@ -29,80 +29,80 @@ Bond Styles: fene, harmonic :l
Dihedral Styles: charmm, harmonic, opls :l
Fixes: nve, npt, nvt, nvt/sllod :l
Improper Styles: cvff, harmonic :l
Pair Styles: buck/coul/cut, buck/coul/long, buck, eam, gayberne,
charmm/coul/long, lj/cut, lj/cut/coul/long, sw, tersoff :l
K-Space Styles: pppm :l
:ule
[Speed-ups to expect:]
The speedups will depend on your simulation, the hardware, which
styles are used, the number of atoms, and the floating-point
precision mode. Performance improvements are shown compared to
LAMMPS {without using other acceleration packages} as these are
under active development (and subject to performance changes). The
measurements were performed using the input files available in
the src/USER-INTEL/TEST directory. These are scalable in size; the
results given are with 512K particles (524K for Liquid Crystal).
Most of the simulations are standard LAMMPS benchmarks (indicated
by the filename extension in parenthesis) with modifications to the
run length and to add a warmup run (for use with offload
benchmarks).
:c,image(JPG/user_intel.png)
Results are speedups obtained on Intel Xeon E5-2697v4 processors
(code-named Broadwell) and Intel Xeon Phi 7250 processors
(code-named Knights Landing) with "18 Jun 2016" LAMMPS built with
Intel Parallel Studio 2016 update 3. Results are with 1 MPI task
per physical core. See {src/USER-INTEL/TEST/README} for the raw
simulation rates and instructions to reproduce.
:line
[Quick Start for Experienced Users:]
LAMMPS should be built with the USER-INTEL package installed.
Simulations should be run with 1 MPI task per physical {core},
not {hardware thread}.
For Intel Xeon CPUs:
Edit src/MAKE/OPTIONS/Makefile.intel_cpu_intelmpi as necessary. :ulb,l
If using {kspace_style pppm} in the input script, add "neigh_modify binsize 3" and "kspace_modify diff ad" to the input script for better
performance. :l
"-pk intel 0 omp 2 -sf intel" added to LAMMPS command-line :l
:ule
For Intel Xeon Phi CPUs for simulations without {kspace_style
pppm} in the input script :
Edit src/MAKE/OPTIONS/Makefile.knl as necessary. :ulb,l
Runs should be performed using MCDRAM. :l
"-pk intel 0 omp 2 -sf intel" {or} "-pk intel 0 omp 4 -sf intel"
should be added to the LAMMPS command-line. Choice for best
"-pk intel 0 omp 2 -sf intel" {or} "-pk intel 0 omp 4 -sf intel"
should be added to the LAMMPS command-line. Choice for best
performance will depend on the simulation. :l
:ule
For Intel Xeon Phi CPUs for simulations with {kspace_style
pppm} in the input script:
Edit src/MAKE/OPTIONS/Makefile.knl as necessary. :ulb,l
Runs should be performed using MCDRAM. :l
Add "neigh_modify binsize 3" to the input script for better
Add "neigh_modify binsize 3" to the input script for better
performance. :l
Add "kspace_modify diff ad" to the input script for better
Add "kspace_modify diff ad" to the input script for better
performance. :l
export KMP_AFFINITY=none :l
"-pk intel 0 omp 3 lrt yes -sf intel" or "-pk intel 0 omp 1 lrt yes
-sf intel" added to LAMMPS command-line. Choice for best performance
-sf intel" added to LAMMPS command-line. Choice for best performance
will depend on the simulation. :l
:ule
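A hedged sketch of the workflow above on a 68-core Xeon Phi 7250 node
(the executable name and input script are placeholders, and MCDRAM is
assumed to be exposed as NUMA node 1 in flat mode):
export KMP_AFFINITY=none
mpirun -np 68 numactl --membind=1 lmp_knl -sf intel -pk intel 0 omp 3 lrt yes -in in.script :pre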
For Intel Xeon Phi coprocessors (Offload):
Edit src/MAKE/OPTIONS/Makefile.intel_coprocessor as necessary :ulb,l
"-pk intel N omp 1" added to command-line where N is the number of
"-pk intel N omp 1" added to command-line where N is the number of
coprocessors per node. :l
:ule
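A minimal sketch, assuming 2 coprocessors per node (the executable
name, MPI task count, and input script are placeholders):
make yes-user-intel
make intel_coprocessor
mpirun -np 36 lmp_intel_coprocessor -sf intel -pk intel 2 omp 1 -in in.script :pre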
@ -111,7 +111,7 @@ coprocessors per node. :l
[Required hardware/software:]
In order to use offload to coprocessors, an Intel Xeon Phi
coprocessor and an Intel compiler are required. For this, the
recommended version of the Intel compiler is 14.0.1.106 or
versions 15.0.2.044 and higher.
@ -133,7 +133,7 @@ slightly lower.
[Notes about Simultaneous Multithreading:]
Modern CPUs often support Simultaneous Multithreading (SMT). On
Intel processors, this is called Hyper-Threading (HT) technology.
SMT is hardware support for running multiple threads efficiently on
a single core. {Hardware threads} or {logical cores} are often used
@ -141,8 +141,8 @@ to refer to the number of threads that are supported in hardware.
For example, the Intel Xeon E5-2697v4 processor is described
as having 36 cores and 72 threads. This means that 36 MPI processes
or OpenMP threads can run simultaneously on separate cores, but that
up to 72 MPI processes or OpenMP threads can be running on the CPU
without costly operating system context switches.
Molecular dynamics simulations will often run faster when making use
of SMT. If a thread becomes stalled, for example because it is
@ -150,18 +150,18 @@ waiting on data that has not yet arrived from memory, another thread
can start running so that the CPU pipeline is still being used
efficiently. Although benefits can be seen by launching an MPI task
for every hardware thread, for multinode simulations, we recommend
that OpenMP threads are used for SMT instead, either with the
USER-INTEL package, "USER-OMP package"_accelerate_omp.html, or
"KOKKOS package"_accelerate_kokkos.html. In the example above, up
to 36X speedups can be observed by using all 36 physical cores with
LAMMPS. By using all 72 hardware threads, an additional 10-30%
performance gain can be achieved.
The BIOS on many platforms allows SMT to be disabled; however, we do
not recommend this on modern processors as there is little to no
benefit for any software package in most cases. The operating system
will report every hardware thread as a separate core, allowing one to
determine the number of hardware threads available. On Linux systems,
this information can normally be obtained with:
cat /proc/cpuinfo :pre
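For a quick count, something like the following can also be used
(standard Linux utilities, not a LAMMPS requirement):
grep -c ^processor /proc/cpuinfo
lscpu | grep -i "thread(s) per core" :pre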
@ -182,21 +182,21 @@ Makefile.intel_cpu_openpmi # Intel Compiler, OpenMPI, No Offload
Makefile.intel_coprocessor # Intel Compiler, Intel MPI, Offload :pre
Makefile.knl is identical to Makefile.intel_cpu_intelmpi except that
it explicitly specifies that vectorization should be for Intel
Xeon Phi x200 processors, making it easier to cross-compile. For
users with recent installations of Intel Parallel Studio, the
process can be as simple as:
make yes-user-intel
source /opt/intel/parallel_studio_xe_2016.3.067/psxevars.sh
# or psxevars.csh for C-shell
make intel_cpu_intelmpi :pre
Alternatively, the build can be accomplished with the src/Make.py
script, described in "Section 2.4"_Section_start.html#start_4 of the
manual. Type "Make.py -h" for help. For example:
Make.py -v -p intel omp -intel cpu -a file intel_cpu_intelmpi :pre
Note that if you build with support for a Phi coprocessor, the same
binary can be used on nodes with or without coprocessors installed.
@ -205,26 +205,26 @@ without offload support will produce a smaller binary.
The general requirements for Makefiles with the USER-INTEL package
are as follows. "-DLAMMPS_MEMALIGN=64" is required for CCFLAGS. When
using Intel compilers, "-restrict" is required and "-qopenmp" is
highly recommended for CCFLAGS and LINKFLAGS. LIB should include
"-ltbbmalloc". For builds supporting offload, "-DLMP_INTEL_OFFLOAD"
is required for CCFLAGS and "-qoffload" is required for LINKFLAGS.
Other recommended CCFLAG options for best performance are
"-O2 -fno-alias -ansi-alias -qoverride-limits -fp-model fast=2
-no-prec-div". The Make.py command will add all of these
automatically.
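As a rough sketch only (compiler wrappers and the instruction-set flag
vary by installation; this is not a verbatim copy of any shipped
Makefile), the relevant settings for a CPU-only Intel build might look
like:
CC =        mpiicpc
CCFLAGS =   -qopenmp -DLAMMPS_MEMALIGN=64 -restrict -xHost -O2 \
            -fno-alias -ansi-alias -qoverride-limits \
            -fp-model fast=2 -no-prec-div
LINK =      mpiicpc
LINKFLAGS = -qopenmp
LIB =       -ltbbmalloc :pre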
NOTE: The vectorization and math capabilities can differ depending on
the CPU. For Intel compilers, the "-x" flag specifies the type of
processor for which to optimize. "-xHost" specifies that the compiler
should build for the processor used for compiling. For Intel Xeon Phi
x200 series processors, this option is "-xMIC-AVX512". For fourth
generation Intel Xeon (v4/Broadwell) processors, "-xCORE-AVX2" should
be used. For older Intel Xeon processors, "-xAVX" will perform best
in general for the different simulations in LAMMPS. The default
in most of the example Makefiles is to use "-xHost"; however, this
should not be used when cross-compiling.
[Running LAMMPS with the USER-INTEL package:]
Running LAMMPS with the USER-INTEL package is similar to normal use
@ -232,7 +232,7 @@ with the exceptions that one should 1) specify that LAMMPS should use
the USER-INTEL package, 2) specify the number of OpenMP threads, and
3) optionally specify the specific LAMMPS styles that should use the
USER-INTEL package. 1) and 2) can be performed from the command-line
or by editing the input script. 3) requires editing the input script.
Advanced performance tuning options are also described below to get
the best performance.
@ -241,14 +241,14 @@ coprocessor), best performance is normally obtained by using 1 MPI
task per physical core and additional OpenMP threads with SMT. For
Intel Xeon processors, 2 OpenMP threads should be used for SMT.
For Intel Xeon Phi CPUs, 2 or 4 OpenMP threads should be used
(best choice depends on the simulation). In cases where the user
specifies that LRT mode is used (described below), 1 or 3 OpenMP
threads should be used. For multi-node runs, using 1 MPI task per
physical core will often perform best; however, depending on the
machine and scale, users might get better performance by decreasing
the number of MPI tasks and using more OpenMP threads. For
performance, the product of the number of MPI tasks and OpenMP
threads should not exceed the number of available hardware threads in
almost all cases.
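For example, on the 36-core/72-thread Xeon node described above, 36
MPI tasks with 2 OpenMP threads each use exactly the 72 available
hardware threads, while 36 tasks with 4 threads each (144) would
oversubscribe the node.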
NOTE: Setting core affinity is often used to pin MPI tasks and OpenMP
@ -257,21 +257,21 @@ uniform. Unless disabled at build time, affinity for MPI tasks and
OpenMP threads on the host (CPU) will be set by default on the host
{when using offload to a coprocessor}. In this case, it is unnecessary
to use other methods to control affinity (e.g. taskset, numactl,
I_MPI_PIN_DOMAIN, etc.). This can be disabled with the {no_affinity}
option to the "package intel"_package.html command or by disabling the
option at build time (by adding -DINTEL_OFFLOAD_NOAFFINITY to the
CCFLAGS line of your Makefile). Disabling this option is not
recommended, especially when running on a machine with Intel
Hyper-Threading technology disabled.
[Run with the USER-INTEL package from the command line:]
To enable USER-INTEL optimizations for all available styles used in
the input script, the "-sf intel"
"command-line switch"_Section_start.html#start_7 can be used without
any requirement for editing the input script. This switch will
automatically append "intel" to styles that support it. It also
invokes a default command: "package intel 1"_package.html. This
automatically append "intel" to styles that support it. It also
invokes a default command: "package intel 1"_package.html. This
package command is used to set options for the USER-INTEL package.
The default package command will specify that USER-INTEL calculations
are performed in mixed precision, that the number of OpenMP threads
@ -281,16 +281,16 @@ support, that 1 coprocessor per node will be used with automatic
balancing of work between the CPU and the coprocessor.
You can specify different options for the USER-INTEL package by using
the "-pk intel Nphi" "command-line switch"_Section_start.html#start_7
the "-pk intel Nphi" "command-line switch"_Section_start.html#start_7
with keyword/value pairs as specified in the documentation. Here,
Nphi = # of Xeon Phi coprocessors/node (ignored without offload
support). Common options to the USER-INTEL package include {omp} to
override any OMP_NUM_THREADS setting and specify the number of OpenMP
threads, {mode} to set the floating-point precision mode, and
{lrt} to enable Long-Range Thread mode as described below. See the
"package intel"_package.html command for details, including the
default values used for all its options if not specified, and how to
set the number of OpenMP threads via the OMP_NUM_THREADS environment
variable if desired.
Examples (see documentation for your MPI/Machine for differences in
@ -303,7 +303,7 @@ mpirun -np 72 -ppn 36 lmp_machine -sf intel -in in.script -pk intel 0 omp 2 mode
As an alternative to adding command-line arguments, the input script
can be edited to enable the USER-INTEL package. This requires adding
the "package intel"_package.html command to the top of the input
the "package intel"_package.html command to the top of the input
script. For the second example above, this would be:
package intel 0 omp 2 mode double :pre
@ -314,46 +314,46 @@ add an "intel" suffix to the individual style, e.g.:
pair_style lj/cut/intel 2.5 :pre
Alternatively, the "suffix intel"_suffix.html command can be added to
the input script to enable USER-INTEL styles for the commands that
follow in the input script.
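A short input-script sketch of this approach (the pair style and its
cutoff are placeholders for whatever the simulation actually uses):
package intel 0 omp 2
suffix intel
pair_style lj/cut 2.5 :pre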
[Tuning for Performance:]
NOTE: The USER-INTEL package will perform better with modifications
to the input script when "PPPM"_kspace_style.html is used:
"kspace_modify diff ad"_kspace_modify.html and "neigh_modify binsize
3"_neigh_modify.html should be added to the input script.
Long-Range Thread (LRT) mode is an option to the "package
intel"_package.html command that can improve performance when using
"PPPM"_kspace_style.html for long-range electrostatics on processors
with SMT. It generates an extra pthread for each MPI task. The thread
is dedicated to performing some of the PPPM calculations and MPI
communications. On Intel Xeon Phi x200 series CPUs, this will likely
always improve performance, even on a single node. On Intel Xeon
processors, using this mode might result in better performance when
using multiple nodes, depending on the machine. To use this mode,
specify that the number of OpenMP threads is one less than would
normally be used for the run and add the "lrt yes" option to the "-pk"
command-line suffix or "package intel" command. For example, if a run
would normally perform best with "-pk intel 0 omp 4", instead use
"-pk intel 0 omp 3 lrt yes". When using LRT, you should set the
environment variable "KMP_AFFINITY=none". LRT mode is not supported
"-pk intel 0 omp 3 lrt yes". When using LRT, you should set the
environment variable "KMP_AFFINITY=none". LRT mode is not supported
when using offload.
Not all styles are supported in the USER-INTEL package. You can mix
the USER-INTEL package with styles from the "OPT"_accelerate_opt.html
package or the "USER-OMP package"_accelerate_omp.html. Of course,
this requires that these packages were installed at build time. This
can be performed automatically by using the "-sf hybrid intel opt" or
"-sf hybrid intel omp" command-line options. Alternatively, the "opt"
and "omp" suffixes can be appended manually in the input script. For
the latter, the "package omp"_package.html command must be in the
input script or the "-pk omp Nt" "command-line
switch"_Section_start.html#start_7 must be used where Nt is the
input script or the "-pk omp Nt" "command-line
switch"_Section_start.html#start_7 must be used where Nt is the
number of OpenMP threads. The number of OpenMP threads should not be
set differently for the different packages. Note that the "suffix
hybrid intel omp"_suffix.html command can also be used within the
input script to automatically append the "omp" suffix to styles when
USER-INTEL styles are not available.
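As a hedged sketch (executable name, task count, and input script are
placeholders), a hybrid USER-INTEL/USER-OMP run with 2 OpenMP threads
for both packages might look like:
mpirun -np 36 lmp_machine -sf hybrid intel omp -pk intel 0 omp 2 -pk omp 2 -in in.script :pre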
@ -374,33 +374,33 @@ that MPI runs are performed in MCDRAM.
[Tuning for Offload Performance:]
The default settings for offload should give good performance.
When using LAMMPS with offload to Intel coprocessors, best performance
will typically be achieved with concurrent calculations performed on
both the CPU and the coprocessor. This is achieved by offloading only
a fraction of the neighbor and pair computations to the coprocessor or
using "hybrid"_pair_hybrid.html pair styles where only one style uses
the "intel" suffix. For simulations with long-range electrostatics or
bond, angle, dihedral, improper calculations, computation and data
transfer to the coprocessor will run concurrently with computations
the "intel" suffix. For simulations with long-range electrostatics or
bond, angle, dihedral, improper calculations, computation and data
transfer to the coprocessor will run concurrently with computations
and MPI communications for these calculations on the host CPU. This
is illustrated in the figure below for the rhodopsin protein benchmark
running on E5-2697v2 processors with an Intel Xeon Phi 7120p
coprocessor. In this plot, the vertical axis is time and routines
running at the same time are running concurrently on both the host and
the coprocessor.
:c,image(JPG/offload_knc.png)
The fraction of the offloaded work is controlled by the {balance}
keyword in the "package intel"_package.html command. A balance of 0
runs all calculations on the CPU. A balance of 1 runs all
supported calculations on the coprocessor. A balance of 0.5 runs half
of the calculations on the coprocessor. Setting the balance to -1
(the default) will enable dynamic load balancing that continuously
adjusts the fraction of offloaded work throughout the simulation.
Because data transfer cannot be timed, this option typically produces
results within 5 to 10 percent of the optimal fixed balance.
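For example (a sketch with placeholder names), a fixed balance that
offloads roughly half of the supported work to 1 coprocessor per node
could be requested with:
mpirun -np 24 lmp_machine -sf intel -pk intel 1 balance 0.5 -in in.script :pre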
If running short benchmark runs with dynamic load balancing, adding a
@ -418,15 +418,15 @@ with 60 cores available for offload and 4 hardware threads per core
each MPI task to use a subset of 10 threads on the coprocessor. Fine
tuning of the number of threads to use per MPI task or the number of
threads to use per core can be accomplished with keyword settings of
the "package intel"_package.html command.
the "package intel"_package.html command.
The USER-INTEL package has two modes for deciding which atoms will be
handled by the coprocessor. This choice is controlled with the {ghost}
keyword of the "package intel"_package.html command. When set to 0,
ghost atoms (atoms at the borders between MPI tasks) are not offloaded
to the card. This allows for overlap of MPI communication of forces
with computation on the coprocessor when the "newton"_newton.html
setting is "on". The default is dependent on the style being used,
however, better performance may be achieved by setting this option
explicitly.
@ -442,10 +442,10 @@ mode is being used and indicating the number of coprocessor threads
per MPI task. Additionally, an offload timing summary is printed at
the end of each run. When offloading, the frequency for "atom
sorting"_atom_modify.html is changed to 1 so that the per-atom data is
effectively sorted at every rebuild of the neighbor lists. All the
available coprocessor threads on each Phi will be divided among MPI
tasks, unless the {tptask} option of the "-pk intel" "command-line
switch"_Section_start.html#start_7 is used to limit the coprocessor
threads per MPI task.
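As a sketch (the task count and executable name are illustrative),
limiting each of 24 MPI tasks sharing one coprocessor to 10
coprocessor threads could be done with:
mpirun -np 24 lmp_machine -sf intel -pk intel 1 omp 2 tptask 10 -in in.script :pre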
[Restrictions:]

View File

@ -65,7 +65,7 @@ Make.py -v -p kokkos -kokkos omp -o mpi -a file mpi # or one-line build via Ma
mpirun -np 16 lmp_mpi -k on -sf kk -in in.lj # 1 node, 16 MPI tasks/node, no threads
mpirun -np 2 -ppn 1 lmp_mpi -k on t 16 -sf kk -in in.lj # 2 nodes, 1 MPI task/node, 16 threads/task
mpirun -np 2 lmp_mpi -k on t 8 -sf kk -in in.lj # 1 node, 2 MPI tasks/node, 8 threads/task
mpirun -np 32 -ppn 4 lmp_mpi -k on t 4 -sf kk -in in.lj # 8 nodes, 4 MPI tasks/node, 4 threads/task :pre
specify variables and settings in your Makefile.machine that enable OpenMP, GPU, or Phi support
@ -110,14 +110,14 @@ mpirun -np 96 -ppn 12 lmp_g++ -k on t 20 -sf kk -in in.lj # ditto on 8 Phis :p
[Required hardware/software:]
Kokkos support within LAMMPS must be built with a C++11 compatible
compiler. If using gcc, version 4.7.2 or later is required.
To build with Kokkos support for CPUs, your compiler must support the
OpenMP interface. You should have one or more multi-core CPUs so that
multiple threads can be launched by each MPI task running on a CPU.
To build with Kokkos support for NVIDIA GPUs, NVIDIA Cuda software
version 7.5 or later must be installed on your system. See the
discussion for the "GPU"_accelerate_gpu.html package for details of
how to check and do this.
@ -178,7 +178,7 @@ make kokkos_cuda_mpich :pre
These examples set the KOKKOS-specific OMP, MIC, CUDA variables on the
make command line, which requires a GNU-compatible make command. Try
"gmake" if your system's standard make complains.
"gmake" if your system's standard make complains.
NOTE: If you build using make line variables and re-build LAMMPS twice
with different KOKKOS options and the *same* target, e.g. g++ in the
@ -394,7 +394,7 @@ additional parallelism (beyond MPI) will be invoked on the host
CPU(s).
You can compare the performance running in different modes:
run with 1 MPI task/node and N threads/task
run with N MPI tasks/node and 1 thread/task
run with settings in between these extremes :ul
@ -427,7 +427,7 @@ e.g. src/MAKE/Makefile.cuda, is correct for your GPU hardware/software
details).
The -np setting of the mpirun command should set the number of MPI
tasks/node to be equal to the # of physical GPUs on the node.
Use the "-k" "command-line switch"_Section_commands.html#start_7 to
specify the number of GPUs per node, and the number of threads per MPI

View File

@ -96,7 +96,7 @@ variable.
Depending on which styles are accelerated, you should look for a
reduction in the "Pair time", "Bond time", "KSpace time", and "Loop
time" values printed at the end of a run.
time" values printed at the end of a run.
You may see a small performance advantage (5 to 20%) when running a
USER-OMP style (in serial or parallel) with a single thread per MPI

View File

@ -74,7 +74,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

View File

@ -61,7 +61,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

View File

@ -66,7 +66,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

View File

@ -74,7 +74,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

View File

@ -66,7 +66,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

View File

@ -21,11 +21,11 @@ angle_coeff 6 2.1 180.0 :pre
[Description:]
The {dipole} angle style is used to control the orientation of a dipolar
atom within a molecule "(Orsi)"_#Orsi. Specifically, the {dipole} angle
style restrains the orientation of a point dipole mu_j (embedded in atom
'j') with respect to a reference (bond) vector r_ij = r_i - r_j, where 'i'
is another atom of the same molecule (typically, 'i' and 'j' are also
covalently bonded).
It is convenient to define an angle gamma between the 'free' vector mu_j
and the reference (bond) vector r_ij:
@ -37,21 +37,21 @@ The {dipole} angle style uses the potential:
:c,image(Eqs/angle_dipole_potential.jpg)
where K is a rigidity constant and gamma0 is an equilibrium (reference)
angle.
The torque on the dipole can be obtained by differentiating the
potential using the 'chain rule' as in appendix C.3 of
"(Allen)"_#Allen:
:c,image(Eqs/angle_dipole_torque.jpg)
Example: if gamma0 is set to 0 degrees, the torque generated by
the potential will tend to align the dipole along the reference
direction defined by the (bond) vector r_ij (in other words, mu_j is
restrained to point towards atom 'i').
The dipolar torque T_j must be counterbalanced in order to conserve
the local angular momentum. This is achieved via an additional force
couple generating a torque equivalent to the opposite of T_j:
:c,image(Eqs/angle_dipole_couple.jpg)
@ -118,7 +118,7 @@ This angle style should not be used with SHAKE.
:line
:link(Orsi)
[(Orsi)] Orsi & Essex, The ELBA force field for coarse-grain modeling of
[(Orsi)] Orsi & Essex, The ELBA force field for coarse-grain modeling of
lipid membranes, PloS ONE 6(12): e28637, 2011.
:link(Allen)

View File

@ -62,7 +62,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
USER-MISC package. See the "Making LAMMPS"_Section_start.html#start_3
section for more info on packages.
[Related commands:]

View File

@ -61,7 +61,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
USER-MISC package. See the "Making LAMMPS"_Section_start.html#start_3
section for more info on packages.
[Related commands:]

View File

@ -65,11 +65,11 @@ more instructions on how to use the accelerated styles effectively.
:line
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making LAMMPS"_Section_start.html#start_3
section for more info on packages.
[Related commands:]

View File

@ -76,7 +76,7 @@ for specific angle types.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
Unlike other angle styles, the hybrid angle style does not store angle

View File

@ -68,7 +68,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
USER-MISC package. See the "Making LAMMPS"_Section_start.html#start_3
section for more info on packages.
[Related commands:]

View File

@ -43,7 +43,7 @@ internally; hence the units of K are in energy/radian^2.
The additionally required {lj/sdk} parameters will be extracted automatically
from the pair_style.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
USER-CG-CMM package. See the "Making

View File

@ -147,7 +147,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This angle style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

View File

@ -1,4 +1,4 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS
Commands"_lc :c
:link(lws,http://lammps.sandia.gov)
@ -156,12 +156,12 @@ used with a group-ID that is not "all".
[Default:]
By default, {id} is yes. By default, atomic systems (no bond topology
info) do not use a map. For molecular systems (with bond topology
info), a map is used. The default map style is array if no atom ID is
larger than 1 million, otherwise the default is hash. By default, a
"first" group is not defined. By default, sorting is enabled with a
frequency of 1000 and a binsize of 0.0, which means the neighbor
cutoff will be used to set the bin size.
:line

View File

@ -14,7 +14,7 @@ atom_style style args :pre
style = {angle} or {atomic} or {body} or {bond} or {charge} or {dipole} or \
{dpd} or {electron} or {ellipsoid} or {full} or {line} or {meso} or \
{molecular} or {peri} or {smd} or {sphere} or {tri} or \
{template} or {hybrid} :ulb,l
args = none for any style except the following
{body} args = bstyle bstyle-args
@ -166,7 +166,7 @@ stores a per-particle mass and size and orientation (i.e. the corner
points of the triangle).
The {template} style allows molecular topology (bonds,angles,etc) to be
defined via a molecule template using the "molecule"_molecule.html
command. The template stores one or more molecules with a single copy
of the topology info (bonds,angles,etc) of each. Individual atoms
only store a template index and template atom to identify which
@ -193,7 +193,7 @@ For the {body} style, the particles are arbitrary bodies with internal
attributes defined by the "style" of the bodies, which is specified by
the {bstyle} argument. Body particles can represent complex entities,
such as surface meshes of discrete points, collections of
sub-particles, deformable objects, etc.
The "body"_body.html doc page descibes the body styles LAMMPS
currently supports, and provides more details as to the kind of body
@ -269,7 +269,7 @@ The {line} and {tri} styles are part of the ASPHERE package.
The {body} style is part of the BODY package.
The {dipole} style is part of the DIPOLE package.
The {peri} style is part of the PERI package for Peridynamics.

View File

@ -319,14 +319,16 @@ accurately would be impractical and slow down the computation.
Instead the {weight} keyword implements several ways to influence the
per-particle weights empirically by properties readily available or
using the user's knowledge of the system. Note that the absolute
value of the weights is not important; only their relative ratios
affect which particle is assigned to which processor. A particle with
a weight of 2.5 is assumed to require 5x more computation than a
particle with a weight of 0.5. For all the options below the weight
assigned to a particle must be a positive value; an error will be
generated if a weight is <= 0.0.
Below is a list of possible weight options with a short description of
their usage and some example scenarios where they might be applicable.
It is possible to apply multiple weight flags and the weightings they
induce will be combined through multiplication. Most of the time,
however, it is sufficient to use just one method.
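As an illustrative sketch (the threshold, shift settings, and factor
values are arbitrary choices, not recommendations), two weight styles
can be combined in a single command:
balance 1.1 shift xy 10 1.05 weight time 0.8 weight neigh 0.6 :pre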
@ -346,13 +348,24 @@ the computational cost for each group remains constant over time.
This is a purely empirical weighting, so a series test runs to tune
the assigned weight factors for optimal performance is recommended.
The {neigh} weight style assigns the same weight to each particle
owned by a processor based on the total count of neighbors in the
neighbor list owned by that processor. The motivation is that more
neighbors means a higher computational cost. The style does not use
neighbors per atom to assign a unique weight to each atom, because
that value can vary depending on how the neighbor list is built.
The {factor} setting is applied as an overall scale factor to the
{neigh} weights which allows adjustment of their impact on the
balancing operation. The specified {factor} value must be positive.
A value > 1.0 will increase the weights so that the ratio of max
weight to min weight increases by {factor}. A value < 1.0 will
decrease the weights so that the ratio of max weight to min weight
decreases by {factor}. In both cases the intermediate weight values
increase/decrease proportionally as well. A value = 1.0 has no effect
on the {neigh} weights. As a rule of thumb, we have found a {factor}
of about 0.8 often results in the best performance, since the number
of neighbors is likely to overestimate the ideal weight.
This weight style is useful for systems where there are different
cutoffs used for different pairs of interactions, or the density
@ -368,35 +381,48 @@ weights are computed. Inserting a "run 0 post no"_run.html command
before issuing the {balance} command, may be a workaround for this
case, as it will induce the neighbor list to be built.
The {time} weight style uses "timer data"_timer.html to estimate a
weight for each particle. It uses the same information as is used for
the "MPI task timing breakdown"_Section_start.html#start_8, namely,
the timings for sections {Pair}, {Bond}, {Kspace}, and {Neigh}. The
time spent in these sections of the timestep are measured for each MPI
rank, summed up, then converted into a cost for each MPI rank relative
to the average cost over all MPI ranks for the same sections. That
cost then evenly distributed over all the particles owned by that
rank. Finally, the {factor} setting is then appied as an overall
scale factor to all the {time} weights as a way to fine tune the
impact of this weight style. Good {factor} values to use are
typically between 0.5 and 1.2.
The {time} weight style uses "timer data"_timer.html to estimate
weights. It assigns the same weight to each particle owned by a
processor based on the total computational time spent by that
processor. See details below on what time window is used. It uses
the same timing information as is used for the "MPI task timing
breakdown"_Section_start.html#start_8, namely, for sections {Pair},
{Bond}, {Kspace}, and {Neigh}. The time spent in those portions of
the timestep are measured for each MPI rank, summed, then divided by
the number of particles owned by that processor. I.e. the weight is
an effective CPU time/particle averaged over the particles on that
processor.
The {factor} setting is applied as an overall scale factor to the
{time} weights which allows adjustment of their impact on the
balancing operation. The specified {factor} value must be positive.
A value > 1.0 will increase the weights so that the ratio of max
weight to min weight increases by {factor}. A value < 1.0 will
decrease the weights so that the ratio of max weight to min weight
decreases by {factor}. In both cases the intermediate weight values
increase/decrease proportionally as well. A value = 1.0 has no effect
on the {time} weights. As a rule of thumb, effective values to use
are typically between 0.5 and 1.2. Note that the timer quantities
mentioned above can be affected by communication which occurs in the
middle of the operations, e.g. pair styles with intermediate exchange
of data within the force computation, and likewise for KSpace solves.
When using the {time} weight style with the {balance} command, the
timing data is taken from the preceding run command, i.e. the timings
are for the entire previous run. For the {fix balance} command the
timing data is for only the timesteps since the last balancing
operation was performed. If timing information for the required
sections is not available, e.g. at the beginning of a run, or when the
"timer"_timer.html command is set to either {loop} or {off}, a warning
is issued. In this case no weights are computed.
NOTE: The {time} weight style is the most generic option, and should
be tried first, unless the {group} style is easily applicable.
However, since the computed cost function is averaged over all
particles on a processor, the weights may not be highly accurate.
This style can also be effective as a secondary weight in combination
with either {group} or {neigh} to offset some of the inaccuracies in
either of those heuristics.
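As a sketch (the fix ID, group, re-balance interval, and factor are
illustrative), dynamic balancing with {time} weights might be
requested with:
fix 2 all balance 1000 1.1 shift xy 10 1.05 weight time 0.8 :pre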
The {var} weight style assigns per-particle weights by evaluating an
"atom-style variable"_variable.html specified by {name}. This is
@ -464,7 +490,7 @@ per processor. Note that the 4 sub-domains share vertices, so there
will be duplicate nodes in the list.
The "SQUARES" section lists the node IDs of the 4 vertices in a
rectangle for each processor (1 to 4).
For a 3d problem, the syntax is similar with 8 vertices listed for
each processor, instead of 4, and "SQUARES" replaced by "CUBES".

View File

@ -125,7 +125,7 @@ in the {Bodies} section of the data file:
atom-ID 1 M
N
ixx iyy izz ixy ixz iyz
x1 y1 z1
...
xN yN zN :pre
@ -198,11 +198,11 @@ in the {Bodies} section of the data file:
atom-ID 1 M
N
ixx iyy izz ixy ixz iyz
x1 y1 z1
...
xN yN zN
i j j k k ...
radius :pre
N is the number of vertices in the body particle. M = 6 + 3*N + 2*N +
@ -230,11 +230,11 @@ particles whose edge length is sqrt(2):
3 1 27
4
1 1 4 0 0 0
-0.7071 -0.7071 0
-0.7071 0.7071 0
0.7071 0.7071 0
0.7071 -0.7071 0
0 1 1 2 2 3 3 0
1.0 :pre

View File

@ -70,10 +70,10 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This bond style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
You typically should specify "special_bonds fene"_special_bonds.html"
You typically should specify "special_bonds fene"_special_bonds.html
or "special_bonds lj/coul 0 1 1"_special_bonds.html to use this bond
style. LAMMPS will issue a warning if that's not the case.

View File

@ -73,10 +73,10 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This bond style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
You typically should specify "special_bonds fene"_special_bonds.html"
You typically should specify "special_bonds fene"_special_bonds.html
or "special_bonds lj/coul 0 1 1"_special_bonds.html to use this bond
style. LAMMPS will issue a warning if that's not the case.

View File

@ -65,7 +65,7 @@ more instructions on how to use the accelerated styles effectively.
[Restrictions:]
This bond style can only be used if LAMMPS was built with the
MOLECULE package. See the "Making
LAMMPS"_Section_start.html#start_3 section for more info on packages.
[Related commands:]

Some files were not shown because too many files have changed in this diff.