lammps/examples/reaxff/water/log.1Feb25.water.acks2.field.g++.4
LAMMPS (19 Nov 2024 - Development - patch_19Nov2024-1172-g920337963b-modified)
using 1 OpenMP thread(s) per MPI task
# ACKS2 Water, CITE: Achtyl et al., Nat. Comm., 6 6539 (2015)
boundary p p s
units real
atom_style charge
read_data data.water
Reading data file ...
  orthogonal box = (0 0 0) to (31.043046 31.043046 31.043046)
  1 by 2 by 2 MPI processor grid
  reading atoms ...
WARNING: Non-zero imageflag(s) in z direction for non-periodic boundary reset to zero (src/read_data.cpp:1548)
  3000 atoms
  read_data CPU = 0.009 seconds
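
For orientation, a data file consistent with the header just echoed would begin roughly as below. This is a hypothetical sketch, not the real data.water: only the atom count, the two atom types, and the box bounds come from the log; the masses are standard values for O and H, and the single Atoms line merely illustrates the atom_style charge column layout.

# hypothetical sketch of a matching data file -- NOT the real data.water
3000 atoms
2 atom types

0.0 31.043046 xlo xhi
0.0 31.043046 ylo yhi
0.0 31.043046 zlo zhi

Masses

1 15.999    # O
2 1.008     # H

Atoms # charge

1 1 0.0 15.5 15.5 15.5    # atom-ID type q x y z (placeholder values)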
variable x index 1
variable y index 1
variable z index 1
replicate $x $y $z
replicate 1 $y $z
replicate 1 1 $z
replicate 1 1 1
Replication is creating a 1x1x1 = 1 times larger system...
  orthogonal box = (0 0 0.0012889577) to (31.043046 31.043046 31.045309)
  2 by 1 by 2 MPI processor grid
  3000 atoms
  replicate CPU = 0.001 seconds
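
The three index-style variables exist so the replication factors can be set from the command line: an index variable keeps its in-script value only when no -var flag supplies one first. A plausible invocation (the input-script name is an assumption, since the log does not show it):

# run a 2x2x1 replica of the box without editing the script:
#   mpirun -np 4 lmp -in in.water.acks2.field -var x 2 -var y 2 -var z 1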
pair_style reaxff NULL safezone 3.0 mincap 150
pair_coeff * * acks2_ff.water O H
Reading potential file acks2_ff.water with DATE: 2021-09-21
WARNING: Changed valency_val to valency_boc for X (src/REAXFF/reaxff_ffield.cpp:294)
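
An annotated copy of the two pair commands (same commands as above; the comments are interpretation):

pair_style reaxff NULL safezone 3.0 mincap 150
    # NULL: no ReaxFF control file, use default settings
    # safezone 3.0: over-allocation safety factor for ReaxFF's per-atom
    #   bond/interaction buffers
    # mincap 150: minimum capacity of those buffers
pair_coeff * * acks2_ff.water O H
    # map atom type 1 -> O and type 2 -> H entries of the
    #   acks2_ff.water force-field file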
neighbor 0.5 bin
neigh_modify every 1 delay 0 check yes
velocity all create 300.0 4928459 rot yes dist gaussian
fix 1 all acks2/reaxff 1 0.0 10.0 1.0e-6 reaxff maxiter 1000
fix 2 all nvt temp 300 300 50.0
fix 3 all efield 0.0 0.0 1.0
fix 4 all wall/reflect zlo EDGE zhi EDGE
timestep 0.5
thermo 10
thermo_style custom step temp press density vol
run 20
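
The velocity command and the four fixes define the physics of this short run. An annotated copy (commands unchanged; the acks2/reaxff argument meanings are assumed to follow the fix qeq/reaxff convention of Nevery, low/high cutoff, tolerance, parameter source):

velocity all create 300.0 4928459 rot yes dist gaussian
    # Gaussian velocities at 300 K, RNG seed 4928459, net rotation removed
fix 1 all acks2/reaxff 1 0.0 10.0 1.0e-6 reaxff maxiter 1000
    # ACKS2 charge equilibration every step, 0.0-10.0 Angstrom taper range,
    # 1.0e-6 solver tolerance, parameters from the ReaxFF file, <=1000 iterations
fix 2 all nvt temp 300 300 50.0
    # Nose-Hoover thermostat at 300 K with a 50 fs damping time (real units)
fix 3 all efield 0.0 0.0 1.0
    # uniform external field of 1.0 V/Angstrom along +z (real units)
fix 4 all wall/reflect zlo EDGE zhi EDGE
    # reflecting walls at both z box faces, needed because boundary p p s
    # makes z non-periodic (shrink-wrapped)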
CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE
Your simulation uses code contributions which should be cited:
- pair reaxff command: doi:10.1016/j.parco.2011.08.005
@Article{Aktulga12,
author = {H. M. Aktulga and J. C. Fogarty and S. A. Pandit and A. Y. Grama},
title = {Parallel Reactive Molecular Dynamics: {N}umerical Methods and Algorithmic Techniques},
journal = {Parallel Computing},
year = 2012,
volume = 38,
number = {4--5},
pages = {245--259}
}
- fix acks2/reaxff command: doi:10.1137/18M1224684
@Article{O'Hearn2020,
author = {K. A. {O'Hearn} and A. Alperen and H. M. Aktulga},
title = {Fast Solvers for Charge Distribution Models on Shared Memory Platforms},
journal = {SIAM J.\ Sci.\ Comput.},
year = 2020,
volume = 42,
number = 1,
pages = {1--22}
}
CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE-CITE
Neighbor list info ...
  update: every = 1 steps, delay = 0 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 10.5
  ghost atom cutoff = 10.5
  binsize = 5.25, bins = 6 6 6
  2 neighbor lists, perpetual/occasional/extra = 2 0 0
  (1) pair reaxff, perpetual
      attributes: half, newton off, ghost
      pair build: half/bin/ghost/newtoff
      stencil: full/ghost/bin/3d
      bin: standard
  (2) fix acks2/reaxff, perpetual, copy from (1)
      attributes: half, newton off
      pair build: copy
      stencil: none
      bin: none
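
The cutoffs reported here follow from the settings above, assuming the 10.0 Angstrom figure is the ReaxFF nonbonded cutoff taken from the force-field file:

master list cutoff = pair cutoff + skin = 10.0 + 0.5 = 10.5
binsize = cutoff / 2 = 5.25; bins per side = ceil(31.04 / 5.25) = 6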
Per MPI rank memory allocation (min/avg/max) = 198.5 | 199.3 | 199.9 Mbytes
   Step       Temp         Press         Density       Volume
      0        300         -20760.178    0.99996859    29916.212
     10        396.27472   -18425.294    1.0000143     29914.845
     20        518.59015   -10012.352    1.0000209     29914.647
Loop time of 3.14386 on 4 procs for 20 steps with 3000 atoms
Performance: 0.275 ns/day, 87.329 hours/ns, 6.362 timesteps/s, 19.085 katom-step/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
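
The performance figures are mutually consistent: 20 steps in 3.14386 s is 20 / 3.14386 = 6.362 timesteps/s; times 3000 atoms that is 19.085 katom-step/s; and at 0.5 fs per step the run covers 10 fs, so (10 fs / 3.14386 s) * 86400 s/day = 2.75e5 fs/day = 0.275 ns/day, i.e. 87.33 hours per ns.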
MPI task timing breakdown:
Section |  min time   |  avg time   |  max time   |%varavg| %total
------------------------------------------------------------------
Pair    | 0.66121     | 0.66884     | 0.68146     |   1.0 | 21.27
Neigh   | 0.095801    | 0.09707     | 0.098006    |   0.3 |  3.09
Comm    | 0.0018538   | 0.014557    | 0.022268    |   6.8 |  0.46
Output  | 3.5758e-05  | 4.0192e-05  | 5.2801e-05  |   0.0 |  0.00
Modify  | 2.3621      | 2.3631      | 2.3644      |   0.1 | 75.17
Other   |             | 0.0002583   |             |       |  0.01
Nlocal: 750 ave 758 max 737 min
Histogram: 1 0 0 0 0 0 1 0 1 1
Nghost: 4219.5 ave 4233 max 4198 min
Histogram: 1 0 0 0 1 0 0 0 0 2
Neighs: 230733 ave 233430 max 225532 min
Histogram: 1 0 0 0 0 0 0 0 2 1
Total # of neighbors = 922931
Ave neighs/atom = 307.64367
Neighbor list builds = 7
Dangerous builds = 0
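
These averages check out: Nlocal averages 3000 / 4 = 750 atoms per rank, and Ave neighs/atom = 922931 / 3000 = 307.64. Seven rebuilds in 20 steps with zero dangerous builds suggests the 0.5 Angstrom skin is small but safe for a run of this length.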
Total wall time: 0:00:03