Merge pull request #482 from akohlmey/add-pair-python

Add python pair style for implementing simple pairwise additive potentials in python
This commit is contained in:
sjplimp
2017-05-18 11:15:58 -06:00
committed by GitHub
33 changed files with 12340 additions and 28 deletions


@ -157,6 +157,8 @@ doc page for its python-style variables for more info, including
examples of Python code you can write for both pure Python operations
and callbacks to LAMMPS. See "fix python"_fix_python.html to learn about
possibilities to execute Python code during each time step.
Through the "python pair style"_pair_python.html it is also possible
to define potential functions as python code.
To run pure Python code from LAMMPS, you only need to build LAMMPS
with the PYTHON package installed:


@ -237,6 +237,7 @@ fix_pour.html
fix_press_berendsen.html
fix_print.html
fix_property_atom.html
fix_python.html
fix_qbmsst.html
fix_qeq.html
fix_qeq_comb.html
@ -468,6 +469,7 @@ pair_oxdna.html
pair_oxdna2.html
pair_peri.html
pair_polymorphic.html
pair_python.html
pair_quip.html
pair_reax.html
pair_reaxc.html

212
doc/src/pair_python.txt Normal file

@ -0,0 +1,212 @@
"LAMMPS WWW Site"_lws - "LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c
:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)
:line
pair_style python command :h3
[Syntax:]
pair_style python cutoff :pre
cutoff = global cutoff for interactions in python potential classes
[Examples:]
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj :pre
pair_style hybrid/overlay coul/long 12.0 python 12.0
pair_coeff * * coul/long
pair_coeff * * python py_pot.LJCutSPCE OW NULL :pre
[Description:]
The {python} pair style provides a way to define pairwise additive
potential functions as python script code that is loaded into LAMMPS
from a python file, which must contain specific python class definitions.
This allows different potential functions to be evaluated rapidly without
having to modify and recompile LAMMPS. Since python is an interpreted
language, however, this pair style will be significantly slower (often
20x to 100x) than corresponding compiled code. This penalty can be
greatly reduced by generating tabulations from the python code with the
"pair_write"_pair_write.html command, which this style supports.
Only a single pair_coeff command is used with the {python} pair style,
which specifies a python class inside a python module or file that
LAMMPS will look up in the current directory, in the folder pointed to
by the LAMMPS_POTENTIALS environment variable, or somewhere in your
python path. A single python module can hold multiple python pair
class definitions. The class definitions themselves have to follow
specific rules, which are explained below.
Atom types in the python class are specified through symbolic constants,
typically strings. These are mapped to LAMMPS atom types by specifying
N additional arguments after the class name in the pair_coeff command,
where N must be the number of currently defined atom types.

As an example, imagine that a file {py_pot.py} contains a python
potential class named {LJCutMelt} with parameters and potential
functions for two Lennard-Jones atom types labeled 'LJ1' and 'LJ2'.
If your LAMMPS input defines 3 atom types, of which the first two are
supposed to use the 'LJ1' parameters and the third the 'LJ2'
parameters, then you would use the following pair_coeff command:
pair_coeff * * py_pot.LJCutMelt LJ1 LJ1 LJ2 :pre
The first two arguments [must] be * * so as to span all LAMMPS atom types.
The two LJ1 arguments map LAMMPS atom types 1 and 2 to the LJ1
atom type in the LJCutMelt class of the py_pot.py file. The final LJ2
argument maps LAMMPS atom type 3 to the LJ2 atom type in the python file.
If a mapping value is specified as NULL, the mapping is not performed
and any pair interaction involving this atom type will be skipped. This
can be used when a {python} potential is used as part of the {hybrid} or
{hybrid/overlay} pair style. The NULL values are then placeholders for
atom types that will be used with other pair styles.
:line
The python potential file has to start with the following code:
from __future__ import print_function
class LAMMPSPairPotential(object):
def __init__(self):
self.pmap=dict()
self.units='lj'
def map_coeff(self,name,ltype):
self.pmap\[ltype\]=name
def check_units(self,units):
if (units != self.units):
raise Exception("Conflicting units: %s vs. %s" % (self.units,units))
:pre
Any classes defining specific potentials have to be derived from this
class and should be initialized in a similar fashion to the example
given below. NOTE: The class constructor has to set up a data
structure containing the potential parameters supported by the class.
It should also define a variable {self.units} containing a string
matching one of the options of the LAMMPS "units"_units.html command,
which is used to verify that the potential definition in the python
class and in the LAMMPS input match. Here is an example for a
single-type Lennard-Jones potential class {LJCutMelt} in reduced
units, which defines an atom type {lj} for which the parameters
epsilon and sigma are both 1.0:
class LJCutMelt(LAMMPSPairPotential):
def __init__(self):
super(LJCutMelt,self).__init__()
# set coeffs: 48*eps*sig**12, 24*eps*sig**6,
# 4*eps*sig**12, 4*eps*sig**6
self.units = 'lj'
self.coeff = {'lj' : {'lj' : (48.0,24.0,4.0,4.0)}}
:pre
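The effect of the pair_coeff mapping described above can be illustrated
in plain python. The following is a minimal standalone sketch that
reuses the base class and the {LJCutMelt} constructor from above; the
direct calls to {map_coeff} and {check_units} are for illustration
only, since in a simulation LAMMPS performs them for you:

```python
# Standalone sketch: the base class plus the LJCutMelt example from
# above, exercised the way LAMMPS uses them via the pair_coeff command.
class LAMMPSPairPotential(object):
    def __init__(self):
        self.pmap = dict()
        self.units = 'lj'
    def map_coeff(self, name, ltype):
        # record which LAMMPS numeric atom type maps to which label
        self.pmap[ltype] = name
    def check_units(self, units):
        if units != self.units:
            raise Exception("Conflicting units: %s vs. %s" % (self.units, units))

class LJCutMelt(LAMMPSPairPotential):
    def __init__(self):
        super(LJCutMelt, self).__init__()
        self.units = 'lj'
        self.coeff = {'lj': {'lj': (48.0, 24.0, 4.0, 4.0)}}

pot = LJCutMelt()
pot.check_units('lj')   # matches "units lj" in the LAMMPS input
pot.map_coeff('lj', 1)  # like: pair_coeff * * py_pot.LJCutMelt lj
print(pot.pmap)         # {1: 'lj'}
```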
The class also has to provide two methods for the computation of the
potential energy and forces, which have to be named {compute_force}
and {compute_energy}. Both methods take 3 numerical arguments:

rsq = the square of the distance between a pair of atoms (float) :li
itype = the (numerical) type of the first atom :li
jtype = the (numerical) type of the second atom :ul

These functions need to compute the force and the energy, respectively,
and return the computed value. The functions need to use the {pmap}
dictionary to convert the LAMMPS atom type number to the symbolic
label of the internal potential parameter data structure. Following
the {LJCutMelt} example, here are the two functions:
def compute_force(self,rsq,itype,jtype):
coeff = self.coeff\[self.pmap\[itype\]\]\[self.pmap\[jtype\]\]
r2inv = 1.0/rsq
r6inv = r2inv*r2inv*r2inv
lj1 = coeff\[0\]
lj2 = coeff\[1\]
return (r6inv * (lj1*r6inv - lj2))*r2inv :pre
def compute_energy(self,rsq,itype,jtype):
coeff = self.coeff\[self.pmap\[itype\]\]\[self.pmap\[jtype\]\]
r2inv = 1.0/rsq
r6inv = r2inv*r2inv*r2inv
lj3 = coeff\[2\]
lj4 = coeff\[3\]
return (r6inv * (lj3*r6inv - lj4)) :pre
IMPORTANT NOTE: For consistency with the C++ pair styles in LAMMPS,
the {compute_force} function follows the conventions of the Pair::single()
method and does not return the full force, but the force scaled by
the distance between the two atoms. This value thus only needs to be
multiplied by delta x, delta y, and delta z to conveniently obtain the
three components of the force vector between the two atoms.
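This convention can be checked numerically. The following standalone
sketch reuses the {LJCutMelt} formulas from above, verifies the energy
and the scaled force at the Lennard-Jones minimum, and recovers the x
component of the force by multiplying the {compute_force} return value
by delta x, comparing against a finite-difference derivative of
{compute_energy} (the atom positions are arbitrary illustration values):

```python
import math

# LJCutMelt coefficients from above: 48*eps*sig**12, 24*eps*sig**6,
# 4*eps*sig**12, 4*eps*sig**6 with eps = sigma = 1.0
COEFF = (48.0, 24.0, 4.0, 4.0)

def compute_force(rsq):               # returns F/r, as in Pair::single()
    r2inv = 1.0 / rsq
    r6inv = r2inv ** 3
    return r6inv * (COEFF[0] * r6inv - COEFF[1]) * r2inv

def compute_energy(rsq):
    r6inv = (1.0 / rsq) ** 3
    return r6inv * (COEFF[2] * r6inv - COEFF[3])

# at the potential minimum r = 2**(1/6)*sigma: E = -epsilon, F = 0
rmin2 = 2.0 ** (1.0 / 3.0)            # (2**(1/6))**2
print(compute_energy(rmin2))          # approx -1.0 (i.e. -epsilon)
print(compute_force(rmin2))           # approx 0.0 (to round-off)

# force component on atom i from atom j at separation (dx, dy, dz)
dx, dy, dz = 1.2, 0.3, 0.4
rsq = dx * dx + dy * dy + dz * dz
fx = compute_force(rsq) * dx          # just scale F/r by delta x

# cross-check against -dE/dx via a central finite difference
h = 1.0e-6
ep = compute_energy((dx + h) ** 2 + dy * dy + dz * dz)
em = compute_energy((dx - h) ** 2 + dy * dy + dz * dz)
print(abs(fx + (ep - em) / (2.0 * h)) < 1.0e-5)   # True
```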
:line
IMPORTANT NOTE: The evaluation of scripted python code will slow down
the computation of pair-wise interactions quite significantly. However,
this can be largely worked around by using the python pair style not
for the actual simulation, but to generate tabulated potentials on the
fly with the "pair_write"_pair_write.html command. Below is an example
LAMMPS input for building such a table file:

pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
shell rm -f melt.table
pair_write 1 1 2000 rsq 0.01 2.5 melt.table lj :pre

Note that it is strongly recommended to [delete] an existing potential
table file before generating it, as done with the {shell rm} command
above. The {pair_write} command always appends to a table file, while
pair style table uses the first entry matching the requested keyword.
Thus, after changing the potential function in the python class, the
table pair style would otherwise still read the old variant.
After switching the pair style to {table}, the potential tables need
to be assigned to the LAMMPS atom types like this:
pair_style table linear 2000
pair_coeff 1 1 melt.table lj :pre
This can also be done for more complex systems. Please see the
{examples/python} folder for a few more examples.
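What {pair_write} produces can also be approximated in pure python. The
sketch below is an illustration under stated assumptions, not LAMMPS
code: the {write_table} helper, the output file name, and the header
layout (a keyword line, an "N" line, a blank line, then indexed
"r energy force" rows, following the format described in the pair style
table documentation) are choices made here. It also highlights that the
force column of a table file holds -dE/dr, so the scaled value returned
by {compute_force} must be multiplied by r:

```python
# Sketch: emulate pair_write for the LJCutMelt example in pure python.
# The table layout is an assumption based on the pair_style table docs.
import math

class LJCutMelt:  # minimal stand-in for the doc's class (units 'lj')
    def __init__(self):
        self.coeff = {'lj': {'lj': (48.0, 24.0, 4.0, 4.0)}}
    def compute_force(self, rsq, i, j):   # F/r (Pair::single convention)
        lj1, lj2, _, _ = self.coeff[i][j]
        r2inv = 1.0 / rsq
        r6inv = r2inv ** 3
        return r6inv * (lj1 * r6inv - lj2) * r2inv
    def compute_energy(self, rsq, i, j):
        _, _, lj3, lj4 = self.coeff[i][j]
        r6inv = (1.0 / rsq) ** 3
        return r6inv * (lj3 * r6inv - lj4)

def write_table(pot, fname, keyword, n, rmin, rmax):
    with open(fname, 'w') as f:
        f.write("# tabulated from python potential class\n\n")
        f.write("%s\nN %d\n\n" % (keyword, n))
        for k in range(n):
            # like pair_write's "rsq" mode: spacing uniform in r**2
            rsq = rmin ** 2 + (rmax ** 2 - rmin ** 2) * k / (n - 1)
            r = math.sqrt(rsq)
            e = pot.compute_energy(rsq, 'lj', 'lj')
            fpair = pot.compute_force(rsq, 'lj', 'lj')
            # the table stores the force -dE/dr, so scale F/r by r
            f.write("%d %.8g %.8g %.8g\n" % (k + 1, r, e, fpair * r))

write_table(LJCutMelt(), "melt_py.table", "lj", 2000, 0.01, 2.5)
```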
:line
[Mixing, shift, table, tail correction, restart, rRESPA info]:
Mixing of potential parameters has to be handled inside the provided
python module. The python pair style simply assumes that force and
energy computation can be correctly performed for all pairs of atom
types as they are mapped to the atom type labels inside the python
potential class.
This pair style does not support the "pair_modify"_pair_modify.html
shift, table, and tail options.
This pair style does not write its information to "binary restart
files"_restart.html, since it is stored in potential files. Thus, you
need to re-specify the pair_style and pair_coeff commands in an input
script that reads a restart file.
This pair style can only be used via the {pair} keyword of the
"run_style respa"_run_style.html command. It does not support the
{inner}, {middle}, {outer} keywords.
:line
[Restrictions:]
This pair style is part of the PYTHON package. It is only enabled
if LAMMPS was built with that package. See
the "Making LAMMPS"_Section_start.html#start_3 section for more info.
[Related commands:]
"pair_coeff"_pair_coeff.html, "pair_write"_pair_write.html,
"pair style table"_pair_table.html
[Default:] none


@ -72,6 +72,7 @@ Pair Styles :h1
pair_oxdna2
pair_peri
pair_polymorphic
pair_python
pair_quip
pair_reax
pair_reaxc


@ -0,0 +1,41 @@
This folder contains several LAMMPS input scripts and a python module
file py_pot.py to demonstrate the use of the python pair style.

in.pair_python_melt:
This is a version of the melt example using the python pair style. The
first part of the output should have identical energies, temperature,
and pressure to those of the melt example. The following two sections
then demonstrate how to restart with pair style python from a restart
file and a data file.
in.pair_python_hybrid:
This version shows how to mix regular pair styles with a python pair
style. In this case, however, both potentials are the same, so the
energies and pressure in the output should be identical to those of
the previous example.
in.pair_python_spce:
This input shows a simulation of a small bulk water system with the
SPC/E water potential. Since the python pair style does not support
computing coulomb contributions, pair style hybrid/overlay is used to
combine the python style containing the Lennard-Jones part with the
long-range coulomb part. As in the previous example, it also
showcases restarting.
in.pair_python_table:
This input demonstrates using the python pair style to build a table
file for use with pair style table, which will run much faster than
the python pair style. This example tabulates the melt example from
above. Note that tabulation is approximate, so the output will only
agree with the melt result to some degree.
in.pair_python_coulomb:
This is another tabulation example, this time for the SPC/E water
example with cutoff coulomb interactions. Please note that tabulating
long-range coulomb introduces a systematic error in forces and
energies for all systems with bonds, angles, and dihedrals. In this
case, only the energies are affected, since the water molecules are
held rigid with fix shake. To enable long-range coulomb, the coul/cut
style needs to be replaced with coul/long, a suitable kspace style
added, and the pppm keyword added to the table pair style definition.
in.pair_python_long:
The final example shows how to combine long-range coulomb with tabulation
for only the short range interactions via pair style hybrid/overlay.

9029
examples/python/data.spce Normal file

File diff suppressed because it is too large


@ -0,0 +1,42 @@
units real
atom_style full
read_data data.spce
pair_style hybrid/overlay coul/cut 12.0 python 12.0
pair_coeff * * coul/cut
pair_coeff * * python py_pot.LJCutSPCE OW NULL
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
fix 2 all nvt temp 300.0 300.0 100.0
# create combined lj/coul table for all atom types
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW -0.8472 -0.8472
pair_write 1 2 2000 rsq 0.1 12 spce.table OW-HW -0.8472 0.4236
pair_write 2 2 2000 rsq 0.1 12 spce.table HW-HW 0.4236 0.4236
# switch to tabulated potential
pair_style table linear 2000 pppm
pair_coeff 1 1 spce.table OW-OW
pair_coeff 1 2 spce.table OW-HW
pair_coeff 2 2 spce.table HW-HW
thermo 10
run 100
shell rm spce.table


@ -0,0 +1,63 @@
# 3d Lennard-Jones hybrid
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 10 0 10 0 10
create_box 2 box
create_atoms 1 box
mass * 1.0
region half block -0.1 4.9 0 10 0 10
set region half type 2
velocity all create 3.0 87287
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
write_data hybrid.data
write_restart hybrid.restart
clear
read_restart hybrid.restart
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
fix 1 all nve
thermo 50
run 250
clear
units lj
atom_style atomic
read_data hybrid.data
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
shell rm hybrid.data hybrid.restart


@ -0,0 +1,38 @@
units real
atom_style full
read_data data.spce
pair_style python 12.0
pair_coeff * * py_pot.LJCutSPCE OW HW
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
fix 2 all nvt temp 300.0 300.0 100.0
# create only lj/cut table for the oxygen atoms from python
shell rm -f spce.table
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW
# switch to tabulated potential with long-range coulomb as overlay
pair_style hybrid/overlay coul/long 12.0 table linear 2000
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff 1 1 table spce.table OW-OW
thermo 10
run 100
shell rm spce.table


@ -0,0 +1,58 @@
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 10 0 10 0 10
create_box 1 box
create_atoms 1 box
mass * 1.0
velocity all create 3.0 87287
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
write_data melt.data
write_restart melt.restart
clear
read_restart melt.restart
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
fix 1 all nve
thermo 50
run 250
clear
units lj
atom_style atomic
read_data melt.data
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
shell rm melt.data melt.restart


@ -0,0 +1,28 @@
units real
atom_style full
read_data data.spce
pair_style hybrid/overlay coul/long 12.0 python 12.0
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff * * python py_pot.LJCutSPCE OW NULL
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
fix 2 all nvt temp 300.0 300.0 100.0
thermo 10
run 100


@ -0,0 +1,32 @@
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 10 0 10 0 10
create_box 1 box
create_atoms 1 box
mass * 1.0
velocity all create 3.0 87287
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.01 2.5 lj_1_1.table LJ
pair_style table linear 2000
pair_coeff 1 1 lj_1_1.table LJ
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
shell rm lj_1_1.table


@ -0,0 +1,178 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
1 by 1 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style hybrid/overlay python 12.0 coul/long 12.0
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff * * python potentials.LJCutSPCE OW NULL
pair_modify table 0
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
# create combined lj/coul table for all atom types
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW -0.8472 -0.8472
PPPM initialization ...
WARNING: Using polynomial approximation for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394206
estimated relative force accuracy = 1.18714e-06
using double precision FFTs
3d grid and FFT values/proc = 103823 64000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair python, perpetual, skip from (2)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair coul/long, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
pair_write 1 2 2000 rsq 0.1 12 spce.table OW-HW -0.8472 0.4236
PPPM initialization ...
WARNING: Using polynomial approximation for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394206
estimated relative force accuracy = 1.18714e-06
using double precision FFTs
3d grid and FFT values/proc = 103823 64000
pair_write 2 2 2000 rsq 0.1 12 spce.table HW-HW 0.4236 0.4236
PPPM initialization ...
WARNING: Using polynomial approximation for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394206
estimated relative force accuracy = 1.18714e-06
using double precision FFTs
3d grid and FFT values/proc = 103823 64000
# switch to tabulated potential
pair_style table linear 2000 pppm
pair_coeff 1 1 spce.table OW-OW
pair_coeff 1 2 spce.table OW-HW
pair_coeff 2 2 spce.table HW-HW
thermo 10
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394674
estimated relative force accuracy = 1.18855e-06
using double precision FFTs
3d grid and FFT values/proc = 103823 64000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair table, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 35.26 | 35.26 | 35.26 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -100272.97 0 -100272.97 -1282.0708
10 120.61568 -101350.63 0 -100272.39 -4077.5051
20 136.11379 -101465.43 0 -100248.65 -5136.5677
30 137.01602 -101455.3 0 -100230.46 -5347.8311
40 153.424 -101582.46 0 -100210.93 -5223.1676
50 167.73654 -101686.24 0 -100186.77 -4468.6687
60 163.11642 -101618.16 0 -100159.99 -3291.7815
70 169.64512 -101647.89 0 -100131.35 -2611.638
80 182.9979 -101737.01 0 -100101.11 -2390.6293
90 191.33873 -101778.71 0 -100068.24 -2239.386
100 194.7458 -101775.84 0 -100034.92 -1951.9128
Loop time of 7.60221 on 1 procs for 100 steps with 4500 atoms
Performance: 1.137 ns/day, 21.117 hours/ns, 13.154 timesteps/s
99.7% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.7401 | 5.7401 | 5.7401 | 0.0 | 75.51
Bond | 0.00017881 | 0.00017881 | 0.00017881 | 0.0 | 0.00
Kspace | 1.5387 | 1.5387 | 1.5387 | 0.0 | 20.24
Neigh | 0.2299 | 0.2299 | 0.2299 | 0.0 | 3.02
Comm | 0.024311 | 0.024311 | 0.024311 | 0.0 | 0.32
Output | 0.00057936 | 0.00057936 | 0.00057936 | 0.0 | 0.01
Modify | 0.063158 | 0.063158 | 0.063158 | 0.0 | 0.83
Other | | 0.005243 | | | 0.07
Nlocal: 4500 ave 4500 max 4500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 21216 ave 21216 max 21216 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 2.60177e+06 ave 2.60177e+06 max 2.60177e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2601766
Ave neighs/atom = 578.17
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
shell rm spce.table
Total wall time: 0:00:07


@ -0,0 +1,138 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
1 by 1 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style hybrid/overlay coul/cut 12.0 python 12.0
pair_coeff * * coul/cut
pair_coeff * * python py_pot.LJCutSPCE OW NULL
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
# create combined lj/coul table for all atom types
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW -0.8472 -0.8472
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair coul/cut, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
(2) pair python, perpetual, skip from (1)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
pair_write 1 2 2000 rsq 0.1 12 spce.table OW-HW -0.8472 0.4236
pair_write 2 2 2000 rsq 0.1 12 spce.table HW-HW 0.4236 0.4236
# switch to tabulated potential
pair_style table linear 2000 pppm
pair_coeff 1 1 spce.table OW-OW
pair_coeff 1 2 spce.table OW-HW
pair_coeff 2 2 spce.table HW-HW
thermo 10
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair table, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 25.08 | 25.08 | 25.08 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -18284.922 0 -18284.922 -2080.7739
10 146.83806 -19552.072 0 -18239.421 -4865.31
20 183.15761 -18706.872 0 -17069.543 -4865.6695
30 205.96203 -18901.541 0 -17060.354 -4454.8634
40 241.62768 -18323.117 0 -16163.099 -3269.1475
50 265.98384 -19883.562 0 -17505.813 -2788.5194
60 274.01897 -21320.575 0 -18870.996 -2387.0708
70 288.7601 -19849.269 0 -17267.913 -1235.818
80 300.64724 -20958.602 0 -18270.981 -1714.7988
90 304.19113 -21580.4 0 -18861.099 -2144.1614
100 304.22027 -21239.014 0 -18519.452 -2092.6759
Loop time of 6.01861 on 1 procs for 100 steps with 4500 atoms
Performance: 1.436 ns/day, 16.718 hours/ns, 16.615 timesteps/s
99.7% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.698 | 5.698 | 5.698 | 0.0 | 94.67
Bond | 0.0001626 | 0.0001626 | 0.0001626 | 0.0 | 0.00
Neigh | 0.23235 | 0.23235 | 0.23235 | 0.0 | 3.86
Comm | 0.018961 | 0.018961 | 0.018961 | 0.0 | 0.32
Output | 0.00058126 | 0.00058126 | 0.00058126 | 0.0 | 0.01
Modify | 0.063452 | 0.063452 | 0.063452 | 0.0 | 1.05
Other | | 0.005146 | | | 0.09
Nlocal: 4500 ave 4500 max 4500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 21285 ave 21285 max 21285 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 2.59766e+06 ave 2.59766e+06 max 2.59766e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2597662
Ave neighs/atom = 577.258
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
shell rm spce.table
Total wall time: 0:00:06


@ -0,0 +1,138 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
2 by 2 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style hybrid/overlay coul/cut 12.0 python 12.0
pair_coeff * * coul/cut
pair_coeff * * python py_pot.LJCutSPCE OW NULL
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
# create combined lj/coul table for all atom types
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW -0.8472 -0.8472
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair coul/cut, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
(2) pair python, perpetual, skip from (1)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
pair_write 1 2 2000 rsq 0.1 12 spce.table OW-HW -0.8472 0.4236
pair_write 2 2 2000 rsq 0.1 12 spce.table HW-HW 0.4236 0.4236
# switch to tabulated potential
pair_style table linear 2000 pppm
pair_coeff 1 1 spce.table OW-OW
pair_coeff 1 2 spce.table OW-HW
pair_coeff 2 2 spce.table HW-HW
thermo 10
run 100
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair table, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 9.962 | 9.963 | 9.963 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -18284.922 0 -18284.922 -2080.7739
10 146.83806 -19552.072 0 -18239.421 -4865.31
20 183.15761 -18706.872 0 -17069.543 -4865.6695
30 205.96203 -18901.541 0 -17060.354 -4454.8634
40 241.62768 -18323.117 0 -16163.099 -3269.1475
50 265.98384 -19883.562 0 -17505.813 -2788.5194
60 274.01897 -21320.575 0 -18870.996 -2387.0708
70 288.7601 -19849.269 0 -17267.913 -1235.818
80 300.64724 -20958.602 0 -18270.981 -1714.7988
90 304.19113 -21580.4 0 -18861.099 -2144.1614
100 304.22027 -21239.014 0 -18519.452 -2092.6759
Loop time of 1.7361 on 4 procs for 100 steps with 4500 atoms
Performance: 4.977 ns/day, 4.823 hours/ns, 57.600 timesteps/s
99.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.4424 | 1.5149 | 1.6066 | 5.3 | 87.26
Bond | 8.9407e-05 | 0.00010258 | 0.00012374 | 0.0 | 0.01
Neigh | 0.064205 | 0.064241 | 0.064295 | 0.0 | 3.70
Comm | 0.023643 | 0.1155 | 0.18821 | 19.2 | 6.65
Output | 0.00038004 | 0.00042355 | 0.00054145 | 0.0 | 0.02
Modify | 0.037507 | 0.037787 | 0.038042 | 0.1 | 2.18
Other | | 0.003148 | | | 0.18
Nlocal: 1125 ave 1162 max 1098 min
Histogram: 1 1 0 0 0 1 0 0 0 1
Nghost: 12267.8 ave 12302 max 12238 min
Histogram: 2 0 0 0 0 0 0 0 1 1
Neighs: 649416 ave 681458 max 630541 min
Histogram: 1 0 2 0 0 0 0 0 0 1
Total # of neighbors = 2597662
Ave neighs/atom = 577.258
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
shell rm spce.table
Total wall time: 0:00:01


@ -0,0 +1,250 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones hybrid
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 10 0 10 0 10
create_box 2 box
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 4000 atoms
mass * 1.0
region half block -0.1 4.9 0 10 0 10
set region half type 2
2000 settings made for type
velocity all create 3.0 87287
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair python, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 4.446 | 4.446 | 4.446 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
50 1.6758903 -4.7955425 0 -2.2823355 5.670064
100 1.6458363 -4.7492704 0 -2.2811332 5.8691042
150 1.6324555 -4.7286791 0 -2.280608 5.9589514
200 1.6630725 -4.7750988 0 -2.2811136 5.7364886
250 1.6275257 -4.7224992 0 -2.281821 5.9567365
Loop time of 10.0384 on 1 procs for 250 steps with 4000 atoms
Performance: 10758.705 tau/day, 24.904 timesteps/s
98.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 9.913 | 9.913 | 9.913 | 0.0 | 98.75
Neigh | 0.095569 | 0.095569 | 0.095569 | 0.0 | 0.95
Comm | 0.012686 | 0.012686 | 0.012686 | 0.0 | 0.13
Output | 0.00027537 | 0.00027537 | 0.00027537 | 0.0 | 0.00
Modify | 0.01386 | 0.01386 | 0.01386 | 0.0 | 0.14
Other | | 0.003027 | | | 0.03
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5499 ave 5499 max 5499 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 85978 ave 85978 max 85978 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 85978
Ave neighs/atom = 21.4945
Neighbor list builds = 12
Dangerous builds not checked
write_data hybrid.data
write_restart hybrid.restart
clear
using 1 OpenMP thread(s) per MPI task
read_restart hybrid.restart
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
4000 atoms
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair python, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 4.245 | 4.245 | 4.245 Mbytes
Step Temp E_pair E_mol TotEng Press
250 1.6275257 -4.7224992 0 -2.281821 5.9567365
300 1.645592 -4.7496711 0 -2.2819002 5.8734193
350 1.6514972 -4.7580756 0 -2.2814491 5.810167
400 1.6540555 -4.7622999 0 -2.281837 5.8200413
450 1.6264734 -4.7200865 0 -2.2809863 5.9546991
500 1.6366891 -4.7350979 0 -2.2806781 5.9369284
Loop time of 10.0803 on 1 procs for 250 steps with 4000 atoms
Performance: 10713.932 tau/day, 24.801 timesteps/s
98.7% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 9.8479 | 9.8479 | 9.8479 | 0.0 | 97.69
Neigh | 0.20002 | 0.20002 | 0.20002 | 0.0 | 1.98
Comm | 0.01437 | 0.01437 | 0.01437 | 0.0 | 0.14
Output | 0.00024033 | 0.00024033 | 0.00024033 | 0.0 | 0.00
Modify | 0.013422 | 0.013422 | 0.013422 | 0.0 | 0.13
Other | | 0.004348 | | | 0.04
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5472 ave 5472 max 5472 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 86930 ave 86930 max 86930 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 86930
Ave neighs/atom = 21.7325
Neighbor list builds = 25
Dangerous builds = 25
clear
using 1 OpenMP thread(s) per MPI task
units lj
atom_style atomic
read_data hybrid.data
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
reading atoms ...
4000 atoms
reading velocities ...
4000 velocities
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair python, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.745 | 3.745 | 3.745 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.6275257 -4.7224992 0 -2.281821 5.9567365
50 1.6454666 -4.7497515 0 -2.2821686 5.8729175
100 1.6512008 -4.7582693 0 -2.2820874 5.8090548
150 1.6537193 -4.7627023 0 -2.2827434 5.8177704
200 1.6258731 -4.7205017 0 -2.2823017 5.952511
250 1.6370862 -4.7373176 0 -2.2823022 5.925807
Loop time of 9.93686 on 1 procs for 250 steps with 4000 atoms
Performance: 10868.626 tau/day, 25.159 timesteps/s
98.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 9.8119 | 9.8119 | 9.8119 | 0.0 | 98.74
Neigh | 0.096041 | 0.096041 | 0.096041 | 0.0 | 0.97
Comm | 0.01243 | 0.01243 | 0.01243 | 0.0 | 0.13
Output | 0.00028133 | 0.00028133 | 0.00028133 | 0.0 | 0.00
Modify | 0.013261 | 0.013261 | 0.013261 | 0.0 | 0.13
Other | | 0.002994 | | | 0.03
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5487 ave 5487 max 5487 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 86831 ave 86831 max 86831 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 86831
Ave neighs/atom = 21.7078
Neighbor list builds = 12
Dangerous builds not checked
shell rm hybrid.data hybrid.restart
Total wall time: 0:00:30


@ -0,0 +1,250 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones hybrid
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 10 0 10 0 10
create_box 2 box
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 4000 atoms
mass * 1.0
region half block -0.1 4.9 0 10 0 10
set region half type 2
2000 settings made for type
velocity all create 3.0 87287
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair python, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.953 | 3.953 | 3.953 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
50 1.6754119 -4.7947589 0 -2.2822693 5.6615925
100 1.6503357 -4.756014 0 -2.2811293 5.8050524
150 1.6596605 -4.7699432 0 -2.2810749 5.7830138
200 1.6371874 -4.7365462 0 -2.2813789 5.9246674
250 1.6323462 -4.7292021 0 -2.2812949 5.9762238
Loop time of 2.71748 on 4 procs for 250 steps with 4000 atoms
Performance: 39742.745 tau/day, 91.997 timesteps/s
98.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 2.4777 | 2.5639 | 2.6253 | 3.9 | 94.35
Neigh | 0.024626 | 0.025331 | 0.02598 | 0.3 | 0.93
Comm | 0.061933 | 0.12297 | 0.20987 | 18.0 | 4.53
Output | 0.00026131 | 0.00027591 | 0.00031352 | 0.0 | 0.01
Modify | 0.0036087 | 0.0036573 | 0.0037553 | 0.1 | 0.13
Other | | 0.001337 | | | 0.05
Nlocal: 1000 ave 1010 max 982 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 2703.75 ave 2713 max 2689 min
Histogram: 1 0 0 0 0 0 0 2 0 1
Neighs: 21469.8 ave 22167 max 20546 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Total # of neighbors = 85879
Ave neighs/atom = 21.4698
Neighbor list builds = 12
Dangerous builds not checked
write_data hybrid.data
write_restart hybrid.restart
clear
using 1 OpenMP thread(s) per MPI task
read_restart hybrid.restart
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
4000 atoms
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair python, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.612 | 3.612 | 3.612 Mbytes
Step Temp E_pair E_mol TotEng Press
250 1.6323462 -4.7292062 0 -2.2812991 5.9762168
300 1.6451788 -4.7488091 0 -2.2816578 5.8375485
350 1.6171909 -4.7064928 0 -2.2813129 6.0094235
400 1.6388136 -4.7387093 0 -2.2811035 5.9331084
450 1.6431295 -4.7452215 0 -2.2811435 5.8929898
500 1.643316 -4.7454222 0 -2.2810644 5.8454817
Loop time of 2.75827 on 4 procs for 250 steps with 4000 atoms
Performance: 39155.038 tau/day, 90.637 timesteps/s
98.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 2.3631 | 2.5412 | 2.6672 | 7.2 | 92.13
Neigh | 0.050358 | 0.052316 | 0.053312 | 0.5 | 1.90
Comm | 0.032793 | 0.15893 | 0.33904 | 29.1 | 5.76
Output | 0.00018525 | 0.00020212 | 0.00024509 | 0.0 | 0.01
Modify | 0.0034482 | 0.0035321 | 0.0036578 | 0.1 | 0.13
Other | | 0.002039 | | | 0.07
Nlocal: 1000 ave 1012 max 983 min
Histogram: 1 0 0 0 0 0 2 0 0 1
Nghost: 2699 ave 2706 max 2693 min
Histogram: 1 1 0 0 0 0 1 0 0 1
Neighs: 21802 ave 22700 max 21236 min
Histogram: 1 1 0 1 0 0 0 0 0 1
Total # of neighbors = 87208
Ave neighs/atom = 21.802
Neighbor list builds = 25
Dangerous builds = 25
clear
using 1 OpenMP thread(s) per MPI task
units lj
atom_style atomic
read_data hybrid.data
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
reading atoms ...
4000 atoms
reading velocities ...
4000 velocities
pair_style hybrid lj/cut 2.5 python 2.5
pair_coeff * * python py_pot.LJCutMelt lj NULL
pair_coeff * 2 lj/cut 1.0 1.0
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
3 neighbor lists, perpetual/occasional/extra = 3 0 0
(1) pair lj/cut, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(2) pair python, perpetual, skip from (3)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
(3) neighbor class addition, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.112 | 3.112 | 3.112 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.6323462 -4.7292062 0 -2.2812991 5.9762168
50 1.6450626 -4.7488948 0 -2.2819177 5.8370409
100 1.6169004 -4.7066969 0 -2.2819526 6.0082546
150 1.6384234 -4.7389689 0 -2.2819482 5.9315273
200 1.6428814 -4.7460743 0 -2.2823683 5.8888228
250 1.6432631 -4.7466603 0 -2.2823818 5.8398819
Loop time of 2.71936 on 4 procs for 250 steps with 4000 atoms
Performance: 39715.257 tau/day, 91.933 timesteps/s
98.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 2.3769 | 2.5432 | 2.6447 | 6.6 | 93.52
Neigh | 0.024088 | 0.025093 | 0.025748 | 0.4 | 0.92
Comm | 0.044614 | 0.14598 | 0.31339 | 27.5 | 5.37
Output | 0.00026488 | 0.00028872 | 0.00034189 | 0.0 | 0.01
Modify | 0.0034099 | 0.0035709 | 0.0036535 | 0.2 | 0.13
Other | | 0.001215 | | | 0.04
Nlocal: 1000 ave 1013 max 989 min
Histogram: 1 0 0 1 0 1 0 0 0 1
Nghost: 2695.5 ave 2706 max 2682 min
Histogram: 1 0 0 0 0 0 2 0 0 1
Neighs: 21792 ave 22490 max 21457 min
Histogram: 2 0 1 0 0 0 0 0 0 1
Total # of neighbors = 87168
Ave neighs/atom = 21.792
Neighbor list builds = 12
Dangerous builds not checked
shell rm hybrid.data hybrid.restart
Total wall time: 0:00:08


@ -0,0 +1,146 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
1 by 1 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style python 12.0
pair_coeff * * py_pot.LJCutSPCE OW HW
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
# create only lj/cut table for the oxygen atoms from python
shell rm -f spce.table
WARNING: Shell command 'rm' failed with error 'No such file or directory' (../input.cpp:1285)
WARNING: Shell command 'rm' failed with error 'No such file or directory' (../input.cpp:1285)
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
# switch to tabulated potential with long-range coulomb as overlay
pair_style hybrid/overlay coul/long 12.0 table linear 2000
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff 1 1 table spce.table OW-OW
thermo 10
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394674
estimated relative force accuracy = 1.18855e-06
using double precision FFTs
3d grid and FFT values/proc = 103823 64000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair coul/long, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
(2) pair table, perpetual, skip from (1)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 36.47 | 36.47 | 36.47 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -16690.032 0 -16690.032 -1268.9538
10 120.58553 -17767.504 0 -16689.536 -4063.8589
20 136.11736 -17882.557 0 -16665.742 -5124.6758
30 137.00764 -17872.318 0 -16647.545 -5337.2022
40 153.38868 -17999.269 0 -16628.059 -5213.6001
50 167.70342 -18103.06 0 -16603.883 -4460.6632
60 163.07134 -18034.856 0 -16577.088 -3285.0037
70 169.59286 -18064.636 0 -16548.57 -2606.407
80 182.92893 -18153.499 0 -16518.215 -2385.5152
90 191.2793 -18195.356 0 -16485.425 -2235.3701
100 194.68587 -18192.458 0 -16452.073 -1948.3746
Loop time of 7.90705 on 1 procs for 100 steps with 4500 atoms
Performance: 1.093 ns/day, 21.964 hours/ns, 12.647 timesteps/s
99.6% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 6.0343 | 6.0343 | 6.0343 | 0.0 | 76.32
Bond | 0.00019622 | 0.00019622 | 0.00019622 | 0.0 | 0.00
Kspace | 1.5311 | 1.5311 | 1.5311 | 0.0 | 19.36
Neigh | 0.246 | 0.246 | 0.246 | 0.0 | 3.11
Comm | 0.023937 | 0.023937 | 0.023937 | 0.0 | 0.30
Output | 0.00060368 | 0.00060368 | 0.00060368 | 0.0 | 0.01
Modify | 0.065543 | 0.065543 | 0.065543 | 0.0 | 0.83
Other | | 0.005364 | | | 0.07
Nlocal: 4500 ave 4500 max 4500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 21216 ave 21216 max 21216 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 2.60177e+06 ave 2.60177e+06 max 2.60177e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2601769
Ave neighs/atom = 578.171
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
shell rm spce.table
Total wall time: 0:00:08


@ -0,0 +1,146 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
2 by 2 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style python 12.0
pair_coeff * * py_pot.LJCutSPCE OW HW
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
# create only lj/cut table for the oxygen atoms from python
shell rm -f spce.table
WARNING: Shell command 'rm' failed with error 'No such file or directory' (../input.cpp:1285)
WARNING: Shell command 'rm' failed with error 'No such file or directory' (../input.cpp:1285)
pair_write 1 1 2000 rsq 0.1 12 spce.table OW-OW
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
# switch to tabulated potential with long-range coulomb as overlay
pair_style hybrid/overlay coul/long 12.0 table linear 2000
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff 1 1 table spce.table OW-OW
thermo 10
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394674
estimated relative force accuracy = 1.18855e-06
using double precision FFTs
3d grid and FFT values/proc = 34263 16000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair coul/long, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
(2) pair table, perpetual, skip from (1)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 13.45 | 13.45 | 13.45 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -16690.032 0 -16690.032 -1268.9538
10 120.58553 -17767.504 0 -16689.536 -4063.8589
20 136.11736 -17882.557 0 -16665.742 -5124.6758
30 137.00764 -17872.318 0 -16647.545 -5337.2022
40 153.38868 -17999.269 0 -16628.059 -5213.6001
50 167.70342 -18103.06 0 -16603.883 -4460.6632
60 163.07134 -18034.856 0 -16577.088 -3285.0037
70 169.59286 -18064.636 0 -16548.57 -2606.407
80 182.92893 -18153.499 0 -16518.215 -2385.5152
90 191.2793 -18195.356 0 -16485.425 -2235.3701
100 194.68587 -18192.458 0 -16452.073 -1948.3746
Loop time of 2.36748 on 4 procs for 100 steps with 4500 atoms
Performance: 3.649 ns/day, 6.576 hours/ns, 42.239 timesteps/s
99.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.5309 | 1.5977 | 1.6926 | 4.7 | 67.49
Bond | 9.9182e-05 | 0.00012749 | 0.00016403 | 0.0 | 0.01
Kspace | 0.52158 | 0.61232 | 0.67676 | 7.3 | 25.86
Neigh | 0.066937 | 0.06702 | 0.067093 | 0.0 | 2.83
Comm | 0.035882 | 0.039862 | 0.042244 | 1.2 | 1.68
Output | 0.0004003 | 0.00044602 | 0.00057578 | 0.0 | 0.02
Modify | 0.046088 | 0.046227 | 0.046315 | 0.0 | 1.95
Other | | 0.003775 | | | 0.16
Nlocal: 1125 ave 1154 max 1092 min
Histogram: 1 0 0 0 1 0 0 1 0 1
Nghost: 12256.2 ave 12296 max 12213 min
Histogram: 1 0 1 0 0 0 0 0 1 1
Neighs: 650442 ave 678831 max 626373 min
Histogram: 1 0 0 0 2 0 0 0 0 1
Total # of neighbors = 2601769
Ave neighs/atom = 578.171
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
shell rm spce.table
Total wall time: 0:00:02


@ -0,0 +1,214 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 10 0 10 0 10
create_box 1 box
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 4000 atoms
mass * 1.0
velocity all create 3.0 87287
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.184 | 3.184 | 3.184 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
50 1.6758903 -4.7955425 0 -2.2823355 5.670064
100 1.6458363 -4.7492704 0 -2.2811332 5.8691042
150 1.6324555 -4.7286791 0 -2.280608 5.9589514
200 1.6630725 -4.7750988 0 -2.2811136 5.7364886
250 1.6275257 -4.7224992 0 -2.281821 5.9567365
Loop time of 20.9283 on 1 procs for 250 steps with 4000 atoms
Performance: 5160.475 tau/day, 11.946 timesteps/s
98.6% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 20.809 | 20.809 | 20.809 | 0.0 | 99.43
Neigh | 0.088638 | 0.088638 | 0.088638 | 0.0 | 0.42
Comm | 0.013424 | 0.013424 | 0.013424 | 0.0 | 0.06
Output | 0.0002737 | 0.0002737 | 0.0002737 | 0.0 | 0.00
Modify | 0.014334 | 0.014334 | 0.014334 | 0.0 | 0.07
Other | | 0.003089 | | | 0.01
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5499 ave 5499 max 5499 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 151513 ave 151513 max 151513 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 151513
Ave neighs/atom = 37.8783
Neighbor list builds = 12
Dangerous builds not checked
write_data melt.data
write_restart melt.restart
clear
using 1 OpenMP thread(s) per MPI task
read_restart melt.restart
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
4000 atoms
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.36 | 3.36 | 3.36 Mbytes
Step Temp E_pair E_mol TotEng Press
250 1.6275257 -4.7224992 0 -2.281821 5.9567365
300 1.645592 -4.7496711 0 -2.2819002 5.8734193
350 1.6514972 -4.7580756 0 -2.2814491 5.810167
400 1.6540555 -4.7622999 0 -2.281837 5.8200413
450 1.6264734 -4.7200865 0 -2.2809863 5.9546991
500 1.6366891 -4.7350979 0 -2.2806781 5.9369284
Loop time of 21.1422 on 1 procs for 250 steps with 4000 atoms
Performance: 5108.279 tau/day, 11.825 timesteps/s
98.5% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 20.925 | 20.925 | 20.925 | 0.0 | 98.97
Neigh | 0.18452 | 0.18452 | 0.18452 | 0.0 | 0.87
Comm | 0.014836 | 0.014836 | 0.014836 | 0.0 | 0.07
Output | 0.00027108 | 0.00027108 | 0.00027108 | 0.0 | 0.00
Modify | 0.01366 | 0.01366 | 0.01366 | 0.0 | 0.06
Other | | 0.004355 | | | 0.02
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5472 ave 5472 max 5472 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 151513 ave 151513 max 151513 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 151513
Ave neighs/atom = 37.8783
Neighbor list builds = 25
Dangerous builds = 25
clear
using 1 OpenMP thread(s) per MPI task
units lj
atom_style atomic
read_data melt.data
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
reading atoms ...
4000 atoms
reading velocities ...
4000 velocities
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.86 | 2.86 | 2.86 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.6275257 -4.7224992 0 -2.281821 5.9567365
50 1.6454666 -4.7497515 0 -2.2821686 5.8729175
100 1.6512008 -4.7582693 0 -2.2820874 5.8090548
150 1.6537193 -4.7627023 0 -2.2827434 5.8177704
200 1.6258731 -4.7205017 0 -2.2823017 5.952511
250 1.6370862 -4.7373176 0 -2.2823022 5.925807
Loop time of 21.1026 on 1 procs for 250 steps with 4000 atoms
Performance: 5117.845 tau/day, 11.847 timesteps/s
98.7% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 20.984 | 20.984 | 20.984 | 0.0 | 99.44
Neigh | 0.088639 | 0.088639 | 0.088639 | 0.0 | 0.42
Comm | 0.012881 | 0.012881 | 0.012881 | 0.0 | 0.06
Output | 0.00028563 | 0.00028563 | 0.00028563 | 0.0 | 0.00
Modify | 0.013523 | 0.013523 | 0.013523 | 0.0 | 0.06
Other | | 0.003033 | | | 0.01
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5487 ave 5487 max 5487 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 151490 ave 151490 max 151490 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 151490
Ave neighs/atom = 37.8725
Neighbor list builds = 12
Dangerous builds not checked
shell rm melt.data melt.restart
Total wall time: 0:01:05
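The logs above drive the {python} pair style through the py_pot.LJCutMelt class from the py_pot.py example file bundled with this pull request. As a rough sketch of what such a potential class looks like, modeled on that example; the base-class name and method signatures here are assumptions and should be checked against the "pair_python"_pair_python.html doc page:

```python
# Sketch of a potential class usable with "pair_style python";
# modeled on the py_pot.py example shipped with LAMMPS.  The base
# class name and the compute_force/compute_energy signatures are
# assumptions -- consult the pair_python doc page for the
# authoritative interface.

class LAMMPSPairPotential(object):
    def __init__(self):
        self.pmap = dict()      # maps numeric atom types to coefficient names
        self.units = 'lj'

    def map_coeff(self, name, ltype):
        # called once per "pair_coeff" argument, e.g. 'lj' for atom type 1
        self.pmap[ltype] = name

    def check_units(self, units):
        if units != self.units:
            raise Exception('Conflicting units: %s vs. %s' % (self.units, units))


class LJCutMelt(LAMMPSPairPotential):
    def __init__(self):
        super(LJCutMelt, self).__init__()
        self.units = 'lj'
        # pre-computed LJ coefficients for epsilon = sigma = 1.0:
        # (48*eps*sigma^12, 24*eps*sigma^6, 4*eps*sigma^12, 4*eps*sigma^6)
        self.coeff = {'lj': {'lj': (48.0, 24.0, 4.0, 4.0)}}

    def compute_force(self, rsq, itype, jtype):
        # force divided by r for the pair at squared distance rsq
        lj1, lj2, lj3, lj4 = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r2inv = 1.0 / rsq
        r6inv = r2inv * r2inv * r2inv
        return r6inv * (lj1 * r6inv - lj2) * r2inv

    def compute_energy(self, rsq, itype, jtype):
        # unshifted pair energy at squared distance rsq
        lj1, lj2, lj3, lj4 = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r2inv = 1.0 / rsq
        r6inv = r2inv * r2inv * r2inv
        return r6inv * (lj3 * r6inv - lj4)
```

With a class like this in a py_pot.py file on the PYTHONPATH, the input lines "pair_style python 2.5" and "pair_coeff * * py_pot.LJCutMelt lj" from the logs above select it; the trailing "lj" argument maps each LAMMPS atom type to the matching key of self.coeff.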


@ -0,0 +1,214 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 10 0 10 0 10
create_box 1 box
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 4000 atoms
mass * 1.0
velocity all create 3.0 87287
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.69 | 2.69 | 2.69 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733681 0 -2.2744931 -3.7033504
50 1.6754119 -4.7947589 0 -2.2822693 5.6615925
100 1.6503357 -4.756014 0 -2.2811293 5.8050524
150 1.6596605 -4.7699432 0 -2.2810749 5.7830138
200 1.6371874 -4.7365462 0 -2.2813789 5.9246674
250 1.6323462 -4.7292021 0 -2.2812949 5.9762238
Loop time of 5.65922 on 4 procs for 250 steps with 4000 atoms
Performance: 19083.895 tau/day, 44.176 timesteps/s
98.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.4529 | 5.5207 | 5.5575 | 1.7 | 97.55
Neigh | 0.023164 | 0.023376 | 0.023883 | 0.2 | 0.41
Comm | 0.073318 | 0.1099 | 0.17804 | 12.2 | 1.94
Output | 0.00023365 | 0.00026143 | 0.00030684 | 0.0 | 0.00
Modify | 0.0036483 | 0.0037143 | 0.003896 | 0.2 | 0.07
Other | | 0.001274 | | | 0.02
Nlocal: 1000 ave 1010 max 982 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 2703.75 ave 2713 max 2689 min
Histogram: 1 0 0 0 0 0 0 2 0 1
Neighs: 37915.5 ave 39239 max 36193 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Total # of neighbors = 151662
Ave neighs/atom = 37.9155
Neighbor list builds = 12
Dangerous builds not checked
write_data melt.data
write_restart melt.restart
clear
using 1 OpenMP thread(s) per MPI task
read_restart melt.restart
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
4000 atoms
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.815 | 2.816 | 2.816 Mbytes
Step Temp E_pair E_mol TotEng Press
250 1.6323462 -4.7292062 0 -2.2812991 5.9762168
300 1.6451788 -4.7488091 0 -2.2816578 5.8375485
350 1.6171909 -4.7064928 0 -2.2813129 6.0094235
400 1.6388136 -4.7387093 0 -2.2811035 5.9331084
450 1.6431295 -4.7452215 0 -2.2811435 5.8929898
500 1.643316 -4.7454222 0 -2.2810644 5.8454817
Loop time of 5.70169 on 4 procs for 250 steps with 4000 atoms
Performance: 18941.760 tau/day, 43.847 timesteps/s
98.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.3919 | 5.4905 | 5.6136 | 3.7 | 96.30
Neigh | 0.046791 | 0.047817 | 0.048795 | 0.3 | 0.84
Comm | 0.034221 | 0.1575 | 0.25635 | 22.1 | 2.76
Output | 0.00020409 | 0.00023448 | 0.00026131 | 0.0 | 0.00
Modify | 0.0035028 | 0.0035674 | 0.0036926 | 0.1 | 0.06
Other | | 0.002079 | | | 0.04
Nlocal: 1000 ave 1012 max 983 min
Histogram: 1 0 0 0 0 0 2 0 0 1
Nghost: 2699 ave 2706 max 2693 min
Histogram: 1 1 0 0 0 0 1 0 0 1
Neighs: 37930.8 ave 39292 max 36264 min
Histogram: 1 0 0 0 1 0 0 1 0 1
Total # of neighbors = 151723
Ave neighs/atom = 37.9308
Neighbor list builds = 25
Dangerous builds = 25
clear
using 1 OpenMP thread(s) per MPI task
units lj
atom_style atomic
read_data melt.data
orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
reading atoms ...
4000 atoms
reading velocities ...
4000 velocities
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.315 | 2.316 | 2.316 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.6323462 -4.7292062 0 -2.2812991 5.9762168
50 1.6450626 -4.7488948 0 -2.2819177 5.8370409
100 1.6169004 -4.7066969 0 -2.2819526 6.0082546
150 1.6384234 -4.7389689 0 -2.2819482 5.9315273
200 1.6428814 -4.7460743 0 -2.2823683 5.8888228
250 1.6432631 -4.7466603 0 -2.2823818 5.8398819
Loop time of 5.69568 on 4 procs for 250 steps with 4000 atoms
Performance: 18961.751 tau/day, 43.893 timesteps/s
98.3% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.4041 | 5.5245 | 5.6139 | 3.2 | 96.99
Neigh | 0.022658 | 0.022986 | 0.023398 | 0.2 | 0.40
Comm | 0.053521 | 0.14309 | 0.26385 | 20.2 | 2.51
Output | 0.00027037 | 0.00029504 | 0.00033665 | 0.0 | 0.01
Modify | 0.0035288 | 0.0035585 | 0.0035827 | 0.0 | 0.06
Other | | 0.001275 | | | 0.02
Nlocal: 1000 ave 1013 max 989 min
Histogram: 1 0 0 1 0 1 0 0 0 1
Nghost: 2695.5 ave 2706 max 2682 min
Histogram: 1 0 0 0 0 0 2 0 0 1
Neighs: 37927.2 ave 39002 max 36400 min
Histogram: 1 0 0 0 1 0 0 0 0 2
Total # of neighbors = 151709
Ave neighs/atom = 37.9273
Neighbor list builds = 12
Dangerous builds not checked
shell rm melt.data melt.restart
Total wall time: 0:00:17


@ -0,0 +1,122 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
1 by 1 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style hybrid/overlay coul/long 12.0 python 12.0
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff * * python py_pot.LJCutSPCE OW NULL
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
thermo 10
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394674
estimated relative force accuracy = 1.18855e-06
using double precision FFTs
3d grid and FFT values/proc = 103823 64000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair coul/long, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
(2) pair python, perpetual, skip from (1)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 41.05 | 41.05 | 41.05 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -16692.369 0 -16692.369 -1289.222
10 120.56861 -17769.719 0 -16691.902 -4082.7098
20 136.08014 -17884.591 0 -16668.109 -5140.7824
30 136.97316 -17874.351 0 -16649.887 -5351.3571
40 153.37285 -18001.493 0 -16630.424 -5227.0601
50 167.70414 -18105.435 0 -16606.252 -4473.2089
60 163.08253 -18037.29 0 -16579.422 -3295.8963
70 169.60395 -18067.078 0 -16550.912 -2615.7026
80 182.94811 -18155.978 0 -16520.523 -2393.3156
90 191.29902 -18197.887 0 -16487.779 -2242.7104
100 194.70949 -18195.021 0 -16454.425 -1955.2916
Loop time of 23.5385 on 1 procs for 100 steps with 4500 atoms
Performance: 0.367 ns/day, 65.385 hours/ns, 4.248 timesteps/s
98.9% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 21.642 | 21.642 | 21.642 | 0.0 | 91.94
Bond | 0.00021696 | 0.00021696 | 0.00021696 | 0.0 | 0.00
Kspace | 1.5436 | 1.5436 | 1.5436 | 0.0 | 6.56
Neigh | 0.25623 | 0.25623 | 0.25623 | 0.0 | 1.09
Comm | 0.024325 | 0.024325 | 0.024325 | 0.0 | 0.10
Output | 0.00064301 | 0.00064301 | 0.00064301 | 0.0 | 0.00
Modify | 0.065919 | 0.065919 | 0.065919 | 0.0 | 0.28
Other | | 0.005401 | | | 0.02
Nlocal: 4500 ave 4500 max 4500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 21216 ave 21216 max 21216 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 2.60176e+06 ave 2.60176e+06 max 2.60176e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2601762
Ave neighs/atom = 578.169
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
Total wall time: 0:00:24


@ -0,0 +1,122 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
units real
atom_style full
read_data data.spce
orthogonal box = (0.02645 0.02645 0.02641) to (35.5328 35.5328 35.4736)
2 by 2 by 1 MPI processor grid
reading atoms ...
4500 atoms
scanning bonds ...
2 = max bonds/atom
scanning angles ...
1 = max angles/atom
reading bonds ...
3000 bonds
reading angles ...
1500 angles
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
1 = max # of 1-4 neighbors
2 = max # of special neighbors
pair_style hybrid/overlay coul/long 12.0 python 12.0
kspace_style pppm 1.0e-6
pair_coeff * * coul/long
pair_coeff * * python py_pot.LJCutSPCE OW NULL
bond_style harmonic
angle_style harmonic
dihedral_style none
improper_style none
bond_coeff 1 1000.00 1.000
angle_coeff 1 100.0 109.47
special_bonds lj/coul 0.0 0.0 1.0
2 = max # of 1-2 neighbors
1 = max # of 1-3 neighbors
2 = max # of special neighbors
neighbor 2.0 bin
fix 1 all shake 0.0001 20 0 b 1 a 1
0 = # of size 2 clusters
0 = # of size 3 clusters
0 = # of size 4 clusters
1500 = # of frozen angles
fix 2 all nvt temp 300.0 300.0 100.0
thermo 10
run 100
PPPM initialization ...
WARNING: Using 12-bit tables for long-range coulomb (../kspace.cpp:321)
G vector (1/distance) = 0.279652
grid = 40 40 40
stencil order = 5
estimated absolute RMS force accuracy = 0.000394674
estimated relative force accuracy = 1.18855e-06
using double precision FFTs
3d grid and FFT values/proc = 34263 16000
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 14
ghost atom cutoff = 14
binsize = 7, bins = 6 6 6
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair coul/long, perpetual
attributes: half, newton on
pair build: half/bin/newton
stencil: half/bin/3d/newton
bin: standard
(2) pair python, perpetual, skip from (1)
attributes: half, newton on
pair build: skip
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 14.59 | 14.59 | 14.59 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -16692.369 0 -16692.369 -1289.222
10 120.56861 -17769.719 0 -16691.902 -4082.7098
20 136.08014 -17884.591 0 -16668.109 -5140.7824
30 136.97316 -17874.351 0 -16649.887 -5351.3571
40 153.37285 -18001.493 0 -16630.424 -5227.0601
50 167.70414 -18105.435 0 -16606.252 -4473.2089
60 163.08253 -18037.29 0 -16579.422 -3295.8963
70 169.60395 -18067.078 0 -16550.912 -2615.7026
80 182.94811 -18155.978 0 -16520.523 -2393.3156
90 191.29902 -18197.887 0 -16487.779 -2242.7104
100 194.70949 -18195.021 0 -16454.425 -1955.2916
Loop time of 6.4942 on 4 procs for 100 steps with 4500 atoms
Performance: 1.330 ns/day, 18.039 hours/ns, 15.398 timesteps/s
98.7% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.4084 | 5.572 | 5.8013 | 7.2 | 85.80
Bond | 0.00012994 | 0.0001421 | 0.00016356 | 0.0 | 0.00
Kspace | 0.52942 | 0.75773 | 0.92078 | 19.5 | 11.67
Neigh | 0.071055 | 0.07116 | 0.071278 | 0.0 | 1.10
Comm | 0.040311 | 0.041255 | 0.041817 | 0.3 | 0.64
Output | 0.00040603 | 0.00048071 | 0.00058675 | 0.0 | 0.01
Modify | 0.047507 | 0.047629 | 0.047772 | 0.1 | 0.73
Other | | 0.003771 | | | 0.06
Nlocal: 1125 ave 1154 max 1092 min
Histogram: 1 0 0 0 1 0 0 1 0 1
Nghost: 12256.2 ave 12296 max 12213 min
Histogram: 1 0 1 0 0 0 0 0 1 1
Neighs: 650440 ave 678828 max 626375 min
Histogram: 1 0 0 0 2 0 0 0 0 1
Total # of neighbors = 2601762
Ave neighs/atom = 578.169
Ave special neighs/atom = 2
Neighbor list builds = 3
Dangerous builds = 0
Total wall time: 0:00:06


@ -0,0 +1,99 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 10 0 10 0 10
create_box 1 box
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 4000 atoms
mass * 1.0
velocity all create 3.0 87287
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.01 2.5 lj_1_1.table LJ
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
pair_style table linear 2000
pair_coeff 1 1 lj_1_1.table LJ
WARNING: 2 of 2000 force values in table are inconsistent with -dE/dr.
Should only be flagged at inflection points (../pair_table.cpp:476)
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair table, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 3.184 | 3.184 | 3.184 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733629 0 -2.2744879 -3.7032813
50 1.6758731 -4.7953067 0 -2.2821255 5.6706553
100 1.6458118 -4.7490281 0 -2.2809276 5.8697466
150 1.632425 -4.7284533 0 -2.2804279 5.9595684
200 1.6631578 -4.7749889 0 -2.2808759 5.7365839
250 1.6277062 -4.7224727 0 -2.2815238 5.9572913
Loop time of 0.996739 on 1 procs for 250 steps with 4000 atoms
Performance: 108353.298 tau/day, 250.818 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.87985 | 0.87985 | 0.87985 | 0.0 | 88.27
Neigh | 0.08799 | 0.08799 | 0.08799 | 0.0 | 8.83
Comm | 0.012301 | 0.012301 | 0.012301 | 0.0 | 1.23
Output | 0.00013161 | 0.00013161 | 0.00013161 | 0.0 | 0.01
Modify | 0.013656 | 0.013656 | 0.013656 | 0.0 | 1.37
Other | | 0.002808 | | | 0.28
Nlocal: 4000 ave 4000 max 4000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 5500 ave 5500 max 5500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 151496 ave 151496 max 151496 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 151496
Ave neighs/atom = 37.874
Neighbor list builds = 12
Dangerous builds not checked
shell rm lj_1_1.table
Total wall time: 0:00:01


@ -0,0 +1,99 @@
LAMMPS (4 May 2017)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 10 0 10 0 10
create_box 1 box
Created orthogonal box = (0 0 0) to (16.796 16.796 16.796)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 4000 atoms
mass * 1.0
velocity all create 3.0 87287
pair_style python 2.5
pair_coeff * * py_pot.LJCutMelt lj
# generate tabulated potential from python variant
pair_write 1 1 2000 rsq 0.01 2.5 lj_1_1.table LJ
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair python, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
pair_style table linear 2000
pair_coeff 1 1 lj_1_1.table LJ
WARNING: 2 of 2000 force values in table are inconsistent with -dE/dr.
Should only be flagged at inflection points (../pair_table.cpp:476)
neighbor 0.3 bin
neigh_modify every 20 delay 0 check no
fix 1 all nve
thermo 50
run 250
Neighbor list info ...
update every 20 steps, delay 0 steps, check no
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 2.8
ghost atom cutoff = 2.8
binsize = 1.4, bins = 12 12 12
1 neighbor lists, perpetual/occasional/extra = 1 0 0
(1) pair table, perpetual
attributes: half, newton on
pair build: half/bin/atomonly/newton
stencil: half/bin/3d/newton
bin: standard
Per MPI rank memory allocation (min/avg/max) = 2.69 | 2.69 | 2.69 Mbytes
Step Temp E_pair E_mol TotEng Press
0 3 -6.7733629 0 -2.2744879 -3.7032813
50 1.675395 -4.7945736 0 -2.2821094 5.6620623
100 1.6503067 -4.7558145 0 -2.2809733 5.8055967
150 1.6595852 -4.7697199 0 -2.2809644 5.7837898
200 1.6371471 -4.7363942 0 -2.2812874 5.924977
250 1.6315623 -4.7278268 0 -2.2810951 5.9807196
Loop time of 0.291846 on 4 procs for 250 steps with 4000 atoms
Performance: 370058.286 tau/day, 856.616 timesteps/s
99.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.22586 | 0.23364 | 0.24085 | 1.3 | 80.06
Neigh | 0.022808 | 0.023235 | 0.023602 | 0.2 | 7.96
Comm | 0.022573 | 0.030065 | 0.038092 | 3.9 | 10.30
Output | 0.00013423 | 0.00014067 | 0.00015759 | 0.0 | 0.05
Modify | 0.0035079 | 0.0035501 | 0.0036008 | 0.1 | 1.22
Other | | 0.001211 | | | 0.42
Nlocal: 1000 ave 1010 max 981 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 2703 ave 2715 max 2688 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Neighs: 37915.2 ave 39191 max 36151 min
Histogram: 1 0 0 0 0 1 0 1 0 1
Total # of neighbors = 151661
Ave neighs/atom = 37.9153
Neighbor list builds = 12
Dangerous builds not checked
shell rm lj_1_1.table
Total wall time: 0:00:00

65
examples/python/py_pot.py Normal file

@ -0,0 +1,65 @@
from __future__ import print_function

class LAMMPSPairPotential(object):
    def __init__(self):
        self.pmap=dict()
        self.units='lj'

    def map_coeff(self,name,ltype):
        self.pmap[ltype]=name

    def check_units(self,units):
        if (units != self.units):
            raise Exception("Conflicting units: %s vs. %s" % (self.units,units))


class LJCutMelt(LAMMPSPairPotential):
    def __init__(self):
        super(LJCutMelt,self).__init__()
        # set coeffs: 48*eps*sig**12, 24*eps*sig**6,
        #              4*eps*sig**12,  4*eps*sig**6
        self.units = 'lj'
        self.coeff = {'lj'  : {'lj'  : (48.0,24.0,4.0,4.0)}}

    def compute_force(self,rsq,itype,jtype):
        coeff = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r2inv = 1.0/rsq
        r6inv = r2inv*r2inv*r2inv
        lj1 = coeff[0]
        lj2 = coeff[1]
        return (r6inv * (lj1*r6inv - lj2))*r2inv

    def compute_energy(self,rsq,itype,jtype):
        coeff = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r2inv = 1.0/rsq
        r6inv = r2inv*r2inv*r2inv
        lj3 = coeff[2]
        lj4 = coeff[3]
        return (r6inv * (lj3*r6inv - lj4))


class LJCutSPCE(LAMMPSPairPotential):
    def __init__(self):
        super(LJCutSPCE,self).__init__()
        self.units='real'
        # SPCE oxygen LJ parameters in real units
        eps=0.15535
        sig=3.166
        self.coeff = {'OW'  : {'OW'  : (48.0*eps*sig**12,24.0*eps*sig**6,
                                         4.0*eps*sig**12, 4.0*eps*sig**6),
                               'HW'  : (0.0,0.0, 0.0,0.0)},
                      'HW'  : {'OW'  : (0.0,0.0, 0.0,0.0),
                               'HW'  : (0.0,0.0, 0.0,0.0)}}

    def compute_force(self,rsq,itype,jtype):
        coeff = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r2inv = 1.0/rsq
        r6inv = r2inv*r2inv*r2inv
        lj1 = coeff[0]
        lj2 = coeff[1]
        return (r6inv * (lj1*r6inv - lj2))*r2inv

    def compute_energy(self,rsq,itype,jtype):
        coeff = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r2inv = 1.0/rsq
        r6inv = r2inv*r2inv*r2inv
        lj3 = coeff[2]
        lj4 = coeff[3]
        return (r6inv * (lj3*r6inv - lj4))
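The classes in py_pot.py follow a simple contract: subclass LAMMPSPairPotential, fill self.coeff, and provide compute_force() and compute_energy(), both taking the squared distance and the two mapped atom types; compute_force() returns F(r)/r so the caller can multiply by the distance components. As a sketch of how a further potential could be added to such a file, here is a hypothetical Morse-style class (MorseDemo and its coefficients are illustrative only, not part of this PR; the base class is repeated so the sketch is self-contained):

```python
# Hypothetical example following the py_pot.py pattern above.
import math

class LAMMPSPairPotential(object):
    def __init__(self):
        self.pmap = dict()
        self.units = 'lj'
    def map_coeff(self, name, ltype):
        self.pmap[ltype] = name
    def check_units(self, units):
        if units != self.units:
            raise Exception("Conflicting units: %s vs. %s" % (self.units, units))

class MorseDemo(LAMMPSPairPotential):
    """E(r) = D0*(exp(-2*alpha*(r-r0)) - 2*exp(-alpha*(r-r0)))"""
    def __init__(self):
        super(MorseDemo, self).__init__()
        self.units = 'lj'
        # (D0, alpha, r0) for the single illustrative type name 'mo'
        self.coeff = {'mo': {'mo': (1.0, 2.0, 1.2)}}

    def compute_force(self, rsq, itype, jtype):
        d0, alpha, r0 = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r = math.sqrt(rsq)
        ralpha = math.exp(-alpha * (r - r0))
        # return F(r)/r: the caller scales by delx, dely, delz
        return 2.0 * d0 * alpha * (ralpha * ralpha - ralpha) / r

    def compute_energy(self, rsq, itype, jtype):
        d0, alpha, r0 = self.coeff[self.pmap[itype]][self.pmap[jtype]]
        r = math.sqrt(rsq)
        ralpha = math.exp(-alpha * (r - r0))
        return d0 * (ralpha * ralpha - 2.0 * ralpha)
```

If such a class were saved into py_pot.py, it would be selected the same way as the shipped examples, e.g. `pair_coeff * * py_pot.MorseDemo mo`.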

3
src/.gitignore vendored

@ -852,8 +852,11 @@
/prd.h
/python_impl.cpp
/python_impl.h
/python_compat.h
/fix_python.cpp
/fix_python.h
/pair_python.cpp
/pair_python.h
/reader_molfile.cpp
/reader_molfile.h
/reaxc_allocate.cpp


@ -11,6 +11,11 @@
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
/* ----------------------------------------------------------------------
Contributing author: Richard Berger (Temple U)
------------------------------------------------------------------------- */
#include <Python.h>
#include <stdio.h>
#include <string.h>
#include "fix_python.h"
@ -20,17 +25,11 @@
#include "respa.h"
#include "error.h"
#include "python.h"
#include "python_compat.h"
using namespace LAMMPS_NS;
using namespace FixConst;
// Wrap API changes between Python 2 and 3 using macros
#if PY_MAJOR_VERSION == 2
#define PY_VOID_POINTER(X) PyCObject_FromVoidPtr((void *) X, NULL)
#elif PY_MAJOR_VERSION == 3
#define PY_VOID_POINTER(X) PyCapsule_New((void *) X, NULL, NULL)
#endif
/* ---------------------------------------------------------------------- */
FixPython::FixPython(LAMMPS *lmp, int narg, char **arg) :
@ -87,7 +86,7 @@ void FixPython::end_of_step()
PyObject * ptr = PY_VOID_POINTER(lmp);
PyObject * arglist = Py_BuildValue("(O)", ptr);
PyObject * result = PyEval_CallObject(pFunc, arglist);
PyObject * result = PyEval_CallObject((PyObject*)pFunc, arglist);
Py_DECREF(arglist);
PyGILState_Release(gstate);
@ -104,7 +103,7 @@ void FixPython::post_force(int vflag)
PyObject * ptr = PY_VOID_POINTER(lmp);
PyObject * arglist = Py_BuildValue("(Oi)", ptr, vflag);
PyObject * result = PyEval_CallObject(pFunc, arglist);
PyObject * result = PyEval_CallObject((PyObject*)pFunc, arglist);
Py_DECREF(arglist);
PyGILState_Release(gstate);


@ -21,7 +21,6 @@ FixStyle(python,FixPython)
#define LMP_FIX_PYTHON_H
#include "fix.h"
#include <Python.h>
namespace LAMMPS_NS {
@ -34,7 +33,7 @@ class FixPython : public Fix {
virtual void post_force(int);
private:
PyObject * pFunc;
void * pFunc;
int selected_callback;
};

483
src/PYTHON/pair_python.cpp Normal file

@ -0,0 +1,483 @@
/* ----------------------------------------------------------------------
LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
http://lammps.sandia.gov, Sandia National Laboratories
Steve Plimpton, sjplimp@sandia.gov
Copyright (2003) Sandia Corporation. Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software. This software is distributed under
the GNU General Public License.
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
/* ----------------------------------------------------------------------
Contributing authors: Axel Kohlmeyer and Richard Berger (Temple U)
------------------------------------------------------------------------- */
#include <Python.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "pair_python.h"
#include "atom.h"
#include "comm.h"
#include "force.h"
#include "memory.h"
#include "update.h"
#include "neigh_list.h"
#include "python.h"
#include "error.h"
#include "python_compat.h"
using namespace LAMMPS_NS;
/* ---------------------------------------------------------------------- */
PairPython::PairPython(LAMMPS *lmp) : Pair(lmp) {
respa_enable = 0;
single_enable = 1;
writedata = 0;
restartinfo = 0;
one_coeff = 1;
reinitflag = 0;
py_potential = NULL;
skip_types = NULL;
python->init();
// add current directory to PYTHONPATH
PyObject * py_path = PySys_GetObject((char *)"path");
PyList_Append(py_path, PY_STRING_FROM_STRING("."));
// if LAMMPS_POTENTIALS environment variable is set, add it to PYTHONPATH as well
const char * potentials_path = getenv("LAMMPS_POTENTIALS");
if (potentials_path != NULL) {
PyList_Append(py_path, PY_STRING_FROM_STRING(potentials_path));
}
}
/* ---------------------------------------------------------------------- */
PairPython::~PairPython()
{
if (py_potential) Py_DECREF((PyObject*) py_potential);
delete[] skip_types;
if (allocated) {
memory->destroy(setflag);
memory->destroy(cutsq);
}
}
/* ---------------------------------------------------------------------- */
void PairPython::compute(int eflag, int vflag)
{
int i,j,ii,jj,inum,jnum,itype,jtype;
double xtmp,ytmp,ztmp,delx,dely,delz,evdwl,fpair;
double rsq,factor_lj;
int *ilist,*jlist,*numneigh,**firstneigh;
evdwl = 0.0;
if (eflag || vflag) ev_setup(eflag,vflag);
else evflag = vflag_fdotr = 0;
double **x = atom->x;
double **f = atom->f;
int *type = atom->type;
int nlocal = atom->nlocal;
double *special_lj = force->special_lj;
int newton_pair = force->newton_pair;
inum = list->inum;
ilist = list->ilist;
numneigh = list->numneigh;
firstneigh = list->firstneigh;
// prepare access to compute_force and compute_energy functions
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject *py_pair_instance = (PyObject *) py_potential;
PyObject *py_compute_force = PyObject_GetAttrString(py_pair_instance,"compute_force");
if (!py_compute_force) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find 'compute_force' method");
}
if (!PyCallable_Check(py_compute_force)) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Python 'compute_force' is not callable");
}
PyObject *py_compute_energy = PyObject_GetAttrString(py_pair_instance,"compute_energy");
if (!py_compute_energy) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find 'compute_energy' method");
}
if (!PyCallable_Check(py_compute_energy)) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Python 'compute_energy' is not callable");
}
PyObject *py_compute_args = PyTuple_New(3);
if (!py_compute_args) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not create tuple for 'compute' function arguments");
}
PyObject *py_rsq, *py_itype, *py_jtype, *py_value;
// loop over neighbors of my atoms
for (ii = 0; ii < inum; ii++) {
i = ilist[ii];
xtmp = x[i][0];
ytmp = x[i][1];
ztmp = x[i][2];
itype = type[i];
jlist = firstneigh[i];
jnum = numneigh[i];
py_itype = PY_INT_FROM_LONG(itype);
PyTuple_SetItem(py_compute_args,1,py_itype);
for (jj = 0; jj < jnum; jj++) {
j = jlist[jj];
factor_lj = special_lj[sbmask(j)];
j &= NEIGHMASK;
delx = xtmp - x[j][0];
dely = ytmp - x[j][1];
delz = ztmp - x[j][2];
rsq = delx*delx + dely*dely + delz*delz;
jtype = type[j];
// with hybrid/overlay we might get called for skipped types
if (skip_types[itype] || skip_types[jtype]) continue;
py_jtype = PY_INT_FROM_LONG(jtype);
PyTuple_SetItem(py_compute_args,2,py_jtype);
if (rsq < cutsq[itype][jtype]) {
py_rsq = PyFloat_FromDouble(rsq);
PyTuple_SetItem(py_compute_args,0,py_rsq);
py_value = PyObject_CallObject(py_compute_force,py_compute_args);
if (!py_value) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Calling 'compute_force' function failed");
}
fpair = factor_lj*PyFloat_AsDouble(py_value);
f[i][0] += delx*fpair;
f[i][1] += dely*fpair;
f[i][2] += delz*fpair;
if (newton_pair || j < nlocal) {
f[j][0] -= delx*fpair;
f[j][1] -= dely*fpair;
f[j][2] -= delz*fpair;
}
if (eflag) {
py_value = PyObject_CallObject(py_compute_energy,py_compute_args);
evdwl = factor_lj*PyFloat_AsDouble(py_value);
} else evdwl = 0.0;
if (evflag) ev_tally(i,j,nlocal,newton_pair,
evdwl,0.0,fpair,delx,dely,delz);
}
}
}
Py_DECREF(py_compute_args);
PyGILState_Release(gstate);
if (vflag_fdotr) virial_fdotr_compute();
}
/* ----------------------------------------------------------------------
allocate all arrays
------------------------------------------------------------------------- */
void PairPython::allocate()
{
allocated = 1;
int n = atom->ntypes;
memory->create(setflag,n+1,n+1,"pair:setflag");
for (int i = 1; i <= n; i++)
for (int j = i; j <= n; j++)
setflag[i][j] = 0;
memory->create(cutsq,n+1,n+1,"pair:cutsq");
}
/* ----------------------------------------------------------------------
global settings
------------------------------------------------------------------------- */
void PairPython::settings(int narg, char **arg)
{
if (narg != 1)
error->all(FLERR,"Illegal pair_style command");
cut_global = force->numeric(FLERR,arg[0]);
}
/* ----------------------------------------------------------------------
set coeffs for all type pairs
------------------------------------------------------------------------- */
void PairPython::coeff(int narg, char **arg)
{
const int ntypes = atom->ntypes;
if (narg != 3+ntypes)
error->all(FLERR,"Incorrect args for pair coefficients");
if (!allocated) allocate();
// make sure I,J args are * *
if (strcmp(arg[0],"*") != 0 || strcmp(arg[1],"*") != 0)
error->all(FLERR,"Incorrect args for pair coefficients");
// check if python potential file exists and source it
char * full_cls_name = arg[2];
char * lastpos = strrchr(full_cls_name, '.');
if (lastpos == NULL) {
error->all(FLERR,"Python pair style requires fully qualified class name");
}
size_t module_name_length = strlen(full_cls_name) - strlen(lastpos);
size_t cls_name_length = strlen(lastpos)-1;
char * module_name = new char[module_name_length+1];
char * cls_name = new char[cls_name_length+1];
strncpy(module_name, full_cls_name, module_name_length);
module_name[module_name_length] = 0;
strcpy(cls_name, lastpos+1);
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject * pModule = PyImport_ImportModule(module_name);
if (!pModule) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Loading python pair style module failed");
}
// create LAMMPS atom type to potential file type mapping in python class
// by calling 'lammps_pair_style.map_coeff(name,type)'
PyObject *py_pair_type = PyObject_GetAttrString(pModule, cls_name);
if (!py_pair_type) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find pair style class in module");
}
delete [] module_name;
delete [] cls_name;
PyObject * py_pair_instance = PyObject_CallObject(py_pair_type, NULL);
if (!py_pair_instance) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not instantiate pair style class");
}
py_potential = (void *) py_pair_instance;
PyObject *py_check_units = PyObject_GetAttrString(py_pair_instance,"check_units");
if (!py_check_units) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find 'check_units' method");
}
if (!PyCallable_Check(py_check_units)) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Python 'check_units' is not callable");
}
PyObject *py_units_args = PyTuple_New(1);
if (!py_units_args) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not create tuple for 'check_units' function arguments");
}
PyObject *py_name = PY_STRING_FROM_STRING(update->unit_style);
PyTuple_SetItem(py_units_args,0,py_name);
PyObject *py_value = PyObject_CallObject(py_check_units,py_units_args);
if (!py_value) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Calling 'check_units' function failed");
}
Py_DECREF(py_units_args);
PyObject *py_map_coeff = PyObject_GetAttrString(py_pair_instance,"map_coeff");
if (!py_map_coeff) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find 'map_coeff' method");
}
if (!PyCallable_Check(py_map_coeff)) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Python 'map_coeff' is not callable");
}
PyObject *py_map_args = PyTuple_New(2);
if (!py_map_args) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not create tuple for 'map_coeff' function arguments");
}
delete[] skip_types;
skip_types = new int[ntypes+1];
skip_types[0] = 1;
for (int i = 1; i <= ntypes ; i++) {
if (strcmp(arg[2+i],"NULL") == 0) {
skip_types[i] = 1;
continue;
} else skip_types[i] = 0;
PyObject *py_type = PY_INT_FROM_LONG(i);
py_name = PY_STRING_FROM_STRING(arg[2+i]);
PyTuple_SetItem(py_map_args,0,py_name);
PyTuple_SetItem(py_map_args,1,py_type);
py_value = PyObject_CallObject(py_map_coeff,py_map_args);
if (!py_value) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Calling 'map_coeff' function failed");
}
for (int j = i; j <= ntypes ; j++) {
setflag[i][j] = 1;
cutsq[i][j] = cut_global*cut_global;
}
}
Py_DECREF(py_map_args);
PyGILState_Release(gstate);
}
/* ---------------------------------------------------------------------- */
double PairPython::init_one(int, int)
{
return cut_global;
}
/* ---------------------------------------------------------------------- */
double PairPython::single(int i, int j, int itype, int jtype, double rsq,
double factor_coul, double factor_lj,
double &fforce)
{
// with hybrid/overlay we might get called for skipped types
if (skip_types[itype] || skip_types[jtype]) {
fforce = 0.0;
return 0.0;
}
// prepare access to compute_force and compute_energy functions
PyGILState_STATE gstate = PyGILState_Ensure();
PyObject *py_pair_instance = (PyObject *) py_potential;
PyObject *py_compute_force
= PyObject_GetAttrString(py_pair_instance,"compute_force");
if (!py_compute_force) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find 'compute_force' method");
}
if (!PyCallable_Check(py_compute_force)) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Python 'compute_force' is not callable");
}
PyObject *py_compute_energy
= PyObject_GetAttrString(py_pair_instance,"compute_energy");
if (!py_compute_energy) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not find 'compute_energy' method");
}
if (!PyCallable_Check(py_compute_energy)) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Python 'compute_energy' is not callable");
}
PyObject *py_rsq, *py_itype, *py_jtype, *py_value;
PyObject *py_compute_args = PyTuple_New(3);
if (!py_compute_args) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Could not create tuple for 'compute' function arguments");
}
py_rsq = PyFloat_FromDouble(rsq);
PyTuple_SetItem(py_compute_args,0,py_rsq);
py_itype = PY_INT_FROM_LONG(itype);
PyTuple_SetItem(py_compute_args,1,py_itype);
py_jtype = PY_INT_FROM_LONG(jtype);
PyTuple_SetItem(py_compute_args,2,py_jtype);
py_value = PyObject_CallObject(py_compute_force,py_compute_args);
if (!py_value) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Calling 'compute_force' function failed");
}
fforce = factor_lj*PyFloat_AsDouble(py_value);
Py_DECREF(py_value);
py_value = PyObject_CallObject(py_compute_energy,py_compute_args);
if (!py_value) {
PyErr_Print();
PyErr_Clear();
PyGILState_Release(gstate);
error->all(FLERR,"Calling 'compute_energy' function failed");
}
double evdwl = factor_lj*PyFloat_AsDouble(py_value);
Py_DECREF(py_value);
Py_DECREF(py_compute_force);
Py_DECREF(py_compute_energy);
Py_DECREF(py_compute_args);
PyGILState_Release(gstate);
return evdwl;
}
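The `single()` method above builds an argument tuple of `(rsq, itype, jtype)` and calls the `compute_force` and `compute_energy` methods of the loaded python potential class. A minimal sketch of a matching class is below, modeled on the `py_pot.LJCutMelt` example from the doc page; the parameter names and the plain-class layout here are illustrative assumptions, not the example file shipped with LAMMPS:

```python
# Sketch of a potential class usable with pair_style python.
# Method signatures follow the argument tuple built in
# PairPython::single(): (rsq, itype, jtype).  The epsilon/sigma
# attributes are illustrative assumptions for a single-type LJ system.

class LJCutMelt(object):
    def __init__(self):
        self.units = 'lj'
        self.atom_types = ['lj']
        self.epsilon = 1.0
        self.sigma = 1.0

    def compute_force(self, rsq, itype, jtype):
        # return F/r, i.e. the scalar the caller multiplies
        # with the distance vector components
        sr6 = (self.sigma * self.sigma / rsq) ** 3
        return 24.0 * self.epsilon * (2.0 * sr6 * sr6 - sr6) / rsq

    def compute_energy(self, rsq, itype, jtype):
        sr6 = (self.sigma * self.sigma / rsq) ** 3
        return 4.0 * self.epsilon * (sr6 * sr6 - sr6)
```

At the LJ minimum (rsq = 2^(1/3) for sigma = 1) this returns an energy of -epsilon and a vanishing force, which is a quick sanity check for any class plugged into this pair style.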

src/PYTHON/pair_python.h Normal file

@ -0,0 +1,77 @@
/* -*- c++ -*- ----------------------------------------------------------
LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
http://lammps.sandia.gov, Sandia National Laboratories
Steve Plimpton, sjplimp@sandia.gov
Copyright (2003) Sandia Corporation. Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software. This software is distributed under
the GNU General Public License.
See the README file in the top-level LAMMPS directory.
Pair style python provides a way to define simple pairwise additive
potential functions as python script code, which is loaded into
LAMMPS from a python file that must contain specific python class
definitions.
------------------------------------------------------------------------- */
#ifdef PAIR_CLASS
PairStyle(python,PairPython)
#else
#ifndef LMP_PAIR_PYTHON_H
#define LMP_PAIR_PYTHON_H
#include "pair.h"
namespace LAMMPS_NS {
class PairPython : public Pair {
public:
PairPython(class LAMMPS *);
virtual ~PairPython();
virtual void compute(int, int);
void settings(int, char **);
void coeff(int, char **);
double init_one(int, int);
double single(int, int, int, int, double, double, double, double &);
protected:
double cut_global;
void * py_potential;
int * skip_types;
virtual void allocate();
};
}
#endif
#endif
/* ERROR/WARNING messages:
E: Illegal ... command
Self-explanatory. Check the input script syntax and compare to the
documentation for the command. You can use -echo screen as a
command-line option when running LAMMPS to see the offending line.
E: Incorrect args for pair coefficients
Self-explanatory. Check the input script or data file.
E: Pair cutoff < Respa interior cutoff
One or more pairwise cutoffs are too short to use with the specified
rRESPA cutoffs.
*/


@ -0,0 +1,33 @@
/* -*- c++ -*- ----------------------------------------------------------
LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
http://lammps.sandia.gov, Sandia National Laboratories
Steve Plimpton, sjplimp@sandia.gov
Copyright (2003) Sandia Corporation. Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software. This software is distributed under
the GNU General Public License.
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
#ifndef LMP_PYTHON_COMPAT_H
#define LMP_PYTHON_COMPAT_H
// Wrap API changes between Python 2 and 3 using macros
#if PY_MAJOR_VERSION == 2
#define PY_INT_FROM_LONG(X) PyInt_FromLong(X)
#define PY_INT_AS_LONG(X) PyInt_AsLong(X)
#define PY_STRING_FROM_STRING(X) PyString_FromString(X)
#define PY_VOID_POINTER(X) PyCObject_FromVoidPtr((void *) X, NULL)
#define PY_STRING_AS_STRING(X) PyString_AsString(X)
#elif PY_MAJOR_VERSION == 3
#define PY_INT_FROM_LONG(X) PyLong_FromLong(X)
#define PY_INT_AS_LONG(X) PyLong_AsLong(X)
#define PY_STRING_FROM_STRING(X) PyUnicode_FromString(X)
#define PY_VOID_POINTER(X) PyCapsule_New((void *) X, NULL, NULL)
#define PY_STRING_AS_STRING(X) PyUnicode_AsUTF8(X)
#endif
#endif


@ -11,6 +11,10 @@
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
/* ----------------------------------------------------------------------
Contributing author: Richard Berger and Axel Kohlmeyer (Temple U)
------------------------------------------------------------------------- */
#include <Python.h>
#include "python.h"
#include "force.h"
@ -18,6 +22,7 @@
#include "variable.h"
#include "memory.h"
#include "error.h"

+#include "python_compat.h"
using namespace LAMMPS_NS;
@ -25,21 +30,6 @@ enum{NONE,INT,DOUBLE,STRING,PTR};
#define VALUELENGTH 64 // also in variable.cpp
-// Wrap API changes between Python 2 and 3 using macros
-#if PY_MAJOR_VERSION == 2
-#define PY_INT_FROM_LONG(X) PyInt_FromLong(X)
-#define PY_INT_AS_LONG(X) PyInt_AsLong(X)
-#define PY_STRING_FROM_STRING(X) PyString_FromString(X)
-#define PY_VOID_POINTER(X) PyCObject_FromVoidPtr((void *) X, NULL)
-#define PY_STRING_AS_STRING(X) PyString_AsString(X)
-#elif PY_MAJOR_VERSION == 3
-#define PY_INT_FROM_LONG(X) PyLong_FromLong(X)
-#define PY_INT_AS_LONG(X) PyLong_AsLong(X)
-#define PY_STRING_FROM_STRING(X) PyUnicode_FromString(X)
-#define PY_VOID_POINTER(X) PyCapsule_New((void *) X, NULL, NULL)
-#define PY_STRING_AS_STRING(X) PyUnicode_AsUTF8(X)
-#endif
/* ---------------------------------------------------------------------- */
@ -51,7 +41,7 @@ PythonImpl::PythonImpl(LAMMPS *lmp) : Pointers(lmp)
pfuncs = NULL;
// one-time initialization of Python interpreter
-// pymain stores pointer to main module
+// pyMain stores pointer to main module
external_interpreter = Py_IsInitialized();
Py_Initialize();
@ -63,7 +53,6 @@ PythonImpl::PythonImpl(LAMMPS *lmp) : Pointers(lmp)
if (!pModule) error->all(FLERR,"Could not initialize embedded Python");
pyMain = (void *) pModule;
PyGILState_Release(gstate);
}