Merge branch 'develop' into dpd-exclusions

This commit is contained in:
Axel Kohlmeyer
2022-12-22 16:33:36 -05:00
271 changed files with 123436 additions and 5923 deletions

View File

@ -54,4 +54,4 @@ jobs:
- name: Run Unit Tests
working-directory: build
shell: bash
run: ctest -V -C Release
run: ctest -V -C Release -E FixTimestep:python_move_nve

View File

@ -124,7 +124,7 @@ set(KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/kokkos.cpp
if(PKG_KSPACE)
list(APPEND KOKKOS_PKG_SOURCES ${KOKKOS_PKG_SOURCES_DIR}/fft3d_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/gridcomm_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/grid3d_kokkos.cpp
${KOKKOS_PKG_SOURCES_DIR}/remap_kokkos.cpp)
if(Kokkos_ENABLE_CUDA)
if(NOT (FFT STREQUAL "KISS"))

View File

@ -45,7 +45,7 @@ SPHINXEXTRA = -j $(shell $(PYTHON) -c 'import multiprocessing;print(multiprocess
# we only want to use explicitly listed files.
DOXYFILES = $(shell sed -n -e 's/\#.*$$//' -e '/^ *INPUT \+=/,/^[A-Z_]\+ \+=/p' doxygen/Doxyfile.in | sed -e 's/@LAMMPS_SOURCE_DIR@/..\/src/g' -e 's/\\//g' -e 's/ \+/ /' -e 's/[A-Z_]\+ \+= *\(YES\|NO\|\)//')
.PHONY: help clean-all clean clean-spelling epub mobi html pdf spelling anchor_check style_check char_check xmlgen fasthtml
.PHONY: help clean-all clean clean-spelling epub mobi html pdf spelling anchor_check style_check char_check role_check xmlgen fasthtml
# ------------------------------------------
@ -96,6 +96,7 @@ html: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK) $(MATHJAX)
rst_anchor_check src/*.rst ;\
python $(BUILDDIR)/utils/check-packages.py -s ../src -d src ;\
env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\
env LC_ALL=C grep -n ' :[a-z]\+`' $(RSTDIR)/*.rst ;\
python $(BUILDDIR)/utils/check-styles.py -s ../src -d src ;\
echo "############################################" ;\
deactivate ;\
@ -175,6 +176,7 @@ pdf: xmlgen $(VENV) $(SPHINXCONFIG)/conf.py $(ANCHORCHECK)
rst_anchor_check src/*.rst ;\
python utils/check-packages.py -s ../src -d src ;\
env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst ;\
env LC_ALL=C grep -n ' :[a-z]\+`' $(RSTDIR)/*.rst ;\
python utils/check-styles.py -s ../src -d src ;\
echo "############################################" ;\
deactivate ;\
@ -220,6 +222,9 @@ package_check : $(VENV)
char_check :
@( env LC_ALL=C grep -n '[^ -~]' $(RSTDIR)/*.rst && exit 1 || : )
role_check :
@( env LC_ALL=C grep -n ' :[a-z]\+`' $(RSTDIR)/*.rst && exit 1 || : )
xmlgen : doxygen/xml/index.xml
doxygen/Doxyfile: doxygen/Doxyfile.in

View File

@ -1,7 +1,7 @@
.TH LAMMPS "1" "3 November 2022" "2022-11-3"
.TH LAMMPS "1" "22 December 2022" "2022-12-22"
.SH NAME
.B LAMMPS
\- Molecular Dynamics Simulator. Version 3 November 2022
\- Molecular Dynamics Simulator. Version 22 December 2022
.SH SYNOPSIS
.B lmp

View File

@ -107,6 +107,7 @@ KOKKOS, o = OPENMP, t = OPT.
* :doc:`pressure/uef <compute_pressure_uef>`
* :doc:`property/atom <compute_property_atom>`
* :doc:`property/chunk <compute_property_chunk>`
* :doc:`property/grid <compute_property_grid>`
* :doc:`property/local <compute_property_local>`
* :doc:`ptm/atom <compute_ptm_atom>`
* :doc:`rdf <compute_rdf>`

View File

@ -36,7 +36,8 @@ An alphabetic list of all LAMMPS :doc:`dump <dump>` commands.
* :doc:`custom/mpiio <dump>`
* :doc:`custom/zstd <dump>`
* :doc:`dcd <dump>`
* :doc:`deprecated <dump>`
* :doc:`grid <dump>`
* :doc:`grid/vtk <dump>`
* :doc:`h5md <dump_h5md>`
* :doc:`image <dump_image>`
* :doc:`local <dump>`

View File

@ -38,6 +38,7 @@ OPT.
* :doc:`ave/chunk <fix_ave_chunk>`
* :doc:`ave/correlate <fix_ave_correlate>`
* :doc:`ave/correlate/long <fix_ave_correlate_long>`
* :doc:`ave/grid <fix_ave_grid>`
* :doc:`ave/histo <fix_ave_histo>`
* :doc:`ave/histo/weight <fix_ave_histo>`
* :doc:`ave/time <fix_ave_time>`

View File

@ -23,7 +23,7 @@ Please refer to the :doc:`chunk HOWTO <Howto_chunk>` section for an overview.
Box command
-----------
.. deprecated:: TBD
.. deprecated:: 22Dec2022
The *box* command has been removed and the LAMMPS code changed so it won't
be needed. If present, LAMMPS will ignore the command and print a warning.
@ -31,7 +31,7 @@ be needed. If present, LAMMPS will ignore the command and print a warning.
Reset_ids, reset_atom_ids, reset_mol_ids commands
-------------------------------------------------
.. deprecated:: TBD
.. deprecated:: 22Dec2022
The *reset_ids*, *reset_atom_ids*, and *reset_mol_ids* commands have
been folded into the :doc:`reset_atoms <reset_atoms>` command. If

View File

@ -23,3 +23,4 @@ of time and requests from the LAMMPS user community.
Classes
Developer_platform
Developer_utils
Developer_grid

846
doc/src/Developer_grid.rst Normal file
View File

@ -0,0 +1,846 @@
Use of distributed grids within style classes
---------------------------------------------
.. versionadded:: 22Dec2022
The LAMMPS source code includes two classes which facilitate the
creation and use of distributed grids. These are the Grid2d and
Grid3d classes in the src/grid2d.cpp,h and src/grid3d.cpp,h files
respectively. As the names imply, they are used for 2d or 3d
simulations, as defined by the :doc:`dimension <dimension>` command.
The :doc:`Howto_grid <Howto_grid>` page gives an overview of how
distributed grids are defined from a user perspective, lists LAMMPS
commands which use them, and explains how grid cell data is referenced
from an input script. Please read that page first as it motivates the
coding details discussed here.
This doc page is for users who wish to write new styles (input script
commands) which use distributed grids. There are a variety of
material models and analysis methods which use atoms (or
coarse-grained particles) and grids in tandem.
A *distributed* grid means each processor owns a subset of the grid
cells. In LAMMPS, the subset for each processor will be a sub-block
of grid cells with low and high index bounds in each dimension of the
grid. The union of the sub-blocks across all processors is the global
grid.
More specifically, a grid point is defined for each cell (by default
the center point), and a processor owns a grid cell if its point is
within the processor's spatial sub-domain. The union of processor
sub-domains is the global simulation box. If a grid point is on the
boundary of two sub-domains, the lower processor owns the grid cell. A
processor may also store copies of ghost cells which surround its
owned cells.
----------
Style commands
^^^^^^^^^^^^^^
Style commands which can define and use distributed grids include the
:doc:`compute <compute>`, :doc:`fix <fix>`, :doc:`pair <pair_style>`,
and :doc:`kspace <kspace_style>` styles. If you wish grid cell data
to persist across timesteps, then use a fix. If you wish grid cell
data to be accessible by other commands, then use a fix or compute.
Currently in LAMMPS, the :doc:`pair_style amoeba <pair_amoeba>`,
:doc:`kspace_style pppm <kspace_style>`, and :doc:`kspace_style msm
<kspace_style>` commands use distributed grids but do not require
either of these capabilities; they thus create and use distributed
grids internally. Note that a pair style which needs grid cell data
to persist could be coded to work in tandem with a fix style which
provides that capability.
The *size* of a grid is specified by the number of grid cells in each
dimension of the simulation domain. In any dimension the size can be
any value >= 1. Thus a 10x10x1 grid for a 3d simulation is
effectively a 2d grid, where each grid cell spans the entire
z-dimension. A 1x100x1 grid for a 3d simulation is effectively a 1d
grid, where grid cells are a series of thin xz slabs in the
y-dimension. It is even possible to define a 1x1x1 3d grid, though it
may be inefficient to use it in a computational sense.
Note that the choice of grid size is independent of the number of
processors or their layout in a grid of processor sub-domains which
overlays the simulation domain. Depending on the distributed grid
size, a single processor may own many thousands of grid cells or none at all.
A command can define multiple grids, each of a different size. Each
grid is an instantiation of the Grid2d or Grid3d class.
The command also defines what data it will store for each grid it
creates and it allocates the multi-dimensional array(s) needed to
store the data. No grid cell data is stored within the Grid2d or
Grid3d classes.
If a single value per grid cell is needed, the data array will have
the same dimension as the grid, i.e. a 2d array for a 2d grid,
likewise for 3d. If multiple values per grid cell are needed, the
data array will have one more dimension than the grid, i.e. a 3d array
for a 2d grid, or 4d array for a 3d grid. A command can choose to
define multiple data arrays for each grid it defines.
----------
Grid data allocation and access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The simplest way for a command to allocate and access grid cell data
is to use the *create_offset()* methods provided by the Memory class.
Arguments for these methods can be values returned by the
*setup_grid()* method (described below), which define the extent of
the grid cells (owned+ghost) the processor stores. These 4 methods
allocate memory for 2d (first two) and 3d (second two) grid data. The
two methods that end in "_one" allocate an array which stores a single
value per grid cell. The two that end in "_multi" allocate an array
which stores *Nvalues* per grid cell.
.. code-block:: c++
// single value per cell for a 2d grid = 2d array
memory->create2d_offset(data2d_one, nylo_out, nyhi_out,
nxlo_out, nxhi_out, "data2d_one");
// nvalues per cell for a 2d grid = 3d array
memory->create3d_offset_last(data2d_multi, nylo_out, nyhi_out,
nxlo_out, nxhi_out, nvalues, "data2d_multi");
// single value per cell for a 3d grid = 3d array
memory->create3d_offset(data3d_one, nzlo_out, nzhi_out, nylo_out,
nyhi_out, nxlo_out, nxhi_out, "data3d_one");
// nvalues per cell for a 3d grid = 4d array
memory->create4d_offset_last(data3d_multi, nzlo_out, nzhi_out, nylo_out,
nyhi_out, nxlo_out, nxhi_out, nvalues,
"data3d_multi");
Note that these multi-dimensional arrays are allocated as contiguous
chunks of memory where the x-index of the grid varies fastest, then y,
and the z-index slowest. For multiple values per grid cell, the
Nvalues are contiguous, so their index varies even faster than the
x-index.
The key point is that the "offset" methods create arrays which are
indexed by the range of indices which are the bounds of the sub-block
of the global grid owned by this processor. This means loops like
these can be written in the caller code to loop over owned grid cells,
where the "i" loop bounds are the range of owned grid cells for the
processor. These are the bounds returned by the *setup_grid()*
method:
.. code-block:: c++
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
data2d_one[iy][ix] = 0.0;
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
for (int m = 0; m < nvalues; m++)
data2d_multi[iy][ix][m] = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
data3d_one[iz][iy][ix] = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
for (int iy = iylo; iy <= iyhi; iy++)
for (int ix = ixlo; ix <= ixhi; ix++)
for (int m = 0; m < nvalues; m++)
data3d_multi[iz][iy][ix][m] = 0.0;
Simply replacing the "i" bounds with "o" bounds, also returned by the
*setup_grid()* method, would alter this code to loop over owned+ghost
cells (the entire allocated grid).
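For example, a minimal sketch of zeroing every allocated cell of a 3d
single-value grid, using the owned+ghost "_out" bounds and the
*data3d_one* array from the allocation example above:
.. code-block:: c++
// loop over owned + ghost cells; the _out values are the "o" bounds
// returned by setup_grid() and used in the create3d_offset() call above
for (int iz = nzlo_out; iz <= nzhi_out; iz++)
  for (int iy = nylo_out; iy <= nyhi_out; iy++)
    for (int ix = nxlo_out; ix <= nxhi_out; ix++)
      data3d_one[iz][iy][ix] = 0.0;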
----------
Grid class constructors
^^^^^^^^^^^^^^^^^^^^^^^
The following sub-sections describe the public methods of the Grid3d
class which a style command can invoke. The Grid2d methods are
similar; simply remove arguments which refer to the z-dimension.
There are 2 constructors which can be used. They differ in the extra
i/o xyz lo/hi arguments:
.. code-block:: c++
Grid3d(class LAMMPS *lmp, MPI_Comm gcomm, int gnx, int gny, int gnz)
Grid3d(class LAMMPS *lmp, MPI_Comm gcomm, int gnx, int gny, int gnz,
int ixlo, int ixhi, int iylo, int iyhi, int izlo, int izhi,
int oxlo, int oxhi, int oylo, int oyhi, int ozlo, int ozhi)
Both constructors take the LAMMPS instance pointer and a communicator
over which the grid will be distributed. Typically this is the
*world* communicator the LAMMPS instance is using. An exception is the
:doc:`kspace_style msm <kspace_style>` command, which creates a series of
grids, each of a different size, partitioned across different
sub-communicators of processors. Both constructors are also passed
the global grid size: *gnx* by *gny* by *gnz*.
The first constructor is used when the caller wants the Grid class to
partition the global grid across processors; the Grid class defines
which grid cells each processor owns and also which it stores as ghost
cells. A subsequent call to *setup_grid()*, discussed below, returns
this info to the caller.
The second constructor allows the caller to define the extent of owned
and ghost cells, and pass them to the Grid class. The 6 arguments
which start with "i" are the inclusive lower and upper index bounds of
the owned (inner) grid cells this processor owns in each of the 3
dimensions within the global grid. Owned grid cells are indexed from
0 to N-1 in each dimension.
The 6 arguments which start with "o" are the inclusive bounds of the
owned+ghost (outer) grid cells it stores. If the ghost cells are on
the other side of a periodic boundary, then these indices may be < 0
or >= N in any dimension, so that oxlo <= ixlo and oxhi >= ixhi is
always the case.
For example, if Nx = 100, then a processor might pass ixlo=50,
ixhi=60, oxlo=48, oxhi=62 to the Grid class. Or ixlo=0, ixhi=10,
oxlo=-2, oxhi=13. If a processor owns no grid cells in a dimension,
then the ihi value should be specified as one less than the ilo value.
Note that the only reason to use the second constructor is if the
logic for assigning ghost cells is too complex for the Grid class to
compute using the various set() methods described next. Currently
only the kspace_style pppm/electrode and kspace_style msm commands use
the second constructor.
----------
Grid class set methods
^^^^^^^^^^^^^^^^^^^^^^
The following methods affect how the Grid class computes which owned
and ghost cells are assigned to each processor. *Set_shift_grid()* is
the only method which influences owned cell assignment; all the rest
influence ghost cell assignment. These methods are only used with the
first constructor; they are ignored if the second constructor is used.
These methods must be called before the *setup_grid()* method is
invoked, because they influence its operation.
.. code-block:: c++
void set_shift_grid(double shift);
void set_distance(double distance);
void set_stencil_atom(int lo, int hi);
void set_shift_atom(double shift_lo, double shift_hi);
void set_stencil_grid(int lo, int hi);
void set_zfactor(double factor);
Processors own a grid cell if a point within the grid cell is inside
the processor's sub-domain. By default this is the center point of the
grid cell. The *set_shift_grid()* method can change this. The *shift*
argument is a value from 0.0 to 1.0 (inclusive) which is the offset of
the point within the grid cell in each dimension. The default is 0.5
for the center of the cell. A value of 0.0 is the lower left corner
point; a value of 1.0 is the upper right corner point. There is
typically no need to change the default as it is optimal for
minimizing the number of ghost cells needed.
If a processor maps its particles to grid cells, it needs to allow for
its particles being outside its sub-domain between reneighboring. The
*distance* argument of the *set_distance()* method sets the furthest
distance outside a processor's sub-domain which a particle can move.
Typically this is half the neighbor skin distance, assuming
reneighboring is done appropriately. This distance is used in
determining how many ghost cells a processor needs to store to enable
its particles to be mapped to grid cells. The default value is 0.0.
Some commands, like the :doc:`kspace_style pppm <kspace_style>`
command, map values (charge in the case of PPPM) to a stencil of grid
cells beyond the grid cell the particle is in. The stencil extent may
be different in the low and high directions. The *set_stencil_atom()*
method defines the maximum values of those 2 extents, assumed to be
the same in each of the 3 dimensions. Both the lo and hi values are
specified as positive integers. The default values are both 0.
Some commands, like the :doc:`kspace_style pppm <kspace_style>`
command, shift the position of an atom when mapping it to a grid cell,
based on the size of the stencil used to map values to the grid
(charge in the case of PPPM). The lo and hi arguments of the
*set_shift_atom()* method are the minimum shift in the low direction
and the maximum shift in the high direction, assumed to be the same in
each of the 3 dimensions. The shifts should be fractions of a grid
cell size with values between 0.0 and 1.0 inclusive. The default
values are both 0.0. See the src/pppm.cpp file for examples of these
lo/hi values for regular and staggered grids.
Some commands, like the :doc:`fix ttm/grid <fix_ttm>` command, perform
finite difference kinds of operations on the grid, to diffuse electron
heat in the case of the two-temperature model (TTM). This operation
uses ghost grid values beyond the owned grid values the processor
updates. The *set_stencil_grid()* method defines the extent of this
stencil in both directions, assumed to be the same in each of the 3
dimensions. Both the lo and hi values are specified as positive
integers. The default values are both 0.
The kspace_style pppm commands allow a grid to be defined which
overlays a volume which extends beyond the simulation box in the z
dimension. This is for the purpose of modeling a 2d-periodic slab
(non-periodic in z) as if it were a larger 3d periodic system,
extended (with empty space) in the z dimension. The
:doc:`kspace_modify slab <kspace_modify>` command is used to specify
the ratio of the larger volume to the simulation volume; a volume
ratio of ~3 is typical. For this kind of model, the PPPM caller sets
the global grid size *gnz* ~3x larger than it would be otherwise.
This same ratio is passed by the PPPM caller as the *factor* argument
to the Grid class via the *set_zfactor()* method (*set_yfactor()* for
2d grids). The Grid class will then assign ownership of the 1/3 of
grid cells that overlay the simulation box to the processors which
also overlay the simulation box. The remaining 2/3 of the grid cells
are assigned to processors whose sub-domains are adjacent to the upper
z boundary of the simulation box.
----------
Grid class setup_grid method
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The *setup_grid()* method is called after the first constructor
(above) to partition the grid across processors, which determines
which grid cells each processor owns. It also calculates how many
ghost grid cells in each dimension and each direction each processor
needs to store.
Note that this method is NOT called if the second constructor above is
used. In that case, the caller assigns owned and ghost cells to each
processor.
Also note that this method must be invoked after any of the set() methods have
been used, since they can influence the assignment of owned and ghost
cells.
.. code-block:: c++
void setup_grid(int &ixlo, int &ixhi, int &iylo, int &iyhi, int &izlo, int &izhi,
int &oxlo, int &oxhi, int &oylo, int &oyhi, int &ozlo, int &ozhi)
The 6 return arguments which start with "i" are the inclusive lower
and upper index bounds of the owned (inner) grid cells this processor
owns in each of the 3 dimensions within the global grid. Owned grid
cells are indexed from 0 to N-1 in each dimension.
The 6 return arguments which start with "o" are the inclusive bounds of
the owned+ghost (outer) grid cells this processor stores. If the ghost
cells are on the other side of a periodic boundary, then these indices
may be < 0 or >= N in any dimension, so that oxlo <= ixlo and
oxhi >= ixhi is always the case.
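To illustrate the typical call sequence, here is a hedged sketch of a
fix which uses the first constructor, lets the Grid class partition a
3d grid, and allocates one value per cell. The member names (*grid3d*,
*data3d_one*, the index bound variables, and the global grid size
*nxgrid*, *nygrid*, *nzgrid*) are hypothetical:
.. code-block:: c++
// create the grid and let Grid3d partition it across processors
grid3d = new Grid3d(lmp, world, nxgrid, nygrid, nzgrid);
// optional: account for particles moving outside the sub-domain
// between reneighboring steps
grid3d->set_distance(0.5 * neighbor->skin);
// partition the grid; returns owned ("i") and owned+ghost ("o") bounds
grid3d->setup_grid(nxlo_in, nxhi_in, nylo_in, nyhi_in, nzlo_in, nzhi_in,
                   nxlo_out, nxhi_out, nylo_out, nyhi_out, nzlo_out, nzhi_out);
// allocate one value per owned+ghost grid cell
memory->create3d_offset(data3d_one, nzlo_out, nzhi_out, nylo_out, nyhi_out,
                        nxlo_out, nxhi_out, "fix:data3d_one");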
----------
More grid class set methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following 2 methods can be used to override settings made by the
constructors above. If used, they must be called before the
*setup_comm()* method is invoked, since it uses the settings that
these methods override. In LAMMPS these methods are called by the
:doc:`kspace_style msm <kspace_style>` command for the grids it
instantiates using the 2nd constructor above.
.. code-block:: c++
void set_proc_neighs(int pxlo, int pxhi, int pylo, int pyhi, int pzlo, int pzhi)
void set_caller_grid(int fxlo, int fxhi, int fylo, int fyhi, int fzlo, int fzhi)
The *set_proc_neighs()* method sets the processor IDs of the 6
neighboring processors for each processor. Normally these would match
the processor grid neighbors which LAMMPS creates to overlay the
simulation box (the default). However, MSM excludes non-participating
processors from coarse grid communication when fewer processors are
used. This method allows MSM to override the default values.
The *set_caller_grid()* method specifies the size of the data arrays the
caller allocates. Normally these would match the extent of the ghost
grid cells (the default). However the MSM caller allocates a larger
data array (more ghost cells) for its finest-level grid, for use in
other operations besides owned/ghost cell communication. This method
allows MSM to override the default values.
----------
Grid class get methods
^^^^^^^^^^^^^^^^^^^^^^
The following methods allow the caller to query the settings for a
specific grid, whether it created the grid or another command created
it.
.. code-block:: c++
void get_size(int &nxgrid, int &nygrid, int &nzgrid);
void get_bounds_owned(int &xlo, int &xhi, int &ylo, int &yhi, int &zlo, int &zhi)
void get_bounds_ghost(int &xlo, int &xhi, int &ylo, int &yhi, int &zlo, int &zhi)
The *get_size()* method returns the size of the global grid in each dimension.
The *get_bounds_owned()* method returns the inclusive index bounds of
the grid cells this processor owns. The values range from 0 to N-1 in
each dimension. These values are the same as the "i" values returned
by *setup_grid()*.
The *get_bounds_ghost()* method returns the inclusive index bounds of
the owned+ghost grid cells this processor stores. The owned cell
indices range from 0 to N-1, so these indices may be less than 0 or
greater than or equal to N in each dimension. These values are the
same as the "o" values returned by *setup_grid()*.
----------
Grid class owned/ghost communication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If needed by the command, the following methods set up and perform
communication of grid data to/from neighboring processors. The
*forward_comm()* method sends owned grid cell data to the
corresponding ghost grid cells on other processors. The
*reverse_comm()* method sends ghost grid cell data to the
corresponding owned grid cells on another processor. The caller can
choose to sum ghost grid cell data to the owned grid cell or simply
copy it.
.. code-block:: c++
void setup_comm(int &nbuf1, int &nbuf2)
void forward_comm(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype);
void reverse_comm(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype)
int ghost_adjacent();
The *setup_comm()* method must be called one time before performing
*forward* or *reverse* communication (multiple times if needed). It
returns two integers, which should be used to allocate two buffers.
The *nbuf1* and *nbuf2* values are the number of grid cells whose data
will be stored in two buffers by the Grid class when *forward* or
*reverse* communication is performed. The caller should thus allocate
them to a size large enough to hold all the data used in any single
forward or reverse communication operation it performs. Note that the
caller may allocate and communicate multiple data arrays for a grid it
instantiates. This size includes the bytes needed for the data type
of the grid data it stores, e.g. double precision values.
The *forward_comm()* and *reverse_comm()* methods send grid cell data
from owned to ghost cells, or ghost to owned cells, respectively, as
described above. The *caller* argument should be one of these values
-- Grid3d::COMPUTE, Grid3d::FIX, Grid3d::KSPACE, Grid3d::PAIR --
depending on the style of the caller class. The *ptr* argument is the
"this" pointer to the caller class. These 2 arguments are used to
call back to pack()/unpack() functions in the caller class, as
explained below.
The *which* argument is a flag the caller can set which is passed to
the caller's pack()/unpack() methods. This allows a single callback
method to pack/unpack data for several different flavors of
forward/reverse communication, e.g. operating on different grids or
grid data.
The *nper* argument is the number of values per grid cell to be
communicated. The *nbyte* argument is the number of bytes per value,
e.g. 8 for double-precision values. The *buf1* and *buf2* arguments
are the two allocated buffers described above. So long as they are
allocated for the maximum size communication, they can be re-used for
any *forward_comm()/reverse_comm()* call. The *datatype* argument is
the MPI_Datatype setting, which should match the buffer allocation and
the *nbyte* argument. E.g. MPI_DOUBLE for buffers storing double
precision values.
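As an illustration, here is a hedged sketch of the setup and a forward
communication call for a fix which stores a single double per 3d grid
cell. The buffer names are hypothetical, and for multiple values per
grid cell the buffers would need to be allocated *nvalues* times larger:
.. code-block:: c++
// one-time setup; nbuf1/nbuf2 = number of grid cells whose data the
// Grid class will place in each buffer
int nbuf1, nbuf2;
grid3d->setup_comm(nbuf1, nbuf2);
// allocate buffers for one double value per grid cell
double *gc_buf1, *gc_buf2;
memory->create(gc_buf1, nbuf1, "fix:gc_buf1");
memory->create(gc_buf2, nbuf2, "fix:gc_buf2");
// whenever needed: send owned cell values to corresponding ghost cells
grid3d->forward_comm(Grid3d::FIX, this, 0, 1, sizeof(double),
                     gc_buf1, gc_buf2, MPI_DOUBLE);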
To use the *forward_comm()* method, the caller must provide two
callback functions; likewise for use of the *reverse_comm()* method.
These are the 4 functions; their arguments are all the same.
.. code-block:: c++
void pack_forward_grid(int which, void *vbuf, int nlist, int *list);
void unpack_forward_grid(int which, void *vbuf, int nlist, int *list);
void pack_reverse_grid(int which, void *vbuf, int nlist, int *list);
void unpack_reverse_grid(int which, void *vbuf, int nlist, int *list);
The *which* argument is set to the *which* value of the
*forward_comm()* or *reverse_comm()* calls. It allows the pack/unpack
function to select what data values to pack/unpack. *Vbuf* is the
buffer to pack/unpack the data to/from. It is a void pointer so that
the caller can cast it to whatever data type it chooses, e.g. double
precision values. *Nlist* is the number of grid cells to pack/unpack
and *list* is a vector (nlist in length) of offsets to where the data
for each grid cell resides in the caller's data arrays, which is best
illustrated with an example from the src/EXTRA-FIX/fix_ttm_grid.cpp
class which stores the scalar electron temperature for a 3d system in a
3d grid (one value per grid cell):
.. code-block:: c++
void FixTTMGrid::pack_forward_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *src = &T_electron[nzlo_out][nylo_out][nxlo_out];
  for (int i = 0; i < nlist; i++) buf[i] = src[list[i]];
}
In this case, the *which* argument is not used, *vbuf* points to a
buffer of doubles, and the electron temperature is stored by the
FixTTMGrid class in a 3d array of owned+ghost cells called T_electron.
That array is allocated by the *memory->create3d_offset()* method
described above so that the first grid cell it stores is indexed as
T_electron[nzlo_out][nylo_out][nxlo_out]. The *nlist* values in
*list* are integer offsets from that first grid cell. Setting *src*
to the address of the first cell allows those offsets to be used to
access the temperatures to pack into the buffer.
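The matching unpack callback on the receiving processor does the
inverse: it copies buffer values into the listed ghost cells. A hedged
sketch which follows the same pattern (not a verbatim copy of the
source file) looks like this:
.. code-block:: c++
void FixTTMGrid::unpack_forward_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *dest = &T_electron[nzlo_out][nylo_out][nxlo_out];
  for (int i = 0; i < nlist; i++) dest[list[i]] = buf[i];
}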
Here is a similar portion of code from the src/fix_ave_grid.cpp class
which can store two kinds of data, a scalar count of atoms in a grid
cell, and one or more grid-cell-averaged atom properties. The code
from its *unpack_reverse_grid()* function for 2d grids and multiple
per-atom properties per grid cell (*nvalues*) is shown here:
.. code-block:: c++
void FixAveGrid::unpack_reverse_grid(int /*which*/, void *vbuf, int nlist, int *list)
{
  auto buf = (double *) vbuf;
  double *count, *data, *values;
  int i, j, m;
  count = &count2d[nylo_out][nxlo_out];
  data = &array2d[nylo_out][nxlo_out][0];
  m = 0;
  for (i = 0; i < nlist; i++) {
    count[list[i]] += buf[m++];
    values = &data[nvalues*list[i]];
    for (j = 0; j < nvalues; j++)
      values[j] += buf[m++];
  }
}
Both the count and the multiple values per grid cell are communicated
in *vbuf*. Note that *data* is a pointer to the first value of the
first grid cell, and *values* points to the first value of a particular
grid cell within *data*, at an offset calculated by multiplying
*nvalues* by *list[i]*. Finally, because this is reverse
communication, the communicated buffer values are summed into the
caller's values.
The *ghost_adjacent()* method returns a 1 if every processor can
perform the necessary owned/ghost communication with only its nearest
neighbor processors (4 in 2d, 6 in 3d). It returns a 0 if any
processor's ghost cells extend further than nearest neighbor
processors.
This can be checked by callers who have the option to change the
global grid size to ensure more efficient nearest-neighbor-only
communication if they wish. In this case, they instantiate a grid of
a given size (resolution), then invoke *setup_comm()* followed by
*ghost_adjacent()*. If the ghost cells are not adjacent, they destroy
the grid instance and start over with a higher-resolution grid.
Several of the :doc:`kspace_style pppm <kspace_style>` command
variants have this option.
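A hedged sketch of that retry logic is shown below. The *create_grid()*
and *destroy_grid()* helpers are hypothetical wrappers around the
constructor, *setup_grid()*, allocation, and cleanup calls shown
earlier; the actual PPPM variants choose their new grid sizes
differently:
.. code-block:: c++
int adjacent = 0;
while (!adjacent) {
  create_grid(nxgrid, nygrid, nzgrid);    // instantiate Grid3d + setup_grid()
  grid3d->setup_comm(nbuf1, nbuf2);
  adjacent = grid3d->ghost_adjacent();
  if (!adjacent) {
    destroy_grid();                       // delete the Grid3d instance
    nxgrid++; nygrid++; nzgrid++;         // retry with a higher-resolution grid
  }
}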
----------
Grid class remap methods for load balancing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following methods are used when a load-balancing operation,
triggered by the :doc:`balance <balance>` or :doc:`fix balance
<fix_balance>` commands, changes the partitioning of the simulation
domain into processor sub-domains.
In order to work with load-balancing, any style command (compute, fix,
pair, or kspace style) which allocates a grid and stores per-grid data
should define a *reset_grid()* method; it takes no arguments. It will
be called by the two balance commands after they have reset processor
sub-domains and migrated atoms (particles) to new owning processors.
The *reset_grid()* method will typically perform some or all of the
following operations. See the src/fix_ave_grid.cpp and
src/EXTRA-FIX/fix_ttm_grid.cpp files for examples of *reset_grid()*
methods, as well as the *pack_remap_grid()* and *unpack_remap_grid()*
functions.
First, the *reset_grid()* method can instantiate new grid(s) of the
same global size, then call *setup_grid()* to partition them via the
new processor sub-domains. At this point, it can invoke the
*identical()* method which compares the owned and ghost grid cell
index bounds between two grids, the old grid passed as a pointer
argument, and the new grid whose *identical()* method is being called.
It returns 1 if the indices match on all processors, otherwise 0. If
they all match, then the new grids can be deleted; the command can
continue to use the old grids.
If not, then the command should allocate new grid data array(s) which
depend on the new partitioning. If the command does not need to
persist its grid data from the old partitioning to the new one, then
the command can simply delete the old data array(s) and grid
instance(s). It can then return.
If the grid data does need to persist, then the data for each grid
needs to be "remapped" from the old grid partitioning to the new grid
partitioning. The *setup_remap()* and *remap()* methods are used for
that purpose.
.. code-block:: c++
int identical(Grid3d *old);
void setup_remap(Grid3d *old, int &nremap_buf1, int &nremap_buf2)
void remap(int caller, void *ptr, int which, int nper, int nbyte,
void *buf1, void *buf2, MPI_Datatype datatype)
The arguments to these methods are identical to those for
the *setup_comm()* and *forward_comm()* or *reverse_comm()* methods.
However, the returned *nremap_buf1* and *nremap_buf2* values will be
different from the *nbuf1* and *nbuf2* values. They should be used to
allocate two different remap buffers, separate from the owned/ghost
communication buffers.
To use the *remap()* method, the caller must provide two
callback functions:
.. code-block:: c++
void pack_remap_grid(int which, void *vbuf, int nlist, int *list);
void unpack_remap_grid(int which, void *vbuf, int nlist, int *list);
Their arguments are identical to those for the *pack_forward_grid()*
and *unpack_forward_grid()* callback functions (or the reverse
variants) discussed above. Normally, both these methods pack/unpack
all the data arrays for a given grid. The *which* argument of the
*remap()* method sets the *which* value for the pack/unpack functions.
If the command instantiates multiple grids (of different sizes), it
can be used within the pack/unpack methods to select which grid's data
is being remapped.
Note that the *pack_remap_grid()* function must copy values from the
OLD grid data arrays into the *vbuf* buffer. The *unpack_remap_grid()*
function must copy values from the *vbuf* buffer into the NEW grid
data arrays.
After the remap operation for grid cell data has been performed, the
*reset_grid()* method can deallocate the two remap buffers it created,
and can then exit.
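Putting these pieces together, here is a hedged outline of a
*reset_grid()* method for a hypothetical fix (FixMyGrid) which stores
one persistent double per 3d grid cell. The *grid3d_prev* and
*data3d_prev* members are assumed to exist so that *pack_remap_grid()*
can read the old array while *unpack_remap_grid()* writes the new one:
.. code-block:: c++
void FixMyGrid::reset_grid()
{
  // keep the old grid, old data, and old "out" offsets for cleanup
  grid3d_prev = grid3d;
  data3d_prev = data3d;
  int ozlo = nzlo_out, oylo = nylo_out, oxlo = nxlo_out;
  // new grid of the same global size, partitioned by the new sub-domains
  // (any set() calls made for the original grid should be repeated here)
  grid3d = new Grid3d(lmp, world, nxgrid, nygrid, nzgrid);
  grid3d->setup_grid(nxlo_in, nxhi_in, nylo_in, nyhi_in, nzlo_in, nzhi_in,
                     nxlo_out, nxhi_out, nylo_out, nyhi_out, nzlo_out, nzhi_out);
  // if the partitioning is unchanged, keep the old grid and data
  if (grid3d->identical(grid3d_prev)) {
    delete grid3d;
    grid3d = grid3d_prev;
    data3d = data3d_prev;
    return;
  }
  // allocate the data array for the new partitioning
  memory->create3d_offset(data3d, nzlo_out, nzhi_out, nylo_out, nyhi_out,
                          nxlo_out, nxhi_out, "fix:data3d");
  // remap persistent values from the old partitioning to the new one
  int nremap_buf1, nremap_buf2;
  double *remap_buf1, *remap_buf2;
  grid3d->setup_remap(grid3d_prev, nremap_buf1, nremap_buf2);
  memory->create(remap_buf1, nremap_buf1, "fix:remap_buf1");
  memory->create(remap_buf2, nremap_buf2, "fix:remap_buf2");
  grid3d->remap(Grid3d::FIX, this, 0, 1, sizeof(double),
                remap_buf1, remap_buf2, MPI_DOUBLE);
  memory->destroy(remap_buf1);
  memory->destroy(remap_buf2);
  // discard the old data array and old grid instance
  memory->destroy3d_offset(data3d_prev, ozlo, oylo, oxlo);
  delete grid3d_prev;
}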
----------
Grid class I/O methods
^^^^^^^^^^^^^^^^^^^^^^
There are two I/O methods in the Grid classes which can be used to
read and write grid cell data to files. The caller can decide on the
precise format of each file, e.g. whether header lines are prepended
or comment lines are allowed. Fundamentally, the file should contain
one line per grid cell for the entire global grid. Each line should
contain identifying info as to which grid cell it is, e.g. a unique
grid cell ID or the ix,iy,iz indices of the cell within a 3d grid.
The line should also contain one or more data values which are stored
within the grid data arrays created by the command.
For grid cell IDs, the LAMMPS convention is that the IDs run from 1 to
N, where N = Nx * Ny for 2d grids and N = Nx * Ny * Nz for 3d grids.
The x-index of the grid cell varies fastest, then y, and the z-index
varies slowest. So for a 10x10x10 grid the cell IDs from 901-1000
would be in the top xy layer of the z dimension.
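Expressed as code, for a 3d grid of size nx by ny by nz with 0-based
cell indices ix, iy, iz, the convention is:
.. code-block:: c++
// x varies fastest, z slowest; IDs run from 1 to nx*ny*nz
bigint id = (bigint) iz*ny*nx + (bigint) iy*nx + ix + 1;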
The *read_file()* method does something simple. It reads a chunk of
consecutive lines from the file and passes them back to the caller to
process. The caller provides an *unpack_read_grid()* function for this
purpose. The function checks the grid cell ID or indices and only
stores grid cell data for the grid cells it owns.
The *write_file()* method does something slightly more complex. Each
processor packs the data for its owned grid cells into a buffer. The
caller provides a *pack_write_grid()* function for this purpose. The
*write_file()* method then loops over all processors and each sends
its buffer one at a time to processor 0, along with the 3d (or 2d)
index bounds of its grid cell data within the global grid. Processor
0 calls back to the *unpack_write_grid()* function provided by the
caller with the buffer. The function writes one line per grid cell to
the file.
See the src/EXTRA-FIX/fix_ttm_grid.cpp file for examples of how both
these methods are used to read/write electron temperature values
from/to a file, as well as for implementations of the pack/unpack
functions described below.
Here are the details of the two I/O methods and the 3 callback
functions. See the src/fix_ave_grid.cpp file for examples of all of
them.
.. code-block:: c++
void read_file(int caller, void *ptr, FILE *fp, int nchunk, int maxline)
void write_file(int caller, void *ptr, int which,
int nper, int nbyte, MPI_Datatype datatype)
The *caller* argument in both methods should be one of these values --
Grid3d::COMPUTE, Grid3d::FIX, Grid3d::KSPACE, Grid3d::PAIR --
depending on the style of the caller class. The *ptr* argument in
both methods is the "this" pointer to the caller class. These 2
arguments are used to call back to pack()/unpack() functions in the
caller class, as explained below.
For the *read_file()* method, the *fp* argument is a file pointer to
the file to be read from, opened on processor 0 by the caller.
*Nchunk* is the number of lines to read per chunk, and *maxline* is
the maximum number of characters per line. The Grid class will
allocate a buffer for storing chunks of lines based on these values.
For the *write_file()* method, the *which* argument is a flag the
caller can set which is passed back to the caller's pack()/unpack()
methods. If the command instantiates multiple grids (of different
sizes), this flag can be used within the pack/unpack methods to select
which grid's data is being written out (presumably to different
files). The *nper* argument is the number of values per grid cell to
be written out. The *nbyte* argument is the number of bytes per
value, e.g. 8 for double-precision values. The *datatype* argument is
the MPI_Datatype setting, which should match the *nbyte* argument.
E.g. MPI_DOUBLE for double precision values.
To use the *read_file()* method, the caller must provide one callback
function. To use the *write_file()* method, it provides two callback
functions:
.. code-block:: c++
int unpack_read_grid(int nlines, char *buffer)
void pack_write_grid(int which, void *vbuf)
void unpack_write_grid(int which, void *vbuf, int *bounds)
For *unpack_read_grid()* the *nlines* argument is the number of lines
of character data read from the file and contained in *buffer*. The
lines each include a newline character at the end. When the function
processes the lines, it may choose to skip some of them (header or
comment lines). It returns an integer count of the number of grid
cell lines it processed. This enables the Grid class *read_file()*
method to know when it has read the correct number of lines.
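Here is a hedged sketch of such a callback for the hypothetical
FixMyGrid fix (3d grid, one value per cell). The assumed line format
"ix iy iz value" with 1-based indices is for illustration only; it is
not the format of any particular LAMMPS command:
.. code-block:: c++
int FixMyGrid::unpack_read_grid(int nlines, char *buffer)
{
  int nprocessed = 0;
  char *line = strtok(buffer, "\n");
  for (int i = 0; i < nlines && line; i++) {
    int ix, iy, iz;
    double value;
    // count only grid-cell lines; skip header or comment lines
    if (sscanf(line, "%d %d %d %lg", &ix, &iy, &iz, &value) == 4) {
      ix--; iy--; iz--;   // convert the assumed 1-based file indices to 0-based
      // store values only for grid cells this processor owns
      if (ix >= nxlo_in && ix <= nxhi_in && iy >= nylo_in && iy <= nyhi_in &&
          iz >= nzlo_in && iz <= nzhi_in)
        data3d_one[iz][iy][ix] = value;
      nprocessed++;
    }
    line = strtok(nullptr, "\n");
  }
  return nprocessed;
}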
For *pack_write_grid()* and *unpack_write_grid()*, the *vbuf* argument
is the buffer to pack/unpack data to/from. It is a void pointer so
that the caller can cast it to whatever data type it chooses,
e.g. double precision values. The *which* argument is set to the
*which* value of the *write_file()* method. It allows the caller to
choose which grid data to operate on.
For *unpack_write_grid()*, the *bounds* argument is a vector of 4 or 6
integer grid indices (4 for 2d, 6 for 3d). They are the
xlo,xhi,ylo,yhi,zlo,zhi index bounds of the portion of the global grid
which the *vbuf* holds owned grid cell data values for. The caller
should loop over the values in *vbuf* with a double loop (2d) or
triple loop (3d), similar to the code snippets listed above. The
x-index varies fastest, then y, and the z-index slowest. If there are
multiple values per grid cell, the index for those values varies
fastest of all. The caller can add the x,y,z indices of the grid cell
(or the corresponding grid cell ID) to the data value(s) written as
one line to the output file.
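A hedged sketch of the *unpack_write_grid()* callback for the same
hypothetical fix, again with one value per 3d grid cell and the assumed
"ix iy iz value" output format (*fpout* is a hypothetical file pointer
opened by the caller on processor 0):
.. code-block:: c++
void FixMyGrid::unpack_write_grid(int /*which*/, void *vbuf, int *bounds)
{
  auto buf = (double *) vbuf;
  int xlo = bounds[0], xhi = bounds[1];
  int ylo = bounds[2], yhi = bounds[3];
  int zlo = bounds[4], zhi = bounds[5];
  // x varies fastest in vbuf, then y, then z
  int m = 0;
  for (int iz = zlo; iz <= zhi; iz++)
    for (int iy = ylo; iy <= yhi; iy++)
      for (int ix = xlo; ix <= xhi; ix++)
        fprintf(fpout, "%d %d %d %g\n", ix+1, iy+1, iz+1, buf[m++]);
}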
----------
Style class grid access methods
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A style command can enable its grid cell data to be accessible from
other commands. For example, :doc:`fix ave/grid <fix_ave_grid>` or
:doc:`dump grid <dump>` or :doc:`dump grid/vtk <dump>`. Those
commands access the grid cell data by using a *grid reference* in
their input script syntax, as described on the :doc:`Howto_grid
<Howto_grid>` doc page. They look like this:
* c_ID:gname:dname
* c_ID:gname:dname[I]
* f_ID:gname:dname
* f_ID:gname:dname[I]
Each grid a command instantiates has a unique *gname*, defined by the
command. Likewise each grid cell data structure (scalar or vector)
associated with a grid has a unique *dname*, also defined by the
command.
To provide access to its grid cell data, a style command needs to
implement the following 4 methods:
.. code-block:: c++
int get_grid_by_name(const std::string &name, int &dim);
void *get_grid_by_index(int index);
int get_griddata_by_name(int igrid, const std::string &name, int &ncol);
void *get_griddata_by_index(int index);
Currently only computes and fixes can implement these methods. If
they do so, the compute or fix should also set the variable
*pergrid_flag* to 1. See any of the compute or fix commands which set
"pergrid_flag = 1" for examples of how these 4 functions can be
implemented.
The *get_grid_by_name()* method takes a grid name as input and returns
two values. The *dim* argument is returned as 2 or 3 for the
dimensionality of the grid. The function return is a grid index from
0 to G-1 where *G* is the number of grids the command instantiates. A
value of -1 is returned if the grid name is not recognized.
The *get_grid_by_index()* method is called after the
*get_grid_by_name()* method, using the grid index it returned as its
argument. This method will return a pointer to the Grid2d or Grid3d
class. The caller can use this to query grid attributes, such as the
global size of the grid. The :doc:`dump grid <dump>` command uses this
to ensure that all of its grid reference arguments refer to grids of the same size.
The *get_griddata_by_name()* method takes a grid index *igrid* and a
data name as input. It returns two values. The *ncol* argument is
returned as a 0 if the grid data is a single value (scalar) per grid
cell, or an integer M > 0 if there are M values (vector) per grid
cell. Note that even if M = 1, it is still a 1-length vector, not a
scalar. The function return is a data index from 0 to D-1 where *D*
is the number of data sets associated with that grid by the command.
A value of -1 is returned if the data name is not recognized.
The *get_griddata_by_index()* method is called after the
*get_griddata_by_name()* method, using the data index it returned as
its argument. This method will return a pointer to the
multi-dimensional array which stores the requested data.
As in the discussion above of the Memory class *create_offset()*
methods, the dimensionality of the array associated with the returned
pointer depends on whether it is a 2d or 3d grid and whether a single
value or multiple values are stored for each grid cell:
* single value per cell for a 2d grid = 2d array pointer
* multiple values per cell for a 2d grid = 3d array pointer
* single value per cell for a 3d grid = 3d array pointer
* multiple values per cell for a 3d grid = 4d array pointer
The caller will typically access the data by casting the void pointer
to the corresponding array pointer and using nested loops in x,y,z
between owned or ghost index bounds returned by the
*get_bounds_owned()* or *get_bounds_ghost()* methods to index into the
array. Example code snippets with this logic were listed above.
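Here is a hedged sketch of that access pattern from the point of view
of a caller holding a pointer *fix* to a fix which sets pergrid_flag
and is expected to provide a 3d grid with one value per cell. The grid
name "grid" and data name "data" are examples only:
.. code-block:: c++
// look up the grid and one of its data sets by name
int dim, ncol;
int igrid = fix->get_grid_by_name("grid", dim);
if (igrid < 0 || dim != 3) error->all(FLERR, "Expected a 3d grid");
int idata = fix->get_griddata_by_name(igrid, "data", ncol);
if (idata < 0 || ncol != 0) error->all(FLERR, "Expected a per-grid vector");
// 3d grid with one value per cell -> 3d array of doubles
auto *othergrid = (Grid3d *) fix->get_grid_by_index(igrid);
auto data3d = (double ***) fix->get_griddata_by_index(idata);
// sum the values of the grid cells the fix owns
int ixlo, ixhi, iylo, iyhi, izlo, izhi;
othergrid->get_bounds_owned(ixlo, ixhi, iylo, iyhi, izlo, izhi);
double sum = 0.0;
for (int iz = izlo; iz <= izhi; iz++)
  for (int iy = iylo; iy <= iyhi; iy++)
    for (int ix = ixlo; ix <= ixhi; ix++)
      sum += data3d[iz][iy][ix];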
----------
Final notes
^^^^^^^^^^^
Finally, here are some additional issues to pay attention to for
writing any style command which uses distributed grids via the Grid2d
or Grid3d class.
The command destructor should delete all instances of the Grid class,
any buffers it allocated for forward/reverse or remap communication,
and any data arrays it allocated to store grid cell data.
If a command is intended to work for either 2d or 3d simulations, then
it should have logic to instantiate either 2d or 3d grids and their
associated data arrays, depending on the dimension of the simulation
box. The :doc:`fix ave/grid <fix_ave_grid>` command is an example of
such a command.
When a command maps its particles to the grid and updates grid cell
values, it should check that it is not updating or accessing a grid
cell value outside the range of its owned+ghost cells, and generate an
error message if that is the case. This could happen, for example, if
a particle has moved further than half the neighbor skin distance,
because the neighbor list update criteria are not adequate to prevent
it from happening. See the src/KSPACE/pppm.cpp file and its
*particle_map()* method for an example of this kind of error check.

View File

@ -25,6 +25,7 @@ Available topics in mostly chronological order are:
- `Simplified and more compact neighbor list requests`_
- `Split of fix STORE into fix STORE/GLOBAL and fix STORE/PERATOM`_
- `Use Output::get_dump_by_id() instead of Output::find_dump()`_
- `Refactored grid communication using Grid3d/Grid2d classes instead of GridComm`_
----
@ -423,3 +424,56 @@ New:
if (dumpflag) for (auto idump : dumplist) idump->write();
This change is **required** or else the code will not compile.
Refactored grid communication using Grid3d/Grid2d classes instead of GridComm
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. versionchanged:: 22Dec2022
The ``GridComm`` class, which was used for creating and communicating
distributed grids, was replaced by the ``Grid3d`` class with added functionality.
A ``Grid2d`` class was also added for additional flexibility.
The new functionality and commands using the two grid classes are
discussed on the following documentation pages:
- :doc:`Howto_grid`
- :doc:`Developer_grid`
If you have custom LAMMPS code which uses the GridComm class, here are some notes
on how to adapt it for using the Grid3d class.
(1) The constructor has changed to allow the ``Grid3d`` / ``Grid2d``
classes to partition the global grid across processors, both for
owned and ghost grid cells. Previously any class which called
``GridComm`` performed the partitioning itself and that information
was passed in the ``GridComm::GridComm()`` constructor. There are
several "set" functions which can be called to alter how ``Grid3d``
/ ``Grid2d`` perform the partitioning. They should be sufficient
for most use cases of the grid classes.
(2) The partitioning is triggered by the ``setup_grid()`` method.
(3) The ``setup()`` method of the ``GridComm`` class has been replaced
by the ``setup_comm()`` method in the new grid classes. The syntax
for the ``forward_comm()`` and ``reverse_comm()`` methods is
slightly altered, as is the syntax of the associated pack/unpack
callback methods. But the functionality of these operations is the
same as before.
(4) The new ``Grid3d`` / ``Grid2d`` classes have additional
functionality for dynamic load-balancing of grids and their
associated data across processors. This did not exist in the
``GridComm`` class.
This and more is explained in detail on the :doc:`Developer_grid` page.
The following LAMMPS source files can be used as illustrative examples
for how the new grid classes are used by computes, fixes, and various
KSpace solvers which use distributed FFT grids:
- ``src/fix_ave_grid.cpp``
- ``src/compute_property_grid.cpp``
- ``src/EXTRA-FIX/fix_ttm_grid.cpp``
- ``src/KSPACE/pppm.cpp``
This change is **required** or else the code will not compile.

View File

@ -214,6 +214,9 @@ Argument processing
.. doxygenfunction:: expand_args
:project: progguide
.. doxygenfunction:: parse_grid_id
:project: progguide
.. doxygenfunction:: expand_type
:project: progguide

File diff suppressed because it is too large

View File

@ -51,6 +51,7 @@ Analysis howto
Howto_output
Howto_chunk
Howto_grid
Howto_temperature
Howto_elastic
Howto_kappa

102
doc/src/Howto_grid.rst Normal file
View File

@ -0,0 +1,102 @@
Using distributed grids
=======================
.. versionadded:: 22Dec2022
LAMMPS has internal capabilities to create uniformly spaced grids
which overlay the simulation domain. For 2d and 3d simulations these
are 2d and 3d grids respectively. Conceptually a grid can be thought
of as a collection of grid cells. Each grid cell can store one or
more values (data).
The grid cells and data they store are distributed across processors.
Each processor owns the grid cells (and data) whose center points lie
within the spatial sub-domain of the processor. If needed for its
computations, a processor may also store ghost grid cells with their
data.
Distributed grids can overlay orthogonal or triclinic simulation
boxes; see the :doc:`Howto triclinic <Howto_triclinic>` doc page for
an explanation of the latter. For a triclinic box, the grid cell
shape conforms to the shape of the simulation domain,
e.g. parallelograms instead of rectangles in 2d.
If the box size or shape changes during a simulation, the grid changes
with it, so that it always overlays the entire simulation domain. For
non-periodic dimensions, the grid size in that dimension matches the
box size, as set by the :doc:`boundary <boundary>` command for fixed
or shrink-wrapped boundaries.
If load-balancing is invoked by the :doc:`balance <balance>` or
:doc:`fix balance <fix_balance>` commands, then the sub-domain owned
by a processor can change, which may also change which grid cells it
owns.
Post-processing and visualization of grid cell data can be enabled by
the :doc:`dump grid <dump>`, :doc:`dump grid/vtk <dump>`, and
:doc:`dump image <dump_image>` commands. The latter has an optional
*grid* keyword. The `OVITO visualization tool
<https://www.ovito.org>`_ also plans (as of Nov 2022) to add support
for visualizing grid cell data (along with atoms) using :doc:`dump
grid <dump>` output files as input.
.. note::
For developers, distributed grids are implemented within the code via
two classes: Grid2d and Grid3d. These partition the grid across
processors and have methods which allow forward and reverse
communication of ghost grid data as well as load balancing. If you
write a new compute or fix which needs a distributed grid, these are
the classes to look at. A new pair style could use a distributed
grid by having a fix define it. Please see the section on
:doc:`using distributed grids within style classes <Developer_grid>`
for a detailed description.
----------
These are the commands which currently define or use distributed
grids:
* :doc:`fix ttm/grid <fix_ttm>` - store electron temperature on grid
* :doc:`fix ave/grid <fix_ave_grid>` - time average per-atom or per-grid values
* :doc:`compute property/grid <compute_property_grid>` - generate grid IDs and coords
* :doc:`dump grid <dump>` - output per-grid values in LAMMPS format
* :doc:`dump grid/vtk <dump>` - output per-grid values in VTK format
* :doc:`dump image grid <dump_image>` - include colored grid in output images
* :doc:`pair_style amoeba <pair_amoeba>` - FFT grids
* :doc:`kspace_style pppm <kspace_style>` (and variants) - FFT grids
* :doc:`kspace_style msm <kspace_style>` (and variants) - MSM grids
The grids used by the :doc:`kspace_style <kspace_style>` command cannot be
referenced by an input script. However, the grids and data created and
used by the other commands can be.
A compute or fix command may create one or more grids (of different
sizes). Each grid can store one or more data fields. A data field
can be a single value per grid point (per-grid vector) or multiple
values per grid point (per-grid array). See the :doc:`Howto output
<Howto_output>` doc page for an explanation of how per-grid data can
be generated by some commands and used by other commands.
A command accesses grid data from a compute or fix using a *grid
reference* with the following syntax:
* c_ID:gname:dname
* c_ID:gname:dname[I]
* f_ID:gname:dname
* f_ID:gname:dname[I]
The prefix "c\_" or "f\_" refers to the ID of the compute or fix; gname is
the name of the grid, which is assigned by the compute or fix; dname is
the name of the data field, which is also assigned by the compute or
fix.
If the data field is a per-grid vector (one value per grid point),
then no brackets are used to access the values. If the data field is
a per-grid array (multiple values per grid point), then brackets are
used to specify the column I of the array. I ranges from 1 to Ncol
inclusive, where Ncol is the number of columns in the array and is
defined by the compute or fix.
Currently, there are no per-grid variables implemented in LAMMPS. We
may add this feature at some point.

View File

@ -22,14 +22,17 @@ commands you specify.
As discussed below, LAMMPS gives you a variety of ways to determine
what quantities are computed and printed when the thermodynamics,
dump, or fix commands listed above perform output. Throughout this
discussion, note that users can also :doc:`add their own computes and fixes to LAMMPS <Modify>` which can then generate values that can then be
output with these commands.
discussion, note that users can also :doc:`add their own computes and
fixes to LAMMPS <Modify>` which can then generate values that can then
be output with these commands.
The following sub-sections discuss different LAMMPS command related
The following sub-sections discuss different LAMMPS commands related
to output and the kind of data they operate on and produce:
* :ref:`Global/per-atom/local data <global>`
* :ref:`Global/per-atom/local/per-grid data <global>`
* :ref:`Scalar/vector/array data <scalar>`
* :ref:`Per-grid data <grid>`
* :ref:`Disambiguation <disambiguation>`
* :ref:`Thermodynamic output <thermo>`
* :ref:`Dump file output <dump>`
* :ref:`Fixes that write output files <fixoutput>`
@ -42,27 +45,32 @@ to output and the kind of data they operate on and produce:
.. _global:
Global/per-atom/local data
--------------------------
Global/per-atom/local/per-grid data
-----------------------------------
Various output-related commands work with three different styles of
data: global, per-atom, or local. A global datum is one or more
system-wide values, e.g. the temperature of the system. A per-atom
datum is one or more values per atom, e.g. the kinetic energy of each
atom. Local datums are calculated by each processor based on the
atoms it owns, but there may be zero or more per atom, e.g. a list of
bond distances.
Various output-related commands work with four different styles of
data: global, per-atom, local, and per-grid. A global datum is one or
more system-wide values, e.g. the temperature of the system. A
per-atom datum is one or more values per atom, e.g. the kinetic energy
of each atom. Local datums are calculated by each processor based on
the atoms it owns, but there may be zero or more per atom, e.g. a list
of bond distances.
A per-grid datum is one or more values per grid cell, for a grid which
overlays the simulation domain. The grid cells and the data they
store are distributed across processors; each processor owns the grid
cells whose center point falls within its sub-domain.
.. _scalar:
Scalar/vector/array data
------------------------
Global, per-atom, and local datums can each come in three kinds: a
single scalar value, a vector of values, or a 2d array of values. The
doc page for a "compute" or "fix" or "variable" that generates data
will specify both the style and kind of data it produces, e.g. a
per-atom vector.
Global, per-atom, and local datums can come in three kinds: a single
scalar value, a vector of values, or a 2d array of values. The doc
page for a "compute" or "fix" or "variable" that generates data will
specify both the style and kind of data it produces, e.g. a per-atom
vector.
When a quantity is accessed, as in many of the output commands
discussed below, it can be referenced via the following bracket
@ -83,6 +91,18 @@ the dimension twice (array -> scalar). Thus a command that uses
scalar values as input can typically also process elements of a vector
or array.
.. _grid:
Per-grid data
------------------------
Per-grid data can come in two kinds: a vector of values (one per grid
cell), or a 2d array of values (multiple values per grid cell). The
doc page for a "compute" or "fix" that generates data will specify
names for both the grid(s) and datum(s) it produces, e.g. per-grid
vectors or arrays, which can be referenced by other commands. See the
:doc:`Howto grid <Howto_grid>` doc page for more details.
.. _disambiguation:
Disambiguation
@ -90,15 +110,15 @@ Disambiguation
Some computes and fixes produce data in multiple styles, e.g. a global
scalar and a per-atom vector. Usually the context in which the input
script references the data determines which style is meant. Example: if
a compute provides both a global scalar and a per-atom vector, the
script references the data determines which style is meant. Example:
if a compute provides both a global scalar and a per-atom vector, the
former will be accessed by using ``c_ID`` in an equal-style variable,
while the latter will be accessed by using ``c_ID`` in an atom-style
variable. Note that atom-style variable formulas can also access global
scalars, but in this case it is not possible to do directly because of
the ambiguity. Instead, an equal-style variable can be defined which
accesses the global scalar, and that variable used in the atom-style
variable formula in place of ``c_ID``.
variable. Note that atom-style variable formulas can also access
global scalars, but in this case it is not possible to do this
directly because of the ambiguity. Instead, an equal-style variable
can be defined which accesses the global scalar, and that variable can
be used in the atom-style variable formula in place of ``c_ID``.
.. _thermo:
@ -107,15 +127,14 @@ Thermodynamic output
The frequency and format of thermodynamic output is set by the
:doc:`thermo <thermo>`, :doc:`thermo_style <thermo_style>`, and
:doc:`thermo_modify <thermo_modify>` commands. The
:doc:`thermo_style <thermo_style>` command also specifies what values
are calculated and written out. Pre-defined keywords can be specified
(e.g. press, etotal, etc). Three additional kinds of keywords can
also be specified (c_ID, f_ID, v_name), where a :doc:`compute <compute>`
or :doc:`fix <fix>` or :doc:`variable <variable>` provides the value to be
output. In each case, the compute, fix, or variable must generate
global values for input to the :doc:`thermo_style custom <dump>`
command.
:doc:`thermo_modify <thermo_modify>` commands. The :doc:`thermo_style
<thermo_style>` command also specifies what values are calculated and
written out. Pre-defined keywords can be specified (e.g. press, etotal,
etc). Three additional kinds of keywords can also be specified (c_ID,
f_ID, v_name), where a :doc:`compute <compute>` or :doc:`fix <fix>` or
:doc:`variable <variable>` provides the value to be output. In each
case, the compute, fix, or variable must generate global values for
input to the :doc:`thermo_style custom <thermo_style>` command.
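For example, a compute and an equal-style variable can be mixed with
pre-defined keywords (the IDs and variable names here are arbitrary
and only meant as a sketch):

.. code-block:: LAMMPS

   # illustrative only: IDs and names are placeholders
   compute myTemp all temp
   variable peratom equal pe/atoms
   thermo_style custom step temp c_myTemp press v_peratom
   thermo 100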
Note that thermodynamic output values can be "extensive" or
"intensive". The former scale with the number of atoms in the system
@ -141,9 +160,10 @@ There is also a :doc:`dump custom <dump>` format where the user
specifies what values are output with each atom. Pre-defined atom
attributes can be specified (id, x, fx, etc). Three additional kinds
of keywords can also be specified (c_ID, f_ID, v_name), where a
:doc:`compute <compute>` or :doc:`fix <fix>` or :doc:`variable <variable>`
provides the values to be output. In each case, the compute, fix, or
variable must generate per-atom values for input to the :doc:`dump custom <dump>` command.
:doc:`compute <compute>` or :doc:`fix <fix>` or :doc:`variable
<variable>` provides the values to be output. In each case, the
compute, fix, or variable must generate per-atom values for input to
the :doc:`dump custom <dump>` command.
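As a minimal sketch (the chosen attributes, IDs, and file name are
arbitrary), pre-defined attributes can be mixed with compute and
variable references:

.. code-block:: LAMMPS

   # illustrative only
   compute ke all ke/atom
   variable vsq atom vx*vx+vy*vy+vz*vz
   dump 1 all custom 500 dump.custom id type x y z c_ke v_vsq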
There is also a :doc:`dump local <dump>` format where the user specifies
what local values to output. A pre-defined index keyword can be
@ -154,18 +174,23 @@ provides the values to be output. In each case, the compute or fix
must generate local values for input to the :doc:`dump local <dump>`
command.
There is also a :doc:`dump grid <dump>` format where the user
specifies what per-grid values to output from computes or fixes that
generate per-grid data.
.. _fixoutput:
Fixes that write output files
-----------------------------
Several fixes take various quantities as input and can write output
files: :doc:`fix ave/time <fix_ave_time>`, :doc:`fix ave/chunk <fix_ave_chunk>`, :doc:`fix ave/histo <fix_ave_histo>`,
:doc:`fix ave/correlate <fix_ave_correlate>`, and :doc:`fix print <fix_print>`.
files: :doc:`fix ave/time <fix_ave_time>`, :doc:`fix ave/chunk
<fix_ave_chunk>`, :doc:`fix ave/histo <fix_ave_histo>`, :doc:`fix
ave/correlate <fix_ave_correlate>`, and :doc:`fix print <fix_print>`.
The :doc:`fix ave/time <fix_ave_time>` command enables direct output to
a file and/or time-averaging of global scalars or vectors. The user
specifies one or more quantities as input. These can be global
The :doc:`fix ave/time <fix_ave_time>` command enables direct output
to a file and/or time-averaging of global scalars or vectors. The
user specifies one or more quantities as input. These can be global
:doc:`compute <compute>` values, global :doc:`fix <fix>` values, or
:doc:`variables <variable>` of any style except the atom style which
produces per-atom values. Since a variable can refer to keywords used
@ -184,8 +209,14 @@ atoms, e.g. individual molecules. The per-atom quantities can be atom
density (mass or number) or atom attributes such as position,
velocity, force. They can also be per-atom quantities calculated by a
:doc:`compute <compute>`, by a :doc:`fix <fix>`, or by an atom-style
:doc:`variable <variable>`. The chunk-averaged output of this fix can
also be used as input to other output commands.
:doc:`variable <variable>`. The chunk-averaged output of this fix is
global and can also be used as input to other output commands.
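For instance, a velocity profile along z can be accumulated in 1d
spatial bins (the bin width, averaging windows, and file name below
are arbitrary):

.. code-block:: LAMMPS

   # illustrative only: 1d bins along z in reduced units
   compute cc all chunk/atom bin/1d z lower 0.05 units reduced
   fix prof all ave/chunk 100 10 1000 cc vx vy vz file vel.profile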
Note that the :doc:`fix ave/grid <fix_ave_grid>` command can also
average the same per-atom quantities within spatial bins, but it does
this for a distributed grid whose grid cells are owned by different
processors. It outputs per-grid data, not global data, so it is more
efficient for large numbers of averaging bins.
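A minimal sketch, assuming a 10x10x10 grid of averaging bins (all
numeric values are placeholders):

.. code-block:: LAMMPS

   # illustrative only: average per-atom velocities on a distributed grid
   fix av all ave/grid 100 10 1000 10 10 10 vx vy vz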
The :doc:`fix ave/histo <fix_ave_histo>` command enables direct output
to a file of histogrammed quantities, which can be global or per-atom
@ -202,38 +233,53 @@ written to the screen and log file or to a separate file, periodically
during a running simulation. The line can contain one or more
:doc:`variable <variable>` values for any style variable except the
vector or atom styles. As explained above, variables themselves can
contain references to global values generated by :doc:`thermodynamic keywords <thermo_style>`, :doc:`computes <compute>`,
:doc:`fixes <fix>`, or other :doc:`variables <variable>`, or to per-atom
values for a specific atom. Thus the :doc:`fix print <fix_print>`
command is a means to output a wide variety of quantities separate
from normal thermodynamic or dump file output.
contain references to global values generated by :doc:`thermodynamic
keywords <thermo_style>`, :doc:`computes <compute>`, :doc:`fixes
<fix>`, or other :doc:`variables <variable>`, or to per-atom values
for a specific atom. Thus the :doc:`fix print <fix_print>` command is
a means to output a wide variety of quantities separate from normal
thermodynamic or dump file output.
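As an illustration (the variable name, interval, and file name are
arbitrary):

.. code-block:: LAMMPS

   # illustrative only: write the current temperature to a separate file
   variable t equal temp
   fix extra all print 100 "Current temperature = $t" file temp.txt screen no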
.. _computeoutput:
Computes that process output quantities
---------------------------------------
The :doc:`compute reduce <compute_reduce>` and :doc:`compute reduce/region <compute_reduce>` commands take one or more per-atom
or local vector quantities as inputs and "reduce" them (sum, min, max,
The :doc:`compute reduce <compute_reduce>` and :doc:`compute
reduce/region <compute_reduce>` commands take one or more per-atom or
local vector quantities as inputs and "reduce" them (sum, min, max,
ave) to scalar quantities. These are produced as output values which
can be used as input to other output commands.
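For example (IDs are arbitrary), a per-atom quantity can be reduced
to a global scalar and included in thermodynamic output:

.. code-block:: LAMMPS

   # illustrative only
   compute peatom all pe/atom
   compute pesum all reduce sum c_peatom
   thermo_style custom step pe c_pesum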
The :doc:`compute slice <compute_slice>` command take one or more global
vector or array quantities as inputs and extracts a subset of their
values to create a new vector or array. These are produced as output
values which can be used as input to other output commands.
The :doc:`compute slice <compute_slice>` command takes one or more
global vector or array quantities as inputs and extracts a subset of
their values to create a new vector or array. These are produced as
output values which can be used as input to other output commands.
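A brief sketch (the indices and IDs are placeholders) of extracting
part of a global array column produced by another compute:

.. code-block:: LAMMPS

   # illustrative only: every other element from 1 to 9 of the g(r) column
   compute myRDF all rdf 50
   compute sliced all slice 1 9 2 c_myRDF[2]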
The :doc:`compute property/atom <compute_property_atom>` command takes a
list of one or more pre-defined atom attributes (id, x, fx, etc) and
The :doc:`compute property/atom <compute_property_atom>` command takes
a list of one or more pre-defined atom attributes (id, x, fx, etc) and
stores the values in a per-atom vector or array. These are produced
as output values which can be used as input to other output commands.
The list of atom attributes is the same as for the :doc:`dump custom <dump>` command.
The list of atom attributes is the same as for the :doc:`dump custom
<dump>` command.
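For instance (the attribute choice and IDs are arbitrary), unwrapped
coordinates can be stored and then written by a dump:

.. code-block:: LAMMPS

   # illustrative only
   compute uc all property/atom xu yu zu
   dump 1 all custom 1000 dump.unwrap id c_uc[1] c_uc[2] c_uc[3]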
The :doc:`compute property/local <compute_property_local>` command takes
a list of one or more pre-defined local attributes (bond info, angle
info, etc) and stores the values in a local vector or array. These
are produced as output values which can be used as input to other
output commands.
The :doc:`compute property/local <compute_property_local>` command
takes a list of one or more pre-defined local attributes (bond info,
angle info, etc) and stores the values in a local vector or array.
These are produced as output values which can be used as input to
other output commands.
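A minimal sketch (IDs and file name are placeholders) combining bond
atom IDs with bond lengths in a local dump:

.. code-block:: LAMMPS

   # illustrative only
   compute bids all property/local batom1 batom2 btype
   compute blen all bond/local dist
   dump 1 all local 1000 dump.bonds c_bids[1] c_bids[2] c_bids[3] c_blen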
The :doc:`compute property/grid <compute_property_grid>` command takes
a list of one or more pre-defined per-grid attributes (id, grid cell
coords, etc) and stores the values in a per-grid vector or array.
These are produced as output values which can be used as input to the
:doc:`dump grid <dump>` command.
The :doc:`compute property/chunk <compute_property_chunk>` command
takes a list of one or more pre-defined chunk attributes (id, count,
coords for spatial bins) and stores the values in a global vector or
array. These are produced as output values which can be used as input
to other output commands.
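For example (the chunk definition and file name are arbitrary),
per-chunk counts can be written alongside per-chunk centers of mass:

.. code-block:: LAMMPS

   # illustrative only
   compute cmol all chunk/atom molecule
   compute nchunk all property/chunk cmol count
   compute com all com/chunk cmol
   fix 1 all ave/time 100 1 100 c_nchunk c_com[*] file chunks.out mode vector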
.. _fixprocoutput:
@ -247,18 +293,42 @@ a time.
The :doc:`fix ave/atom <fix_ave_atom>` command performs time-averaging
of per-atom vectors. The per-atom quantities can be atom attributes
such as position, velocity, force. They can also be per-atom
quantities calculated by a :doc:`compute <compute>`, by a
:doc:`fix <fix>`, or by an atom-style :doc:`variable <variable>`. The
quantities calculated by a :doc:`compute <compute>`, by a :doc:`fix
<fix>`, or by an atom-style :doc:`variable <variable>`. The
time-averaged per-atom output of this fix can be used as input to
other output commands.
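A short sketch (the averaging windows and file name are arbitrary) of
time-averaging per-atom velocities and dumping the averages:

.. code-block:: LAMMPS

   # illustrative only
   fix av all ave/atom 10 20 1000 vx vy vz
   dump 1 all custom 1000 dump.avevel id f_av[1] f_av[2] f_av[3]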
The :doc:`fix store/state <fix_store_state>` command can archive one or
more per-atom attributes at a particular time, so that the old values
can be used in a future calculation or output. The list of atom
attributes is the same as for the :doc:`dump custom <dump>` command,
including per-atom quantities calculated by a :doc:`compute <compute>`,
by a :doc:`fix <fix>`, or by an atom-style :doc:`variable <variable>`.
The output of this fix can be used as input to other output commands.
The :doc:`fix store/state <fix_store_state>` command can archive one
or more per-atom attributes at a particular time, so that the old
values can be used in a future calculation or output. The list of
atom attributes is the same as for the :doc:`dump custom <dump>`
command, including per-atom quantities calculated by a :doc:`compute
<compute>`, by a :doc:`fix <fix>`, or by an atom-style :doc:`variable
<variable>`. The output of this fix can be used as input to other
output commands.
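For instance (names are placeholders), initial unwrapped positions
can be archived once and used later in atom-style variables:

.. code-block:: LAMMPS

   # illustrative only: store coordinates once, when the fix is defined
   fix orig all store/state 0 xu yu zu
   variable dx atom xu-f_orig[1]
   variable dy atom yu-f_orig[2]
   variable dz atom zu-f_orig[3]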
The :doc:`fix ave/grid <fix_ave_grid>` command performs time-averaging
of either per-atom or per-grid data.
For per-atom data it performs averaging for the atoms within each grid
cell, similar to the :doc:`fix ave/chunk <fix_ave_chunk>` command when
its chunks are defined as regular 2d or 3d bins. The per-atom
quantities can be atom density (mass or number) or atom attributes
such as position, velocity, force. They can also be per-atom
quantities calculated by a :doc:`compute <compute>`, by a :doc:`fix
<fix>`, or by an atom-style :doc:`variable <variable>`.
The chief difference between the :doc:`fix ave/grid <fix_ave_grid>`
and :doc:`fix ave/chunk <fix_ave_chunk>` commands when used in this
context is that the former uses a distributed grid, while the latter
uses a global grid. Distributed means that each processor owns the
subset of grid cells within its sub-domain. Global means that each
processor owns a copy of the entire grid. The :doc:`fix ave/grid
<fix_ave_grid>` command is thus more efficient for large grids.
For per-grid data, the :doc:`fix ave/grid <fix_ave_grid>` command
takes inputs for grid data produced by other computes or fixes and
averages the values for each grid point over time.
.. _compute:
@ -266,24 +336,25 @@ Computes that generate values to output
---------------------------------------
Every :doc:`compute <compute>` in LAMMPS produces either global or
per-atom or local values. The values can be scalars or vectors or
arrays of data. These values can be output using the other commands
described in this section. The page for each compute command
per-atom or local or per-grid values. The values can be scalars or
vectors or arrays of data. These values can be output using the other
commands described in this section. The page for each compute command
describes what it produces. Computes that produce per-atom or local
values have the word "atom" or "local" in their style name. Computes
without the word "atom" or "local" produce global values.
or per-grid values have the word "atom" or "local" or "grid" as the
last word in their style name. Computes without the word "atom" or
"local" or "grid" produce global values.
.. _fix:
Fixes that generate values to output
------------------------------------
Some :doc:`fixes <fix>` in LAMMPS produces either global or per-atom or
local values which can be accessed by other commands. The values can
be scalars or vectors or arrays of data. These values can be output
using the other commands described in this section. The page for
each fix command tells whether it produces any output quantities and
describes them.
Some :doc:`fixes <fix>` in LAMMPS produce either global or per-atom
or local or per-grid values which can be accessed by other commands.
The values can be scalars or vectors or arrays of data. These values
can be output using the other commands described in this section. The
page for each fix command tells whether it produces any output
quantities and describes them.
.. _variable:
@ -300,6 +371,8 @@ computes, fixes, and other variables. The values generated by
variables can be used as input to and thus output by the other
commands described in this section.
Per-grid variables have not (yet) been implemented.
.. _table:
Summary table of output options and data flow between commands
@ -319,44 +392,52 @@ Also note that, as described above, when a command takes a scalar as
input, that could be an element of a vector or array. Likewise a
vector input could be a column of an array.
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| Command | Input | Output |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`thermo_style custom <thermo_style>` | global scalars | screen, log file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`dump custom <dump>` | per-atom vectors | dump file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`dump local <dump>` | local vectors | dump file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`dump grid <dump>` | per-grid vectors | dump file |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix print <fix_print>` | global scalar from variable | screen, file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`print <print>` | global scalar from variable | screen |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
| :doc:`computes <compute>` | N/A | global/per-atom/local scalar/vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
| :doc:`fixes <fix>` | N/A | global/per-atom/local scalar/vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`computes <compute>` | N/A | global/per-atom/local/per-grid scalar/vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fixes <fix>` | N/A | global/per-atom/local/per-grid scalar/vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`variables <variable>` | global scalars and vectors, per-atom vectors | global scalar and vector, per-atom vector |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute reduce <compute_reduce>` | per-atom/local vectors | global scalar/vector |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute slice <compute_slice>` | global vectors/arrays | global vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
| :doc:`compute property/atom <compute_property_atom>` | per-atom vectors | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
| :doc:`compute property/local <compute_property_local>` | local vectors | local vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/atom <compute_property_atom>` | N/A | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/local <compute_property_local>` | N/A | local vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/grid <compute_property_grid>` | N/A | per-grid vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`compute property/chunk <compute_property_chunk>` | N/A | global vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix vector <fix_vector>` | global scalars | global vector |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/atom <fix_ave_atom>` | per-atom vectors | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/time <fix_ave_time>` | global scalars/vectors | global scalar/vector/array, file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/chunk <fix_ave_chunk>` | per-atom vectors | global array, file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/grid <fix_ave_grid>` | per-atom vectors or per-grid vectors | per-grid vector/array |
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/histo <fix_ave_histo>` | global/per-atom/local scalars and vectors | global array, file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix ave/correlate <fix_ave_correlate>` | global scalars | global array, file |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+
| :doc:`fix store/state <fix_store_state>` | per-atom vectors | per-atom vector/array |
+--------------------------------------------------------+----------------------------------------------+-------------------------------------------+
+--------------------------------------------------------+----------------------------------------------+----------------------------------------------------+

View File

@ -7,6 +7,7 @@ functions. They do not directly call the LAMMPS library.
- :cpp:func:`lammps_encode_image_flags`
- :cpp:func:`lammps_decode_image_flags`
- :cpp:func:`lammps_set_fix_external_callback`
- :cpp:func:`lammps_fix_external_get_force`
- :cpp:func:`lammps_fix_external_set_energy_global`
- :cpp:func:`lammps_fix_external_set_energy_peratom`
- :cpp:func:`lammps_fix_external_set_virial_global`
@ -44,6 +45,11 @@ where such memory buffers were allocated that require the use of
-----------------------
.. doxygenfunction:: lammps_fix_external_get_force
:project: progguide
-----------------------
.. doxygenfunction:: lammps_fix_external_set_energy_global
:project: progguide

View File

@ -1816,7 +1816,7 @@ fitting the potentials natively in LAMMPS.
Ngoc Cuong Nguyen (MIT), Andrew Rohskopf (Sandia)
.. versionadded:: TBD
.. versionadded:: 22Dec2022
**Install:**

View File

@ -444,7 +444,7 @@ the LAMMPS simulation domain.
.. _restart2data:
**-restart2data restartfile [remap] datafile keyword value ...**
**-restart2data restartfile datafile keyword value ...**
Convert the restart file into a data file and immediately exit. This
is the same operation as if the following 2-line input script were
@ -452,7 +452,7 @@ run:
.. code-block:: LAMMPS
read_restart restartfile [remap]
read_restart restartfile
write_data datafile keyword value ...
The specified restartfile and/or datafile name may contain the wild-card
@ -464,28 +464,21 @@ Note that a filename such as file.\* may need to be enclosed in quotes or
the "\*" character prefixed with a backslash ("\") to avoid shell
expansion of the "\*" character.
Following restartfile argument, the optional word "remap" may be used.
This has the same effect like adding it to a
:doc:`read_restart <read_restart>` command, and operates as explained on
its doc page. This is useful if reading the restart file triggers an
error that atoms have been lost. In that case, use of the remap flag
should allow the data file to still be produced.
The syntax following restartfile (or remap), namely
The syntax following restartfile, namely
.. parsed-literal::
datafile keyword value ...
is identical to the arguments of the :doc:`write_data <write_data>`
command. See its page for details. This includes its
command. See its documentation page for details. This includes its
optional keyword/value settings.
----------
.. _restart2dump:
**-restart2dump restartfile [remap] group-ID dumpstyle dumpfile arg1 arg2 ...**
**-restart2dump restartfile group-ID dumpstyle dumpfile arg1 arg2 ...**
Convert the restart file into a dump file and immediately exit. This
is the same operation as if the following 2-line input script were
@ -493,7 +486,7 @@ run:
.. code-block:: LAMMPS
read_restart restartfile [remap]
read_restart restartfile
write_dump group-ID dumpstyle dumpfile arg1 arg2 ...
Note that the specified restartfile and dumpfile names may contain
@ -505,24 +498,17 @@ such as file.\* may need to be enclosed in quotes or the "\*" character
prefixed with a backslash ("\") to avoid shell expansion of the "\*"
character.
Note that following the restartfile argument, the optional word "remap"
can be used. This has the effect as adding it to the
:doc:`read_restart <read_restart>` command, as explained on its doc page.
This is useful if reading the restart file triggers an error that atoms
have been lost. In that case, use of the remap flag should allow the
dump file to still be produced.
The syntax following restartfile (or remap), namely
The syntax following restartfile, namely
.. code-block:: LAMMPS
group-ID dumpstyle dumpfile arg1 arg2 ...
is identical to the arguments of the :doc:`write_dump <write_dump>`
command. See its page for details. This includes what per-atom
fields are written to the dump file and optional dump_modify settings,
including ones that affect how parallel dump files are written, e.g.
the *nfile* and *fileper* keywords. See the
command. See its documentation page for details. This includes what
per-atom fields are written to the dump file and optional dump_modify
settings, including ones that affect how parallel dump files are written,
e.g. the *nfile* and *fileper* keywords. See the
:doc:`dump_modify <dump_modify>` page for details.
----------

View File

@ -212,14 +212,15 @@ threads/task as Nt. The product of these two values should be N, i.e.
.. note::
The default for the :doc:`package kokkos <package>` command when
running on KNL is to use "half" neighbor lists and set the Newton flag
to "on" for both pairwise and bonded interactions. This will typically
be best for many-body potentials. For simpler pairwise potentials, it
may be faster to use a "full" neighbor list with Newton flag to "off".
Use the "-pk kokkos" :doc:`command-line switch <Run_options>` to change
the default :doc:`package kokkos <package>` options. See its page for
details and default settings. Experimenting with its options can provide
a speed-up for specific calculations. For example:
running on KNL is to use "half" neighbor lists and set the Newton
flag to "on" for both pairwise and bonded interactions. This will
typically be best for many-body potentials. For simpler pairwise
potentials, it may be faster to use a "full" neighbor list with
Newton flag to "off". Use the "-pk kokkos" :doc:`command-line switch
<Run_options>` to change the default :doc:`package kokkos <package>`
options. See its documentation page for details and default
settings. Experimenting with its options can provide a speed-up for
specific calculations. For example:
.. code-block:: bash
@ -271,17 +272,18 @@ one or more nodes, each with two GPUs:
.. note::
The default for the :doc:`package kokkos <package>` command when
running on GPUs is to use "full" neighbor lists and set the Newton flag
to "off" for both pairwise and bonded interactions, along with threaded
communication. When running on Maxwell or Kepler GPUs, this will
typically be best. For Pascal GPUs and beyond, using "half" neighbor lists and
setting the Newton flag to "on" may be faster. For many pair styles,
setting the neighbor binsize equal to twice the CPU default value will
give speedup, which is the default when running on GPUs. Use the "-pk
kokkos" :doc:`command-line switch <Run_options>` to change the default
:doc:`package kokkos <package>` options. See its page for details and
default settings. Experimenting with its options can provide a speed-up
for specific calculations. For example:
running on GPUs is to use "full" neighbor lists and set the Newton
flag to "off" for both pairwise and bonded interactions, along with
threaded communication. When running on Maxwell or Kepler GPUs, this
will typically be best. For Pascal GPUs and beyond, using "half"
neighbor lists and setting the Newton flag to "on" may be faster. For
many pair styles, setting the neighbor binsize equal to twice the CPU
default value will give speedup, which is the default when running on
GPUs. Use the "-pk kokkos" :doc:`command-line switch <Run_options>`
to change the default :doc:`package kokkos <package>` options. See
its documentation page for details and default
settings. Experimenting with its options can provide a speed-up for
specific calculations. For example:
.. code-block:: bash

View File

@ -1164,7 +1164,7 @@ For illustration purposes below is a part of the Tcl example script.
tabulate tool
--------------
.. versionadded:: TBD
.. versionadded:: 22Dec2022
The ``tabulate`` folder contains Python scripts to generate tabulated
potential files for LAMMPS. The bulk of the code is in the ``tabulate`` module

View File

@ -43,29 +43,38 @@ underscores.
----------
Computes calculate one of three styles of quantities: global,
per-atom, or local. A global quantity is one or more system-wide
values (e.g., the temperature of the system). A per-atom quantity is
one or more values per atom (e.g., the kinetic energy of each atom).
Per-atom values are set to 0.0 for atoms not in the specified compute
group. Local quantities are calculated by each processor based on the
atoms it owns, but there may be zero or more per atom (e.g., a list of
bond distances). Computes that produce per-atom quantities have the
word "atom" in their style (e.g., *ke/atom*\ ). Computes that produce
local quantities have the word "local" in their style
(e.g., *bond/local*\ ). Styles with neither "atom" or "local" in their
style produce global quantities.
Computes calculate one or more of four styles of quantities: global,
per-atom, local, or per-grid. A global quantity is one or more
system-wide values, e.g. the temperature of the system. A per-atom
quantity is one or more values per atom, e.g. the kinetic energy of
each atom. Per-atom values are set to 0.0 for atoms not in the
specified compute group. Local quantities are calculated by each
processor based on the atoms it owns, but there may be zero or more
per atom, e.g. a list of bond distances. Per-grid quantities are
calculated on a regular 2d or 3d grid which overlays a 2d or 3d
simulation domain. The grid points and the data they store are
distributed across processors; each processor owns the grid points
which fall within its sub-domain.
Note that a single compute can produce either global or per-atom or
local quantities, but not both global and per-atom. It can produce
local quantities in tandem with global or per-atom quantities. The
compute page will explain.
Computes that produce per-atom quantities have the word "atom" at the
end of their style, e.g. *ke/atom*\ . Computes that produce local
quantities have the word "local" at the end of their style,
e.g. *bond/local*\ . Computes that produce per-grid quantities have
the word "grid" at the end of their style, e.g. *property/grid*\ .
Styles with neither "atom" or "local" or "grid" at the end of their
style name produce global quantities.
Global, per-atom, and local quantities each come in three kinds: a
single scalar value, a vector of values, or a 2d array of values. The
doc page for each compute describes the style and kind of values it
produces (e.g., a per-atom vector). Some computes produce more than one
kind of a single style (e.g., a global scalar and a global vector).
Note that a single compute typically produces either global or
per-atom or local or per-grid values. It does not compute both global
and per-atom values. It can produce local values or per-grid values
in tandem with global or per-atom quantities. The compute doc page
will explain the details.
Global, per-atom, local, and per-grid quantities come in three kinds:
a single scalar value, a vector of values, or a 2d array of values.
The doc page for each compute describes the style and kind of values
it produces, e.g. a per-atom vector. Some computes produce more than
one kind of a single style, e.g. a global scalar and a global vector.
When a compute quantity is accessed, as in many of the output commands
discussed below, it can be referenced via the following bracket
@ -252,6 +261,7 @@ The individual style names on the :doc:`Commands compute <Commands_compute>` pag
* :doc:`pressure/uef <compute_pressure_uef>` - pressure tensor in the reference frame of an applied flow field
* :doc:`property/atom <compute_property_atom>` - convert atom attributes to per-atom vectors/arrays
* :doc:`property/chunk <compute_property_chunk>` - extract various per-chunk attributes
* :doc:`property/grid <compute_property_grid>` - convert per-grid attributes to per-grid vectors/arrays
* :doc:`property/local <compute_property_local>` - convert local attributes to local vectors/arrays
* :doc:`ptm/atom <compute_ptm_atom>` - determines the local lattice structure based on the Polyhedral Template Matching method
* :doc:`rdf <compute_rdf>` - radial distribution function :math:`g(r)` histogram of group of atoms

View File

@ -37,27 +37,29 @@ Description
Modify one or more parameters of a previously defined compute. Not
all compute styles support all parameters.
The *extra/dof* or *extra* keyword refers to how many degrees of freedom are
subtracted (typically from :math:`3N`) as a normalizing
The *extra/dof* or *extra* keyword refers to how many degrees of
freedom are subtracted (typically from :math:`3N`) as a normalizing
factor in a temperature computation. Only computes that compute a
temperature use this option. The default is 2 or 3 for
:doc:`2d or 3d systems <dimension>`, which is a correction factor for an
ensemble of velocities with zero total linear momentum. For compute
temp/partial, if one or more velocity components are excluded, the
value used for *extra* is scaled accordingly. You can use a negative
number for the *extra* parameter if you need to add
degrees-of-freedom. See the :doc:`compute temp/asphere <compute_temp_asphere>` command for an example.
temperature use this option. The default is 2 or 3 for :doc:`2d or 3d
systems <dimension>`, which is a correction factor for an ensemble of
velocities with zero total linear momentum. For compute temp/partial,
if one or more velocity components are excluded, the value used for
*extra* is scaled accordingly. You can use a negative number for the
*extra* parameter if you need to add degrees-of-freedom. See the
:doc:`compute temp/asphere <compute_temp_asphere>` command for an
example.
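As a brief illustration (the compute ID, group name, and value are
arbitrary):

.. code-block:: LAMMPS

   # illustrative only: subtract 3 additional degrees of freedom
   compute mobileTemp mobile temp
   compute_modify mobileTemp extra/dof 3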
The *dynamic/dof* or *dynamic* keyword determines whether the number
of atoms :math:`N` in the compute group and their associated degrees of
freedom (DOF) are re-computed each time a temperature is computed. Only
compute styles that calculate a temperature use this option. By
default, :math:`N` and their DOF are assumed to be constant. If you are
adding atoms or molecules to the system (see the :doc:`fix pour <fix_pour>`,
:doc:`fix deposit <fix_deposit>`, and :doc:`fix gcmc <fix_gcmc>` commands) or
expect atoms or molecules to be lost (e.g., due to exiting the simulation box
or via :doc:`fix evaporate <fix_evaporate>`), then this option should be used
to ensure the temperature is correctly normalized.
of atoms :math:`N` in the compute group and their associated degrees
of freedom (DOF) are re-computed each time a temperature is computed.
Only compute styles that calculate a temperature use this option. By
default, :math:`N` and their DOF are assumed to be constant. If you
are adding atoms or molecules to the system (see the :doc:`fix pour
<fix_pour>`, :doc:`fix deposit <fix_deposit>`, and :doc:`fix gcmc
<fix_gcmc>` commands) or expect atoms or molecules to be lost
(e.g. due to exiting the simulation box or via :doc:`fix evaporate
<fix_evaporate>`), then this option should be used to ensure the
temperature is correctly normalized.
.. note::

View File

@ -12,7 +12,8 @@ Syntax
* ID, group-ID are documented in :doc:`compute <compute>` command
* property/chunk = style name of this compute command
* input = one or more attributes
* chunkID = ID of :doc:`compute chunk/atom <compute_chunk_atom>` command that defines the chunks
* input1,etc = one or more attributes
.. parsed-literal::
@ -26,8 +27,8 @@ Examples
.. code-block:: LAMMPS
compute 1 all property/chunk count
compute 1 all property/chunk ID coord1
compute 1 all property/chunk bin2d id count
compute 1 all property/chunk myChunks id coord1
Description
"""""""""""
@ -35,29 +36,28 @@ Description
Define a computation that stores the specified attributes of chunks of
atoms.
In LAMMPS, chunks are collections of atoms defined by a
:doc:`compute chunk/atom <compute_chunk_atom>` command, which assigns each atom
to a single chunk (or no chunk). The ID for this command is specified
as chunkID. For example, a single chunk could be the atoms in a molecule or
atoms in a spatial bin. See the :doc:`compute chunk/atom <compute_chunk_atom>`
and :doc:`Howto chunk <Howto_chunk>` doc pages for details of how chunks can be
defined and examples of how they can be used to measure properties of a system.
In LAMMPS, chunks are collections of atoms defined by a :doc:`compute
chunk/atom <compute_chunk_atom>` command, which assigns each atom to a
single chunk (or no chunk). The ID for this command is specified as
chunkID. For example, a single chunk could be the atoms in a molecule
or atoms in a spatial bin. See the :doc:`compute chunk/atom
<compute_chunk_atom>` and :doc:`Howto chunk <Howto_chunk>` doc pages
for details of how chunks can be defined and examples of how they can
be used to measure properties of a system.
This compute calculates and stores the specified attributes of chunks
as global data so they can be accessed by other
:doc:`output commands <Howto_output>` and used in conjunction with other
commands that generate per-chunk data, such as
:doc:`compute com/chunk <compute_com_chunk>` or
:doc:`compute msd/chunk <compute_msd_chunk>`.
as global data so they can be accessed by other :doc:`output commands
<Howto_output>` and used in conjunction with other commands that
generate per-chunk data, such as :doc:`compute com/chunk
<compute_com_chunk>` or :doc:`compute msd/chunk <compute_msd_chunk>`.
Note that only atoms in the specified group contribute to the
calculation of the *count* attribute. The
:doc:`compute chunk/atom <compute_chunk_atom>` command defines its own group;
atoms will have a chunk ID = 0 if they are not in that group,
signifying they are not assigned to a chunk, and will thus also not
contribute to this calculation. You can specify the "all" group for
this command if you simply want to include atoms with non-zero chunk
IDs.
calculation of the *count* attribute. The :doc:`compute chunk/atom
<compute_chunk_atom>` command defines its own group; atoms will have a
chunk ID = 0 if they are not in that group, signifying they are not
assigned to a chunk, and will thus also not contribute to this
calculation. You can specify the "all" group for this command if you
simply want to include atoms with non-zero chunk IDs.
The *count* attribute is the number of atoms in the chunk.
@ -66,21 +66,24 @@ can only be used if the *compress* keyword was set to *yes* for the
:doc:`compute chunk/atom <compute_chunk_atom>` command referenced by
chunkID. This means that the original chunk IDs (e.g., molecule IDs)
will have been compressed to remove chunk IDs with no atoms assigned
to them. Thus a compressed chunk ID of 3 may correspond to an original
chunk ID (molecule ID in this case) of 415. The *id* attribute will
then be 415 for the third chunk.
to them. Thus a compressed chunk ID of 3 may correspond to an
original chunk ID (molecule ID in this case) of 415. The *id*
attribute will then be 415 for the third chunk.
The *coordN* attributes can only be used if a *binning* style was used
in the :doc:`compute chunk/atom <compute_chunk_atom>` command referenced
by chunkID. For *bin/1d*, *bin/2d*, and *bin/3d* styles the attribute
is the center point of the bin in the corresponding dimension. Style
*bin/1d* only defines a *coord1* attribute. Style *bin/2d* adds a
*coord2* attribute. Style *bin/3d* adds a *coord3* attribute.
Note that if the value of the *units* keyword used in the :doc:`compute chunk/atom command <compute_chunk_atom>` is *box* or *lattice*, the
*coordN* attributes will be in distance :doc:`units <units>`. If the
value of the *units* keyword is *reduced*, the *coordN* attributes
will be in unitless reduced units (0--1).
in the :doc:`compute chunk/atom <compute_chunk_atom>` command
referenced by chunkID. For *bin/1d*, *bin/2d*, and *bin/3d* styles
the attribute is the center point of the bin in the corresponding
dimension. Style *bin/1d* only defines a *coord1* attribute. Style
*bin/2d* adds a *coord2* attribute. Style *bin/3d* adds a *coord3*
attribute.
Note that if the value of the *units* keyword used in the
:doc:`compute chunk/atom command <compute_chunk_atom>` is *box* or
*lattice*, the *coordN* attributes will be in distance :doc:`units
<units>`. If the value of the *units* keyword is *reduced*, the
*coordN* attributes will be in unitless reduced units (0-1).
The simplest way to output the results of the compute property/chunk
calculation to a file is to use the :doc:`fix ave/time <fix_ave_time>`

View File

@ -0,0 +1,114 @@
.. index:: compute property/grid
compute property/grid command
=============================
Syntax
""""""
.. parsed-literal::
compute ID group-ID property/grid Nx Ny Nz input1 input2 ...
* ID, group-ID are documented in :doc:`compute <compute>` command
* property/grid = style name of this compute command
* Nx, Ny, Nz = grid size in each dimension
* input1,etc = one or more attributes
.. parsed-literal::
attributes = id, ix, iy, iz, x, y, z, xs, ys, zs, xc, yc, zc, xsc, ysc, zsc
id = ID of grid cell, x fastest, y next, z slowest
ix,iy,iz = grid indices in each dimension (1 to N inclusive)
x,y,z = coords of lower left corner of grid cell
xs,ys,zs = scaled coords of lower left corner of grid cell (0.0 to 1.0)
xc,yc,zc = coords of center point of grid cell
xsc,ysc,zsc = scaled coords of center point of grid cell (0.0 to 1.0)
Examples
""""""""
.. code-block:: LAMMPS
   compute 1 all property/grid 10 10 20 id ix iy iz
   compute 1 all property/grid 100 100 1 id xc yc zc
Description
"""""""""""
Define a computation that stores the specified attributes of a
distributed grid. In LAMMPS, distributed grids are regular 2d or 3d
grids which overlay a 2d or 3d simulation domain. Each processor owns
the grid cells whose center points lie within its sub-domain. See the
:doc:`Howto grid <Howto_grid>` doc page for details of how distributed
grids can be defined by various commands and referenced.
This compute stores the specified attributes of grids as per-grid data
so they can be accessed by other :doc:`output commands <Howto_output>`
such as :doc:`dump grid <dump>`.
*Nx*, *Ny*, and *Nz* define the size of the grid. For a 2d simulation
*Nz* must be 1. When this compute is used by :doc:`dump grid <dump>`,
to output per-grid values from other computes or fixes, the grid size
specified for this command must be consistent with the grid sizes
used by the other commands.
The *id* attribute stores the grid ID for each grid cell. For a
global grid of size Nx by Ny by Nz (in 3d simulations) the grid IDs
range from 1 to Nx*Ny*Nz. They are ordered with the X index of the 3d
grid varying fastest, then Y, then Z slowest. For 2d grids (in 2d
simulations), the grid IDs range from 1 to Nx*Ny, with X varying
fastest and Y slowest.
The *ix*, *iy*, *iz* attributes are the indices of a grid cell in
each dimension. They range from 1 to Nx inclusive in the X dimension,
and similar for Y and Z.
The *x*, *y*, *z* attributes are the coordinates of the lower left
corner point of each grid cell.
The *xs*, *ys*, *zs* attributes are also coordinates of the lower left
corner point of each grid cell, except in scaled coordinates, where
the lower-left corner of the entire simulation box is (0,0,0) and the
upper right corner is (1,1,1).
The *xc*, *yc*, *zc* attributes are the coordinates of the center
point of each grid cell.
The *xsc*, *ysc*, *zsc* attributes are also coordinates of the center
point of each grid cell, except in scaled coordinates, where the
lower-left corner of the entire simulation box is (0,0,0) and the upper
right corner is (1,1,1).
For :doc:`triclinic simulation boxes <Howto_triclinic>`, the grid
point coordinates for (x,y,z) and (xc,yc,zc) will reflect the
triclinic geometry. For (xs,ys,zs) and (xsc,ysc,zsc), the coordinates
are the same for orthogonal versus triclinic boxes.
Output info
"""""""""""
This compute calculates a per-grid vector or array depending on the
number of input values. The length of the vector or number of array
rows (distributed across all processors) is Nx * Ny * Nz. For access
by other commands, the name of the single grid produced by this
command is "grid". The name of its per-grid data is "data".
The (x,y,z) and (xc,yc,zc) coordinates are in distance :doc:`units
<units>`.
Restrictions
""""""""""""
For 2d simulations, the attributes which refer to
the Z dimension cannot be used.
Related commands
""""""""""""""""
:doc:`dump grid <dump>`
Default
"""""""
none

View File

@ -85,10 +85,11 @@ Description
"""""""""""
Define a computation that stores the specified attributes as local
data so it can be accessed by other :doc:`output commands <Howto_output>`. If the input attributes refer to bond
information, then the number of datums generated, aggregated across
all processors, equals the number of bonds in the system. Ditto for
pairs, angles, etc.
data so it can be accessed by other :doc:`output commands
<Howto_output>`. If the input attributes refer to bond information,
then the number of datums generated, aggregated across all processors,
equals the number of bonds in the system. Ditto for pairs, angles,
etc.
If multiple attributes are specified then they must all generate the
same amount of information, so that the resulting local array has the
@ -129,17 +130,20 @@ specified compute group. Likewise for angles, dihedrals, etc.
For bonds and angles, any bonds/angles that have been broken by setting
their bond/angle type to 0 will not be included. Bonds/angles that
have been turned off (see the :doc:`fix shake <fix_shake>` or
:doc:`delete_bonds <delete_bonds>` commands) by setting their bond/angle
type negative are written into the file. This is consistent with the
:doc:`compute bond/local <compute_bond_local>` and :doc:`compute angle/local <compute_angle_local>` commands
:doc:`delete_bonds <delete_bonds>` commands) by setting their
bond/angle type negative are written into the file. This is
consistent with the :doc:`compute bond/local <compute_bond_local>` and
:doc:`compute angle/local <compute_angle_local>` commands.
Note that as atoms migrate from processor to processor, there will be
no consistent ordering of the entries within the local vector or array
from one timestep to the next. The only consistency that is
guaranteed is that the ordering on a particular timestep will be the
same for local vectors or arrays generated by other compute commands.
For example, output from the :doc:`compute bond/local <compute_bond_local>` command can be combined with bond
atom indices from this command and output by the :doc:`dump local <dump>` command in a consistent way.
For example, output from the :doc:`compute bond/local
<compute_bond_local>` command can be combined with bond atom indices
from this command and output by the :doc:`dump local <dump>` command
in a consistent way.
The *natom1* and *natom2* or *patom1* and *patom2* attributes refer
to the atom IDs of the 2 atoms in each pairwise interaction computed
@ -177,9 +181,8 @@ the array is the number of bonds, angles, etc. If a single input is
specified, a local vector is produced. If two or more inputs are
specified, a local array is produced where the number of columns = the
number of inputs. The vector or array can be accessed by any command
that uses local values from a compute as input. See the
:doc:`Howto output <Howto_output>` page for an overview of LAMMPS output
options.
that uses local values from a compute as input. See the :doc:`Howto
output <Howto_output>` page for an overview of LAMMPS output options.
The vector or array values will be integers that correspond to the
specified attribute.

View File

@ -29,8 +29,9 @@ Description
Define a computation that calculates the temperature of a group of
atoms. A compute of this style can be used by any command that
computes a temperature (e.g., :doc:`thermo_modify <thermo_modify>`,
:doc:`fix temp/rescale <fix_temp_rescale>`, :doc:`fix npt <fix_nh>`)
computes a temperature, e.g. :doc:`thermo_modify <thermo_modify>`,
:doc:`fix temp/rescale <fix_temp_rescale>`, :doc:`fix npt <fix_nh>`,
etc.
The temperature is calculated by the formula
@ -39,17 +40,17 @@ The temperature is calculated by the formula
\text{KE} = \frac{\text{dim}}{2} N k_B T,
where KE = total kinetic energy of the group of atoms (sum of
:math:`\frac12 m v^2`), dim = 2 or 3 is the dimensionality of the simulation,
:math:`N` is the number of atoms in the group, :math:`k_B` is the Boltzmann
constant, and :math:`T` is the absolute temperature.
:math:`\frac12 m v^2`), dim = 2 or 3 is the dimensionality of the
simulation, :math:`N` is the number of atoms in the group, :math:`k_B`
is the Boltzmann constant, and :math:`T` is the absolute temperature.
A kinetic energy tensor, stored as a six-element vector, is also
calculated by this compute for use in the computation of a pressure
tensor. The formula for the components of the tensor is the same as
the above formula, except that :math:`v^2` is replaced by
:math:`v_x v_y` for the :math:`xy` component, and so on.
The six components of the vector are ordered :math:`xx`, :math:`yy`,
:math:`zz`, :math:`xy`, :math:`xz`, :math:`yz`.
the above formula, except that :math:`v^2` is replaced by :math:`v_x
v_y` for the :math:`xy` component, and so on. The six components of
the vector are ordered :math:`xx`, :math:`yy`, :math:`zz`, :math:`xy`,
:math:`xz`, :math:`yz`.
The number of atoms contributing to the temperature is assumed to be
constant for the duration of the run; use the *dynamic* option of the
@ -85,11 +86,10 @@ Output info
"""""""""""
This compute calculates a global scalar (the temperature) and a global
vector of length six (KE tensor), which can be accessed by indices 1--6.
These values can be used by any command that uses global scalar or
vector values from a compute as input. See the
:doc:`Howto output <Howto_output>` page for an overview of LAMMPS output
options.
vector of length six (KE tensor), which can be accessed by indices
1--6. These values can be used by any command that uses global scalar
or vector values from a compute as input. See the :doc:`Howto output
<Howto_output>` page for an overview of LAMMPS output options.
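For example (IDs and names are arbitrary), the scalar and one tensor
component can be referenced in thermodynamic output:

.. code-block:: LAMMPS

   # illustrative only
   compute t2 all temp
   variable kexy equal c_t2[4]
   thermo_style custom step c_t2 v_kexy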
The scalar value calculated by this compute is "intensive". The
vector values are "extensive".
@ -104,7 +104,9 @@ Restrictions
Related commands
""""""""""""""""
:doc:`compute temp/partial <compute_temp_partial>`, :doc:`compute temp/region <compute_temp_region>`, :doc:`compute pressure <compute_pressure>`
:doc:`compute temp/partial <compute_temp_partial>`,
:doc:`compute temp/region <compute_temp_region>`,
:doc:`compute pressure <compute_pressure>`
Default
"""""""

View File

@ -3,6 +3,8 @@
.. index:: dump cfg
.. index:: dump custom
.. index:: dump dcd
.. index:: dump grid
.. index:: dump grid/vtk
.. index:: dump local
.. index:: dump xtc
.. index:: dump yaml
@ -57,46 +59,48 @@ Syntax
.. code-block:: LAMMPS
dump ID group-ID style N file args
dump ID group-ID style N file attribute1 attribute2 ...
* ID = user-assigned name for the dump
* group-ID = ID of the group of atoms to be dumped
* style = *atom* or *atom/adios* or *atom/gz* or *atom/zstd* or *atom/mpiio* or *cfg* or *cfg/gz* or *cfg/zstd* or *cfg/mpiio* or *cfg/uef* or *custom* or *custom/gz* or *custom/zstd* or *custom/mpiio* or *custom/adios* or *dcd* or *h5md* or *image* or *local* or *local/gz* or *local/zstd* or *molfile* or *movie* or *netcdf* or *netcdf/mpiio* or *vtk* or *xtc* or *xyz* or *xyz/gz* or *xyz/zstd* or *xyz/mpiio* or *yaml*
* style = *atom* or *atom/adios* or *atom/gz* or *atom/zstd* or *atom/mpiio* or *cfg* or *cfg/gz* or *cfg/zstd* or *cfg/mpiio* or *cfg/uef* or *custom* or *custom/gz* or *custom/zstd* or *custom/mpiio* or *custom/adios* or *dcd* or *grid* or *grid/vtk* or *h5md* or *image* or *local* or *local/gz* or *local/zstd* or *molfile* or *movie* or *netcdf* or *netcdf/mpiio* or *vtk* or *xtc* or *xyz* or *xyz/gz* or *xyz/zstd* or *xyz/mpiio* or *yaml*
* N = dump on timesteps which are multiples of N
* file = name of file to write dump info to
* args = list of arguments for a particular style
* attribute1,attribute2,... = list of attributes for a particular style
.. parsed-literal::
*atom* args = none
*atom/adios* args = none, discussed on :doc:`dump atom/adios <dump_adios>` page
*atom/gz* args = none
*atom/zstd* args = none
*atom/mpiio* args = none
*cfg* args = same as *custom* args, see below
*cfg/gz* args = same as *custom* args, see below
*cfg/zstd* args = same as *custom* args, see below
*cfg/mpiio* args = same as *custom* args, see below
*cfg/uef* args = same as *custom* args, discussed on :doc:`dump cfg/uef <dump_cfg_uef>` page
*custom*, *custom/gz*, *custom/zstd*, *custom/mpiio* args = see below
*custom/adios* args = same as *custom* args, discussed on :doc:`dump custom/adios <dump_adios>` page
*dcd* args = none
*h5md* args = discussed on :doc:`dump h5md <dump_h5md>` page
*image* args = discussed on :doc:`dump image <dump_image>` page
*local*, *local/gz*, *local/zstd* args = see below
*molfile* args = discussed on :doc:`dump molfile <dump_molfile>` page
*movie* args = discussed on :doc:`dump image <dump_image>` page
*netcdf* args = discussed on :doc:`dump netcdf <dump_netcdf>` page
*netcdf/mpiio* args = discussed on :doc:`dump netcdf <dump_netcdf>` page
*vtk* args = same as *custom* args, see below, also :doc:`dump vtk <dump_vtk>` page
*xtc* args = none
*xyz* args = none
*xyz/gz* args = none
*xyz/zstd* args = none
*xyz/mpiio* args = none
*yaml* args = same as *custom* args, see below
*atom* attributes = none
*atom/adios* attributes = none, discussed on :doc:`dump atom/adios <dump_adios>` page
*atom/gz* attributes = none
*atom/zstd* attributes = none
*atom/mpiio* attributes = none
*cfg* attributes = same as *custom* attributes, see below
*cfg/gz* attributes = same as *custom* attributes, see below
*cfg/zstd* attributes = same as *custom* attributes, see below
*cfg/mpiio* attributes = same as *custom* attributes, see below
*cfg/uef* attributes = same as *custom* attributes, discussed on :doc:`dump cfg/uef <dump_cfg_uef>` page
*custom*, *custom/gz*, *custom/zstd*, *custom/mpiio* attributes = see below
*custom/adios* attributes = same as *custom* attributes, discussed on :doc:`dump custom/adios <dump_adios>` page
*dcd* attributes = none
*h5md* attributes = discussed on :doc:`dump h5md <dump_h5md>` page
*grid* attributes = see below
*grid/vtk* attributes = see below
*image* attributes = discussed on :doc:`dump image <dump_image>` page
*local*, *local/gz*, *local/zstd* attributes = see below
*molfile* attributes = discussed on :doc:`dump molfile <dump_molfile>` page
*movie* attributes = discussed on :doc:`dump image <dump_image>` page
*netcdf* attributes = discussed on :doc:`dump netcdf <dump_netcdf>` page
*netcdf/mpiio* attributes = discussed on :doc:`dump netcdf <dump_netcdf>` page
*vtk* attributes = same as *custom* attributes, see below, also :doc:`dump vtk <dump_vtk>` page
*xtc* attributes = none
*xyz* attributes = none
*xyz/gz* attributes = none
*xyz/zstd* attributes = none
*xyz/mpiio* attributes = none
*yaml* attributes = same as *custom* attributes, see below
* *custom* or *custom/gz* or *custom/zstd* or *custom/mpiio* or *cfg* or *cfg/gz* or *cfg/zstd* or *cfg/mpiio* or *cfg/uef* or *netcdf* or *netcdf/mpiio* or *yaml* args = list of atom attributes
* *custom* or *custom/gz* or *custom/zstd* or *custom/mpiio* or *cfg* or *cfg/gz* or *cfg/zstd* or *cfg/mpiio* or *cfg/uef* or *netcdf* or *netcdf/mpiio* or *yaml* attributes:
.. parsed-literal::
@ -143,7 +147,7 @@ Syntax
i2_name[I] = Ith column of custom integer array with name, I can include wildcard (see below)
d2_name[I] = Ith column of custom floating point vector with name, I can include wildcard (see below)
* *local* or *local/gz* or *local/zstd* args = list of local attributes
* *local* or *local/gz* or *local/zstd* attributes:
.. parsed-literal::
@ -154,6 +158,18 @@ Syntax
f_ID = local vector calculated by a fix with ID
f_ID[I] = Ith column of local array calculated by a fix with ID, I can include wildcard (see below)
* *grid* or *grid/vtk* attributes:
.. parsed-literal::
possible attributes = c_ID:gname:dname, c_ID:gname:dname[I], f_ID:gname:dname, f_ID:gname:dname[I]
gname = name of grid defined by compute or fix
dname = name of data field defined by compute or fix
c_ID = per-grid vector calculated by a compute with ID
c_ID[I] = Ith column of per-grid array calculated by a compute with ID, I can include wildcard (see below)
f_ID = per-grid vector calculated by a fix with ID
f_ID[I] = Ith column of per-grid array calculated by a fix with ID, I can include wildcard (see below)
Examples
""""""""
@ -176,24 +192,32 @@ Examples
Description
"""""""""""
Dump a snapshot of atom quantities to one or more files once every
:math:`N` timesteps in one of several styles. The *image* and *movie*
styles are the exception: the *image* style renders a JPG, PNG, or PPM
image file of the atom configuration every :math:`N` timesteps while
the *movie* style combines and compresses them into a movie file; both
are discussed in detail on the :doc:`dump image <dump_image>` page.
The timesteps on which dump output is written can also be controlled
by a variable. See the :doc:`dump_modify every <dump_modify>`
command.
Dump a snapshot of quantities to one or more files once every
:math:`N` timesteps in one of several styles. The timesteps on which
dump output is written can also be controlled by a variable. See the
:doc:`dump_modify every <dump_modify>` command.
Almost all the styles output per-atom data, i.e. one or more values
per atom. The exceptions are as follows. The *local* styles output
one or more values per bond (angle, dihedral, improper) or per pair of
interacting atoms (force or neighbor interactions). The *grid* styles
output one or more values per grid cell, which are produced by other
commands which overlay the simulation domain with a regular grid. See
the :doc:`Howto grid <Howto_grid>` doc page for details. The *image*
style renders a JPG, PNG, or PPM image file of the system for each
snapshot, while the *movie* style combines and compresses the series
of images into a movie file; both styles are discussed in detail on
the :doc:`dump image <dump_image>` page.
Only information for atoms in the specified group is dumped. The
:doc:`dump_modify thresh and region and refresh <dump_modify>` commands
can also alter what atoms are included. Not all styles support
these options; see details on the :doc:`dump_modify <dump_modify>` doc page.
:doc:`dump_modify thresh and region and refresh <dump_modify>`
commands can also alter what atoms are included. Not all styles
support these options; see details on the :doc:`dump_modify
<dump_modify>` doc page.
As described below, the filename determines the kind of output (text
or binary or gzipped, one big file or one per timestep, one big file
or multiple smaller files).
As described below, the filename determines the kind of output: text
or binary or gzipped, one big file or one per timestep, one file for
all the processors or multiple smaller files.
.. note::
@ -207,74 +231,61 @@ or multiple smaller files).
.. note::
Unless the :doc:`dump_modify sort <dump_modify>` option is
invoked, the lines of atom information written to dump files
(typically one line per atom) will be in an indeterminate order for
each snapshot. This is even true when running on a single processor,
if the :doc:`atom_modify sort <atom_modify>` option is on, which it is
by default. In this case atoms are re-ordered periodically during a
simulation, due to spatial sorting. It is also true when running in
parallel, because data for a single snapshot is collected from
multiple processors, each of which owns a subset of the atoms.
Unless the :doc:`dump_modify sort <dump_modify>` option is invoked,
the lines of atom or grid information written to dump files
(typically one line per atom or grid cell) will be in an
indeterminate order for each snapshot. This is even true when
running on a single processor, if the :doc:`atom_modify sort
<atom_modify>` option is on, which it is by default. In this case
atoms are re-ordered periodically during a simulation, due to
spatial sorting. It is also true when running in parallel, because
data for a single snapshot is collected from multiple processors,
each of which owns a subset of the atoms.
For the *atom*, *custom*, *cfg*, and *local* styles, sorting is off by
default. For the *dcd*, *xtc*, *xyz*, and *molfile* styles, sorting
by atom ID is on by default. See the :doc:`dump_modify <dump_modify>`
page for details.
For the *atom*, *custom*, *cfg*, *grid*, and *local* styles, sorting
is off by default. For the *dcd*, *grid/vtk*, *xtc*, *xyz*, and
*molfile* styles, sorting by atom ID or grid ID is on by default. See
the :doc:`dump_modify <dump_modify>` page for details.
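For example, sorted per-atom output can be requested explicitly:

.. code-block:: LAMMPS

   dump 1 all custom 1000 tmp.dump id type x y z
   dump_modify 1 sort id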
The *atom/gz*, *cfg/gz*, *custom/gz*, *local/gz*, and *xyz/gz* styles
are identical in command syntax to the corresponding styles without
"gz", however, they generate compressed files using the zlib
library. Thus the filename suffix ".gz" is mandatory. This is an
alternative approach to writing compressed files via a pipe, as done by
the regular dump styles, which may be required on clusters where the
interface to the high-speed network disallows using the fork() library
call (which is needed for a pipe). For the remainder of this page, you
should thus consider the *atom* and *atom/gz* styles (etc.) to be
inter-changeable, with the exception of the required filename suffix.
The *style* keyword determines what kind of data is written to the
dump file(s) and in what format.
Similarly, the *atom/zstd*, *cfg/zstd*, *custom/zstd*, *local/zstd*, and
*xyz/zstd* styles are identical to the gz styles, but use the Zstd
compression library instead and require the ".zst" suffix. See the
:doc:`dump_modify <dump_modify>` page for details on how to control the
compression level in both variants.
Note that *atom*, *custom*, *dcd*, *xtc*, *xyz*, and *yaml* style dump
files can be read directly by `VMD <https://www.ks.uiuc.edu/Research/vmd>`_,
a popular tool for visualizing and analyzing trajectories from atomic
and molecular systems. For reading *netcdf* style dump files, the
netcdf plugin needs to be recompiled from source using a NetCDF version
compatible with the one used by LAMMPS. The bundled plugin binary
uses a very old version of NetCDF that is not compatible with LAMMPS.
As explained below, the *atom/mpiio*, *cfg/mpiio*, *custom/mpiio*, and
*xyz/mpiio* styles are identical in command syntax and in the format of
the dump files they create, to the corresponding styles without "mpiio",
except the single dump file they produce is written in parallel via the
MPI-IO library. For the remainder of this page, you should thus
consider the *atom* and *atom/mpiio* styles (etc.) to be
inter-changeable. The one exception is how the filename is specified
for the MPI-IO styles, as explained below.
Likewise the `OVITO visualization package <https://www.ovito.org>`_,
popular for materials modeling, can read the *atom*, *custom*,
*local*, *xtc*, *cfg*, *netcdf*, and *xyz* style atom dump files
directly. With version 3.8 and above, OVITO can also read and
visualize *grid* style dump files with grid cell data, including
iso-surface images of the grid cell values.
.. warning::
The MPIIO package is currently unmaintained and has become
unreliable. Use with caution.
The precision of values output to text-based dump files can be
controlled by the :doc:`dump_modify format <dump_modify>` command and
its options.
Note that settings made via the :doc:`dump_modify <dump_modify>`
command can also alter the format of individual values and content of
the dump file itself. This includes the precision of values output to
text-based dump files which is controlled by the :doc:`dump_modify
format <dump_modify>` command and its options.
----------
The *style* keyword determines what atom quantities are written to the
file and in what format. Settings made via the
:doc:`dump_modify <dump_modify>` command can also alter the format of
individual values and the file itself.
Format of native LAMMPS format dump files:
The *atom*, *local*, and *custom* styles create files in a simple text
format that is self-explanatory when viewing a dump file. Some of the
LAMMPS post-processing tools described on the :doc:`Tools <Tools>` doc
page, including `Pizza.py <https://lammps.github.io/pizza>`_,
work with this format, as does the :doc:`rerun <rerun>` command.
The *atom*, *custom*, *grid*, and *local* styles create files in a
simple LAMMPS-specific text format that is self-explanatory when
viewing a dump file. Many post-processing tools either included with
LAMMPS or third-party tools can read this format, as does the
:doc:`rerun <rerun>` command. See tools described on the :doc:`Tools
<Tools>` doc page for examples, including `Pizza.py
<https://lammps.github.io/pizza>`_.
For post-processing purposes the *atom*, *local*, and *custom* text
files are self-describing in the following sense.
The dimensions of the simulation box are included in each snapshot.
For an orthogonal simulation box this information is formatted as:
For all these styles, the dimensions of the simulation box are
included in each snapshot. For an orthogonal simulation box this
information is formatted as:
.. parsed-literal::
@ -316,10 +327,13 @@ bounding box extents (xlo_bound, xhi_bound, etc.) are calculated from the
triclinic parameters, and how to transform those parameters to and
from other commonly used triclinic representations.
The "ITEM: ATOMS" line in each snapshot lists column descriptors for
the per-atom lines that follow. For example, the descriptors would be
"id type xs ys zs" for the default *atom* style, and would be the atom
attributes you specify in the dump command for the *custom* style.
The *atom* and *custom* styles output a "ITEM: NUMBER OF ATOMS" line
with the count of atoms in the snapshot. Likewise they output an
"ITEM: ATOMS" line which includes column descriptors for the per-atom
lines that follow. For example, the descriptors would be "id type xs
ys zs" for the default *atom* style, and would be the atom attributes
you specify in the dump command for the *custom* style. Each
subsequent line will list the data for a single atom.
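As a minimal sketch, the header portion of one snapshot (here with two
atoms and the default *atom* style columns) looks like:

.. parsed-literal::

   ITEM: TIMESTEP
   0
   ITEM: NUMBER OF ATOMS
   2
   ITEM: BOX BOUNDS pp pp pp
   0.0 10.0
   0.0 10.0
   0.0 10.0
   ITEM: ATOMS id type xs ys zs
   1 1 0.0 0.0 0.0
   2 1 0.5 0.5 0.5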
For style *atom*, atom coordinates are written to the file, along with
the atom ID and atom type. By default, atom coords are written in a
@ -332,12 +346,31 @@ added for each atom via dump_modify.
Style *custom* allows you to specify a list of atom attributes to be
written to the dump file for each atom. Possible attributes are
listed above and will appear in the order specified. You cannot
specify a quantity that is not defined for a particular simulation---such as
*q* for atom style *bond*, since that atom style does not
assign charges. Dumps occur at the very end of a timestep, so atom
attributes will include effects due to fixes that are applied during
the timestep. An explanation of the possible dump custom attributes
is given below.
specify a quantity that is not defined for a particular
simulation---such as *q* for atom style *bond*, since that atom style
does not assign charges. Dumps occur at the very end of a timestep,
so atom attributes will include effects due to fixes that are applied
during the timestep. An explanation of the possible dump custom
attributes is given below.
.. versionadded:: 22Dec2022
For style *grid* the extent of the Nx by Ny by Nz grid that overlays
the simulation domain is output with each snapshot:
.. parsed-literal::
ITEM: GRID EXTENT
nx ny nz
For 2d simulations, nz will be 1. There will also be an "ITEM: GRID
DATA" line which includes column descriptors for the per grid cell
data. Each subsequent line (Nx * Ny * Nz lines) will list the data
for a single grid cell. If grid cell IDs are included in the output
via the :doc:`compute property/grid <compute_property_grid>` command,
then the IDs will range from 1 to N = Nx*Ny*Nz. The ordering of IDs
is with the x index varying fastest, then the y index, and the z index
varying slowest.
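As a hedged illustration (the compute ID below is made up, and the
*grid* and *data* names are assumptions; see the :doc:`compute
property/grid <compute_property_grid>` page for the names it actually
defines), grid cell IDs could be written with:

.. code-block:: LAMMPS

   # hypothetical compute ID and grid/data field names
   compute gprop all property/grid 10 10 10 id
   dump gdump all grid 500 tmp.grid c_gprop:grid:data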
For style *local*, local output generated by :doc:`computes <compute>`
and :doc:`fixes <fix>` is used to generate lines of output that is
@ -351,6 +384,17 @@ generate information on bonds, angles, etc. that can be cut and pasted
directly into a data file read by the :doc:`read_data <read_data>`
command.
----------
Dump files in other popular formats:
.. note::
This section only discusses file formats relevant to this doc page.
The top of this page has links to other dump commands (with their
own pages) which write files in additional popular formats.
Style *cfg* has the same command syntax as style *custom* and writes
extended CFG format files, as used by the `AtomEye
<http://li.mit.edu/Archive/Graphics/A/>`_ visualization package.
@ -387,15 +431,15 @@ periodic box. Note that these coordinates may thus be far outside
the box size stored with the snapshot.
The *xtc* style writes XTC files, a compressed trajectory format used
by the GROMACS molecular dynamics package, and described
`here <https://manual.gromacs.org/current/reference-manual/file-formats.html#xtc>`_.
by the GROMACS molecular dynamics package, and described `here
<https://manual.gromacs.org/current/reference-manual/file-formats.html#xtc>`_.
The precision used in XTC files can be adjusted via the
:doc:`dump_modify <dump_modify>` command. The default value of 1000
means that coordinates are stored to 1/1000 nanometer accuracy. XTC
files are portable binary files written in the NFS XDR data format,
so that any machine which supports XDR should be able to read them.
The number of atoms per snapshot cannot change with the *xtc* style.
The *unwrap* option of the :doc:`dump_modify <dump_modify>` command allows
files are portable binary files written in the NFS XDR data format, so
that any machine which supports XDR should be able to read them. The
number of atoms per snapshot cannot change with the *xtc* style. The
*unwrap* option of the :doc:`dump_modify <dump_modify>` command allows
XTC coordinates to be written "unwrapped" by the image flags for each
atom. Unwrapped means that if the atom has passed through a periodic
boundary one or more times, the value is printed for what the
@ -404,27 +448,41 @@ box. Note that these coordinates may thus be far outside the box size
stored with the snapshot.
The *xyz* style writes XYZ files, which is a simple text-based
coordinate format that many codes can read. Specifically it has
a line with the number of atoms, then a comment line that is
usually ignored followed by one line per atom with the atom type
and the :math:`x`-, :math:`y`-, and :math:`z`-coordinate of that atom.
You can use the :doc:`dump_modify element <dump_modify>` option to change the
output from using the (numerical) atom type to an element name (or some other
label). This will help many visualization programs to guess bonds and colors.
coordinate format that many codes can read. Specifically it has a line
with the number of atoms, then a comment line that is usually ignored
followed by one line per atom with the atom type and the :math:`x`-,
:math:`y`-, and :math:`z`-coordinate of that atom. You can use the
:doc:`dump_modify element <dump_modify>` option to change the output
from using the (numerical) atom type to an element name (or some other
label). This will help many visualization programs to guess bonds and
colors.
.. versionadded:: 22Dec2022
The *grid/vtk* style writes VTK files for grid data on a regular
rectilinear grid. Its content is conceptually similar to that of the
text file produced by the *grid* style, except that it is in an
XML-based format which visualization programs that support the VTK
format can read, e.g. the `ParaView tool <https://www.paraview.org>`_.
For this style, either 1 or 3 per-grid-cell attributes must be
specified.  If a single value is specified, it is encoded as a scalar
quantity.  If 3 values are specified, they are encoded in the VTK file
as a vector quantity (for each
grid cell). The filename for this style must include a "\*" wildcard
character to produce one file per snapshot; see details below.
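A minimal sketch (again with an assumed compute ID and assumed
grid/data names), writing 3 per-grid values that are encoded as a
vector quantity, one VTK file per snapshot:

.. code-block:: LAMMPS

   # assumed names; the "*" in the filename is required for grid/vtk
   dump gvtk all grid/vtk 1000 tmp.*.vtr c_gprop:grid:data[1] c_gprop:grid:data[2] c_gprop:grid:data[3]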
.. versionadded:: 4May2022
Dump style *yaml* has the same command syntax as style *custom* and
writes YAML format files that can be easily parsed by a variety of data
processing tools and programming languages. Each timestep will be
written as a YAML "document" (i.e., starts with "---" and ends with
writes YAML format files that can be easily parsed by a variety of
data processing tools and programming languages. Each timestep will
be written as a YAML "document" (i.e., starts with "---" and ends with
"..."). The style supports writing one file per timestep through the
"\*" wildcard but not multi-processor outputs with the "%" token in the
filename. In addition to per-atom data, :doc:`thermo <thermo>` data can
be included in the *yaml* style dump file using the :doc:`dump_modify
thermo yes <dump_modify>`. The data included in the dump file uses the
"thermo" tag and is otherwise identical to data specified by the
:doc:`thermo_style <thermo_style>` command.
"\*" wildcard but not multi-processor outputs with the "%" token in
the filename. In addition to per-atom data, :doc:`thermo <thermo>`
data can be included in the *yaml* style dump file using the
:doc:`dump_modify thermo yes <dump_modify>`. The data included in the
dump file uses the "thermo" tag and is otherwise identical to data
specified by the :doc:`thermo_style <thermo_style>` command.
Below is an example for a YAML format dump created by the following commands.
@ -435,13 +493,13 @@ Below is an example for a YAML format dump created by the following commands.
The tags "time", "units", and "thermo" are optional and enabled by the
dump_modify command. The list under the "box" tag has three lines for
orthogonal boxes and four lines for triclinic boxes, where the first three are
the box boundaries and the fourth the three tilt factors (:math:`xy`,
:math:`xz`, :math:`yz`). The "thermo" data follows the format of the *yaml*
thermo style. The "keywords" tag lists the per-atom properties contained in
the "data" columns, which contain a list with one line per atom. The keywords
may be renamed using the dump_modify command same as for the *custom* dump
style.
orthogonal boxes and four lines for triclinic boxes, where the first
three are the box boundaries and the fourth the three tilt factors
(:math:`xy`, :math:`xz`, :math:`yz`). The "thermo" data follows the
format of the *yaml* thermo style. The "keywords" tag lists the
per-atom properties contained in the "data" columns, which contain a
list with one line per atom. The keywords may be renamed using the
dump_modify command, the same as for the *custom* dump style.
.. code-block:: yaml
@ -479,11 +537,7 @@ style.
----------
Note that *atom*, *custom*, *dcd*, *xtc*, and *xyz* style dump files
can be read directly by `VMD <https://www.ks.uiuc.edu/Research/vmd>`_, a
popular molecular viewing program.
----------
Frequency of dump output:
Dumps are performed on timesteps that are a multiple of :math:`N`
(including timestep 0) and on the last timestep of a minimization if
@ -508,29 +562,35 @@ every/time <dump_modify>` command can be used. This can be useful
when the timestep size varies during a simulation run, e.g. by use of
the :doc:`fix dt/reset <fix_dt_reset>` command.
The specified filename determines how the dump file(s) is written.
The default is to write one large text file, which is opened when the
dump command is invoked and closed when an :doc:`undump <undump>`
command is used or when LAMMPS exits. For the *dcd* and *xtc* styles,
this is a single large binary file.
----------
Dump filenames can contain two wildcard characters. If a "\*"
character appears in the filename, then one file per snapshot is
written and the "\*" character is replaced with the timestep value.
For example, tmp.dump.\* becomes tmp.dump.0, tmp.dump.10000,
tmp.dump.20000, etc. This option is not available for the *dcd* and
*xtc* styles. Note that the :doc:`dump_modify pad <dump_modify>`
command can be used to insure all timestep numbers are the same length
(e.g., 00010), which can make it easier to read a series of dump files
in order with some post-processing tools.
Dump filenames:
The specified dump filename determines how the dump file(s) is
written. The default is to write one large text file, which is opened
when the dump command is invoked and closed when an :doc:`undump
<undump>` command is used or when LAMMPS exits. For the *dcd* and
*xtc* styles, this is a single large binary file.
Many of the styles allow dump filenames to contain either or both of
two wildcard characters. If a "\*" character appears in the filename,
then one file per snapshot is written and the "\*" character is
replaced with the timestep value. For example, tmp.dump.\* becomes
tmp.dump.0, tmp.dump.10000, tmp.dump.20000, etc. This option is not
available for the *dcd* and *xtc* styles. Note that the
:doc:`dump_modify pad <dump_modify>` command can be used to insure all
timestep numbers are the same length (e.g., 00010), which can make it
easier to read a series of dump files in order with some
post-processing tools.
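For example:

.. code-block:: LAMMPS

   dump 1 all atom 10000 tmp.dump.*
   dump_modify 1 pad 6     # filenames become tmp.dump.000000, tmp.dump.010000, ...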
If a "%" character appears in the filename, then each of P processors
writes a portion of the dump file, and the "%" character is replaced
with the processor ID from :math:`0` to :math:`P-1`. For example, tmp.dump.%
becomes tmp.dump.0, tmp.dump.1, ... tmp.dump.:math:`P-1`, etc. This creates
smaller files and can be a fast mode of output on parallel machines that
support parallel I/O for output. This option is **not** available for the
*dcd*, *xtc*, *xyz*, and *yaml* styles.
with the processor ID from :math:`0` to :math:`P-1`. For example,
tmp.dump.% becomes tmp.dump.0, tmp.dump.1, ... tmp.dump.:math:`P-1`,
etc. This creates smaller files and can be a fast mode of output on
parallel machines that support parallel I/O for output. This option is
**not** available for the *dcd*, *xtc*, *xyz*, *grid/vtk*, and *yaml*
styles.
By default, :math:`P` is the number of processors, meaning one file per
processor, but :math:`P` can be set to a smaller value via the *nfile* or
@ -541,47 +601,41 @@ when running on large numbers of processors.
Note that using the "\*" and "%" characters together can produce a
large number of small dump files!
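For example, the following sketch writes at most 4 files per snapshot
regardless of the processor count:

.. code-block:: LAMMPS

   dump 1 all custom 5000 tmp.dump.% id type x y z
   dump_modify 1 nfile 4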
For the *atom/mpiio*, *cfg/mpiio*, *custom/mpiio*, and *xyz/mpiio*
styles, a single dump file is written in parallel via the MPI-IO
library, which is part of the MPI standard for versions 2.0 and above.
Using MPI-IO requires two steps. First, build LAMMPS with its MPIIO
package installed, viz.,
.. code-block:: bash
make yes-mpiio # installs the MPIIO package
make mpi # build LAMMPS for your platform
Second, use a dump filename which contains ".mpiio". Note that it does
not have to end in ".mpiio", just contain those characters. Unlike
MPI-IO restart files, which must be both written and read using MPI-IO,
the dump files produced by these MPI-IO styles are identical in format
to the files produced by their non-MPI-IO style counterparts. This
means you can write a dump file using MPI-IO and use the :doc:`read_dump
For styles that end with *mpiio*, an ".mpiio" must appear somewhere in
the specified filename. These styles write their dump file in
parallel via the MPI-IO library, which is part of the MPI standard for
versions 2.0 and above. Note these styles are identical in command
syntax to the corresponding styles without "mpiio". Likewise, the
dump files produced by these MPI-IO styles are identical in format to
the files produced by their non-MPI-IO style counterparts. This means
you can write a dump file using MPI-IO and use the :doc:`read_dump
<read_dump>` command or perform other post-processing, just as if the
dump file was not written using MPI-IO.
Because MPI-IO dump files are one large file which all processors
write to, you cannot use the "%" wildcard character described above in
the filename. However you can use the ".bin" or ".lammpsbin" suffix
described below. Again, this file will be written in parallel and
have the same binary format as if it were written without MPI-IO.
.. warning::
The MPIIO package is currently unmaintained and has become
unreliable. Use with caution.
The MPIIO package within LAMMPS is currently unmaintained and has
become unreliable. Use with caution.
Note that MPI-IO dump files are one large file which all processors
write to. You thus cannot use the "%" wildcard character described
above in the filename since that specifies generation of multiple files.
You can use the ".bin" or ".lammpsbin" suffix described below in an
MPI-IO dump file; again this file will be written in parallel and have
the same binary format as if it were written without MPI-IO.
----------
If the filename ends with ".bin" or ".lammpsbin", the dump file (or
files, if "\*" or "%" is also used) is written in binary format. A
binary dump file will be about the same size as a text version, but will
typically write out much faster. Of course, when post-processing, you
will need to convert it back to text format (see the :ref:`binary2txt
tool <binary>`) or write your own code to read the binary file. The
format of the binary file can be understood by looking at the
:file:`tools/binary2txt.cpp` file. This option is only available for
the *atom* and *custom* styles.
Compression of dump file data:
If the specified filename ends with ".bin" or ".lammpsbin", the dump
file (or files, if "\*" or "%" is also used) is written in binary
format. A binary dump file will be about the same size as a text
version, but will typically write out much faster. Of course, when
post-processing, you will need to convert it back to text format (see
the :ref:`binary2txt tool <binary>`) or write your own code to read
the binary file. The format of the binary file can be understood by
looking at the :file:`tools/binary2txt.cpp` file. This option is only
available for the *atom* and *custom* styles.
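For example, a binary dump that could later be converted back to text
with the binary2txt tool:

.. code-block:: LAMMPS

   dump 1 all custom 1000 tmp.dump.bin id type x y z vx vy vz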
If the filename ends with ".gz", the dump file (or files, if "\*" or "%"
is also used) is written in gzipped format. A gzipped dump file will be
@ -589,19 +643,40 @@ about :math:`3\times` smaller than the text version, but will also take
longer to write. This option is not available for the *dcd* and *xtc*
styles.
Note that styles that end with *gz* are identical in command syntax to
the corresponding styles without "gz", however, they generate
compressed files using the zlib library. Thus the filename suffix
".gz" is mandatory. This is an alternative approach to writing
compressed files via a pipe, as done by the regular dump styles, which
may be required on clusters where the interface to the high-speed
network disallows using the fork() library call (which is needed for a
pipe). For the remainder of this page, you should thus consider the
*atom* and *atom/gz* styles (etc.) to be inter-changeable, with the
exception of the required filename suffix.
Similarly, styles that end with *zstd* are identical to the gz styles,
but use the Zstd compression library instead and require a ".zst"
suffix. See the :doc:`dump_modify <dump_modify>` page for details on
how to control the compression level in both variants.
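For example (the ".gz" and ".zst" suffixes are mandatory for these
styles):

.. code-block:: LAMMPS

   dump 1 all atom/gz 1000 tmp.dump.gz
   dump 2 all custom/zstd 1000 tmp.dump.zst id type x y z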
----------
Note that in the discussion which follows, for styles which can
reference values from a compute or fix or custom atom property, like the
*custom*\ , *cfg*\ , or *local* styles, the bracketed index :math:`i`
can be specified using a wildcard asterisk with the index to effectively
specify multiple values. This takes the form "\*" or "\*n" or "m\*" or
"m\*n". If :math:`N` is the number of columns in the array, then an
asterisk with no numeric values means all column indices from 1 to
:math:`N`. A leading asterisk means all indices from 1 to n
(inclusive). A trailing asterisk means all indices from m to :math:`N`
(inclusive). A middle asterisk means all indices from m to n
(inclusive).
Arguments for different styles:
The sections below describe per-atom, local, and per grid cell
attributes which can be used as arguments to the various styles.
Note that in the discussion below, for styles which can reference
values from a compute or fix or custom atom property, like the
*custom*\ , *cfg*\ , *grid* or *local* styles, the bracketed index
:math:`i` can be specified using a wildcard asterisk with the index to
effectively specify multiple values. This takes the form "\*" or
"\*n" or "m\*" or "m\*n". If :math:`N` is the number of columns in
the array, then an asterisk with no numeric values means all column
indices from 1 to :math:`N`. A leading asterisk means all indices
from 1 to n (inclusive). A trailing asterisk means all indices from m
to :math:`N` (inclusive). A middle asterisk means all indices from m
to n (inclusive).
Using a wildcard is the same as if the individual columns of the array
had been listed one by one. For example, these two dump commands are
@ -617,63 +692,7 @@ command creates a per-atom array with six columns:
----------
This section explains the local attributes that can be specified as
part of the *local* style.
The *index* attribute can be used to generate an index number from 1
to :math:`N` for each line written into the dump file, where :math:`N` is the
total number of local datums from all processors, or lines of output that
will appear in the snapshot. Note that because data from different
processors depend on what atoms they currently own, and atoms migrate
between processors, there is no guarantee that the same index will be
used for the same info (e.g., a particular bond) in successive snapshots.
The *c_ID* and *c_ID[I]* attributes allow local vectors or arrays
calculated by a :doc:`compute <compute>` to be output. The ID in the
attribute should be replaced by the actual ID of the compute that has
been defined previously in the input script. See the
:doc:`compute <compute>` command for details. There are computes for
calculating local information such as indices, types, and energies for
bonds and angles.
Note that computes which calculate global or per-atom quantities, as
opposed to local quantities, cannot be output in a dump local command.
Instead, global quantities can be output by the :doc:`thermo_style
custom <thermo_style>` command, and per-atom quantities can be output
by the dump custom command.
If *c_ID* is used as an attribute, then the local vector calculated by
the compute is printed. If *c_ID[i]* is used, then :math:`i` must be in the
range from :math:`1-M`, which will print the Ith column of the local array
with :math:`M` columns calculated by the compute. See the discussion above
for how :math:`i` can be specified with a wildcard asterisk to effectively
specify multiple values.
The *f_ID* and *f_ID[I]* attributes allow local vectors or arrays
calculated by a :doc:`fix <fix>` to be output. The ID in the attribute
should be replaced by the actual ID of the fix that has been defined
previously in the input script.
If *f_ID* is used as an attribute, then the local vector calculated by
the fix is printed. If *f_ID[i]* is used, then :math:`i` must be in the
range :math:`1`--:math:`M`, which will print the :math:`i`\ th column of the
local array with :math:`M` columns calculated by the fix. See the discussion above
for how :math:`i` can be specified with a wildcard asterisk to effectively
specify multiple values.
Here is an example of how to dump bond info for a system, including
the distance and energy of each bond:
.. code-block:: LAMMPS
compute 1 all property/local batom1 batom2 btype
compute 2 all bond/local dist eng
dump 1 all local 1000 tmp.dump index c_1[1] c_1[2] c_1[3] c_2[1] c_2[2]
----------
This section explains the atom attributes that can be specified as
part of the *custom* and *cfg* styles.
Per-atom attributes used as arguments to the *custom* and *cfg* styles:
The *id*, *mol*, *proc*, *procp1*, *type*, *element*, *mass*, *vx*,
*vy*, *vz*, *fx*, *fy*, *fz*, *q* attributes are self-explanatory.
@ -808,6 +827,97 @@ which could then be output into dump files.
----------
Attributes used as arguments to the *local* style:
The *index* attribute can be used to generate an index number from 1
to N for each line written into the dump file, where N is the total
number of local datums from all processors, or lines of output that
will appear in the snapshot. Note that because data from different
processors depend on what atoms they currently own, and atoms migrate
between processors, there is no guarantee that the same index will be
used for the same info (e.g. a particular bond) in successive
snapshots.
The *c_ID* and *c_ID[I]* attributes allow local vectors or arrays
calculated by a :doc:`compute <compute>` to be output. The ID in the
attribute should be replaced by the actual ID of the compute that has
been defined previously in the input script. See the
:doc:`compute <compute>` command for details. There are computes for
calculating local information such as indices, types, and energies for
bonds and angles.
Note that computes which calculate global or per-atom quantities, as
opposed to local quantities, cannot be output in a dump local command.
Instead, global quantities can be output by the :doc:`thermo_style
custom <thermo_style>` command, and per-atom quantities can be output
by the dump custom command.
If *c_ID* is used as an attribute, then the local vector calculated by
the compute is printed. If *c_ID[I]* is used, then I must be in the
range from 1-M, which will print the Ith column of the local array
with M columns calculated by the compute. See the discussion above
for how I can be specified with a wildcard asterisk to effectively
specify multiple values.
The *f_ID* and *f_ID[I]* attributes allow local vectors or arrays
calculated by a :doc:`fix <fix>` to be output. The ID in the attribute
should be replaced by the actual ID of the fix that has been defined
previously in the input script.
If *f_ID* is used as an attribute, then the local vector calculated by
the fix is printed.  If *f_ID[I]* is used, then I must be in the
range from 1-M, which will print the Ith column of the local array with M
columns calculated by the fix. See the discussion above for how I can
be specified with a wildcard asterisk to effectively specify multiple
values.
Here is an example of how to dump bond info for a system, including
the distance and energy of each bond:
.. code-block:: LAMMPS
compute 1 all property/local batom1 batom2 btype
compute 2 all bond/local dist eng
dump 1 all local 1000 tmp.dump index c_1[1] c_1[2] c_1[3] c_2[1] c_2[2]
----------
Attributes used as arguments to the *grid* and *grid/vtk* styles:
The attributes that begin with *c_ID* and *f_ID* both take
colon-separated fields *gname* and *dname*. These refer to a grid
name and data field name which is defined by the compute or fix. Note
that a compute or fix can define one or more grids (of different
sizes) and one or more data fields for each of those grids. The sizes
of all grids output in a single dump grid command must be the same.
The *c_ID:gname:dname* and *c_ID:gname:dname[I]* attributes allow
per-grid vectors or arrays calculated by a :doc:`compute <compute>` to
be output. The ID in the attribute should be replaced by the actual
ID of the compute that has been defined previously in the input
script.
If *c_ID:gname:dname* is used as an attribute, then the per-grid vector
calculated by the compute is printed. If *c_ID:gname:dname[I]* is
used, then I must be in the range from 1-M, which will print the Ith
column of the per-grid array with M columns calculated by the compute.
See the discussion above for how I can be specified with a wildcard
asterisk to effectively specify multiple values.
The *f_ID:gname:dname* and *f_ID:gname:dname[I]* attributes allow
per-grid vectors or arrays calculated by a :doc:`fix <fix>` to be
output. The ID in the attribute should be replaced by the actual ID
of the fix that has been defined previously in the input script.
If *f_ID:gname:dname* is used as an attribute, then the per-grid vector
calculated by the fix is printed.  If *f_ID:gname:dname[I]* is used,
then I must be in the range from 1-M, which will print the Ith column
of the per-grid array with M columns calculated by the fix.  See the
discussion above for how I can be specified with a wildcard asterisk
to effectively specify multiple values.
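As a combined sketch (the fix ID and the *grid* and *data* names below
are assumptions; a real script must use the names the referenced
compute or fix actually defines, and all grids referenced in one dump
command must be the same size):

.. code-block:: LAMMPS

   # hypothetical fix "AVE" assumed to define a grid named "grid" with field "data"
   fix AVE all ave/grid 100 10 1000 20 20 20 vx vy vz
   dump gd all grid 1000 tmp.grid f_AVE:grid:data[*]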
----------
Restrictions
""""""""""""
@ -816,9 +926,9 @@ To write gzipped dump files, you must either compile LAMMPS with the
See the :doc:`Build settings <Build_settings>` page for details.
While a dump command is active (i.e., has not been stopped by using
the :doc:`undump command <undump>`), no commands may be used that will change
the timestep (e.g., :doc:`reset_timestep <reset_timestep>`). LAMMPS
will terminate with an error otherwise.
the :doc:`undump command <undump>`), no commands may be used that will
change the timestep (e.g., :doc:`reset_timestep <reset_timestep>`).
LAMMPS will terminate with an error otherwise.
The *atom/gz*, *cfg/gz*, *custom/gz*, and *xyz/gz* styles are part of
the COMPRESS package. They are only enabled if LAMMPS was built with

View File

@ -24,7 +24,7 @@ Syntax
* color = atom attribute that determines color of each atom
* diameter = atom attribute that determines size of each atom
* zero or more keyword/value pairs may be appended
* keyword = *atom* or *adiam* or *bond* or *line* or *tri* or *body* or *fix* or *size* or *view* or *center* or *up* or *zoom* or *box* or *axes* or *subbox* or *shiny* or *ssao*
* keyword = *atom* or *adiam* or *bond* or *grid* or *line* or *tri* or *body* or *fix* or *size* or *view* or *center* or *up* or *zoom* or *box* or *axes* or *subbox* or *shiny* or *ssao*
.. parsed-literal::
@ -34,6 +34,14 @@ Syntax
color = *atom* or *type* or *none*
width = number or *atom* or *type* or *none*
number = numeric value for bond width (distance units)
*grid* = per-grid value to use when coloring each grid cell
per-grid value = c_ID:gname:dname, c_ID:gname:dname[I], f_ID:gname:dname, f_ID:gname:dname[I]
gname = name of grid defined by compute or fix
dname = name of data field defined by compute or fix
c_ID = per-grid vector calculated by a compute with ID
c_ID[I] = Ith column of per-grid array calculated by a compute with ID
f_ID = per-grid vector calculated by a fix with ID
f_ID[I] = Ith column of per-grid array calculated by a fix with ID
*line* = color width
color = *type*
width = numeric value for line width (distance units)
@ -95,7 +103,7 @@ Syntax
dump_modify dump-ID keyword values ...
* these keywords apply only to the *image* and *movie* styles and are documented on this page
* keyword = *acolor* or *adiam* or *amap* or *backcolor* or *bcolor* or *bdiam* or *boxcolor* or *color* or *bitrate* or *framerate*
* keyword = *acolor* or *adiam* or *amap* or *backcolor* or *bcolor* or *bdiam* or *bitrate* or *boxcolor* or *color* or *framerate* or *gmap*
* see the :doc:`dump modify <dump_modify>` doc page for more general keywords
.. parsed-literal::
@ -134,15 +142,16 @@ Syntax
*bdiam* args = type diam
type = bond type or range of types (see below)
diam = diameter of bonds of that type (distance units)
*bitrate* arg = rate
rate = target bitrate for movie in kbps
*boxcolor* arg = color
color = name of color for simulation box lines and processor sub-domain lines
*color* args = name R G B
name = name of color
R,G,B = red/green/blue numeric values from 0.0 to 1.0
*bitrate* arg = rate
rate = target bitrate for movie in kbps
*framerate* arg = fps
fps = frames per second for movie
*gmap* args = identical to *amap* args
Examples
""""""""
@ -214,7 +223,7 @@ Similarly, the format of the resulting movie is chosen with the
and thus details have to be looked up in the `FFmpeg documentation
<https://ffmpeg.org/ffmpeg.html>`_. Typical examples are: .avi, .mpg,
.m4v, .mp4, .mkv, .flv, .mov, .gif.  Additional settings of the movie
compression like bitrate and framerate can be set using the
compression like *bitrate* and *framerate* can be set using the
dump_modify command as described below.
To write out JPEG and PNG format files, you must build LAMMPS with
@ -300,13 +309,13 @@ settings, they are interpreted in the following way.
If "vx", for example, is used as the *color* setting, then the color
of the atom will depend on the x-component of its velocity. The
association of a per-atom value with a specific color is determined by
a "color map", which can be specified via the dump_modify command, as
described below. The basic idea is that the atom-attribute will be
within a range of values, and every value within the range is mapped
to a specific color. Depending on how the color map is defined, that
mapping can take place via interpolation so that a value of -3.2 is
halfway between "red" and "blue", or discretely so that the value of
-3.2 is "orange".
a "color map", which can be specified via the dump_modify amap
command, as described below. The basic idea is that the
atom-attribute will be within a range of values, and every value
within the range is mapped to a specific color. Depending on how the
color map is defined, that mapping can take place via interpolation so
that a value of -3.2 is halfway between "red" and "blue", or
discretely so that the value of -3.2 is "orange".
If "vx", for example, is used as the *diameter* setting, then the atom
will be rendered using the x-component of its velocity as the
@ -948,6 +957,17 @@ frequently.
----------
The *gmap* keyword can be used with the dump image command, with its
*grid* keyword, to set up a color map.  The color map is used to assign
a specific RGB (red/green/blue) color value to an individual grid cell
when it is drawn, based on the grid cell value, which is a numeric
quantity specified with the *grid* keyword.
The arguments for the *gmap* keyword are identical to those for the
*amap* keyword (for atom coloring) described above.
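As a hedged sketch (the fix ID and the grid/data names are assumptions
for illustration; the *gmap* settings simply mirror the documented
default for this keyword):

.. code-block:: LAMMPS

   # hypothetical fix "AVE" assumed to define a grid named "grid" with field "data"
   dump 3 all image 1000 img.*.jpg type type grid f_AVE:grid:data
   dump_modify 3 gmap min max cf 0.0 2 min blue max red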
----------
Restrictions
""""""""""""
@ -1031,6 +1051,7 @@ The defaults for the dump_modify keywords specific to dump image and dump movie
* boxcolor = yellow
* color = 140 color names are pre-defined as listed below
* framerate = 24
* gmap = min max cf 0.0 2 min blue max red
----------

View File

@ -90,8 +90,8 @@ hexahedrons in either legacy .vtk or .vtu XML format.
Style *vtk* allows you to specify a list of atom attributes to be
written to the dump file for each atom. The list of possible attributes
is the same as for the :doc:`dump_style custom <dump>` command; see
its page for a listing and an explanation of each attribute.
is the same as for the :doc:`dump_style custom <dump>` command; see its
documentation page for a listing and an explanation of each attribute.
.. note::

View File

@ -23,7 +23,7 @@ Examples
Description
"""""""""""
.. versionadded:: TBD
.. versionadded:: 22Dec2022
Fit a machine-learning interatomic potential (ML-IAP) based on proper
orthogonal descriptors (POD). Two input files are required for this

View File

@ -77,26 +77,31 @@ for individual fixes for info on which ones can be restarted.
----------
Some fixes calculate one of three styles of quantities: global,
per-atom, or local, which can be used by other commands or output as
described below. A global quantity is one or more system-wide values
(e.g., the energy of a wall interacting with particles). A per-atom
quantity is one or more values per atom (e.g., the displacement vector
for each atom since time 0). Per-atom values are set to 0.0 for atoms
not in the specified fix group. Local quantities are calculated by
each processor based on the atoms it owns, but there may be zero or
more per atoms.
Some fixes calculate one or more of four styles of quantities: global,
per-atom, local, or per-grid, which can be used by other commands or
output as described below. A global quantity is one or more
system-wide values, e.g. the energy of a wall interacting with
particles. A per-atom quantity is one or more values per atom,
e.g. the displacement vector for each atom since time 0. Per-atom
values are set to 0.0 for atoms not in the specified fix group. Local
quantities are calculated by each processor based on the atoms it
owns, but there may be zero or more values per atom.  Per-grid quantities
are calculated on a regular 2d or 3d grid which overlays a 2d or 3d
simulation domain. The grid points and the data they store are
distributed across processors; each processor owns the grid points
which fall within its sub-domain.
Note that a single fix can produce either global or per-atom or local
quantities (or none at all), but not both global and per-atom. It can
produce local quantities in tandem with global or per-atom quantities.
The fix page will explain.
Note that a single fix typically produces either global or per-atom or
local or per-grid values (or none at all). It does not produce both
global and per-atom. It can produce local or per-grid values in
tandem with global or per-atom values. The fix doc page will explain
the details.
Global, per-atom, and local quantities each come in three kinds: a
single scalar value, a vector of values, or a 2d array of values. The
doc page for each fix describes the style and kind of values it
produces (e.g., a per-atom vector). Some fixes produce more than one
kind of a single style (e.g., a global scalar and a global vector).
Global, per-atom, local, and per-grid quantities come in three kinds:
a single scalar value, a vector of values, or a 2d array of values.
The doc page for each fix describes the style and kind of values it
produces, e.g. a per-atom vector. Some fixes produce more than one
kind of a single style, e.g. a global scalar and a global vector.
When a fix quantity is accessed, as in many of the output commands
discussed below, it can be referenced via the following bracket
@ -185,6 +190,7 @@ accelerated styles exist.
* :doc:`ave/chunk <fix_ave_chunk>` - compute per-chunk time-averaged quantities
* :doc:`ave/correlate <fix_ave_correlate>` - compute/output time correlations
* :doc:`ave/correlate/long <fix_ave_correlate_long>` - alternative to :doc:`ave/correlate <fix_ave_correlate>` that allows efficient calculation over long time windows
* :doc:`ave/grid <fix_ave_grid>` - compute per-grid time-averaged quantities
* :doc:`ave/histo <fix_ave_histo>` - compute/output time-averaged histograms
* :doc:`ave/histo/weight <fix_ave_histo>` - weighted version of fix ave/histo
* :doc:`ave/time <fix_ave_time>` - compute/output global time-averaged quantities

View File

@ -63,7 +63,8 @@ be needed when running such a hybrid simulation, especially if the
swapped atoms are not well equilibrated.
The *types* keyword is required. At least two atom types must be
specified.
specified. If not using *semi-grand*, exactly two atom types
are required.
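For instance (a minimal sketch; the fix ID, group, and type numbers
are illustrative only):

.. code-block:: LAMMPS

   # attempt 1 swap between types 2 and 3 every 100 steps at T = 300
   fix swap1 all atom/swap 100 1 12345 300.0 types 2 3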
The *ke* keyword can be set to *no* to turn off kinetic energy
conservation for swaps. The default is *yes*, which means that swapped

View File

@ -112,6 +112,17 @@ or atoms in a spatial bin. See the :doc:`compute chunk/atom
page for details of how chunks can be defined and examples of how they
can be used to measure properties of a system.
Note that if the :doc:`compute chunk/atom <compute_chunk_atom>`
command defines spatial bins, the fix ave/chunk command performs a
computation similar to that of the :doc:`fix ave/grid <fix_ave_grid>` command.
However, the per-bin outputs from the fix ave/chunk command are
global; each processor stores a copy of the entire set of bin data.
By contrast, the :doc:`fix ave/grid <fix_ave_grid>` command uses a
distributed grid where each processor owns a subset of the bins. Thus
it is more efficient to use the :doc:`fix ave/grid <fix_ave_grid>`
command when the grid is large and a simulation is run on many
processors.
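As a hedged sketch of the two approaches (bin sizes and grid
dimensions here are illustrative only):

.. code-block:: LAMMPS

   # global spatial bins via chunks
   compute cc1 all chunk/atom bin/3d x lower 2.0 y lower 2.0 z lower 2.0
   fix 1 all ave/chunk 100 10 1000 cc1 vx vy vz

   # distributed grid covering the same box
   fix 2 all ave/grid 100 10 1000 25 25 25 vx vy vz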
Note that only atoms in the specified group contribute to the summing
and averaging calculations. The :doc:`compute chunk/atom
<compute_chunk_atom>` command defines its own group as well as an
@ -141,10 +152,11 @@ quantities. :doc:`Variables <variable>` of style *atom* are the only
ones that can be used with this fix since all other styles of variable
produce global quantities.
Note that for values from a compute or fix, the bracketed index I can
be specified using a wildcard asterisk with the index to effectively
specify multiple values. This takes the form "\*" or "\*n" or "m\*" or
"m\*n". If :math:`N` is the size of the vector (for *mode* = scalar) or the
Note that for values from a compute or fix that produces a per-atom
array (multiple values per atom), the bracketed index I can be
specified using a wildcard asterisk with the index to effectively
specify multiple values. This takes the form "\*" or "\*n" or "n\*"
or "m\*n". If :math:`N` = the size of the vector (for *mode* = scalar) or the
number of columns in the array (for *mode* = vector), then an asterisk
with no numeric values means all indices from 1 to :math:`N`. A leading
asterisk means all indices from 1 to n (inclusive). A trailing
@ -359,6 +371,8 @@ For the *density/number* and *density/mass* values, the
volume (bin volume or system volume) used in the per-sample sum
normalization will be the current volume at each sampling step.
----------
The *ave* keyword determines how the per-chunk values produced every
:math:`N_\text{freq}` steps are averaged with values produced on previous steps
that were multiples of :math:`N_\text{freq}`, before they are accessed by
@ -385,6 +399,8 @@ of the individual chunk values on time steps 8000, 9000, and 10000. Outputs on
early steps will average over less than :math:`M` values if they are not
available.
----------
The *bias* keyword specifies the ID of a temperature compute that
removes a "bias" velocity from each atom, specified as *bias-ID*\ .
It is only used when the *temp* value is calculated, to compute the
@ -415,6 +431,8 @@ set to the remaining degrees of freedom for the entire molecule
(entire chunk in this case), that is, 6 for 3d or 3 for 2d for a rigid
molecule.
----------
The *file* keyword allows a filename to be specified. Every
:math:`N_\text{freq}` timesteps, a section of chunk info will be written to a
text file in the following format. A line with the timestep and number of
@ -523,12 +541,14 @@ Restrictions
Related commands
""""""""""""""""
:doc:`compute <compute>`, :doc:`fix ave/atom <fix_ave_atom>`,
:doc:`fix ave/histo <fix_ave_histo>`, :doc:`fix ave/time <fix_ave_time>`,
:doc:`variable <variable>`, :doc:`fix ave/correlate <fix_ave_correlate>`
:doc:`compute <compute>`, :doc:`fix ave/atom <fix_ave_atom>`,
:doc:`fix ave/histo <fix_ave_histo>`, :doc:`fix ave/time <fix_ave_time>`,
:doc:`variable <variable>`, :doc:`fix ave/correlate <fix_ave_correlate>`,
:doc:`fix ave/grid <fix_ave_grid>`
Default
"""""""
The option defaults are norm = all, ave = one, bias = none, no file output, and
title 1,2,3 = strings as described above.
The option defaults are norm = all, ave = one, bias = none, no file
output, and title 1,2,3 = strings as described above.

509
doc/src/fix_ave_grid.rst Normal file
View File

@ -0,0 +1,509 @@
.. index:: fix ave/grid
fix ave/grid command
=====================
Syntax
""""""
.. parsed-literal::
fix ID group-ID ave/grid Nevery Nrepeat Nfreq Nx Ny Nz value1 value2 ... keyword args ...
* ID, group-ID are documented in :doc:`fix <fix>` command
* ave/grid = style name of this fix command
* Nevery = use input values every this many timesteps
* Nrepeat = # of times to use input values for calculating averages
* Nfreq = calculate averages every this many timesteps
* Nx, Ny, Nz = grid size in each dimension
* one or more per-atom or per-grid input values can be listed
* per-atom value = vx, vy, vz, fx, fy, fz, density/mass, density/number, mass, temp, c_ID, c_ID[I], f_ID, f_ID[I], v_name
.. parsed-literal::
vx,vy,vz,fx,fy,fz,mass = atom attribute (velocity, force component, mass)
density/number, density/mass = number or mass density (per volume)
temp = temperature
c_ID = per-atom vector calculated by a compute with ID
c_ID[I] = Ith column of per-atom array calculated by a compute with ID, I can include wildcard (see below)
f_ID = per-atom vector calculated by a fix with ID
f_ID[I] = Ith column of per-atom array calculated by a fix with ID, I can include wildcard (see below)
v_name = per-atom vector calculated by an atom-style variable with name
* per-grid value = c_ID:gname:dname, c_ID:gname:dname[I], f_ID:gname:dname, f_ID:gname:dname[I]
.. parsed-literal::
gname = name of grid defined by compute or fix
dname = name of data field defined by compute or fix
c_ID = per-grid vector calculated by a compute with ID
c_ID[I] = Ith column of per-grid array calculated by a compute with ID, I can include wildcard (see below)
f_ID = per-grid vector calculated by a fix with ID
f_ID[I] = Ith column of per-grid array calculated by a fix with ID, I can include wildcard (see below)
* zero or more keyword/arg pairs may be appended
* keyword = *discard* or *norm* or *ave* or *bias* or *adof* or *cdof*
.. parsed-literal::
*discard* arg = *yes* or *no*
yes = discard an atom outside grid in a non-periodic dimension
no = remap an atom outside grid in a non-periodic dimension to first or last grid cell
*norm* arg = *all* or *sample* or *none* = how output on *Nfreq* steps is normalized
all = output is sum of atoms across all *Nrepeat* samples, divided by atom count
sample = output is sum of *Nrepeat* sample averages, divided by *Nrepeat*
none = output is sum of *Nrepeat* sample sums, divided by *Nrepeat*
*ave* args = *one* or *running* or *window M*
one = output new average value every Nfreq steps
running = output cumulative average of all previous Nfreq steps
window M = output average of M most recent Nfreq steps
*bias* arg = bias-ID
bias-ID = ID of a temperature compute that removes a velocity bias for temperature calculation
*adof* value = dof_per_atom
dof_per_atom = define this many degrees-of-freedom per atom for temperature calculation
*cdof* value = dof_per_grid_cell
dof_per_grid_cell = add this many degrees-of-freedom per grid_cell for temperature calculation
Examples
""""""""
.. code-block:: LAMMPS
fix 1 all ave/grid 10000 1 10000 10 10 10 fx fy fz c_myMSD[*]
fix 1 flow ave/grid 100 10 1000 20 20 30 f_TTM:grid:data
Description
"""""""""""
Overlay the 2d or 3d simulation box with a uniformly spaced 2d or 3d
grid and use it either to (a) time-average per-atom quantities for the
atoms in each grid cell, or to (b) time-average per-grid quantities
produced by other computes or fixes. This fix operates in either
"per-atom mode" (all input values are per-atom) or in "per-grid mode"
(all input values are per-grid). You cannot use both per-atom and
per-grid inputs in the same command.
The grid created by this command is distributed; each processor owns
the grid points that are within its sub-domain. This is similar to
the :doc:`fix ave/chunk <fix_ave_chunk>` command when it uses chunks
from the :doc:`compute chunk/atom <compute_chunk_atom>` command which
are 2d or 3d regular bins. However, the per-bin outputs in that case
are global; each processor stores a copy of the entire set of bin
data.  Thus it is more efficient to use the fix ave/grid command when the
grid is large and a simulation is run on many processors.
For per-atom mode, only atoms in the specified group contribute to the
summing and averaging calculations. For per-grid mode, the specified
group is ignored.
----------
The *Nevery*, *Nrepeat*, and *Nfreq* arguments specify on what
timesteps the input values will be accessed and contribute to the
average. The final averaged quantities are generated on timesteps
that are multiples of *Nfreq*\ .  The average is over *Nrepeat*
quantities, computed in the preceding portion of the simulation every
*Nevery* timesteps. *Nfreq* must be a multiple of *Nevery* and
*Nevery* must be non-zero even if *Nrepeat* is 1. Also, the timesteps
contributing to the average value cannot overlap, i.e. Nrepeat\*Nevery
can not exceed Nfreq.
For example, if Nevery=2, Nrepeat=6, and Nfreq=100, then values on
timesteps 90,92,94,96,98,100 will be used to compute the final average
on timestep 100. Similarly for timesteps 190,192,194,196,198,200 on
timestep 200, etc. If Nrepeat=1 and Nfreq = 100, then no time
averaging is done; values are simply generated on timesteps
100,200,etc.
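For instance, a command with that timing might look like the following sketch (the fix ID, group, grid size, and per-atom values are arbitrary placeholders):

.. code-block:: LAMMPS

   # Nevery=2, Nrepeat=6, Nfreq=100: sample vx,vy,vz every 2 steps,
   # average 6 samples, and output on timesteps 100, 200, ...
   # (IDs and the 20x20x20 grid are chosen only for illustration)
   fix avg all ave/grid 2 6 100 20 20 20 vx vy vz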
In per-atom mode, each input value can also be averaged over the atoms
in each grid cell. The way the averaging is done across the *Nrepeat*
timesteps to produce output on the *Nfreq* timesteps, and across
multiple *Nfreq* outputs, is determined by the *norm* and *ave*
keyword settings, as discussed below.
----------
The *Nx*, *Ny*, and *Nz* arguments specify the size of the grid that
overlays the simulation box. For 2d simulations, *Nz* must be 1. The
*Nx*, *Ny*, *Nz* values can be any positive integer. The grid can be
very coarse compared to the particle count, or very fine. If one or
more of the values = 1, then bins are 2d planes or 1d slices of the
simulation domain. Note that if the total number of grid cells is
small, it may be more efficient to use the :doc:`fix ave/chunk
<fix_ave_chunk>` command, which can treat a grid defined by the
:doc:`compute chunk/atom <compute_chunk_atom>` command as a global
grid where each processor owns a copy of all the grid cells. If *Nx*
= *Ny* = *Nz* = 1 is used, the same calculation would be more
efficiently performed by the :doc:`fix ave/atom <fix_ave_atom>`
command.
If the simulation box size or shape changes during a simulation, the
grid always conforms to the size/shape of the current simulation box.
If one or more dimensions have non-periodic shrink-wrapped boundary
conditions, as defined by the :doc:`boundary <boundary>` command, then
the grid will extend over the (dynamic) shrink-wrapped extent in each
dimension. If the box shape is triclinic, as explained in :doc:`Howto
triclinic <Howto_triclinic>`, then the grid is also triclinic; each
grid cell is a small triclinic cell with the same shape as the
simulation box.
----------
In both per-atom and per-grid mode, if an input value comes from a
compute or fix that produces an array of values (multiple values per
atom or per grid point), the bracketed index I can be specified using
a wildcard asterisk with the index to effectively specify multiple
values. This takes the form "\*" or "\*n" or "n\*" or "m\*n". If N = the
number of columns in the array, then an asterisk with no numeric
values means all indices from 1 to N. A leading asterisk means all
indices from 1 to n (inclusive). A trailing asterisk means all
indices from n to N (inclusive). A middle asterisk means all indices
from m to n (inclusive).
Using a wildcard is the same as if the individual columns of the array
had been listed one by one. E.g. if there were a compute fft/grid
command which produced 3 values for each grid point, these two fix
ave/grid commands would be equivalent:
.. code-block:: LAMMPS
compute myFFT all fft/grid 10 10 10 ...
fix 1 all ave/grid 100 1 100 10 10 10 c_myFFT:grid:data[*]
fix 2 all ave/grid 100 1 100 10 10 10 c_myFFT:grid:data[1] c_myFFT:grid:data[2] c_myFFT:grid:data[3]
----------
*Per-atom mode*:
Each specified per-atom value can be an atom attribute (velocity,
force component), a number or mass density, a mass or temperature, or
the result of a :doc:`compute <compute>` or :doc:`fix <fix>` or the
evaluation of an atom-style :doc:`variable <variable>`. In the latter
cases, the compute, fix, or variable must produce a per-atom quantity,
not a global quantity. Note that the :doc:`compute property/atom
<compute_property_atom>` command provides access to any attribute
defined and stored by atoms.
The per-atom values of each input vector are summed and averaged
independently of the per-atom values in other input vectors.
:doc:`Computes <compute>` that produce per-atom quantities are those
which have the word *atom* in their style name. See the doc pages for
individual :doc:`fixes <fix>` to determine which ones produce per-atom
quantities. :doc:`Variables <variable>` of style *atom* are the only
ones that can be used with this fix since all other styles of variable
produce global quantities.
----------
The atom attribute values (vx,vy,vz,fx,fy,fz,mass) are
self-explanatory. As noted above, any other atom attributes can be
used as input values to this fix by using the :doc:`compute
property/atom <compute_property_atom>` command and then specifying an
input value from that compute.
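For example (a minimal sketch, assuming the atom style defines a per-atom charge; the compute ID, fix ID, and grid size are arbitrary), per-atom charge could be averaged per grid cell as follows:

.. code-block:: LAMMPS

   # access the per-atom charge via compute property/atom (illustrative IDs)
   compute chg all property/atom q
   fix 6 all ave/grid 10 10 100 10 10 10 c_chg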
The *density/number* value means the number density is computed for
each grid cell, i.e. number/volume. The *density/mass* value means
the mass density is computed for each grid cell,
i.e. total-mass/volume. The output values are in units of 1/volume or
density (mass/volume). See the :doc:`units <units>` command page for
the definition of density for each choice of units, e.g. gram/cm\^3.
The *temp* value computes the temperature for each grid cell, by the
formula
.. math::
\text{KE} = \frac{\text{DOF}}{2} k_B T,
where KE = the total kinetic energy of the atoms in the grid cell
(the sum of :math:`\frac{1}{2} m v^2` over those atoms), DOF = the
total number of degrees of freedom for all atoms in the grid cell,
:math:`k_B` = the Boltzmann constant, and :math:`T` = temperature.
The DOF is calculated as N\*adof + cdof, where N = number of atoms in
the grid cell, adof = degrees of freedom per atom, and cdof = degrees
of freedom per grid cell. By default adof = 2 or 3 = dimensionality
of system, as set via the :doc:`dimension <dimension>` command, and
cdof = 0.0. This gives the usual formula for temperature.
Note that currently this temperature only includes translational
degrees of freedom for each atom. No rotational degrees of freedom
are included for finite-size particles. Also no degrees of freedom
are subtracted for any velocity bias or constraints that are applied,
such as :doc:`compute temp/partial <compute_temp_partial>`, or
:doc:`fix shake <fix_shake>` or :doc:`fix rigid <fix_rigid>`. This is
because those degrees of freedom (e.g. a constrained bond) could apply
to sets of atoms that are both inside and outside a specific grid
cell, and hence the concept is somewhat ill-defined. In some cases,
you can use the *adof* and *cdof* keywords to adjust the calculated
degrees of freedom appropriately, as explained below.
Also note that a bias can be subtracted from atom velocities before
they are used in the above formula for KE, by using the *bias*
keyword. This allows, for example, a thermal temperature to be
computed after removal of a flow velocity profile.
Note that the per-grid-cell temperature calculated by this fix and the
:doc:`compute temp/chunk <compute_temp_chunk>` command (using bins)
can be different. The compute calculates the temperature for each
chunk for a single snapshot. This fix can do that but can also time
average those values over many snapshots, or it can compute a
temperature as if the atoms in the grid cell on different timesteps
were collected together as one set of atoms to calculate their
temperature. The compute allows the center-of-mass velocity of each
chunk to be subtracted before calculating the temperature; this fix
does not.
If a value begins with "c\_", a compute ID must follow which has been
previously defined in the input script. If no bracketed integer is
appended, the per-atom vector calculated by the compute is used. If a
bracketed integer is appended, the Ith column of the per-atom array
calculated by the compute is used. Users can also write code for
their own compute styles and :doc:`add them to LAMMPS <Modify>`. See
the discussion above for how I can be specified with a wildcard
asterisk to effectively specify multiple values.
If a value begins with "f\_", a fix ID must follow which has been
previously defined in the input script. If no bracketed integer is
appended, the per-atom vector calculated by the fix is used. If a
bracketed integer is appended, the Ith column of the per-atom array
calculated by the fix is used. Note that some fixes only produce
their values on certain timesteps, which must be compatible with
*Nevery*, else an error results. Users can also write code for their
own fix styles and :doc:`add them to LAMMPS <Modify>`. See the
discussion above for how I can be specified with a wildcard asterisk
to effectively specify multiple values.
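As an illustration (a hedged sketch; the IDs, timing, and grid size are arbitrary), per-atom values stored by a :doc:`fix store/state <fix_store_state>` command could be used as inputs, provided its output interval is compatible with *Nevery*:

.. code-block:: LAMMPS

   # store velocities every 10 steps, then grid-average the stored columns
   # (illustrative IDs; the store/state interval matches Nevery = 10)
   fix vels all store/state 10 vx vy vz
   fix 7 all ave/grid 10 5 100 16 16 16 f_vels[1] f_vels[2] f_vels[3]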
If a value begins with "v\_", a variable name must follow which has
been previously defined in the input script. Variables of style
*atom* can reference thermodynamic keywords and various per-atom
attributes, or invoke other computes, fixes, or variables when they
are evaluated, so this is a very general means of generating per-atom
quantities to average within grid cells.
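For example (a minimal sketch; the variable formula, IDs, and grid size are arbitrary), a per-atom kinetic energy defined by an atom-style variable could be binned and averaged on the grid:

.. code-block:: LAMMPS

   # per-atom kinetic energy as an atom-style variable (illustrative only)
   variable ke atom 0.5*mass*(vx^2+vy^2+vz^2)
   fix 3 all ave/grid 10 5 100 16 16 16 v_ke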
----------
*Per-grid mode*:
The attributes that begin with *c_ID* and *f_ID* both take
colon-separated fields *gname* and *dname*. These refer to a grid
name and data field name which is defined by the compute or fix. Note
that a compute or fix can define one or more grids (of different
sizes) and one or more data fields for each of those grids. The sizes
of all grids used as values for one instance of this fix must be the
same.
The *c_ID:gname:dname* and *c_ID:gname:dname[I]* attributes allow
per-grid vectors or arrays calculated by a :doc:`compute <compute>` to
be accessed. The ID in the attribute should be replaced by the actual
ID of the compute that has been defined previously in the input
script.
If *c_ID:gname:dname* is used as an attribute, then the per-grid vector
calculated by the compute is accessed. If *c_ID:gname:dname[I]* is
used, then I must be in the range from 1-M, which will access the Ith
column of the per-grid array with M columns calculated by the compute.
See the discussion above for how I can be specified with a wildcard
asterisk to effectively specify multiple values.
The *f_ID:gname:dname* and *f_ID:gname:dname[I]* attributes allow
per-grid vectors or arrays calculated by a :doc:`fix <fix>` to be
accessed. The ID in the attribute should be replaced by the actual ID
of the fix that has been defined previously in the input script.
If *f_ID:gname:dname* is used as an attribute, then the per-grid
vector calculated by the fix is used. If *f_ID:gname:dname[I]* is
used, then I must be in the range from 1-M, which will access the Ith
column of the per-grid array with M columns calculated by the fix.
See the discussion above for how I can be specified with a wildcard
asterisk to effectively specify multiple values.
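For example (a hypothetical sketch, assuming a previously defined :doc:`fix ttm/grid <fix_ttm>` with ID *TTM* that produces a grid named "grid" with a per-grid datum named "data"), its electron temperatures could be time-averaged like this:

.. code-block:: LAMMPS

   # the 10x10x10 grid is assumed to match the grid defined by fix TTM
   fix 2 all ave/grid 100 5 1000 10 10 10 f_TTM:grid:data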
----------
Additional optional keywords also affect the operation of this fix and
its outputs. Some are only applicable to per-atom mode. Some are
applicable to both per-atom and per-grid mode.
The *discard* keyword is only applicable to per-atom mode. If a
dimension of the system is non-periodic, then grid cells will only
span the box dimension (fixed or shrink-wrap boundaries as set by the
:doc:`boundary` command). An atom may thus be slightly outside the
range of grid cells on a particular timestep. If *discard* is set to
*yes* (the default), such an atom is ignored and does not contribute
to any grid cell. If *discard* is set to *no*, the atom is instead
assigned to the closest grid cell (lowest or highest) in that
dimension.
----------
The *norm* keyword is only applicable to per-atom mode. In per-grid
mode, the *norm* keyword setting is ignored. The output grid value on
an *Nfreq* timestep is the sum of the grid values in each of the
*Nrepeat* samples, divided by *Nrepeat*.
In per-atom mode, the *norm* keyword affects how averaging is done
for the per-grid values that are output on an *Nfreq* timestep.
*Nrepeat* samples contribute to the output. The *norm* keyword has 3
possible settings: *all* or *sample* or *none*. *All* is the default.
In the formulas that follow, SumI is the sum of a per-atom property
over the CountI atoms in a grid cell for a single sample I, where I
varies from 1 to N, and N = Nrepeat. These formulas are used for any
per-atom input value listed above, except *density/number*,
*density/mass*, and *temp*. Those input values are discussed below.
In per-atom mode, for *norm all* the output grid value on the *Nfreq*
timestep is an average over atoms across the entire *Nfreq* timescale:
Output = (Sum1 + Sum2 + ... + SumN) / (Count1 + Count2 + ... + CountN)
In per-atom mode, for *norm sample* the output grid value on the
*Nfreq* timestep is an average of an average:
Output = (Sum1/Count1 + Sum2/Count2 + ... + SumN/CountN) / Nrepeat
In per-atom mode, for *norm none* the output grid value on the
*Nfreq* timestep is not normalized by the atom counts:
Output = (Sum1 + Sum2 + ... SumN) / Nrepeat
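As a small worked example with made-up numbers: suppose *Nrepeat* = 2, the first sample sums a value of 10.0 over Count1 = 4 atoms in a grid cell, and the second sample sums 30.0 over Count2 = 6 atoms. The three settings then give:

.. parsed-literal::

   norm all:    (10 + 30) / (4 + 6) = 4.0
   norm sample: (10/4 + 30/6) / 2   = 3.75
   norm none:   (10 + 30) / 2       = 20.0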
For *density/number* and *density/mass*, the output value is the same
as in the formulas above for *norm all* and *norm sample*, except that
the result is also divided by the grid cell volume. For *norm all*,
this will be the volume at the final *Nfreq* timestep. For *norm
sample*, the divide-by-volume is done for each sample, using the grid
cell volume at the sample timestep. For *norm none*, the output is
the same as for *norm all*.
For *temp*, the output temperature uses the formula for kinetic energy
KE listed above, and is normalized similarly to the formulas above for
*norm all* and *norm sample*, except for the way the degrees of
freedom (DOF) are calculated. For *norm none*, the output is the same
as for *norm all*.
For *norm all*, the DOF = *Nrepeat* times *cdof* plus *Count* times
*adof*, where *Count* = (Count1 + Count2 + ... + CountN). The *cdof*
and *adof* keywords are discussed below. The output temperature is
computed with all atoms across all samples contributing.
For *norm sample*, the DOF for a single sample = *cdof* plus *Count*
times *adof*, where *Count* = CountI for a single sample. The output
temperature is the average of the *Nrepeat* temperatures calculated
for the individual samples.
Finally, for all 3 *norm* settings the output count of atoms per grid
cell is:
Output count = (Count1 + Count2 + ... CountN) / Nrepeat
This count is the same for all per-atom input values, including
*density/number*, *density/mass*, and *temp*.
----------
The *ave* keyword is applied to both per-atom and per-grid mode. It
determines how the per-grid values produced once every *Nfreq* steps
are averaged with values produced on previous steps that were
multiples of *Nfreq*, before they are accessed by another output
command.
If the *ave* setting is *one*, which is the default, then the grid
values produced on *Nfreq* timesteps are independent of each other;
they are output as-is without further averaging.
If the *ave* setting is *running*, then the grid values produced on
*Nfreq* timesteps are summed and averaged in a cumulative sense before
being output. Each output grid value is thus the average of the grid
value produced on that timestep with all preceding values for the same
grid value. This running average begins when the fix is defined; it
can only be restarted by deleting the fix via the :doc:`unfix <unfix>`
command, or re-defining the fix by re-specifying it.
If the *ave* setting is *window*, then the grid values produced on
*Nfreq* timesteps are summed and averaged within a moving "window" of
time, so that the last M values for the same grid are used to produce
the output. E.g. if M = 3 and Nfreq = 1000, then the grid value
output on step 10000 will be the average of the grid values on steps
8000,9000,10000. Outputs on early steps will average over fewer than
M values if that many are not yet available.
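A brief sketch of such a windowed average (the fix ID, grid size, and input value are arbitrary):

.. code-block:: LAMMPS

   # each Nfreq output is averaged with the 2 previous outputs (window of 3)
   fix 5 all ave/grid 100 10 1000 20 20 20 density/mass ave window 3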
----------
The *bias*, *adof*, and *cdof* keywords are only applicable to
per-atom mode.
The *bias* keyword specifies the ID of a temperature compute that
removes a "bias" velocity from each atom, specified as *bias-ID*\ .
It is only used when the *temp* value is calculated, to compute the
thermal temperature of each grid cell after the translational kinetic
energy components have been altered in a prescribed way, e.g. to
remove a flow velocity profile. See the doc pages for individual
computes that calculate a temperature to see which ones implement a
bias.
The *adof* and *cdof* keywords define the values used in the degree of
freedom (DOF) formula described above for temperature calculation for
each grid cell. They are only used when the *temp* value is
calculated. They can be used to calculate a more appropriate
temperature in some cases. Here are 3 examples:
If grid cells contain some number of water molecules and :doc:`fix
shake <fix_shake>` is used to make each molecule rigid, then you could
calculate a temperature with 6 degrees of freedom (DOF) (3
translational, 3 rotational) per molecule by setting *adof* to 2.0.
If :doc:`compute temp/partial <compute_temp_partial>` is used with the
*bias* keyword to only allow the x component of velocity to contribute
to the temperature, then *adof* = 1.0 would be appropriate.
Using *cdof* = -2 or -3 (for 2d or 3d simulations) will subtract out 2
or 3 degrees of freedom for each grid cell, similar to how the
:doc:`compute temp <compute_temp>` command subtracts out 3 DOF for the
entire system.
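A sketch combining these keywords (the compute ID, fix ID, and grid size are arbitrary): to compute a per-cell thermal temperature from only the x component of velocity, a :doc:`compute temp/partial <compute_temp_partial>` can be supplied via the *bias* keyword together with *adof* = 1.0:

.. code-block:: LAMMPS

   # keep only the x velocity component; y,z are removed as a bias
   compute xonly all temp/partial 1 0 0
   fix 4 all ave/grid 10 10 100 16 16 16 temp bias xonly adof 1.0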
----------
Restart, fix_modify, output, run start/stop, minimize info
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
No information about this fix is written to :doc:`binary restart files
<restart>`. None of the :doc:`fix_modify <fix_modify>` options are
relevant to this fix.
This fix calculates a per-grid array which has one column for each of
the specified input values. The units for each column will be in the
:doc:`units <units>` for the per-atom or per-grid quantity for the
corresponding input value. If the fix is used in per-atom mode, it
also calculates a per-grid vector with the count of atoms in each grid
cell. The number of rows in the per-grid array and number of values
in the per-grid vector (distributed across all processors) is Nx *
Ny * Nz.
For access by other commands, the name of the single grid produced by
this fix is "grid". The names of its two per-grid datums are "data"
for the per-grid array and "count" for the per-grid vector (if using
per-atom values). Both datums can be accessed by various :doc:`output
commands <Howto_output>`.
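For instance (a hypothetical sketch; see the :doc:`dump grid <dump>` page for the exact attribute syntax), the averaged data and atom counts produced by a fix ave/grid with ID *ave* might be written to a file with something like:

.. code-block:: LAMMPS

   # illustrative only: dump the first data column and the per-cell atom count
   dump 1 all grid 1000 tmp.grid f_ave:grid:data[1] f_ave:grid:count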
In per-atom mode, the per-grid array values calculated by this fix are
treated as "intensive", since they are typically already normalized by
the count of atoms in each grid cell.
No parameter of this fix can be used with the *start/stop* keywords of
the :doc:`run <run>` command. This fix is not invoked during
:doc:`energy minimization <minimize>`.
Restrictions
""""""""""""
none
Related commands
""""""""""""""""
:doc:`fix ave/atom <fix_ave_atom>`, :doc:`fix ave/chunk <fix_ave_chunk>`
Default
"""""""
The option defaults are discard = yes, norm = all, ave = one, and bias
= none.
View File
@ -520,7 +520,7 @@ example, the molecule fragment could consist of only the backbone
atoms of a polymer chain. This constraint can be used to enforce a
specific relative position and orientation between reacting molecules.
.. versionchanged:: TBD
.. versionchanged:: 22Dec2022
The constraint of type "custom" has the following syntax:
@ -637,7 +637,7 @@ eligible reaction only occurs if the random number is less than the
fraction. Up to :math:`N` reactions are permitted to occur, as optionally
specified by the *max_rxn* keyword.
.. versionadded:: TBD
.. versionadded:: 22Dec2022
The *rate_limit* keyword can enforce an upper limit on the overall
rate of the reaction. The number of reaction occurrences is limited to
@ -664,7 +664,7 @@ charges are updated to those specified by the post-reaction template
fragment defined in the pre-reaction molecule template. In this case,
only the atomic charges of atoms in the molecule fragment are updated.
.. versionadded:: TBD
.. versionadded:: 22Dec2022
The *rescale_charges* keyword can be used to ensure the total charge
of the system does not change as reactions occur. When the argument is
View File
@ -94,12 +94,9 @@ insert (delete) a proton (atom type 2). Besides, the fix implements
self-ionization reaction of water :math:`\emptyset \rightleftharpoons
\mathrm{H}^++\mathrm{OH}^-`.
However, this approach is highly
inefficient at :math:`\mathrm{pH} \approx 7` when the concentration of
both protons and hydroxyl ions is low, resulting in a relatively low
acceptance rate of MC moves.
However, this approach is highly inefficient at :math:`\mathrm{pH}
\approx 7` when the concentration of both protons and hydroxyl ions is
low, resulting in a relatively low acceptance rate of MC moves.
A more efficient way is to allow salt ions to participate in ionization
reactions, which can be easily achieved via
@ -108,10 +105,13 @@ reactions, which can be easily achieved via
fix acid_reaction2 all charge/regulation 4 5 acid_type 1 pH 7.0 pKa 5.0 pIp 2.0 pIm 2.0
where particles of atom type 4 and 5 are the salt cations and anions, both at activity (effective concentration) of :math:`10^{-2}` mol/l, see :ref:`(Curk1) <Curk1>` and
:ref:`(Landsgesell) <Landsgesell>` for more details.
where particles of atom type 4 and 5 are the salt cations and anions,
both at activity (effective concentration) of :math:`10^{-2}` mol/l, see
:ref:`(Curk1) <Curk1>` and :ref:`(Landsgesell) <Landsgesell>` for more
details.
We could have simultaneously added a base ionization reaction (:math:`\mathrm{B} \rightleftharpoons \mathrm{B}^++\mathrm{OH}^-`)
We could have simultaneously added a base ionization reaction
(:math:`\mathrm{B} \rightleftharpoons \mathrm{B}^++\mathrm{OH}^-`)
.. code-block:: LAMMPS
@ -122,7 +122,18 @@ where the fix will attempt to charge :math:`\mathrm{B}` (discharge
insert (delete) a hydroxyl ion :math:`\mathrm{OH}^-` of atom type 3.
Dissociated ions and salt ions can be combined into a single particle type, which reduces the number of necessary MC moves and increases sampling performance, see :ref:`(Curk1) <Curk1>`. The :math:`\mathrm{H}^+` and monovalent salt cation (:math:`\mathrm{S}^+`) are combined into a single particle type, :math:`\mathrm{X}^+ = \{\mathrm{H}^+, \mathrm{S}^+\}`. In this case "pIp" refers to the effective concentration of the combined cation type :math:`\mathrm{X}^+` and its value is determined by :math:`10^{-\mathrm{pIp}} = 10^{-\mathrm{pH}} + 10^{-\mathrm{pSp}}`, where :math:`10^{-\mathrm{pSp}}` is the effective concentration of salt cations. For example, at pH=7 and pSp=6 we would find pIp~5.958 and the command that performs reactions with combined ions could read,
Dissociated ions and salt ions can be combined into a single particle
type, which reduces the number of necessary MC moves and increases
sampling performance, see :ref:`(Curk1) <Curk1>`. The
:math:`\mathrm{H}^+` and monovalent salt cation (:math:`\mathrm{S}^+`)
are combined into a single particle type, :math:`\mathrm{X}^+ =
\{\mathrm{H}^+, \mathrm{S}^+\}`. In this case "pIp" refers to the
effective concentration of the combined cation type :math:`\mathrm{X}^+`
and its value is determined by :math:`10^{-\mathrm{pIp}} =
10^{-\mathrm{pH}} + 10^{-\mathrm{pSp}}`, where
:math:`10^{-\mathrm{pSp}}` is the effective concentration of salt
cations. For example, at pH=7 and pSp=6 we would find pIp~5.958 and the
command that performs reactions with combined ions could read,
.. code-block:: LAMMPS
@ -138,16 +149,16 @@ If neither the acid or the base type is specified, for example,
the fix simply inserts or deletes an ion pair of a free cation (atom
type 4) and a free anion (atom type 5) as done in a conventional
grand-canonical MC simulation. Multivalent ions can be inserted (deleted) by using the *onlysalt* keyword.
grand-canonical MC simulation. Multivalent ions can be inserted
(deleted) by using the *onlysalt* keyword.
The fix is compatible with LAMMPS sub-packages such as *molecule* or
*rigid*. The acid and base particles can be part of larger
molecules or rigid bodies. Free ions that are inserted to or deleted
from the system must be defined as single particles (no bonded
interactions allowed) and cannot be part of larger molecules or rigid
bodies. If *molecule* package is used, all inserted ions have a molecule
ID equal to zero.
This fix is compatible with LAMMPS packages such as MOLECULE or
RIGID. The acid and base particles can be part of larger molecules or
rigid bodies. Free ions that are inserted to or deleted from the system
must be defined as single particles (no bonded interactions allowed) and
cannot be part of larger molecules or rigid bodies. If an atom style
with molecule IDs is used, all inserted ions have a molecule ID equal to
zero.
Note that LAMMPS implicitly assumes a constant number of particles
(degrees of freedom). Since using this fix alters the total number of
@ -164,14 +175,15 @@ Langevin thermostat:
fix fT all langevin 1.0 1.0 1.0 123
fix_modify fT temp dtemp
The units of pH, pKa, pKb, pIp, pIm are considered to be in the standard -log10
representation assuming reference concentration :math:`\rho_0 =
\mathrm{mol}/\mathrm{l}`. For example, in the dilute
The units of pH, pKa, pKb, pIp, pIm are considered to be in the
standard -log10 representation assuming reference concentration
:math:`\rho_0 = \mathrm{mol}/\mathrm{l}`. For example, in the dilute
ideal solution limit, the concentration of free cations will be
:math:`c_\mathrm{I} = 10^{-\mathrm{pIp}}\mathrm{mol}/\mathrm{l}`. To perform the internal unit
conversion, the the value of the LAMMPS unit length must be
specified in nanometers via *lunit_nm*. The default value is set to the Bjerrum length in water
at room temperature (0.71 nm), *lunit_nm* = 0.71.
:math:`c_\mathrm{I} = 10^{-\mathrm{pIp}}\mathrm{mol}/\mathrm{l}`. To
perform the internal unit conversion, the value of the LAMMPS unit
length must be specified in nanometers via *lunit_nm*. The default value
is set to the Bjerrum length in water at room temperature (0.71 nm),
*lunit_nm* = 0.71.
The temperature used in MC acceptance probability is set by *temp*. This
temperature should be the same as the temperature set by the molecular
@ -236,9 +248,9 @@ quantities:
Restrictions
""""""""""""
This fix is part of the MC package. It is only enabled if LAMMPS
was built with that package. See the :doc:`Build package
<Build_package>` page for more info.
This fix is part of the MC package. It is only enabled if LAMMPS was
built with that package. See the :doc:`Build package <Build_package>`
page for more info.
The :doc:`atom_style <atom_style>`, used must contain the charge
property, for example, the style could be *charge* or *full*. Only

View File

@ -1,4 +1,5 @@
.. index:: fix dt/reset
.. index:: fix dt/reset/kk
fix dt/reset command
====================
View File
@ -125,7 +125,7 @@ usually modelled as a Gaussian distribution to make the charge-charge
interaction matrix invertible (:ref:`Gingrich <Gingrich>`). The keyword
*eta* specifies the distribution's width in units of inverse length.
.. versionadded:: TBD
.. versionadded:: 22Dec2022
Three algorithms are available to minimize the energy, varying in how
matrices are pre-calculated before a run to provide computational
@ -234,7 +234,7 @@ issue an error with any other number of electrodes. This keyword
requires electroneutrality to be imposed (*symm on*) and will issue an
error otherwise.
.. versionchanged:: TBD
.. versionchanged:: 22Dec2022
For all versions of the fix, the keyword-value *etypes on* enables
type-based optimized neighbor lists. With this feature enabled, LAMMPS
View File
@ -44,7 +44,7 @@ Examples
Description
"""""""""""
.. versionadded:: TBD
.. versionadded:: 22Dec2022
This command allows one to carry out parallel hybrid molecular
dynamics/Monte Carlo (MD/MC) simulations using the algorithms described
View File
@ -183,29 +183,32 @@ embedded within a larger continuum representation of the electronic
subsystem.
The *set* keyword specifies a *Tinit* temperature value to initialize
the value stored on all grid points.
the value stored on all grid points. By default the temperatures
are all zero when the grid is created.
The *infile* keyword specifies an input file of electronic temperatures
for each grid point to be read in to initialize the grid. By default
the temperatures are all zero when the grid is created. The input file
is a text file which may have comments starting with the '#' character.
Each line contains four numeric columns: ix,iy,iz,Temperature. Empty
or comment-only lines will be ignored. The
number of lines must be equal to the number of user-specified grid
points (Nx by Ny by Nz). The ix,iy,iz are grid point indices ranging
from 0 to nxnodes-1 inclusive in each dimension. The lines can appear
in any order. For example, the initial electronic temperatures on a 1
by 2 by 3 grid could be specified in the file as follows:
for each grid point to be read in to initialize the grid, as an alternative
to using the *set* keyword.
The input file is a text file which may have comments starting with
the '#' character. Each line contains four numeric columns:
ix,iy,iz,Temperature. Empty or comment-only lines will be
ignored. The number of lines must be equal to the number of
user-specified grid points (Nx by Ny by Nz). The ix,iy,iz are grid
point indices ranging from 1 to Nxyz inclusive in each dimension. The
lines can appear in any order. For example, the initial electronic
temperatures on a 1 by 2 by 3 grid could be specified in the file as
follows:
.. parsed-literal::
# UNITS: metal COMMENT: initial electron temperature
0 0 0 1.0
0 0 1 1.0
0 0 2 1.0
0 1 0 2.0
0 1 1 2.0
0 1 2 2.0
1 1 1 1.0
1 1 2 1.0
1 1 3 1.0
1 2 1 2.0
1 2 2 2.0
1 2 3 2.0
where the electronic temperatures along the y=0 plane have been set to
1.0, and the electronic temperatures along the y=1 plane have been set
@ -223,17 +226,31 @@ units used.
The *outfile* keyword has 2 values. The first value *Nout* triggers
output of the electronic temperatures for each grid point every Nout
timesteps. The second value is the filename for output which will
be suffixed by the timestep. The format of each output file is exactly
timesteps. The second value is the filename for output, which will be
suffixed by the timestep. The format of each output file is exactly
the same as the input temperature file. It will contain a comment in
the first line reporting the date the file was created, the LAMMPS
units setting in use, grid size and the current timestep.
Note that the atomic temperature for atoms in each grid cell can also
be computed and output by the :doc:`fix ave/chunk <fix_ave_chunk>`
command using the :doc:`compute chunk/atom <compute_chunk_atom>`
command to create a 3d array of chunks consistent with the grid used
by this fix.
.. note::
The fix ttm/grid command does not support the *outfile* keyword.
Instead you can use the :doc:`dump grid <dump>` command to output
the electronic temperature on the distributed grid to a dump file, or
the :doc:`restart <restart>` command, which creates a file specific
to this fix that the :doc:`read_restart <read_restart>` command
reads. The file has the same format as the file the *infile* option
reads.
For the fix ttm and fix ttm/mod commands, the corresponding atomic
temperature for atoms in each grid cell can be computed and output by
the :doc:`fix ave/chunk <fix_ave_chunk>` command using the
:doc:`compute chunk/atom <compute_chunk_atom>` command to create a 3d
array of chunks consistent with the grid used by this fix.
For the fix ttm/grid command the same thing can be done using the
:doc:`fix ave/grid <fix_ave_grid>` command and its per-grid values can
be output via the :doc:`dump grid <dump>` command.
----------
@ -339,19 +356,25 @@ ignored. The lines with the even numbers are treated as follows:
Restart, fix_modify, output, run start/stop, minimize info
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
These fixes write the state of the electronic subsystem and the energy
exchange between the subsystems to :doc:`binary restart files
<restart>`. See the :doc:`read_restart <read_restart>` command for
info on how to re-specify a fix in an input script that reads a
restart file, so that the operation of the fix continues in an
uninterrupted fashion. Note that the restart script must define the
same size grid as the original script.
The fix ttm and fix ttm/mod commands write the state of the electronic
subsystem and the energy exchange between the subsystems to
:doc:`binary restart files <restart>`. The fix ttm/grid command does
not yet support writing of its distributed grid to a restart file.
Because the state of the random number generator is not saved in the
restart files, this means you cannot do "exact" restarts with this
fix, where the simulation continues on the same as if no restart had
taken place. However, in a statistical sense, a restarted simulation
should produce the same behavior.
See the :doc:`read_restart <read_restart>` command for info on how to
re-specify a fix in an input script that reads a restart file, so that
the operation of the fix continues in an uninterrupted fashion. Note
that the restart script must define the same size grid as the original
script.
The fix ttm/grid command also outputs an auxiliary file each time a
restart file is written, with the electron temperatures for each grid
cell. The format of this file is the same as that read by the
*infile* option explained above. The filename is the same as the
restart filename with ".ttm" appended. This auxiliary file can be
read in for a restarted run by using the *infile* option for the fix
ttm/grid command, following the :doc:`read_restart <read_restart>`
command.
None of the :doc:`fix_modify <fix_modify>` options are relevant to
these fixes.
@ -371,6 +394,14 @@ electronic subsystem energies reported at the end of the timestep.
The vector values calculated are "extensive".
The fix ttm/grid command also outputs a per-grid vector which stores
the electron temperature for each grid cell in temperature :doc:`units
<units>`, which can be accessed by various :doc:`output commands
<Howto_output>`. The length of the vector (distributed across all
processors) is Nx * Ny * Nz. For access by other commands, the name
of the single grid produced by fix ttm/grid is "grid". The name of
its per-grid data is "data".
No parameter of the fixes can be used with the *start/stop* keywords
of the :doc:`run <run>` command. The fixes are not invoked during
:doc:`energy minimization <minimize>`.
@ -385,6 +416,15 @@ package <Build_package>` page for more info.
As mentioned above, these fixes require 3d simulations and orthogonal
simulation boxes periodic in all 3 dimensions.
These fixes use a random number generator to Langevin thermostat the
electron temperature. This means you will not get identical answers
when running on different numbers of processors or when restarting a
simulation (even on the same number of processors). However, in a
statistical sense, simulations on different processor counts and
restarted simulations should produce results which are statistically
the same.
Related commands
""""""""""""""""

View File

@ -1,4 +1,5 @@
.. index:: fix viscous
.. index:: fix viscous/kk
fix viscous command
===================

View File

@ -1333,13 +1333,13 @@ For example,
Citation of OpenKIM IMs
"""""""""""""""""""""""
When publishing results obtained using OpenKIM IMs researchers are requested
to cite the OpenKIM project :ref:`(Tadmor) <kim-mainpaper>`, KIM API
:ref:`(Elliott) <kim-api>`, and the specific IM codes used in the simulations,
in addition to the relevant scientific references for the IM. The citation
format for an IM is displayed on its page on
`OpenKIM <https://openkim.org>`_ along with the corresponding BibTex file, and
is automatically added to the LAMMPS citation reminder.
When publishing results obtained using OpenKIM IMs researchers are
requested to cite the OpenKIM project :ref:`(Tadmor) <kim-mainpaper>`,
KIM API :ref:`(Elliott) <kim-api>`, and the specific IM codes used in
the simulations, in addition to the relevant scientific references for
the IM. The citation format for an IM is displayed on its page on
`OpenKIM <https://openkim.org>`_ along with the corresponding BibTex
file, and is automatically added to the LAMMPS citation reminder.
Citing the IM software (KIM infrastructure and specific PM or SM codes) used in
the simulation gives credit to the researchers who developed them and enables
@ -1348,15 +1348,15 @@ open source efforts like OpenKIM to function.
Restrictions
""""""""""""
The *kim* command is part of the KIM package. It is only enabled if LAMMPS is
built with that package. A requirement for the KIM package, is the KIM API
library that must be downloaded from the
`OpenKIM website <https://openkim.org/kim-api/>`_ and installed before LAMMPS is
The *kim* command is part of the KIM package. It is only enabled if
LAMMPS is built with that package. A requirement for the KIM package
is the KIM API library, which must be downloaded from the `OpenKIM website
<https://openkim.org/kim-api/>`_ and installed before LAMMPS is
compiled. When installing LAMMPS from binary, the kim-api package is a
dependency that is automatically downloaded and installed. The *kim query*
command requires the *libcurl* library to be installed. The *kim property*
command requires *Python* 3.6 or later and the *kim-property* python package to
be installed. See the KIM section of the
dependency that is automatically downloaded and installed. The *kim
query* command requires the *libcurl* library to be installed. The *kim
property* command requires *Python* 3.6 or later and the *kim-property*
python package to be installed. See the KIM section of the
:doc:`Packages details <Packages_details>` for details.
Furthermore, when using *kim* command to run KIM SMs, any packages required by

View File

@ -51,7 +51,6 @@ Syntax
*slab* value = volfactor or *nozforce*
volfactor = ratio of the total extended volume used in the
2d approximation compared with the volume of the simulation domain
*ew2d* EW2D correction (available with ELECTRODE package)
*nozforce* turns off kspace forces in the z direction
*splittol* value = tol
tol = relative size of two eigenvalues (see discussion below)
@ -381,18 +380,22 @@ solver is set up.
The *slab* keyword allows an Ewald or PPPM solver to be used for a
systems that are periodic in x,y but non-periodic in z - a
:doc:`boundary <boundary>` setting of "boundary p p f". This is done by
treating the system as if it were periodic in z, but inserting empty
volume between atom slabs and removing dipole inter-slab interactions
so that slab-slab interactions are effectively turned off. The
volfactor value sets the ratio of the extended dimension in z divided
by the actual dimension in z. The recommended value is 3.0. A larger
value is inefficient; a smaller value introduces unwanted slab-slab
:doc:`boundary <boundary>` setting of "boundary p p f". This is done
by treating the system as if it were periodic in z, but inserting
empty volume between atom slabs and removing dipole inter-slab
interactions so that slab-slab interactions are effectively turned
off. The volfactor value sets the ratio of the extended dimension in
z divided by the actual dimension in z. It must be a value >= 1.0. A
value of 1.0 (the default) means the slab approximation is not used.
The recommended value for volfactor is 3.0. A larger value is
inefficient; a smaller value introduces unwanted slab-slab
interactions. The use of fixed boundaries in z means that the user
must prevent particle migration beyond the initial z-bounds, typically
by providing a wall-style fix. The methodology behind the *slab*
option is explained in the paper by :ref:`(Yeh) <Yeh>`. The *slab* option
is also extended to non-neutral systems :ref:`(Ballenegger) <Ballenegger>`.
option is explained in the paper by :ref:`(Yeh) <Yeh>`. The *slab*
option is also extended to non-neutral systems :ref:`(Ballenegger)
<Ballenegger>`.
An alternative slab option can be invoked with the *nozforce* keyword
in lieu of the volfactor. This turns off all kspace forces in the z
@ -402,8 +405,8 @@ boundaries can be set using :doc:`boundary <boundary>` (the slab
approximation in not needed). The *slab* keyword is not currently
supported by Ewald or PPPM when using a triclinic simulation cell. The
slab correction has also been extended to point dipole interactions
:ref:`(Klapp) <Klapp>` in :doc:`kspace_style <kspace_style>` *ewald/disp*,
*ewald/dipole*, and *pppm/dipole*\ .
:ref:`(Klapp) <Klapp>` in :doc:`kspace_style <kspace_style>`
*ewald/disp*, *ewald/dipole*, and *pppm/dipole*\ .
.. note::
@ -448,15 +451,32 @@ Related commands
Default
"""""""
The option defaults are mesh = mesh/disp = 0 0 0, order = order/disp =
5 (PPPM), order = 10 (MSM), minorder = 2, overlap = yes, force = -1.0,
gewald = gewald/disp = 0.0, slab = 1.0, compute = yes, cutoff/adjust =
yes (MSM), pressure/scalar = yes (MSM), fftbench = no (PPPM), diff =
ik (PPPM), mix/disp = pair, force/disp/real = -1.0, force/disp/kspace
= -1.0, split = 0, tol = 1.0e-6, and disp/auto = no. For pppm/intel,
order = order/disp = 7. For scafacos settings, the scafacos tolerance
option depends on the method chosen, as documented above. The
scafacos fmm_tuning default = 0.
The option defaults are as follows:
* compute = yes
* cutoff/adjust = yes (MSM)
* diff = ik (PPPM)
* disp/auto = no
* fftbench = no (PPPM)
* force = -1.0
* force/disp/kspace = -1.0
* force/disp/real = -1.0
* gewald = gewald/disp = 0.0
* mesh = mesh/disp = 0 0 0
* minorder = 2
* mix/disp = pair
* order = 10 (MSM)
* order = order/disp = 5 (PPPM)
* order = order/disp = 7 (PPPM/intel)
* overlap = yes
* pressure/scalar = yes (MSM)
* slab = 1.0
* split = 0
* tol = 1.0e-6
For scafacos settings, the scafacos tolerance option depends on the
method chosen, as documented above. The scafacos fmm_tuning default
= 0.
----------
View File
@ -115,10 +115,10 @@ each pair of atoms types via the :doc:`pair_coeff <pair_coeff>` command
as in the examples above:
* A (force units)
* :math:`\gamma_{\perp}` (force/velocity units)
* :math:`\gamma_{\parallel}` (force/velocity units)
* :math:`s_{\perp}` (unitless)
* :math:`\gamma_{\perp}` (force/velocity units)
* :math:`s_{\parallel}` (unitless)
* :math:`s_{\perp}` (unitless)
* :math:`r_c` (distance units)
The last coefficient is optional. If not specified, the global DPD
View File
@ -21,7 +21,7 @@ Examples
Description
"""""""""""
.. versionadded:: TBD
.. versionadded:: 22Dec2022
Pair style *pod* defines the proper orthogonal descriptor (POD)
potential :ref:`(Nguyen) <Nguyen20221>`. The mathematical definition of
View File
@ -129,7 +129,7 @@ The first argument of the *python* command is either the *source*
keyword or the name of a Python function. This defines the mode
of the python command.
.. versionchanged:: TBD
.. versionchanged:: 22Dec2022
If the *source* keyword is used, it is followed by either a file name or
the *here* keyword. No other keywords can be used. The *here* keyword
View File
@ -8,7 +8,7 @@ Syntax
.. code-block:: LAMMPS
read_restart file flag
read_restart file
* file = name of binary restart file to read in
@ -37,6 +37,13 @@ processors in the current simulation and the settings of the
changed by the :doc:`balance <balance>` or :doc:`fix balance
<fix_balance>` commands.
.. deprecated:: 23Jun2022
Atom coordinates that are found to be outside the simulation box when
reading the restart will be remapped back into the box and their image
flags updated accordingly. This previously required specifying the
*remap* option, but that is no longer required.
Restart files are saved in binary format to enable exact restarts,
meaning that the trajectories of a restarted run will precisely match
those produced by the original run had it continued on.
View File
@ -192,13 +192,12 @@ calculated which saves time. The :doc:`comm_modify cutoff
acquired from far enough away for operations like bond and angle
evaluations, if no pair style is being used.
Every time a snapshot is read, the timestep for the simulation is
reset, as if the :doc:`reset_timestep <reset_timestep>` command were
used. This command has some restrictions as to what fixes can be
defined. See its page for details. For example, the :doc:`fix
deposit <fix_deposit>` and :doc:`fix dt/reset <fix_dt_reset>` fixes
are in this category. They also make no sense to use with a rerun
command.
Every time a snapshot is read, the timestep for the simulation is reset,
as if the :doc:`reset_timestep <reset_timestep>` command were used.
This command has some restrictions as to what fixes can be defined. See
its documentation page for details. For example, the :doc:`fix deposit
<fix_deposit>` and :doc:`fix dt/reset <fix_dt_reset>` fixes are in this
category. They also make no sense to use with a rerun command.
If time-averaging fixes like :doc:`fix ave/time <fix_ave_time>` are
used, they are invoked on timesteps that are a function of their
View File
@ -63,7 +63,7 @@ Examples
Description
"""""""""""
.. versionadded:: TBD
.. versionadded:: 22Dec2022
The *reset_atoms* command resets the values of a specified atom
property. In contrast to the set command, it does this in a
View File
@ -154,7 +154,8 @@ temperature using a compute that is defined internally as follows:
where group-ID is the same ID used in the velocity command. i.e. the
group of atoms whose velocity is being altered. This compute is
deleted when the velocity command is finished. See the :doc:`compute temp <compute_temp>` command for details. If the calculated
deleted when the velocity command is finished. See the :doc:`compute
temp <compute_temp>` command for details. If the calculated
temperature should have degrees-of-freedom removed due to fix
constraints (e.g. SHAKE or rigid-body constraints), then the
appropriate fix command must be specified before the velocity command
@ -163,13 +164,13 @@ is issued.
The *bias* keyword with a *yes* setting is used by *create* and
*scale*, but only if the *temp* keyword is also used to specify a
:doc:`compute <compute>` that calculates temperature in a desired way.
If the temperature compute also calculates a velocity bias, the
bias is subtracted from atom velocities before the *create* and
*scale* operations are performed. After the operations, the bias is
added back to the atom velocities. See the :doc:`Howto thermostat <Howto_thermostat>` page for more discussion of
temperature computes with biases. Note that the velocity bias is only
applied to atoms in the temperature compute specified with the *temp*
keyword.
If the temperature compute also calculates a velocity bias, the bias
is subtracted from atom velocities before the *create* and *scale*
operations are performed. After the operations, the bias is added
back to the atom velocities. See the :doc:`Howto thermostat
<Howto_thermostat>` page for more discussion of temperature computes
with biases. Note that the velocity bias is only applied to atoms in
the temperature compute specified with the *temp* keyword.
As an example, assume atoms are currently streaming in a flow
direction (which could be separately initialized with the *ramp*
@ -218,7 +219,8 @@ coordinate as stored on a particular machine.
----------
The *rigid* keyword only has meaning when used with the *zero* style.
It allows specification of a fix-ID for one of the :doc:`rigid-body fix <fix_rigid>` variants which defines a set of rigid bodies. The
It allows specification of a fix-ID for one of the :doc:`rigid-body
fix <fix_rigid>` variants which defines a set of rigid bodies. The
zeroing of linear or angular momentum is then performed for each rigid
body defined by the fix, as described above.
@ -235,16 +237,18 @@ command must have been previously used to define the lattice spacing.
Restrictions
""""""""""""
Assigning a temperature via the *create* style to a system with :doc:`rigid bodies <fix_rigid>` or :doc:`SHAKE constraints <fix_shake>` may not
have the desired outcome for two reasons. First, the velocity command
can be invoked before all of the relevant fixes are created and
initialized and the number of adjusted degrees of freedom (DOFs) is
known. Thus it is not possible to compute the target temperature
correctly. Second, the assigned velocities may be partially canceled
when constraints are first enforced, leading to a different
temperature than desired. A workaround for this is to perform a :doc:`run 0 <run>` command, which insures all DOFs are accounted for
properly, and then rescale the temperature to the desired value before
performing a simulation. For example:
Assigning a temperature via the *create* style to a system with
:doc:`rigid bodies <fix_rigid>` or :doc:`SHAKE constraints
<fix_shake>` may not have the desired outcome for two reasons. First,
the velocity command can be invoked before all of the relevant fixes
are created and initialized and the number of adjusted degrees of
freedom (DOFs) is known. Thus it is not possible to compute the
target temperature correctly. Second, the assigned velocities may be
partially canceled when constraints are first enforced, leading to a
different temperature than desired. A workaround for this is to
perform a :doc:`run 0 <run>` command, which insures all DOFs are
accounted for properly, and then rescale the temperature to the
desired value before performing a simulation. For example:
.. code-block:: LAMMPS
View File
@ -414,6 +414,7 @@ cdennist
cdof
ceil
Ceil
cekk
centerline
centro
centroid
@ -473,6 +474,7 @@ Cij
cis
civ
CKD
ckk
Clang
clearstore
Cleary
@ -562,6 +564,8 @@ Coulombic
Coulombics
counterion
counterions
CountI
CountN
Courant
covalent
covalently
@ -779,6 +783,7 @@ dmax
Dmax
dmg
dmi
dname
dnf
DNi
Dobnikar
@ -1253,10 +1258,12 @@ Glosli
Glotzer
gmail
gmake
gmap
gmask
Gmask
GMock
gmres
gname
gneb
GNEB
Goldfarb
@ -1452,6 +1459,7 @@ ieni
ifdefs
iff
ifort
ihi
Ihle
ij
ijk
@ -1462,6 +1470,7 @@ ilabel
Ilie
ilmenau
Ilmenau
ilo
ilp
Ilya
im
@ -1566,6 +1575,8 @@ Iw
Iwers
iwyu
ixcm
ixhi
ixlo
ixx
Ixx
ixy
@ -2413,6 +2424,7 @@ nlen
Nlimit
nlines
Nlines
nlist
nlo
nlocal
Nlocal
@ -2544,11 +2556,15 @@ Nwait
nwchem
nx
Nx
nxlo
nxnodes
Nxyz
ny
Ny
nylo
nz
Nz
nzlo
ocl
octahedral
octants
@ -2618,6 +2634,8 @@ overlayed
Ovito
oxdna
oxDNA
oxhi
oxlo
oxrna
oxRNA
packings
@ -2686,6 +2704,7 @@ Peng
peptide
peratom
Pergamon
pergrid
peri
peridynamic
Peridynamic
@ -3407,6 +3426,8 @@ Sugaku
Suhai
Sukumaran
Sulc
SumI
SumN
sumsq
Sunderland
supercell
@ -3887,6 +3908,7 @@ xa
xAVX
xb
Xc
xc
xcm
Xcm
Xcode
@ -3918,6 +3940,7 @@ xplane
XPlor
xrd
xs
xsc
xstk
xsu
xtc
@ -3941,6 +3964,7 @@ Yazdani
Ybar
ybox
Yc
yc
ycm
Yeh
yellowgreen
@ -3957,6 +3981,7 @@ ymin
yml
Yoshida
ys
ysc
ysu
yu
Yu
@ -3974,6 +3999,7 @@ Zavattieri
zbl
ZBL
Zc
zc
zcm
zeeman
Zeeman
@ -4001,6 +4027,7 @@ zmin
zmq
zN
zs
zsc
zst
Zstandard
zstd
View File
@ -199,7 +199,7 @@ int liblammpsplugin_release(liblammpsplugin_t *lmp)
if (lmp->handle == NULL) return 2;
#ifdef _WIN32
FreeLibrary((HINSTANCE) handle);
FreeLibrary((HINSTANCE) lmp->handle);
#else
dlclose(lmp->handle);
#endif
View File
@ -1,63 +0,0 @@
DATE: 2012-02-01 CONTRIBUTOR: Alexander Stukowski, stukowski@mm.tu-darmstadt.de CITATION: Lenosky, Sadigh, Alonso, Bulatov, de la Rubia, Kim, Voter and Kress, Modell Simul Mater Sci Eng, 8, 825 (2000) COMMENT: Spline-based MEAM potential for Si. Reference: T. J. Lenosky, B. Sadigh, E. Alonso, V. V. Bulatov, T. D. de la Rubia, J. Kim, A. F. Voter, and J. D. Kress, Modell. Simul. Mater. Sci. Eng. 8, 825 (2000)
10
-4.266966781858503300e+01 0.000000000000000000e+00
1 0 1 0
1.500000000000000000e+00 6.929943430771341000e+00 1.653321602557917600e+02
1.833333333333333300e+00 -4.399503747408950400e-01 3.941543472528634600e+01
2.166666666666666500e+00 -1.701233725061446700e+00 6.871065423413908100e+00
2.500000000000000000e+00 -1.624732919215791800e+00 5.340648014033163800e+00
2.833333333333333000e+00 -9.969641728342462100e-01 1.534811309391571000e+00
3.166666666666667000e+00 -2.739141845072665100e-01 -6.334706186546093900e+00
3.500000000000000000e+00 -2.499156963774082700e-02 -1.798864729909626500e+00
3.833333333333333500e+00 -1.784331481529976400e-02 4.743496636420091500e-01
4.166666666666666100e+00 -9.612303290166881000e-03 -4.006506271304824400e-02
4.500000000000000000e+00 0.000000000000000000e+00 -2.394996574779807200e-01
11
-1.000000000000000000e+00 0.000000000000000000e+00
1 0 0 0
1.500000000000000000e+00 1.374674212682983900e-01 -3.227795813279568500e+00
1.700000000000000000e+00 -1.483141815327918000e-01 -6.411648793604404900e+00
1.899999999999999900e+00 -5.597204896096039700e-01 1.003068519633888300e+01
2.100000000000000100e+00 -7.310964379372824100e-01 2.293461970618954700e+00
2.299999999999999800e+00 -7.628287071954063000e-01 1.742018781618444500e+00
2.500000000000000000e+00 -7.291769685066557000e-01 5.460640949384478700e-01
2.700000000000000200e+00 -6.662022220044453400e-01 4.721760106467195500e-01
2.899999999999999900e+00 -5.732830582550895200e-01 2.056894449546524200e+00
3.100000000000000100e+00 -4.069014309729406300e-01 2.319615721086100800e+00
3.299999999999999800e+00 -1.666155295956388300e-01 -2.497162196179187900e-01
3.500000000000000000e+00 0.000000000000000000e+00 -1.237130660986393100e+01
8
7.351364478015182100e-01 6.165217237728655200e-01
1 1 1 1
-1.770934559908718700e+00 -1.074925682941420000e+00 -1.482768170233858500e-01
-3.881557649503457600e-01 -2.004503493658201000e-01 -1.492100354067345500e-01
9.946230300080272100e-01 4.142241371345077300e-01 -7.012475119623896900e-02
2.377401824966400000e+00 8.793892953828742500e-01 -3.944355024164965900e-02
3.760180619924772900e+00 1.266888024536562100e+00 -1.581431192239436000e-02
5.142959414883146800e+00 1.629979548834614900e+00 2.611224310900800400e-02
6.525738209841518900e+00 1.977379549636293600e+00 -1.378738550324104500e-01
7.908517004799891800e+00 2.396177220616657200e+00 7.494253977092666400e-01
10
-3.618936018538757300e+00 0.000000000000000000e+00
1 0 1 0
1.500000000000000000e+00 1.250311510312851300e+00 2.790400588857243500e+01
1.722222222222222300e+00 8.682060369372680600e-01 -4.522554291731776900e+00
1.944444444444444400e+00 6.084604017544847900e-01 5.052931618779816800e+00
2.166666666666666500e+00 4.875624808097850400e-01 1.180825096539679600e+00
2.388888888888888800e+00 4.416345603457190700e-01 -6.673769465415171400e-01
2.611111111111111200e+00 3.760976313325982700e-01 -8.938118490837722000e-01
2.833333333333333000e+00 2.714524157414608400e-01 -5.090324763524399800e-01
3.055555555555555400e+00 1.481440300150710900e-01 6.623665830603995300e-01
3.277777777777777700e+00 4.854596610856590900e-02 7.403702452268122700e-01
3.500000000000000000e+00 0.000000000000000000e+00 2.578982318481970500e+00
8
-1.395041572145673000e+01 1.134616739799360700e+00
1 1 1 1
-1.000000000000000900e+00 5.254163992149617700e+00 1.582685381253900500e+01
-7.428367052748285900e-01 2.359149452448745100e+00 3.117611233789983400e+01
-4.856734105496561800e-01 1.195946960915646100e+00 1.658962813584905800e+01
-2.285101158244838800e-01 1.229952028074150000e+00 1.108360928564026400e+01
2.865317890068852500e-02 2.035650777568434500e+00 9.088861456447702400e+00
2.858164736258610400e-01 3.424741418405580000e+00 5.489943377538379500e+00
5.429797683510331200e-01 4.948585892304984100e+00 -1.882291580187675700e+01
8.001430630762056400e-01 5.617988713941801200e+00 -7.718625571850646200e+00
View File
@ -0,0 +1 @@
../../../potentials/Si_1.meam.spline
View File
@ -1,130 +0,0 @@
# Ti-O cubic spline potential where O is in the dilute limit. DATE: 2016-06-05 CONTRIBUTOR: Pinchao Zhang, Dallas R. Trinkle
meam/spline 2 Ti O
spline3eq
13
-20 0
1.742692837 3.744277175966 99.4865081627958
2.05580176725 0.910839730906 10.8702523265355
2.3689106975 0.388045896634 -1.55322418749562
2.68201962775 -0.018840906533 2.43630041329215
2.995128558 -0.248098929639 2.67912713976835
3.30823748825 -0.264489550297 -0.125056384603077
3.6213464185 -0.227196189283 1.10662555360438
3.93445534875 -0.129293090176 -0.592053676745914
4.247564279 -0.059685366933 -0.470123414607672
4.56067320925 -0.031100025561 -0.0380739973059663
4.8737821395 -0.013847363202 -0.0711547960695406
5.18689106975 -0.003203412728 -0.081768292420175
5.5 0 -0.0571422964883619
spline3eq
5
0.155001355787331 0
1.9 0.533321679606674 0
2.8 0.456402081843862 -1.60311717015859
3.7 -0.324281383502201 1.19940299483249
4.6 -0.474029826906675 1.47909794595154
5.5 0 -2.49521499855605
spline3eq
13
0 0
1.742692837 0 0
2.05580176725 0 0
2.3689106975 0 0
2.68201962775 0 0
2.995128558 0 0
3.30823748825 0 0
3.6213464185 0 0
3.93445534875 0 0
4.247564279 0 0
4.56067320925 0 0
4.8737821395 0 0
5.18689106975 0 0
5.5 0 0
spline3eq
11
-1 0
2.055801767 1.7475279661 -525.869786904802
2.2912215903 -5.8677963945 252.796316927755
2.5266414136 -8.3376288737 71.7318388721015
2.7620612369 -5.8398712842 -1.93587742753693
2.9974810602 -3.1140648231 -39.2999192667503
3.2329008835 -1.7257245065 14.3424136002004
3.4683207068 -0.4428977017 -29.4925534559498
3.7037405301 -0.1466643003 -3.18010534572236
3.9391603534 -0.2095507945 3.33490838803603
4.1745801767 -0.1442384563 3.71918691359508
4.41 0 -9.66717019857564
spline3eq
5
-61.9827585211652 0
1.9 11.2293641315584 0
2.8 -27.9976343076148 122.648031332411
3.7 -8.32979773113248 -54.3340881766381
4.6 -1.00863195297399 3.23150064581724
5.5 0 -5.3514242228123
spline3eq
4
0.00776934946045395 0.105197706160344
-55.14233165 -0.29745568008 0.00152870603877451
-44.7409899033333 -0.15449458722 0.00038933722543571
-34.3396481566667 0.05098657168 0.00038124926922248
-23.93830641 0.57342694704 0.0156639264890892
spline3eq
5
-0.00676745157022662 -0.0159520381982146
-23.9928 0.297607384684645 0
-15.9241175 0.216691597077105 -0.0024248755353942
-7.855435 0.0637598673719069 0.00306245895013358
0.213247499999998 -0.00183450621970427 -0.00177588407633909
8.28193 -0.111277018874367 0
spline3eq
10
2.77327511656661 0
2.055801767 -0.1485215264 72.2010867146919
2.31737934844444 1.6845304918 -47.2744689053404
2.57895692988889 2.0113365977 -15.1859578405326
2.84053451133333 1.1444092747 3.33978204841873
3.10211209277778 0.2861606803 2.587867603808
3.36368967422222 -0.3459281126 6.14070694084556
3.62526725566667 -0.6257480601 3.7397696717154
3.88684483711111 -0.6119510826 4.64749084871402
4.14842241855556 -0.3112059651 2.83275746415936
4.41 0 -15.0612086827734
spline3eq
5
12.3315547862781 0
1.9 2.62105440156724 0
2.8 10.2850803058354 -25.439802988016
3.7 3.23933763743897 -7.20203673434025
4.6 -5.79049355858613 39.5509978688682
5.5 0 -41.221771373642
spline3eq
8
8.33642274810572 -60.4024574736564
-1 0.07651409193 -110.652321293778
-0.724509054371429 0.14155824541 44.8853405500508
-0.449018108742857 0.75788697341 -25.3065115342002
-0.173527163114286 0.63011570378 -2.48510144915082
0.101963782514286 0.09049597305 2.68769386908235
0.377454728142857 -0.35741586657 -1.01558570129633
0.652945673771428 -0.65293217647 13.4224786001212
0.9284366194 -6.00912190653 -452.752542694929
spline3eq
5
0.137191606537625 -1.55094230968985
-1 0.0513843442016519 0
-0.5 0.0179024412245673 -2.44986494990154
0 -0.260650876879273 3.91774583656401
0.5 -0.190163791764901 -4.84414871911743
1 -0.763795416646599 0
spline3eq
8
0 0
-1 0 0
-0.724509054371429 0 0
-0.449018108742857 0 0
-0.173527163114286 0 0
0.101963782514286 0 0
0.377454728142857 0 0
0.652945673771428 0 0
0.9284366194 0 0

View File

@ -0,0 +1 @@
../../../potentials/TiO.meam.spline

View File

@ -0,0 +1,91 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 3.98
Lattice spacing in x,y,z = 3.98 3.98 3.98
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (19.9 19.9 19.9)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
using lattice units in orthogonal box = (0 0 0) to (19.9 19.9 19.9)
create_atoms CPU = 0.000 seconds
pair_style meam/spline
pair_coeff * * Si_1.meam.spline Si
Reading meam/spline potential file Si_1.meam.spline with DATE: 2012-02-01
mass * 28.085
velocity all create 500.0 44226611
fix 1 all nvt temp 500.0 500.0 1.0
thermo 50
run 500
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.5
ghost atom cutoff = 6.5
binsize = 3.25, bins = 7 7 7
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.902 | 3.902 | 3.902 Mbytes
Step Temp E_pair E_mol TotEng Press
0 500 -1847.729 0 -1815.4786 1813162.7
50 1934.0932 -1940.8016 0 -1816.051 -48657.676
100 2570.1286 -1984.8725 0 -1819.0971 8002.4248
150 2566.7917 -1990.2724 0 -1824.7123 16819.447
200 2555.1319 -1995.2233 0 -1830.4152 5891.5313
250 2487.2881 -1995.8302 0 -1835.3981 -4339.7172
300 2381.4836 -1994.2492 0 -1840.6415 16508.04
350 2330.8663 -1996.6588 0 -1846.3161 24194.447
400 2212.6035 -1994.9278 0 -1852.2131 -9856.3709
450 2257.7531 -2003.8187 0 -1858.1918 -8029.6019
500 2211.4385 -2006.9846 0 -1864.345 4152.4867
Loop time of 3.06076 on 1 procs for 500 steps with 500 atoms
Performance: 14.114 ns/day, 1.700 hours/ns, 163.358 timesteps/s, 81.679 katom-step/s
99.9% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 3.03 | 3.03 | 3.03 | 0.0 | 99.00
Neigh | 0.020755 | 0.020755 | 0.020755 | 0.0 | 0.68
Comm | 0.0045293 | 0.0045293 | 0.0045293 | 0.0 | 0.15
Output | 0.00020334 | 0.00020334 | 0.00020334 | 0.0 | 0.01
Modify | 0.0038919 | 0.0038919 | 0.0038919 | 0.0 | 0.13
Other | | 0.001352 | | | 0.04
Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1767 ave 1767 max 1767 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18059 ave 18059 max 18059 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 36118 ave 36118 max 36118 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 36118
Ave neighs/atom = 72.236
Neighbor list builds = 14
Dangerous builds = 0
Total wall time: 0:00:03

View File

@ -0,0 +1,91 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 3.98
Lattice spacing in x,y,z = 3.98 3.98 3.98
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (19.9 19.9 19.9)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
using lattice units in orthogonal box = (0 0 0) to (19.9 19.9 19.9)
create_atoms CPU = 0.000 seconds
pair_style meam/spline
pair_coeff * * Si_1.meam.spline Si
Reading meam/spline potential file Si_1.meam.spline with DATE: 2012-02-01
mass * 28.085
velocity all create 500.0 44226611
fix 1 all nvt temp 500.0 500.0 1.0
thermo 50
run 500
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.5
ghost atom cutoff = 6.5
binsize = 3.25, bins = 7 7 7
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.866 | 3.866 | 3.866 Mbytes
Step Temp E_pair E_mol TotEng Press
0 500 -1847.729 0 -1815.4786 1813162.7
50 1923.4262 -1940.0936 0 -1816.0311 -38700.835
100 2535.2542 -1982.6249 0 -1819.0989 10216.821
150 2592.8247 -1992.1569 0 -1824.9176 4839.3385
200 2484.7391 -1990.8452 0 -1830.5775 14040.141
250 2597.4401 -2003.1619 0 -1835.625 1261.5199
300 2513.0793 -2002.942 0 -1840.8463 6690.9815
350 2390.933 -2001.0761 0 -1846.859 -4880.1146
400 2269.0782 -1999.3441 0 -1852.9867 -4921.4391
450 2287.5096 -2006.8236 0 -1859.2774 -7313.6151
500 2303.0918 -2014.0693 0 -1865.518 -9995.1789
Loop time of 0.845261 on 4 procs for 500 steps with 500 atoms
Performance: 51.108 ns/day, 0.470 hours/ns, 591.533 timesteps/s, 295.767 katom-step/s
99.6% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.82311 | 0.82403 | 0.82556 | 0.1 | 97.49
Neigh | 0.0054304 | 0.0055826 | 0.0058949 | 0.2 | 0.66
Comm | 0.0095108 | 0.011321 | 0.012448 | 1.0 | 1.34
Output | 0.00019703 | 0.0002108 | 0.00024574 | 0.0 | 0.02
Modify | 0.0026442 | 0.002759 | 0.0028243 | 0.1 | 0.33
Other | | 0.001353 | | | 0.16
Nlocal: 125 ave 131 max 118 min
Histogram: 1 0 0 1 0 0 0 0 1 1
Nghost: 979.25 ave 986 max 975 min
Histogram: 1 1 0 1 0 0 0 0 0 1
Neighs: 4541.75 ave 4712 max 4362 min
Histogram: 1 1 0 0 0 0 0 0 0 2
FullNghs: 9083.5 ave 9485 max 8601 min
Histogram: 1 0 0 1 0 0 0 0 1 1
Total # of neighbors = 36334
Ave neighs/atom = 72.668
Neighbor list builds = 14
Dangerous builds = 0
Total wall time: 0:00:00

View File

@ -0,0 +1,253 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
#
variable T_depart equal 300
variable dt equal 0.0002
variable a equal 4.5937
variable c equal 2.9587
variable ca equal ${c}/${a}
variable ca equal 2.9587/${a}
variable ca equal 2.9587/4.5937
variable nx equal 6
variable ny equal 6
variable nz equal 11
variable bx equal ${a}*${nx}
variable bx equal 4.5937*${nx}
variable bx equal 4.5937*6
variable by equal ${a}*${ny}
variable by equal 4.5937*${ny}
variable by equal 4.5937*6
variable bz equal ${c}*${nz}
variable bz equal 2.9587*${nz}
variable bz equal 2.9587*11
# =======================================================================
units metal
atom_style atomic
dimension 3
boundary p p p
lattice sc 1.0
Lattice spacing in x,y,z = 1 1 1
region box_vide prism 0 ${bx} 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 32.5457 0.0 0.0 0.0
create_box 2 box_vide
Created triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
1 by 1 by 1 MPI processor grid
#lattice sc 1.0
#region box_TiO2 block 0 ${bx} 0 ${by} 0 ${bz}
# titanium atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 2 region box_vide
Created 792 atoms
using lattice units in triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
create_atoms CPU = 0.000 seconds
# Oxygen atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 1 region box_vide
Created 1584 atoms
using lattice units in triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
create_atoms CPU = 0.000 seconds
mass 1 16.00
group Oxy type 1
1584 atoms in group Oxy
mass 2 47.867
group Ti type 2
792 atoms in group Ti
velocity all create ${T_depart} 277387
velocity all create 300 277387
pair_style meam/spline
pair_coeff * * TiO.meam.spline O Ti
Reading meam/spline potential file TiO.meam.spline with DATE: 2016-06-05
neighbor 0.5 bin
neigh_modify every 2 delay 0 check yes
timestep ${dt}
timestep 0.0002
thermo_style custom step temp press pe ke etotal lx ly lz vol
thermo 10
#dump 5 all custom 500 boxAlpha_alumina.lammpstrj id type q x y z
fix 3 all nve
run 100
Neighbor list info ...
update: every = 2 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6
ghost atom cutoff = 6
binsize = 3, bins = 10 10 11
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 5.175 | 5.175 | 5.175 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 300 22403.656 -14374.073 92.097853 -14281.975 27.5622 27.5622 32.5457 24724.15
10 301.41345 23612.297 -14374.507 92.531772 -14281.975 27.5622 27.5622 32.5457 24724.15
20 305.11674 25127.832 -14375.643 93.668657 -14281.974 27.5622 27.5622 32.5457 24724.15
30 313.28903 26655.89 -14378.151 96.17749 -14281.974 27.5622 27.5622 32.5457 24724.15
40 328.94567 26999.049 -14382.957 100.98397 -14281.974 27.5622 27.5622 32.5457 24724.15
50 354.05827 23023.294 -14390.667 108.69336 -14281.974 27.5622 27.5622 32.5457 24724.15
60 390.48404 13594.655 -14401.849 119.87581 -14281.973 27.5622 27.5622 32.5457 24724.15
70 442.69928 151.15709 -14417.877 135.90551 -14281.972 27.5622 27.5622 32.5457 24724.15
80 516.89551 -14984.124 -14440.654 158.68322 -14281.971 27.5622 27.5622 32.5457 24724.15
90 618.22135 -29948.066 -14471.76 189.78953 -14281.971 27.5622 27.5622 32.5457 24724.15
100 747.6193 -41964.291 -14511.487 229.51378 -14281.973 27.5622 27.5622 32.5457 24724.15
Loop time of 25.3398 on 1 procs for 100 steps with 2376 atoms
Performance: 0.068 ns/day, 351.941 hours/ns, 3.946 timesteps/s, 9.377 katom-step/s
99.9% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 25.324 | 25.324 | 25.324 | 0.0 | 99.94
Neigh | 0.0079644 | 0.0079644 | 0.0079644 | 0.0 | 0.03
Comm | 0.0030695 | 0.0030695 | 0.0030695 | 0.0 | 0.01
Output | 0.00032829 | 0.00032829 | 0.00032829 | 0.0 | 0.00
Modify | 0.0028312 | 0.0028312 | 0.0028312 | 0.0 | 0.01
Other | | 0.00137 | | | 0.01
Nlocal: 2376 ave 2376 max 2376 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 4479 ave 4479 max 4479 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 106396 ave 106396 max 106396 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 212792 ave 212792 max 212792 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 212792
Ave neighs/atom = 89.558923
Neighbor list builds = 1
Dangerous builds = 0
unfix 3
fix 1 all box/relax tri 0.0 vmax 0.001
minimize 1.0e-3 1.0e-5 1000 10000
Switching to 'neigh_modify every 1 delay 0 check yes' setting during minimization
Per MPI rank memory allocation (min/avg/max) = 6.3 | 6.3 | 6.3 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
100 747.6193 -41964.291 -14511.487 229.51378 -14281.973 27.5622 27.5622 32.5457 24724.15
101 747.6193 -39284.65 -14517.424 229.51378 -14287.91 27.569615 27.569695 32.513154 24712.789
Loop time of 0.515558 on 1 procs for 1 steps with 2376 atoms
99.7% CPU use with 1 MPI tasks x 1 OpenMP threads
Minimization stats:
Stopping criterion = energy tolerance
Energy initial, next-to-last, final =
-14511.4866189158 -14511.4866189158 -14517.4235162115
Force two-norm initial, final = 5602.2481 5486.9746
Force max component initial, final = 5232.0514 5109.4284
Final line search alpha, max atom move = 1.9112962e-07 0.00097656312
Iterations, force evaluations = 1 1
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.51518 | 0.51518 | 0.51518 | 0.0 | 99.93
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 6.888e-05 | 6.888e-05 | 6.888e-05 | 0.0 | 0.01
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0.0003093 | | | 0.06
Nlocal: 2376 ave 2376 max 2376 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 4449 ave 4449 max 4449 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 105639 ave 105639 max 105639 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 211278 ave 211278 max 211278 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 211278
Ave neighs/atom = 88.921717
Neighbor list builds = 0
Dangerous builds = 0
unfix 1
reset_timestep 0
thermo 50
fix 3 all npt temp 300 300 0.1 aniso 1.0 1.0 1.0
run 500
Per MPI rank memory allocation (min/avg/max) = 5.19 | 5.19 | 5.19 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 747.6193 -39284.65 -14517.424 229.51378 -14287.91 27.569615 27.569695 32.513154 24712.789
50 1155.2878 30637.502 -14678.803 354.6651 -14324.138 27.608715 27.609165 32.375366 24678.238
100 790.04907 99856.609 -14678.837 242.53941 -14436.297 27.777983 27.777976 32.017141 24704.942
150 938.88715 -21502.296 -14803.769 288.23164 -14515.537 27.996567 27.995118 31.67022 24822.079
200 420.1181 -791.77075 -14671.671 128.97325 -14542.698 28.126851 28.125845 31.431177 24864.936
250 352.17802 -3234.483 -14664.99 108.11613 -14556.874 28.222575 28.223558 31.238791 24882.993
300 622.92198 3667.4381 -14758.193 191.23259 -14566.96 28.301663 28.304917 31.072279 24891.264
350 888.27299 26277.515 -14852.568 272.69345 -14579.875 28.370265 28.375054 30.937123 24904.626
400 735.44199 63107.92 -14823.872 225.77543 -14598.097 28.44692 28.452281 30.838022 24959.67
450 804.82182 6213.5499 -14861.115 247.07454 -14614.04 28.543993 28.548769 30.775738 25079.021
500 628.1908 -33923.393 -14814.724 192.85008 -14621.874 28.612082 28.615255 30.740711 25168.712
Loop time of 112.349 on 1 procs for 500 steps with 2376 atoms
Performance: 0.077 ns/day, 312.081 hours/ns, 4.450 timesteps/s, 10.574 katom-step/s
99.9% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 112.15 | 112.15 | 112.15 | 0.0 | 99.82
Neigh | 0.13243 | 0.13243 | 0.13243 | 0.0 | 0.12
Comm | 0.01269 | 0.01269 | 0.01269 | 0.0 | 0.01
Output | 0.00029334 | 0.00029334 | 0.00029334 | 0.0 | 0.00
Modify | 0.053182 | 0.053182 | 0.053182 | 0.0 | 0.05
Other | | 0.005153 | | | 0.00
Nlocal: 2376 ave 2376 max 2376 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 4358 ave 4358 max 4358 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 102634 ave 102634 max 102634 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 205268 ave 205268 max 205268 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 205268
Ave neighs/atom = 86.392256
Neighbor list builds = 16
Dangerous builds = 0
Total wall time: 0:02:19

View File

@ -0,0 +1,253 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
#
variable T_depart equal 300
variable dt equal 0.0002
variable a equal 4.5937
variable c equal 2.9587
variable ca equal ${c}/${a}
variable ca equal 2.9587/${a}
variable ca equal 2.9587/4.5937
variable nx equal 6
variable ny equal 6
variable nz equal 11
variable bx equal ${a}*${nx}
variable bx equal 4.5937*${nx}
variable bx equal 4.5937*6
variable by equal ${a}*${ny}
variable by equal 4.5937*${ny}
variable by equal 4.5937*6
variable bz equal ${c}*${nz}
variable bz equal 2.9587*${nz}
variable bz equal 2.9587*11
# =======================================================================
units metal
atom_style atomic
dimension 3
boundary p p p
lattice sc 1.0
Lattice spacing in x,y,z = 1 1 1
region box_vide prism 0 ${bx} 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 32.5457 0.0 0.0 0.0
create_box 2 box_vide
Created triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
1 by 2 by 2 MPI processor grid
#lattice sc 1.0
#region box_TiO2 block 0 ${bx} 0 ${by} 0 ${bz}
# titanium atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 2 region box_vide
Created 792 atoms
using lattice units in triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
create_atoms CPU = 0.000 seconds
# Oxygen atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 1 region box_vide
Created 1584 atoms
using lattice units in triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
create_atoms CPU = 0.000 seconds
mass 1 16.00
group Oxy type 1
1584 atoms in group Oxy
mass 2 47.867
group Ti type 2
792 atoms in group Ti
velocity all create ${T_depart} 277387
velocity all create 300 277387
pair_style meam/spline
pair_coeff * * TiO.meam.spline O Ti
Reading meam/spline potential file TiO.meam.spline with DATE: 2016-06-05
neighbor 0.5 bin
neigh_modify every 2 delay 0 check yes
timestep ${dt}
timestep 0.0002
thermo_style custom step temp press pe ke etotal lx ly lz vol
thermo 10
#dump 5 all custom 500 boxAlpha_alumina.lammpstrj id type q x y z
fix 3 all nve
run 100
Neighbor list info ...
update: every = 2 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6
ghost atom cutoff = 6
binsize = 3, bins = 10 10 11
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.934 | 3.934 | 3.934 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 300 22403.656 -14374.073 92.097853 -14281.975 27.5622 27.5622 32.5457 24724.15
10 301.16725 23582.084 -14374.431 92.456192 -14281.975 27.5622 27.5622 32.5457 24724.15
20 304.58237 25059.749 -14375.479 93.504609 -14281.974 27.5622 27.5622 32.5457 24724.15
30 312.41477 26504.358 -14377.883 95.9091 -14281.974 27.5622 27.5622 32.5457 24724.15
40 327.67099 26687.057 -14382.566 100.59265 -14281.974 27.5622 27.5622 32.5457 24724.15
50 352.32125 22677.292 -14390.134 108.1601 -14281.974 27.5622 27.5622 32.5457 24724.15
60 388.40592 12472.705 -14401.211 119.23784 -14281.973 27.5622 27.5622 32.5457 24724.15
70 439.97199 -1520.4694 -14417.04 135.06825 -14281.972 27.5622 27.5622 32.5457 24724.15
80 513.34361 -16733.316 -14439.564 157.59282 -14281.971 27.5622 27.5622 32.5457 24724.15
90 613.3542 -31099.591 -14470.267 188.29535 -14281.971 27.5622 27.5622 32.5457 24724.15
100 741.02836 -42358.226 -14509.464 227.4904 -14281.973 27.5622 27.5622 32.5457 24724.15
Loop time of 6.2168 on 4 procs for 100 steps with 2376 atoms
Performance: 0.278 ns/day, 86.344 hours/ns, 16.085 timesteps/s, 38.219 katom-step/s
99.7% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 6.1958 | 6.2012 | 6.2089 | 0.2 | 99.75
Neigh | 0.0021079 | 0.0021422 | 0.0021639 | 0.0 | 0.03
Comm | 0.0038617 | 0.011586 | 0.017042 | 4.9 | 0.19
Output | 0.00027671 | 0.00029064 | 0.00032183 | 0.0 | 0.00
Modify | 0.00078288 | 0.0008221 | 0.00085066 | 0.0 | 0.01
Other | | 0.0007406 | | | 0.01
Nlocal: 594 ave 599 max 589 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Nghost: 2290.25 ave 2296 max 2282 min
Histogram: 1 0 0 0 0 1 0 0 1 1
Neighs: 26671.5 ave 26934 max 26495 min
Histogram: 1 0 0 2 0 0 0 0 0 1
FullNghs: 53343 ave 53828 max 52922 min
Histogram: 1 0 0 0 2 0 0 0 0 1
Total # of neighbors = 213372
Ave neighs/atom = 89.80303
Neighbor list builds = 1
Dangerous builds = 0
unfix 3
fix 1 all box/relax tri 0.0 vmax 0.001
minimize 1.0e-3 1.0e-5 1000 10000
Switching to 'neigh_modify every 1 delay 0 check yes' setting during minimization
Per MPI rank memory allocation (min/avg/max) = 5.059 | 5.059 | 5.059 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
100 741.02836 -42358.226 -14509.464 227.4904 -14281.973 27.5622 27.5622 32.5457 24724.15
101 741.02836 -39686.588 -14515.398 227.4904 -14287.907 27.569587 27.569656 32.513154 24712.729
Loop time of 0.129231 on 4 procs for 1 steps with 2376 atoms
99.7% CPU use with 4 MPI tasks x 1 OpenMP threads
Minimization stats:
Stopping criterion = energy tolerance
Energy initial, next-to-last, final =
-14509.4635100091 -14509.4635100091 -14515.3978891321
Force two-norm initial, final = 5602.6938 5487.7658
Force max component initial, final = 5235.2654 5113.0611
Final line search alpha, max atom move = 1.9101228e-07 0.00097665746
Iterations, force evaluations = 1 1
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.12891 | 0.12896 | 0.129 | 0.0 | 99.79
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 5.5406e-05 | 9.5992e-05 | 0.00015051 | 0.0 | 0.07
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0.0001737 | | | 0.13
Nlocal: 594 ave 601 max 586 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Nghost: 2263.25 ave 2271 max 2251 min
Histogram: 1 0 0 0 0 0 1 0 1 1
Neighs: 26425.8 ave 26807 max 26121 min
Histogram: 1 0 0 1 1 0 0 0 0 1
FullNghs: 52851.5 ave 53580 max 52175 min
Histogram: 1 0 0 0 2 0 0 0 0 1
Total # of neighbors = 211406
Ave neighs/atom = 88.975589
Neighbor list builds = 0
Dangerous builds = 0
unfix 1
reset_timestep 0
thermo 50
fix 3 all npt temp 300 300 0.1 aniso 1.0 1.0 1.0
run 500
Per MPI rank memory allocation (min/avg/max) = 3.95 | 3.95 | 3.95 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 741.02836 -39686.588 -14515.398 227.4904 -14287.907 27.569587 27.569656 32.513154 24712.729
50 1157.3495 29319.762 -14679.318 355.29803 -14324.02 27.609057 27.60935 32.375563 24678.86
100 777.56728 101869.39 -14674.833 238.70759 -14436.125 27.778509 27.77736 32.017401 24705.064
150 945.51255 -18319.494 -14806.675 290.26559 -14516.409 27.998296 27.995331 31.670366 24823.916
200 427.47153 -4045.9984 -14674.872 131.2307 -14543.641 28.130223 28.127085 31.431723 24869.445
250 362.817 -7274.2701 -14669.054 111.38222 -14557.672 28.225123 28.222595 31.238594 24884.233
300 626.29209 7236.0808 -14760.119 192.26719 -14567.852 28.302278 28.299838 31.070157 24885.639
350 859.86407 30087.808 -14845.065 263.97212 -14581.093 28.372301 28.369278 30.934494 24899.226
400 755.2581 54745.968 -14830.701 231.85883 -14598.842 28.450314 28.448368 30.836162 24957.71
450 802.52878 5682.9998 -14860.196 246.37059 -14613.826 28.542362 28.541716 30.773281 25069.392
500 631.84048 -31484.881 -14816.098 193.97051 -14622.127 28.605943 28.605973 30.737856 25152.813
Loop time of 27.3207 on 4 procs for 500 steps with 2376 atoms
Performance: 0.316 ns/day, 75.891 hours/ns, 18.301 timesteps/s, 43.484 katom-step/s
99.8% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 27.21 | 27.221 | 27.237 | 0.2 | 99.64
Neigh | 0.036501 | 0.036849 | 0.037083 | 0.1 | 0.13
Comm | 0.02089 | 0.036492 | 0.047866 | 5.1 | 0.13
Output | 0.00023096 | 0.00024391 | 0.00027788 | 0.0 | 0.00
Modify | 0.022565 | 0.022706 | 0.022764 | 0.1 | 0.08
Other | | 0.003102 | | | 0.01
Nlocal: 594 ave 606 max 582 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Nghost: 2226 ave 2238 max 2214 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Neighs: 25652.8 ave 26129 max 25153 min
Histogram: 1 0 0 0 1 1 0 0 0 1
FullNghs: 51305.5 ave 52398 max 50251 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Total # of neighbors = 205222
Ave neighs/atom = 86.372896
Neighbor list builds = 16
Dangerous builds = 0
Total wall time: 0:00:33

View File

@ -1,88 +0,0 @@
LAMMPS (13 Apr 2017)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 3.98
Lattice spacing in x,y,z = 3.98 3.98 3.98
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (19.9 19.9 19.9)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 500 atoms
pair_style meam/spline
pair_coeff * * Si_1.meam.spline Si
Reading potential file Si_1.meam.spline with DATE: 2012-02-01
mass * 28.085
velocity all create 500.0 44226611
fix 1 all nvt temp 500.0 500.0 1.0
thermo 50
run 500
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.5
ghost atom cutoff = 6.5
binsize = 3.25, bins = 7 7 7
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.892 | 3.892 | 3.892 Mbytes
Step Temp E_pair E_mol TotEng Press
0 500 -1847.729 0 -1815.4786 1813162.7
50 1934.0932 -1940.8016 0 -1816.051 -48657.676
100 2570.1286 -1984.8725 0 -1819.0971 8002.4248
150 2566.7917 -1990.2724 0 -1824.7123 16819.447
200 2555.1319 -1995.2233 0 -1830.4152 5891.5313
250 2487.2881 -1995.8302 0 -1835.3981 -4339.7172
300 2381.4836 -1994.2492 0 -1840.6415 16508.04
350 2330.8663 -1996.6588 0 -1846.3161 24194.447
400 2212.6035 -1994.9278 0 -1852.2131 -9856.3709
450 2257.7531 -2003.8187 0 -1858.1918 -8029.6019
500 2211.4385 -2006.9846 0 -1864.345 4152.4867
Loop time of 5.13837 on 1 procs for 500 steps with 500 atoms
Performance: 8.407 ns/day, 2.855 hours/ns, 97.307 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 5.0952 | 5.0952 | 5.0952 | 0.0 | 99.16
Neigh | 0.026447 | 0.026447 | 0.026447 | 0.0 | 0.51
Comm | 0.0063307 | 0.0063307 | 0.0063307 | 0.0 | 0.12
Output | 0.0001905 | 0.0001905 | 0.0001905 | 0.0 | 0.00
Modify | 0.0082877 | 0.0082877 | 0.0082877 | 0.0 | 0.16
Other | | 0.00187 | | | 0.04
Nlocal: 500 ave 500 max 500 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1767 ave 1767 max 1767 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 18059 ave 18059 max 18059 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 36118 ave 36118 max 36118 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 36118
Ave neighs/atom = 72.236
Neighbor list builds = 14
Dangerous builds = 0
Total wall time: 0:00:05

View File

@ -1,88 +0,0 @@
LAMMPS (13 Apr 2017)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 3.98
Lattice spacing in x,y,z = 3.98 3.98 3.98
region box block 0 5 0 5 0 5
create_box 1 box
Created orthogonal box = (0 0 0) to (19.9 19.9 19.9)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 500 atoms
pair_style meam/spline
pair_coeff * * Si_1.meam.spline Si
Reading potential file Si_1.meam.spline with DATE: 2012-02-01
mass * 28.085
velocity all create 500.0 44226611
fix 1 all nvt temp 500.0 500.0 1.0
thermo 50
run 500
Neighbor list info ...
update every 1 steps, delay 10 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.5
ghost atom cutoff = 6.5
binsize = 3.25, bins = 7 7 7
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.861 | 3.861 | 3.861 Mbytes
Step Temp E_pair E_mol TotEng Press
0 500 -1847.729 0 -1815.4786 1813162.7
50 1923.4262 -1940.0936 0 -1816.0311 -38700.835
100 2535.2542 -1982.6249 0 -1819.0989 10216.821
150 2592.8247 -1992.1569 0 -1824.9176 4839.3385
200 2484.7391 -1990.8452 0 -1830.5775 14040.141
250 2597.4401 -2003.1619 0 -1835.625 1261.5199
300 2513.0793 -2002.942 0 -1840.8463 6690.9815
350 2390.933 -2001.0761 0 -1846.859 -4880.1146
400 2269.0782 -1999.3441 0 -1852.9867 -4921.4391
450 2287.5096 -2006.8236 0 -1859.2774 -7313.6151
500 2303.0918 -2014.0693 0 -1865.518 -9995.1789
Loop time of 1.46588 on 4 procs for 500 steps with 500 atoms
Performance: 29.470 ns/day, 0.814 hours/ns, 341.093 timesteps/s
99.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 1.4273 | 1.4292 | 1.432 | 0.1 | 97.50
Neigh | 0.0068567 | 0.0070301 | 0.0073655 | 0.2 | 0.48
Comm | 0.019111 | 0.022127 | 0.024148 | 1.2 | 1.51
Output | 0.00023174 | 0.00024784 | 0.00029206 | 0.0 | 0.02
Modify | 0.005043 | 0.0052016 | 0.0054417 | 0.2 | 0.35
Other | | 0.002066 | | | 0.14
Nlocal: 125 ave 131 max 118 min
Histogram: 1 0 0 1 0 0 0 0 1 1
Nghost: 979.25 ave 986 max 975 min
Histogram: 1 1 0 1 0 0 0 0 0 1
Neighs: 4541.75 ave 4712 max 4362 min
Histogram: 1 1 0 0 0 0 0 0 0 2
FullNghs: 9083.5 ave 9485 max 8601 min
Histogram: 1 0 0 1 0 0 0 0 1 1
Total # of neighbors = 36334
Ave neighs/atom = 72.668
Neighbor list builds = 14
Dangerous builds = 0
Total wall time: 0:00:01

View File

@ -1,248 +0,0 @@
LAMMPS (13 Apr 2017)
using 1 OpenMP thread(s) per MPI task
#
variable T_depart equal 300
variable dt equal 0.0002
variable a equal 4.5937
variable c equal 2.9587
variable ca equal ${c}/${a}
variable ca equal 2.9587/${a}
variable ca equal 2.9587/4.5937
variable nx equal 6
variable ny equal 6
variable nz equal 11
variable bx equal ${a}*${nx}
variable bx equal 4.5937*${nx}
variable bx equal 4.5937*6
variable by equal ${a}*${ny}
variable by equal 4.5937*${ny}
variable by equal 4.5937*6
variable bz equal ${c}*${nz}
variable bz equal 2.9587*${nz}
variable bz equal 2.9587*11
# =======================================================================
units metal
atom_style atomic
dimension 3
boundary p p p
lattice sc 1.0
Lattice spacing in x,y,z = 1 1 1
region box_vide prism 0 ${bx} 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 32.5457 0.0 0.0 0.0
create_box 2 box_vide
Created triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
1 by 1 by 1 MPI processor grid
#lattice sc 1.0
#region box_TiO2 block 0 ${bx} 0 ${by} 0 ${bz}
# titanium atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 2 region box_vide
Created 792 atoms
# Oxygen atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 1 region box_vide
Created 1584 atoms
mass 1 16.00
group Oxy type 1
1584 atoms in group Oxy
mass 2 47.867
group Ti type 2
792 atoms in group Ti
velocity all create ${T_depart} 277387
velocity all create 300 277387
pair_style meam/spline
pair_coeff * * TiO.meam.spline O Ti
Reading potential file TiO.meam.spline with DATE: 2016-06-05
neighbor 0.5 bin
neigh_modify every 2 delay 0 check yes
timestep ${dt}
timestep 0.0002
thermo_style custom step temp press pe ke etotal lx ly lz vol
thermo 10
#dump 5 all custom 500 boxAlpha_alumina.lammpstrj id type q x y z
fix 3 all nve
run 100
Neighbor list info ...
update every 2 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6
ghost atom cutoff = 6
binsize = 3, bins = 10 10 11
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 5.146 | 5.146 | 5.146 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 300 22403.656 -14374.073 92.097853 -14281.975 27.5622 27.5622 32.5457 24724.15
10 301.41345 23612.297 -14374.507 92.531772 -14281.975 27.5622 27.5622 32.5457 24724.15
20 305.11674 25127.832 -14375.643 93.668657 -14281.974 27.5622 27.5622 32.5457 24724.15
30 313.28903 26655.89 -14378.151 96.17749 -14281.974 27.5622 27.5622 32.5457 24724.15
40 328.94567 26999.049 -14382.957 100.98397 -14281.974 27.5622 27.5622 32.5457 24724.15
50 354.05827 23023.294 -14390.667 108.69336 -14281.974 27.5622 27.5622 32.5457 24724.15
60 390.48404 13594.655 -14401.849 119.87581 -14281.973 27.5622 27.5622 32.5457 24724.15
70 442.69928 151.15709 -14417.877 135.90551 -14281.972 27.5622 27.5622 32.5457 24724.15
80 516.89551 -14984.124 -14440.654 158.68322 -14281.971 27.5622 27.5622 32.5457 24724.15
90 618.22135 -29948.066 -14471.76 189.78953 -14281.971 27.5622 27.5622 32.5457 24724.15
100 747.6193 -41964.291 -14511.487 229.51378 -14281.973 27.5622 27.5622 32.5457 24724.15
Loop time of 38.7948 on 1 procs for 100 steps with 2376 atoms
Performance: 0.045 ns/day, 538.817 hours/ns, 2.578 timesteps/s
99.7% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 38.774 | 38.774 | 38.774 | 0.0 | 99.95
Neigh | 0.010751 | 0.010751 | 0.010751 | 0.0 | 0.03
Comm | 0.0039313 | 0.0039313 | 0.0039313 | 0.0 | 0.01
Output | 0.00048804 | 0.00048804 | 0.00048804 | 0.0 | 0.00
Modify | 0.0039241 | 0.0039241 | 0.0039241 | 0.0 | 0.01
Other | | 0.001809 | | | 0.00
Nlocal: 2376 ave 2376 max 2376 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 4479 ave 4479 max 4479 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 106396 ave 106396 max 106396 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 212792 ave 212792 max 212792 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 212792
Ave neighs/atom = 89.5589
Neighbor list builds = 1
Dangerous builds = 0
unfix 3
fix 1 all box/relax tri 0.0 vmax 0.001
minimize 1.0e-3 1.0e-5 1000 10000
WARNING: Resetting reneighboring criteria during minimization (../min.cpp:168)
Per MPI rank memory allocation (min/avg/max) = 6.271 | 6.271 | 6.271 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
100 747.6193 -41964.291 -14511.487 229.51378 -14281.973 27.5622 27.5622 32.5457 24724.15
101 747.6193 -39284.65 -14517.424 229.51378 -14287.91 27.569615 27.569695 32.513154 24712.789
Loop time of 0.814693 on 1 procs for 1 steps with 2376 atoms
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
Minimization stats:
Stopping criterion = energy tolerance
Energy initial, next-to-last, final =
-14511.4866189 -14511.4866189 -14517.4235162
Force two-norm initial, final = 5602.25 5486.97
Force max component initial, final = 5232.05 5109.43
Final line search alpha, max atom move = 1.9113e-07 0.000976563
Iterations, force evaluations = 1 1
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.81429 | 0.81429 | 0.81429 | 0.0 | 99.95
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 6.485e-05 | 6.485e-05 | 6.485e-05 | 0.0 | 0.01
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0.0003347 | | | 0.04
Nlocal: 2376 ave 2376 max 2376 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 4449 ave 4449 max 4449 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 105639 ave 105639 max 105639 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 211278 ave 211278 max 211278 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 211278
Ave neighs/atom = 88.9217
Neighbor list builds = 0
Dangerous builds = 0
unfix 1
reset_timestep 0
thermo 50
fix 3 all npt temp 300 300 0.1 aniso 1.0 1.0 1.0
run 500
Per MPI rank memory allocation (min/avg/max) = 5.162 | 5.162 | 5.162 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 747.6193 -39284.65 -14517.424 229.51378 -14287.91 27.569615 27.569695 32.513154 24712.789
50 1155.2849 30650.319 -14678.807 354.6642 -14324.143 27.608688 27.60914 32.375311 24678.15
100 790.03926 99869.991 -14678.858 242.5364 -14436.322 27.777994 27.77799 32.017001 24704.857
150 938.86463 -21488.442 -14803.782 288.22472 -14515.557 27.996584 27.995139 31.67008 24822.003
200 420.11331 -790.80799 -14671.687 128.97178 -14542.715 28.126911 28.125909 31.431033 24864.93
250 352.18149 -3244.2491 -14665.007 108.1172 -14556.889 28.222686 28.223673 31.238649 24883.078
300 622.91245 3657.7097 -14758.201 191.22967 -14566.972 28.301771 28.30503 31.07216 24891.363
350 888.25374 26274.358 -14852.568 272.68754 -14579.881 28.370312 28.375107 30.937051 24904.656
400 735.44163 63109.066 -14823.872 225.77532 -14598.097 28.446905 28.45227 30.838015 24959.642
450 804.81905 6221.0364 -14861.113 247.07369 -14614.039 28.543942 28.548719 30.775793 25078.977
500 628.19106 -33912.026 -14814.726 192.85016 -14621.876 28.611997 28.615169 30.74081 25168.642
Loop time of 176.167 on 1 procs for 500 steps with 2376 atoms
Performance: 0.049 ns/day, 489.353 hours/ns, 2.838 timesteps/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 175.9 | 175.9 | 175.9 | 0.0 | 99.85
Neigh | 0.17043 | 0.17043 | 0.17043 | 0.0 | 0.10
Comm | 0.018243 | 0.018243 | 0.018243 | 0.0 | 0.01
Output | 0.00040984 | 0.00040984 | 0.00040984 | 0.0 | 0.00
Modify | 0.067142 | 0.067142 | 0.067142 | 0.0 | 0.04
Other | | 0.00828 | | | 0.00
Nlocal: 2376 ave 2376 max 2376 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 4358 ave 4358 max 4358 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 102634 ave 102634 max 102634 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 205268 ave 205268 max 205268 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 205268
Ave neighs/atom = 86.3923
Neighbor list builds = 16
Dangerous builds = 0
Total wall time: 0:03:37

View File

@ -1,248 +0,0 @@
LAMMPS (13 Apr 2017)
using 1 OpenMP thread(s) per MPI task
#
variable T_depart equal 300
variable dt equal 0.0002
variable a equal 4.5937
variable c equal 2.9587
variable ca equal ${c}/${a}
variable ca equal 2.9587/${a}
variable ca equal 2.9587/4.5937
variable nx equal 6
variable ny equal 6
variable nz equal 11
variable bx equal ${a}*${nx}
variable bx equal 4.5937*${nx}
variable bx equal 4.5937*6
variable by equal ${a}*${ny}
variable by equal 4.5937*${ny}
variable by equal 4.5937*6
variable bz equal ${c}*${nz}
variable bz equal 2.9587*${nz}
variable bz equal 2.9587*11
# =======================================================================
units metal
atom_style atomic
dimension 3
boundary p p p
lattice sc 1.0
Lattice spacing in x,y,z = 1 1 1
region box_vide prism 0 ${bx} 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 ${by} 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 ${bz} 0.0 0.0 0.0
region box_vide prism 0 27.5622 0 27.5622 0 32.5457 0.0 0.0 0.0
create_box 2 box_vide
Created triclinic box = (0 0 0) to (27.5622 27.5622 32.5457) with tilt (0 0 0)
1 by 2 by 2 MPI processor grid
#lattice sc 1.0
#region box_TiO2 block 0 ${bx} 0 ${by} 0 ${bz}
# titanium atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.0 0.0 0.0 basis 0.5 0.5 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 2 region box_vide
Created 792 atoms
# Oxygen atoms
lattice custom ${a} origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 ${ca} basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
lattice custom 4.5937 origin 0.0 0.0 0.0 orient x 1 0 0 orient y 0 1 0 orient z 0 0 1 a1 1.0 0.0 0.0 a2 0.0 1.0 0.0 a3 0.0 0.0 0.644077758669482 basis 0.30478 0.30478 0.0 basis 0.69522 0.69522 0.0 basis 0.19522 0.80478 0.5 basis 0.80478 0.19522 0.5
Lattice spacing in x,y,z = 4.5937 4.5937 2.9587
create_atoms 1 region box_vide
Created 1584 atoms
mass 1 16.00
group Oxy type 1
1584 atoms in group Oxy
mass 2 47.867
group Ti type 2
792 atoms in group Ti
velocity all create ${T_depart} 277387
velocity all create 300 277387
pair_style meam/spline
pair_coeff * * TiO.meam.spline O Ti
Reading potential file TiO.meam.spline with DATE: 2016-06-05
neighbor 0.5 bin
neigh_modify every 2 delay 0 check yes
timestep ${dt}
timestep 0.0002
thermo_style custom step temp press pe ke etotal lx ly lz vol
thermo 10
#dump 5 all custom 500 boxAlpha_alumina.lammpstrj id type q x y z
fix 3 all nve
run 100
Neighbor list info ...
update every 2 steps, delay 0 steps, check yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6
ghost atom cutoff = 6
binsize = 3, bins = 10 10 11
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.922 | 3.922 | 3.922 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 300 22403.656 -14374.073 92.097853 -14281.975 27.5622 27.5622 32.5457 24724.15
10 301.16725 23582.084 -14374.431 92.456192 -14281.975 27.5622 27.5622 32.5457 24724.15
20 304.58237 25059.749 -14375.479 93.504609 -14281.974 27.5622 27.5622 32.5457 24724.15
30 312.41477 26504.358 -14377.883 95.9091 -14281.974 27.5622 27.5622 32.5457 24724.15
40 327.67099 26687.057 -14382.566 100.59265 -14281.974 27.5622 27.5622 32.5457 24724.15
50 352.32125 22677.292 -14390.134 108.1601 -14281.974 27.5622 27.5622 32.5457 24724.15
60 388.40592 12472.705 -14401.211 119.23784 -14281.973 27.5622 27.5622 32.5457 24724.15
70 439.97199 -1520.4694 -14417.04 135.06825 -14281.972 27.5622 27.5622 32.5457 24724.15
80 513.34361 -16733.316 -14439.564 157.59282 -14281.971 27.5622 27.5622 32.5457 24724.15
90 613.3542 -31099.591 -14470.267 188.29535 -14281.971 27.5622 27.5622 32.5457 24724.15
100 741.02836 -42358.226 -14509.464 227.4904 -14281.973 27.5622 27.5622 32.5457 24724.15
Loop time of 8.92317 on 4 procs for 100 steps with 2376 atoms
Performance: 0.194 ns/day, 123.933 hours/ns, 11.207 timesteps/s
99.5% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 8.8912 | 8.9 | 8.9064 | 0.2 | 99.74
Neigh | 0.0027034 | 0.0028808 | 0.0032032 | 0.4 | 0.03
Comm | 0.010964 | 0.017648 | 0.026568 | 5.0 | 0.20
Output | 0.00037575 | 0.00047809 | 0.00053835 | 0.0 | 0.01
Modify | 0.00099134 | 0.001001 | 0.0010085 | 0.0 | 0.01
Other | | 0.001162 | | | 0.01
Nlocal: 594 ave 599 max 589 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Nghost: 2290.25 ave 2296 max 2282 min
Histogram: 1 0 0 0 1 0 0 0 1 1
Neighs: 26671.5 ave 26934 max 26495 min
Histogram: 1 0 0 2 0 0 0 0 0 1
FullNghs: 53343 ave 53828 max 52922 min
Histogram: 1 0 0 0 2 0 0 0 0 1
Total # of neighbors = 213372
Ave neighs/atom = 89.803
Neighbor list builds = 1
Dangerous builds = 0
unfix 3
fix 1 all box/relax tri 0.0 vmax 0.001
minimize 1.0e-3 1.0e-5 1000 10000
WARNING: Resetting reneighboring criteria during minimization (../min.cpp:168)
Per MPI rank memory allocation (min/avg/max) = 5.047 | 5.047 | 5.047 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
100 741.02836 -42358.226 -14509.464 227.4904 -14281.973 27.5622 27.5622 32.5457 24724.15
101 741.02836 -39686.588 -14515.398 227.4904 -14287.907 27.569587 27.569656 32.513154 24712.729
Loop time of 0.193516 on 4 procs for 1 steps with 2376 atoms
99.5% CPU use with 4 MPI tasks x 1 OpenMP threads
Minimization stats:
Stopping criterion = energy tolerance
Energy initial, next-to-last, final =
-14509.46351 -14509.46351 -14515.3978891
Force two-norm initial, final = 5602.69 5487.77
Force max component initial, final = 5235.27 5113.06
Final line search alpha, max atom move = 1.91012e-07 0.000976657
Iterations, force evaluations = 1 1
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.19287 | 0.19299 | 0.19318 | 0.0 | 99.73
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.00014043 | 0.00033247 | 0.00045896 | 0.0 | 0.17
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0.0001886 | | | 0.10
Nlocal: 594 ave 601 max 586 min
Histogram: 1 0 0 0 0 1 1 0 0 1
Nghost: 2263.25 ave 2271 max 2251 min
Histogram: 1 0 0 0 0 0 1 0 1 1
Neighs: 26425.8 ave 26807 max 26121 min
Histogram: 1 0 0 1 1 0 0 0 0 1
FullNghs: 52851.5 ave 53580 max 52175 min
Histogram: 1 0 0 0 2 0 0 0 0 1
Total # of neighbors = 211406
Ave neighs/atom = 88.9756
Neighbor list builds = 0
Dangerous builds = 0
unfix 1
reset_timestep 0
thermo 50
fix 3 all npt temp 300 300 0.1 aniso 1.0 1.0 1.0
run 500
Per MPI rank memory allocation (min/avg/max) = 3.937 | 3.937 | 3.937 Mbytes
Step Temp Press PotEng KinEng TotEng Lx Ly Lz Volume
0 741.02836 -39686.588 -14515.398 227.4904 -14287.907 27.569587 27.569656 32.513154 24712.729
50 1157.347 29332.549 -14679.321 355.29725 -14324.024 27.60903 27.609325 32.375509 24678.772
100 777.55858 101883.12 -14674.854 238.70492 -14436.149 27.778518 27.777373 32.017262 24704.976
150 945.49014 -18305.383 -14806.687 290.25871 -14516.428 27.998313 27.99535 31.670225 24823.838
200 427.46608 -4045.0095 -14674.887 131.22903 -14543.658 28.130283 28.127147 31.431578 24869.438
250 362.82166 -7283.1332 -14669.07 111.38365 -14557.687 28.225232 28.222707 31.238451 24884.314
300 626.2858 7228.0309 -14760.128 192.26526 -14567.862 28.302384 28.299949 31.070038 24885.734
350 859.84293 30084.735 -14845.064 263.96563 -14581.099 28.372349 28.369334 30.934424 24899.261
400 755.26136 54745.408 -14830.701 231.85983 -14598.842 28.450301 28.448361 30.836159 24957.691
450 802.52344 5690.2863 -14860.193 246.36895 -14613.824 28.542311 28.541672 30.773339 25069.354
500 631.84734 -31473.795 -14816.101 193.97261 -14622.128 28.605857 28.605891 30.737955 25152.746
Loop time of 39.7881 on 4 procs for 500 steps with 2376 atoms
Performance: 0.217 ns/day, 110.522 hours/ns, 12.567 timesteps/s
99.4% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 39.617 | 39.633 | 39.653 | 0.2 | 99.61
Neigh | 0.043624 | 0.046792 | 0.051708 | 1.4 | 0.12
Comm | 0.05215 | 0.072616 | 0.092142 | 5.6 | 0.18
Output | 0.00042915 | 0.00045079 | 0.00051546 | 0.0 | 0.00
Modify | 0.029836 | 0.030341 | 0.03094 | 0.2 | 0.08
Other | | 0.004489 | | | 0.01
Nlocal: 594 ave 606 max 582 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Nghost: 2226 ave 2238 max 2214 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Neighs: 25652.8 ave 26129 max 25153 min
Histogram: 1 0 0 0 1 1 0 0 0 1
FullNghs: 51305.5 ave 52398 max 50251 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Total # of neighbors = 205222
Ave neighs/atom = 86.3729
Neighbor list builds = 16
Dangerous builds = 0
Total wall time: 0:00:49

View File

@ -0,0 +1 @@
../../../../potentials/Si.b.meam.sw.spline

View File

@ -1,33 +0,0 @@
# Si bcc
units metal
boundary p p p
atom_style atomic
lattice bcc 3.245
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (2*vol/atoms)^0.333333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.37 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 3.245 A"
print "===================================================="
#dump 1 all custom 1 bcc.dump id type x y z fx fy fz
#run 0

View File

@ -1,37 +0,0 @@
# Si bcc
units metal
boundary p p p
atom_style atomic
lattice bcc 3.2488516
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (2*vol/atoms)^0.333333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.37 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 3.238 A"
print "===================================================="
#dump 1 all custom 1 bcc.dump id type x y z fx fy fz
#run 0

View File

@ -1,34 +0,0 @@
# Si dc phase
units metal
boundary p p p
atom_style atomic
lattice diamond 5.431
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (8*vol/atoms)^0.33333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.63 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 5.431 A"
print "===================================================="
#dump 1 all custom 1 dc.dump id type x y z fx fy fz
#run 0

View File

@ -1,34 +0,0 @@
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.147
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.288 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 4.147 A"
print "===================================================="
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0

View File

@ -1,38 +0,0 @@
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.309793856093661
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.289 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 4.137 A"
print "===================================================="
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0

View File

@ -1,40 +0,0 @@
# Si hcp
units metal
boundary p p p
atom_style atomic
#lattice custom 2.93093 a1 0.5 -0.866025 0 a2 0.5 0.866025 0 a3 0 0 1.596 basis 0.333333 0.666666 0.25 basis 0.666666 0.333333 0.75
lattice custom 2.93093 a1 0.5 -0.866025 0 a2 0.5 0.866025 0 a3 0 0 1.7 basis 0.333333 0.666666 0.25 basis 0.666666 0.333333 0.75
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable lattice_parameter equal lx
variable c_to_a equal lz/lx
variable atmVol equal vol/atoms
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.290 eV/atom"
print "Calculated lattice parameter: ${lattice_parameter} A"
print "Reference lattice parameter: 2.925 A"
print "Atomic volume ${atmVol} A^3"
print "c/a ratio: ${c_to_a}"
print "Reference c/a ratio: 1.633"
print "===================================================="
#dump 1 all custom 1 hcp.dump id type x y z fx fy fz
#run 0

View File

@ -0,0 +1,33 @@
# Si bcc
units metal
boundary p p p
atom_style atomic
lattice bcc 3.245
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (2*vol/atoms)^0.333333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.37 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 3.245 A"
print "===================================================="
#dump 1 all custom 1 bcc.dump id type x y z fx fy fz
#run 0
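
The aLatt expressions in these example inputs recover the conventional lattice constant from the per-atom volume, with a prefactor equal to the number of atoms in the conventional cell (a short worked relation for the expressions the scripts use):

    a = (n_cell * vol/atoms)^(1/3),   n_cell = 2 (bcc), 4 (fcc), 8 (diamond), 1 (sc)

For the bcc cell above, vol/atoms = 3.245^3 / 2 ≈ 17.085 A^3, and (2 * 17.085)^(1/3) = 3.245 A, which is the value the print statements compare against.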

View File

@ -0,0 +1,37 @@
# Si bcc
units metal
boundary p p p
atom_style atomic
lattice bcc 3.2488516
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (2*vol/atoms)^0.333333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.37 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 3.238 A"
print "===================================================="
#dump 1 all custom 1 bcc.dump id type x y z fx fy fz
#run 0

View File

@ -0,0 +1,34 @@
# Si dc phase
units metal
boundary p p p
atom_style atomic
lattice diamond 5.431
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (8*vol/atoms)^0.33333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.63 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 5.431 A"
print "===================================================="
#dump 1 all custom 1 dc.dump id type x y z fx fy fz
#run 0

View File

@ -10,7 +10,7 @@ create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0

View File

@ -10,7 +10,7 @@ atom_style atomic
atom_style atomic
lattice diamond 5.431
region box block 0 20 0 20 0 20
region box block 0 5 0 5 0 5
boundary p p p
create_box 1 box
@ -18,7 +18,7 @@ create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * ../../potentials/Si.b.meam.sw.spline Si
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
velocity all create 300.0 376847 loop geom
@ -41,5 +41,5 @@ thermo_modify format 7 %14.8f
timestep 0.002
thermo 10
run 20000
run 2000

View File

@ -0,0 +1,34 @@
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.147
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.288 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 4.147 A"
print "===================================================="
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0

View File

@ -0,0 +1,38 @@
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.309793856093661
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.289 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 4.137 A"
print "===================================================="
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0

View File

@ -0,0 +1,40 @@
# Si hcp
units metal
boundary p p p
atom_style atomic
#lattice custom 2.93093 a1 0.5 -0.866025 0 a2 0.5 0.866025 0 a3 0 0 1.596 basis 0.333333 0.666666 0.25 basis 0.666666 0.333333 0.75
lattice custom 2.93093 a1 0.5 -0.866025 0 a2 0.5 0.866025 0 a3 0 0 1.7 basis 0.333333 0.666666 0.25 basis 0.666666 0.333333 0.75
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable lattice_parameter equal lx
variable c_to_a equal lz/lx
variable atmVol equal vol/atoms
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.352 eV/atom"
print "Calculated lattice parameter: ${lattice_parameter} A"
print "Reference lattice parameter: 2.736 A"
print "Atomic volume ${atmVol} A^3"
print "c/a ratio: ${c_to_a}"
print "Reference c/a ratio: 1.633"
print "===================================================="
#dump 1 all custom 1 hcp.dump id type x y z fx fy fz
#run 0
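
The reference c/a of 1.633 printed above is the ideal hcp ratio,

    (c/a)_ideal = sqrt(8/3) ≈ 1.6330,

and the a3 z-component of 1.7 in the custom lattice appears to be only a starting value that fix box/relax then relaxes toward the equilibrium cell.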

View File

@ -0,0 +1,34 @@
# Si sc phase
units metal
boundary p p p
atom_style atomic
lattice sc 2.612
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (vol/atoms)^0.3333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.337 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 2.612 A"
print "===================================================="
#dump 1 all custom 1 sc.dump id type x y z fx fy fz
#run 0

View File

@ -0,0 +1,38 @@
# Si sc phase
units metal
boundary p p p
atom_style atomic
lattice sc 2.612
region box block 0 1 0 1 0 1
create_box 1 box
create_atoms 1 box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (vol/atoms)^0.3333333333
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: -4.337 eV/atom"
print "Atomic volume ${atmVol} A^3"
print "Lattice constant ${aLatt} A"
print "Reference lattice constant 2.612 A"
print "===================================================="
#dump 1 all custom 1 sc.dump id type x y z fx fy fz
#run 0

View File

@ -0,0 +1,27 @@
# Si single atom in vacuum
units metal
boundary f f f
atom_style atomic
region box block -100 100 -100 100 -100 100 units box
create_box 1 box
create_atoms 1 single 0 0 0 units box
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
mass * 28.085
variable cohesive_energy equal pe/atoms
run 0
print "===================================================="
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
print "Reference cohesive energy: 0 eV/atom"
print "===================================================="
#dump 1 all custom 1 single_atom.dump id type x y z fx fy fz
#run 0
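
This single-atom case pins down the energy zero of the potential: with the cohesive energy taken per atom relative to isolated atoms,

    E_coh = E_total/N - E_isolated,

an isolated atom should give pe/atoms = 0, which is presumably why the reference value printed above is 0 eV/atom.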

View File

@ -0,0 +1,99 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si bcc
units metal
boundary p p p
atom_style atomic
lattice bcc 3.245
Lattice spacing in x,y,z = 3.245 3.245 3.245
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (3.245 3.245 3.245)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 2 atoms
using lattice units in orthogonal box = (0 0 0) to (3.245 3.245 3.245)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (2*vol/atoms)^0.333333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.714 | 3.714 | 3.714 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -8.7453652 0 -8.7453652 -50884.003
Loop time of 1.393e-06 on 1 procs for 0 steps with 2 atoms
71.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.393e-06 | | |100.00
Nlocal: 2 ave 2 max 2 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 187 ave 187 max 187 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 58 ave 58 max 58 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 116 ave 116 max 116 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 116
Ave neighs/atom = 58
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.37268261764397 eV/atom
print "Reference cohesive energy: -4.37 eV/atom"
Reference cohesive energy: -4.37 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 17.0849655625 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 3.24499999999618 A
print "Reference lattice constant 3.245 A"
Reference lattice constant 3.245 A
print "===================================================="
====================================================
#dump 1 all custom 1 bcc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00
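
The volume and lattice constant printed in this log are mutually consistent with the bcc cell built by the input; a minimal consistency check (an editorial Python sketch, not part of the example files):

# check_bcc.py: verify the values printed in the bcc log above
a_ref = 3.245                      # from "lattice bcc 3.245" (Angstrom)
vol_per_atom = a_ref**3 / 2        # 2 atoms per conventional bcc cell
print(vol_per_atom)                # ~17.0849655625, matching the printed ${atmVol}
print((2 * vol_per_atom)**(1/3))   # ~3.245; the truncated exponent 0.333333333333 in the
                                   # script is why the log shows 3.24499999999618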

View File

@ -0,0 +1,100 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si bcc
units metal
boundary p p p
atom_style atomic
lattice bcc 3.245
Lattice spacing in x,y,z = 3.245 3.245 3.245
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (3.245 3.245 3.245)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 2 atoms
using lattice units in orthogonal box = (0 0 0) to (3.245 3.245 3.245)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (2*vol/atoms)^0.333333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
WARNING: Proc sub-domain size < neighbor skin, could lead to lost atoms (src/domain.cpp:966)
Per MPI rank memory allocation (min/avg/max) = 3.745 | 3.745 | 3.745 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -8.7453652 0 -8.7453652 -50884.003
Loop time of 5.23625e-06 on 4 procs for 0 steps with 2 atoms
81.2% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 5.236e-06 | | |100.00
Nlocal: 0.5 ave 1 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Nghost: 143.5 ave 144 max 143 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Neighs: 14.5 ave 29 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 2
FullNghs: 29 ave 58 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 2
Total # of neighbors = 116
Ave neighs/atom = 58
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.37268261764397 eV/atom
print "Reference cohesive energy: -4.37 eV/atom"
Reference cohesive energy: -4.37 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 17.0849655625 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 3.24499999999618 A
print "Reference lattice constant 3.245 A"
Reference lattice constant 3.245 A
print "===================================================="
====================================================
#dump 1 all custom 1 bcc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,100 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si dc phase
units metal
boundary p p p
atom_style atomic
lattice diamond 5.431
Lattice spacing in x,y,z = 5.431 5.431 5.431
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (5.431 5.431 5.431)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 8 atoms
using lattice units in orthogonal box = (0 0 0) to (5.431 5.431 5.431)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (8*vol/atoms)^0.33333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.715 | 3.715 | 3.715 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -37.039999 0 -37.039999 -32.742245
Loop time of 1.31e-06 on 1 procs for 0 steps with 8 atoms
76.3% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.31e-06 | | |100.00
Nlocal: 8 ave 8 max 8 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 272 ave 272 max 272 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 184 ave 184 max 184 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 368 ave 368 max 368 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 368
Ave neighs/atom = 46
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.62999988298102 eV/atom
print "Reference cohesive energy: -4.63 eV/atom"
Reference cohesive energy: -4.63 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 20.023934748875 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 5.4309999999081 A
print "Reference lattice constant 5.431 A"
Reference lattice constant 5.431 A
print "===================================================="
====================================================
#dump 1 all custom 1 dc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00

View File

@ -0,0 +1,100 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si dc phase
units metal
boundary p p p
atom_style atomic
lattice diamond 5.431
Lattice spacing in x,y,z = 5.431 5.431 5.431
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (5.431 5.431 5.431)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 8 atoms
using lattice units in orthogonal box = (0 0 0) to (5.431 5.431 5.431)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (8*vol/atoms)^0.33333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.729 | 3.729 | 3.729 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -37.039999 0 -37.039999 -32.742245
Loop time of 5.486e-06 on 4 procs for 0 steps with 8 atoms
86.6% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 5.486e-06 | | |100.00
Nlocal: 2 ave 2 max 2 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 199 ave 199 max 199 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 46 ave 46 max 46 min
Histogram: 4 0 0 0 0 0 0 0 0 0
FullNghs: 92 ave 92 max 92 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 368
Ave neighs/atom = 46
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.62999988298102 eV/atom
print "Reference cohesive energy: -4.63 eV/atom"
Reference cohesive energy: -4.63 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 20.023934748875 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 5.4309999999081 A
print "Reference lattice constant 5.431 A"
Reference lattice constant 5.431 A
print "===================================================="
====================================================
#dump 1 all custom 1 dc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,304 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# bulk Si lattice
variable x index 1
variable y index 1
variable z index 1
units metal
atom_style atomic
atom_style atomic
lattice diamond 5.431
Lattice spacing in x,y,z = 5.431 5.431 5.431
region box block 0 5 0 5 0 5
boundary p p p
create_box 1 box
Created orthogonal box = (0 0 0) to (27.155 27.155 27.155)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 1000 atoms
using lattice units in orthogonal box = (0 0 0) to (27.155 27.155 27.155)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
velocity all create 300.0 376847 loop geom
neighbor 1.0 bin
neigh_modify every 1 delay 5 check yes
fix 1 all nve
thermo 1
thermo_style custom step vol etotal press pxx pyy pxz
thermo_modify format 2 %14.8f
thermo_modify format 3 %14.8f
thermo_modify format 4 %14.8f
thermo_modify format 5 %14.8f
thermo_modify format 6 %14.8f
thermo_modify format 7 %14.8f
timestep 0.002
thermo 10
run 2000
Neighbor list info ...
update: every = 1 steps, delay = 5 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5.4
ghost atom cutoff = 5.4
binsize = 2.7, bins = 11 11 11
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.882 | 3.882 | 3.882 Mbytes
Step Volume TotEng Press Pxx Pyy Pxz
0 20023.93474888 -4591.26061752 2033.68946971 2021.11445194 1952.17782797 -97.47450759
10 20023.93474888 -4591.06635027 2106.23969322 2152.04799431 2155.92194220 12.68865715
20 20023.93474888 -4591.08410781 1520.06153315 1492.61610264 1497.10285778 106.41609464
30 20023.93474888 -4591.20090599 1491.31281700 1502.41875599 1432.20332178 -220.78939631
40 20023.93474888 -4591.12407583 2061.65437359 2086.56902463 2156.12774891 133.73119636
50 20023.93474888 -4591.07086388 1998.55985680 2023.28426545 1940.19019547 -154.08048062
60 20023.93474888 -4591.17588927 1598.13531474 1534.49237825 1668.01401046 -30.47944201
70 20023.93474888 -4591.14658914 1774.80198893 1761.97777404 1785.46768303 265.88782944
80 20023.93474888 -4591.07344257 2005.07361554 2086.14231294 1947.20619069 -215.28622700
90 20023.93474888 -4591.16235879 1688.39497858 1707.61777360 1644.33608005 -164.72130224
100 20023.93474888 -4591.14061103 1841.94537985 1773.73213126 1872.51475381 270.59501062
110 20023.93474888 -4591.10220075 1929.22145308 1909.87270800 1904.88051163 -225.24314462
120 20023.93474888 -4591.13680660 1781.93134139 1866.38642644 1768.89564107 99.86322623
130 20023.93474888 -4591.14594757 1822.72100705 1800.47495717 1820.60642574 80.95885439
140 20023.93474888 -4591.10390676 1910.20395959 1955.77665062 1995.74598843 -90.83622328
150 20023.93474888 -4591.14245989 1806.86945717 1826.66971108 1792.88598455 109.51618275
160 20023.93474888 -4591.13052367 1788.16506016 1720.21348685 1791.47937216 -21.37756610
170 20023.93474888 -4591.11781009 1807.71365716 1900.29204203 1719.93993616 -188.87504172
180 20023.93474888 -4591.13704844 1743.08721343 1658.19411470 1778.16073808 259.52251154
190 20023.93474888 -4591.13308836 1922.54250327 2098.60870590 1929.55744997 -258.20774060
200 20023.93474888 -4591.11910322 1948.92068139 1792.61227410 1942.76349531 -13.04781205
210 20023.93474888 -4591.13284135 1783.61066563 1820.43796850 1760.07633603 159.07171405
220 20023.93474888 -4591.13715614 1662.49241020 1659.81600936 1681.59150854 -132.96912515
230 20023.93474888 -4591.11129339 1856.01542680 1850.81261755 1867.66185416 52.30767173
240 20023.93474888 -4591.13924088 1891.58063505 1873.70129244 1956.58093826 98.93914776
250 20023.93474888 -4591.13115836 1860.91219630 1879.89822419 1879.31544524 -206.02783594
260 20023.93474888 -4591.12006308 1818.08099684 1941.33545552 1695.61949198 135.96435310
270 20023.93474888 -4591.13331488 1745.21920087 1632.20932864 1841.75237705 -14.79711037
280 20023.93474888 -4591.12668399 1922.27473363 1983.53049663 1801.16867887 -31.89499092
290 20023.93474888 -4591.12976814 1897.55799948 1827.64752799 1980.45982095 2.76502468
300 20023.93474888 -4591.13041107 1825.92142175 1878.68168429 1802.25725197 -38.06898013
310 20023.93474888 -4591.12342070 1743.43519435 1729.85694627 1756.42228046 -60.06468142
320 20023.93474888 -4591.13442022 1763.60998541 1802.84007786 1758.08838840 143.69519620
330 20023.93474888 -4591.12695785 1845.27144767 1808.79851160 1834.73311592 -121.25298963
340 20023.93474888 -4591.12739037 1824.35911264 1832.04065906 1890.80248509 133.62517019
350 20023.93474888 -4591.12837932 1846.72083448 1835.30033595 1798.95928707 4.24639304
360 20023.93474888 -4591.13622705 1815.71663299 1841.77777220 1827.49780217 -143.29464941
370 20023.93474888 -4591.11509215 1875.67484263 1827.27976696 1896.21161847 45.76662873
380 20023.93474888 -4591.13911382 1797.59606691 1812.24811799 1794.04473228 -41.59509792
390 20023.93474888 -4591.13044234 1898.76664850 1956.19730294 1860.25003173 50.92492249
400 20023.93474888 -4591.11980093 1844.92592342 1784.40892834 1906.40469373 43.28606683
410 20023.93474888 -4591.13203164 1712.71494613 1814.06280558 1635.86741674 -103.93631537
420 20023.93474888 -4591.13931019 1733.36385225 1744.49160982 1807.86989588 133.74823990
430 20023.93474888 -4591.11091920 1949.42231060 1909.20892751 1975.13043695 -14.29575994
440 20023.93474888 -4591.13618833 1864.59807598 1877.62852076 1843.27438001 -89.29004851
450 20023.93474888 -4591.14065042 1792.33232906 1768.15519821 1791.97620416 -5.46931032
460 20023.93474888 -4591.11500763 1855.81887789 1911.02702117 1854.11481264 -11.65879123
470 20023.93474888 -4591.12641341 1766.54892333 1681.66409644 1790.72946629 -16.19300593
480 20023.93474888 -4591.14320376 1784.94640413 1837.12258760 1829.45130428 -24.58372083
490 20023.93474888 -4591.12275909 1917.84426215 1874.02051396 1823.93473736 -53.21102808
500 20023.93474888 -4591.11536593 1947.65004536 1984.00927202 1955.20583715 170.72348231
510 20023.93474888 -4591.14946014 1646.92190921 1637.16721605 1721.95260390 0.75910182
520 20023.93474888 -4591.11878765 1731.94108930 1743.76148438 1765.97663216 -57.64792042
530 20023.93474888 -4591.12058050 1866.85423508 1909.03069117 1803.97339119 16.08917224
540 20023.93474888 -4591.14440078 1899.09055097 1869.45536614 1955.63982682 -24.32088390
550 20023.93474888 -4591.12281013 1852.60179253 1888.89598923 1828.87997557 33.36368633
560 20023.93474888 -4591.12596918 1774.97474435 1755.08558684 1723.01020118 87.23634641
570 20023.93474888 -4591.13304645 1873.49764770 1893.06123674 1876.90953167 -195.18190912
580 20023.93474888 -4591.13041940 1893.40369832 1948.39675218 1946.70465437 29.22769962
590 20023.93474888 -4591.12628365 1825.99871929 1778.59925204 1799.29497744 3.08469865
600 20023.93474888 -4591.12623199 1711.14291256 1726.06334645 1756.56328186 20.17045328
610 20023.93474888 -4591.13575822 1762.74383759 1696.53579466 1742.16989186 137.99551512
620 20023.93474888 -4591.11748177 1915.62953629 1922.50709681 1889.71048322 -151.19246541
630 20023.93474888 -4591.13946145 1831.51838075 1853.88531957 1944.94176350 -78.29498078
640 20023.93474888 -4591.12580524 1823.82318701 1846.02546609 1743.64065792 65.64051216
650 20023.93474888 -4591.11948676 1873.78285550 1775.89089801 1906.72223535 -139.90734745
660 20023.93474888 -4591.14305191 1839.42659647 1933.10913722 1792.91114197 95.22053922
670 20023.93474888 -4591.12366236 1774.96754052 1789.81121744 1723.60115478 55.42770410
680 20023.93474888 -4591.12112515 1741.65490764 1742.36094980 1800.06072377 -192.49818667
690 20023.93474888 -4591.13668764 1895.42242504 1903.97215886 1897.27680319 147.64765468
700 20023.93474888 -4591.13154232 1907.73595360 1909.70840176 1899.59355684 -41.88693465
710 20023.93474888 -4591.12123644 1787.17165151 1846.04442558 1856.62285015 14.61013131
720 20023.93474888 -4591.12912228 1773.50711774 1711.48986545 1678.98894392 99.13962644
730 20023.93474888 -4591.13943426 1877.91861157 1809.62732738 1952.21169623 -179.48456876
740 20023.93474888 -4591.11922024 1847.48164813 1916.28361817 1760.80492531 -43.61964522
750 20023.93474888 -4591.12596610 1728.48429245 1815.68288594 1697.85161513 55.12427062
760 20023.93474888 -4591.14243576 1796.32698641 1612.95116168 1889.48842714 -56.19243304
770 20023.93474888 -4591.11566749 1950.43433206 2057.36511752 1888.45827111 128.97571878
780 20023.93474888 -4591.13794531 1856.08214494 1774.55616742 1998.19458473 -130.30085155
790 20023.93474888 -4591.12484960 1705.79355619 1781.11561557 1610.09362743 6.08209071
800 20023.93474888 -4591.13248423 1653.85467554 1641.87803368 1645.29795462 53.02769875
810 20023.93474888 -4591.12053801 1850.72927101 1853.81007899 1934.33578436 -11.41348471
820 20023.93474888 -4591.13568633 2005.01327886 2023.65736202 1948.64345022 -22.95511661
830 20023.93474888 -4591.13348945 1940.46273386 1921.41281592 1947.71907957 -14.69201391
840 20023.93474888 -4591.11607962 1815.62995416 1907.12718388 1802.10585446 -143.02255465
850 20023.93474888 -4591.14111076 1700.83719470 1651.77649527 1634.11260381 100.81708093
860 20023.93474888 -4591.12299798 1784.63349939 1801.31186313 1877.00630923 -28.97604911
870 20023.93474888 -4591.12551010 1837.85662130 1761.53486861 1840.30471001 -52.75496595
880 20023.93474888 -4591.13571795 1820.08693693 1877.16684816 1867.96423182 86.61080127
890 20023.93474888 -4591.12866291 1826.87796103 1839.08169130 1817.17394238 -134.01673514
900 20023.93474888 -4591.12028235 1746.32172050 1741.62288887 1761.74467257 152.25484122
910 20023.93474888 -4591.14179606 1761.97332567 1830.92751090 1713.93524698 -9.10599600
920 20023.93474888 -4591.12106188 1914.68900864 1818.03927415 1943.64134070 -147.61072264
930 20023.93474888 -4591.12848096 1926.62340190 1986.31823801 1928.48168675 91.05693225
940 20023.93474888 -4591.13069876 1761.33037533 1672.68740184 1752.00036085 -89.58641136
950 20023.93474888 -4591.13173687 1805.68203857 1904.16936367 1803.40809345 66.24135736
960 20023.93474888 -4591.11991153 1917.48459209 1923.72305582 1935.83439145 87.27226176
970 20023.93474888 -4591.13868359 1847.33106721 1840.24455972 1893.06308349 -129.01573111
980 20023.93474888 -4591.13181295 1727.55316411 1738.87912907 1668.27059033 59.44259326
990 20023.93474888 -4591.11625584 1786.71812091 1715.85427725 1854.96328384 126.73228092
1000 20023.93474888 -4591.13934315 1826.70687382 1852.65649263 1809.58902755 -174.47060530
1010 20023.93474888 -4591.13095663 1811.18694875 1846.66780622 1799.54031746 82.32118283
1020 20023.93474888 -4591.12183334 1887.55743479 1888.81913424 1892.46209156 -133.57878235
1030 20023.93474888 -4591.13070772 1816.08941967 1795.09854639 1813.60363124 -85.34396177
1040 20023.93474888 -4591.13464400 1793.05553513 1867.58492264 1755.85054655 230.90630163
1050 20023.93474888 -4591.12336443 1737.63082740 1638.33978731 1775.64351248 -218.66770699
1060 20023.93474888 -4591.12391336 1909.02227267 1917.61923360 1929.46998267 201.87082324
1070 20023.93474888 -4591.14276207 1853.77633175 1845.01344377 1926.71448240 7.58141037
1080 20023.93474888 -4591.12033703 1840.89520938 1986.47111954 1764.94655261 -247.98825747
1090 20023.93474888 -4591.12285870 1832.49648001 1766.39647524 1807.79358153 337.75570868
1100 20023.93474888 -4591.14555863 1760.67028787 1709.70075974 1799.92422039 -115.37262627
1110 20023.93474888 -4591.11192008 1900.26741848 1977.75347876 1822.20529111 -155.07728288
1120 20023.93474888 -4591.13385646 1846.80363719 1808.16104869 1966.17497644 225.10841939
1130 20023.93474888 -4591.13677038 1841.68639992 1848.15406996 1742.05105373 -236.03870840
1140 20023.93474888 -4591.11917782 1698.85212076 1683.67420262 1769.66448366 -17.01311110
1150 20023.93474888 -4591.12452665 1784.18127764 1793.37702282 1832.40349926 223.51415711
1160 20023.93474888 -4591.14073576 1851.04085412 1886.26603210 1747.67661478 -243.41304842
1170 20023.93474888 -4591.12852378 1864.81370801 1815.62177095 1968.34473464 56.20720441
1180 20023.93474888 -4591.11147236 1764.74624859 1757.71396067 1739.81644521 145.86370499
1190 20023.93474888 -4591.14727099 1743.23431143 1756.61309537 1783.17306740 -228.37177614
1200 20023.93474888 -4591.12826190 1966.84129537 1984.72539337 1881.61224992 250.19286662
1210 20023.93474888 -4591.12135178 1991.65207643 1967.17238770 2032.30298146 -170.69659769
1220 20023.93474888 -4591.12704074 1798.24767693 1886.54781818 1750.35115144 -51.42237622
1230 20023.93474888 -4591.15251028 1648.42299475 1626.54609762 1707.43237169 201.89424169
1240 20023.93474888 -4591.10437001 1807.54033792 1804.75428684 1800.73969035 -130.77309115
1250 20023.93474888 -4591.12866282 1822.12377428 1924.97526529 1809.30741820 20.43220727
1260 20023.93474888 -4591.15253116 1834.13256135 1775.15312363 1862.92726901 143.80072625
1270 20023.93474888 -4591.11697045 1895.99420213 1890.05548682 1876.70705636 -196.28190118
1280 20023.93474888 -4591.10998558 1810.69685083 1786.90144766 1890.32743580 114.61127078
1290 20023.93474888 -4591.15663369 1683.26225221 1732.91813399 1605.72640055 -56.89297456
1300 20023.93474888 -4591.12644490 1877.89503414 1849.54325292 1892.65946124 -90.18957430
1310 20023.93474888 -4591.10427489 1965.23083403 1891.58965014 2012.51270249 177.72644110
1320 20023.93474888 -4591.15434861 1777.36326273 1897.73365762 1722.00033373 -82.34768718
1330 20023.93474888 -4591.12298907 1836.66913522 1799.70130374 1793.68254013 -61.75151998
1340 20023.93474888 -4591.12490141 1777.80141168 1795.27706143 1895.24485296 166.42735417
1350 20023.93474888 -4591.12378570 1791.10623391 1818.30852833 1798.44218549 -226.55174357
1360 20023.93474888 -4591.14595156 1831.57041599 1836.41493856 1803.22092404 47.63921757
1370 20023.93474888 -4591.12178926 1874.48627007 1830.97360382 1875.01454925 24.21227347
1380 20023.93474888 -4591.11951398 1796.21540344 1815.96845897 1814.42525184 -116.20769836
1390 20023.93474888 -4591.14872688 1781.31452347 1832.31081575 1656.75621082 188.41394605
1400 20023.93474888 -4591.11780318 1969.45961602 1942.83749662 2052.95436774 -155.30340466
1410 20023.93474888 -4591.13042094 1805.38744778 1812.16090834 1841.18785723 -135.61134425
1420 20023.93474888 -4591.13286369 1739.68760936 1659.96693548 1707.24203303 110.27513123
1430 20023.93474888 -4591.13198735 1747.53328395 1806.76726000 1784.09093686 5.21735925
1440 20023.93474888 -4591.11951402 1859.97208154 1781.23741206 1873.95961458 101.92428758
1450 20023.93474888 -4591.13735783 1846.95509332 1932.51431610 1875.78491023 -20.51854956
1460 20023.93474888 -4591.13226567 1823.25742012 1856.67667532 1742.46107569 -236.53527110
1470 20023.93474888 -4591.11753177 1840.40145258 1804.81903258 1927.84741838 204.06984499
1480 20023.93474888 -4591.12988851 1857.83857551 1869.95780410 1816.24608994 47.30748811
1490 20023.93474888 -4591.14097247 1856.07106933 1865.36744675 1821.35003034 -170.29765752
1500 20023.93474888 -4591.11239658 1788.62741104 1806.80467824 1740.17442721 184.92894743
1510 20023.93474888 -4591.12783714 1790.12382236 1762.63806951 1871.28139822 -186.60004317
1520 20023.93474888 -4591.14195717 1821.44183211 1942.11747262 1841.04727513 162.73737184
1530 20023.93474888 -4591.11752486 1893.52056191 1779.18475868 1942.62286196 -36.33494025
1540 20023.93474888 -4591.12184439 1817.26275805 1859.82625327 1763.71144260 -112.75758714
1550 20023.93474888 -4591.13981158 1749.80836038 1700.61859748 1825.01128874 217.38903989
1560 20023.93474888 -4591.12728980 1805.71852411 1871.59756252 1719.68258949 -238.74223976
1570 20023.93474888 -4591.11445788 1888.39024553 1835.20226211 1912.08399927 -215.27224989
1580 20023.93474888 -4591.14252397 1838.41560957 1873.47969145 1837.47624533 354.25680129
1590 20023.93474888 -4591.13218156 1810.87114008 1789.08855052 1828.99123618 -209.15084305
1600 20023.93474888 -4591.10968279 1836.00776639 1825.81027089 1837.34962151 100.90452637
1610 20023.93474888 -4591.14604707 1757.48298096 1809.03232957 1740.69121405 2.74809819
1620 20023.93474888 -4591.12529727 1849.75708598 1814.73147905 1947.60906283 -148.12289096
1630 20023.93474888 -4591.12162235 1786.81944541 1807.60181185 1733.20416838 206.15614063
1640 20023.93474888 -4591.13525115 1749.94510017 1689.93885297 1800.12923385 -51.81620378
1650 20023.93474888 -4591.12880584 1980.51885483 2129.50862093 1891.59140452 -174.84268771
1660 20023.93474888 -4591.12623477 1937.01570743 1858.01901653 2010.43954630 262.44914826
1670 20023.93474888 -4591.13072559 1694.39186347 1719.93995818 1664.78730910 -145.83592220
1680 20023.93474888 -4591.12856494 1668.28855671 1605.96305541 1655.07886827 -142.62923930
1690 20023.93474888 -4591.12284600 1970.77558510 1970.04100693 2023.90407383 345.77356846
1700 20023.93474888 -4591.13613310 1866.78766162 1829.21192148 1874.14168461 -348.93449367
1710 20023.93474888 -4591.12999014 1699.05790295 1794.09077236 1679.36565966 178.31013280
1720 20023.93474888 -4591.11591976 1782.98025612 1700.01516442 1839.71274928 70.34469196
1730 20023.93474888 -4591.14002514 1828.78788640 1977.52189719 1827.54175302 -277.62511918
1740 20023.93474888 -4591.13058733 1798.70721142 1684.56540042 1750.52641931 233.57737288
1750 20023.93474888 -4591.11769219 1851.38818281 1927.22480022 1826.49368852 -71.98649667
1760 20023.93474888 -4591.14007689 1883.09481847 1852.22751222 1980.94397520 -186.99017028
1770 20023.93474888 -4591.12640655 1894.21954462 1854.80842095 1863.10559976 245.84910078
1780 20023.93474888 -4591.12667944 1783.63681589 1763.81183224 1762.03364824 -313.41414175
1790 20023.93474888 -4591.13111090 1787.61153134 1881.28039744 1851.63506346 105.01216221
1800 20023.93474888 -4591.12970750 1803.03035944 1787.16231095 1760.00551047 314.84058979
1810 20023.93474888 -4591.12225161 1812.82894513 1745.83085735 1834.29341515 -340.09732132
1820 20023.93474888 -4591.13716194 1775.41292525 1771.18484944 1813.93798001 70.81021199
1830 20023.93474888 -4591.12268274 1837.06086954 1854.30962385 1844.39144552 145.53217437
1840 20023.93474888 -4591.12375023 1837.54812559 1877.01563366 1835.09263974 -272.38620552
1850 20023.93474888 -4591.14592434 1799.83851836 1722.86830853 1788.44737857 224.62763516
1860 20023.93474888 -4591.11203857 1905.18247850 1950.95536684 1931.86453236 -124.44793867
1870 20023.93474888 -4591.13265210 1806.29161872 1787.78341130 1766.71844421 -34.60592395
1880 20023.93474888 -4591.14209985 1740.88195157 1752.31650148 1773.74624207 343.80423723
1890 20023.93474888 -4591.11622075 1861.70663705 1810.81638169 1911.13160086 -309.99090581
1900 20023.93474888 -4591.12680903 1858.70135891 2003.32077130 1814.47948340 26.70054215
1910 20023.93474888 -4591.14366548 1806.10434545 1792.54775377 1864.38160004 301.24438764
1920 20023.93474888 -4591.11781857 1833.48848596 1766.77497003 1815.64080638 -293.06438183
1930 20023.93474888 -4591.12285278 1839.11335622 1929.28055650 1841.52147596 59.60052069
1940 20023.93474888 -4591.14018804 1796.22184540 1744.26853225 1746.83391971 40.37326649
1950 20023.93474888 -4591.12901254 1816.28164897 1873.15646355 1849.53951502 -206.15332558
1960 20023.93474888 -4591.10959346 1829.95888178 1759.19881131 1818.96147638 140.43862743
1970 20023.93474888 -4591.15041597 1695.66220939 1759.10227856 1713.31191846 1.48998705
1980 20023.93474888 -4591.12373926 1839.11714441 1819.64176487 1799.65559207 -171.07051582
1990 20023.93474888 -4591.11065302 1930.61206830 1898.47853948 1985.20466055 305.47151137
2000 20023.93474888 -4591.15175708 1707.09302769 1754.70782606 1717.76389055 -259.68475322
Loop time of 14.4074 on 1 procs for 2000 steps with 1000 atoms
Performance: 23.988 ns/day, 1.001 hours/ns, 138.817 timesteps/s, 138.817 katom-step/s
99.8% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 14.354 | 14.354 | 14.354 | 0.0 | 99.63
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.017188 | 0.017188 | 0.017188 | 0.0 | 0.12
Output | 0.0050332 | 0.0050332 | 0.0050332 | 0.0 | 0.03
Modify | 0.021505 | 0.021505 | 0.021505 | 0.0 | 0.15
Other | | 0.009614 | | | 0.07
Nlocal: 1000 ave 1000 max 1000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1470 ave 1470 max 1470 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 14000 ave 14000 max 14000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 28000 ave 28000 max 28000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 28000
Ave neighs/atom = 28
Neighbor list builds = 0
Dangerous builds = 0
Total wall time: 0:00:14
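
The Performance line above follows directly from the timestep rate and the 0.002 ps timestep; a small arithmetic check (editorial Python sketch, not part of the example files):

# perf_check.py: relate timesteps/s to ns/day for the serial run logged above
steps_per_s = 138.817                        # "138.817 timesteps/s" from the Performance line
dt_ps = 0.002                                # timestep from the input (metal units, ps)
print(steps_per_s * dt_ps * 86400 / 1000)    # ~23.99, matching "23.988 ns/day"
print(1000 / (dt_ps * steps_per_s) / 3600)   # ~1.00, matching "1.001 hours/ns"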

View File

@ -0,0 +1,304 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# bulk Si lattice
variable x index 1
variable y index 1
variable z index 1
units metal
atom_style atomic
atom_style atomic
lattice diamond 5.431
Lattice spacing in x,y,z = 5.431 5.431 5.431
region box block 0 5 0 5 0 5
boundary p p p
create_box 1 box
Created orthogonal box = (0 0 0) to (27.155 27.155 27.155)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 1000 atoms
using lattice units in orthogonal box = (0 0 0) to (27.155 27.155 27.155)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
velocity all create 300.0 376847 loop geom
neighbor 1.0 bin
neigh_modify every 1 delay 5 check yes
fix 1 all nve
thermo 1
thermo_style custom step vol etotal press pxx pyy pxz
thermo_modify format 2 %14.8f
thermo_modify format 3 %14.8f
thermo_modify format 4 %14.8f
thermo_modify format 5 %14.8f
thermo_modify format 6 %14.8f
thermo_modify format 7 %14.8f
timestep 0.002
thermo 10
run 2000
Neighbor list info ...
update: every = 1 steps, delay = 5 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 5.4
ghost atom cutoff = 5.4
binsize = 2.7, bins = 11 11 11
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.844 | 3.844 | 3.844 Mbytes
Step Volume TotEng Press Pxx Pyy Pxz
0 20023.93474888 -4591.26061752 2033.68946971 2021.11445194 1952.17782797 -97.47450759
10 20023.93474888 -4591.06635027 2106.23969322 2152.04799431 2155.92194220 12.68865715
20 20023.93474888 -4591.08410781 1520.06153315 1492.61610264 1497.10285778 106.41609464
30 20023.93474888 -4591.20090599 1491.31281700 1502.41875599 1432.20332178 -220.78939631
40 20023.93474888 -4591.12407583 2061.65437359 2086.56902463 2156.12774891 133.73119636
50 20023.93474888 -4591.07086388 1998.55985680 2023.28426545 1940.19019547 -154.08048062
60 20023.93474888 -4591.17588927 1598.13531474 1534.49237825 1668.01401046 -30.47944201
70 20023.93474888 -4591.14658914 1774.80198893 1761.97777404 1785.46768303 265.88782944
80 20023.93474888 -4591.07344257 2005.07361554 2086.14231294 1947.20619069 -215.28622700
90 20023.93474888 -4591.16235879 1688.39497858 1707.61777360 1644.33608005 -164.72130224
100 20023.93474888 -4591.14061103 1841.94537985 1773.73213126 1872.51475381 270.59501062
110 20023.93474888 -4591.10220075 1929.22145308 1909.87270800 1904.88051163 -225.24314462
120 20023.93474888 -4591.13680660 1781.93134139 1866.38642644 1768.89564107 99.86322623
130 20023.93474888 -4591.14594757 1822.72100705 1800.47495717 1820.60642574 80.95885439
140 20023.93474888 -4591.10390676 1910.20395959 1955.77665062 1995.74598843 -90.83622328
150 20023.93474888 -4591.14245989 1806.86945717 1826.66971108 1792.88598455 109.51618275
160 20023.93474888 -4591.13052367 1788.16506016 1720.21348685 1791.47937216 -21.37756610
170 20023.93474888 -4591.11781009 1807.71365716 1900.29204203 1719.93993616 -188.87504172
180 20023.93474888 -4591.13704844 1743.08721343 1658.19411470 1778.16073808 259.52251154
190 20023.93474888 -4591.13308836 1922.54250327 2098.60870590 1929.55744997 -258.20774060
200 20023.93474888 -4591.11910322 1948.92068139 1792.61227410 1942.76349531 -13.04781205
210 20023.93474888 -4591.13284135 1783.61066563 1820.43796850 1760.07633603 159.07171405
220 20023.93474888 -4591.13715614 1662.49241020 1659.81600936 1681.59150854 -132.96912515
230 20023.93474888 -4591.11129339 1856.01542680 1850.81261755 1867.66185416 52.30767173
240 20023.93474888 -4591.13924088 1891.58063505 1873.70129244 1956.58093826 98.93914776
250 20023.93474888 -4591.13115836 1860.91219630 1879.89822419 1879.31544524 -206.02783594
260 20023.93474888 -4591.12006308 1818.08099684 1941.33545552 1695.61949198 135.96435310
270 20023.93474888 -4591.13331488 1745.21920087 1632.20932864 1841.75237705 -14.79711037
280 20023.93474888 -4591.12668399 1922.27473363 1983.53049663 1801.16867887 -31.89499092
290 20023.93474888 -4591.12976814 1897.55799948 1827.64752799 1980.45982095 2.76502468
300 20023.93474888 -4591.13041107 1825.92142175 1878.68168429 1802.25725197 -38.06898013
310 20023.93474888 -4591.12342070 1743.43519435 1729.85694627 1756.42228046 -60.06468142
320 20023.93474888 -4591.13442022 1763.60998541 1802.84007786 1758.08838840 143.69519620
330 20023.93474888 -4591.12695785 1845.27144767 1808.79851160 1834.73311592 -121.25298963
340 20023.93474888 -4591.12739037 1824.35911264 1832.04065906 1890.80248509 133.62517019
350 20023.93474888 -4591.12837932 1846.72083448 1835.30033595 1798.95928707 4.24639304
360 20023.93474888 -4591.13622705 1815.71663299 1841.77777220 1827.49780217 -143.29464941
370 20023.93474888 -4591.11509215 1875.67484263 1827.27976696 1896.21161847 45.76662873
380 20023.93474888 -4591.13911382 1797.59606691 1812.24811799 1794.04473228 -41.59509792
390 20023.93474888 -4591.13044234 1898.76664850 1956.19730294 1860.25003173 50.92492249
400 20023.93474888 -4591.11980093 1844.92592342 1784.40892834 1906.40469373 43.28606684
410 20023.93474888 -4591.13203164 1712.71494613 1814.06280558 1635.86741674 -103.93631537
420 20023.93474888 -4591.13931019 1733.36385225 1744.49160982 1807.86989588 133.74823990
430 20023.93474888 -4591.11091920 1949.42231060 1909.20892751 1975.13043695 -14.29575994
440 20023.93474888 -4591.13618833 1864.59807598 1877.62852076 1843.27438001 -89.29004851
450 20023.93474888 -4591.14065042 1792.33232906 1768.15519821 1791.97620416 -5.46931032
460 20023.93474888 -4591.11500763 1855.81887789 1911.02702117 1854.11481264 -11.65879123
470 20023.93474888 -4591.12641341 1766.54892333 1681.66409644 1790.72946629 -16.19300593
480 20023.93474888 -4591.14320376 1784.94640413 1837.12258760 1829.45130428 -24.58372083
490 20023.93474888 -4591.12275909 1917.84426215 1874.02051396 1823.93473736 -53.21102808
500 20023.93474888 -4591.11536593 1947.65004536 1984.00927202 1955.20583715 170.72348231
510 20023.93474888 -4591.14946014 1646.92190921 1637.16721605 1721.95260390 0.75910182
520 20023.93474888 -4591.11878765 1731.94108930 1743.76148438 1765.97663216 -57.64792043
530 20023.93474888 -4591.12058050 1866.85423508 1909.03069117 1803.97339119 16.08917224
540 20023.93474888 -4591.14440078 1899.09055097 1869.45536614 1955.63982682 -24.32088390
550 20023.93474888 -4591.12281013 1852.60179253 1888.89598923 1828.87997557 33.36368633
560 20023.93474888 -4591.12596918 1774.97474435 1755.08558684 1723.01020118 87.23634641
570 20023.93474888 -4591.13304645 1873.49764770 1893.06123674 1876.90953167 -195.18190912
580 20023.93474888 -4591.13041940 1893.40369832 1948.39675218 1946.70465437 29.22769962
590 20023.93474888 -4591.12628365 1825.99871929 1778.59925204 1799.29497744 3.08469865
600 20023.93474888 -4591.12623199 1711.14291256 1726.06334645 1756.56328186 20.17045328
610 20023.93474888 -4591.13575822 1762.74383759 1696.53579466 1742.16989186 137.99551512
620 20023.93474888 -4591.11748177 1915.62953629 1922.50709681 1889.71048322 -151.19246541
630 20023.93474888 -4591.13946145 1831.51838075 1853.88531957 1944.94176350 -78.29498078
640 20023.93474888 -4591.12580524 1823.82318701 1846.02546609 1743.64065792 65.64051216
650 20023.93474888 -4591.11948676 1873.78285550 1775.89089801 1906.72223535 -139.90734745
660 20023.93474888 -4591.14305191 1839.42659647 1933.10913722 1792.91114197 95.22053922
670 20023.93474888 -4591.12366236 1774.96754052 1789.81121744 1723.60115478 55.42770410
680 20023.93474888 -4591.12112515 1741.65490764 1742.36094980 1800.06072377 -192.49818667
690 20023.93474888 -4591.13668764 1895.42242504 1903.97215886 1897.27680319 147.64765468
700 20023.93474888 -4591.13154232 1907.73595360 1909.70840176 1899.59355684 -41.88693465
710 20023.93474888 -4591.12123644 1787.17165151 1846.04442558 1856.62285015 14.61013131
720 20023.93474888 -4591.12912228 1773.50711774 1711.48986545 1678.98894392 99.13962644
730 20023.93474888 -4591.13943426 1877.91861157 1809.62732738 1952.21169623 -179.48456876
740 20023.93474888 -4591.11922024 1847.48164813 1916.28361817 1760.80492531 -43.61964522
750 20023.93474888 -4591.12596610 1728.48429245 1815.68288594 1697.85161513 55.12427062
760 20023.93474888 -4591.14243576 1796.32698641 1612.95116168 1889.48842714 -56.19243304
770 20023.93474888 -4591.11566749 1950.43433206 2057.36511752 1888.45827111 128.97571878
780 20023.93474888 -4591.13794531 1856.08214494 1774.55616742 1998.19458473 -130.30085155
790 20023.93474888 -4591.12484960 1705.79355619 1781.11561557 1610.09362743 6.08209071
800 20023.93474888 -4591.13248423 1653.85467554 1641.87803368 1645.29795462 53.02769875
810 20023.93474888 -4591.12053801 1850.72927101 1853.81007899 1934.33578436 -11.41348471
820 20023.93474888 -4591.13568633 2005.01327886 2023.65736202 1948.64345022 -22.95511661
830 20023.93474888 -4591.13348945 1940.46273386 1921.41281592 1947.71907956 -14.69201391
840 20023.93474888 -4591.11607962 1815.62995416 1907.12718388 1802.10585446 -143.02255465
850 20023.93474888 -4591.14111076 1700.83719470 1651.77649527 1634.11260381 100.81708093
860 20023.93474888 -4591.12299798 1784.63349939 1801.31186313 1877.00630923 -28.97604911
870 20023.93474888 -4591.12551010 1837.85662130 1761.53486861 1840.30471001 -52.75496595
880 20023.93474888 -4591.13571795 1820.08693693 1877.16684816 1867.96423182 86.61080127
890 20023.93474888 -4591.12866291 1826.87796103 1839.08169130 1817.17394238 -134.01673514
900 20023.93474888 -4591.12028235 1746.32172050 1741.62288887 1761.74467257 152.25484122
910 20023.93474888 -4591.14179606 1761.97332567 1830.92751090 1713.93524698 -9.10599599
920 20023.93474888 -4591.12106188 1914.68900864 1818.03927415 1943.64134070 -147.61072264
930 20023.93474888 -4591.12848096 1926.62340190 1986.31823801 1928.48168675 91.05693225
940 20023.93474888 -4591.13069876 1761.33037533 1672.68740184 1752.00036085 -89.58641136
950 20023.93474888 -4591.13173687 1805.68203857 1904.16936367 1803.40809345 66.24135736
960 20023.93474888 -4591.11991153 1917.48459209 1923.72305582 1935.83439145 87.27226176
970 20023.93474888 -4591.13868359 1847.33106721 1840.24455972 1893.06308349 -129.01573111
980 20023.93474888 -4591.13181295 1727.55316411 1738.87912907 1668.27059033 59.44259326
990 20023.93474888 -4591.11625584 1786.71812091 1715.85427725 1854.96328384 126.73228092
1000 20023.93474888 -4591.13934315 1826.70687382 1852.65649263 1809.58902755 -174.47060530
1010 20023.93474888 -4591.13095663 1811.18694875 1846.66780622 1799.54031746 82.32118283
1020 20023.93474888 -4591.12183334 1887.55743479 1888.81913424 1892.46209156 -133.57878235
1030 20023.93474888 -4591.13070772 1816.08941967 1795.09854639 1813.60363124 -85.34396177
1040 20023.93474888 -4591.13464400 1793.05553513 1867.58492264 1755.85054655 230.90630163
1050 20023.93474888 -4591.12336443 1737.63082740 1638.33978731 1775.64351248 -218.66770699
1060 20023.93474888 -4591.12391336 1909.02227267 1917.61923360 1929.46998267 201.87082324
1070 20023.93474888 -4591.14276207 1853.77633175 1845.01344377 1926.71448240 7.58141037
1080 20023.93474888 -4591.12033703 1840.89520938 1986.47111954 1764.94655261 -247.98825747
1090 20023.93474888 -4591.12285870 1832.49648001 1766.39647524 1807.79358153 337.75570868
1100 20023.93474888 -4591.14555863 1760.67028787 1709.70075974 1799.92422039 -115.37262627
1110 20023.93474888 -4591.11192008 1900.26741848 1977.75347876 1822.20529111 -155.07728288
1120 20023.93474888 -4591.13385646 1846.80363719 1808.16104869 1966.17497644 225.10841939
1130 20023.93474888 -4591.13677038 1841.68639992 1848.15406996 1742.05105373 -236.03870840
1140 20023.93474888 -4591.11917782 1698.85212076 1683.67420262 1769.66448366 -17.01311110
1150 20023.93474888 -4591.12452665 1784.18127764 1793.37702282 1832.40349926 223.51415711
1160 20023.93474888 -4591.14073576 1851.04085412 1886.26603210 1747.67661478 -243.41304842
1170 20023.93474888 -4591.12852378 1864.81370801 1815.62177095 1968.34473464 56.20720441
1180 20023.93474888 -4591.11147236 1764.74624859 1757.71396067 1739.81644521 145.86370499
1190 20023.93474888 -4591.14727099 1743.23431143 1756.61309537 1783.17306740 -228.37177614
1200 20023.93474888 -4591.12826190 1966.84129537 1984.72539337 1881.61224992 250.19286662
1210 20023.93474888 -4591.12135178 1991.65207643 1967.17238770 2032.30298146 -170.69659769
1220 20023.93474888 -4591.12704074 1798.24767693 1886.54781818 1750.35115144 -51.42237623
1230 20023.93474888 -4591.15251028 1648.42299475 1626.54609762 1707.43237169 201.89424169
1240 20023.93474888 -4591.10437001 1807.54033792 1804.75428684 1800.73969035 -130.77309115
1250 20023.93474888 -4591.12866282 1822.12377428 1924.97526529 1809.30741820 20.43220727
1260 20023.93474888 -4591.15253116 1834.13256135 1775.15312363 1862.92726901 143.80072625
1270 20023.93474888 -4591.11697045 1895.99420213 1890.05548682 1876.70705636 -196.28190118
1280 20023.93474888 -4591.10998558 1810.69685083 1786.90144766 1890.32743580 114.61127078
1290 20023.93474888 -4591.15663369 1683.26225221 1732.91813399 1605.72640055 -56.89297456
1300 20023.93474888 -4591.12644490 1877.89503414 1849.54325292 1892.65946124 -90.18957430
1310 20023.93474888 -4591.10427489 1965.23083403 1891.58965014 2012.51270249 177.72644110
1320 20023.93474888 -4591.15434861 1777.36326273 1897.73365762 1722.00033373 -82.34768718
1330 20023.93474888 -4591.12298907 1836.66913522 1799.70130374 1793.68254013 -61.75151998
1340 20023.93474888 -4591.12490141 1777.80141168 1795.27706143 1895.24485296 166.42735417
1350 20023.93474888 -4591.12378570 1791.10623391 1818.30852832 1798.44218549 -226.55174357
1360 20023.93474888 -4591.14595156 1831.57041599 1836.41493856 1803.22092404 47.63921757
1370 20023.93474888 -4591.12178926 1874.48627007 1830.97360382 1875.01454925 24.21227347
1380 20023.93474888 -4591.11951398 1796.21540344 1815.96845897 1814.42525184 -116.20769836
1390 20023.93474888 -4591.14872688 1781.31452346 1832.31081575 1656.75621082 188.41394605
1400 20023.93474888 -4591.11780318 1969.45961602 1942.83749662 2052.95436774 -155.30340466
1410 20023.93474888 -4591.13042094 1805.38744778 1812.16090834 1841.18785723 -135.61134425
1420 20023.93474888 -4591.13286369 1739.68760936 1659.96693548 1707.24203303 110.27513123
1430 20023.93474888 -4591.13198735 1747.53328395 1806.76726000 1784.09093686 5.21735925
1440 20023.93474888 -4591.11951402 1859.97208154 1781.23741206 1873.95961458 101.92428758
1450 20023.93474888 -4591.13735783 1846.95509332 1932.51431610 1875.78491023 -20.51854956
1460 20023.93474888 -4591.13226567 1823.25742012 1856.67667532 1742.46107569 -236.53527110
1470 20023.93474888 -4591.11753177 1840.40145258 1804.81903258 1927.84741838 204.06984499
1480 20023.93474888 -4591.12988851 1857.83857551 1869.95780410 1816.24608994 47.30748811
1490 20023.93474888 -4591.14097247 1856.07106933 1865.36744675 1821.35003034 -170.29765752
1500 20023.93474888 -4591.11239658 1788.62741104 1806.80467824 1740.17442721 184.92894743
1510 20023.93474888 -4591.12783714 1790.12382236 1762.63806951 1871.28139822 -186.60004317
1520 20023.93474888 -4591.14195717 1821.44183211 1942.11747262 1841.04727513 162.73737184
1530 20023.93474888 -4591.11752486 1893.52056191 1779.18475868 1942.62286196 -36.33494025
1540 20023.93474888 -4591.12184439 1817.26275805 1859.82625327 1763.71144260 -112.75758714
1550 20023.93474888 -4591.13981158 1749.80836038 1700.61859748 1825.01128874 217.38903988
1560 20023.93474888 -4591.12728980 1805.71852411 1871.59756252 1719.68258949 -238.74223976
1570 20023.93474888 -4591.11445788 1888.39024553 1835.20226211 1912.08399927 -215.27224989
1580 20023.93474888 -4591.14252397 1838.41560957 1873.47969145 1837.47624533 354.25680129
1590 20023.93474888 -4591.13218156 1810.87114008 1789.08855052 1828.99123618 -209.15084305
1600 20023.93474888 -4591.10968279 1836.00776639 1825.81027089 1837.34962151 100.90452637
1610 20023.93474888 -4591.14604707 1757.48298096 1809.03232957 1740.69121405 2.74809819
1620 20023.93474888 -4591.12529727 1849.75708598 1814.73147905 1947.60906283 -148.12289096
1630 20023.93474888 -4591.12162235 1786.81944541 1807.60181185 1733.20416838 206.15614063
1640 20023.93474888 -4591.13525115 1749.94510017 1689.93885297 1800.12923385 -51.81620378
1650 20023.93474888 -4591.12880584 1980.51885483 2129.50862093 1891.59140452 -174.84268771
1660 20023.93474888 -4591.12623477 1937.01570743 1858.01901653 2010.43954630 262.44914826
1670 20023.93474888 -4591.13072559 1694.39186347 1719.93995818 1664.78730910 -145.83592220
1680 20023.93474888 -4591.12856494 1668.28855671 1605.96305541 1655.07886827 -142.62923930
1690 20023.93474888 -4591.12284600 1970.77558510 1970.04100693 2023.90407383 345.77356846
1700 20023.93474888 -4591.13613310 1866.78766162 1829.21192148 1874.14168461 -348.93449367
1710 20023.93474888 -4591.12999014 1699.05790295 1794.09077236 1679.36565966 178.31013280
1720 20023.93474888 -4591.11591976 1782.98025612 1700.01516442 1839.71274928 70.34469196
1730 20023.93474888 -4591.14002514 1828.78788640 1977.52189719 1827.54175302 -277.62511918
1740 20023.93474888 -4591.13058733 1798.70721142 1684.56540042 1750.52641931 233.57737288
1750 20023.93474888 -4591.11769219 1851.38818281 1927.22480022 1826.49368852 -71.98649667
1760 20023.93474888 -4591.14007689 1883.09481847 1852.22751222 1980.94397520 -186.99017028
1770 20023.93474888 -4591.12640655 1894.21954462 1854.80842095 1863.10559976 245.84910078
1780 20023.93474888 -4591.12667944 1783.63681589 1763.81183224 1762.03364824 -313.41414175
1790 20023.93474888 -4591.13111090 1787.61153134 1881.28039744 1851.63506346 105.01216221
1800 20023.93474888 -4591.12970750 1803.03035944 1787.16231095 1760.00551047 314.84058979
1810 20023.93474888 -4591.12225161 1812.82894513 1745.83085735 1834.29341515 -340.09732132
1820 20023.93474888 -4591.13716194 1775.41292525 1771.18484944 1813.93798001 70.81021199
1830 20023.93474888 -4591.12268274 1837.06086954 1854.30962384 1844.39144552 145.53217438
1840 20023.93474888 -4591.12375023 1837.54812558 1877.01563366 1835.09263974 -272.38620552
1850 20023.93474888 -4591.14592434 1799.83851836 1722.86830853 1788.44737857 224.62763516
1860 20023.93474888 -4591.11203857 1905.18247850 1950.95536684 1931.86453236 -124.44793867
1870 20023.93474888 -4591.13265210 1806.29161872 1787.78341130 1766.71844421 -34.60592396
1880 20023.93474888 -4591.14209985 1740.88195157 1752.31650148 1773.74624207 343.80423723
1890 20023.93474888 -4591.11622075 1861.70663705 1810.81638169 1911.13160086 -309.99090581
1900 20023.93474888 -4591.12680903 1858.70135891 2003.32077130 1814.47948340 26.70054215
1910 20023.93474888 -4591.14366548 1806.10434545 1792.54775377 1864.38160004 301.24438764
1920 20023.93474888 -4591.11781857 1833.48848596 1766.77497003 1815.64080638 -293.06438183
1930 20023.93474888 -4591.12285278 1839.11335622 1929.28055650 1841.52147596 59.60052069
1940 20023.93474888 -4591.14018804 1796.22184540 1744.26853225 1746.83391971 40.37326649
1950 20023.93474888 -4591.12901254 1816.28164897 1873.15646355 1849.53951502 -206.15332558
1960 20023.93474888 -4591.10959346 1829.95888177 1759.19881131 1818.96147637 140.43862743
1970 20023.93474888 -4591.15041597 1695.66220939 1759.10227856 1713.31191846 1.48998705
1980 20023.93474888 -4591.12373926 1839.11714441 1819.64176487 1799.65559207 -171.07051582
1990 20023.93474888 -4591.11065302 1930.61206830 1898.47853948 1985.20466055 305.47151137
2000 20023.93474888 -4591.15175708 1707.09302769 1754.70782606 1717.76389055 -259.68475322
Loop time of 3.74224 on 4 procs for 2000 steps with 1000 atoms
Performance: 92.351 ns/day, 0.260 hours/ns, 534.439 timesteps/s, 534.439 katom-step/s
99.5% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 3.6818 | 3.6824 | 3.683 | 0.0 | 98.40
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.035276 | 0.035759 | 0.036375 | 0.2 | 0.96
Output | 0.0048602 | 0.0051628 | 0.0059413 | 0.6 | 0.14
Modify | 0.00711 | 0.00726 | 0.0074246 | 0.1 | 0.19
Other | | 0.01162 | | | 0.31
Nlocal: 250 ave 250 max 250 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 733 ave 733 max 733 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 3500 ave 3500 max 3500 min
Histogram: 4 0 0 0 0 0 0 0 0 0
FullNghs: 7000 ave 7000 max 7000 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 28000
Ave neighs/atom = 28
Neighbor list builds = 0
Dangerous builds = 0
Total wall time: 0:00:03

View File

@ -0,0 +1,100 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.147
Lattice spacing in x,y,z = 4.147 4.147 4.147
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (4.147 4.147 4.147)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 4 atoms
using lattice units in orthogonal box = (0 0 0) to (4.147 4.147 4.147)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.715 | 3.715 | 3.715 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -17.15065 0 -17.15065 -53071.74
Loop time of 1.463e-06 on 1 procs for 0 steps with 4 atoms
136.7% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.463e-06 | | |100.00
Nlocal: 4 ave 4 max 4 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 360 ave 360 max 360 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 108 ave 108 max 108 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 216 ave 216 max 216 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 216
Ave neighs/atom = 54
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.28766254199965 eV/atom
print "Reference cohesive energy: -4.288 eV/atom"
Reference cohesive energy: -4.288 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 17.82962113075 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 4.14699999941014 A
print "Reference lattice constant 4.147 A"
Reference lattice constant 4.147 A
print "===================================================="
====================================================
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00

View File

@ -0,0 +1,100 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.147
Lattice spacing in x,y,z = 4.147 4.147 4.147
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (4.147 4.147 4.147)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 4 atoms
using lattice units in orthogonal box = (0 0 0) to (4.147 4.147 4.147)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.746 | 3.746 | 3.746 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -17.15065 0 -17.15065 -53071.74
Loop time of 3.73975e-06 on 4 procs for 0 steps with 4 atoms
86.9% CPU use with 4 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 3.74e-06 | | |100.00
Nlocal: 1 ave 1 max 1 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 287 ave 287 max 287 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 27 ave 27 max 27 min
Histogram: 4 0 0 0 0 0 0 0 0 0
FullNghs: 54 ave 54 max 54 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 216
Ave neighs/atom = 54
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.28766254199965 eV/atom
print "Reference cohesive energy: -4.288 eV/atom"
Reference cohesive energy: -4.288 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 17.82962113075 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 4.14699999941014 A
print "Reference lattice constant 4.147 A"
Reference lattice constant 4.147 A
print "===================================================="
====================================================
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00

View File

@ -0,0 +1,563 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice fcc 4.309793856093661
Lattice spacing in x,y,z = 4.3097939 4.3097939 4.3097939
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (4.3097939 4.3097939 4.3097939)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 4 atoms
using lattice units in orthogonal box = (0 0 0) to (4.3097939 4.3097939 4.3097939)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
fix relax all box/relax aniso 0
thermo 1
minimize 0 0 10000 100000
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 2 2 2
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 4.839 | 4.839 | 4.839 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 0 -14.885032 0 -14.885032 -693238.94 80.051503
1 0 -14.895414 0 -14.895414 -692151.97 80.02749
2 0 -14.905777 0 -14.905777 -691061.83 80.003482
3 0 -14.916122 0 -14.916122 -689968.51 79.979479
4 0 -14.926449 0 -14.926449 -688872.02 79.95548
5 0 -14.936757 0 -14.936757 -687772.36 79.931486
6 0 -14.947047 0 -14.947047 -686669.52 79.907497
7 0 -14.957318 0 -14.957318 -685563.5 79.883513
8 0 -14.96757 0 -14.96757 -684454.31 79.859534
9 0 -14.977804 0 -14.977804 -683341.94 79.835559
10 0 -14.988019 0 -14.988019 -682226.39 79.811589
11 0 -14.998215 0 -14.998215 -681107.66 79.787624
12 0 -15.008393 0 -15.008393 -679985.75 79.763664
13 0 -15.018551 0 -15.018551 -678860.66 79.739708
14 0 -15.028691 0 -15.028691 -677732.39 79.715758
15 0 -15.038812 0 -15.038812 -676600.93 79.691812
16 0 -15.048914 0 -15.048914 -675466.29 79.667871
17 0 -15.058997 0 -15.058997 -674328.46 79.643934
18 0 -15.06906 0 -15.06906 -673187.45 79.620003
19 0 -15.079105 0 -15.079105 -672043.25 79.596076
20 0 -15.089131 0 -15.089131 -670895.86 79.572154
21 0 -15.099137 0 -15.099137 -669745.28 79.548237
22 0 -15.109125 0 -15.109125 -668591.52 79.524325
23 0 -15.119093 0 -15.119093 -667434.56 79.500418
24 0 -15.129041 0 -15.129041 -666274.42 79.476515
25 0 -15.138971 0 -15.138971 -665111.08 79.452617
26 0 -15.148881 0 -15.148881 -663944.55 79.428724
27 0 -15.158771 0 -15.158771 -662774.82 79.404835
28 0 -15.168643 0 -15.168643 -661601.9 79.380952
29 0 -15.178494 0 -15.178494 -660425.79 79.357073
30 0 -15.188327 0 -15.188327 -659246.47 79.333199
31 0 -15.198139 0 -15.198139 -658063.97 79.30933
32 0 -15.207932 0 -15.207932 -656878.26 79.285466
33 0 -15.217706 0 -15.217706 -655689.36 79.261606
34 0 -15.227459 0 -15.227459 -654497.25 79.237751
35 0 -15.237193 0 -15.237193 -653301.95 79.213901
36 0 -15.246907 0 -15.246907 -652103.44 79.190056
37 0 -15.256602 0 -15.256602 -650901.73 79.166215
38 0 -15.266276 0 -15.266276 -649696.82 79.14238
39 0 -15.275931 0 -15.275931 -648488.71 79.118549
40 0 -15.285566 0 -15.285566 -647277.39 79.094723
41 0 -15.295181 0 -15.295181 -646062.87 79.070901
42 0 -15.304775 0 -15.304775 -644845.14 79.047085
43 0 -15.31435 0 -15.31435 -643624.2 79.023273
44 0 -15.323905 0 -15.323905 -642400.06 78.999466
45 0 -15.333439 0 -15.333439 -641172.71 78.975664
46 0 -15.342953 0 -15.342953 -639942.14 78.951867
47 0 -15.352447 0 -15.352447 -638708.37 78.928074
48 0 -15.361921 0 -15.361921 -637471.39 78.904286
49 0 -15.371375 0 -15.371375 -636231.2 78.880503
50 0 -15.380808 0 -15.380808 -634987.79 78.856725
51 0 -15.390221 0 -15.390221 -633741.17 78.832951
52 0 -15.399613 0 -15.399613 -632491.34 78.809183
53 0 -15.408985 0 -15.408985 -631238.3 78.785419
54 0 -15.418337 0 -15.418337 -629982.03 78.761659
55 0 -15.427668 0 -15.427668 -628722.56 78.737905
56 0 -15.436978 0 -15.436978 -627459.86 78.714155
57 0 -15.446268 0 -15.446268 -626193.95 78.690411
58 0 -15.455537 0 -15.455537 -624924.82 78.666671
59 0 -15.464786 0 -15.464786 -623652.47 78.642935
60 0 -15.474014 0 -15.474014 -622376.9 78.619205
61 0 -15.483221 0 -15.483221 -621098.11 78.595479
62 0 -15.492407 0 -15.492407 -619816.1 78.571758
63 0 -15.501572 0 -15.501572 -618530.86 78.548042
64 0 -15.510716 0 -15.510716 -617242.41 78.52433
65 0 -15.51984 0 -15.51984 -615950.73 78.500624
66 0 -15.528942 0 -15.528942 -614655.82 78.476922
67 0 -15.538024 0 -15.538024 -613357.7 78.453225
68 0 -15.547084 0 -15.547084 -612056.34 78.429532
69 0 -15.556124 0 -15.556124 -610751.76 78.405845
70 0 -15.565142 0 -15.565142 -609443.95 78.382162
71 0 -15.574139 0 -15.574139 -608132.92 78.358484
72 0 -15.583115 0 -15.583115 -606818.65 78.334811
73 0 -15.59207 0 -15.59207 -605501.16 78.311142
74 0 -15.601003 0 -15.601003 -604180.44 78.287479
75 0 -15.609915 0 -15.609915 -602856.49 78.26382
76 0 -15.618806 0 -15.618806 -601529.3 78.240165
77 0 -15.627675 0 -15.627675 -600198.88 78.216516
78 0 -15.636523 0 -15.636523 -598865.24 78.192871
79 0 -15.645349 0 -15.645349 -597528.35 78.169231
80 0 -15.654154 0 -15.654154 -596188.24 78.145596
81 0 -15.662937 0 -15.662937 -594844.89 78.121966
82 0 -15.671699 0 -15.671699 -593498.3 78.09834
83 0 -15.680439 0 -15.680439 -592148.48 78.07472
84 0 -15.689157 0 -15.689157 -590795.42 78.051103
85 0 -15.697854 0 -15.697854 -589439.12 78.027492
86 0 -15.706528 0 -15.706528 -588079.59 78.003886
87 0 -15.715181 0 -15.715181 -586716.81 77.980284
88 0 -15.723812 0 -15.723812 -585350.8 77.956687
89 0 -15.732422 0 -15.732422 -583981.55 77.933095
90 0 -15.741009 0 -15.741009 -582609.05 77.909507
91 0 -15.749575 0 -15.749575 -581233.32 77.885924
92 0 -15.758118 0 -15.758118 -579854.34 77.862346
93 0 -15.766639 0 -15.766639 -578472.13 77.838773
94 0 -15.775139 0 -15.775139 -577086.66 77.815205
95 0 -15.783616 0 -15.783616 -575697.96 77.791641
96 0 -15.792071 0 -15.792071 -574306.01 77.768082
97 0 -15.800504 0 -15.800504 -572910.81 77.744528
98 0 -15.808914 0 -15.808914 -571512.37 77.720978
99 0 -15.817303 0 -15.817303 -570110.68 77.697434
100 0 -15.825669 0 -15.825669 -568705.75 77.673894
101 0 -15.834012 0 -15.834012 -567297.57 77.650359
102 0 -15.842334 0 -15.842334 -565886.14 77.626828
103 0 -15.850632 0 -15.850632 -564471.46 77.603303
104 0 -15.858909 0 -15.858909 -563053.53 77.579782
105 0 -15.867163 0 -15.867163 -561632.35 77.556266
106 0 -15.875394 0 -15.875394 -560207.92 77.532754
107 0 -15.883603 0 -15.883603 -558780.24 77.509247
108 0 -15.891789 0 -15.891789 -557349.31 77.485746
109 0 -15.899952 0 -15.899952 -555915.12 77.462248
110 0 -15.908093 0 -15.908093 -554477.69 77.438756
111 0 -15.916211 0 -15.916211 -553036.99 77.415268
112 0 -15.924306 0 -15.924306 -551593.05 77.391785
113 0 -15.932379 0 -15.932379 -550145.85 77.368307
114 0 -15.940428 0 -15.940428 -548695.39 77.344834
115 0 -15.948455 0 -15.948455 -547241.68 77.321365
116 0 -15.956458 0 -15.956458 -545784.72 77.297901
117 0 -15.964439 0 -15.964439 -544324.49 77.274442
118 0 -15.972397 0 -15.972397 -542861.01 77.250988
119 0 -15.980332 0 -15.980332 -541394.27 77.227538
120 0 -15.988243 0 -15.988243 -539924.27 77.204093
121 0 -15.996132 0 -15.996132 -538451.02 77.180653
122 0 -16.003997 0 -16.003997 -536974.5 77.157218
123 0 -16.011839 0 -16.011839 -535494.73 77.133787
124 0 -16.019658 0 -16.019658 -534011.69 77.110361
125 0 -16.027453 0 -16.027453 -532525.4 77.08694
126 0 -16.035225 0 -16.035225 -531035.84 77.063523
127 0 -16.042974 0 -16.042974 -529543.02 77.040112
128 0 -16.0507 0 -16.0507 -528046.94 77.016705
129 0 -16.058402 0 -16.058402 -526547.6 76.993303
130 0 -16.06608 0 -16.06608 -525044.99 76.969905
131 0 -16.073735 0 -16.073735 -523539.12 76.946512
132 0 -16.081366 0 -16.081366 -522029.98 76.923124
133 0 -16.088974 0 -16.088974 -520517.58 76.899741
134 0 -16.096559 0 -16.096559 -519001.91 76.876363
135 0 -16.104119 0 -16.104119 -517482.98 76.852989
136 0 -16.111656 0 -16.111656 -515960.78 76.82962
137 0 -16.119169 0 -16.119169 -514435.32 76.806255
138 0 -16.126658 0 -16.126658 -512906.59 76.782896
139 0 -16.134124 0 -16.134124 -511374.59 76.759541
140 0 -16.141565 0 -16.141565 -509839.32 76.736191
141 0 -16.148983 0 -16.148983 -508300.78 76.712846
142 0 -16.156377 0 -16.156377 -506758.97 76.689505
143 0 -16.163746 0 -16.163746 -505213.89 76.666169
144 0 -16.171092 0 -16.171092 -503665.55 76.642838
145 0 -16.178414 0 -16.178414 -502113.93 76.619512
146 0 -16.185711 0 -16.185711 -500559.04 76.59619
147 0 -16.192985 0 -16.192985 -499000.88 76.572873
148 0 -16.200234 0 -16.200234 -497439.44 76.549561
149 0 -16.207459 0 -16.207459 -495874.74 76.526253
150 0 -16.21466 0 -16.21466 -494306.76 76.50295
151 0 -16.221837 0 -16.221837 -492735.51 76.479652
152 0 -16.228989 0 -16.228989 -491160.98 76.456359
153 0 -16.236117 0 -16.236117 -489583.18 76.433071
154 0 -16.24322 0 -16.24322 -488002.1 76.409787
155 0 -16.250299 0 -16.250299 -486417.75 76.386508
156 0 -16.257354 0 -16.257354 -484830.12 76.363233
157 0 -16.264384 0 -16.264384 -483239.22 76.339964
158 0 -16.271389 0 -16.271389 -481645.03 76.316699
159 0 -16.27837 0 -16.27837 -480047.58 76.293438
160 0 -16.285326 0 -16.285326 -478446.84 76.270183
161 0 -16.292258 0 -16.292258 -476842.83 76.246932
162 0 -16.299165 0 -16.299165 -475235.53 76.223686
163 0 -16.306047 0 -16.306047 -473624.96 76.200445
164 0 -16.312904 0 -16.312904 -472011.11 76.177208
165 0 -16.319737 0 -16.319737 -470393.98 76.153977
166 0 -16.326544 0 -16.326544 -468773.57 76.130749
167 0 -16.333327 0 -16.333327 -467149.88 76.107527
168 0 -16.340085 0 -16.340085 -465522.91 76.084309
169 0 -16.346818 0 -16.346818 -463892.66 76.061096
170 0 -16.353526 0 -16.353526 -462259.12 76.037888
171 0 -16.360208 0 -16.360208 -460622.31 76.014685
172 0 -16.366866 0 -16.366866 -458982.21 75.991486
173 0 -16.373499 0 -16.373499 -457338.83 75.968292
174 0 -16.380106 0 -16.380106 -455692.16 75.945102
175 0 -16.386688 0 -16.386688 -454042.21 75.921918
176 0 -16.393245 0 -16.393245 -452388.98 75.898738
177 0 -16.399777 0 -16.399777 -450732.46 75.875563
178 0 -16.406284 0 -16.406284 -449072.66 75.852392
179 0 -16.412765 0 -16.412765 -447409.57 75.829227
180 0 -16.41922 0 -16.41922 -445743.2 75.806066
181 0 -16.425651 0 -16.425651 -444073.54 75.782909
182 0 -16.432056 0 -16.432056 -442400.6 75.759758
183 0 -16.438435 0 -16.438435 -440724.36 75.736611
184 0 -16.444789 0 -16.444789 -439044.85 75.713469
185 0 -16.451117 0 -16.451117 -437362.04 75.690331
186 0 -16.457419 0 -16.457419 -435675.95 75.667198
187 0 -16.463696 0 -16.463696 -433986.57 75.64407
188 0 -16.469948 0 -16.469948 -432293.9 75.620947
189 0 -16.476173 0 -16.476173 -430597.94 75.597828
190 0 -16.482373 0 -16.482373 -428898.69 75.574715
191 0 -16.488547 0 -16.488547 -427196.15 75.551605
192 0 -16.494695 0 -16.494695 -425490.33 75.528501
193 0 -16.500818 0 -16.500818 -423781.21 75.505401
194 0 -16.506914 0 -16.506914 -422068.8 75.482306
195 0 -16.512984 0 -16.512984 -420353.11 75.459216
196 0 -16.519029 0 -16.519029 -418634.12 75.43613
197 0 -16.525047 0 -16.525047 -416911.84 75.413049
198 0 -16.53104 0 -16.53104 -415186.27 75.389973
199 0 -16.537006 0 -16.537006 -413457.41 75.366901
200 0 -16.542946 0 -16.542946 -411725.25 75.343835
201 0 -16.54886 0 -16.54886 -409989.8 75.320773
202 0 -16.554748 0 -16.554748 -408251.06 75.297715
203 0 -16.560609 0 -16.560609 -406509.03 75.274663
204 0 -16.566445 0 -16.566445 -404763.7 75.251615
205 0 -16.572253 0 -16.572253 -403015.08 75.228571
206 0 -16.578036 0 -16.578036 -401263.17 75.205533
207 0 -16.583792 0 -16.583792 -399507.96 75.182499
208 0 -16.589522 0 -16.589522 -397749.46 75.15947
209 0 -16.595225 0 -16.595225 -395987.66 75.136445
210 0 -16.600902 0 -16.600902 -394222.57 75.113426
211 0 -16.606552 0 -16.606552 -392454.18 75.090411
212 0 -16.612176 0 -16.612176 -390682.5 75.0674
213 0 -16.617773 0 -16.617773 -388907.52 75.044395
214 0 -16.623344 0 -16.623344 -387129.24 75.021394
215 0 -16.628887 0 -16.628887 -385347.67 74.998397
216 0 -16.634404 0 -16.634404 -383562.84 74.975406
217 0 -16.639895 0 -16.639895 -381774.83 74.952419
218 0 -16.645358 0 -16.645358 -379983.65 74.929437
219 0 -16.650795 0 -16.650795 -378189.3 74.906459
220 0 -16.656204 0 -16.656204 -376391.78 74.883487
221 0 -16.661587 0 -16.661587 -374591.09 74.860519
222 0 -16.666943 0 -16.666943 -372787.22 74.837555
223 0 -16.672272 0 -16.672272 -370980.19 74.814597
224 0 -16.677574 0 -16.677574 -369169.99 74.791643
225 0 -16.682849 0 -16.682849 -367356.61 74.768693
226 0 -16.688097 0 -16.688097 -365540.06 74.745749
227 0 -16.693318 0 -16.693318 -363720.35 74.722809
228 0 -16.698511 0 -16.698511 -361897.46 74.699874
229 0 -16.703678 0 -16.703678 -360071.39 74.676943
230 0 -16.708817 0 -16.708817 -358242.16 74.654018
231 0 -16.713929 0 -16.713929 -356409.75 74.631096
232 0 -16.719014 0 -16.719014 -354574.18 74.60818
233 0 -16.724071 0 -16.724071 -352735.43 74.585268
234 0 -16.729101 0 -16.729101 -350893.5 74.562361
235 0 -16.734104 0 -16.734104 -349048.41 74.539459
236 0 -16.739079 0 -16.739079 -347200.14 74.516561
237 0 -16.744027 0 -16.744027 -345348.7 74.493668
238 0 -16.748947 0 -16.748947 -343494.08 74.47078
239 0 -16.75384 0 -16.75384 -341636.3 74.447897
240 0 -16.758705 0 -16.758705 -339775.34 74.425018
241 0 -16.763543 0 -16.763543 -337911.2 74.402143
242 0 -16.768353 0 -16.768353 -336043.9 74.379274
243 0 -16.773135 0 -16.773135 -334173.42 74.356409
244 0 -16.77789 0 -16.77789 -332299.77 74.333549
245 0 -16.782617 0 -16.782617 -330422.94 74.310693
246 0 -16.787316 0 -16.787316 -328542.94 74.287843
247 0 -16.791988 0 -16.791988 -326659.76 74.264997
248 0 -16.796631 0 -16.796631 -324773.42 74.242155
249 0 -16.801247 0 -16.801247 -322883.9 74.219319
250 0 -16.805835 0 -16.805835 -320991.2 74.196487
251 0 -16.810395 0 -16.810395 -319095.33 74.173659
252 0 -16.814926 0 -16.814926 -317196.29 74.150836
253 0 -16.81943 0 -16.81943 -315294.07 74.128019
254 0 -16.823906 0 -16.823906 -313388.68 74.105205
255 0 -16.828354 0 -16.828354 -311480.11 74.082397
256 0 -16.832774 0 -16.832774 -309568.37 74.059593
257 0 -16.837165 0 -16.837165 -307653.46 74.036793
258 0 -16.841529 0 -16.841529 -305735.37 74.013999
259 0 -16.845864 0 -16.845864 -303814.11 73.991209
260 0 -16.850171 0 -16.850171 -301889.67 73.968424
261 0 -16.85445 0 -16.85445 -299962.06 73.945643
262 0 -16.8587 0 -16.8587 -298031.27 73.922867
263 0 -16.862922 0 -16.862922 -296097.31 73.900096
264 0 -16.867116 0 -16.867116 -294160.18 73.87733
265 0 -16.871281 0 -16.871281 -292219.87 73.854568
266 0 -16.875418 0 -16.875418 -290276.38 73.831811
267 0 -16.879527 0 -16.879527 -288329.73 73.809058
268 0 -16.883606 0 -16.883606 -286379.89 73.78631
269 0 -16.887658 0 -16.887658 -284426.89 73.763567
270 0 -16.891681 0 -16.891681 -282470.7 73.740829
271 0 -16.895675 0 -16.895675 -280511.35 73.718095
272 0 -16.89964 0 -16.89964 -278548.82 73.695366
273 0 -16.903577 0 -16.903577 -276583.11 73.672641
274 0 -16.907485 0 -16.907485 -274614.23 73.649922
275 0 -16.911365 0 -16.911365 -272642.17 73.627206
276 0 -16.915215 0 -16.915215 -270666.95 73.604496
277 0 -16.919037 0 -16.919037 -268688.54 73.58179
278 0 -16.92283 0 -16.92283 -266706.96 73.559089
279 0 -16.926594 0 -16.926594 -264722.21 73.536393
280 0 -16.930329 0 -16.930329 -262734.28 73.513701
281 0 -16.934036 0 -16.934036 -260743.18 73.491014
282 0 -16.937713 0 -16.937713 -258748.91 73.468332
283 0 -16.941361 0 -16.941361 -256751.46 73.445654
284 0 -16.944981 0 -16.944981 -254750.83 73.422981
285 0 -16.948571 0 -16.948571 -252747.04 73.400312
286 0 -16.952132 0 -16.952132 -250740.06 73.377649
287 0 -16.955664 0 -16.955664 -248729.92 73.35499
288 0 -16.959166 0 -16.959166 -246716.6 73.332335
289 0 -16.96264 0 -16.96264 -244700.1 73.309685
290 0 -16.966084 0 -16.966084 -242680.43 73.28704
291 0 -16.969499 0 -16.969499 -240657.59 73.2644
292 0 -16.972885 0 -16.972885 -238631.58 73.241764
293 0 -16.976241 0 -16.976241 -236602.39 73.219133
294 0 -16.979568 0 -16.979568 -234570.02 73.196507
295 0 -16.982866 0 -16.982866 -232534.49 73.173885
296 0 -16.986134 0 -16.986134 -230495.78 73.151268
297 0 -16.989373 0 -16.989373 -228453.89 73.128655
298 0 -16.992582 0 -16.992582 -226408.84 73.106047
299 0 -16.995762 0 -16.995762 -224360.61 73.083444
300 0 -16.998912 0 -16.998912 -222309.2 73.060846
301 0 -17.002033 0 -17.002033 -220254.63 73.038252
302 0 -17.005123 0 -17.005123 -218196.88 73.015663
303 0 -17.008185 0 -17.008185 -216135.96 72.993078
304 0 -17.011216 0 -17.011216 -214071.86 72.970499
305 0 -17.014218 0 -17.014218 -212004.59 72.947923
306 0 -17.01719 0 -17.01719 -209934.15 72.925353
307 0 -17.020132 0 -17.020132 -207860.54 72.902787
308 0 -17.023045 0 -17.023045 -205783.76 72.880226
309 0 -17.025927 0 -17.025927 -203703.8 72.857669
310 0 -17.02878 0 -17.02878 -201620.65 72.835117
311 0 -17.031602 0 -17.031602 -199534.29 72.81257
312 0 -17.034395 0 -17.034395 -197444.73 72.790028
313 0 -17.037158 0 -17.037158 -195351.97 72.76749
314 0 -17.039891 0 -17.039891 -193256 72.744956
315 0 -17.042593 0 -17.042593 -191156.83 72.722428
316 0 -17.045266 0 -17.045266 -189054.45 72.699904
317 0 -17.047908 0 -17.047908 -186948.87 72.677384
318 0 -17.050521 0 -17.050521 -184840.09 72.65487
319 0 -17.053103 0 -17.053103 -182728.1 72.63236
320 0 -17.055655 0 -17.055655 -180612.9 72.609854
321 0 -17.058176 0 -17.058176 -178494.51 72.587354
322 0 -17.060668 0 -17.060668 -176372.9 72.564857
323 0 -17.063129 0 -17.063129 -174248.1 72.542366
324 0 -17.065559 0 -17.065559 -172120.08 72.519879
325 0 -17.06796 0 -17.06796 -169988.87 72.497397
326 0 -17.070329 0 -17.070329 -167854.45 72.47492
327 0 -17.072669 0 -17.072669 -165716.82 72.452447
328 0 -17.074978 0 -17.074978 -163576 72.429979
329 0 -17.077256 0 -17.077256 -161431.96 72.407515
330 0 -17.079504 0 -17.079504 -159284.73 72.385056
331 0 -17.081721 0 -17.081721 -157134.28 72.362602
332 0 -17.083908 0 -17.083908 -154980.64 72.340152
333 0 -17.086064 0 -17.086064 -152823.79 72.317707
334 0 -17.088189 0 -17.088189 -150663.73 72.295267
335 0 -17.090284 0 -17.090284 -148500.48 72.272831
336 0 -17.092348 0 -17.092348 -146334.01 72.2504
337 0 -17.094381 0 -17.094381 -144164.35 72.227974
338 0 -17.096383 0 -17.096383 -141991.48 72.205552
339 0 -17.098355 0 -17.098355 -139815.4 72.183135
340 0 -17.100295 0 -17.100295 -137636.12 72.160722
341 0 -17.102205 0 -17.102205 -135453.64 72.138315
342 0 -17.104084 0 -17.104084 -133267.96 72.115911
343 0 -17.105932 0 -17.105932 -131079.07 72.093513
344 0 -17.107749 0 -17.107749 -128886.97 72.071119
345 0 -17.109534 0 -17.109534 -126691.68 72.04873
346 0 -17.111289 0 -17.111289 -124493.17 72.026345
347 0 -17.113013 0 -17.113013 -122291.47 72.003965
348 0 -17.114705 0 -17.114705 -120086.56 71.98159
349 0 -17.116366 0 -17.116366 -117878.45 71.959219
350 0 -17.117997 0 -17.117997 -115667.14 71.936853
351 0 -17.119595 0 -17.119595 -113452.62 71.914491
352 0 -17.121163 0 -17.121163 -111234.9 71.892134
353 0 -17.1227 0 -17.1227 -109013.98 71.869782
354 0 -17.124205 0 -17.124205 -106789.85 71.847435
355 0 -17.125678 0 -17.125678 -104562.52 71.825092
356 0 -17.127121 0 -17.127121 -102331.99 71.802753
357 0 -17.128531 0 -17.128531 -100098.26 71.78042
358 0 -17.129911 0 -17.129911 -97861.318 71.758091
359 0 -17.131259 0 -17.131259 -95621.179 71.735766
360 0 -17.132575 0 -17.132575 -93377.837 71.713446
361 0 -17.13386 0 -17.13386 -91131.293 71.691131
362 0 -17.135114 0 -17.135114 -88881.547 71.668821
363 0 -17.136335 0 -17.136335 -86628.599 71.646515
364 0 -17.137526 0 -17.137526 -84372.449 71.624214
365 0 -17.138684 0 -17.138684 -82113.098 71.601917
366 0 -17.139811 0 -17.139811 -79850.545 71.579625
367 0 -17.140906 0 -17.140906 -77584.79 71.557338
368 0 -17.141969 0 -17.141969 -75315.834 71.535055
369 0 -17.143 0 -17.143 -73043.677 71.512777
370 0 -17.144 0 -17.144 -70768.319 71.490503
371 0 -17.144968 0 -17.144968 -68489.76 71.468234
372 0 -17.145904 0 -17.145904 -66208 71.44597
373 0 -17.146808 0 -17.146808 -63923.04 71.423711
374 0 -17.14768 0 -17.14768 -61634.879 71.401456
375 0 -17.14852 0 -17.14852 -59343.518 71.379205
376 0 -17.149328 0 -17.149328 -57048.956 71.356959
377 0 -17.150104 0 -17.150104 -54751.195 71.334718
378 0 -17.150848 0 -17.150848 -52450.234 71.312482
379 0 -17.15156 0 -17.15156 -50146.073 71.29025
380 0 -17.152239 0 -17.152239 -47838.712 71.268023
381 0 -17.152887 0 -17.152887 -45528.152 71.2458
382 0 -17.153502 0 -17.153502 -43214.394 71.223582
383 0 -17.154085 0 -17.154085 -40897.436 71.201369
384 0 -17.154636 0 -17.154636 -38577.279 71.17916
385 0 -17.155155 0 -17.155155 -36253.924 71.156956
386 0 -17.155641 0 -17.155641 -33927.37 71.134756
387 0 -17.156095 0 -17.156095 -31597.618 71.112561
388 0 -17.156516 0 -17.156516 -29264.668 71.090371
389 0 -17.156905 0 -17.156905 -26928.52 71.068185
390 0 -17.157262 0 -17.157262 -24589.174 71.046004
391 0 -17.157586 0 -17.157586 -22246.631 71.023828
392 0 -17.157878 0 -17.157878 -19900.891 71.001656
393 0 -17.158137 0 -17.158137 -17551.954 70.979488
394 0 -17.158363 0 -17.158363 -15199.82 70.957326
395 0 -17.158557 0 -17.158557 -12844.489 70.935168
396 0 -17.158719 0 -17.158719 -10485.962 70.913014
397 0 -17.158847 0 -17.158847 -8124.2386 70.890866
398 0 -17.158943 0 -17.158943 -5759.3193 70.868722
399 0 -17.159006 0 -17.159006 -3391.2043 70.846582
400 0 -17.159037 0 -17.159037 -1019.8937 70.824447
401 0 -17.15904 0 -17.15904 -0.27024801 70.81494
402 0 -17.15904 0 -17.15904 -1.5603989e-05 70.814937
403 0 -17.15904 0 -17.15904 3.4008317e-09 70.814937
404 0 -17.15904 0 -17.15904 3.2564181e-09 70.814937
405 0 -17.15904 0 -17.15904 2.3373282e-09 70.814937
406 0 -17.15904 0 -17.15904 -9.4785189e-10 70.814937
407 0 -17.15904 0 -17.15904 -1.238317e-10 70.814937
408 0 -17.15904 0 -17.15904 -1.6373305e-10 70.814937
409 0 -17.15904 0 -17.15904 9.4335021e-11 70.814937
410 0 -17.15904 0 -17.15904 -4.6124262e-10 70.814937
411 0 -17.15904 0 -17.15904 3.3870854e-09 70.814937
412 0 -17.15904 0 -17.15904 -1.3077808e-09 70.814937
413 0 -17.15904 0 -17.15904 -2.0897946e-09 70.814937
414 0 -17.15904 0 -17.15904 3.3870854e-09 70.814937
415 0 -17.15904 0 -17.15904 -1.5433105e-09 70.814937
416 0 -17.15904 0 -17.15904 -2.0629924e-09 70.814937
417 0 -17.15904 0 -17.15904 3.3870854e-09 70.814937
418 0 -17.15904 0 -17.15904 -1.8838344e-09 70.814937
419 0 -17.15904 0 -17.15904 3.5418108e-12 70.814937
420 0 -17.15904 0 -17.15904 3.5418108e-12 70.814937
Loop time of 0.0241749 on 1 procs for 420 steps with 4 atoms
95.2% CPU use with 1 MPI tasks x 1 OpenMP threads
Minimization stats:
Stopping criterion = linesearch alpha is zero
Energy initial, next-to-last, final =
-14.8850317162759 -17.1590398301299 -17.1590398301299
Force two-norm initial, final = 59.993295 1.0284199e-14
Force max component initial, final = 34.637145 6.9392045e-15
Final line search alpha, max atom move = 0.5 3.4696022e-15
Iterations, force evaluations = 420 440
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.018593 | 0.018593 | 0.018593 | 0.0 | 76.91
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0.00049936 | 0.00049936 | 0.00049936 | 0.0 | 2.07
Output | 0.0027008 | 0.0027008 | 0.0027008 | 0.0 | 11.17
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 0.002382 | | | 9.85
Nlocal: 4 ave 4 max 4 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 168 ave 168 max 168 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 108 ave 108 max 108 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 216 ave 216 max 216 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 216
Ave neighs/atom = 54
Neighbor list builds = 0
Dangerous builds = 0
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (4*vol/atoms)^0.3333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Per MPI rank memory allocation (min/avg/max) = 3.715 | 3.715 | 3.715 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
420 0 -17.15904 0 -17.15904 3.8874134e-10 70.814937
Loop time of 1.22e-06 on 1 procs for 0 steps with 4 atoms
163.9% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.22e-06 | | |100.00
Nlocal: 4 ave 4 max 4 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 360 ave 360 max 360 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 108 ave 108 max 108 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 216 ave 216 max 216 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 216
Ave neighs/atom = 54
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.28975995753248 eV/atom
print "Reference cohesive energy: -4.289 eV/atom"
Reference cohesive energy: -4.289 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 17.7037343507869 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 4.13721691666803 A
print "Reference lattice constant 4.137 A"
Reference lattice constant 4.137 A
print "===================================================="
====================================================
#dump 1 all custom 1 fcc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00
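For reference, the commands echoed in the log above amount to a small standalone input deck for this box/relax test. The following is a minimal sketch reconstructed only from those echoed lines; it assumes LAMMPS was built with the package providing pair_style meam/sw/spline and that the potential file Si.b.meam.sw.spline is present in the working directory.

    # minimal sketch reconstructed from the commands echoed in the log above
    units           metal
    boundary        p p p
    atom_style      atomic
    lattice         fcc 4.309793856093661
    region          box block 0 1 0 1 0 1
    create_box      1 box
    create_atoms    1 box
    pair_style      meam/sw/spline
    pair_coeff      * * Si.b.meam.sw.spline Si
    mass            * 28.085
    # relax the cell to zero pressure, then report per-atom quantities
    fix             relax all box/relax aniso 0
    thermo          1
    minimize        0 0 10000 100000
    variable        cohesive_energy equal pe/atoms
    variable        atmVol equal vol/atoms
    variable        aLatt equal (4*vol/atoms)^0.3333333333
    run             0
    print           "Calculated cohesive energy: ${cohesive_energy} eV/atom"
    print           "Atomic volume ${atmVol} A^3"
    print           "Lattice constant ${aLatt} A"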

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,100 @@
LAMMPS (3 Nov 2022)
OMP_NUM_THREADS environment is not set. Defaulting to 1 thread. (src/comm.cpp:98)
using 1 OpenMP thread(s) per MPI task
# Si fcc phase
units metal
boundary p p p
atom_style atomic
lattice sc 2.612
Lattice spacing in x,y,z = 2.612 2.612 2.612
region box block 0 1 0 1 0 1
create_box 1 box
Created orthogonal box = (0 0 0) to (2.612 2.612 2.612)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 1 atoms
using lattice units in orthogonal box = (0 0 0) to (2.612 2.612 2.612)
create_atoms CPU = 0.000 seconds
pair_style meam/sw/spline
pair_coeff * * Si.b.meam.sw.spline Si
Reading meam/sw/spline potential file Si.b.meam.sw.spline with DATE: 2012-10-26
mass * 28.085
variable cohesive_energy equal pe/atoms
variable atmVol equal vol/atoms
variable aLatt equal (vol/atoms)^0.3333333333
run 0
WARNING: No fixes with time integration, atoms won't move (src/verlet.cpp:60)
Neighbor list info ...
update: every = 1 steps, delay = 0 steps, check = yes
max neighbors/atom: 2000, page size: 100000
master list distance cutoff = 6.4
ghost atom cutoff = 6.4
binsize = 3.2, bins = 1 1 1
2 neighbor lists, perpetual/occasional/extra = 2 0 0
(1) pair meam/sw/spline, perpetual
attributes: full, newton on
pair build: full/bin/atomonly
stencil: full/bin/3d
bin: standard
(2) pair meam/sw/spline, perpetual, half/full from (1)
attributes: half, newton on
pair build: halffull/newton
stencil: none
bin: none
Per MPI rank memory allocation (min/avg/max) = 3.744 | 3.744 | 3.744 Mbytes
Step Temp E_pair E_mol TotEng Press
0 0 -4.3368757 0 -4.3368757 -41.869135
Loop time of 1.328e-06 on 1 procs for 0 steps with 1 atoms
75.3% CPU use with 1 MPI tasks x 1 OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0 | 0 | 0 | 0.0 | 0.00
Neigh | 0 | 0 | 0 | 0.0 | 0.00
Comm | 0 | 0 | 0 | 0.0 | 0.00
Output | 0 | 0 | 0 | 0.0 | 0.00
Modify | 0 | 0 | 0 | 0.0 | 0.00
Other | | 1.328e-06 | | |100.00
Nlocal: 1 ave 1 max 1 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 215 ave 215 max 215 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 40 ave 40 max 40 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 80 ave 80 max 80 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 80
Ave neighs/atom = 80
Neighbor list builds = 0
Dangerous builds = 0
print "===================================================="
====================================================
print "Calculated cohesive energy: ${cohesive_energy} eV/atom"
Calculated cohesive energy: -4.33687572290858 eV/atom
print "Reference cohesive energy: -4.337 eV/atom"
Reference cohesive energy: -4.337 eV/atom
print "Atomic volume ${atmVol} A^3"
Atomic volume 17.820484928 A^3
print "Lattice constant ${aLatt} A"
Lattice constant 2.61199999974922 A
print "Reference lattice constant 2.612 A"
Reference lattice constant 2.612 A
print "===================================================="
====================================================
#dump 1 all custom 1 sc.dump id type x y z fx fy fz
#run 0
Total wall time: 0:00:00

Some files were not shown because too many files have changed in this diff