Using distributed grids
=======================

.. versionadded:: 22Dec2022

LAMMPS has internal capabilities to create uniformly spaced grids
which overlay the simulation domain. For 2d and 3d simulations these
are 2d and 3d grids respectively. Conceptually a grid can be thought
of as a collection of grid cells. Each grid cell can store one or
more values (data).

The grid cells and data they store are distributed across processors.
Each processor owns the grid cells (and data) whose center points lie
within the spatial subdomain of the processor. If needed for its
computations, a processor may also store ghost grid cells with their
data.
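
For example, a command which uses a grid requests the number of grid
cells in each dimension and LAMMPS partitions those cells across
processors. Here is a minimal input script sketch, assuming the
:doc:`fix ave/grid <fix_ave_grid>` argument order shown (Nevery,
Nrepeat, Nfreq, then Nx, Ny, Nz, then one or more values) and that
*density/number* is a supported value; see its doc page for the
actual syntax and options.

.. code-block:: LAMMPS

   # overlay a 10x10x10 grid on the simulation box and time-average
   # the per-atom number density into each grid cell; each processor
   # owns the grid cells whose centers lie within its subdomain
   fix dens all ave/grid 10 10 100 10 10 10 density/number
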
Distributed grids can overlay orthogonal or triclinic simulation
boxes; see the :doc:`Howto triclinic <Howto_triclinic>` doc page for
an explanation of the latter. For a triclinic box, the grid cell
shape conforms to the shape of the simulation domain,
e.g. parallelograms instead of rectangles in 2d.

If the box size or shape changes during a simulation, the grid
changes with it, so that it always overlays the entire simulation
domain. For a non-periodic dimension, the grid size in that dimension
matches the box size, as set by the :doc:`boundary <boundary>`
command for fixed or shrink-wrapped boundaries.

If load-balancing is invoked by the :doc:`balance <balance>` or
:doc:`fix balance <fix_balance>` commands, then the subdomain owned
by a processor can change, which may also change which grid cells it
owns.

Post-processing and visualization of grid cell data can be enabled by
the :doc:`dump grid <dump>`, :doc:`dump grid/vtk <dump>`, and
:doc:`dump image <dump_image>` commands. The latter has an optional
*grid* keyword. The `OVITO visualization tool
<https://www.ovito.org>`_ also plans (as of Nov 2022) to add support
for visualizing grid cell data (along with atoms) using :doc:`dump
grid <dump>` output files as input.
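
As a quick sketch (with hypothetical IDs and file name), per-grid data
produced by a fix could be written out with the :doc:`dump grid <dump>`
command as follows; the f_ID:gname:dname style grid reference used as
the dump argument is explained below, and the grid and data names each
command assigns are listed on its doc page.

.. code-block:: LAMMPS

   # write per-grid data produced by a (hypothetical) fix with ID "dens"
   # every 100 steps in LAMMPS dump format; "grid" and "data" stand in
   # for the grid and data names the fix actually assigns
   dump gdump all grid 100 tmp.grid f_dens:grid:data
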

.. note::

   For developers, distributed grids are implemented within the code via
   two classes: Grid2d and Grid3d. These partition the grid across
   processors and have methods which allow forward and reverse
   communication of ghost grid data as well as load balancing. If you
   write a new compute or fix which needs a distributed grid, these are
   the classes to look at. A new pair style could use a distributed
   grid by having a fix define it. Please see the section on
   :doc:`using distributed grids within style classes <Developer_grid>`
   for a detailed description.

----------

These are the commands which currently define or use distributed
grids:

* :doc:`fix ttm/grid <fix_ttm>` - store electron temperature on grid
* :doc:`fix ave/grid <fix_ave_grid>` - time average per-atom or per-grid values
* :doc:`compute property/grid <compute_property_grid>` - generate grid IDs and coords
* :doc:`dump grid <dump>` - output per-grid values in LAMMPS format
* :doc:`dump grid/vtk <dump>` - output per-grid values in VTK format
* :doc:`dump image grid <dump_image>` - include colored grid in output images
* :doc:`pair_style amoeba <pair_amoeba>` - FFT grids
* :doc:`kspace_style pppm <kspace_style>` (and variants) - FFT grids
* :doc:`kspace_style msm <kspace_style>` (and variants) - MSM grids

The grids used by the :doc:`kspace_style <kspace_style>` commands
cannot be referenced by an input script. However, the grids and data
created and used by the other commands can be.

A compute or fix command may create one or more grids (of different
sizes). Each grid can store one or more data fields. A data field
can be a single value per grid point (per-grid vector) or multiple
values per grid point (per-grid array). See the :doc:`Howto output
<Howto_output>` doc page for an explanation of how per-grid data can
be generated by some commands and used by other commands.

A command accesses grid data from a compute or fix using a *grid
reference* with the following syntax:

* c_ID:gname:dname
* c_ID:gname:dname[I]
* f_ID:gname:dname
* f_ID:gname:dname[I]

The prefix "c\_" or "f\_" refers to the ID of the compute or fix; gname is
the name of the grid, which is assigned by the compute or fix; dname is
the name of the data field, which is also assigned by the compute or
fix.

If the data field is a per-grid vector (one value per grid point),
then no brackets are used to access the values. If the data field is
a per-grid array (multiple values per grid point), then brackets are
used to specify column I of the array. I ranges from 1 to Ncol
inclusive, where Ncol is the number of columns in the array, as
defined by the compute or fix.

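
Here is a brief sketch of both forms with hypothetical IDs; the grid
name (grid) and data name (data) are placeholders for the names each
command documents, and the attributes passed to
:doc:`compute property/grid <compute_property_grid>` are only examples.

.. code-block:: LAMMPS

   # per-grid array with 4 columns (id, ix, iy, iz) on an 8x8x8 grid
   compute cpg all property/grid 8 8 8 id ix iy iz

   # per-grid vector: one time-averaged value per grid cell
   fix dens all ave/grid 10 10 100 8 8 8 density/number

   # a vector is referenced without brackets, an array column with [I]
   dump gdump all grid 100 tmp.grid c_cpg:grid:data[1] c_cpg:grid:data[2] f_dens:grid:data
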
Currently, there are no per-grid variables implemented in LAMMPS. We
may add this feature at some point.