The return value of `lammps_get_last_error_message` now encodes whether the last
error was recoverable or should cause an `MPI_Abort`. The driving code is
responsible for reacting to the error and calling `MPI_Abort` on the
communicator it passed to the LAMMPS instance.
Thermo data of the last run is now accessible through the `last_run.thermo`
property. This is a dictionary containing the data columns of the thermo output.
All run data is kept as a list and can be found in the `runs` property.
See issue #144
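A minimal sketch of how this could be used from PyLammps; the input script name is a placeholder and the available keys depend on your thermo settings:
```
from lammps import PyLammps

L = PyLammps()
L.file("in.melt")           # hypothetical input script
L.run(100)

thermo = L.last_run.thermo  # dictionary of thermo data columns for the last run
print(list(thermo.keys()))  # column names as in the thermo header, e.g. Step, Temp, ...
print(len(L.runs))          # all runs are kept as a list
```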
This implements the feature requested in issue #145. The `write_script`
method now provides a way to dump all issued commands into a
LAMMPS input script file.
Note: this also dumps all commands which are issued indirectly by PyLammps.
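A minimal usage sketch, assuming `write_script` takes the output file name as its argument; the commands and file name are only for illustration:
```
from lammps import PyLammps

L = PyLammps()
L.units("lj")
L.atom_style("atomic")
# ... set up and run the system ...

# dump every command issued so far (including indirect ones) into a script file
L.write_script("in.recorded")
```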
This allows checking if the LAMMPS binary/library was compiled with PNG, JPEG,
FFMPEG, GZIP, or exceptions support.
Usage:
```
is_available(feature,gzip)
is_available(feature,png)
is_available(feature,jpeg)
is_available(feature,ffmpeg)
is_available(feature,exceptions)
```
Adds the ability to list all available styles in LAMMPS with:
```
info styles
```
Each style can also be printed separately using one of the following:
```
info atom_styles
info integrate_styles
info minimize_styles
info pair_styles
info bond_styles
info angle_styles
info dihedral_styles
info improper_styles
info kspace_styles
info fix_styles
info compute_styles
info region_styles
info dump_styles
```
If we use the Google Custom Search API, we do not need to keep the
generated searchindex.js file anymore. We can also safely remove
the _sources directory for good.
Since these get generated during each Sphinx build, additional
steps have been added to the Makefile to get rid of them. They
are also added to .gitignore to avoid committing them by accident.
- tables are now dimensioned by nelements instead of ntypes
- tables are only created if used
- correctly identify max size of table
- add test for illegal cutoff for tabulation
- allocated memory for tables is accounted for
- add example input using 16-bit tables
- remove unused or unneeded class members
- make the code compatible with per-atom masses
- test for and abort in case of an invalid group mass
(cherry picked from commit e017b33898)
On Mac OS X there is no sha1sum, so to simplify doc generation on those systems
a Python script is used instead to generate a unique string from the repository
path.
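A minimal sketch of what such a script might look like; the actual script in the repository may differ:
```
# hypothetical stand-in for sha1sum: hash the repository path passed as argv[1]
import hashlib
import sys

path = sys.argv[1]
print(hashlib.sha1(path.encode("utf-8")).hexdigest())
```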
Assigning atom->maxspecial will not work, since it will be reset, e.g. when reading from a data file that doesn't have any special neighbors.
Instead we need to set force->special_extra, so the setting is preserved.
These can be activated using the -DLAMMPS_EXCEPTIONS compiler flag.
It has no effect for regular execution. However, when LAMMPS is used
as a library, any error triggered by an issued command is captured as
an exception and its error message is saved. This can be queried using the
lammps_has_error() and lammps_get_last_error_message() functions.
The Python wrapper checks these in order to rethrow such errors
as Python exceptions. See issue #146.
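A minimal sketch of how this surfaces on the Python side, assuming LAMMPS was built as a shared library with -DLAMMPS_EXCEPTIONS; the invalid command is only for illustration:
```
from lammps import lammps

lmp = lammps()
try:
    lmp.command("this_is_not_a_valid_command")
except Exception as err:
    # the wrapper detected the captured error and rethrew it as a Python exception
    print("LAMMPS reported:", err)
```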
(cherry picked from commit 6c154bb0b67a13d38968bc42d31013b97f87db75)
Submitted by Steven E. Strong via GitHub
Contributing authors: Steven E. Strong and Joel D. Eaves Joel.Eaves@Colorado.edu
This branch implements Gaussian dynamics (GD), which is a method to do
nonequilibrium molecular dynamics simulations of steady-state flow. See
http://dx.doi.org/10.1021/acs.jpclett.6b00748. It is simple to implement
and derives rigorously from Gauss's principle of least constraint.
(cherry picked from commit 75929ee01b)

The Benchmark page (http://lammps.sandia.gov/bench.html) of the LAMMPS
web site gives performance results for the various accelerator
packages discussed in Section 5.2, for several of the standard LAMMPS
benchmark problems, as a function of problem size and number of
compute nodes, on different hardware platforms.

5.1. Measuring performance

Before trying to make your simulation run faster, you should
understand how it currently performs and where the bottlenecks are.

The best way to do this is to run your system (actual number of
atoms) for a modest number of timesteps (say 100 steps) on several
different processor counts, including a single processor if possible.
Do this for an equilibrium version of your system, so that the
100-step timings are representative of a much longer run. There is
typically no need to run for 1000s of timesteps to get accurate
timings; you can simply extrapolate from short runs.

For the set of runs, look at the timing data printed to the screen and
log file at the end of each LAMMPS run. This section of the manual
(Section_start.html#start-8) has an overview.

Running on one (or a few) processors should give a good estimate of
the serial performance and what portions of the timestep are taking
the most time. Running the same problem on a few different processor
counts should give an estimate of parallel scalability. I.e. if the
simulation runs 16x faster on 16 processors, it's 100% parallel
efficient; if it runs 8x faster on 16 processors, it's 50% efficient.

The most important data to look at in the timing info is the timing
breakdown and relative percentages. For example, trying different
options for speeding up the long-range solvers will have little impact
if they only consume 10% of the run time. If the pairwise time is
dominating, you may want to look at GPU or OMP versions of the pair
style, as discussed below. Comparing how the percentages change as
you increase the processor count gives you a sense of how different
operations within the timestep are scaling. Note that if you are
running with a Kspace solver, there is additional output on the
breakdown of the Kspace time. For PPPM, this includes the fraction
spent on FFTs, which can be communication intensive.

Other important details in the timing info are the histograms of
atom counts and neighbor counts. If these vary widely across
processors, you have a load-imbalance issue. This often results in
inaccurate relative timing data, because processors have to wait when
communication occurs for other processors to catch up. Thus the
reported times for "Communication" or "Other" may be higher than they
really are, due to load imbalance. If this is an issue, you can
uncomment the MPI_Barrier() lines in src/timer.cpp, and recompile
LAMMPS, to obtain synchronized timings.

5.2. General strategies

Note: this section 5.2 is still a work in progress.

Here is a list of general ideas for improving simulation performance.
Most of them are only applicable to certain models and certain
bottlenecks in the current performance, so let the timing data you
generate be your guide. It is hard, if not impossible, to predict how
much difference these options will make, since it is a function of
problem size, number of processors used, and your machine. There is
no substitute for identifying performance bottlenecks, and trying out
various options.

- rRESPA
- 2-FFT PPPM
- staggered PPPM
- single vs double PPPM
- partial charge PPPM
- verlet/split run style
- processors command for proc layout and numa layout
- load-balancing: balance and fix balance

2-FFT PPPM, also called analytic differentiation or ad PPPM, uses
2 FFTs instead of the 4 FFTs used by the default ik differentiation
PPPM. However, 2-FFT PPPM also requires a slightly larger mesh size to
achieve the same accuracy as 4-FFT PPPM. For problems where the FFT
cost is the performance bottleneck (typically large problems running
on many processors), 2-FFT PPPM may be faster than 4-FFT PPPM.

Staggered PPPM performs calculations using two different meshes, one
shifted slightly with respect to the other. This can reduce force
aliasing errors and increase the accuracy of the method, but also
doubles the amount of work required. For high relative accuracy, using
staggered PPPM allows one to halve the mesh size in each dimension as
compared to regular PPPM, which can give around a 4x speedup in the
kspace time. However, for low relative accuracy, using staggered PPPM
gives little benefit and can be up to 2x slower in the kspace
time. For example, the rhodopsin benchmark was run on a single
processor, and results for kspace time vs. relative accuracy for the
different methods are shown in the figure below. For this system,
staggered PPPM (using ik differentiation) becomes useful when using a
relative accuracy of slightly greater than 1e-5 and above.

5.3. Packages with optimized styles

Accelerated versions of various pair styles, fixes, computes, and
other commands have been added to LAMMPS, which will typically run
faster than the standard non-accelerated versions. Some require
appropriate hardware to be present on your system, e.g. GPUs or Intel
Xeon Phi coprocessors.

All of these commands are in packages provided with LAMMPS. An
overview of packages is given in Section_packages.html. These are the
accelerator packages currently in LAMMPS, either as standard or user
packages:

The first 4 steps can be done as a single command, using the
src/Make.py tool. The Make.py tool is discussed in Section 2.4 of the
manual, and its use is illustrated in the individual accelerator
sections. Typically these steps only need to be done once, to create
an executable that uses one or more accelerator packages.

The last 4 steps can all be done from the command line when LAMMPS is
launched, without changing your input script, as illustrated in the
individual accelerator sections. Or you can add package and suffix
commands to your input script.

Warning: With a few exceptions, you can build a single LAMMPS
executable with all its accelerator packages installed. Note that the
USER-INTEL and KOKKOS packages require you to choose one of their
options when building, i.e. CPU or Phi for USER-INTEL, and OpenMP,
Cuda, or Phi for KOKKOS. Here are the exceptions; you cannot build a
single executable with:

- both the USER-INTEL Phi and KOKKOS Phi options
- the USER-INTEL Phi or KOKKOS Phi option, and either the USER-CUDA or GPU packages

See the examples/accelerate/README and make.list files for sample
Make.py commands that build LAMMPS with any or all of the accelerator
packages. As an example, here is a command that builds with all the
GPU-related packages installed (USER-CUDA, GPU, KOKKOS with Cuda),
including settings to build the needed auxiliary USER-CUDA and GPU
libraries.

5.4. Comparison of various accelerator packages

Note: this section still needs to be re-worked with additional KOKKOS
and USER-INTEL information.

The next section compares and contrasts the various accelerator
options, since there are multiple ways to perform OpenMP threading,
run on GPUs, and run on Intel Xeon Phi coprocessors.

All 3 of these packages accelerate a LAMMPS calculation using NVIDIA
hardware, but they do it in different ways.

As a consequence, for a particular simulation on specific hardware,
one package may be faster than the other. We give guidelines below,
but the best way to determine which package is faster for your input
script is to try both of them on your machine. See the benchmarking
section below for examples where this has been done.

Guidelines for using each package optimally:

- The GPU package allows you to assign multiple CPUs (cores) to a single
  GPU (a common configuration for "hybrid" nodes that contain multicore
  CPU(s) and GPU(s)) and works effectively in this mode. The USER-CUDA
  package does not allow this; you can only use one CPU per GPU.
- The GPU package moves per-atom data (coordinates, forces)
  back-and-forth between the CPU and GPU every timestep. The USER-CUDA
  package only does this on timesteps when a CPU calculation is required
  (e.g. to invoke a fix or compute that is non-GPU-ized). Hence, if you
  can formulate your input script to only use GPU-ized fixes and
  computes, and avoid doing I/O too often (thermo output, dump file
  snapshots, restart files), then the data transfer cost of the
  USER-CUDA package can be very low, causing it to run faster than the
  GPU package.
- The GPU package is often faster than the USER-CUDA package, if the
  number of atoms per GPU is "small". The crossover point, in terms of
  atoms/GPU at which the USER-CUDA package becomes faster, depends
  strongly on the pair style. For example, for a simple Lennard-Jones
  system the crossover (in single precision) is often about 50K-100K
  atoms per GPU. When performing double precision calculations the
  crossover point can be significantly smaller.
- Both packages compute bonded interactions (bonds, angles, etc) on the
  CPU. This means a model with bonds will force the USER-CUDA package
  to transfer per-atom data back-and-forth between the CPU and GPU every
  timestep. If the GPU package is running with several MPI processes
  assigned to one GPU, the cost of computing the bonded interactions is
  spread across more CPUs and hence the GPU package can run faster.
- When using the GPU package with multiple CPUs assigned to one GPU, its
  performance depends to some extent on high bandwidth between the CPUs
  and the GPU. Hence its performance is affected if full 16 PCIe lanes
  are not available for each GPU. In HPC environments this can be the
  case if S2050/70 servers are used, where two devices generally share
  one PCIe 2.0 16x slot. Also many multi-GPU mainboards do not provide
  full 16 lanes to each of the PCIe 2.0 16x slots.

Differences between the two packages:

- The GPU package accelerates only pair force, neighbor list, and PPPM
  calculations. The USER-CUDA package currently supports a wider range
  of pair styles and can also accelerate many fix styles and some
  compute styles, as well as neighbor list and PPPM calculations.
- The USER-CUDA package does not support acceleration for minimization.
- The USER-CUDA package does not support hybrid pair styles.
- The USER-CUDA package can order atoms in the neighbor list differently
  from run to run, resulting in a different order for force accumulation.
- The USER-CUDA package has a limit on the number of atom types that can be
  used in a simulation.
- The GPU package requires neighbor lists to be built on the CPU when using
  exclusion lists or a triclinic simulation box.
- The GPU package uses more GPU memory than the USER-CUDA package. This
  is generally not a problem since typical runs are computation-limited
  rather than memory-limited.

5.4.1. Examples

The LAMMPS distribution has two directories with sample input scripts

7. Example problems

The LAMMPS distribution includes an examples sub-directory with
several sample problems. Each problem is in a sub-directory of its
own. Most are 2d models so that they run quickly, requiring at most a
couple of minutes to run on a desktop machine. Each problem has an
input script (in.*) and produces a log file (log.*) and dump file
(dump.*) when it runs. Some use a data file (data.*) of initial
coordinates as additional input. A few sample log file outputs on
different machines and different numbers of processors are included in
the directories to compare your answers to. E.g. a log file like
log.crack.foo.P means it ran on P processors of machine "foo".

For examples that use input data files, many of them were produced by
Pizza.py (http://pizza.sandia.gov) or setup tools described in the
Additional Tools section of the LAMMPS documentation and provided with
the LAMMPS distribution.

If you uncomment the dump command in the input script, a text dump
file will be produced, which can be animated by various visualization
programs (http://lammps.sandia.gov/viz.html). It can also be animated
using the xmovie tool described in the Additional Tools section of the
LAMMPS documentation.

If you uncomment the dump image command in the input script, and
assuming you have built LAMMPS with a JPG library, JPG snapshot images
will be produced when the simulation runs. They can be quickly
post-processed into a movie using commands described on the dump image
doc page.

13.1. Coming attractions

The Wish list link (http://lammps.sandia.gov/future.html) on the
LAMMPS WWW page gives a list of features we are hoping to add to
LAMMPS in the future, including contact names of individuals you can
email if you are interested in contributing to the development or
would be a future user of that feature.

You can also send email to the developers
(http://lammps.sandia.gov/authors.html) if you want to add your wish
to the list.

13.2. Past versions

LAMMPS development began in the mid 1990s under a cooperative research
& development agreement (CRADA) between two DOE labs (Sandia and LLNL)
and 3 companies (Cray, Bristol Myers Squibb, and Dupont). The goal was
to develop a large-scale parallel classical MD code; the coding effort
was led by Steve Plimpton at Sandia.

After the CRADA ended, a final F77 version, LAMMPS 99, was
released. As development of LAMMPS continued at Sandia, its memory
management was converted to F90; a final F90 version was released as
LAMMPS 2001.

The current LAMMPS is a rewrite in C++ and was first publicly released
as an open source code in 2004. It includes many new features beyond
those in LAMMPS 99 or 2001. It also includes features from older
parallel MD codes written at Sandia, namely ParaDyn, Warp, and
GranFlow (see below).

In late 2006 we began merging new capabilities into LAMMPS that were
developed by Aidan Thompson at Sandia for his MD code GRASP, which has
a parallel framework similar to LAMMPS. Most notably, these have
included many-body potentials - Stillinger-Weber, Tersoff, ReaxFF -
and the associated charge-equilibration routines needed for ReaxFF.

The History link (http://lammps.sandia.gov/history.html) on the LAMMPS
WWW page gives a timeline of features added to the C++ open-source
version of LAMMPS over the last several years.

These older codes are available for download from the LAMMPS WWW site
(http://lammps.sandia.gov), except for Warp & GranFlow, which were
primarily used internally. A brief listing of their features is given
here.

LAMMPS 2001

- F90 + MPI
- dynamic memory
- spatial-decomposition parallelism
- NVE, NVT, NPT, NPH, rRESPA integrators
- LJ and Coulombic pairwise force fields
- all-atom, united-atom, bead-spring polymer force fields
- CHARMM-compatible force fields
- class 2 force fields
- 3d/2d Ewald & PPPM
- various force and temperature constraints
- SHAKE
- Hessian-free truncated-Newton minimizer
- user-defined diagnostics

LAMMPS 99

- F77 + MPI
- static memory allocation
- spatial-decomposition parallelism
- most of the LAMMPS 2001 features with a few exceptions
- no 2d Ewald & PPPM
- molecular force fields are missing a few CHARMM terms
- no SHAKE

Warp

- F90 + MPI
- spatial-decomposition parallelism
- embedded atom method (EAM) metal potentials + LJ
- lattice and grain-boundary atom creation
- NVE, NVT integrators
- boundary conditions for applying shear stresses
- temperature controls for actively sheared systems
- per-atom energy and centro-symmetry computation and output

ParaDyn

- F77 + MPI
- atom- and force-decomposition parallelism
- embedded atom method (EAM) metal potentials
- lattice atom creation
- NVE, NVT, NPT integrators
- all serial DYNAMO features for controls and constraints

GranFlow

- F90 + MPI
- spatial-decomposition parallelism
- frictional granular potentials
- NVE integrator
- boundary conditions for granular flow and packing and walls

- force-field compatibility with common CHARMM, AMBER, DREIDING, OPLS, GROMACS, COMPASS options
- access to the KIM archive of potentials (http://openkim.org) via pair style kim
- hybrid potentials: multiple pair, bond, angle, dihedral, improper potentials can be used in one simulation
- overlaid potentials: superposition of multiple pair potentials

1.2.4. Atom creation

1.2.5. Ensembles, constraints, and boundary conditions

- energy minimization via conjugate gradient or steepest descent relaxation
- rRESPA hierarchical timestepping
- rerun command for post-processing of dump files

1.2.7. Diagnostics

- see the various flavors of the fix and compute commands

1.2.8. Output

1.2.10. Pre- and post-processing

- Various pre- and post-processing serial tools are packaged with
  LAMMPS; see the doc pages in Section_tools.html.
- Our group has also written and released a separate toolkit called
  Pizza.py (http://www.sandia.gov/~sjplimp/pizza.html), which provides
  tools for doing setup, analysis, plotting, and visualization for
  LAMMPS simulations. Pizza.py is written in Python
  (http://www.python.org) and is available for download from the
  Pizza.py WWW site.

1.2.11. Specialized features

These are LAMMPS capabilities which you may not think of as typical
molecular dynamics options:

- static and dynamic load-balancing (balance command, fix balance)
- path-integral molecular dynamics (PIMD) via fix ipi and fix pimd
- Monte Carlo via fix gcmc and fix tfmc and atom swapping
- Direct Simulation Monte Carlo (pair style dsmc) for low-density fluids
- perform sophisticated analyses of your MD simulation
- visualize your MD simulation
- plot your output data

A few tools for pre- and post-processing tasks are provided as part of
the LAMMPS package; they are described in Section_tools.html. However,
many people use other codes or write their own tools for these tasks.

As noted above, our group has also written and released a separate
toolkit called Pizza.py (http://www.sandia.gov/~sjplimp/pizza.html)
which addresses some of the listed bullets. It provides tools for
doing setup, analysis, plotting, and visualization for LAMMPS
simulations. Pizza.py is written in Python (http://www.python.org) and
is available for download from the Pizza.py WWW site.

LAMMPS requires as input a list of initial atom coordinates and types,
molecular topology information, and force-field coefficients assigned
to all atoms and bonds. LAMMPS will not build molecular systems and
assign force-field parameters for you.

For atomic systems LAMMPS provides a create_atoms command which places
atoms on solid-state lattices (fcc, bcc, user-defined, etc). Assigning
small numbers of force field coefficients can be done via the
pair_coeff, bond_coeff, angle_coeff, etc commands. For molecular
systems or more complicated simulation geometries, users typically use
another code as a builder and convert its output to LAMMPS input
format, or write their own code to generate atom coordinates and
molecular topology for LAMMPS to read in.

For complicated molecular systems (e.g. a protein), a multitude of
topology information and hundreds of force-field coefficients must
typically be specified. We suggest you use a program like CHARMM
(http://www.scripps.edu/brooks) or AMBER (http://amber.scripps.edu) or
other molecular builders to set up such problems and dump their
information to a file. You can then reformat the file as LAMMPS
input. Some of the tools in Section_tools.html can assist in this
process.

Similarly, LAMMPS creates output files in a simple format. Most users
post-process these files with their own analysis tools or re-format
them for input into other programs, including visualization packages.
If you are convinced you need to compute something on-the-fly as
LAMMPS runs, see Section_modify.html for a discussion of how you can
use the dump and compute and fix commands to print out data of your
choosing. Keep in mind that complicated computations can slow down the
molecular dynamics timestepping, particularly if the computations are
not parallel, so it is often better to leave such analysis to
post-processing codes.

A very simple (yet fast) visualizer is provided with the LAMMPS
package - see the xmovie tool in Section_tools.html. It creates xyz
projection views of atomic coordinates and animates them. We find it
very useful for debugging purposes. For high-quality visualization we
recommend the following packages:

CHARMM, AMBER, NAMD, NWCHEM, and Tinker are designed primarily for
modeling biological molecules. CHARMM and AMBER use atom-decomposition
(replicated-data) strategies for parallelism; NAMD and NWCHEM use
spatial-decomposition approaches, similar to LAMMPS. Tinker is a
serial code. DL_POLY includes potentials for a variety of biological
and non-biological materials; both replicated-data and
spatial-decomposition versions exist.

1.4. Open source distribution

LAMMPS comes with no warranty of any kind. As each source file states
in its header, it is a copyrighted code that is distributed
free-of-charge, under the terms of the GNU Public License (GPL,
http://www.gnu.org/copyleft/gpl.html). This is often referred to as
open-source distribution - see www.gnu.org or www.opensource.org for
more details. The legal text of the GPL is in the LICENSE file that is
included in the LAMMPS distribution.

Here is a summary of what the GPL means for LAMMPS users:

(1) Anyone is free to use, modify, or extend LAMMPS in any way they
choose, including for commercial purposes.

(2) If you distribute a modified version of LAMMPS, it must remain
open-source, meaning you distribute it under the terms of the GPL.
You should clearly annotate such a code as a derivative version of
LAMMPS.

(3) If you release any code that includes LAMMPS source code, then it
must also be open-sourced, meaning you distribute it under the terms
of the GPL.

(4) If you give LAMMPS files to someone else, the GPL LICENSE file and
source file headers (including the copyright and GPL notices) should
remain part of the code.

In the spirit of an open-source code, these are various ways you can
contribute to making LAMMPS better. You can send email to the
developers (http://lammps.sandia.gov/authors.html) on any of these
items.

- Point prospective users to the LAMMPS WWW Site
  (http://lammps.sandia.gov). Mention it in talks or link to it from
  your WWW site.
- If you find an error or omission in this manual or on the LAMMPS WWW
  Site, or have a suggestion for something to clarify or include, send
  an email to the developers.
- If you find a bug, Section_errors 2 describes how to report it.
- If you publish a paper using LAMMPS results, send the citation (and
  any cool pictures or movies if you like) to add to the Publications,
  Pictures, and Movies pages of the LAMMPS WWW Site, with links and
  attributions back to you.
- Create a new Makefile.machine that can be added to the src/MAKE
  directory.
- The tools sub-directory of the LAMMPS distribution has various
  stand-alone codes for pre- and post-processing of LAMMPS data. More
  details are given in Section_tools.html. If you write a new tool
  that users will find useful, it can be added to the LAMMPS
  distribution.
- LAMMPS is designed to be easy to extend with new code for features
  like potentials, boundary conditions, diagnostic computations, etc.
  Section_modify.html gives details. If you add a feature of general
  interest, it can be added to the LAMMPS distribution.
- The Benchmark page of the LAMMPS WWW Site lists LAMMPS performance
  on various platforms. The files needed to run the benchmarks are
  part of the LAMMPS distribution. If your machine is sufficiently
  different from those listed, your timing data can be added to the
  page.
- You can send feedback for the User Comments page of the LAMMPS WWW
  Site. It might be added to the page. No promises.
- Cash. Small denominations, unmarked bills preferred. Paper sack OK.
  Leave on desk. VISA also accepted. Chocolate chip cookies

1.5. Acknowledgments and citations

LAMMPS development has been funded by the US Department of Energy
(DOE, http://www.doe.gov), through its CRADA, LDRD, ASCI, and
Genomes-to-Life programs and its OASCR and OBER offices.

Specifically, work on the latest version was funded in part by the US
Department of Energy's Genomics:GTL program
(www.doegenomestolife.org) under the project (http://www.genomes2life.org)
"Carbon Sequestration in Synechococcus Sp.: From Molecular Machines to
Hierarchical Modeling".

The following paper describes the basic parallel algorithms used in
LAMMPS. If you use LAMMPS results in your published work, please cite
this paper and include a pointer to the LAMMPS WWW Site
(http://lammps.sandia.gov).

Other papers describing specific algorithms used in LAMMPS are listed
under the Citing LAMMPS link (http://lammps.sandia.gov/cite.html) of
the LAMMPS WWW page.

The Publications link (http://lammps.sandia.gov/papers.html) on the
LAMMPS WWW page lists papers that have cited LAMMPS. If your paper is
not listed there for some reason, feel free to send us the info. If
the simulations in your paper produced cool pictures or animations,
we'll be pleased to add them to the Pictures
(http://lammps.sandia.gov/pictures.html) or Movies
(http://lammps.sandia.gov/movies.html) pages of the LAMMPS WWW site.

The core group of LAMMPS developers is at Sandia National Labs:

- Steve Plimpton, sjplimp at sandia.gov
- Aidan Thompson, athomps at sandia.gov
- Paul Crozier, pscrozi at sandia.gov

The following folks are responsible for significant contributions to
the code, or other aspects of the LAMMPS development effort. Many of
the packages they have written are somewhat unique to LAMMPS and the
code would not be as general-purpose as it is without their expertise
and efforts.

- Axel Kohlmeyer (Temple U), akohlmey at gmail.com, SVN and Git repositories, indefatigable mail list responder, USER-CG-CMM and USER-OMP packages
- Roy Pollock (LLNL), Ewald and PPPM solvers
- Mike Brown (ORNL), brownw at ornl.gov, GPU package
- Greg Wagner (Sandia), gjwagne at sandia.gov, MEAM package for MEAM potential
- Mike Parks (Sandia), mlparks at sandia.gov, PERI package for Peridynamics
- Rudra Mukherjee (JPL), Rudranarayan.M.Mukherjee at jpl.nasa.gov, POEMS package for articulated rigid body motion
- Reese Jones (Sandia) and collaborators, rjones at sandia.gov, USER-ATC package for atom/continuum coupling
- Ilya Valuev (JIHT), valuev at physik.hu-berlin.de, USER-AWPMD package for wave-packet MD
- Christian Trott (U Tech Ilmenau), christian.trott at tu-ilmenau.de, USER-CUDA package
- Andres Jaramillo-Botero (Caltech), ajaramil at wag.caltech.edu, USER-EFF package for electron force field
- Christoph Kloss (JKU), Christoph.Kloss at jku.at, USER-LIGGGHTS package for granular models and granular/fluid coupling
- Metin Aktulga (LBL), hmaktulga at lbl.gov, USER-REAXC package for C version of ReaxFF
- Georg Gunzenmuller (EMI), georg.ganzenmueller at emi.fhg.de, USER-SPH package

As discussed in Section_history.html, LAMMPS originated as a
cooperative project between DOE labs and industrial partners. Folks
involved in the design and testing of the original version of LAMMPS
were the following:

- John Carpenter (Mayo Clinic, formerly at Cray Research)
- Terry Stouch (Lexicon Pharmaceuticals, formerly at Bristol Myers Squibb)

11. Python interface to LAMMPS

LAMMPS can work together with Python in two ways. First, Python can
wrap LAMMPS through the LAMMPS library interface (Section_howto 19),
so that a Python script can create one or more instances of LAMMPS and
launch one or more simulations. In Python lingo, this is "extending"
Python with LAMMPS.

Second, LAMMPS can use the Python interpreter, so that a LAMMPS input
script can invoke Python code, and pass information back and forth
between the input script and Python functions you write. The Python
code can also call back to LAMMPS to query or change its attributes.
In Python lingo, this is "embedding" Python in LAMMPS.

This section describes how to do both.

- 11.1 Overview of running LAMMPS from Python
- 11.2 Overview of using Python from a LAMMPS script
- 11.3 Building LAMMPS as a shared library
- 11.4 Installing the Python wrapper into Python
- 11.5 Extending Python with MPI to run in parallel
- 11.6 Testing the Python-LAMMPS interface
- 11.7 Using LAMMPS from Python
- 11.8 Example Python scripts that use LAMMPS

If you are not familiar with it, Python (http://www.python.org) is a
powerful scripting and programming language which can essentially do
anything that faster, lower-level languages like C or C++ can do, but
typically with far fewer lines of code. When used in embedded mode,
Python can perform operations that the simplistic LAMMPS input script
syntax cannot. Python can also be used as a "glue" language to drive a
program through its library interface, or to hook multiple pieces of
software together, such as a simulation package plus a visualization
package, or to run a coupled multiscale or multiphysics model.

See Section_howto 10 of the manual and the couple directory of the
distribution for more ideas about coupling LAMMPS to other codes. See
Section_howto 19 for a description of the LAMMPS library interface
provided in src/library.cpp and src/library.h, and how to extend it
for your needs. As described below, that interface is what is exposed
to Python either when calling LAMMPS from Python or when calling
Python from a LAMMPS input script and then calling back to LAMMPS from
Python code. The library interface is designed to be easy to add
functions to, so the Python interface to LAMMPS is also easy to
extend.

If you create interesting Python scripts that run LAMMPS or
interesting Python functions that can be called from a LAMMPS input
script, that you think would be useful to other users, please email
them to the developers (http://lammps.sandia.gov/authors.html). We can
add them to the LAMMPS distribution.

11.1. Overview of running LAMMPS from Python

The LAMMPS distribution includes a python directory with all you need
to run LAMMPS from Python. The python/lammps.py file wraps the LAMMPS
library interface, with one wrapper function per LAMMPS library
function. This file makes it possible to do the following either from
a Python script, or interactively from a Python prompt: create one or
more instances of LAMMPS, invoke LAMMPS commands or give it an input
script, run LAMMPS incrementally, extract LAMMPS results, and modify
internal LAMMPS variables. From a Python script you can do this in
serial or parallel. Running Python interactively in parallel does not
generally work, unless you have a version of Python that extends
standard Python to enable multiple instances of Python to read what
you type.

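For example, a minimal serial sketch of that workflow might look like the following; the input script name is a placeholder:

```
from lammps import lammps

lmp = lammps()             # create a LAMMPS instance
lmp.file("in.melt")        # run an entire (hypothetical) input script
lmp.command("run 100")     # run LAMMPS incrementally
natoms = lmp.get_natoms()  # extract a result
print("natoms =", natoms)
lmp.close()                # destroy the instance
```
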
To do all of this, you must first build LAMMPS as a shared library,
then ensure that your Python can find the python/lammps.py file and
the shared library. These steps are explained in subsequent sections
11.3 and 11.4. Sections 11.5 and 11.6 discuss using MPI from a
parallel Python program and how to test that you are ready to use
LAMMPS from Python. Section 11.7 lists all the functions in the
current LAMMPS library interface and how to call them from Python.

Section 11.8 gives some examples of coupling LAMMPS to other tools via
Python. For example, LAMMPS can easily be coupled to a GUI or other
visualization tools that display graphs or animations in real time as
LAMMPS runs. Examples of such scripts are included in the python
directory.

Two advantages of using Python to run LAMMPS are how concise the
language is, and that it can be run interactively, enabling rapid
development and debugging of programs. If you use it mostly to invoke
costly operations within LAMMPS, such as running a simulation for a
reasonable number of timesteps, then the overhead cost of invoking
LAMMPS through Python will be negligible.

The Python wrapper for LAMMPS uses the amazing and magical (to me)
"ctypes" package in Python, which auto-generates the interface code
needed between Python and a set of C interface routines for a library.
Ctypes is part of standard Python for versions 2.5 and later. You can
check which version of Python you have installed by simply typing
"python" at a shell prompt.

11.2. Overview of using Python from a LAMMPS script

Warning: It is not currently possible to use the python command
described in this section with Python 3, only with Python 2. The C API
changed from Python 2 to 3 and the LAMMPS code is not compatible with
both.

LAMMPS has a python command which can be used in an input script to
define and execute a Python function that you write the code for. The
Python function can also be assigned to a LAMMPS python-style variable
via the variable command. Each time the variable is evaluated, either
in the LAMMPS input script itself, or by another LAMMPS command that
uses the variable, this will trigger the Python function to be
invoked.

The Python code for the function can be included directly in the input
script or in an auxiliary file. The function can have arguments which
are mapped to LAMMPS variables (also defined in the input script) and
it can return a value to a LAMMPS variable. This is thus a mechanism
for your input script to pass information to a piece of Python code,
ask Python to execute the code, and return information to your input
script.

Note that a Python function can be arbitrarily complex. It can import
other Python modules, instantiate Python classes, call other Python
functions, etc. The Python code that you provide can contain more code
than the single function. It can contain other functions or Python
classes, as well as global variables or other mechanisms for storing
state between calls from LAMMPS to the function.

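A purely illustrative sketch of such a function; the function name, its argument, and the module-level counter are made up here, and the hookup to LAMMPS happens via the python and variable commands in the input script:

```
# hypothetical Python code that could be loaded via the python command
ncalls = 0   # module-level state preserved between calls from LAMMPS

def scaled_value(x):
    """Return x scaled by how many times LAMMPS has invoked this function."""
    global ncalls
    ncalls += 1
    return float(x) * ncalls
```
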
The Python function you provide can consist of "pure" Python code that
only performs operations provided by standard Python. However, the
Python function can also "call back" to LAMMPS through its
Python-wrapped library interface, in the manner described in the
previous section 11.1. This means it can issue LAMMPS input script
commands or query and set internal LAMMPS state. As an example, this
can be useful in an input script to create a more complex loop with
branching logic than can be created using the simple looping and
branching logic enabled by the next and if commands.

See the python doc page and the variable doc page for its python-style
variables for more info, including examples of Python code you can
write for both pure Python operations and callbacks to LAMMPS.

To run pure Python code from LAMMPS, you only need to build LAMMPS
with the PYTHON package installed:

```
make yes-python
make machine
```

Note that this will link LAMMPS with the Python library on your
system, which typically requires several auxiliary system libraries to
also be linked. The list of these libraries and the paths to find them
are specified in the lib/python/Makefile.lammps file. You need to
ensure that file contains the correct information for your version of
Python and your machine to successfully build LAMMPS. See the
lib/python/README file for more info.

If you want to write Python code with callbacks to LAMMPS, then you
must also follow the steps overviewed in the preceding section (11.1)
for running LAMMPS from Python. I.e. you must build LAMMPS as a shared
library and ensure that Python can find the python/lammps.py file and
the shared library.

11.3. Building LAMMPS as a shared library

Instructions on how to build LAMMPS as a shared library are given in
Section_start 5. A shared library is one that is dynamically loadable,
which is what Python requires to wrap LAMMPS. On Linux this is a
library file that ends in ".so", not ".a".

11.4. Installing the Python wrapper into Python

For Python to invoke LAMMPS, there are 2 files it needs to know about:

- python/lammps.py
- src/liblammps.so

Lammps.py is the Python wrapper on the LAMMPS library interface.
Liblammps.so is the shared LAMMPS library that Python loads, as
described above.

You can ensure Python can find these files in one of two ways:

- set two environment variables
- run the python/install.py script

If you set the paths to these files as environment variables, you only
have to do it once. For the csh or tcsh shells, add something like
this to your ~/.cshrc file, one line for each of the two files:

11.5. Extending Python with MPI to run in parallel

If you wish to run LAMMPS in parallel from Python, you need to extend
your Python with an interface to MPI. This also allows you to make MPI
calls directly from Python in your script, if you desire.

There are several Python packages available that purport to wrap MPI
as a library and allow MPI functions to be called from Python.

11.6. Testing the Python-LAMMPS interface

To test if LAMMPS is callable from Python, launch Python interactively.

If an error occurs, carefully go through the steps in Section_start 5
and above about building a shared library and about ensuring Python
can find the necessary two files.

11.6.1. Test LAMMPS and Python in serial

To run a LAMMPS test in serial, type these lines into Python:

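A minimal sketch of such a session, assuming LAMMPS was built as a shared library and a (hypothetical) in.lj input script sits in the working directory:

```
>>> from lammps import lammps
>>> lmp = lammps()      # loads liblammps.so and creates a LAMMPS instance
>>> lmp.file("in.lj")   # run the hypothetical input script
>>> lmp.close()
```
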
11.6.2. Test LAMMPS and Python in parallel

To run LAMMPS in parallel, assuming you have installed the Pypar
package (http://datamining.anu.edu.au/~ole/pypar) as discussed above,
create a test.py file containing these lines:

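A sketch of what such a test.py could contain, assuming Pypar provides rank(), size(), and finalize() as in its standard API:

```
import pypar
from lammps import lammps

lmp = lammps()
print("Proc %d out of %d procs has created a LAMMPS instance" %
      (pypar.rank(), pypar.size()))
lmp.close()
pypar.finalize()
```
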
Without the "-i" flag, Python will exit when the script finishes. With
the "-i" flag, you will be left in the Python interpreter when the
script finishes, so you can type subsequent commands. As mentioned
above, you can only run Python interactively when running Python on a
single processor, not in parallel.

11.7. Using LAMMPS from Python

As described above, the Python interface to LAMMPS consists of a
Python "lammps" module, the source code for which is in
python/lammps.py, which creates a "lammps" object, with a set of
methods that can be invoked on that object. The sample Python code
below assumes you have first imported the "lammps" module in your
Python script.

These are the methods defined by the lammps module. If you look at the
files src/library.cpp and src/library.h you will see that they
correspond one-to-one with calls you can make to the LAMMPS library
from a C++ or C or Fortran program.

```
lmp = lammps()           # create a LAMMPS object using the default liblammps.so library
lmp = lammps(ptr=lmpptr) # ditto, but use lmpptr as previously created LAMMPS object
lmp = lammps("g++")      # create a LAMMPS object using the liblammps_g++.so library
lmp = lammps("",list)    # ditto, with command-line args, e.g. list = ["-echo","screen"]

lmp.close()              # destroy a LAMMPS object
```

<divclass="highlight-python"><divclass="highlight"><pre><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">file</span><spanclass="p">(</span><spanclass="nb">file</span><spanclass="p">)</span><spanclass="c"># run an entire input script, file = "in.lj"</span>
<spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">command</span><spanclass="p">(</span><spanclass="n">cmd</span><spanclass="p">)</span><spanclass="c"># invoke a single LAMMPS command, cmd = "run 100"</span>
</pre></div>
</div>
<divclass="highlight-python"><divclass="highlight"><pre><spanclass="n">xlo</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">extract_global</span><spanclass="p">(</span><spanclass="n">name</span><spanclass="p">,</span><spanclass="nb">type</span><spanclass="p">)</span><spanclass="c"># extract a global quantity</span>
<spanclass="c"># name = "boxxlo", "nlocal", etc</span>
<spanclass="c"># type = 0 = int</span>
<spanclass="c"># 1 = double</span>
</pre></div>
</div>
<divclass="highlight-python"><divclass="highlight"><pre><spanclass="n">coords</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">extract_atom</span><spanclass="p">(</span><spanclass="n">name</span><spanclass="p">,</span><spanclass="nb">type</span><spanclass="p">)</span><spanclass="c"># extract a per-atom quantity</span>
<spanclass="c"># name = "x", "type", etc</span>
<spanclass="c"># type = 0 = vector of ints</span>
<spanclass="c"># 1 = array of ints</span>
<spanclass="c"># 2 = vector of doubles</span>
<spanclass="c"># 3 = array of doubles</span>
</pre></div>
</div>
<divclass="highlight-python"><divclass="highlight"><pre><spanclass="n">eng</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">extract_compute</span><spanclass="p">(</span><spanclass="nb">id</span><spanclass="p">,</span><spanclass="n">style</span><spanclass="p">,</span><spanclass="nb">type</span><spanclass="p">)</span><spanclass="c"># extract value(s) from a compute</span>
<spanclass="n">v3</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">extract_fix</span><spanclass="p">(</span><spanclass="nb">id</span><spanclass="p">,</span><spanclass="n">style</span><spanclass="p">,</span><spanclass="nb">type</span><spanclass="p">,</span><spanclass="n">i</span><spanclass="p">,</span><spanclass="n">j</span><spanclass="p">)</span><spanclass="c"># extract value(s) from a fix</span>
<spanclass="c"># id = ID of compute or fix</span>
<spanclass="c"># style = 0 = global data</span>
<spanclass="c"># 1 = per-atom data</span>
<spanclass="c"># 2 = local data</span>
<spanclass="c"># type = 0 = scalar</span>
<spanclass="c"># 1 = vector</span>
<spanclass="c"># 2 = array</span>
<spanclass="c"># i,j = indices of value in global vector or array</span>
</pre></div>
</div>
<divclass="highlight-python"><divclass="highlight"><pre><spanclass="n">var</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">extract_variable</span><spanclass="p">(</span><spanclass="n">name</span><spanclass="p">,</span><spanclass="n">group</span><spanclass="p">,</span><spanclass="n">flag</span><spanclass="p">)</span><spanclass="c"># extract value(s) from a variable</span>
<spanclass="c"># name = name of variable</span>
<spanclass="c"># group = group ID (ignored for equal-style variables)</span>
<spanclass="c"># flag = 0 = equal-style variable</span>
<spanclass="c"># 1 = atom-style variable</span>
</pre></div>
</div>
<divclass="highlight-python"><divclass="highlight"><pre><spanclass="n">flag</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">set_variable</span><spanclass="p">(</span><spanclass="n">name</span><spanclass="p">,</span><spanclass="n">value</span><spanclass="p">)</span><spanclass="c"># set existing named string-style variable to value, flag = 0 if successful</span>
<spanclass="n">natoms</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">get_natoms</span><spanclass="p">()</span><spanclass="c"># total # of atoms as int</span>
<spanclass="n">data</span><spanclass="o">=</span><spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">gather_atoms</span><spanclass="p">(</span><spanclass="n">name</span><spanclass="p">,</span><spanclass="nb">type</span><spanclass="p">,</span><spanclass="n">count</span><spanclass="p">)</span><spanclass="c"># return atom attribute of all atoms gathered into data, ordered by atom ID</span>
<spanclass="c"># name = "x", "charge", "type", etc</span>
<spanclass="c"># count = # of per-atom values, 1 or 3, etc</span>
<spanclass="n">lmp</span><spanclass="o">.</span><spanclass="n">scatter_atoms</span><spanclass="p">(</span><spanclass="n">name</span><spanclass="p">,</span><spanclass="nb">type</span><spanclass="p">,</span><spanclass="n">count</span><spanclass="p">,</span><spanclass="n">data</span><spanclass="p">)</span><spanclass="c"># scatter atom attribute of all atoms from data, ordered by atom ID</span>
<spanclass="c"># name = "x", "charge", "type", etc</span>
<spanclass="c"># count = # of per-atom values, 1 or 3, etc</span>
</pre></div>
</div>
<hrclass="docutils"/>
<divclass="admonition warning">
<pclass="first admonition-title">Warning</p>
<pclass="last">Currently, the creation of a LAMMPS object from within
lammps.py does not take an MPI communicator as an argument. There
should be a way to do this, so that the LAMMPS instance runs on a
subset of processors if desired, but I don’t know how to do it from
Pypar. So for now, it runs with MPI_COMM_WORLD, which is all the
processors. If someone figures out how to do this with one or more of
the Python wrappers for MPI, like Pypar, please let us know and we
extract_fix(), and extract_variable() methods return values or
pointers to data structures internal to LAMMPS.</p>
<p>For extract_global() see the src/library.cpp file for the list of
valid names. New names could easily be added. A double or integer is
returned. You need to specify the appropriate data type via the type
argument.</p>
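<p>For example, using the names and type codes from the listing above, a
sketch of typical calls would be:</p>
<div class="highlight-python"><div class="highlight"><pre>
xlo    = lmp.extract_global("boxxlo", 1)   # type 1 = double: lower x bound of the box
nlocal = lmp.extract_global("nlocal", 0)   # type 0 = int: atoms owned by this processor
</pre></div>
</div>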
<p>For extract_atom(), a pointer to internal LAMMPS atom-based data is
returned, which you can use via normal Python subscripting. See the
extract() method in the src/atom.cpp file for a list of valid names.
Again, new names could easily be added. A pointer to a vector of
doubles or integers, or a pointer to an array of doubles (double **)
or integers (int **) is returned. You need to specify the appropriate
data type via the type argument.</p>
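<p>A sketch of typical extract_atom() calls, again using the names and
type codes from the listing above:</p>
<div class="highlight-python"><div class="highlight"><pre>
x     = lmp.extract_atom("x", 3)      # type 3 = array of doubles (double **): coordinates
atype = lmp.extract_atom("type", 0)   # type 0 = vector of ints (int *): atom types
print(x[0][0], x[0][1], x[0][2])      # coordinates of the first atom on this processor
</pre></div>
</div>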
<p>For extract_compute() and extract_fix(), the global, per-atom, or
local data calculated by the compute or fix can be accessed. What is
returned depends on whether the compute or fix calculates a scalar or
vector or array. For a scalar, a single double value is returned. If
the compute or fix calculates a vector or array, a pointer to the
internal LAMMPS data is returned, which you can use via normal Python
subscripting. The one exception is that for a fix that calculates a
global vector or array, a single double value from the vector or array
is returned, indexed by I (vector) or I and J (array). I,J are
zero-based indices. The I,J arguments can be left out if not needed.
See <aclass="reference internal"href="Section_howto.html#howto-15"><span>Section_howto 15</span></a> of the manual for a
discussion of global, per-atom, and local data, and of scalar, vector,
and array data types. See the doc pages for individual
<aclass="reference internal"href="compute.html"><em>computes</em></a> and <aclass="reference internal"href="fix.html"><em>fixes</em></a> for a description of what
they calculate and store.</p>
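<p>For example, assuming the default thermo_temp compute and a
hypothetical fix with ID “2” that stores a global vector, one might
write:</p>
<div class="highlight-python"><div class="highlight"><pre>
temp = lmp.extract_compute("thermo_temp", 0, 0)  # style 0 = global, type 0 = scalar
val  = lmp.extract_fix("2", 0, 1, 0)             # hypothetical fix ID; element I=0 of its global vector
</pre></div>
</div>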
<p>For extract_variable(), an <aclass="reference internal"href="variable.html"><em>equal-style or atom-style variable</em></a> is evaluated and its result returned.</p>
<p>For equal-style variables a single double value is returned and the
group argument is ignored. For atom-style variables, a vector of
doubles is returned, one value per atom, which you can use via normal
Python subscripting. The values will be zero for atoms not in the
specified group.</p>
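<p>A sketch, assuming the input script defined an equal-style variable
“t” and an atom-style variable “d”:</p>
<div class="highlight-python"><div class="highlight"><pre>
t = lmp.extract_variable("t", "all", 0)   # flag 0 = equal-style: single double (group is ignored)
d = lmp.extract_variable("d", "all", 1)   # flag 1 = atom-style: one double per atom in the group
</pre></div>
</div>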
<p>The get_natoms() method returns the total number of atoms in the
simulation, as an int.</p>
<p>The gather_atoms() method returns a ctypes vector of ints or doubles
as specified by type, of length count*natoms, for the property of all
the atoms in the simulation specified by name, ordered by count and
then by atom ID. The vector can be used via normal Python
subscripting. If atom IDs are not consecutively ordered within
LAMMPS, None is returned to indicate an error.</p>
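<p>For example, to gather all coordinates (assuming the same 0 = int /
1 = double type convention as extract_global()):</p>
<div class="highlight-python"><div class="highlight"><pre>
natoms = lmp.get_natoms()
x = lmp.gather_atoms("x", 1, 3)   # ctypes vector of 3*natoms doubles, ordered by atom ID
print(x[0], x[1], x[2])           # coordinates of the atom with ID 1
</pre></div>
</div>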
<p>Note that the data structure gather_atoms(“x”) returns is different
from the data structure returned by extract_atom(“x”) in four ways.
(1) Gather_atoms() returns a vector which you index as x[i];
extract_atom() returns an array which you index as x[i][j]. (2)
Gather_atoms() orders the atoms by atom ID while extract_atom() does
not. (3) Gather_atoms() returns a list of all atoms in the
simulation; extract_atom() returns just the atoms local to each
processor. (4) Finally, the gather_atoms() data structure is a copy
of the atom coords stored internally in LAMMPS, whereas extract_atom()
returns an array that effectively points directly to the internal
data. This means you can change values inside LAMMPS from Python by
assigning new values to the extract_atom() array. To do this with
the gather_atoms() vector, you need to change values in the vector,
then invoke the scatter_atoms() method.</p>
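<p>The two ways of modifying coordinates can be sketched as follows:</p>
<div class="highlight-python"><div class="highlight"><pre>
# direct modification through the pointer returned by extract_atom()
x = lmp.extract_atom("x", 3)
x[0][0] += 0.1                       # shifts the first local atom inside LAMMPS

# equivalent round trip with gather/scatter, which works on a copy
xall = lmp.gather_atoms("x", 1, 3)
xall[0] += 0.1                       # shifts the atom with ID 1
lmp.scatter_atoms("x", 1, 3, xall)
</pre></div>
</div>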
<p>The scatter_atoms() method takes a vector of ints or doubles as
specified by type, of length count*natoms, for the property of all the
atoms in the simulation specified by name, ordered by count and then
by atom ID. It uses the vector of data to overwrite the corresponding
properties for each atom inside LAMMPS. This requires LAMMPS to have
its “map” option enabled; see the <aclass="reference internal"href="atom_modify.html"><em>atom_modify</em></a>
command for details. If it is not, or if atom IDs are not
consecutively ordered, no coordinates are reset.</p>
<p>The array of coordinates passed to scatter_atoms() must be a ctypes
vector of ints or doubles, allocated and initialized before the call,
as in the sketch below.</p>
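<p>A sketch of such an allocation for coordinates (3 doubles per atom),
assuming atom IDs run consecutively from 1 to natoms:</p>
<div class="highlight-python"><div class="highlight"><pre>
from ctypes import c_double

natoms = lmp.get_natoms()
n3 = 3 * natoms
x = (n3 * c_double)()     # ctypes array of 3*natoms doubles, initialized to 0.0
# fill x[0],x[1],x[2] with the coords of atom ID 1, x[3],x[4],x[5] with atom ID 2, etc.
lmp.scatter_atoms("x", 1, 3, x)
</pre></div>
</div>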
<spanid="py-8"></span><h2>11.8. Example Python scripts that use LAMMPS<aclass="headerlink"href="#example-python-scripts-that-use-lammps"title="Permalink to this headline">¶</a></h2>
<p>These are the Python scripts included as demos in the python/examples
directory of the LAMMPS distribution, to illustrate the kinds of
things that are possible when Python wraps LAMMPS. If you create your
own scripts, send them to us and we can include them in the LAMMPS
distribution.</p>
<tableborder="1"class="docutils">
<colgroup>
<colwidth="27%"/>
<colwidth="73%"/>
</colgroup>
<tbodyvalign="top">
<trclass="row-odd"><td>trivial.py</td>
<td>read/run a LAMMPS input script thru Python</td>
</tr>
<trclass="row-even"><td>demo.py</td>
<td>invoke various LAMMPS library interface routines</td>
</tr>
<trclass="row-odd"><td>simple.py</td>
<td>mimic operation of couple/simple/simple.cpp in Python</td>
</tr>
<trclass="row-even"><td>gui.py</td>
<td>GUI go/stop/temperature-slider to control LAMMPS</td>
</tr>
<trclass="row-odd"><td>plot.py</td>
<td>real-time temperature plot with GnuPlot via Pizza.py</td>
</tr>
<trclass="row-even"><td>viz_tool.py</td>
<td>real-time viz via some viz package</td>
</tr>
<trclass="row-odd"><td>vizplotgui_tool.py</td>
<td>combination of viz_tool.py and plot.py and gui.py</td>
</tr>
</tbody>
</table>
<hrclass="docutils"/>
<p>For the viz_tool.py and vizplotgui_tool.py commands, replace “tool”
with “gl” or “atomeye” or “pymol” or “vmd”, depending on what
visualization package you have installed.</p>
<p>Note that for GL, you need to be able to run the Pizza.py GL tool,
which is included in the pizza sub-directory. See the <a class="reference external" href="http://www.sandia.gov/~sjplimp/pizza.html">Pizza.py doc pages</a> for more info.</p>
<p>Note that for AtomEye, you need version 3, and there is a line in the
scripts that specifies the path and name of the executable. See the
AtomEye WWW pages <a class="reference external" href="http://mt.seas.upenn.edu/Archive/Graphics/A">here</a> or <a class="reference external" href="http://mt.seas.upenn.edu/Archive/Graphics/A3/A3.html">here</a> for more details.</p>
<h1>9. Additional tools<aclass="headerlink"href="#additional-tools"title="Permalink to this headline">¶</a></h1>
<p>LAMMPS is designed to be a computational kernel for performing
molecular dynamics computations. Additional pre- and post-processing
steps are often necessary to setup and analyze a simulation. A few
additional tools are provided with the LAMMPS distribution and are
described in this section.</p>
<p>Our group has also written and released a separate toolkit called
<aclass="reference external"href="http://www.sandia.gov/~sjplimp/pizza.html">Pizza.py</a> which provides tools for doing setup, analysis,
plotting, and visualization for LAMMPS simulations. Pizza.py is
written in <aclass="reference external"href="http://www.python.org">Python</a> and is available for download from <aclass="reference external"href="http://www.sandia.gov/~sjplimp/pizza.html">the Pizza.py WWW site</a>.</p>
<p>Note that many users write their own setup or analysis tools or use
other existing codes and convert their output to a LAMMPS input format
or vice versa. The tools listed here are included in the LAMMPS
distribution as examples of auxiliary tools. Some of them are not
actively supported by Sandia, as they were contributed by LAMMPS
users. If you have problems using them, we can direct you to the
authors.</p>
<p>The source code for each of these codes is in the tools sub-directory
of the LAMMPS distribution. There is a Makefile (which you may need
to edit for your platform) which will build several of the tools which
reside in that directory. Some of them are larger packages in their
own sub-directories with their own Makefiles and/or README files.</p>
<hr class="docutils"/>
<div class="section" id="eam-database-tool">
<span id="eamdb"></span><h2>9.8. eam database tool<a class="headerlink" href="#eam-database-tool" title="Permalink to this headline">¶</a></h2>
<p>The tools/eam_database directory contains a Fortran program that will
generate EAM alloy setfl potential files for any combination of the 16
elements Cu, Ag, Au, Ni, Pd, Pt, Al, Pb, Fe, Mo, Ta, W, Mg, Co, Ti, and
Zr. The files can then be used with the <a class="reference internal" href="pair_eam.html"><em>pair_style eam/alloy</em></a> command.</p>
<p>The tool is authored by Xiaowang Zhou (Sandia), xzhou at sandia.gov,
and is based on his paper:</p>
<p>X. W. Zhou, R. A. Johnson, and H. N. G. Wadley, Phys. Rev. B, 69,
144113 (2004).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="eam-generate-tool">
<spanid="eamgn"></span><h2>9.9. eam generate tool<aclass="headerlink"href="#eam-generate-tool"title="Permalink to this headline">¶</a></h2>
<p>The tools/eam_generate directory contains several one-file C programs
that convert an analytic formula into a tabulated <aclass="reference internal"href="pair_eam.html"><em>embedded atom method (EAM)</em></a> setfl potential file. The potentials they
produce are in the potentials directory, and can be used with the
<a class="reference internal" href="pair_eam.html"><em>pair_style eam/alloy</em></a> command.</p>
<p>The source files and potentials were provided by Gerolf Ziegenhain
(gerolf at ziegenhain.com).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="eff-tool">
<spanid="eff"></span><h2>9.10. eff tool<aclass="headerlink"href="#eff-tool"title="Permalink to this headline">¶</a></h2>
<p>The tools/eff directory contains various scripts for generating
structures and post-processing output for simulations using the
electron force field (eFF).</p>
<p>These tools were provided by Andres Jaramillo-Botero at CalTech
(ajaramil at wag.caltech.edu).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="emacs-tool">
<spanid="emacs"></span><h2>9.11. emacs tool<aclass="headerlink"href="#emacs-tool"title="Permalink to this headline">¶</a></h2>
<p>The tools/emacs directory contains a Lisp add-on file for Emacs that
enables a lammps-mode for editing input scripts when using Emacs,
with various highlighting options set up.</p>
<p>These tools were provided by Aidan Thompson at Sandia
(athomps at sandia.gov).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="fep-tool">
<spanid="fep"></span><h2>9.12. fep tool<aclass="headerlink"href="#fep-tool"title="Permalink to this headline">¶</a></h2>
<p>The tools/fep directory contains Python scripts useful for
post-processing results from performing free-energy perturbation
simulations using the USER-FEP package.</p>
<p>The scripts were contributed by Agilio Padua (Universite Blaise
Pascal Clermont-Ferrand), agilio.padua at univ-bpclermont.fr.</p>
<p>See README file in the tools/fep directory.</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="i-pi-tool">
<spanid="ipi"></span><h2>9.13. i-pi tool<aclass="headerlink"href="#i-pi-tool"title="Permalink to this headline">¶</a></h2>
<p>The tools/i-pi directory contains a version of the i-PI package, with
all the LAMMPS-unrelated files removed. It is provided so that it can
be used with the <aclass="reference internal"href="fix_ipi.html"><em>fix ipi</em></a> command to perform
path-integral molecular dynamics (PIMD).</p>
<p>The i-PI package was created and is maintained by Michele Ceriotti,
michele.ceriotti at gmail.com, to interface to a variety of molecular
dynamics codes.</p>
<p>See the tools/i-pi/manual.pdf file for an overview of i-PI, and the
<aclass="reference internal"href="fix_ipi.html"><em>fix ipi</em></a> doc page for further details on running PIMD
calculations with LAMMPS.</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="ipp-tool">
<spanid="ipp"></span><h2>9.14. ipp tool<aclass="headerlink"href="#ipp-tool"title="Permalink to this headline">¶</a></h2>
<p>The tools/ipp directory contains a Perl script ipp which can be used
to facilitate the creation of a complicated file (say, a lammps input
script or tools/createatoms input file) using a template file.</p>
<p>ipp was created and is maintained by Reese Jones (Sandia), rjones at
sandia.gov.</p>
<p>See two examples in the tools/ipp directory. One of them is for the
tools/createatoms tool’s input file.</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="kate-tool">
<spanid="kate"></span><h2>9.15. kate tool<aclass="headerlink"href="#kate-tool"title="Permalink to this headline">¶</a></h2>
<p>The file in the tools/kate directory is an add-on to the Kate editor
in the KDE suite that allows syntax highlighting of LAMMPS input
scripts. See the README.txt file for details.</p>
<p>The file was provided by Alessandro Luigi Sellerio
(alessandro.sellerio at ieni.cnr.it).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="lmp2arc-tool">
<spanid="arc"></span><h2>9.16. lmp2arc tool<aclass="headerlink"href="#lmp2arc-tool"title="Permalink to this headline">¶</a></h2>
<p>The lmp2arc sub-directory contains a tool for converting LAMMPS output
files to the format for Accelrys’ Insight MD code (formerly
MSI/Biosym and its Discover MD code). See the README file for more
information.</p>
<p>This tool was written by John Carpenter (Cray), Michael Peachey
(Cray), and Steve Lustig (Dupont). John is now at the Mayo Clinic
(jec at mayo.edu), but still fields questions about the tool.</p>
<p>This tool was updated for the current LAMMPS C++ version by Jeff
Greathouse at Sandia (jagreat at sandia.gov).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="lmp2cfg-tool">
<spanid="cfg"></span><h2>9.17. lmp2cfg tool<aclass="headerlink"href="#lmp2cfg-tool"title="Permalink to this headline">¶</a></h2>
<p>The lmp2cfg sub-directory contains a tool for converting LAMMPS output
files into a series of *.cfg files which can be read into the
<aclass="reference external"href="http://mt.seas.upenn.edu/Archive/Graphics/A">AtomEye</a> visualizer. See
the README file for more information.</p>
<p>This tool was written by Ara Kooser at Sandia (askoose at sandia.gov).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="lmp2vmd-tool">
<spanid="vmd"></span><h2>9.18. lmp2vmd tool<aclass="headerlink"href="#lmp2vmd-tool"title="Permalink to this headline">¶</a></h2>
<p>The lmp2vmd sub-directory contains a README.txt file that describes
details of scripts and plugin support within the <aclass="reference external"href="http://www.ks.uiuc.edu/Research/vmd">VMD package</a> for visualizing LAMMPS
dump files.</p>
<p>The VMD plugins and other supporting scripts were written by Axel
Kohlmeyer (akohlmey at cmm.chem.upenn.edu) at U Penn.</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="matlab-tool">
<spanid="matlab"></span><h2>9.19. matlab tool<aclass="headerlink"href="#matlab-tool"title="Permalink to this headline">¶</a></h2>
<p>The matlab sub-directory contains several MATLAB scripts for
post-processing LAMMPS output. The scripts include readers for log
and dump files, a reader for EAM potential files, and a converter that
reads LAMMPS dump files and produces CFG files that can be visualized
with the <aclass="reference external"href="http://mt.seas.upenn.edu/Archive/Graphics/A">AtomEye</a>
visualizer.</p>
<p>See the README.pdf file for more information.</p>
<p>These scripts were written by Arun Subramaniyan at Purdue Univ
(asubrama at purdue.edu).</p>
<hrclass="docutils"/>
</div>
<divclass="section"id="micelle2d-tool">
<spanid="micelle"></span><h2>9.20. micelle2d tool<aclass="headerlink"href="#micelle2d-tool"title="Permalink to this headline">¶</a></h2>
<p>The file micelle2d.f creates a LAMMPS data file containing short lipid
chains in a monomer solution. It uses a text file containing lipid
definition parameters as an input. The created molecules and solvent
atoms can strongly overlap, so LAMMPS needs to run the system
initially with a “soft” pair potential to un-overlap it. The syntax
for running the tool and a sample lipid definition file are provided
in the tools directory.</p>