Update Python docs

This commit is contained in:
Richard Berger
2020-10-01 15:00:08 -04:00
parent 507c2cb2a8
commit 533c453a08
3 changed files with 112 additions and 54 deletions

View File

@ -35,10 +35,10 @@ Both CMake and traditional make build options offer ways to automate these tasks
LAMMPS can be configured and compiled as shared library with CMake by enabling the ``BUILD_SHARED_LIBS`` option.
The file name of the shared library depends on the platform (Unix/Linux, MacOS, Windows) and the build configuration
being used. See :ref:`Build the LAMMPS executable and library <library>` for more details and how the name of the
shared library and executable is determined.
After compilation, the generated executables, shared library, Python module,
and other files can be installed to a custom location defined by the
``CMAKE_INSTALL_PREFIX`` setting. By default, this is set to the current
user's ``$HOME/.local`` directory. This installs the files in the following locations:
@ -52,6 +52,8 @@ Both CMake and traditional make build options offer ways to automate these tasks
| LAMMPS shared library | * ``$HOME/.local/lib/`` (32bit) | |
| | * ``$HOME/.local/lib64/`` (64bit) | |
+------------------------+-----------------------------------------------------------+-------------------------------------------------------------+
| LAMMPS executable | ``$HOME/.local/bin/`` | |
+------------------------+-----------------------------------------------------------+-------------------------------------------------------------+
| LAMMPS potential files | ``$HOME/.local/share/lammps/potentials/`` | |
+------------------------+-----------------------------------------------------------+-------------------------------------------------------------+
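As a quick, optional check after the installation, the following sketch prints where the Python
interpreter actually picked up the ``lammps`` module; the expected location under ``$HOME/.local``
is an assumption based on the table above:
.. code-block:: python
# optional sanity check: where was the lammps Python module installed?
# (expected under $HOME/.local/lib/pythonX.Y/site-packages/ per the table above)
import lammps
print(lammps.__file__)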
@ -77,7 +79,7 @@ Both CMake and traditional make build options offer ways to automate these tasks
# compile LAMMPS (in parallel for faster builds)
cmake --build . --parallel
# install LAMMPS into $HOME/.local
cmake --install .
2. Configure Environment Variables
@ -100,10 +102,16 @@ Both CMake and traditional make build options offer ways to automate these tasks
containing its potential files. This can be set with the ``LAMMPS_POTENTIALS``
environment variable:
.. code-block:: bash
export LAMMPS_POTENTIALS=$HOME/.local/share/lammps/potentials
If you also plan to use the LAMMPS executable (e.g., ``lmp``), set the ``PATH`` variable as well:
.. code-block:: bash
export PATH=$HOME/.local/bin:$PATH
To set these environment variables for each new shell, add the above
``export`` commands at the end of the ``$HOME/.bashrc`` file.
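A minimal sketch to verify the setup from a new shell (assuming the environment variables
described in this section are set): it checks that the variables are visible to Python and that
the LAMMPS shared library can be loaded.
.. code-block:: python
import os
from lammps import lammps
# the variables exported above should be visible to the Python process
print(os.environ.get("LAMMPS_POTENTIALS"))
# creating an instance loads the shared library and fails if it cannot be found
lmp = lammps()
print("LAMMPS version:", lmp.version())
lmp.close()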
@ -144,6 +152,8 @@ Both CMake and traditional make build options offer ways to automate these tasks
| LAMMPS shared library | * ``/usr/lib/`` (32bit) | |
| | * ``/usr/lib64/`` (64bit) | |
+------------------------+---------------------------------------------------+-------------------------------------------------------------+
| LAMMPS executable | ``/usr/bin/`` | |
+------------------------+---------------------------------------------------+-------------------------------------------------------------+
| LAMMPS potential files | ``/usr/share/lammps/potentials/`` | |
+------------------------+---------------------------------------------------+-------------------------------------------------------------+
@ -170,10 +180,10 @@ Both CMake and traditional make build options offer ways to automate these tasks
sudo cmake --install .
Unlike the local user installation, no additional environment
variables need to be set. The system locations such as ``/usr/lib`` and
``/usr/lib64`` are already part of the search path of the dynamic library
loader. Therefore ``LD_LIBRARY_PATH`` (or ``DYLD_LIBRARY_PATH`` on MacOS) does not
have to be set. The same is true for ``PATH``.
All other environment variables will be automatically set when
launching a new shell. This is due to files installed in system folders
@ -236,6 +246,8 @@ Both CMake and traditional make build options offer ways to automate these tasks
| LAMMPS shared library | * ``$VIRTUAL_ENV/lib/`` (32bit) | |
| | * ``$VIRTUAL_ENV/lib64/`` (64bit) | |
+------------------------+-----------------------------------------------------------+-------------------------------------------------------------+
| LAMMPS executable | ``$VIRTUAL_ENV/bin/`` | |
+------------------------+-----------------------------------------------------------+-------------------------------------------------------------+
| LAMMPS potential files | ``$VIRTUAL_ENV/share/lammps/potentials/`` | |
+------------------------+-----------------------------------------------------------+-------------------------------------------------------------+
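Analogous to the user installation, a small sketch (assuming the virtual environment is
activated) to confirm that the module resolves from inside the virtual environment; the expected
path is an assumption based on the table above:
.. code-block:: python
import os, sys
import lammps
# with the venv activated, sys.prefix should match $VIRTUAL_ENV
print(sys.prefix, os.environ.get("VIRTUAL_ENV"))
# expected under $VIRTUAL_ENV/lib/pythonX.Y/site-packages/
print(lammps.__file__)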
@ -337,7 +349,7 @@ Both CMake and traditional make build options offer ways to automate these tasks
wrap LAMMPS. On Linux this is a library file that ends in ``.so``, not
``.a``.
From the ``src`` directory, type
.. code-block:: bash
@ -375,7 +387,7 @@ Both CMake and traditional make build options offer ways to automate these tasks
If you set the paths to these files as environment variables, you only
have to do it once. For the csh or tcsh shells, add something like
this to your ``~/.cshrc`` file, one line for each of the two files:
.. code-block:: csh
@ -414,6 +426,8 @@ Both CMake and traditional make build options offer ways to automate these tasks
environment variable as described above.
.. _python_install_mpi4py:
Extending Python to run in parallel
===================================
@ -431,7 +445,7 @@ and as of version 2.0.0 mpi4py allows passing a custom MPI communicator
to the LAMMPS constructor, which means one can easily run one or more
LAMMPS instances on subsets of the total MPI ranks.
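For example, a minimal sketch (assuming an MPI-enabled LAMMPS build and mpi4py installed as
described below) that splits the world communicator and runs an independent LAMMPS instance on
each half:
.. code-block:: python
from mpi4py import MPI
from lammps import lammps
comm = MPI.COMM_WORLD
# split the world communicator into two groups of ranks
color = comm.Get_rank() % 2
split = comm.Split(color, key=comm.Get_rank())
# each group runs its own LAMMPS instance on its sub-communicator
lmp = lammps(comm=split)
print("world rank %d drives a LAMMPS instance on %d ranks (group %d)"
      % (comm.Get_rank(), split.Get_size(), color))
lmp.close()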
Install mpi4py via ``pip`` (version 3.0.3 as of Sep 2020):
.. tabs::
@ -458,6 +472,23 @@ To install mpi4py (version 3.0.3 as of Sep 2020),
For more detailed installation instructions, please see the `mpi4py installation <mpi4py_install>`_ page.
.. note::
To use mpi4py and LAMMPS in parallel from Python, you must
ensure both are using the same version of MPI. If you only have one
MPI installed on your system, this is not an issue, but it can be if
you have multiple MPIs. Your LAMMPS build is explicit about which MPI
it is using, since the MPI library is either detected during CMake
configuration or, with the traditional make build system, specified in
your low-level ``src/MAKE/Makefile.foo`` file. mpi4py uses the
``mpicc`` command to find information about the MPI library it builds
against, and it tries to load ``libmpi.so`` from the ``LD_LIBRARY_PATH``.
This may or may not find the MPI library that LAMMPS is using. If you
have problems running both mpi4py and LAMMPS together, this is an
issue you may need to address, e.g. by moving other MPI installations
so that mpi4py finds the right one.
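One way to check which MPI mpi4py was built against and which MPI library it loads at runtime
(a sketch; ``MPI.Get_library_version()`` requires an MPI-3 library):
.. code-block:: python
import mpi4py
from mpi4py import MPI
# build-time configuration (compiler wrappers, library directories)
print(mpi4py.get_config())
# identification string of the MPI library loaded at runtime
print(MPI.Get_library_version())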
If you have successfully installed mpi4py, you should be able to run
Python and type
@ -470,7 +501,7 @@ on a simple test script
.. code-block:: bash
$ mpirun -np 4 python3 test.py
where ``test.py`` contains the lines
@ -478,22 +509,15 @@ where ``test.py`` contains the lines
from mpi4py import MPI
comm = MPI.COMM_WORLD
print "Proc %d out of %d procs" % (comm.Get_rank(),comm.Get_size())
print("Proc %d out of %d procs" % (comm.Get_rank(),comm.Get_size()))
and see one line of output for each processor you run on.
.. code-block:: bash
# NOTE: the line order is not deterministic
$ mpirun -np 4 python3 test.py
Proc 0 out of 4 procs
Proc 1 out of 4 procs
Proc 2 out of 4 procs
Proc 3 out of 4 procs

View File

@ -46,7 +46,7 @@ launch one or more simulations. In Python lingo, this is called
Second, the lower-level Python interface can be used indirectly through
the provided :code:`PyLammps` and :code:`IPyLammps` wrapper classes, written in Python.
These wrappers try to simplify the usage of LAMMPS in Python by
providing an object-based interface to common LAMMPS functionality.
They also reduce the amount of code necessary to parameterize LAMMPS

View File

@ -5,9 +5,9 @@ Running LAMMPS and Python in serial:
-------------------------------------
To run LAMMPS in serial, type these lines into Python
interactively from the ``bench`` directory:
.. code-block:: python
>>> from lammps import lammps
>>> lmp = lammps()
@ -17,35 +17,22 @@ Or put the same lines in the file ``test.py`` and run it as
.. code-block:: bash
$ python3 test.py
Either way, you should see the results of running the ``in.lj`` benchmark
on a single processor appear on the screen, the same as if you had
typed something like:
.. code-block:: bash
lmp_serial -in in.lj
Running LAMMPS and Python in parallel with MPI (mpi4py)
-------------------------------------------------------
To run LAMMPS in parallel, assuming you have installed the
`mpi4py <https://mpi4py.readthedocs.io>`_ package as discussed
in :ref:`python_install_mpi4py`, create a ``test.py`` file containing these lines:
.. code-block:: python
@ -55,14 +42,61 @@ above, create a ``test.py`` file containing these lines:
lmp.file("in.lj")
me = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()
print "Proc %d out of %d procs has" % (me,nprocs),lmp
print("Proc %d out of %d procs has" % (me,nprocs),lmp)
MPI.Finalize()
You can run the script in parallel as:
.. code-block:: bash
$ mpirun -np 4 python3 test.py
and you should see the same output as if you had typed
.. code-block:: bash
$ mpirun -np 4 lmp_mpi -in in.lj
Note that without the mpi4py-specific lines, ``test.py`` reduces to
.. code-block:: python
from lammps import lammps
lmp = lammps()
lmp.file("in.lj")
Running this reduced script with ``mpirun`` on :math:`P` processors leads to
:math:`P` independent simulations running in parallel, each on a single
processor. Therefore, if you use the mpi4py lines and still see multiple
single-processor LAMMPS outputs, that means mpi4py isn't working correctly.
Also note that once you import the mpi4py module, mpi4py initializes MPI
for you, and you can use MPI calls directly in your Python script, as
described in the mpi4py documentation. The last line of your Python
script should be ``MPI.Finalize()``, to ensure MPI is shut down
correctly.
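As an illustration, here is a sketch (reusing the ``in.lj`` input from above) that mixes direct
MPI calls with the LAMMPS library interface:
.. code-block:: python
from mpi4py import MPI
from lammps import lammps
comm = MPI.COMM_WORLD
lmp = lammps()          # runs on MPI_COMM_WORLD by default
lmp.file("in.lj")
# use MPI directly, e.g. to print a summary only once from rank 0
natoms = lmp.get_natoms()
if comm.Get_rank() == 0:
    print("ran on %d procs, system has %d atoms" % (comm.Get_size(), natoms))
lmp.close()
MPI.Finalize()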
Running LAMMPS and Python in parallel with MPI (pypar)
------------------------------------------------------
To run LAMMPS in parallel, assuming you have installed the
`PyPar <https://github.com/daleroberts/pypar>`_ package as discussed
in :ref:`python_install_mpi4py`, create a ``test.py`` file containing these lines:
.. code-block:: python
import pypar
from lammps import lammps
lmp = lammps()
lmp.file("in.lj")
print("Proc %d out of %d procs has" % (pypar.rank(),pypar.size()), lmp)
pypar.finalize()
You can run the script in parallel as:
.. code-block:: bash
$ mpirun -np 4 python3 test.py
and you should see the same output as if you had typed
@ -72,7 +106,7 @@ and you should see the same output as if you had typed
Note that if you leave out the 3 lines from ``test.py`` that specify PyPar
commands, you will instantiate and run LAMMPS independently on each of
the :math:`P` processors specified in the ``mpirun`` command. In this case you
should get 4 sets of output, each showing that a LAMMPS run was made
on a single processor, instead of one set of output showing that
LAMMPS ran on 4 processors. If the 1-processor outputs occur, it
@ -84,8 +118,8 @@ described in the PyPar documentation. The last line of your Python
script should be ``pypar.finalize()``, to ensure MPI is shut down
correctly.
Running Python scripts
----------------------
Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways: