diff --git a/doc/Section_python.html b/doc/Section_python.html index b761ee73c3..46c4ab9783 100644 --- a/doc/Section_python.html +++ b/doc/Section_python.html @@ -14,8 +14,8 @@
This section describes how to build and use LAMMPS via a Python interface.
-For Python to use the LAMMPS interface, it needs to find two files. -The paths to these files need to be added to two environment variables -that Python checks. -
-The first is the environment variable PYTHONPATH. It needs -to include the directory where the python/lammps.py file is. -
-For the csh or tcsh shells, you could add something like this to your -~/.cshrc file: -
-setenv PYTHONPATH $PYTHONPATH:/home/sjplimp/lammps/python --
The second is the environment variable LD_LIBRARY_PATH, which is used -by the operating system to find dynamic shared libraries when it loads -them. It needs to include the directory where the shared LAMMPS -library will be. Normally this is the LAMMPS src dir, as explained in -the following section. -
-For the csh or tcsh shells, you could add something like this to your -~/.cshrc file: -
-setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/home/sjplimp/lammps/src --
As discussed below, if your LAMMPS build includes auxiliary libraries, -they must also be available as shared libraries for Python to -successfully load LAMMPS. If they are not in default places where the -operating system can find them, then you also have to add their paths -to the LD_LIBRARY_PATH environment variable. -
-For example, if you are using the dummy MPI library provided in -src/STUBS, you need to add something like this to your ~/.cshrc file: -
-setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/home/sjplimp/lammps/src/STUBS --
If you are using the LAMMPS USER-ATC package, you need to add -something like this to your ~/.cshrc file: -
-setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/home/sjplimp/lammps/lib/atc --
Instructions on how to build LAMMPS as a shared library are given in Section_start 5. A shared library is one that is dynamically loadable, which is what Python requires. On Linux this is a library file that ends in ".so", not ".a".
->From the src directory, type +
From the src directory, type
-make makeshlib -make -f Makefile.shlib foo -
-where foo is the machine target name, such as linux or g++ or serial. -This should create the file liblmp_foo.so in the src directory, as -well as a soft link liblmp.so which is what the Python wrapper will -load by default. If you are building multiple machine versions of the -shared library, the soft link is always set to the most recently built -version. -
-Note that as discussed in below, a LAMMPS build may depend on several -auxiliary libraries, which are specified in your low-level -src/Makefile.foo file. For example, an MPI library, the FFTW library, -a JPEG library, etc. Depending on what LAMMPS packages you have -installed, the build may also require additional libraries from the -lib directories, such as lib/atc/libatc.so or lib/reax/libreax.so. -
-You must insure that each of these libraries exist in shared library -form (*.so file for Linux systems), or either the LAMMPS shared -library build or the Python load of the library will fail. For the -load to be successful all the shared libraries must also be in -directories that the operating system checks. See the discussion in -the preceding section about the LD_LIBRARY_PATH environment variable -for how to insure this. -
-Note that some system libraries, such as MPI, if you installed it -yourself, may not be built by default as shared libraries. The build -instructions for the library should tell you how to do this. -
-For example, here is how to build and install the MPICH -library, a popular open-source version of MPI, distributed by -Argonne National Labs, as a shared library in the default -/usr/local/lib location: -
- - -./configure --enable-shared -make -make install +make makeshlib +make -f Makefile.shlib foo -You may need to use "sudo make install" in place of the last line if -you do not have write priveleges for /usr/local/lib. The end result -should be the file /usr/local/lib/libmpich.so. +
where foo is the machine target name, such as linux or g++ or serial. +This should create the file liblammps_foo.so in the src directory, as +well as a soft link liblammps.so, which is what the Python wrapper will +load by default. Note that if you are building multiple machine +versions of the shared library, the soft link is always set to the +most recently built version.
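The soft-link behavior described above can be sketched with dummy files; this is only an illustration of the re-pointing (the file names mimic the real liblammps_*.so ones, and no LAMMPS build is required):

```python
# Sketch: the liblammps.so soft link always points at the most recently
# built machine version. Dummy files stand in for the real libraries.
import os, tempfile

d = tempfile.mkdtemp()
os.chdir(d)
open("liblammps_g++.so", "w").close()
open("liblammps_serial.so", "w").close()

os.symlink("liblammps_g++.so", "liblammps.so")      # after first build
os.remove("liblammps.so")                            # a rebuild replaces it...
os.symlink("liblammps_serial.so", "liblammps.so")   # ...re-pointing the link
print(os.readlink("liblammps.so"))
```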
-Note that not all of the auxiliary libraries provided with LAMMPS have -shared-library Makefiles in their lib directories. Typically this -simply requires a Makefile.foo that adds a -fPIC switch when files are -compiled and a "-fPIC -shared" switches when the library is linked -with a C++ (or Fortran) compiler, as well as an output target that -ends in ".so", like libatc.o. As we or others create and contribute -these Makefiles, we will add them to the LAMMPS distribution. +
If this fails, see Section_start 5 for +more details, especially if your LAMMPS build uses auxiliary libraries +like MPI or FFTW which may not be built as shared libraries on your +system. +
+
+ +11.2 Installing the Python wrapper into Python +
+For Python to invoke LAMMPS, there are 2 files it needs to know about: +
+
Lammps.py is the Python wrapper on the LAMMPS library interface. +Liblammps.so is the shared LAMMPS library that Python loads, as +described above. +
+You can ensure Python can find these files in one of two ways: +
+If you set the paths to these files as environment variables, you only +have to do it once. For the csh or tcsh shells, add something like +this to your ~/.cshrc file, one line for each of the two files: +
+setenv PYTHONPATH $PYTHONPATH:/home/sjplimp/lammps/python +setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/home/sjplimp/lammps/src ++
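The mechanism behind PYTHONPATH can be illustrated without LAMMPS: directories listed in PYTHONPATH end up on sys.path, which is where Python searches for modules such as lammps.py. The module name below is hypothetical:

```python
# Minimal sketch of how PYTHONPATH works: a directory on the module search
# path (sys.path) makes its .py files importable. "dummy_lammps" is a
# stand-in for the real lammps.py module.
import os, sys, tempfile

moddir = tempfile.mkdtemp()
with open(os.path.join(moddir, "dummy_lammps.py"), "w") as f:
    f.write("MAGIC = 42\n")

sys.path.append(moddir)   # same effect as listing moddir in PYTHONPATH
import dummy_lammps
print(dummy_lammps.MAGIC)
```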
If you use the python/install.py script, you need to invoke it every +time you rebuild LAMMPS (as a shared library) or make changes to the +python/lammps.py file. +
+You can invoke install.py from the python directory as +
+% python install.py [libdir] [pydir] ++
The optional libdir is where to copy the LAMMPS shared library to; the +default is /usr/local/lib. The optional pydir is where to copy the +lammps.py file to; the default is the site-packages directory of the +version of Python that is running the install script. +
+Note that libdir must be a location that is in your default +LD_LIBRARY_PATH, like /usr/local/lib or /usr/lib. And pydir must be a +location that Python looks in by default for imported modules, like +its site-packages dir. If you want to copy these files to +non-standard locations, such as within your own user space, you will +need to set your PYTHONPATH and LD_LIBRARY_PATH environment variables +accordingly, as above. +
+If the install.py script does not allow you to copy files into system +directories, prefix the python command with "sudo". If you do this, +make sure that the Python that root runs is the same as the Python you +run. E.g. you may need to do something like +
+% sudo /usr/local/bin/python install.py [libdir] [pydir] ++
You can also invoke install.py from the make command in the src +directory as +
+% make install-python ++
In this mode you cannot append optional arguments. Again, you may +need to prefix this with "sudo". In this mode you cannot control +which Python is invoked by root. +
+Note that if you want Python to be able to load different versions of +the LAMMPS shared library (see this section below), you will +need to manually copy files like liblammps_g++.so into the appropriate +system directory. This is not needed if you set the LD_LIBRARY_PATH +environment variable as described above.
All of these except pyMPI work by wrapping the MPI library (which must -be available on your system as a shared library, as discussed above), -and exposing (some portion of) its interface to your Python script. -This means Python cannot be used interactively in parallel, since they -do not address the issue of interactive input to multiple instances of +
All of these except pyMPI work by wrapping the MPI library and +exposing (some portion of) its interface to your Python script. This +means Python cannot be used interactively in parallel, since they do +not address the issue of interactive input to multiple instances of Python running on different processors. The one exception is pyMPI, which alters the Python interpreter to address this issue, and (I believe) creates a new alternate executable (in place of "python" @@ -233,17 +220,17 @@ sudo python setup.py install
The "sudo" is only needed if required to copy Numpy files into your Python distribution's site-packages directory.
-To install Pypar (version pypar-2.1.0_66 as of April 2009), unpack it +
To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it and from its "source" directory, type
python setup.py build sudo python setup.py install-
Again, the "sudo" is only needed if required to copy PyPar files into +
Again, the "sudo" is only needed if required to copy Pypar files into your Python distribution's site-packages directory.
If you have successfully installed Pypar, you should be able to run -python serially and type +Python and type
import pypar@@ -259,6 +246,19 @@ print "Proc %d out of %d procs" % (pypar.rank(),pypar.size())
and see one line of output for each processor you run on.
+IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you +must ensure both are using the same version of MPI. If you only have +one MPI installed on your system, this is not an issue, but it can be +if you have multiple MPIs. Your LAMMPS build is explicit about which +MPI it is using, since you specify the details in your low-level +src/MAKE/Makefile.foo file. Pypar uses the "mpicc" command to find +information about the MPI it uses to build against. And it tries to +load "libmpi.so" from the LD_LIBRARY_PATH. This may or may not find +the MPI library that LAMMPS is using. If you have problems running +both Pypar and LAMMPS together, this is an issue you may need to +address, e.g. by moving other MPI installations so that Pypar finds +the right one. +
If you get no errors, you're ready to use LAMMPS from Python. If the load fails, the most common error to see is
-"CDLL: asdfasdfasdf" -
+OSError: Could not load LAMMPS dynamic library +
which means Python was unable to load the LAMMPS shared library. This -can occur if it can't find the LAMMMPS library; see the environment -variable discussion above. Or if it can't find one of the -auxiliary libraries that was specified in the LAMMPS build, in a -shared dynamic library format. This includes all libraries needed by -main LAMMPS (e.g. MPI or FFTW or JPEG), system libraries needed by -main LAMMPS (e.g. extra libs needed by MPI), or packages you have -installed that require libraries provided with LAMMPS (e.g. the -USER-ATC package require lib/atc/libatc.so) or system libraries -(e.g. BLAS or Fortran-to-C libraries) listed in the -lib/package/Makefile.lammps file. Again, all of these must be -available as shared libraries, or the Python load will fail. +typically occurs if the system can't find the LAMMPS shared library +or one of the auxiliary shared libraries it depends on.
Python (actually the operating system) isn't verbose about telling you -why the load failed, so go through the steps above and in -Section_start 5 carefully. +why the load failed, so carefully go through the steps above regarding +environment variables, and the instructions in Section_start +5 about building a shared library and +about setting the LD_LIBRARY_PATH environment variable.
Note that if you leave out the 3 lines from test.py that specify Pypar commands you will instantiate and run LAMMPS independently on each of the P processors specified in the mpirun command. In this case you -should get 4 sets of output, each showing that a run was made on a -single processor, instead of one set of output showing that it ran on -4 processors. If the 1-processor outputs occur, it means that Pypar -is not working correctly. +should get 4 sets of output, each showing that a LAMMPS run was made +on a single processor, instead of one set of output showing that +LAMMPS ran on 4 processors. If the 1-processor outputs occur, it +means that Pypar is not working correctly.
Also note that once you import the PyPar module, Pypar initializes MPI for you, and you can use MPI calls directly in your Python script, as @@ -345,6 +338,8 @@ described in the Pypar documentation. The last line of your Python script should be pypar.finalize(), to insure MPI is shut down correctly.
+Note that any Python script (not just for LAMMPS) can be invoked in one of several ways:
@@ -379,25 +374,18 @@ Python on a single processor, not in parallel. the source code for which is in python/lammps.py, which creates a "lammps" object, with a set of methods that can be invoked on that object. The sample Python code below assumes you have first imported -the "lammps" module in your Python script. You can also include its -settings as follows, which are useful in test return values from some -of the methods described below: +the "lammps" module in your Python script, as follows:from lammps import lammps -from lammps import LMPINT as INT -from lammps import LMPDOUBLE as DOUBLE -from lammps import LMPIPTR as IPTR -from lammps import LMPDPTR as DPTR -from lammps import LMPDPTRPTR as DPTRPTR
These are the methods defined by the lammps module. If you look at the file src/library.cpp you will see that they correspond one-to-one with calls you can make to the LAMMPS library from a C++ or C or Fortran program.
-lmp = lammps() # create a LAMMPS object using the default liblmp.so library
-lmp = lammps("g++") # create a LAMMPS object using the liblmp_g++.so library
-lmp = lammps("",list) # ditto, with command-line args, list = ["-echo","screen"]
+lmp = lammps() # create a LAMMPS object using the default liblammps.so library
+lmp = lammps("g++") # create a LAMMPS object using the liblammps_g++.so library
+lmp = lammps("",list) # ditto, with command-line args, e.g. list = ["-echo","screen"]
lmp = lammps("g++",list)
lmp.close() # destroy a LAMMPS object
@@ -406,16 +394,20 @@ lmp = lammps("g++",list)
lmp.command(cmd) # invoke a single LAMMPS command, cmd = "run 100"
xlo = lmp.extract_global(name,type) # extract a global quantity
- # name = "boxxlo", "nlocal", etc
- # type = INT or DOUBLE
+ # name = "boxxlo", "nlocal", etc
+ # type = 0 = int
+ # 1 = double
coords = lmp.extract_atom(name,type) # extract a per-atom quantity
- # name = "x", "type", etc
- # type = IPTR or DPTR or DPTRPTR
+ # name = "x", "type", etc
+ # type = 0 = vector of ints
+ # 1 = array of ints
+ # 2 = vector of doubles
+ # 3 = array of doubles
eng = lmp.extract_compute(id,style,type) # extract value(s) from a compute
v3 = lmp.extract_fix(id,style,type,i,j) # extract value(s) from a fix
- # id = ID of compute or fix
+ # id = ID of compute or fix
# style = 0 = global data
# 1 = per-atom data
# 2 = local data
@@ -431,18 +423,23 @@ v3 = lmp.extract_fix(id,style,type,i,j) # extract value(s) from a fix
# 1 = atom-style variable
natoms = lmp.get_natoms() # total # of atoms as int
-x = lmp.get_coords() # return coords of all atoms in x
-lmp.put_coords(x) # set all atom coords via x
+data = lmp.gather_atoms(name,type,count) # return atom attribute of all atoms gathered into data, ordered by atom ID
+ # name = "x", "charge", "type", etc
+ # count = # of per-atom values, 1 or 3, etc
+lmp.scatter_atoms(name,type,count,data) # scatter atom attribute of all atoms from data, ordered by atom ID
+ # name = "x", "charge", "type", etc
+ # count = # of per-atom values, 1 or 3, etc
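The optional command-line args passed as a list to lammps() end up as a C-style argc/argv pair inside the library call. A ctypes sketch of that conversion, runnable without LAMMPS (the variable names here are illustrative, not taken from lammps.py):

```python
# Sketch: converting a Python list of command-line arguments into the
# argc/argv pair a C shared library expects, via ctypes.
from ctypes import c_char_p

cmdargs = ["lammps", "-echo", "screen"]
argc = len(cmdargs)
argv = (c_char_p * argc)(*[s.encode() for s in cmdargs])  # char* array

print(argc)
print(argv[1])
```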
-IMPORTANT NOTE: Currently, the creation of a LAMMPS object does not
-take an MPI communicator as an argument. There should be a way to do
-this, so that the LAMMPS instance runs on a subset of processors if
-desired, but I don't know how to do it from Pypar. So for now, it
-runs on MPI_COMM_WORLD, which is all the processors. If someone
-figures out how to do this with one or more of the Python wrappers for
-MPI, like Pypar, please let us know and we will amend these doc pages.
+
IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
+lammps.py does not take an MPI communicator as an argument. There
+should be a way to do this, so that the LAMMPS instance runs on a
+subset of processors if desired, but I don't know how to do it from
+Pypar. So for now, it runs with MPI_COMM_WORLD, which is all the
+processors. If someone figures out how to do this with one or more of
+the Python wrappers for MPI, like Pypar, please let us know and we
+will amend these doc pages.
Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.
@@ -470,8 +467,8 @@ returned, which you can use via normal Python subscripting. See the
extract() method in the src/atom.cpp file for a list of valid names.
Again, new names could easily be added. A pointer to a vector of
doubles or integers, or a pointer to an array of doubles (double **)
-is returned. You need to specify the appropriate data type via the
-type argument.
+or integers (int **) is returned. You need to specify the appropriate
+data type via the type argument.
For extract_compute() and extract_fix(), the global, per-atom, or
local data calculated by the compute or fix can be accessed. What is
@@ -499,58 +496,57 @@ Python subscripting. The values will be zero for atoms not in the
specified group.
The get_natoms() method returns the total number of atoms in the
-simulation, as an int. Note that extract_global("natoms") returns the
-same value, but as a double, which is the way LAMMPS stores it to
-allow for systems with more atoms than can be stored in an int (> 2
-billion).
+simulation, as an int.
-The get_coords() method returns an ctypes vector of doubles of length
-3*natoms, for the coordinates of all the atoms in the simulation,
-ordered by x,y,z and then by atom ID (see code for put_coords()
-below). The array can be used via normal Python subscripting. If
-atom IDs are not consecutively ordered within LAMMPS, a None is
-returned as indication of an error.
+
The gather_atoms() method returns a ctypes vector of ints or doubles
+as specified by type, of length count*natoms, for the property of all
+the atoms in the simulation specified by name, ordered by count and
+then by atom ID. The vector can be used via normal Python
+subscripting. If atom IDs are not consecutively ordered within
+LAMMPS, None is returned as an indication of an error.
-Note that the data structure get_coords() returns is different from
-the data structure returned by extract_atom("x") in four ways. (1)
-Get_coords() returns a vector which you index as x[i];
+
Note that the data structure gather_atoms("x") returns is different
+from the data structure returned by extract_atom("x") in four ways.
+(1) Gather_atoms() returns a vector which you index as x[i];
extract_atom() returns an array which you index as x[i][j]. (2)
-Get_coords() orders the atoms by atom ID while extract_atom() does
-not. (3) Get_coords() returns a list of all atoms in the simulation;
-extract_atoms() returns just the atoms local to each processor. (4)
-Finally, the get_coords() data structure is a copy of the atom coords
-stored internally in LAMMPS, whereas extract_atom returns an array
-that points directly to the internal data. This means you can change
-values inside LAMMPS from Python by assigning a new values to the
-extract_atom() array. To do this with the get_atoms() vector, you
-need to change values in the vector, then invoke the put_coords()
-method.
+Gather_atoms() orders the atoms by atom ID while extract_atom() does
+not. (3) Gather_atoms() returns a list of all atoms in the
+simulation; extract_atoms() returns just the atoms local to each
+processor. (4) Finally, the gather_atoms() data structure is a copy
+of the atom coords stored internally in LAMMPS, whereas extract_atom()
+returns an array that effectively points directly to the internal
+data. This means you can change values inside LAMMPS from Python by
+assigning new values to the extract_atom() array. To do this with
+the gather_atoms() vector, you need to change values in the vector,
+then invoke the scatter_atoms() method.
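Differences (1) and (4) above can be demonstrated with plain ctypes, with no LAMMPS installation: a flat vector indexed x[3*i+j] versus a double** view over the same storage indexed x[i][j], where writing through the pointer view changes the underlying data. This is only a sketch of the data layouts, not LAMMPS's internal code:

```python
# Sketch: gather_atoms()-style flat vector vs extract_atom()-style
# double** array, built here over the same ctypes storage.
from ctypes import POINTER, addressof, c_double, cast, sizeof

natoms = 2
# flat vector of 3 doubles per atom, as gather_atoms("x",1,3) returns
flat = (c_double * (3 * natoms))(1.0, 2.0, 3.0, 4.0, 5.0, 6.0)

# row pointers over the same memory, mimicking a double** array
rows = (POINTER(c_double) * natoms)()
for i in range(natoms):
    rows[i] = cast(addressof(flat) + i * 3 * sizeof(c_double),
                   POINTER(c_double))

print(flat[3 * 1 + 2])   # flat indexing x[3*i+j]
print(rows[1][2])        # array indexing x[i][j], same element

rows[1][2] = 9.0         # writing through the pointer view...
print(flat[5])           # ...changes the underlying storage
```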
-The put_coords() method takes a vector of coordinates for all atoms in
-the simulation, assumed to be ordered by x,y,z and then by atom ID,
-and uses the values to overwrite the corresponding coordinates for
-each atom inside LAMMPS. This requires LAMMPS to have its "map"
-option enabled; see the atom_modify command for
-details. If it is not or if atom IDs are not consecutively ordered,
-no coordinates are reset,
+
The scatter_atoms() method takes a vector of ints or doubles as
+specified by type, of length count*natoms, for the property of all the
+atoms in the simulation specified by name, ordered by count and then
+by atom ID. It uses the vector of data to overwrite the corresponding
+properties for each atom inside LAMMPS. This requires LAMMPS to have
+its "map" option enabled; see the atom_modify
+command for details. If it is not, or if atom IDs are not
+consecutively ordered, no coordinates are reset.
-The array of coordinates passed to put_coords() must be a ctypes
-vector of doubles, allocated and initialized something like this:
+
The array of coordinates passed to scatter_atoms() must be a ctypes
+vector of ints or doubles, allocated and initialized something like
+this:
from ctypes import *
-natoms = lmp.get_atoms()
+natoms = lmp.get_natoms()
n3 = 3*natoms
-x = (c_double*n3)()
+x = (n3*c_double)()
x[0] = x coord of atom with ID 1
x[1] = y coord of atom with ID 1
x[2] = z coord of atom with ID 1
x[3] = x coord of atom with ID 2
...
x[n3-1] = z coord of atom with ID natoms
-lmp.put_coords(x)
+lmp.scatter_atoms("x",1,3,x)
Alternatively, you can just change values in the vector returned by
-get_coords(), since it is a ctypes vector of doubles.
+gather_atoms("x",1,3), since it is a ctypes vector of doubles.
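The allocation sketch above can be made concrete. In a real script natoms would come from lmp.get_natoms() and the vector would be passed to lmp.scatter_atoms("x",1,3,x); both need a LAMMPS build, so here we only allocate and fill the ctypes vector:

```python
# Sketch: building the flat ctypes vector of coordinates expected by
# scatter_atoms(). natoms is hard-coded as a stand-in for
# lmp.get_natoms().
from ctypes import c_double

natoms = 2
n3 = 3 * natoms
x = (c_double * n3)()               # n3 doubles, zero-initialized
x[0], x[1], x[2] = 1.0, 2.0, 3.0    # x,y,z of atom with ID 1
x[3], x[4], x[5] = 4.0, 5.0, 6.0    # x,y,z of atom with ID 2
print(list(x))
```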
diff --git a/doc/Section_python.txt b/doc/Section_python.txt
index 7386e8b53e..3c29fe6fe6 100644
--- a/doc/Section_python.txt
+++ b/doc/Section_python.txt
@@ -11,8 +11,8 @@
This section describes how to build and use LAMMPS via a Python
interface.
-11.1 "Setting necessary environment variables"_#py_1
-11.2 "Building LAMMPS as a shared library"_#py_2
+11.1 "Building LAMMPS as a shared library"_#py_1
+11.2 "Installing the Python wrapper into Python"_#py_2
11.3 "Extending Python with MPI to run in parallel"_#py_3
11.4 "Testing the Python-LAMMPS interface"_#py_4
11.5 "Using LAMMPS from Python"_#py_5
@@ -72,109 +72,97 @@ check which version of Python you have installed, by simply typing
:line
:line
-11.1 Setting necessary environment variables :link(py_1),h4
-
-For Python to use the LAMMPS interface, it needs to find two files.
-The paths to these files need to be added to two environment variables
-that Python checks.
-
-The first is the environment variable PYTHONPATH. It needs
-to include the directory where the python/lammps.py file is.
-
-For the csh or tcsh shells, you could add something like this to your
-~/.cshrc file:
-
-setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python :pre
-
-The second is the environment variable LD_LIBRARY_PATH, which is used
-by the operating system to find dynamic shared libraries when it loads
-them. It needs to include the directory where the shared LAMMPS
-library will be. Normally this is the LAMMPS src dir, as explained in
-the following section.
-
-For the csh or tcsh shells, you could add something like this to your
-~/.cshrc file:
-
-setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
-
-As discussed below, if your LAMMPS build includes auxiliary libraries,
-they must also be available as shared libraries for Python to
-successfully load LAMMPS. If they are not in default places where the
-operating system can find them, then you also have to add their paths
-to the LD_LIBRARY_PATH environment variable.
-
-For example, if you are using the dummy MPI library provided in
-src/STUBS, you need to add something like this to your ~/.cshrc file:
-
-setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src/STUBS :pre
-
-If you are using the LAMMPS USER-ATC package, you need to add
-something like this to your ~/.cshrc file:
-
-setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/lib/atc :pre
-
-:line
-
-11.2 Building LAMMPS as a shared library :link(py_2),h4
+11.1 Building LAMMPS as a shared library :link(py_1),h4
Instructions on how to build LAMMPS as a shared library are given in
"Section_start 5"_Section_start.html#start_5. A shared library is one
that is dynamically loadable, which is what Python requires. On Linux
this is a library file that ends in ".so", not ".a".
->From the src directory, type
+From the src directory, type
make makeshlib
-make -f Makefile.shlib foo
+make -f Makefile.shlib foo :pre
where foo is the machine target name, such as linux or g++ or serial.
-This should create the file liblmp_foo.so in the src directory, as
-well as a soft link liblmp.so which is what the Python wrapper will
-load by default. If you are building multiple machine versions of the
-shared library, the soft link is always set to the most recently built
-version.
+This should create the file liblammps_foo.so in the src directory, as
+well as a soft link liblammps.so, which is what the Python wrapper will
+load by default. Note that if you are building multiple machine
+versions of the shared library, the soft link is always set to the
+most recently built version.
-Note that as discussed in below, a LAMMPS build may depend on several
-auxiliary libraries, which are specified in your low-level
-src/Makefile.foo file. For example, an MPI library, the FFTW library,
-a JPEG library, etc. Depending on what LAMMPS packages you have
-installed, the build may also require additional libraries from the
-lib directories, such as lib/atc/libatc.so or lib/reax/libreax.so.
+If this fails, see "Section_start 5"_Section_start.html#start_5 for
+more details, especially if your LAMMPS build uses auxiliary libraries
+like MPI or FFTW which may not be built as shared libraries on your
+system.
-You must insure that each of these libraries exist in shared library
-form (*.so file for Linux systems), or either the LAMMPS shared
-library build or the Python load of the library will fail. For the
-load to be successful all the shared libraries must also be in
-directories that the operating system checks. See the discussion in
-the preceding section about the LD_LIBRARY_PATH environment variable
-for how to insure this.
+:line
-Note that some system libraries, such as MPI, if you installed it
-yourself, may not be built by default as shared libraries. The build
-instructions for the library should tell you how to do this.
+11.2 Installing the Python wrapper into Python :link(py_2),h4
-For example, here is how to build and install the "MPICH
-library"_mpich, a popular open-source version of MPI, distributed by
-Argonne National Labs, as a shared library in the default
-/usr/local/lib location:
+For Python to invoke LAMMPS, there are 2 files it needs to know about:
-:link(mpich,http://www-unix.mcs.anl.gov/mpi)
+python/lammps.py
+src/liblammps.so :ul
-./configure --enable-shared
-make
-make install :pre
+Lammps.py is the Python wrapper on the LAMMPS library interface.
+Liblammps.so is the shared LAMMPS library that Python loads, as
+described above.
-You may need to use "sudo make install" in place of the last line if
-you do not have write priveleges for /usr/local/lib. The end result
-should be the file /usr/local/lib/libmpich.so.
+You can ensure Python can find these files in one of two ways:
-Note that not all of the auxiliary libraries provided with LAMMPS have
-shared-library Makefiles in their lib directories. Typically this
-simply requires a Makefile.foo that adds a -fPIC switch when files are
-compiled and a "-fPIC -shared" switches when the library is linked
-with a C++ (or Fortran) compiler, as well as an output target that
-ends in ".so", like libatc.o. As we or others create and contribute
-these Makefiles, we will add them to the LAMMPS distribution.
+set two environment variables
+run the python/install.py script :ul
+
+If you set the paths to these files as environment variables, you only
+have to do it once. For the csh or tcsh shells, add something like
+this to your ~/.cshrc file, one line for each of the two files:
+
+setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python
+setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
+
+If you use the python/install.py script, you need to invoke it every
+time you rebuild LAMMPS (as a shared library) or make changes to the
+python/lammps.py file.
+
+You can invoke install.py from the python directory as
+
+% python install.py [libdir] [pydir] :pre
+
+The optional libdir is where to copy the LAMMPS shared library to; the
+default is /usr/local/lib. The optional pydir is where to copy the
+lammps.py file to; the default is the site-packages directory of the
+version of Python that is running the install script.
+
+Note that libdir must be a location that is in your default
+LD_LIBRARY_PATH, like /usr/local/lib or /usr/lib. And pydir must be a
+location that Python looks in by default for imported modules, like
+its site-packages dir. If you want to copy these files to
+non-standard locations, such as within your own user space, you will
+need to set your PYTHONPATH and LD_LIBRARY_PATH environment variables
+accordingly, as above.
+
+If the install.py script does not allow you to copy files into system
+directories, prefix the python command with "sudo". If you do this,
+make sure that the Python that root runs is the same as the Python you
+run. E.g. you may need to do something like
+
+% sudo /usr/local/bin/python install.py [libdir] [pydir] :pre
+
+You can also invoke install.py from the make command in the src
+directory as
+
+% make install-python :pre
+
+In this mode you cannot append optional arguments. Again, you may
+need to prefix this with "sudo". In this mode you cannot control
+which Python is invoked by root.
+
+Note that if you want Python to be able to load different versions of
+the LAMMPS shared library (see "this section"_#py_5 below), you will
+need to manually copy files like liblammps_g++.so into the appropriate
+system directory. This is not needed if you set the LD_LIBRARY_PATH
+environment variable as described above.
:line
@@ -193,13 +181,12 @@ These include
"maroonmpi"_http://code.google.com/p/maroonmpi/
"mpi4py"_http://code.google.com/p/mpi4py/
"myMPI"_http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16
-"Pypar"_http://datamining.anu.edu.au/~ole/pypar :ul
+"Pypar"_http://code.google.com/p/pypar :ul
-All of these except pyMPI work by wrapping the MPI library (which must
-be available on your system as a shared library, as discussed above),
-and exposing (some portion of) its interface to your Python script.
-This means Python cannot be used interactively in parallel, since they
-do not address the issue of interactive input to multiple instances of
+All of these except pyMPI work by wrapping the MPI library and
+exposing (some portion of) its interface to your Python script. This
+means Python cannot be used interactively in parallel, since they do
+not address the issue of interactive input to multiple instances of
Python running on different processors. The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of "python"
@@ -229,17 +216,17 @@ sudo python setup.py install :pre
The "sudo" is only needed if required to copy Numpy files into your
Python distribution's site-packages directory.
-To install Pypar (version pypar-2.1.0_66 as of April 2009), unpack it
+To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
and from its "source" directory, type
python setup.py build
sudo python setup.py install :pre
-Again, the "sudo" is only needed if required to copy PyPar files into
+Again, the "sudo" is only needed if required to copy Pypar files into
your Python distribution's site-packages directory.
If you have successfully installed Pypar, you should be able to run
-python serially and type
+Python and type
import pypar :pre
@@ -255,6 +242,19 @@ print "Proc %d out of %d procs" % (pypar.rank(),pypar.size()) :pre
and see one line of output for each processor you run on.
+IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
+must insure both are using the same version of MPI. If you only have
+one MPI installed on your system, this is not an issue, but it can be
+if you have multiple MPIs. Your LAMMPS build is explicit about which
+MPI it is using, since you specify the details in your low-level
+src/MAKE/Makefile.foo file.  Pypar uses the "mpicc" command to find
+information about the MPI version it builds against.  And it tries to
+load "libmpi.so" from the LD_LIBRARY_PATH. This may or may not find
+the MPI library that LAMMPS is using. If you have problems running
+both Pypar and LAMMPS together, this is an issue you may need to
+address, e.g. by moving other MPI installations so that Pypar finds
+the right one.
+
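One quick, hedged way to see which MPI shared library the dynamic loader would pick up (to compare against the one your LAMMPS Makefile.foo names) is via Python's standard ctypes utilities. This is only a heuristic sketch, not part of the LAMMPS or Pypar APIs:

```python
# Sketch: ask the dynamic loader which libmpi it would find, to compare
# against the MPI your LAMMPS build links to.  find_library() may
# return None if no libmpi is on the standard search paths or
# LD_LIBRARY_PATH.
from ctypes.util import find_library

libmpi = find_library("mpi")
print("libmpi found by the loader:", libmpi)
```

If this prints a library from a different MPI installation than the one named in your Makefile.foo, that mismatch is a likely culprit.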
:line
11.4 Testing the Python-LAMMPS interface :link(py_4),h4
@@ -268,24 +268,17 @@ and type:
If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
-"CDLL: asdfasdfasdf"
+OSError: Could not load LAMMPS dynamic library :pre
which means Python was unable to load the LAMMPS shared library. This
-can occur if it can't find the LAMMMPS library; see the environment
-variable discussion "above"_#python_1. Or if it can't find one of the
-auxiliary libraries that was specified in the LAMMPS build, in a
-shared dynamic library format. This includes all libraries needed by
-main LAMMPS (e.g. MPI or FFTW or JPEG), system libraries needed by
-main LAMMPS (e.g. extra libs needed by MPI), or packages you have
-installed that require libraries provided with LAMMPS (e.g. the
-USER-ATC package require lib/atc/libatc.so) or system libraries
-(e.g. BLAS or Fortran-to-C libraries) listed in the
-lib/package/Makefile.lammps file. Again, all of these must be
-available as shared libraries, or the Python load will fail.
+typically occurs if the system can't find the LAMMPS shared library
+or one of the auxiliary shared libraries it depends on.
Python (actually the operating system) isn't verbose about telling you
-why the load failed, so go through the steps above and in
-"Section_start 5"_Section_start.html#start_5 carefully.
+why the load failed, so carefully go through the steps above regarding
+environment variables, and the instructions in "Section_start
+5"_Section_start.html#start_5 about building a shared library and
+about setting the LD_LIBRARY_PATH environment variable.
[Test LAMMPS and Python in serial:] :h5
@@ -330,10 +323,10 @@ and you should see the same output as if you had typed
Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command. In this case you
-should get 4 sets of output, each showing that a run was made on a
-single processor, instead of one set of output showing that it ran on
-4 processors. If the 1-processor outputs occur, it means that Pypar
-is not working correctly.
+should get 4 sets of output, each showing that a LAMMPS run was made
+on a single processor, instead of one set of output showing that
+LAMMPS ran on 4 processors. If the 1-processor outputs occur, it
+means that Pypar is not working correctly.
Also note that once you import the PyPar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
@@ -341,6 +334,8 @@ described in the Pypar documentation. The last line of your Python
script should be pypar.finalize(), to insure MPI is shut down
correctly.
+[Running Python scripts:] :h5
+
Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
@@ -374,25 +369,18 @@ The Python interface to LAMMPS consists of a Python "lammps" module,
the source code for which is in python/lammps.py, which creates a
"lammps" object, with a set of methods that can be invoked on that
object. The sample Python code below assumes you have first imported
-the "lammps" module in your Python script. You can also include its
-settings as follows, which are useful in test return values from some
-of the methods described below:
+the "lammps" module in your Python script, as follows:
-from lammps import lammps
-from lammps import LMPINT as INT
-from lammps import LMPDOUBLE as DOUBLE
-from lammps import LMPIPTR as IPTR
-from lammps import LMPDPTR as DPTR
-from lammps import LMPDPTRPTR as DPTRPTR :pre
+from lammps import lammps :pre
These are the methods defined by the lammps module. If you look
at the file src/library.cpp you will see that they correspond
one-to-one with calls you can make to the LAMMPS library from a C++ or
C or Fortran program.
-lmp = lammps() # create a LAMMPS object using the default liblmp.so library
-lmp = lammps("g++") # create a LAMMPS object using the liblmp_g++.so library
-lmp = lammps("",list) # ditto, with command-line args, list = \["-echo","screen"\]
+lmp = lammps() # create a LAMMPS object using the default liblammps.so library
+lmp = lammps("g++") # create a LAMMPS object using the liblammps_g++.so library
+lmp = lammps("",list) # ditto, with command-line args, e.g. list = \["-echo","screen"\]
lmp = lammps("g++",list) :pre
lmp.close() # destroy a LAMMPS object :pre
@@ -401,16 +389,20 @@ lmp.file(file) # run an entire input script, file = "in.lj"
lmp.command(cmd) # invoke a single LAMMPS command, cmd = "run 100" :pre
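Put together, a minimal driver script built from the calls above might look like the following sketch. It assumes LAMMPS was built as a shared library findable via LD_LIBRARY_PATH, that python/lammps.py is on your PYTHONPATH, and that an input script named "in.lj" exists (a hypothetical filename):

```python
# Minimal sketch of driving LAMMPS from Python; assumes the lammps
# module and shared library are installed as described above, and that
# an input script "in.lj" exists (hypothetical filename).
from lammps import lammps

lmp = lammps()           # loads the default liblammps.so
lmp.file("in.lj")        # run an entire input script
lmp.command("run 100")   # then issue a single additional command
print(lmp.get_natoms())  # total number of atoms, as an int
lmp.close()              # destroy the LAMMPS object
```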
xlo = lmp.extract_global(name,type) # extract a global quantity
- # name = "boxxlo", "nlocal", etc
- # type = INT or DOUBLE :pre
+ # name = "boxxlo", "nlocal", etc
+ # type = 0 = int
+ # 1 = double :pre
coords = lmp.extract_atom(name,type) # extract a per-atom quantity
- # name = "x", "type", etc
- # type = IPTR or DPTR or DPTRPTR :pre
+ # name = "x", "type", etc
+ # type = 0 = vector of ints
+ # 1 = array of ints
+ # 2 = vector of doubles
+ # 3 = array of doubles :pre
eng = lmp.extract_compute(id,style,type) # extract value(s) from a compute
v3 = lmp.extract_fix(id,style,type,i,j) # extract value(s) from a fix
- # id = ID of compute or fix
+ # id = ID of compute or fix
# style = 0 = global data
# 1 = per-atom data
# 2 = local data
@@ -426,18 +418,23 @@ var = lmp.extract_variable(name,group,flag) # extract value(s) from a variable
# 1 = atom-style variable :pre
natoms = lmp.get_natoms() # total # of atoms as int
-x = lmp.get_coords() # return coords of all atoms in x
-lmp.put_coords(x) # set all atom coords via x :pre
+data = lmp.gather_atoms(name,type,count) # return atom attribute of all atoms gathered into data, ordered by atom ID
+                                         # name = "x", "charge", "type", etc
+                                         # type = 0 = int, 1 = double
+                                         # count = # of per-atom values, 1 or 3, etc
+lmp.scatter_atoms(name,type,count,data)  # scatter atom attribute of all atoms from data, ordered by atom ID
+                                         # name = "x", "charge", "type", etc
+                                         # type = 0 = int, 1 = double
+                                         # count = # of per-atom values, 1 or 3, etc :pre
:line
-IMPORTANT NOTE: Currently, the creation of a LAMMPS object does not
-take an MPI communicator as an argument. There should be a way to do
-this, so that the LAMMPS instance runs on a subset of processors if
-desired, but I don't know how to do it from Pypar. So for now, it
-runs on MPI_COMM_WORLD, which is all the processors. If someone
-figures out how to do this with one or more of the Python wrappers for
-MPI, like Pypar, please let us know and we will amend these doc pages.
+IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
+lammps.py does not take an MPI communicator as an argument. There
+should be a way to do this, so that the LAMMPS instance runs on a
+subset of processors if desired, but I don't know how to do it from
+Pypar. So for now, it runs with MPI_COMM_WORLD, which is all the
+processors. If someone figures out how to do this with one or more of
+the Python wrappers for MPI, like Pypar, please let us know and we
+will amend these doc pages.
Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.
@@ -465,8 +462,8 @@ returned, which you can use via normal Python subscripting. See the
extract() method in the src/atom.cpp file for a list of valid names.
Again, new names could easily be added. A pointer to a vector of
doubles or integers, or a pointer to an array of doubles (double **)
-is returned. You need to specify the appropriate data type via the
-type argument.
+or integers (int **) is returned. You need to specify the appropriate
+data type via the type argument.
For extract_compute() and extract_fix(), the global, per-atom, or
local data calculated by the compute or fix can be accessed.  What is
@@ -494,58 +491,57 @@ Python subscripting. The values will be zero for atoms not in the
specified group.
The get_natoms() method returns the total number of atoms in the
-simulation, as an int. Note that extract_global("natoms") returns the
-same value, but as a double, which is the way LAMMPS stores it to
-allow for systems with more atoms than can be stored in an int (> 2
-billion).
+simulation, as an int.
-The get_coords() method returns an ctypes vector of doubles of length
-3*natoms, for the coordinates of all the atoms in the simulation,
-ordered by x,y,z and then by atom ID (see code for put_coords()
-below). The array can be used via normal Python subscripting. If
-atom IDs are not consecutively ordered within LAMMPS, a None is
-returned as indication of an error.
+The gather_atoms() method returns a ctypes vector of ints or doubles
+as specified by type, of length count*natoms, for the property of all
+the atoms in the simulation specified by name, ordered by count and
+then by atom ID. The vector can be used via normal Python
+subscripting. If atom IDs are not consecutively ordered within
+LAMMPS, None is returned as an indication of an error.
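The flat ordering can be illustrated with plain ctypes, independent of LAMMPS: for count values per atom (count = 3 for coordinates), value j of the atom with ID i (IDs start at 1) sits at index count*(i-1)+j.

```python
# Illustrates the flat ordering of a gather_atoms()-style vector using
# plain ctypes (no LAMMPS needed): count values per atom, ordered by
# atom ID.  Here count = 3, as for coordinates.
from ctypes import c_double

natoms = 4
count = 3
x = (c_double * (count * natoms))()  # flat vector, length count*natoms

# store fake coordinates: the atom with ID i gets (i, i, i)
for i in range(1, natoms + 1):
    for j in range(count):
        x[count * (i - 1) + j] = float(i)

# y coordinate of atom with ID 3 sits at index 3*(3-1)+1 = 7
print(x[7])  # -> 3.0
```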
-Note that the data structure get_coords() returns is different from
-the data structure returned by extract_atom("x") in four ways. (1)
-Get_coords() returns a vector which you index as x\[i\];
+Note that the data structure gather_atoms("x") returns is different
+from the data structure returned by extract_atom("x") in four ways.
+(1) Gather_atoms() returns a vector which you index as x\[i\];
extract_atom() returns an array which you index as x\[i\]\[j\]. (2)
-Get_coords() orders the atoms by atom ID while extract_atom() does
-not. (3) Get_coords() returns a list of all atoms in the simulation;
-extract_atoms() returns just the atoms local to each processor. (4)
-Finally, the get_coords() data structure is a copy of the atom coords
-stored internally in LAMMPS, whereas extract_atom returns an array
-that points directly to the internal data. This means you can change
-values inside LAMMPS from Python by assigning a new values to the
-extract_atom() array. To do this with the get_atoms() vector, you
-need to change values in the vector, then invoke the put_coords()
-method.
+Gather_atoms() orders the atoms by atom ID while extract_atom() does
+not.  (3) Gather_atoms() returns a list of all atoms in the
+simulation; extract_atoms() returns just the atoms local to each
+processor. (4) Finally, the gather_atoms() data structure is a copy
+of the atom coords stored internally in LAMMPS, whereas extract_atom()
+returns an array that effectively points directly to the internal
+data. This means you can change values inside LAMMPS from Python by
+assigning new values to the extract_atom() array.  To do this with
+the gather_atoms() vector, you need to change values in the vector,
+then invoke the scatter_atoms() method.
-The put_coords() method takes a vector of coordinates for all atoms in
-the simulation, assumed to be ordered by x,y,z and then by atom ID,
-and uses the values to overwrite the corresponding coordinates for
-each atom inside LAMMPS. This requires LAMMPS to have its "map"
-option enabled; see the "atom_modify"_atom_modify.html command for
-details. If it is not or if atom IDs are not consecutively ordered,
-no coordinates are reset,
+The scatter_atoms() method takes a vector of ints or doubles as
+specified by type, of length count*natoms, for the property of all the
+atoms in the simulation specified by name, ordered by count and then
+by atom ID. It uses the vector of data to overwrite the corresponding
+properties for each atom inside LAMMPS. This requires LAMMPS to have
+its "map" option enabled; see the "atom_modify"_atom_modify.html
+command for details. If it is not, or if atom IDs are not
+consecutively ordered, no coordinates are reset.
-The array of coordinates passed to put_coords() must be a ctypes
-vector of doubles, allocated and initialized something like this:
+The array of coordinates passed to scatter_atoms() must be a ctypes
+vector of ints or doubles, allocated and initialized something like
+this:
from ctypes import *
-natoms = lmp.get_atoms()
+natoms = lmp.get_natoms()
n3 = 3*natoms
-x = (c_double*n3)()
+x = (n3*c_double)()
x[0] = x coord of atom with ID 1
x[1] = y coord of atom with ID 1
x[2] = z coord of atom with ID 1
x[3] = x coord of atom with ID 2
...
x[n3-1] = z coord of atom with ID natoms
-lmp.put_coords(x) :pre
+lmp.scatter_atoms("x",1,3,x) :pre
Alternatively, you can just change values in the vector returned by
-get_coords(), since it is a ctypes vector of doubles.
+gather_atoms("x",1,3), since it is a ctypes vector of doubles.
:line
diff --git a/doc/Section_start.html b/doc/Section_start.html
index f16a963e6b..e225c3394a 100644
--- a/doc/Section_start.html
+++ b/doc/Section_start.html
@@ -281,10 +281,11 @@ dummy MPI library provided in src/STUBS, since you don't need a true
MPI library installed on your system. See the
src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
in this case. You will also need to build the STUBS library for your
-platform before making LAMMPS itself. From the src directory, type
-"make stubs", or from the STUBS dir, type "make" and it should create
-a libmpi.a suitable for linking to LAMMPS. If this build fails, you
-will need to edit the STUBS/Makefile for your platform.
+platform before making LAMMPS itself. To build from the src
+directory, type "make stubs", or from the STUBS dir, type "make".
+This should create a libmpi_stubs.a file suitable for linking to
+LAMMPS. If the build fails, you will need to edit the STUBS/Makefile
+for your platform.
The file STUBS/mpi.cpp provides a CPU timer function called
MPI_Wtime() that calls gettimeofday() . If your system doesn't
@@ -779,24 +780,28 @@ then be called from another application or a scripting language. See
LAMMPS to other codes. See this section for
more info on wrapping and running LAMMPS from Python.
+Static library:
+
To build LAMMPS as a static library (*.a file on Linux), type
make makelib
make -f Makefile.lib foo
where foo is the machine name. This kind of library is typically used
-to statically link a driver application to all of LAMMPS, so that you
-can insure all dependencies are satisfied at compile time. Note that
+to statically link a driver application to LAMMPS, so that you can
+insure all dependencies are satisfied at compile time. Note that
inclusion or exclusion of any desired optional packages should be done
before typing "make makelib". The first "make" command will create a
-current Makefile.lib with all the file names in your src dir. The 2nd
-"make" command will use it to build LAMMPS as a static library, using
-the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The build
-will create the file liblmp_foo.a which another application can link
-to.
+current Makefile.lib with all the file names in your src dir. The
+second "make" command will use it to build LAMMPS as a static library,
+using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The
+build will create the file liblammps_foo.a which another application can
+link to.
+Shared library:
+
To build LAMMPS as a shared library (*.so file on Linux), which can be
-dynamically loaded, type
+dynamically loaded, e.g. from Python, type
make makeshlib
make -f Makefile.shlib foo
@@ -806,31 +811,58 @@ wrapping LAMMPS with Python; see Section_python<
for details. Again, note that inclusion or exclusion of any desired
optional packages should be done before typing "make makelib". The
first "make" command will create a current Makefile.shlib with all the
-file names in your src dir. The 2nd "make" command will use it to
+file names in your src dir. The second "make" command will use it to
build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS
settings in src/MAKE/Makefile.foo. The build will create the file
-liblmp_foo.so which another application can link to dyamically, as
-well as a soft link liblmp.so, which the Python wrapper uses by
-default.
+liblammps_foo.so which another application can link to dynamically.  It
+will also create a soft link liblammps.so, which the Python wrapper uses
+by default.
Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
-libraries, and be find-able by the operating system. Else you will
-get a run-time error when the shared library is loaded. For LAMMPS,
-this includes all libraries needed by main LAMMPS (e.g. MPI or FFTW or
-JPEG), system libraries needed by main LAMMPS (e.g. extra libs needed
-by MPI), or packages you have installed that require libraries
-provided with LAMMPS (e.g. the USER-ATC package require
-lib/atc/libatc.so) or system libraries (e.g. BLAS or Fortran-to-C
-libraries) listed in the lib/package/Makefile.lammps file. See the
-discussion about the LAMMPS shared library in
-Section_python for details about how to build
-shared versions of these libraries, and how to insure the operating
-system can find them, by setting the LD_LIBRARY_PATH environment
-variable correctly.
+libraries. This will be the case for libraries included with LAMMPS,
+such as the dummy MPI library in src/STUBS or any package libraries in
+lib/packages, since they are always built as shared libraries with the
+-fPIC switch. However, if a library like MPI or FFTW does not exist
+as a shared library, the second make command will generate an error.
+This means you will need to install a shared library version of the
+package. The build instructions for the library should tell you how
+to do this.
-Either flavor of library allows one or more LAMMPS objects to be
-instantiated from the calling program.
+
As an example, here is how to build and install the MPICH
+library, a popular open-source version of MPI, distributed by
+Argonne National Labs, as a shared library in the default
+/usr/local/lib location:
+
+
+
+./configure --enable-shared
+make
+make install
+
+You may need to use "sudo make install" in place of the last line if
+you do not have write privileges for /usr/local/lib. The end result
+should be the file /usr/local/lib/libmpich.so.
+
+Additional requirement for using a shared library:
+
+The operating system finds shared libraries to load at run-time using
+the environment variable LD_LIBRARY_PATH. So you may wish to copy the
+file src/liblammps.so or src/liblammps_g++.so (for example) to a place
+the system can find it by default, such as /usr/local/lib, or you may
+wish to add the lammps src directory to LD_LIBRARY_PATH, so that the
+current version of the shared library is always available to programs
+that use it.
+
+For the csh or tcsh shells, you would add something like this to your
+~/.cshrc file:
+
+setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/home/sjplimp/lammps/src
+
+Calling the LAMMPS library:
+
+Either flavor of library (static or shared) allows one or more LAMMPS
+objects to be instantiated from the calling program.
When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
@@ -841,17 +873,17 @@ Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
See the sample codes in examples/COUPLE/simple for examples of C++ and
-C codes that invoke LAMMPS thru its library interface. There are
-other examples as well in the COUPLE directory which are discussed in
-Section_howto 10 of the manual. See
-Section_python of the manual for a description
-of the Python wrapper provided with LAMMPS that operates through the
-LAMMPS library interface.
+C and Fortran codes that invoke LAMMPS thru its library interface.
+There are other examples as well in the COUPLE directory which are
+discussed in Section_howto 10 of the
+manual. See Section_python of the manual for a
+description of the Python wrapper provided with LAMMPS that operates
+through the LAMMPS library interface.
-The files src/library.cpp and library.h contain the C-style interface
-to LAMMPS. See Section_howto 19 of the
-manual for a description of the interface and how to extend it for
-your needs.
+
The files src/library.cpp and library.h define the C-style API for
+using LAMMPS as a library. See Section_howto
+19 of the manual for a description of the
+interface and how to extend it for your needs.
diff --git a/doc/Section_start.txt b/doc/Section_start.txt
index 5cd6d9febf..e25ec7a413 100644
--- a/doc/Section_start.txt
+++ b/doc/Section_start.txt
@@ -275,10 +275,11 @@ dummy MPI library provided in src/STUBS, since you don't need a true
MPI library installed on your system. See the
src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
in this case. You will also need to build the STUBS library for your
-platform before making LAMMPS itself. From the src directory, type
-"make stubs", or from the STUBS dir, type "make" and it should create
-a libmpi.a suitable for linking to LAMMPS. If this build fails, you
-will need to edit the STUBS/Makefile for your platform.
+platform before making LAMMPS itself. To build from the src
+directory, type "make stubs", or from the STUBS dir, type "make".
+This should create a libmpi_stubs.a file suitable for linking to
+LAMMPS. If the build fails, you will need to edit the STUBS/Makefile
+for your platform.
The file STUBS/mpi.c provides a CPU timer function called
MPI_Wtime() that calls gettimeofday() . If your system doesn't
@@ -773,24 +774,28 @@ then be called from another application or a scripting language. See
LAMMPS to other codes. See "this section"_Section_python.html for
more info on wrapping and running LAMMPS from Python.
+[Static library:] :h5
+
To build LAMMPS as a static library (*.a file on Linux), type
make makelib
make -f Makefile.lib foo :pre
where foo is the machine name. This kind of library is typically used
-to statically link a driver application to all of LAMMPS, so that you
-can insure all dependencies are satisfied at compile time. Note that
+to statically link a driver application to LAMMPS, so that you can
+insure all dependencies are satisfied at compile time. Note that
inclusion or exclusion of any desired optional packages should be done
before typing "make makelib". The first "make" command will create a
-current Makefile.lib with all the file names in your src dir. The 2nd
-"make" command will use it to build LAMMPS as a static library, using
-the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The build
-will create the file liblmp_foo.a which another application can link
-to.
+current Makefile.lib with all the file names in your src dir. The
+second "make" command will use it to build LAMMPS as a static library,
+using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The
+build will create the file liblammps_foo.a which another application can
+link to.
+
+[Shared library:] :h5
To build LAMMPS as a shared library (*.so file on Linux), which can be
-dynamically loaded, type
+dynamically loaded, e.g. from Python, type
make makeshlib
make -f Makefile.shlib foo :pre
@@ -800,31 +805,58 @@ wrapping LAMMPS with Python; see "Section_python"_Section_python.html
for details. Again, note that inclusion or exclusion of any desired
optional packages should be done before typing "make makelib". The
first "make" command will create a current Makefile.shlib with all the
-file names in your src dir. The 2nd "make" command will use it to
+file names in your src dir. The second "make" command will use it to
build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS
settings in src/MAKE/Makefile.foo. The build will create the file
-liblmp_foo.so which another application can link to dyamically, as
-well as a soft link liblmp.so, which the Python wrapper uses by
-default.
+liblammps_foo.so which another application can link to dynamically.  It
+will also create a soft link liblammps.so, which the Python wrapper uses
+by default.
Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
-libraries, and be find-able by the operating system. Else you will
-get a run-time error when the shared library is loaded. For LAMMPS,
-this includes all libraries needed by main LAMMPS (e.g. MPI or FFTW or
-JPEG), system libraries needed by main LAMMPS (e.g. extra libs needed
-by MPI), or packages you have installed that require libraries
-provided with LAMMPS (e.g. the USER-ATC package require
-lib/atc/libatc.so) or system libraries (e.g. BLAS or Fortran-to-C
-libraries) listed in the lib/package/Makefile.lammps file. See the
-discussion about the LAMMPS shared library in
-"Section_python"_Section_python.html for details about how to build
-shared versions of these libraries, and how to insure the operating
-system can find them, by setting the LD_LIBRARY_PATH environment
-variable correctly.
+libraries. This will be the case for libraries included with LAMMPS,
+such as the dummy MPI library in src/STUBS or any package libraries in
+lib/packages, since they are always built as shared libraries with the
+-fPIC switch. However, if a library like MPI or FFTW does not exist
+as a shared library, the second make command will generate an error.
+This means you will need to install a shared library version of the
+package. The build instructions for the library should tell you how
+to do this.
-Either flavor of library allows one or more LAMMPS objects to be
-instantiated from the calling program.
+As an example, here is how to build and install the "MPICH
+library"_mpich, a popular open-source version of MPI, distributed by
+Argonne National Labs, as a shared library in the default
+/usr/local/lib location:
+
+:link(mpich,http://www-unix.mcs.anl.gov/mpi)
+
+./configure --enable-shared
+make
+make install :pre
+
+You may need to use "sudo make install" in place of the last line if
+you do not have write privileges for /usr/local/lib. The end result
+should be the file /usr/local/lib/libmpich.so.
+
+[Additional requirement for using a shared library:] :h5
+
+The operating system finds shared libraries to load at run-time using
+the environment variable LD_LIBRARY_PATH. So you may wish to copy the
+file src/liblammps.so or src/liblammps_g++.so (for example) to a place
+the system can find it by default, such as /usr/local/lib, or you may
+wish to add the lammps src directory to LD_LIBRARY_PATH, so that the
+current version of the shared library is always available to programs
+that use it.
+
+For the csh or tcsh shells, you would add something like this to your
+~/.cshrc file:
+
+setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
+
+[Calling the LAMMPS library:] :h5
+
+Either flavor of library (static or shared) allows one or more LAMMPS
+objects to be instantiated from the calling program.
When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
@@ -835,17 +867,17 @@ Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
See the sample codes in examples/COUPLE/simple for examples of C++ and
-C codes that invoke LAMMPS thru its library interface. There are
-other examples as well in the COUPLE directory which are discussed in
-"Section_howto 10"_Section_howto.html#howto_10 of the manual. See
-"Section_python"_Section_python.html of the manual for a description
-of the Python wrapper provided with LAMMPS that operates through the
-LAMMPS library interface.
+C and Fortran codes that invoke LAMMPS thru its library interface.
+There are other examples as well in the COUPLE directory which are
+discussed in "Section_howto 10"_Section_howto.html#howto_10 of the
+manual. See "Section_python"_Section_python.html of the manual for a
+description of the Python wrapper provided with LAMMPS that operates
+through the LAMMPS library interface.
-The files src/library.cpp and library.h contain the C-style interface
-to LAMMPS. See "Section_howto 19"_Section_howto.html#howto_19 of the
-manual for a description of the interface and how to extend it for
-your needs.
+The files src/library.cpp and library.h define the C-style API for
+using LAMMPS as a library. See "Section_howto
+19"_Section_howto.html#howto_19 of the manual for a description of the
+interface and how to extend it for your needs.
:line
diff --git a/doc/fix_deform.html b/doc/fix_deform.html
index 6d90461848..05e8c0c9a2 100644
--- a/doc/fix_deform.html
+++ b/doc/fix_deform.html
@@ -327,17 +327,19 @@ direction for xy deformation) from the unstrained orientation.
The tilt factor T as a function of time will change as
-T(t) = T0 + erate*dt
+T(t) = T0 + L0*erate*dt
-where T0 is the initial tilt factor and dt is the elapsed time (in
-time units). Thus if erate R is specified as 0.1 and time units are
-picoseconds, this means the shear strain will increase by 0.1 every
-picosecond. I.e. if the xy shear strain was initially 0.0, then
-strain after 1 psec = 0.1, strain after 2 psec = 0.2, etc. Thus the
-tilt factor would be 0.0 at time 0, 0.1*ybox at 1 psec, 0.2*ybox at 2
-psec, etc, where ybox is the original y box length. R = 1 or 2 means
-the tilt factor will increase by 1 or 2 every picosecond. R = -0.01
-means a decrease in shear strain by 0.01 every picosecond.
+
+where T0 is the initial tilt factor, L0 is the original length of the
+box perpendicular to the shear direction (e.g. y box length for xy
+deformation), and dt is the elapsed time (in time units). Thus if
+erate R is specified as 0.1 and time units are picoseconds, this
+means the shear strain will increase by 0.1 every picosecond. I.e. if
+the xy shear strain was initially 0.0, then strain after 1 psec = 0.1,
+strain after 2 psec = 0.2, etc. Thus the tilt factor would be 0.0 at
+time 0, 0.1*ybox at 1 psec, 0.2*ybox at 2 psec, etc, where ybox is the
+original y box length. R = 1 or 2 means the tilt factor will increase
+by 1 or 2 every picosecond. R = -0.01 means a decrease in shear
+strain by 0.01 every picosecond.
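The worked numbers above can be checked with a few lines of arithmetic (a sketch, not LAMMPS code; L0 = 10.0 is an arbitrary example y box length):

```python
# Numeric check of T(t) = T0 + L0*erate*dt with erate = 0.1 and time
# in picoseconds, so the xy shear strain grows by 0.1 per psec.
T0 = 0.0      # initial tilt factor
L0 = 10.0     # original y box length (arbitrary example value)
erate = 0.1   # engineering shear strain rate, 1/psec

for dt in (0.0, 1.0, 2.0):
    T = T0 + L0 * erate * dt
    print(dt, T, T / L0)   # tilt factor and strain
# strain T/L0 is 0.0, 0.1, 0.2 at 0, 1, 2 psec, matching the text
```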
The trate style changes a tilt factor at a "constant true shear
strain rate". Note that this is not an "engineering shear strain
diff --git a/doc/fix_deform.txt b/doc/fix_deform.txt
index a459e2e912..e403b2b8f2 100644
--- a/doc/fix_deform.txt
+++ b/doc/fix_deform.txt
@@ -317,17 +317,19 @@ direction for xy deformation) from the unstrained orientation.
The tilt factor T as a function of time will change as
-T(t) = T0 + erate*dt :pre
+T(t) = T0 + L0*erate*dt :pre
-where T0 is the initial tilt factor and dt is the elapsed time (in
-time units). Thus if {erate} R is specified as 0.1 and time units are
-picoseconds, this means the shear strain will increase by 0.1 every
-picosecond. I.e. if the xy shear strain was initially 0.0, then
-strain after 1 psec = 0.1, strain after 2 psec = 0.2, etc. Thus the
-tilt factor would be 0.0 at time 0, 0.1*ybox at 1 psec, 0.2*ybox at 2
-psec, etc, where ybox is the original y box length. R = 1 or 2 means
-the tilt factor will increase by 1 or 2 every picosecond. R = -0.01
-means a decrease in shear strain by 0.01 every picosecond.
+where T0 is the initial tilt factor, L0 is the original length of the
+box perpendicular to the shear direction (e.g. y box length for xy
+deformation), and dt is the elapsed time (in time units). Thus if
+{erate} R is specified as 0.1 and time units are picoseconds, this
+means the shear strain will increase by 0.1 every picosecond. I.e. if
+the xy shear strain was initially 0.0, then strain after 1 psec = 0.1,
+strain after 2 psec = 0.2, etc. Thus the tilt factor would be 0.0 at
+time 0, 0.1*ybox at 1 psec, 0.2*ybox at 2 psec, etc, where ybox is the
+original y box length. R = 1 or 2 means the tilt factor will increase
+by 1 or 2 every picosecond. R = -0.01 means a decrease in shear
+strain by 0.01 every picosecond.
The {trate} style changes a tilt factor at a "constant true shear
strain rate". Note that this is not an "engineering shear strain
diff --git a/doc/units.html b/doc/units.html
index 0462ba6ee1..ebf4dd9048 100644
--- a/doc/units.html
+++ b/doc/units.html
@@ -58,6 +58,11 @@ results from a unitless LJ simulation into physical quantities.
- electric field = force/charge, where E* = E (4 pi perm0 sigma epsilon)^1/2 sigma / epsilon
- density = mass/volume, where rho* = rho sigma^dim
+Note that for LJ units, the default mode of thermodynamic output via
+the thermo_style command is to normalize energies
+by the number of atoms, i.e. energy/atom.  This can be changed via the
+thermo_modify norm command.
+
For style real, these are the units: