git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@8617 f3b2605a-c512-4ea7-a41b-209d697bcdaa

This commit is contained in:
sjplimp
2012-08-13 17:01:35 +00:00
parent 0ea3d01f09
commit 673d3d8018
4 changed files with 229 additions and 223 deletions

View File

@ -97,6 +97,12 @@ them. See the discussion in <A HREF = "Section_start.html#start_5">Section_star
shared library, for instructions on how to set the LD_LIBRARY_PATH
variable appropriately.
</P>
<P>If your LAMMPS build does not use any auxiliary libraries in
non-default directories where the system cannot find them, you
typically just need to add something like this to your ~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src
</PRE>
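<P>If your login shell is bash rather than csh/tcsh, the equivalent
line in your ~/.bashrc file would be:
</P>
<PRE>export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src
</PRE>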
<HR>
<A NAME = "py_2"></A><H4>11.2 Building LAMMPS as a shared library
@ -114,11 +120,14 @@ make -f Makefile.shlib foo
<P>where foo is the machine target name, such as linux or g++ or serial.
This should create the file liblmp_foo.so in the src directory, as
well as a soft link liblmp.so which is what the Python wrapper will
load by default. Note that if you are building multiple machine
versions of the shared library, the soft link is always set to the
most recently built version.
</P>
<P>If this fails, see <A HREF = "Section_start.html#start_5">Section_start 5</A> for
more details, especially if your LAMMPS build uses auxiliary
libraries, e.g. ones required by certain packages and found in the
lib/package directories.
</P>
<HR>
@ -139,11 +148,10 @@ as a library and allow MPI functions to be called from Python.
<LI><A HREF = "http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16">myMPI</A>
<LI><A HREF = "http://code.google.com/p/pypar">Pypar</A>
</UL>
<P>All of these except pyMPI work by wrapping the MPI library and
exposing (some portion of) its interface to your Python script. This
means Python cannot be used interactively in parallel, since they do
not address the issue of interactive input to multiple instances of
Python running on different processors. The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of "python"
@ -173,17 +181,17 @@ sudo python setup.py install
<P>The "sudo" is only needed if required to copy Numpy files into your
Python distribution's site-packages directory.
</P>
<P>To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
and from its "source" directory, type
</P>
<PRE>python setup.py build
sudo python setup.py install
</PRE>
<P>Again, the "sudo" is only needed if required to copy PyPar files into
<P>Again, the "sudo" is only needed if required to copy Pypar files into
your Python distribution's site-packages directory.
</P>
<P>If you have successfully installed Pypar, you should be able to run
Python and type
</P>
<PRE>import pypar
</PRE>
@ -199,6 +207,19 @@ print "Proc %d out of %d procs" % (pypar.rank(),pypar.size())
</PRE>
<P>and see one line of output for each processor you run on.
</P>
<P>IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
must ensure both are using the same version of MPI. If you only have
one MPI installed on your system, this is not an issue, but it can be
if you have multiple MPIs. Your LAMMPS build is explicit about which
MPI it is using, since you specify the details in your low-level
src/MAKE/Makefile.foo file. Pypar uses the "mpicc" command to find
information about the MPI it uses to build against. And it tries to
load "libmpi.so" from the LD_LIBRARY_PATH. This may or may not find
the MPI library that LAMMPS is using. If you have problems running
both Pypar and LAMMPS together, this is an issue you may need to
address, e.g. by moving other MPI installations so that Pypar finds
the right one.
</P>
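<P>One way to check this on Linux is to run the ldd utility on each
shared library and compare which libmpi.so they resolve to. The path
to Pypar's compiled extension module below is a placeholder; use
wherever your Python installation put it:
</P>
<PRE>ldd /home/sjplimp/lammps/src/liblmp.so | grep mpi
ldd /path/to/site-packages/pypar-extension.so | grep mpi
</PRE>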
<HR>
<A NAME = "py_4"></A><H4>11.4 Testing the Python-LAMMPS interface
@ -212,24 +233,17 @@ and type:
<P>If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
</P>
<PRE>OSError: Could not load LAMMPS dynamic library
</PRE>
<P>which means Python was unable to load the LAMMPS shared library. This
typically occurs if the system can't find the LAMMPS shared library
or one of the auxiliary shared libraries it depends on.
</P>
<P>Python (actually the operating system) isn't verbose about telling you
why the load failed, so carefully go through the steps above regarding
environment variables, and the instructions in <A HREF = "Section_start.html#start_5">Section_start
5</A> about building a shared library and
about setting the LD_LIBRARY_PATH environment variable.
</P>
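<P>A quick diagnostic on Linux is to run ldd on the LAMMPS shared
library itself; any dependency printed as "not found" is a library you
still need to build in shared form or add to LD_LIBRARY_PATH:
</P>
<PRE>cd /home/sjplimp/lammps/src
ldd liblmp.so
</PRE>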
<H5><B>Test LAMMPS and Python in serial:</B>
</H5>
@ -274,10 +288,10 @@ pypar.finalize()
<P>Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command. In this case you
should get 4 sets of output, each showing that a LAMMPS run was made
on a single processor, instead of one set of output showing that
LAMMPS ran on 4 processors. If the 1-processor outputs occur, it
means that Pypar is not working correctly.
</P>
<P>Also note that once you import the Pypar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
@ -285,6 +299,8 @@ described in the Pypar documentation. The last line of your Python
script should be pypar.finalize(), to ensure MPI is shut down
correctly.
</P>
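<P>Putting these pieces together, a minimal test.py along the lines
described above might look like this; the input script name in.lj is
just a placeholder for your own LAMMPS input script:
</P>
<PRE>from lammps import lammps
import pypar                   # Pypar line 1: initializes MPI

lmp = lammps()                 # create LAMMPS, running on all processors
lmp.file("in.lj")              # placeholder input script
print "Proc %d out of %d procs" % (pypar.rank(),pypar.size())  # Pypar line 2
lmp.close()
pypar.finalize()               # Pypar line 3: shuts down MPI
</PRE>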
<H5><B>Running Python scripts:</B>
</H5>
<P>Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
</P>
@ -337,7 +353,7 @@ C or Fortran program.
</P>
<PRE>lmp = lammps() # create a LAMMPS object using the default liblmp.so library
lmp = lammps("g++") # create a LAMMPS object using the liblmp_g++.so library
lmp = lammps("",list) # ditto, with command-line args, list = ["-echo","screen"]
lmp = lammps("",list) # ditto, with command-line args, e.g. list = ["-echo","screen"]
lmp = lammps("g++",list)
</PRE>
<PRE>lmp.close() # destroy a LAMMPS object
@ -376,13 +392,14 @@ lmp.put_coords(x) # set all atom coords via x
</PRE>
<HR>
<P>IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
lammps.py does not take an MPI communicator as an argument. There
should be a way to do this, so that the LAMMPS instance runs on a
subset of processors if desired, but I don't know how to do it from
Pypar. So for now, it runs with MPI_COMM_WORLD, which is all the
processors. If someone figures out how to do this with one or more of
the Python wrappers for MPI, like Pypar, please let us know and we
will amend these doc pages.
</P>
<P>Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.
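<P>A sketch of what this can look like, with two placeholder input
scripts:
</P>
<PRE>from lammps import lammps
lmp1 = lammps()          # two independent LAMMPS instances
lmp2 = lammps()
lmp1.file("in.file1")    # run a different input script in each
lmp2.file("in.file2")
lmp1.close()
lmp2.close()
</PRE>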

View File

@ -93,6 +93,12 @@ them. See the discussion in "Section_start
shared library, for instructions on how to set the LD_LIBRARY_PATH
variable appropriately.
If your LAMMPS build does not use any auxiliary libraries in
non-default directories where the system cannot find them, you
typically just need to add something like this to your ~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
:line
11.2 Building LAMMPS as a shared library :link(py_2),h4
@ -110,11 +116,14 @@ make -f Makefile.shlib foo :pre
where foo is the machine target name, such as linux or g++ or serial.
This should create the file liblmp_foo.so in the src directory, as
well as a soft link liblmp.so which is what the Python wrapper will
load by default. Note that if you are building multiple machine
versions of the shared library, the soft link is always set to the
most recently built version.
See "Section_start 5"_Section_start.html#start_5 for more details.
If this fails, see "Section_start 5"_Section_start.html#start_5 for
more details, especially if your LAMMPS build uses auxiliary
libraries, e.g. ones required by certain packages and found in the
lib/package directories.
:line
@ -135,11 +144,10 @@ These include
"myMPI"_http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16
"Pypar"_http://code.google.com/p/pypar :ul
All of these except pyMPI work by wrapping the MPI library and
exposing (some portion of) its interface to your Python script. This
means Python cannot be used interactively in parallel, since they do
not address the issue of interactive input to multiple instances of
Python running on different processors. The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of "python"
@ -169,22 +177,17 @@ sudo python setup.py install :pre
The "sudo" is only needed if required to copy Numpy files into your
Python distribution's site-packages directory.
To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
and from its "source" directory, type
python setup.py build
sudo python setup.py install :pre
Again, the "sudo" is only needed if required to copy PyPar files into
Again, the "sudo" is only needed if required to copy Pypar files into
your Python distribution's site-packages directory.
If you have successfully installed Pypar, you should be able to run
Python and type
import pypar :pre
@ -200,6 +203,19 @@ print "Proc %d out of %d procs" % (pypar.rank(),pypar.size()) :pre
and see one line of output for each processor you run on.
IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
must ensure both are using the same version of MPI. If you only have
one MPI installed on your system, this is not an issue, but it can be
if you have multiple MPIs. Your LAMMPS build is explicit about which
MPI it is using, since you specify the details in your low-level
src/MAKE/Makefile.foo file. Pypar uses the "mpicc" command to find
information about the MPI it uses to build against. And it tries to
load "libmpi.so" from the LD_LIBRARY_PATH. This may or may not find
the MPI library that LAMMPS is using. If you have problems running
both Pypar and LAMMPS together, this is an issue you may need to
address, e.g. by moving other MPI installations so that Pypar finds
the right one.
:line
11.4 Testing the Python-LAMMPS interface :link(py_4),h4
@ -213,27 +229,17 @@ and type:
If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
OSError: Could not load LAMMPS dynamic library :pre
which means Python was unable to load the LAMMPS shared library. This
typically occurs if the system can't find the LAMMPS shared library
or one of the auxiliary shared libraries it depends on.
Python (actually the operating system) isn't verbose about telling you
why the load failed, so carefully go through the steps above regarding
environment variables, and the instructions in "Section_start
5"_Section_start.html#start_5 about building a shared library and
about setting the LD_LIBRARY_PATH environment variable.
[Test LAMMPS and Python in serial:] :h5
@ -278,10 +284,10 @@ and you should see the same output as if you had typed
Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command. In this case you
should get 4 sets of output, each showing that a LAMMPS run was made
on a single processor, instead of one set of output showing that
LAMMPS ran on 4 processors. If the 1-processor outputs occur, it
means that Pypar is not working correctly.
Also note that once you import the Pypar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
@ -289,6 +295,8 @@ described in the Pypar documentation. The last line of your Python
script should be pypar.finalize(), to ensure MPI is shut down
correctly.
[Running Python scripts:] :h5
Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
@ -340,7 +348,7 @@ C or Fortran program.
lmp = lammps() # create a LAMMPS object using the default liblmp.so library
lmp = lammps("g++") # create a LAMMPS object using the liblmp_g++.so library
lmp = lammps("",list) # ditto, with command-line args, list = \["-echo","screen"\]
lmp = lammps("",list) # ditto, with command-line args, e.g. list = \["-echo","screen"\]
lmp = lammps("g++",list) :pre
lmp.close() # destroy a LAMMPS object :pre
@ -379,13 +387,14 @@ lmp.put_coords(x) # set all atom coords via x :pre
:line
IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
lammps.py does not take an MPI communicator as an argument. There
should be a way to do this, so that the LAMMPS instance runs on a
subset of processors if desired, but I don't know how to do it from
Pypar. So for now, it runs with MPI_COMM_WORLD, which is all the
processors. If someone figures out how to do this with one or more of
the Python wrappers for MPI, like Pypar, please let us know and we
will amend these doc pages.
Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.

View File

@ -281,10 +281,13 @@ dummy MPI library provided in src/STUBS, since you don't need a true
MPI library installed on your system. See the
src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
in this case. You will also need to build the STUBS library for your
platform before making LAMMPS itself. To build it as a static
library, from the src directory, type "make stubs", or from the STUBS
dir, type "make" and it should create a libmpi_stubs.a suitable for
linking to LAMMPS. To build it as a shared library, from the STUBS
dir, type "make shlib" and it should create a libmpi_stubs.so suitable
for dynamically loading when LAMMPS runs. If either of these builds
fails, you will need to edit the STUBS/Makefile for your platform.
</P>
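<P>As a recap, the two STUBS builds, starting from the top-level LAMMPS
directory, are:
</P>
<PRE>cd src; make stubs        # should create libmpi_stubs.a for static linking
cd src/STUBS; make shlib  # should create libmpi_stubs.so for dynamic loading
</PRE>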
<P>The file STUBS/mpi.cpp provides a CPU timer function called
MPI_Wtime() that calls gettimeofday(). If your system doesn't
@ -779,6 +782,8 @@ then be called from another application or a scripting language. See
LAMMPS to other codes. See <A HREF = "Section_python.html">this section</A> for
more info on wrapping and running LAMMPS from Python.
</P>
<H5><B>Static library:</B>
</H5>
<P>To build LAMMPS as a static library (*.a file on Linux), type
</P>
<PRE>make makelib
@ -795,8 +800,10 @@ using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The
build will create the file liblmp_foo.a which another application can
link to.
</P>
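<P>For example, a calling program could then be linked against it with
something like the line below, where caller.o is a placeholder for
your own code; any auxiliary libraries used by your LAMMPS build
(e.g. MPI or FFTW) must be appended to the link line as well:
</P>
<PRE>g++ caller.o -L/home/sjplimp/lammps/src -llmp_foo -o caller
</PRE>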
<H5><B>Shared library:</B>
</H5>
<P>To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, e.g. from Python, type
</P>
<PRE>make makeshlib
make -f Makefile.shlib foo
@ -813,28 +820,31 @@ liblmp_foo.so which another application can link to dyamically, as
well as a soft link liblmp.so, which the Python wrapper uses by
default.
</P>
<H5><B>Additional requirements for building a shared library:</B>
</H5>
<P>Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries, and the operating system must be able to find them. For
LAMMPS, this includes all libraries needed by main LAMMPS (e.g. MPI or
FFTW or JPEG), system libraries needed by main LAMMPS (e.g. extra libs
needed by MPI), any packages you have installed that require libraries
provided with LAMMPS (e.g. the USER-ATC package requires
lib/atc/libatc.so), and any system libraries (e.g. BLAS or
Fortran-to-C libraries) listed in the lib/package/Makefile.lammps
file.
</P>
<P>If one of these auxiliary libraries does not exist as a shared
library, the second make command should generate a build error. If a
needed library is simply missing from the link list, this will not
generate an error at build time, but will generate a run-time error
when the library is loaded, so be sure all needed libraries are
listed, just as they are when building LAMMPS as a stand-alone code.
</P>
<P>Note that if you install them yourself, some libraries, such as MPI,
may not build by default as shared libraries. The build instructions
for the library should tell you how to do this.
</P>
<P>As an example, here is how to build and install the <A HREF = "http://www-unix.mcs.anl.gov/mpi">MPICH
library</A>, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:
@ -846,63 +856,50 @@ make
make install
</PRE>
<P>You may need to use "sudo make install" in place of the last line if
you do not have write privileges for /usr/local/lib. The end result
should be the file /usr/local/lib/libmpich.so.
</P>
<P>Also note that not all of the auxiliary libraries provided with LAMMPS
include Makefiles in their lib directories suitable for building them
as shared libraries. Typically this simply requires 3 steps: (a)
adding a -fPIC switch when files are compiled, (b) adding "-fPIC
-shared" switches when the library is linked with a C++ (or Fortran)
compiler, and (c) creating an output target that ends in ".so", like
libatc.so. As we or others create and contribute these Makefiles, we
will add them to the LAMMPS distribution.
</P>
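<P>Expressed as bare compile and link commands, the pattern is simply
this, using libatc purely as an illustration:
</P>
<PRE>g++ -fPIC -c *.cpp                    # (a) compile sources with -fPIC
g++ -fPIC -shared -o libatc.so *.o    # (b,c) link them into a ".so" target
</PRE>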
<H5><B>Additional requirements for using a shared library:</B>
</H5>
<P>The operating system finds shared libraries to load at run-time using
the environment variable LD_LIBRARY_PATH. So at a minimum you
must set it to include the lammps src directory where the LAMMPS
shared library file is created.
</P>
<P>For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src
</PRE>
<P>If any auxiliary libraries used by LAMMPS are not in default places
where the operating system can find them, then you also have to add
their paths to the LD_LIBRARY_PATH environment variable.
</P>
<P>For example, if you are using the dummy MPI library provided in
src/STUBS, and have built the file libmpi_stubs.so, you would add
something like this to your ~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src/STUBS
</PRE>
<P>If you are using the LAMMPS USER-ATC package, and have built the file
lib/atc/libatc.so, you would add something like this to your ~/.cshrc
file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/lib/atc
</PRE>
<H5><B>Calling the LAMMPS library:</B>
</H5>
<P>Either flavor of library (static or shared) allows one or more LAMMPS
objects to be instantiated from the calling program.
</P>
<P>When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
@ -913,17 +910,17 @@ Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
</P>
<P>See the sample codes in examples/COUPLE/simple for examples of C++ and
C and Fortran codes that invoke LAMMPS thru its library interface.
There are other examples as well in the COUPLE directory which are
discussed in <A HREF = "Section_howto.html#howto_10">Section_howto 10</A> of the
manual. See <A HREF = "Section_python.html">Section_python</A> of the manual for a
description of the Python wrapper provided with LAMMPS that operates
through the LAMMPS library interface.
</P>
<P>The files src/library.cpp and library.h define the C-style API for
using LAMMPS as a library. See <A HREF = "Section_howto.html#howto_19">Section_howto
19</A> of the manual for a description of the
interface and how to extend it for your needs.
</P>
<HR>

View File

@ -275,10 +275,13 @@ dummy MPI library provided in src/STUBS, since you don't need a true
MPI library installed on your system. See the
src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
in this case. You will also need to build the STUBS library for your
platform before making LAMMPS itself. To build it as a static
library, from the src directory, type "make stubs", or from the STUBS
dir, type "make" and it should create a libmpi_stubs.a suitable for
linking to LAMMPS. To build it as a shared library, from the STUBS
dir, type "make shlib" and it should create a libmpi_stubs.so suitable
for dynamically loading when LAMMPS runs. If either of these builds
fails, you will need to edit the STUBS/Makefile for your platform.
The file STUBS/mpi.cpp provides a CPU timer function called
MPI_Wtime() that calls gettimeofday(). If your system doesn't
@ -773,6 +776,8 @@ then be called from another application or a scripting language. See
LAMMPS to other codes. See "this section"_Section_python.html for
more info on wrapping and running LAMMPS from Python.
[Static library:] :h5
To build LAMMPS as a static library (*.a file on Linux), type
make makelib
@ -789,8 +794,10 @@ using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The
build will create the file liblmp_foo.a which another application can
link to.
[Shared library:] :h5
To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, e.g. from Python, type
make makeshlib
make -f Makefile.shlib foo :pre
@ -807,28 +814,31 @@ liblmp_foo.so which another application can link to dyamically, as
well as a soft link liblmp.so, which the Python wrapper uses by
default.
[Additional requirements for building a shared library:] :h5
Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries, and the operating system must be able to find them. For
LAMMPS, this includes all libraries needed by main LAMMPS (e.g. MPI or
FFTW or JPEG), system libraries needed by main LAMMPS (e.g. extra libs
needed by MPI), any packages you have installed that require libraries
provided with LAMMPS (e.g. the USER-ATC package requires
lib/atc/libatc.so), and any system libraries (e.g. BLAS or
Fortran-to-C libraries) listed in the lib/package/Makefile.lammps
file.
If one of these auxiliary libraries does not exist as a shared
library, the second make command should generate a build error. If a
needed library is simply missing from the link list, this will not
generate an error at build time, but will generate a run-time error
when the library is loaded, so be sure all needed libraries are
listed, just as they are when building LAMMPS as a stand-alone code.
Note that if you install them yourself, some libraries, such as MPI,
may not build by default as shared libraries. The build instructions
for the library should tell you how to do this.
As an example, here is how to build and install the "MPICH
library"_mpich, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:
@ -840,77 +850,50 @@ make
make install :pre
You may need to use "sudo make install" in place of the last line if
you do not have write privileges for /usr/local/lib. The end result
should be the file /usr/local/lib/libmpich.so.
Also note that not all of the auxiliary libraries provided with LAMMPS
include Makefiles in their lib directories suitable for building them
as shared libraries. Typically this simply requires 3 steps: (a)
adding a -fPIC switch when files are compiled, (b) adding "-fPIC
-shared" switches when the library is linked with a C++ (or Fortran)
compiler, and (c) creating an output target that ends in ".so", like
libatc.so. As we or others create and contribute these Makefiles, we
will add them to the LAMMPS distribution.
[Additional requirements for using a shared library:] :h5
The operating system finds shared libraries to load at run-time using
the environment variable LD_LIBRARY_PATH. So at a minimum you
must set it to include the lammps src directory where the LAMMPS
shared library file is created.
For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
If any auxiliary libraries used by LAMMPS are not in default places
where the operating system can find them, then you also have to add
their paths to the LD_LIBRARY_PATH environment variable.
For example, if you are using the dummy MPI library provided in
src/STUBS, and have built the file libmpi_stubs.so, you would add
something like this to your ~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src/STUBS :pre
If you are using the LAMMPS USER-ATC package, and have built the file
lib/atc/libatc.so, you would add something like this to your ~/.cshrc
file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/lib/atc :pre
[Calling the LAMMPS library:] :h5
Either flavor of library (static or shared) allows one or more LAMMPS
objects to be instantiated from the calling program.
When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
@ -921,17 +904,17 @@ Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
See the sample codes in examples/COUPLE/simple for examples of C++ and
C and Fortran codes that invoke LAMMPS thru its library interface.
There are other examples as well in the COUPLE directory which are
discussed in "Section_howto 10"_Section_howto.html#howto_10 of the
manual. See "Section_python"_Section_python.html of the manual for a
description of the Python wrapper provided with LAMMPS that operates
through the LAMMPS library interface.
The files src/library.cpp and library.h define the C-style API for
using LAMMPS as a library. See "Section_howto
19"_Section_howto.html#howto_19 of the manual for a description of the
interface and how to extend it for your needs.
:line