git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@8617 f3b2605a-c512-4ea7-a41b-209d697bcdaa

This commit is contained in:
sjplimp
2012-08-13 17:01:35 +00:00
parent 0ea3d01f09
commit 673d3d8018
4 changed files with 229 additions and 223 deletions

View File

@ -97,6 +97,12 @@ them. See the discussion in <A HREF = "Section_start.html#start_5">Section_star
shared library, for instructions on how to set the LD_LIBRARY_PATH
variable appropriately.
</P>
<P>If your LAMMPS build does not use any auxiliary libraries in
non-default directories where the system cannot find them, you
typically just need to add something like this to your ~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src
</PRE>
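<P>One subtlety worth knowing: the dynamic loader reads LD_LIBRARY_PATH only
when a process starts, so the variable must be exported before Python is
launched.  A small illustrative sketch (the directory is the example path
above) that checks the setting from inside Python:
</P>

```python
import os

# Sanity check (not part of the LAMMPS docs): confirm that the directory
# holding the LAMMPS shared library appears in LD_LIBRARY_PATH.  Note that
# the variable must be set BEFORE Python starts; changing os.environ inside
# a running script does not affect the loader of the current process.
libdir = "/home/sjplimp/lammps/src"   # example path from the text above
paths = os.environ.get("LD_LIBRARY_PATH", "").split(":")
if libdir not in paths:
    print("warning: %s is not in LD_LIBRARY_PATH" % libdir)
```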
<HR>
<A NAME = "py_2"></A><H4>11.2 Building LAMMPS as a shared library
@ -114,11 +120,14 @@ make -f Makefile.shlib foo
<P>where foo is the machine target name, such as linux or g++ or serial.
This should create the file liblmp_foo.so in the src directory, as
well as a soft link liblmp.so which is what the Python wrapper will
load by default.  Note that if you are building multiple machine
versions of the shared library, the soft link is always set to the
most recently built version.
</P>
<P>If this fails, see <A HREF = "Section_start.html#start_5">Section_start 5</A> for
more details, especially if your LAMMPS build uses auxiliary
libraries, e.g. ones required by certain packages and found in the
lib/package directories.
</P>
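<P>The naming scheme above can be summarized in a small illustrative helper;
this is not code from lammps.py, just a restatement of the convention:
</P>

```python
# Illustrative restatement of the naming convention described above, not
# code from lammps.py: an empty machine name stands for the liblmp.so soft
# link, which always points at the most recently built machine version.
def shared_lib_name(machine=""):
    if machine == "":
        return "liblmp.so"              # soft link the Python wrapper loads
    return "liblmp_%s.so" % machine     # machine-specific build, e.g. g++

print(shared_lib_name("g++"))           # liblmp_g++.so
```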
<HR>
@ -139,11 +148,10 @@ as a library and allow MPI functions to be called from Python.
<LI><A HREF = "http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16">myMPI</A>
<LI><A HREF = "http://code.google.com/p/pypar">Pypar</A>
</UL>
<P>All of these except pyMPI work by wrapping the MPI library and
exposing (some portion of) its interface to your Python script.  This
means Python cannot be used interactively in parallel, since they do
not address the issue of interactive input to multiple instances of
Python running on different processors.  The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of "python"
@ -173,17 +181,17 @@ sudo python setup.py install
<P>The "sudo" is only needed if required to copy Numpy files into your
Python distribution's site-packages directory.
</P>
<P>To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
and from its "source" directory, type
</P>
<PRE>python setup.py build
sudo python setup.py install
</PRE>
<P>Again, the "sudo" is only needed if required to copy Pypar files into
your Python distribution's site-packages directory.
</P>
<P>If you have successfully installed Pypar, you should be able to run
Python and type
</P>
<PRE>import pypar
</PRE>
@ -199,6 +207,19 @@ print "Proc %d out of %d procs" % (pypar.rank(),pypar.size())
</PRE>
<P>and see one line of output for each processor you run on.
</P>
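<P>Pypar is not needed to see what that output should look like; this mock
simply evaluates the same format string for a hypothetical 4-processor run
(with real Pypar, each rank prints its own line, possibly out of order):
</P>

```python
# Mock of the expected output of the Pypar test above for a hypothetical
# 4-processor run; pypar.rank() and pypar.size() are replaced by plain ints.
nprocs = 4
lines = ["Proc %d out of %d procs" % (rank, nprocs) for rank in range(nprocs)]
for line in lines:
    print(line)
# prints "Proc 0 out of 4 procs" through "Proc 3 out of 4 procs"
```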
<P>IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
must ensure both are using the same version of MPI.  If you only have
one MPI installed on your system, this is not an issue, but it can be
if you have multiple MPIs.  Your LAMMPS build is explicit about which
MPI it is using, since you specify the details in your low-level
src/MAKE/Makefile.foo file.  Pypar uses the "mpicc" command to find
information about the MPI it uses to build against.  And it tries to
load "libmpi.so" from the LD_LIBRARY_PATH.  This may or may not find
the MPI library that LAMMPS is using.  If you have problems running
both Pypar and LAMMPS together, this is an issue you may need to
address, e.g. by moving other MPI installations so that Pypar finds
the right one.
</P>
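<P>One quick diagnostic, without importing Pypar, is to ask Python's ctypes
machinery which "libmpi" the loader would pick up; find_library searches
roughly the same places the dynamic loader does.  This is only a sketch and
may print None if no MPI is on the default search path:
</P>

```python
from ctypes.util import find_library

# Diagnostic sketch: report which libmpi the loader machinery finds.  If
# this names a different MPI than the one in your src/MAKE/Makefile.foo,
# Pypar and LAMMPS may be linked against mismatched MPI versions.
libmpi = find_library("mpi")
print("loader would use:", libmpi)   # a soname/path string, or None
```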
<HR>
<A NAME = "py_4"></A><H4>11.4 Testing the Python-LAMMPS interface
@ -212,24 +233,17 @@ and type:
<P>If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
</P>
<PRE>OSError: Could not load LAMMPS dynamic library
</PRE>
<P>which means Python was unable to load the LAMMPS shared library.  This
typically occurs if the system can't find the LAMMPS shared library
or one of the auxiliary shared libraries it depends on.
</P>
<P>Python (actually the operating system) isn't verbose about telling you
why the load failed, so carefully go through the steps above regarding
environment variables, and the instructions in <A HREF = "Section_start.html#start_5">Section_start
5</A> about building a shared library and
about setting the LD_LIBRARY_PATH environment variable.
</P>
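<P>When that OSError occurs, loading the library directly with ctypes can
give a more specific message, since the operating system names the first
file it could not find.  A hedged sketch (the path below is the example src
directory used throughout this section; adjust it for your build):
</P>

```python
import ctypes

def try_load(path):
    """Return None on success, else the OS error text for the failure."""
    try:
        ctypes.CDLL(path)
        return None
    except OSError as err:
        # The message usually names the first shared library the loader
        # could not find: liblmp.so itself or one of its dependencies.
        return str(err)

msg = try_load("/home/sjplimp/lammps/src/liblmp.so")
if msg is not None:
    print("load failed:", msg)
```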
<H5><B>Test LAMMPS and Python in serial:</B>
</H5>
@ -274,10 +288,10 @@ pypar.finalize()
<P>Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command.  In this case you
should get 4 sets of output, each showing that a LAMMPS run was made
on a single processor, instead of one set of output showing that
LAMMPS ran on 4 processors.  If the 1-processor outputs occur, it
means that Pypar is not working correctly.
</P>
<P>Also note that once you import the Pypar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
@ -285,6 +299,8 @@ described in the Pypar documentation. The last line of your Python
script should be pypar.finalize(), to ensure MPI is shut down
correctly.
</P>
<H5><B>Running Python scripts:</B>
</H5>
<P>Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
</P>
@ -337,7 +353,7 @@ C or Fortran program.
</P>
<PRE>lmp = lammps()           # create a LAMMPS object using the default liblmp.so library
lmp = lammps("g++")      # create a LAMMPS object using the liblmp_g++.so library
lmp = lammps("",list)    # ditto, with command-line args, e.g. list = ["-echo","screen"]
lmp = lammps("g++",list)
</PRE>
<PRE>lmp.close()              # destroy a LAMMPS object
@ -376,13 +392,14 @@ lmp.put_coords(x) # set all atom coords via x
</PRE>
<HR>
<P>IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
lammps.py does not take an MPI communicator as an argument.  There
should be a way to do this, so that the LAMMPS instance runs on a
subset of processors if desired, but I don't know how to do it from
Pypar.  So for now, it runs with MPI_COMM_WORLD, which is all the
processors.  If someone figures out how to do this with one or more of
the Python wrappers for MPI, like Pypar, please let us know and we
will amend these doc pages.
</P>
<P>Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.

View File

@ -93,6 +93,12 @@ them. See the discussion in "Section_start
shared library, for instructions on how to set the LD_LIBRARY_PATH
variable appropriately.
If your LAMMPS build does not use any auxiliary libraries in
non-default directories where the system cannot find them, you
typically just need to add something like this to your ~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
:line
11.2 Building LAMMPS as a shared library :link(py_2),h4
@ -110,11 +116,14 @@ make -f Makefile.shlib foo :pre
where foo is the machine target name, such as linux or g++ or serial.
This should create the file liblmp_foo.so in the src directory, as
well as a soft link liblmp.so which is what the Python wrapper will
load by default.  Note that if you are building multiple machine
versions of the shared library, the soft link is always set to the
most recently built version.
If this fails, see "Section_start 5"_Section_start.html#start_5 for
more details, especially if your LAMMPS build uses auxiliary
libraries, e.g. ones required by certain packages and found in the
lib/package directories.
:line
@ -135,11 +144,10 @@ These include
"myMPI"_http://nbcr.sdsc.edu/forum/viewtopic.php?t=89&sid=c997fefc3933bd66204875b436940f16
"Pypar"_http://code.google.com/p/pypar :ul
All of these except pyMPI work by wrapping the MPI library and
exposing (some portion of) its interface to your Python script.  This
means Python cannot be used interactively in parallel, since they do
not address the issue of interactive input to multiple instances of
Python running on different processors.  The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of "python"
@ -169,22 +177,17 @@ sudo python setup.py install :pre
The "sudo" is only needed if required to copy Numpy files into your
Python distribution's site-packages directory.
To install Pypar (version pypar-2.1.4_94 as of Aug 2012), unpack it
and from its "source" directory, type
python setup.py build
sudo python setup.py install :pre
Again, the "sudo" is only needed if required to copy Pypar files into
your Python distribution's site-packages directory.
If you have successfully installed Pypar, you should be able to run
Python and type
import pypar :pre
@ -200,6 +203,19 @@ print "Proc %d out of %d procs" % (pypar.rank(),pypar.size()) :pre
and see one line of output for each processor you run on.
IMPORTANT NOTE: To use Pypar and LAMMPS in parallel from Python, you
must ensure both are using the same version of MPI.  If you only have
one MPI installed on your system, this is not an issue, but it can be
if you have multiple MPIs.  Your LAMMPS build is explicit about which
MPI it is using, since you specify the details in your low-level
src/MAKE/Makefile.foo file.  Pypar uses the "mpicc" command to find
information about the MPI it uses to build against.  And it tries to
load "libmpi.so" from the LD_LIBRARY_PATH.  This may or may not find
the MPI library that LAMMPS is using.  If you have problems running
both Pypar and LAMMPS together, this is an issue you may need to
address, e.g. by moving other MPI installations so that Pypar finds
the right one.
:line
11.4 Testing the Python-LAMMPS interface :link(py_4),h4
@ -213,27 +229,17 @@ and type:
If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
OSError: Could not load LAMMPS dynamic library :pre
which means Python was unable to load the LAMMPS shared library.  This
typically occurs if the system can't find the LAMMPS shared library
or one of the auxiliary shared libraries it depends on.
Python (actually the operating system) isn't verbose about telling you
why the load failed, so carefully go through the steps above regarding
environment variables, and the instructions in "Section_start
5"_Section_start.html#start_5 about building a shared library and
about setting the LD_LIBRARY_PATH environment variable.
[Test LAMMPS and Python in serial:] :h5
@ -278,10 +284,10 @@ and you should see the same output as if you had typed
Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command.  In this case you
should get 4 sets of output, each showing that a LAMMPS run was made
on a single processor, instead of one set of output showing that
LAMMPS ran on 4 processors.  If the 1-processor outputs occur, it
means that Pypar is not working correctly.
Also note that once you import the Pypar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
@ -289,6 +295,8 @@ described in the Pypar documentation. The last line of your Python
script should be pypar.finalize(), to ensure MPI is shut down
correctly.
[Running Python scripts:] :h5
Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
@ -340,7 +348,7 @@ C or Fortran program.
lmp = lammps()           # create a LAMMPS object using the default liblmp.so library
lmp = lammps("g++")      # create a LAMMPS object using the liblmp_g++.so library
lmp = lammps("",list)    # ditto, with command-line args, e.g. list = \["-echo","screen"\]
lmp = lammps("g++",list) :pre
lmp.close()              # destroy a LAMMPS object :pre
@ -379,13 +387,14 @@ lmp.put_coords(x) # set all atom coords via x :pre
:line
IMPORTANT NOTE: Currently, the creation of a LAMMPS object from within
lammps.py does not take an MPI communicator as an argument.  There
should be a way to do this, so that the LAMMPS instance runs on a
subset of processors if desired, but I don't know how to do it from
Pypar.  So for now, it runs with MPI_COMM_WORLD, which is all the
processors.  If someone figures out how to do this with one or more of
the Python wrappers for MPI, like Pypar, please let us know and we
will amend these doc pages.
Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.

View File

@ -281,10 +281,13 @@ dummy MPI library provided in src/STUBS, since you don't need a true
MPI library installed on your system.  See the
src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
in this case.  You will also need to build the STUBS library for your
platform before making LAMMPS itself.  To build it as a static
library, from the src directory, type "make stubs", or from the STUBS
dir, type "make" and it should create a libmpi_stubs.a suitable for
linking to LAMMPS.  To build it as a shared library, from the STUBS
dir, type "make shlib" and it should create a libmpi_stubs.so suitable
for dynamically loading when LAMMPS runs.  If either of these builds
fails, you will need to edit the STUBS/Makefile for your platform.
</P>
<P>The file STUBS/mpi.cpp provides a CPU timer function called
MPI_Wtime() that calls gettimeofday().  If your system doesn't
@ -779,6 +782,8 @@ then be called from another application or a scripting language. See
LAMMPS to other codes.  See <A HREF = "Section_python.html">this section</A> for
more info on wrapping and running LAMMPS from Python.
</P>
<H5><B>Static library:</B>
</H5>
<P>To build LAMMPS as a static library (*.a file on Linux), type
</P>
<PRE>make makelib
@ -795,8 +800,10 @@ using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The
build will create the file liblmp_foo.a which another application can
link to.
</P>
<H5><B>Shared library:</B>
</H5>
<P>To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, e.g. from Python, type
</P>
<PRE>make makeshlib
make -f Makefile.shlib foo
@ -813,28 +820,31 @@ liblmp_foo.so which another application can link to dyamically, as
well as a soft link liblmp.so, which the Python wrapper uses by
default.
</P>
<H5><B>Additional requirements for building a shared library:</B>
</H5>
<P>Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries, and the operating system must be able to find them.  For
LAMMPS, this includes all libraries needed by main LAMMPS (e.g. MPI or
FFTW or JPEG), system libraries needed by main LAMMPS (e.g. extra libs
needed by MPI), any packages you have installed that require libraries
provided with LAMMPS (e.g. the USER-ATC package requires
lib/atc/libatc.so), and any system libraries (e.g. BLAS or
Fortran-to-C libraries) listed in the lib/package/Makefile.lammps
file.
</P>
<P>If one of these auxiliary libraries does not exist as a shared
library, the second make command should generate a build error.  If a
needed library is simply missing from the link list, this will not
generate an error at build time, but will generate a run-time error
when the library is loaded, so be sure all needed libraries are
listed, just as they are when building LAMMPS as a stand-alone code.
</P>
<P>Note that if you install them yourself, some libraries, such as MPI,
may not build by default as shared libraries.  The build instructions
for the library should tell you how to do this.
</P>
<P>As an example, here is how to build and install the <A HREF = "http://www-unix.mcs.anl.gov/mpi">MPICH
library</A>, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:
@ -846,63 +856,50 @@ make
make install
</PRE>
<P>You may need to use "sudo make install" in place of the last line if
you do not have write privileges for /usr/local/lib.  The end result
should be the file /usr/local/lib/libmpich.so.
</P>
<P>Also note that not all of the auxiliary libraries provided with LAMMPS
include Makefiles in their lib directories suitable for building them
as shared libraries.  Typically this simply requires 3 steps: (a)
adding a -fPIC switch when files are compiled, (b) adding "-fPIC
-shared" switches when the library is linked with a C++ (or Fortran)
compiler, and (c) creating an output target that ends in ".so", like
libatc.so.  As we or others create and contribute these Makefiles, we
will add them to the LAMMPS distribution.
</P>
<H5><B>Additional requirements for using a shared library:</B>
</H5>
<P>The operating system finds shared libraries to load at run-time using
the environment variable LD_LIBRARY_PATH.  So at a minimum you
must set it to include the LAMMPS src directory where the LAMMPS
shared library file is created.
</P>
<P>For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src
</PRE>
<P>If any auxiliary libraries used by LAMMPS are not in default places
where the operating system can find them, then you also have to add
their paths to the LD_LIBRARY_PATH environment variable.
</P>
<P>For example, if you are using the dummy MPI library provided in
src/STUBS, and have built the file libmpi_stubs.so, you would add
something like this to your ~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src/STUBS <PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src/STUBS
</PRE> </PRE>
<P>If you are using the LAMMPS USER-ATC package, and have built the file
lib/atc/libatc.so, you would add something like this to your ~/.cshrc
file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/lib/atc
</PRE>
<H5><B>Calling the LAMMPS library:</B>
</H5>
<P>Either flavor of library (static or shared) allows one or more LAMMPS
objects to be instantiated from the calling program.
</P>
<P>When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
@ -913,17 +910,17 @@ Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
</P>
<P>See the sample codes in examples/COUPLE/simple for examples of C++,
C, and Fortran codes that invoke LAMMPS thru its library interface.
There are other examples as well in the COUPLE directory which are
discussed in <A HREF = "Section_howto.html#howto_10">Section_howto 10</A> of the
manual.  See <A HREF = "Section_python.html">Section_python</A> of the manual for a
description of the Python wrapper provided with LAMMPS that operates
through the LAMMPS library interface.
</P>
<P>The files src/library.cpp and library.h define the C-style API for
using LAMMPS as a library.  See <A HREF = "Section_howto.html#howto_19">Section_howto
19</A> of the manual for a description of the
interface and how to extend it for your needs.
</P>
<HR>


@ -275,10 +275,13 @@ dummy MPI library provided in src/STUBS, since you don't need a true
MPI library installed on your system.  See the
src/MAKE/Makefile.serial file for how to specify the 3 MPI variables
in this case.  You will also need to build the STUBS library for your
platform before making LAMMPS itself.  To build it as a static
library, from the src directory, type "make stubs", or from the STUBS
dir, type "make" and it should create a libmpi_stubs.a suitable for
linking to LAMMPS.  To build it as a shared library, from the STUBS
dir, type "make shlib" and it should create a libmpi_stubs.so suitable
for dynamically loading when LAMMPS runs.  If either of these builds
fails, you will need to edit the STUBS/Makefile for your platform.
The file STUBS/mpi.cpp provides a CPU timer function called
MPI_Wtime() that calls gettimeofday().  If your system doesn't
@ -773,6 +776,8 @@ then be called from another application or a scripting language. See
LAMMPS to other codes.  See "this section"_Section_python.html for
more info on wrapping and running LAMMPS from Python.
[Static library:] :h5
To build LAMMPS as a static library (*.a file on Linux), type
make makelib
@ -789,8 +794,10 @@ using the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The
build will create the file liblmp_foo.a which another application can
link to.
[Shared library:] :h5
To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, e.g. from Python, type
make makeshlib
make -f Makefile.shlib foo :pre
@ -807,28 +814,31 @@ liblmp_foo.so which another application can link to dyamically, as
well as a soft link liblmp.so, which the Python wrapper uses by
default.
[Additional requirements for building a shared library:] :h5
Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries, and the operating system must be able to find them.  For
LAMMPS, this includes all libraries needed by main LAMMPS (e.g. MPI or
FFTW or JPEG), system libraries needed by main LAMMPS (e.g. extra libs
needed by MPI), any packages you have installed that require libraries
provided with LAMMPS (e.g. the USER-ATC package requires
lib/atc/libatc.so), and any system libraries (e.g. BLAS or
Fortran-to-C libraries) listed in the lib/package/Makefile.lammps
file.
If one of these auxiliary libraries does not exist as a shared
library, the second make command should generate a build error.  If a
needed library is simply missing from the link list, this will not
generate an error at build time, but will generate a run-time error
when the library is loaded, so be sure all needed libraries are
listed, just as they are when building LAMMPS as a stand-alone code.
Note that if you install them yourself, some libraries, such as MPI,
may not build by default as shared libraries.  The build instructions
for the library should tell you how to do this.
As an example, here is how to build and install the "MPICH
library"_mpich, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:
@ -840,77 +850,50 @@ make
make install :pre
You may need to use "sudo make install" in place of the last line if
you do not have write privileges for /usr/local/lib.  The end result
should be the file /usr/local/lib/libmpich.so.
Also note that not all of the auxiliary libraries provided with LAMMPS
include Makefiles in their lib directories suitable for building them
as shared libraries.  Typically this simply requires 3 steps: (a)
adding a -fPIC switch when files are compiled, (b) adding "-fPIC
-shared" switches when the library is linked with a C++ (or Fortran)
compiler, and (c) creating an output target that ends in ".so", like
libatc.so.  As we or others create and contribute these Makefiles, we
will add them to the LAMMPS distribution.
[Additional requirements for using a shared library:] :h5
The operating system finds shared libraries to load at run-time using
the environment variable LD_LIBRARY_PATH.  So at a minimum you
must set it to include the LAMMPS src directory where the LAMMPS
shared library file is created.
For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
If any auxiliary libraries used by LAMMPS are not in default places
where the operating system can find them, you also have to add their
paths to the LD_LIBRARY_PATH environment variable.
For example, if you are using the dummy MPI library provided in
src/STUBS, and have built the file libmpi_stubs.so, you would add
something like this to your ~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src/STUBS :pre
If you are using the LAMMPS USER-ATC package, and have built the file
lib/atc/libatc.so, you would add something like this to your ~/.cshrc
file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/lib/atc :pre
[Calling the LAMMPS library:] :h5
Either flavor of library (static or shared) allows one or more LAMMPS
objects to be instantiated from the calling program.
When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
@ -921,17 +904,17 @@ Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
See the sample codes in examples/COUPLE/simple for examples of C++,
C, and Fortran codes that invoke LAMMPS thru its library interface.
There are other examples as well in the COUPLE directory which are
discussed in "Section_howto 10"_Section_howto.html#howto_10 of the
manual.  See "Section_python"_Section_python.html of the manual for a
description of the Python wrapper provided with LAMMPS that operates
through the LAMMPS library interface.
The files src/library.cpp and library.h define the C-style API for
using LAMMPS as a library.  See "Section_howto
19"_Section_howto.html#howto_19 of the manual for a description of the
interface and how to extend it for your needs.
:line