Merge branch 'master' into lammps-icms

Resolved Conflicts:
	python/lammps.py
	src/MAKE/Makefile.mingw
	src/MAKE/Makefile.openmpi
	src/MAKE/Makefile.serial
	src/MAKE/Makefile.serial_debug
	src/USER-CUDA/verlet_cuda.cpp
Axel Kohlmeyer
2012-08-14 03:35:29 -04:00
117 changed files with 2365 additions and 1679 deletions

6 binary image files removed (content not shown); sizes before: 280 KiB, 321 KiB, 329 KiB, 260 KiB, 229 KiB, 261 KiB.

View File

@@ -659,8 +659,8 @@ invoked with minimal overhead (no setup or clean-up) if you wish to do
multiple short runs, driven by another program.
</P>
<P>Examples of driver codes that call LAMMPS as a library are included in
the "couple" directory of the LAMMPS distribution; see couple/README
for more details:
the examples/COUPLE directory of the LAMMPS distribution; see
examples/COUPLE/README for more details:
</P>
<UL><LI>simple: simple driver programs in C++ and C which invoke LAMMPS as a
library
@@ -1820,11 +1820,12 @@ details.
<P>The key idea of the library interface is that you can write any
functions you wish to define how your code talks to LAMMPS and add
them to src/library.cpp and src/library.h, as well as to the <A HREF = "Section_python.html">Python
interface</A>. The routines you add can access
or change any LAMMPS data you wish. The couple and python directories
have example C++ and C and Python codes which show how a driver code
can link to LAMMPS as a library, run LAMMPS on a subset of processors,
grab data from LAMMPS, change it, and put it back into LAMMPS.
interface</A>. The routines you add can access or
change any LAMMPS data you wish. The examples/COUPLE and python
directories have example C++ and C and Python codes which show how a
driver code can link to LAMMPS as a library, run LAMMPS on a subset of
processors, grab data from LAMMPS, change it, and put it back into
LAMMPS.
</P>
<HR>

View File

@@ -654,8 +654,8 @@ invoked with minimal overhead (no setup or clean-up) if you wish to do
multiple short runs, driven by another program.
Examples of driver codes that call LAMMPS as a library are included in
the "couple" directory of the LAMMPS distribution; see couple/README
for more details:
the examples/COUPLE directory of the LAMMPS distribution; see
examples/COUPLE/README for more details:
simple: simple driver programs in C++ and C which invoke LAMMPS as a
library :ulb,l
@@ -1807,11 +1807,12 @@ details.
The key idea of the library interface is that you can write any
functions you wish to define how your code talks to LAMMPS and add
them to src/library.cpp and src/library.h, as well as to the "Python
interface"_Section_python.html. The routines you add can access
or change any LAMMPS data you wish. The couple and python directories
have example C++ and C and Python codes which show how a driver code
can link to LAMMPS as a library, run LAMMPS on a subset of processors,
grab data from LAMMPS, change it, and put it back into LAMMPS.
interface"_Section_python.html. The routines you add can access or
change any LAMMPS data you wish. The examples/COUPLE and python
directories have example C++ and C and Python codes which show how a
driver code can link to LAMMPS as a library, run LAMMPS on a subset of
processors, grab data from LAMMPS, change it, and put it back into
LAMMPS.
:line

View File

@@ -14,175 +14,153 @@
<P>This section describes how to build and use LAMMPS via a Python
interface.
</P>
<UL><LI>11.1 <A HREF = "#py_1">Extending Python with a serial version of LAMMPS</A>
<LI>11.2 <A HREF = "#py_2">Creating a shared MPI library</A>
<LI>11.3 <A HREF = "#py_3">Extending Python with a parallel version of LAMMPS</A>
<LI>11.4 <A HREF = "#py_4">Extending Python with MPI</A>
<LI>11.5 <A HREF = "#py_5">Testing the Python-LAMMPS interface</A>
<LI>11.6 <A HREF = "#py_6">Using LAMMPS from Python</A>
<LI>11.7 <A HREF = "#py_7">Example Python scripts that use LAMMPS</A>
<UL><LI>11.1 <A HREF = "#py_1">Setting necessary environment variables</A>
<LI>11.2 <A HREF = "#py_2">Building LAMMPS as a shared library</A>
<LI>11.3 <A HREF = "#py_3">Extending Python with MPI to run in parallel</A>
<LI>11.4 <A HREF = "#py_4">Testing the Python-LAMMPS interface</A>
<LI>11.5 <A HREF = "#py_5">Using LAMMPS from Python</A>
<LI>11.6 <A HREF = "#py_6">Example Python scripts that use LAMMPS</A>
</UL>
<P>The LAMMPS distribution includes some Python code in its python
directory which wraps the library interface to LAMMPS. This makes it
possible to run LAMMPS, invoke LAMMPS commands or give it an input
script, extract LAMMPS results, and modify internal LAMMPS variables,
either from a Python script or interactively from a Python prompt.
<P>The LAMMPS distribution includes the file python/lammps.py which wraps
the library interface to LAMMPS. This file makes it possible to
run LAMMPS, invoke LAMMPS commands or give it an input script, extract
LAMMPS results, and modify internal LAMMPS variables, either from a
Python script or interactively from a Python prompt. You can do the
former in serial or parallel. Running Python interactively in
parallel does not generally work, unless you have a package installed
that extends your Python to enable multiple instances of Python to
read what you type.
</P>
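<P>As a quick taste of what this looks like, here is a minimal sketch of
an interactive session (it assumes the setup steps described below are
complete and that an input script such as bench/in.lj is available):
</P>
<PRE>>>> from lammps import lammps
>>> lmp = lammps()             # load the shared LAMMPS library via ctypes
>>> lmp.command("echo screen") # issue a single input script command
>>> lmp.file("in.lj")          # or run an entire input script
>>> lmp.close()                # destroy the LAMMPS instance
</PRE>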
<P><A HREF = "http://www.python.org">Python</A> is a powerful scripting and programming
language which can be used to wrap software like LAMMPS and other
packages. It can be used to glue multiple pieces of software
together, e.g. to run a coupled or multiscale model. See <A HREF = "Section_howto.html#howto_10">this
together, e.g. to run a coupled or multiscale model. See <A HREF = "Section_howto.html#howto_10">Section
section</A> of the manual and the couple
directory of the distribution for more ideas about coupling LAMMPS to
other codes. See <A HREF = "Section_start.html#start_5">Section_start 4</A> about
how to build LAMMPS as a library, and <A HREF = "Section_howto.html#howto_19">this
section</A> for a description of the library
how to build LAMMPS as a library, and <A HREF = "Section_howto.html#howto_19">Section_howto
19</A> for a description of the library
interface provided in src/library.cpp and src/library.h and how to
extend it for your needs. As described below, that interface is what
is exposed to Python. It is designed to be easy to add functions to.
This has the effect of extending the Python interface as well. See
details below.
This can easily extend the Python interface as well. See details
below.
</P>
<P>By using the Python interface LAMMPS can also be coupled with a GUI or
visualization tools that display graphs or animations in real time as
LAMMPS runs. Examples of such scripts are included in the python
directory.
<P>By using the Python interface, LAMMPS can also be coupled with a GUI
or other visualization tools that display graphs or animations in real
time as LAMMPS runs. Examples of such scripts are included in the
python directory.
</P>
<P>Two advantages of using Python are how concise the language is and
<P>Two advantages of using Python are how concise the language is, and
that it can be run interactively, enabling rapid development and
debugging of programs. If you use it to mostly invoke costly
operations within LAMMPS, such as running a simulation for a
reasonable number of timesteps, then the overhead cost of invoking
LAMMPS thru Python will be negligible.
</P>
<P>Before using LAMMPS from a Python script, the Python on your machine
must be "extended" to include an interface to the LAMMPS library. If
your Python script will invoke MPI operations, you will also need to
extend your Python with an interface to MPI itself.
<P>Before using LAMMPS from a Python script, you have to do two things.
You need to set two environment variables. And you need to build
LAMMPS as a dynamic shared library, so it can be loaded by Python.
Both these steps are discussed below. If you wish to run LAMMPS in
parallel from Python, you also need to extend your Python with MPI.
This is also discussed below.
</P>
<P>Thus you should first decide how you intend to use LAMMPS from Python.
There are 3 options:
</P>
<P>(1) Use LAMMPS on a single processor running Python.
</P>
<P>(2) Use LAMMPS in parallel, where each processor runs Python, but your
Python program does not use MPI.
</P>
<P>(3) Use LAMMPS in parallel, where each processor runs Python, and your
Python script also makes MPI calls through a Python/MPI interface.
</P>
<P>Note that for (2) and (3) you will not be able to use Python
interactively by typing commands and getting a response. This is
because you will have multiple instances of Python running (e.g. on a
parallel machine) and they cannot all read what you type.
</P>
<P>Working in mode (1) does not require your machine to have MPI
installed. You should extend your Python with a serial version of
LAMMPS and the dummy MPI library provided with LAMMPS. See
instructions below on how to do this.
</P>
<P>Working in mode (2) requires your machine to have an MPI library
installed, but your Python does not need to be extended with MPI
itself. The MPI library must be a shared library (e.g. a *.so file on
Linux) which is not typically created when MPI is built/installed.
See instructions below on how to do this. You should extend your
Python with a parallel version of LAMMPS which will use the
shared MPI system library. See instructions below on how to do this.
</P>
<P>Working in mode (3) requires your machine to have MPI installed (as a
shared library as in (2)). You must also extend your Python with a
parallel version of LAMMPS (same as in (2)) and with MPI itself, via
one of several available Python/MPI packages. See instructions below
on how to do the latter task.
</P>
<P>Several of the following sub-sections cover the rest of the Python
setup discussion. The next to last sub-section describes the Python
syntax used to invoke LAMMPS. The last sub-section describes example
Python scripts included in the python directory.
</P>
<P>Before proceeding, there are 2 items to note.
</P>
<P>(1) The provided Python wrapper for LAMMPS uses the amazing and
magical (to me) "ctypes" package in Python, which auto-generates the
interface code needed between Python and a set of C interface routines
for a library. Ctypes is part of standard Python for versions 2.5 and
later. You can check which version of Python you have installed by
simply typing "python" at a shell prompt.
</P>
<P>(2) Any library wrapped by Python, including LAMMPS, must be built as
a shared library (e.g. a *.so file on Linux and not a *.a file). The
python/setup_serial.py and setup.py scripts do this build for LAMMPS
itself (described below). But if you have LAMMPS configured to use
additional packages that have their own libraries, then those
libraries must also be shared libraries. E.g. MPI, FFTW, or any of
the libraries in lammps/lib. When you build LAMMPS as a stand-alone
code, you are not building shared versions of these libraries.
</P>
<P>The discussion below describes how to create a shared MPI library. I
suggest you start by configuring LAMMPS without packages installed that
require any libraries besides MPI. See <A HREF = "Section_start.html#start_3">this
section</A> of the manual for a discussion of
LAMMPS packages. E.g. do not use the KSPACE, GPU, MEAM, POEMS, or
REAX packages.
</P>
<P>If you are successfully follow the steps belwo to build the Python
wrappers and use this version of LAMMPS through Python, you can then
take the next step of adding LAMMPS packages that use additional
libraries. This will require you to build a shared library for that
package's library, similar to what is described below for MPI. It
will also require you to edit the python/setup_serial.py or setup.py
scripts to enable Python to access those libraries when it builds the
LAMMPS wrapper.
<P>The Python wrapper for LAMMPS uses the amazing and magical (to me)
"ctypes" package in Python, which auto-generates the interface code
needed between Python and a set of C interface routines for a library.
Ctypes is part of standard Python for versions 2.5 and later. You can
check which version of Python you have installed by simply typing
"python" at a shell prompt.
</P>
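<P>A quick way to perform that check is the following one-liner, which
prints the ctypes version on success and raises an ImportError on a
Python that is too old:
</P>
<PRE>% python -c "import ctypes; print ctypes.__version__"
</PRE>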
<HR>
<HR>
<A NAME = "py_1"></A><H4>11.1 Extending Python with a serial version of LAMMPS
<A NAME = "py_1"></A><H4>11.1 Setting necessary environment variables
</H4>
<P>From the python directory in the LAMMPS distribution, type
<P>For Python to use the LAMMPS interface, it needs to find two files.
The paths to these files need to be added to two environment variables
that Python checks.
</P>
<PRE>python setup_serial.py build
<P>The first is the environment variable PYTHONPATH. It needs
to include the directory where the python/lammps.py file is.
</P>
<P>For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
</P>
<PRE>setenv PYTHONPATH $<I>PYTHONPATH</I>:/home/sjplimp/lammps/python
</PRE>
<P>and then one of these commands:
<P>The second is the environment variable LD_LIBRARY_PATH, which is used
by the operating system to find dynamic shared libraries when it loads
them. It needs to include the directory where the shared LAMMPS
library will be. Normally this is the LAMMPS src dir, as explained in
the following section.
</P>
<PRE>sudo python setup_serial.py install
python setup_serial.py install --home=~/foo
<P>For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src
</PRE>
<P>The "build" command should compile all the needed LAMMPS files,
including its dummy MPI library. The first "install" command will put
the needed files in your Python's site-packages sub-directory, so that
Python can load them. For example, if you installed Python yourself
on a Linux machine, it would typically be somewhere like
/usr/local/lib/python2.5/site-packages. Installing Python packages
this way often requires you to be able to write to the Python
directories, which may require root privileges, hence the "sudo"
prefix. If this is not the case, you can drop the "sudo". If you use
the "sudo" prefix and you have installed Python yourself, you should
make sure that root uses the same Python as the one you did the
"install" in. E.g. these 2 commands may do the install in different
Python versions:
<P>As discussed below, if your LAMMPS build includes auxiliary libraries,
they must also be available as shared libraries for Python to
successfully load LAMMPS. If they are not in default places where the
operating system can find them, then you also have to add their paths
to the LD_LIBRARY_PATH environment variable.
</P>
<PRE>python setup_serial.py install --home=~/foo
python /usr/local/bin/python/setup_serial.py install --home=~/foo
<P>For example, if you are using the dummy MPI library provided in
src/STUBS, you need to add something like this to your ~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/src/STUBS
</PRE>
<P>Alternatively, you can install the LAMMPS files (or any other Python
packages) in your own user space. The second "install" command does
this, where you should replace "foo" with your directory of choice.
</P>
<P>If these commands are successful, a <I>lammps.py</I> and
<I>_lammps_serial.so</I> file will be put in the appropriate directory.
<P>If you are using the LAMMPS USER-ATC package, you need to add
something like this to your ~/.cshrc file:
</P>
<PRE>setenv LD_LIBRARY_PATH $<I>LD_LIBRARY_PATH</I>:/home/sjplimp/lammps/lib/atc
</PRE>
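<P>Once the variables are set, you can verify them from within Python
itself; both values printed by this short sketch should mention your
LAMMPS tree:
</P>
<PRE>import os
print os.environ.get("PYTHONPATH","(not set)")
print os.environ.get("LD_LIBRARY_PATH","(not set)")
</PRE>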
<HR>
<A NAME = "py_2"></A><H4>11.2 Creating a shared MPI library
<A NAME = "py_2"></A><H4>11.2 Building LAMMPS as a shared library
</H4>
<P>A shared library is one that is dynamically loadable, which is what
Python requires. On Linux this is a library file that ends in ".so",
not ".a". Such a shared library is normally not built if you
installed MPI yourself, but it is easy to do. Here is how to do it
for <A HREF = "http://www-unix.mcs.anl.gov/mpi">MPICH</A>, a popular open-source version of MPI, distributed
by Argonne National Labs. From within the mpich directory, type
<P>Instructions on how to build LAMMPS as a shared library are given in
<A HREF = "Section_start.html#start_5">Section_start 5</A>. A shared library is one
that is dynamically loadable, which is what Python requires. On Linux
this is a library file that ends in ".so", not ".a".
</P>
<P>From the src directory, type
</P>
<PRE>make makeshlib
make -f Makefile.shlib foo
</PRE>
<P>where foo is the machine target name, such as linux or g++ or serial.
This should create the file liblmp_foo.so in the src directory, as
well as a soft link liblmp.so which is what the Python wrapper will
load by default. If you are building multiple machine versions of the
shared library, the soft link is always set to the most recently built
version.
</P>
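<P>As a sanity check that the shared library itself is loadable,
independent of the LAMMPS Python wrapper, you can try opening it
directly with ctypes (a sketch; adjust the path to wherever your
liblmp.so was created):
</P>
<PRE>from ctypes import CDLL
# dlopen() failures surface as an OSError naming the missing
# dependency, e.g. an MPI or FFTW shared library it cannot find
lib = CDLL("/home/sjplimp/lammps/src/liblmp.so")
</PRE>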
<P>Note that as discussed below, a LAMMPS build may depend on several
auxiliary libraries, which are specified in your low-level
src/Makefile.foo file. For example, an MPI library, the FFTW library,
a JPEG library, etc. Depending on what LAMMPS packages you have
installed, the build may also require additional libraries from the
lib directories, such as lib/atc/libatc.so or lib/reax/libreax.so.
</P>
<P>You must insure that each of these libraries exist in shared library
form (*.so file for Linux systems), or either the LAMMPS shared
library build or the Python load of the library will fail. For the
load to be successful all the shared libraries must also be in
directories that the operating system checks. See the discussion in
the preceding section about the LD_LIBRARY_PATH environment variable
for how to insure this.
</P>
<P>Note that some system libraries, such as MPI, if you installed it
yourself, may not be built by default as shared libraries. The build
instructions for the library should tell you how to do this.
</P>
<P>For example, here is how to build and install the <A HREF = "http://www-unix.mcs.anl.gov/mpi">MPICH
library</A>, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:
</P>
@@ -190,62 +168,26 @@ by Argonne National Labs. From within the mpich directory, type
make
make install
</PRE>
<P>You may need to use "sudo make install" in place of the last line.
The end result should be the file libmpich.so in /usr/local/lib.
<P>You may need to use "sudo make install" in place of the last line if
you do not have write privileges for /usr/local/lib. The end result
should be the file /usr/local/lib/libmpich.so.
</P>
<P>IMPORTANT NOTE: If the file libmpich.a already exists in your
installation directory (e.g. /usr/local/lib), you will now have both a
static and shared MPI library. This will be fine for running LAMMPS
from Python since it only uses the shared library. But if you now try
to build LAMMPS by itself as a stand-alone program (cd lammps/src;
make foo) or build other codes that expect to link against libmpich.a,
then those builds may fail if the linker uses libmpich.so instead. If
this happens, it means you will need to remove the file
/usr/local/lib/libmpich.so before building LAMMPS again as a
stand-alone code.
<P>Note that not all of the auxiliary libraries provided with LAMMPS have
shared-library Makefiles in their lib directories. Typically this
simply requires a Makefile.foo that adds a -fPIC switch when files are
compiled and "-fPIC -shared" switches when the library is linked
with a C++ (or Fortran) compiler, as well as an output target that
ends in ".so", like libatc.so. As we or others create and contribute
these Makefiles, we will add them to the LAMMPS distribution.
</P>
<HR>
<A NAME = "py_3"></A><H4>11.3 Extending Python with a parallel version of LAMMPS
<A NAME = "py_3"></A><H4>11.3 Extending Python with MPI to run in parallel
</H4>
<P>From the python directory, type
<P>If you wish to run LAMMPS in parallel from Python, you need to extend
your Python with an interface to MPI. This also allows you to
make MPI calls directly from Python in your script, if you desire.
</P>
<PRE>python setup.py build
</PRE>
<P>and then one of these commands:
</P>
<PRE>sudo python setup.py install
python setup.py install --home=~/foo
</PRE>
<P>The "build" command should compile all the needed LAMMPS C++ files,
which will require MPI to be installed on your system. This means it
must find both the header file mpi.h and a shared library file,
e.g. libmpich.so if the MPICH version of MPI is installed. See the
preceding section for how to create a shared library version of MPI if
it does not exist. You may need to adjust the "include_dirs" and
"library_dirs" and "libraries" fields in python/setup.py to
insure the Python build finds all the files it needs.
</P>
<P>The first "install" command will put the needed files in your Python's
site-packages sub-directory, so that Python can load them. For
example, if you installed Python yourself on a Linux machine, it would
typically be somewhere like /usr/local/lib/python2.5/site-packages.
Installing Python packages this way often requires you to be able to
write to the Python directories, which may require root privileges,
hence the "sudo" prefix. If this is not the case, you can drop the
"sudo".
</P>
<P>Alternatively, you can install the LAMMPS files (or any other Python
packages) in your own user space. The second "install" command does
this, where you should replace "foo" with your directory of choice.
</P>
<P>If these commands are successful, a <I>lammps.py</I> and <I>_lammps.so</I> file
will be put in the appropriate directory.
</P>
<HR>
<A NAME = "py_4"></A><H4>11.4 Extending Python with MPI
</H4>
<P>There are several Python packages available that purport to wrap MPI
as a library and allow MPI functions to be called from Python.
</P>
@@ -260,26 +202,26 @@ as a library and allow MPI functions to be called from Python.
<P>All of these except pyMPI work by wrapping the MPI library (which must
be available on your system as a shared library, as discussed above),
and exposing (some portion of) its interface to your Python script.
This means they cannot be used interactively in parallel, since they
This means Python cannot be used interactively in parallel, since they
do not address the issue of interactive input to multiple instances of
Python running on different processors. The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of python
believe) creates a new alternate executable (in place of "python"
itself) as a result.
</P>
<P>In principle any of these Python/MPI packages should work to invoke
both calls to LAMMPS and MPI itself from a Python script running in
parallel. However, when I downloaded and looked at a few of them,
their documentation was incomplete and I had trouble with their
installation. It's not clear if some of the packages are still being
actively developed and supported.
LAMMPS in parallel and MPI calls themselves from a Python script which
is itself running in parallel. However, when I downloaded and looked
at a few of them, their documentation was incomplete and I had trouble
with their installation. It's not clear if some of the packages are
still being actively developed and supported.
</P>
<P>The one I recommend, since I have successfully used it with LAMMPS, is
Pypar. Pypar requires the ubiquitous <A HREF = "http://numpy.scipy.org">Numpy
package</A> be installed in your Python. After
launching python, type
</P>
<PRE>>>> import numpy
<PRE>import numpy
</PRE>
<P>to see if it is installed. If not, here is how to install it (version
1.3.0b1 as of April 2009). Unpack the numpy tarball and from its
@@ -303,106 +245,124 @@ your Python distribution's site-packages directory.
<P>If you have successfully installed Pypar, you should be able to run
python serially and type
</P>
<PRE>>>> import pypar
<PRE>import pypar
</PRE>
<P>without error. You should also be able to run python in parallel
on a simple test script
</P>
<PRE>% mpirun -np 4 python test.script
<PRE>% mpirun -np 4 python test.py
</PRE>
<P>where test.script contains the lines
<P>where test.py contains the lines
</P>
<PRE>import pypar
print "Proc %d out of %d procs" % (pypar.rank(),pypar.size())
</PRE>
<P>and see one line of output for each processor you ran on.
<P>and see one line of output for each processor you run on.
</P>
<HR>
<A NAME = "py_5"></A><H4>11.5 Testing the Python-LAMMPS interface
<A NAME = "py_4"></A><H4>11.4 Testing the Python-LAMMPS interface
</H4>
<P>Before using LAMMPS in a Python program, one more step is needed. The
interface to LAMMPS is via the Python ctypes package, which loads the
shared LAMMPS library via a CDLL() call, which in turn is a wrapper on
the C-library dlopen(). This command is different than a normal
Python "import" and needs to be able to find the LAMMPS shared
library, which is either in the Python site-packages directory or in a
local directory you specified in the "python setup.py install"
command, as described above.
</P>
<P>The simplest way to do this is to add a line like this to your
.cshrc or other shell start-up file.
</P>
<PRE>setenv LD_LIBRARY_PATH
${LD_LIBRARY_PATH}:/usr/local/lib/python2.5/site-packages
</PRE>
<P>and then execute the shell file to insure the path has been updated.
This will extend the path that dlopen() uses to look for shared
libraries.
</P>
<P>To test if the serial LAMMPS library has been successfully installed
(mode 1 above), launch Python and type
<P>To test if LAMMPS is callable from Python, launch Python interactively
and type:
</P>
<PRE>>>> from lammps import lammps
>>> lmp = lammps()
</PRE>
<P>If you get no errors, you're ready to use serial LAMMPS from Python.
<P>If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
</P>
<P>If you built LAMMPS for parallel use (mode 2 or 3 above), launch
Python in parallel:
<P>"CDLL: asdfasdfasdf"
</P>
<PRE>% mpirun -np 4 python test.script
<P>which means Python was unable to load the LAMMPS shared library. This
can occur if it can't find the LAMMPS library; see the environment
variable discussion <A HREF = "#python_1">above</A>. Or if it can't find one of the
auxiliary libraries that was specified in the LAMMPS build, in a
shared dynamic library format. This includes all libraries needed by
main LAMMPS (e.g. MPI or FFTW or JPEG), system libraries needed by
main LAMMPS (e.g. extra libs needed by MPI), or packages you have
installed that require libraries provided with LAMMPS (e.g. the
USER-ATC package requires lib/atc/libatc.so) or system libraries
(e.g. BLAS or Fortran-to-C libraries) listed in the
lib/package/Makefile.lammps file. Again, all of these must be
available as shared libraries, or the Python load will fail.
</P>
<P>Python (actually the operating system) isn't verbose about telling you
why the load failed, so go through the steps above and in
<A HREF = "Section_start.html#start_5">Section_start 5</A> carefully.
</P>
<H5><B>Test LAMMPS and Python in serial:</B>
</H5>
<P>To run a LAMMPS test in serial, type these lines into Python
interactively from the bench directory:
</P>
<PRE>>>> from lammps import lammps
>>> lmp = lammps()
>>> lmp.file("in.lj")
</PRE>
<P>where test.script contains the lines
<P>Or put the same lines in the file test.py and run it as
</P>
<PRE>% python test.py
</PRE>
<P>Either way, you should see the results of running the in.lj benchmark
on a single processor appear on the screen, the same as if you had
typed something like:
</P>
<PRE>lmp_g++ < in.lj
</PRE>
<H5><B>Test LAMMPS and Python in parallel:</B>
</H5>
<P>To run LAMMPS in parallel, assuming you have installed the
<A HREF = "http://datamining.anu.edu.au/~ole/pypar">Pypar</A> package as discussed
above, create a test.py file containing these lines:
</P>
<PRE>import pypar
from lammps import lammps
lmp = lammps()
print "Proc %d out of %d procs has" % (pypar.rank(),pypar.size()), lmp
lmp.file("in.lj")
print "Proc %d out of %d procs has" % (pypar.rank(),pypar.size()),lmp
pypar.finalize()
</PRE>
<P>Again, if you get no errors, you're good to go.
<P>You can then run it in parallel as:
</P>
<P>Note that if you left out the "import pypar" line from this script,
you would instantiate and run LAMMPS independently on each of the P
processors specified in the mpirun command. You can test if Pypar is
enabling true parallel Python and LAMMPS by adding a line to the above
sequence of commands like lmp.file("in.lj") to run an input script and
see if the LAMMPS run says it ran on P processors or if you get output
from P duplicated 1-processor runs written to the screen. In the
latter case, Pypar is not working correctly.
</P>
<P>Note that this line:
</P>
<PRE>from lammps import lammps
<PRE>% mpirun -np 4 python test.py
</PRE>
<P>will import either the serial or parallel version of the LAMMPS
library, as wrapped by lammps.py. But if you installed both via
setup_serial.py and setup.py, it will always import the parallel
version, since it attempts that first.
<P>and you should see the same output as if you had typed
</P>
<P>Note that if your Python script imports the Pypar package (as above),
so that it can use MPI calls directly, then Pypar initializes MPI for
you. Thus the last line of your Python script should be
pypar.finalize(), to insure MPI is shut down correctly.
<PRE>% mpirun -np 4 lmp_g++ < in.lj
</PRE>
<P>Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command. In this case you
should get 4 sets of output, each showing that a run was made on a
single processor, instead of one set of output showing that it ran on
4 processors. If the 1-processor outputs occur, it means that Pypar
is not working correctly.
</P>
<P>Also note that a Python script can be invoked in one of several ways:
<P>Also note that once you import the Pypar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
described in the Pypar documentation. The last line of your Python
script should be pypar.finalize(), to insure MPI is shut down
correctly.
</P>
<P>% python foo.script
<P>Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
</P>
<PRE>% python foo.script
% python -i foo.script
% foo.script
</P>
% foo.script
</PRE>
<P>The last command requires that the first line of the script be
something like this:
</P>
<P>#!/usr/local/bin/python
#!/usr/local/bin/python -i
</P>
<PRE>#!/usr/local/bin/python
#!/usr/local/bin/python -i
</PRE>
<P>where the path points to where you have Python installed, and that you
have made the script file executable:
</P>
<P>% chmod +x foo.script
</P>
<PRE>% chmod +x foo.script
</PRE>
<P>Without the "-i" flag, Python will exit when the script finishes.
With the "-i" flag, you will be left in the Python interpreter when
the script finishes, so you can type subsequent commands. As
@@ -413,14 +373,15 @@ Python on a single processor, not in parallel.
<HR>
<A NAME = "py_6"></A><H4>11.6 Using LAMMPS from Python
<A NAME = "py_5"></A><H4>11.5 Using LAMMPS from Python
</H4>
<P>The Python interface to LAMMPS consists of a Python "lammps" module,
the source code for which is in python/lammps.py, which creates a
"lammps" object, with a set of methods that can be invoked on that
object. The sample Python code below assumes you have first imported
the "lammps" module in your Python script and its settings as
follows:
the "lammps" module in your Python script. You can also include its
settings as follows, which are useful in testing return values from some
of the methods described below:
</P>
<PRE>from lammps import lammps
from lammps import LMPINT as INT
@@ -434,8 +395,10 @@ at the file src/library.cpp you will see that they correspond
one-to-one with calls you can make to the LAMMPS library from a C++ or
C or Fortran program.
</P>
<PRE>lmp = lammps() # create a LAMMPS object
lmp = lammps(list) # ditto, with command-line args, list = ["-echo","screen"]
<PRE>lmp = lammps() # create a LAMMPS object using the default liblmp.so library
lmp = lammps("g++") # create a LAMMPS object using the liblmp_g++.so library
lmp = lammps("",list) # ditto, with command-line args, list = ["-echo","screen"]
lmp = lammps("g++",list)
</PRE>
<PRE>lmp.close() # destroy a LAMMPS object
</PRE>
@@ -443,16 +406,16 @@
lmp.command(cmd) # invoke a single LAMMPS command, cmd = "run 100"
</PRE>
<PRE>xlo = lmp.extract_global(name,type) # extract a global quantity
# name = "boxxlo", "nlocal", etc
# name = "boxxlo", "nlocal", etc
# type = INT or DOUBLE
</PRE>
<PRE>coords = lmp.extract_atom(name,type) # extract a per-atom quantity
# name = "x", "type", etc
# name = "x", "type", etc
# type = IPTR or DPTR or DPTRPTR
</PRE>
<PRE>eng = lmp.extract_compute(id,style,type) # extract value(s) from a compute
v3 = lmp.extract_fix(id,style,type,i,j) # extract value(s) from a fix
# id = ID of compute or fix
# id = ID of compute or fix
# style = 0 = global data
# 1 = per-atom data
# 2 = local data
@@ -473,12 +436,23 @@ lmp.put_coords(x) # set all atom coords via x
</PRE>
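<P>As an illustrative sketch combining several of these calls (it
assumes the INT and DPTRPTR type constants were imported from the
lammps module as shown above, and that a system is already defined):
</P>
<PRE>nlocal = lmp.extract_global("nlocal",INT)  # atoms owned by this proc
x = lmp.extract_atom("x",DPTRPTR)          # coords, indexed like x[i][j]
if nlocal: print "atom 0 is at",x[0][0],x[0][1],x[0][2]
</PRE>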
<HR>
<P>The creation of a LAMMPS object does not take an MPI communicator as
an argument. There should be a way to do this, so that the LAMMPS
instance runs on a subset of processors, if desired, but I don't yet
know how from Pypar. So for now, it runs on MPI_COMM_WORLD, which is
all the processors.
<P>IMPORTANT NOTE: Currently, the creation of a LAMMPS object does not
take an MPI communicator as an argument. There should be a way to do
this, so that the LAMMPS instance runs on a subset of processors if
desired, but I don't know how to do it from Pypar. So for now, it
runs on MPI_COMM_WORLD, which is all the processors. If someone
figures out how to do this with one or more of the Python wrappers for
MPI, like Pypar, please let us know and we will amend these doc pages.
</P>
<P>Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.
</P>
<PRE>from lammps import lammps
lmp1 = lammps()
lmp2 = lammps()
lmp1.file("in.file1")
lmp2.file("in.file2")
</PRE>
<P>The file() and command() methods allow an input script or single
commands to be invoked.
</P>
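<P>For example, command() can be interleaved with Python logic to run a
simulation in chunks and monitor a quantity as it evolves. This
sketch assumes the default thermo_temp compute and the style/type
conventions listed above:
</P>
<PRE>for i in range(10):
  lmp.command("run 100")   # 10 chunks of 100 timesteps
  print "T =",lmp.extract_compute("thermo_temp",0,0)
</PRE>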
@@ -588,15 +562,10 @@ following steps:
<UL><LI>Add a new interface function to src/library.cpp and
src/library.h.
<LI>Verify the new function is syntactically correct by building LAMMPS as
a library - see <A HREF = "Section_start.html#start_5">Section_start 4</A> of the
manual.
<LI>Rebuild LAMMPS as a shared library.
<LI>Add a wrapper method in the Python LAMMPS module to python/lammps.py
for this interface function.
<LI>Rebuild the Python wrapper via python/setup_serial.py or
python/setup.py.
<LI>Add a wrapper method to python/lammps.py for this interface
function.
<LI>You should now be able to invoke the new interface function from a
Python script. Isn't ctypes amazing?
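<P>As a hypothetical illustration (the function name here is invented),
a new C function lammps_count_bonds() added to src/library.cpp could
be exposed with a wrapper method in python/lammps.py as small as this,
assuming the ctypes handle and instance pointer attributes are named
lib and lmp as in that file:
</P>
<PRE>def count_bonds(self):
  # forward the call to the shared LAMMPS library
  return self.lib.lammps_count_bonds(self.lmp)
</PRE>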
@@ -605,7 +574,7 @@
<HR>
<A NAME = "py_7"></A><H4>11.7 Example Python scripts that use LAMMPS
<A NAME = "py_6"></A><H4>11.6 Example Python scripts that use LAMMPS
</H4>
<P>These are the Python scripts included as demos in the python/examples
directory of the LAMMPS distribution, to illustrate the kinds of

View File

@@ -11,174 +11,152 @@
This section describes how to build and use LAMMPS via a Python
interface.
11.1 "Extending Python with a serial version of LAMMPS"_#py_1
11.2 "Creating a shared MPI library"_#py_2
11.3 "Extending Python with a parallel version of LAMMPS"_#py_3
11.4 "Extending Python with MPI"_#py_4
11.5 "Testing the Python-LAMMPS interface"_#py_5
11.6 "Using LAMMPS from Python"_#py_6
11.7 "Example Python scripts that use LAMMPS"_#py_7 :ul
11.1 "Setting necessary environment variables"_#py_1
11.2 "Building LAMMPS as a shared library"_#py_2
11.3 "Extending Python with MPI to run in parallel"_#py_3
11.4 "Testing the Python-LAMMPS interface"_#py_4
11.5 "Using LAMMPS from Python"_#py_5
11.6 "Example Python scripts that use LAMMPS"_#py_6 :ul
The LAMMPS distribution includes some Python code in its python
directory which wraps the library interface to LAMMPS. This makes it
possible to run LAMMPS, invoke LAMMPS commands or give it an input
script, extract LAMMPS results, and modify internal LAMMPS variables,
either from a Python script or interactively from a Python prompt.
The LAMMPS distribution includes the file python/lammps.py which wraps
the library interface to LAMMPS. This file makes it possible to
run LAMMPS, invoke LAMMPS commands or give it an input script, extract
LAMMPS results, and modify internal LAMMPS variables, either from a
Python script or interactively from a Python prompt. You can do the
former in serial or parallel. Running Python interactively in
parallel does not generally work, unless you have a package installed
that extends your Python to enable multiple instances of Python to
read what you type.
"Python"_http://www.python.org is a powerful scripting and programming
language which can be used to wrap software like LAMMPS and other
packages. It can be used to glue multiple pieces of software
together, e.g. to run a coupled or multiscale model. See "this
together, e.g. to run a coupled or multiscale model. See "Section
section"_Section_howto.html#howto_10 of the manual and the couple
directory of the distribution for more ideas about coupling LAMMPS to
other codes. See "Section_start 4"_Section_start.html#start_5 about
how to build LAMMPS as a library, and "this
section"_Section_howto.html#howto_19 for a description of the library
how to build LAMMPS as a library, and "Section_howto
19"_Section_howto.html#howto_19 for a description of the library
interface provided in src/library.cpp and src/library.h and how to
extend it for your needs. As described below, that interface is what
is exposed to Python. It is designed to be easy to add functions to.
This has the effect of extending the Python interface as well. See
details below.
This can easily extend the Python interface as well. See details
below.
By using the Python interface LAMMPS can also be coupled with a GUI or
visualization tools that display graphs or animations in real time as
LAMMPS runs. Examples of such scripts are included in the python
directory.
By using the Python interface, LAMMPS can also be coupled with a GUI
or other visualization tools that display graphs or animations in real
time as LAMMPS runs. Examples of such scripts are included in the
python directory.
Two advantages of using Python are how concise the language is and
Two advantages of using Python are how concise the language is, and
that it can be run interactively, enabling rapid development and
debugging of programs. If you use it to mostly invoke costly
operations within LAMMPS, such as running a simulation for a
reasonable number of timesteps, then the overhead cost of invoking
LAMMPS thru Python will be negligible.
Before using LAMMPS from a Python script, the Python on your machine
must be "extended" to include an interface to the LAMMPS library. If
your Python script will invoke MPI operations, you will also need to
extend your Python with an interface to MPI itself.
Before using LAMMPS from a Python script, you have to do two things.
You need to set two environment variables. And you need to build
LAMMPS as a dynamic shared library, so it can be loaded by Python.
Both these steps are discussed below. If you wish to run LAMMPS in
parallel from Python, you also need to extend your Python with MPI.
This is also discussed below.
Thus you should first decide how you intend to use LAMMPS from Python.
There are 3 options:
(1) Use LAMMPS on a single processor running Python.
(2) Use LAMMPS in parallel, where each processor runs Python, but your
Python program does not use MPI.
(3) Use LAMMPS in parallel, where each processor runs Python, and your
Python script also makes MPI calls through a Python/MPI interface.
Note that for (2) and (3) you will not be able to use Python
interactively by typing commands and getting a response. This is
because you will have multiple instances of Python running (e.g. on a
parallel machine) and they cannot all read what you type.
Working in mode (1) does not require your machine to have MPI
installed. You should extend your Python with a serial version of
LAMMPS and the dummy MPI library provided with LAMMPS. See
instructions below on how to do this.
Working in mode (2) requires your machine to have an MPI library
installed, but your Python does not need to be extended with MPI
itself. The MPI library must be a shared library (e.g. a *.so file on
Linux) which is not typically created when MPI is built/installed.
See instructions below on how to do this. You should extend your
Python with a parallel version of LAMMPS which will use the
shared MPI system library. See instructions below on how to do this.
Working in mode (3) requires your machine to have MPI installed (as a
shared library as in (2)). You must also extend your Python with a
parallel version of LAMMPS (same as in (2)) and with MPI itself, via
one of several available Python/MPI packages. See instructions below
on how to do the latter task.
Several of the following sub-sections cover the rest of the Python
setup discussion. The next to last sub-section describes the Python
syntax used to invoke LAMMPS. The last sub-section describes example
Python scripts included in the python directory.
Before proceeding, there are 2 items to note.
(1) The provided Python wrapper for LAMMPS uses the amazing and
magical (to me) "ctypes" package in Python, which auto-generates the
interface code needed between Python and a set of C interface routines
for a library. Ctypes is part of standard Python for versions 2.5 and
later. You can check which version of Python you have installed by
simply typing "python" at a shell prompt.
(2) Any library wrapped by Python, including LAMMPS, must be built as
a shared library (e.g. a *.so file on Linux and not a *.a file). The
python/setup_serial.py and setup.py scripts do this build for LAMMPS
itself (described below). But if you have LAMMPS configured to use
additional packages that have their own libraries, then those
libraries must also be shared libraries. E.g. MPI, FFTW, or any of
the libraries in lammps/lib. When you build LAMMPS as a stand-alone
code, you are not building shared versions of these libraries.
The discussion below describes how to create a shared MPI library. I
suggest you start by configuring LAMMPS without packages installed that
require any libraries besides MPI. See "this
section"_Section_start.html#start_3 of the manual for a discussion of
LAMMPS packages. E.g. do not use the KSPACE, GPU, MEAM, POEMS, or
REAX packages.
If you successfully follow the steps below to build the Python
wrappers and use this version of LAMMPS through Python, you can then
take the next step of adding LAMMPS packages that use additional
libraries. This will require you to build a shared library for that
package's library, similar to what is described below for MPI. It
will also require you to edit the python/setup_serial.py or setup.py
scripts to enable Python to access those libraries when it builds the
LAMMPS wrapper.
The Python wrapper for LAMMPS uses the amazing and magical (to me)
"ctypes" package in Python, which auto-generates the interface code
needed between Python and a set of C interface routines for a library.
Ctypes is part of standard Python for versions 2.5 and later. You can
check which version of Python you have installed by simply typing
"python" at a shell prompt.
:line
:line
11.1 Extending Python with a serial version of LAMMPS :link(py_1),h4
11.1 Setting necessary environment variables :link(py_1),h4
From the python directory in the LAMMPS distribution, type
For Python to use the LAMMPS interface, it needs to find two files.
The paths to these files need to be added to two environment variables
that Python checks.
python setup_serial.py build :pre
The first is the environment variable PYTHONPATH. It needs
to include the directory where the python/lammps.py file is.
and then one of these commands:
For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
sudo python setup_serial.py install
python setup_serial.py install --home=~/foo :pre
setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python :pre
The "build" command should compile all the needed LAMMPS files,
including its dummy MPI library. The first "install" command will put
the needed files in your Python's site-packages sub-directory, so that
Python can load them. For example, if you installed Python yourself
on a Linux machine, it would typically be somewhere like
/usr/local/lib/python2.5/site-packages. Installing Python packages
this way often requires you to be able to write to the Python
directories, which may require root privileges, hence the "sudo"
prefix. If this is not the case, you can drop the "sudo". If you use
the "sudo" prefix and you have installed Python yourself, you should
make sure that root uses the same Python as the one you did the
"install" in. E.g. these 2 commands may do the install in different
Python versions:
The second is the environment variable LD_LIBRARY_PATH, which is used
by the operating system to find dynamic shared libraries when it loads
them. It needs to include the directory where the shared LAMMPS
library will be. Normally this is the LAMMPS src dir, as explained in
the following section.
python setup_serial.py install --home=~/foo
python /usr/local/bin/python/setup_serial.py install --home=~/foo :pre
For the csh or tcsh shells, you could add something like this to your
~/.cshrc file:
Alternatively, you can install the LAMMPS files (or any other Python
packages) in your own user space. The second "install" command does
this, where you should replace "foo" with your directory of choice.
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src :pre
If these commands are successful, a {lammps.py} and
{_lammps_serial.so} file will be put in the appropriate directory.
As discussed below, if your LAMMPS build includes auxiliary libraries,
they must also be available as shared libraries for Python to
successfully load LAMMPS. If they are not in default places where the
operating system can find them, then you also have to add their paths
to the LD_LIBRARY_PATH environment variable.
For example, if you are using the dummy MPI library provided in
src/STUBS, you need to add something like this to your ~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src/STUBS :pre
If you are using the LAMMPS USER-ATC package, you need to add
something like this to your ~/.cshrc file:
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/lib/atc :pre
:line
11.2 Creating a shared MPI library :link(py_2),h4
11.2 Building LAMMPS as a shared library :link(py_2),h4
A shared library is one that is dynamically loadable, which is what
Python requires. On Linux this is a library file that ends in ".so",
not ".a". Such a shared library is normally not built if you
installed MPI yourself, but it is easy to do. Here is how to do it
for "MPICH"_mpich, a popular open-source version of MPI, distributed
by Argonne National Labs. From within the mpich directory, type
Instructions on how to build LAMMPS as a shared library are given in
"Section_start 5"_Section_start.html#start_5. A shared library is one
that is dynamically loadable, which is what Python requires. On Linux
this is a library file that ends in ".so", not ".a".
From the src directory, type
make makeshlib
make -f Makefile.shlib foo :pre
where foo is the machine target name, such as linux or g++ or serial.
This should create the file liblmp_foo.so in the src directory, as
well as a soft link liblmp.so which is what the Python wrapper will
load by default. If you are building multiple machine versions of the
shared library, the soft link is always set to the most recently built
version.
Note that as discussed below, a LAMMPS build may depend on several
auxiliary libraries, which are specified in your low-level
src/Makefile.foo file. For example, an MPI library, the FFTW library,
a JPEG library, etc. Depending on what LAMMPS packages you have
installed, the build may also require additional libraries from the
lib directories, such as lib/atc/libatc.so or lib/reax/libreax.so.
You must insure that each of these libraries exist in shared library
form (*.so file for Linux systems), or either the LAMMPS shared
library build or the Python load of the library will fail. For the
load to be successful all the shared libraries must also be in
directories that the operating system checks. See the discussion in
the preceding section about the LD_LIBRARY_PATH environment variable
for how to insure this.
Note that some system libraries, such as MPI, if you installed it
yourself, may not be built by default as shared libraries. The build
instructions for the library should tell you how to do this.
For example, here is how to build and install the "MPICH
library"_mpich, a popular open-source version of MPI, distributed by
Argonne National Labs, as a shared library in the default
/usr/local/lib location:
:link(mpich,http://www-unix.mcs.anl.gov/mpi)
@@ -186,61 +164,25 @@ by Argonne National Labs. From within the mpich directory, type
make
make install :pre
You may need to use "sudo make install" in place of the last line.
The end result should be the file libmpich.so in /usr/local/lib.
You may need to use "sudo make install" in place of the last line if
you do not have write privileges for /usr/local/lib. The end result
should be the file /usr/local/lib/libmpich.so.
IMPORTANT NOTE: If the file libmpich.a already exists in your
installation directory (e.g. /usr/local/lib), you will now have both a
static and shared MPI library. This will be fine for running LAMMPS
from Python since it only uses the shared library. But if you now try
to build LAMMPS by itself as a stand-alone program (cd lammps/src;
make foo) or build other codes that expect to link against libmpich.a,
then those builds may fail if the linker uses libmpich.so instead. If
this happens, it means you will need to remove the file
/usr/local/lib/libmpich.so before building LAMMPS again as a
stand-alone code.
Note that not all of the auxiliary libraries provided with LAMMPS have
shared-library Makefiles in their lib directories. Typically this
simply requires a Makefile.foo that adds a -fPIC switch when files are
compiled and "-fPIC -shared" switches when the library is linked
with a C++ (or Fortran) compiler, as well as an output target that
ends in ".so", like libatc.so. As we or others create and contribute
these Makefiles, we will add them to the LAMMPS distribution.
:line
11.3 Extending Python with a parallel version of LAMMPS :link(py_3),h4
11.3 Extending Python with MPI to run in parallel :link(py_3),h4
From the python directory, type
python setup.py build :pre
and then one of these commands:
sudo python setup.py install
python setup.py install --home=~/foo :pre
The "build" command should compile all the needed LAMMPS C++ files,
which will require MPI to be installed on your system. This means it
must find both the header file mpi.h and a shared library file,
e.g. libmpich.so if the MPICH version of MPI is installed. See the
preceding section for how to create a shared library version of MPI if
it does not exist. You may need to adjust the "include_dirs" and
"library_dirs" and "libraries" fields in python/setup.py to
insure the Python build finds all the files it needs.
The first "install" command will put the needed files in your Python's
site-packages sub-directory, so that Python can load them. For
example, if you installed Python yourself on a Linux machine, it would
typically be somewhere like /usr/local/lib/python2.5/site-packages.
Installing Python packages this way often requires you to be able to
write to the Python directories, which may require root privileges,
hence the "sudo" prefix. If this is not the case, you can drop the
"sudo".
Alternatively, you can install the LAMMPS files (or any other Python
packages) in your own user space. The second "install" command does
this, where you should replace "foo" with your directory of choice.
If these commands are successful, a {lammps.py} and {_lammps.so} file
will be put in the appropriate directory.
:line
11.4 Extending Python with MPI :link(py_4),h4
If you wish to run LAMMPS in parallel from Python, you need to extend
your Python with an interface to MPI. This also allows you to
make MPI calls directly from Python in your script, if you desire.
There are several Python packages available that purport to wrap MPI
as a library and allow MPI functions to be called from Python.
@@ -256,26 +198,26 @@ These include
All of these except pyMPI work by wrapping the MPI library (which must
be available on your system as a shared library, as discussed above),
and exposing (some portion of) its interface to your Python script.
This means they cannot be used interactively in parallel, since they
This means Python cannot be used interactively in parallel, since they
do not address the issue of interactive input to multiple instances of
Python running on different processors. The one exception is pyMPI,
which alters the Python interpreter to address this issue, and (I
believe) creates a new alternate executable (in place of python
believe) creates a new alternate executable (in place of "python"
itself) as a result.
In principle any of these Python/MPI packages should work to invoke
both calls to LAMMPS and MPI itself from a Python script running in
parallel. However, when I downloaded and looked at a few of them,
their documentation was incomplete and I had trouble with their
installation. It's not clear if some of the packages are still being
actively developed and supported.
LAMMPS in parallel and MPI calls themselves from a Python script which
is itself running in parallel. However, when I downloaded and looked
at a few of them, their documentation was incomplete and I had trouble
with their installation. It's not clear if some of the packages are
still being actively developed and supported.
The one I recommend, since I have successfully used it with LAMMPS, is
Pypar. Pypar requires the ubiquitous "Numpy
package"_http://numpy.scipy.org be installed in your Python. After
launching python, type
>>> import numpy :pre
import numpy :pre
to see if it is installed. If not, here is how to install it (version
1.3.0b1 as of April 2009). Unpack the numpy tarball and from its
@@ -299,105 +241,123 @@ your Python distribution's site-packages directory.
If you have successfully installed Pypar, you should be able to run
python serially and type
>>> import pypar :pre
import pypar :pre
without error. You should also be able to run python in parallel
on a simple test script
% mpirun -np 4 python test.script :pre
% mpirun -np 4 python test.py :pre
where test.script contains the lines
where test.py contains the lines
import pypar
print "Proc %d out of %d procs" % (pypar.rank(),pypar.size()) :pre
and see one line of output for each processor you ran on.
and see one line of output for each processor you run on.
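Beyond rank() and size(), Pypar also provides simple send() and
receive() methods, so a slightly larger sketch (call signatures per
the Pypar docs; treat them as an assumption if your version differs)
can gather one message from each processor:
import pypar
me = pypar.rank()
if me == 0:
  for src in range(1,pypar.size()):
    print pypar.receive(src)
else:
  pypar.send("hello from proc %d" % me,0)
pypar.finalize() :pre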
:line
11.5 Testing the Python-LAMMPS interface :link(py_5),h4
11.4 Testing the Python-LAMMPS interface :link(py_4),h4
Before using LAMMPS in a Python program, one more step is needed. The
interface to LAMMPS is via the Python ctypes package, which loads the
shared LAMMPS library via a CDLL() call, which in turn is a wrapper on
the C-library dlopen(). This command is different than a normal
Python "import" and needs to be able to find the LAMMPS shared
library, which is either in the Python site-packages directory or in a
local directory you specified in the "python setup.py install"
command, as described above.
The simplest way to do this is to add a line like this to your
.cshrc or other shell start-up file.
setenv LD_LIBRARY_PATH
$\{LD_LIBRARY_PATH\}:/usr/local/lib/python2.5/site-packages :pre
and then execute the shell file to insure the path has been updated.
This will extend the path that dlopen() uses to look for shared
libraries.
To test if the serial LAMMPS library has been successfully installed
(mode 1 above), launch Python and type
To test if LAMMPS is callable from Python, launch Python interactively
and type:
>>> from lammps import lammps
>>> lmp = lammps() :pre
If you get no errors, you're ready to use LAMMPS from Python.
If the load fails, the most common error to see is
"CDLL: asdfasdfasdf"
which means Python was unable to load the LAMMPS shared library.  This
can occur if it can't find the LAMMPS library; see the environment
variable discussion "above"_#python_1.  Or if it can't find one of the
auxiliary libraries that was specified in the LAMMPS build, in a
shared dynamic library format.  This includes all libraries needed by
main LAMMPS (e.g. MPI or FFTW or JPEG), system libraries needed by
main LAMMPS (e.g. extra libs needed by MPI), or packages you have
installed that require libraries provided with LAMMPS (e.g. the
USER-ATC package requires lib/atc/libatc.so) or system libraries
(e.g. BLAS or Fortran-to-C libraries) listed in the
lib/package/Makefile.lammps file.  Again, all of these must be
available as shared libraries, or the Python load will fail.
Python (actually the operating system) isn't verbose about telling you
why the load failed, so go through the steps above and in
"Section_start 5"_Section_start.html#start_5 carefully.
[Test LAMMPS and Python in serial:] :h5
To run a LAMMPS test in serial, type these lines into Python
interactively from the bench directory:
>>> from lammps import lammps
>>> lmp = lammps()
>>> lmp.file("in.lj") :pre
Or put the same lines in the file test.py and run it as
% python test.py :pre
Either way, you should see the results of running the in.lj benchmark
on a single processor appear on the screen, the same as if you had
typed something like:
lmp_g++ < in.lj :pre
[Test LAMMPS and Python in parallel:] :h5
To run LAMMPS in parallel, assuming you have installed the
"Pypar"_http://datamining.anu.edu.au/~ole/pypar package as discussed
above, create a test.py file containing these lines:
import pypar
from lammps import lammps
lmp = lammps()
lmp.file("in.lj")
print "Proc %d out of %d procs has" % (pypar.rank(),pypar.size()),lmp
pypar.finalize() :pre
You can then run it in parallel as:
% mpirun -np 4 python test.py :pre
and you should see the same output as if you had typed
% mpirun -np 4 lmp_g++ < in.lj :pre
Note that if you leave out the 3 lines from test.py that specify Pypar
commands you will instantiate and run LAMMPS independently on each of
the P processors specified in the mpirun command.  In this case you
should get 4 sets of output, each showing that a run was made on a
single processor, instead of one set of output showing that it ran on
4 processors.  If the 1-processor outputs occur, it means that Pypar
is not working correctly.
Also note that once you import the PyPar module, Pypar initializes MPI
for you, and you can use MPI calls directly in your Python script, as
described in the Pypar documentation.  The last line of your Python
script should be pypar.finalize(), to insure MPI is shut down
correctly.
Note that any Python script (not just for LAMMPS) can be invoked in
one of several ways:
% python foo.script
% python -i foo.script
% foo.script :pre
The last command requires that the first line of the script be
something like this:
#!/usr/local/bin/python
#!/usr/local/bin/python -i :pre
where the path points to where you have Python installed, and that you
have made the script file executable:
% chmod +x foo.script :pre
Without the "-i" flag, Python will exit when the script finishes.
With the "-i" flag, you will be left in the Python interpreter when
@ -408,14 +368,15 @@ Python on a single processor, not in parallel.
:line
:line
11.5 Using LAMMPS from Python :link(py_5),h4
The Python interface to LAMMPS consists of a Python "lammps" module,
the source code for which is in python/lammps.py, which creates a
"lammps" object, with a set of methods that can be invoked on that
object. The sample Python code below assumes you have first imported
the "lammps" module in your Python script and its settings as
follows:
the "lammps" module in your Python script. You can also include its
settings as follows, which are useful in test return values from some
of the methods described below:
from lammps import lammps
from lammps import LMPINT as INT
@ -429,8 +390,10 @@ at the file src/library.cpp you will see that they correspond
one-to-one with calls you can make to the LAMMPS library from a C++ or
C or Fortran program.
lmp = lammps()           # create a LAMMPS object using the default liblmp.so library
lmp = lammps("g++")      # create a LAMMPS object using the liblmp_g++.so library
lmp = lammps("",list)    # ditto, with command-line args, list = \["-echo","screen"\]
lmp = lammps("g++",list) :pre
lmp.close() # destroy a LAMMPS object :pre
@ -438,16 +401,16 @@ lmp.file(file) # run an entire input script, file = "in.lj"
lmp.command(cmd) # invoke a single LAMMPS command, cmd = "run 100" :pre
xlo = lmp.extract_global(name,type) # extract a global quantity
# name = "boxxlo", "nlocal", etc
# name = "boxxlo", "nlocal", etc
# type = INT or DOUBLE :pre
coords = lmp.extract_atom(name,type) # extract a per-atom quantity
# name = "x", "type", etc
# name = "x", "type", etc
# type = IPTR or DPTR or DPTRPTR :pre
eng = lmp.extract_compute(id,style,type) # extract value(s) from a compute
v3 = lmp.extract_fix(id,style,type,i,j) # extract value(s) from a fix
# id = ID of compute or fix
# style = 0 = global data
# 1 = per-atom data
# 2 = local data
@ -468,11 +431,22 @@ lmp.put_coords(x) # set all atom coords via x :pre
:line
IMPORTANT NOTE: Currently, the creation of a LAMMPS object does not
take an MPI communicator as an argument.  There should be a way to do
this, so that the LAMMPS instance runs on a subset of processors if
desired, but I don't know how to do it from Pypar.  So for now, it
runs on MPI_COMM_WORLD, which is all the processors.  If someone
figures out how to do this with one or more of the Python wrappers for
MPI, like Pypar, please let us know and we will amend these doc pages.
Note that you can create multiple LAMMPS objects in your Python
script, and coordinate and run multiple simulations, e.g.
from lammps import lammps
lmp1 = lammps()
lmp2 = lammps()
lmp1.file("in.file1")
lmp2.file("in.file2") :pre
The file() and command() methods allow an input script or single
commands to be invoked.
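For example, a small driver that mixes these calls might look like the
following sketch (run it from the bench directory so in.lj is found;
the extra "run 100" command and the choice of "nlocal" are just
illustrations):
from lammps import lammps
from lammps import LMPINT as INT
lmp = lammps()
lmp.file("in.lj")                          # run an entire input script
lmp.command("run 100")                     # invoke one more command
nlocal = lmp.extract_global("nlocal",INT)  # extract a global quantity
print "atoms owned by this proc:",nlocal
lmp.close() :pre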
@ -583,15 +557,10 @@ following steps:
Add a new interface function to src/library.cpp and
src/library.h. :ulb,l
Rebuild LAMMPS as a shared library. :l
Add a wrapper method to python/lammps.py for this interface
function, as sketched below. :l
You should now be able to invoke the new interface function from a
Python script. Isn't ctypes amazing? :l,ule
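As a sketch of the wrapper step (hypothetical throughout: it assumes
you added a new function lammps_get_ntimestep() to src/library.cpp and
src/library.h), the method added to the lammps class in
python/lammps.py could be as simple as:
def get_ntimestep(self):                          # hypothetical wrapper
  self.lib.lammps_get_ntimestep.restype = c_int   # c_int comes from ctypes
  return self.lib.lammps_get_ntimestep(self.lmp) :pre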
@ -599,7 +568,7 @@ Python script. Isn't ctypes amazing? :l,ule
:line
:line
11.6 Example Python scripts that use LAMMPS :link(py_6),h4
These are the Python scripts included as demos in the python/examples
directory of the LAMMPS distribution, to illustrate the kinds of

View File

@ -44,7 +44,6 @@ sub-directories:
<TR><TD >README</TD><TD > text file</TD></TR>
<TR><TD >LICENSE</TD><TD > the GNU General Public License (GPL)</TD></TR>
<TR><TD >bench</TD><TD > benchmark problems</TD></TR>
<TR><TD >doc</TD><TD > documentation</TD></TR>
<TR><TD >examples</TD><TD > simple test problems</TD></TR>
<TR><TD >potentials</TD><TD > embedded atom method (EAM) potential files</TD></TR>
@ -774,39 +773,80 @@ input scripts.
<H4><A NAME = "start_5"></A>2.5 Building LAMMPS as a library
</H4>
<P>LAMMPS can be built as either a static or shared library, which can
then be called from another application or a scripting language.  See
<A HREF = "Section_howto.html#howto_10">this section</A> for more info on coupling
LAMMPS to other codes.  See <A HREF = "Section_python.html">this section</A> for
more info on wrapping and running LAMMPS from Python.
</P>
<P>To build LAMMPS as a static library (*.a file on Linux), type
</P>
<PRE>make makelib
make -f Makefile.lib foo
</PRE>
<P>where foo is the machine name.  This kind of library is typically used
to statically link a driver application to all of LAMMPS, so that you
can insure all dependencies are satisfied at compile time.  Note that
inclusion or exclusion of any desired optional packages should be done
before typing "make makelib".  The first "make" command will create a
current Makefile.lib with all the file names in your src dir.  The 2nd
"make" command will use it to build LAMMPS as a static library, using
the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo.  The build
will create the file liblmp_foo.a which another application can link
to.
</P>
<P>To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, type
</P>
<PRE>make makeshlib
make -f Makefile.shlib foo
</PRE>
<P>where foo is the machine name. This kind of library is required when
wrapping LAMMPS with Python; see <A HREF = "Section_python.html">Section_python</A>
for details. Again, note that inclusion or exclusion of any desired
optional packages should be done before typing "make makeshlib".  The
first "make" command will create a current Makefile.shlib with all the
file names in your src dir. The 2nd "make" command will use it to
build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS
settings in src/MAKE/Makefile.foo. The build will create the file
liblmp_foo.so which another application can link to dynamically, as
well as a soft link liblmp.so, which the Python wrapper uses by
default.
</P>
<P>Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries, and be find-able by the operating system. Else you will
get a run-time error when the shared library is loaded. For LAMMPS,
this includes all libraries needed by main LAMMPS (e.g. MPI or FFTW or
JPEG), system libraries needed by main LAMMPS (e.g. extra libs needed
by MPI), or packages you have installed that require libraries
provided with LAMMPS (e.g. the USER-ATC package requires
lib/atc/libatc.so) or system libraries (e.g. BLAS or Fortran-to-C
libraries) listed in the lib/package/Makefile.lammps file. See the
discussion about the LAMMPS shared library in
<A HREF = "Section_python.html">Section_python</A> for details about how to build
shared versions of these libraries, and how to insure the operating
system can find them, by setting the LD_LIBRARY_PATH environment
variable correctly.
</P>
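<P>A quick way to check that everything resolves is to attempt the
dlopen() yourself from Python, which is what the LAMMPS Python wrapper
will do.  This is a minimal sketch which assumes a machine name of
g++, so substitute your own suffix:
</P>
<PRE>import os
from ctypes import CDLL, RTLD_GLOBAL
print os.environ.get("LD_LIBRARY_PATH","")  # the extra dirs dlopen() will search
CDLL("liblmp_g++.so",RTLD_GLOBAL)           # raises OSError if a dependent .so is missing
</PRE>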
<P>Either flavor of library allows one or more LAMMPS objects to be
instantiated from the calling program.
</P>
<P>When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
within the calling code, as needed.
</P>
<P>When used from a C or Fortran program or a scripting language like
Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
</P>
<P>See the sample codes in examples/COUPLE/simple for examples of C++ and
C codes that invoke LAMMPS thru its library interface. There are
other examples as well in the COUPLE directory which are discussed in
<A HREF = "Section_howto.html#howto_10">Section_howto 10</A> of the manual. See
<A HREF = "Section_python.html">Section_python</A> of the manual for a description
of the Python wrapper provided with LAMMPS that operates through the
LAMMPS library interface.
</P>
<P>The files src/library.cpp and library.h contain the C-style interface
to LAMMPS. See <A HREF = "Section_howto.html#howto_19">Section_howto 19</A> of the

View File

@ -39,7 +39,6 @@ sub-directories:
README: text file
LICENSE: the GNU General Public License (GPL)
bench: benchmark problems
doc: documentation
examples: simple test problems
potentials: embedded atom method (EAM) potential files
@ -768,39 +767,80 @@ input scripts.
2.5 Building LAMMPS as a library :h4,link(start_5)
LAMMPS can be built as either a static or shared library, which can
then be called from another application or a scripting language.  See
"this section"_Section_howto.html#howto_10 for more info on coupling
LAMMPS to other codes.  See "this section"_Section_python.html for
more info on wrapping and running LAMMPS from Python.
To build LAMMPS as a static library (*.a file on Linux), type
make makelib
make -f Makefile.lib foo :pre
where foo is the machine name. This kind of library is typically used
to statically link a driver application to all of LAMMPS, so that you
can insure all dependencies are satisfied at compile time. Note that
inclusion or exclusion of any desired optional packages should be done
before typing "make makelib". The first "make" command will create a
current Makefile.lib with all the file names in your src dir. The 2nd
"make" command will use it to build LAMMPS as a static library, using
the ARCHIVE and ARFLAGS settings in src/MAKE/Makefile.foo. The build
will create the file liblmp_foo.a which another application can link
to.
To build LAMMPS as a shared library (*.so file on Linux), which can be
dynamically loaded, type
make makeshlib
make -f Makefile.shlib foo :pre
where foo is the machine name. This kind of library is required when
wrapping LAMMPS with Python; see "Section_python"_Section_python.html
for details. Again, note that inclusion or exclusion of any desired
optional packages should be done before typing "make makeshlib".  The
first "make" command will create a current Makefile.shlib with all the
file names in your src dir. The 2nd "make" command will use it to
build LAMMPS as a shared library, using the SHFLAGS and SHLIBFLAGS
settings in src/MAKE/Makefile.foo. The build will create the file
liblmp_foo.so which another application can link to dynamically, as
well as a soft link liblmp.so, which the Python wrapper uses by
default.
Note that for a shared library to be usable by a calling program, all
the auxiliary libraries it depends on must also exist as shared
libraries, and be find-able by the operating system. Else you will
get a run-time error when the shared library is loaded. For LAMMPS,
this includes all libraries needed by main LAMMPS (e.g. MPI or FFTW or
JPEG), system libraries needed by main LAMMPS (e.g. extra libs needed
by MPI), or packages you have installed that require libraries
provided with LAMMPS (e.g. the USER-ATC package requires
lib/atc/libatc.so) or system libraries (e.g. BLAS or Fortran-to-C
libraries) listed in the lib/package/Makefile.lammps file. See the
discussion about the LAMMPS shared library in
"Section_python"_Section_python.html for details about how to build
shared versions of these libraries, and how to insure the operating
system can find them, by setting the LD_LIBRARY_PATH environment
variable correctly.
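A quick way to check that everything resolves is to attempt the
dlopen() yourself from Python, which is what the LAMMPS Python wrapper
will do.  This is a minimal sketch which assumes a machine name of
g++, so substitute your own suffix:
import os
from ctypes import CDLL, RTLD_GLOBAL
print os.environ.get("LD_LIBRARY_PATH","")  # the extra dirs dlopen() will search
CDLL("liblmp_g++.so",RTLD_GLOBAL)           # raises OSError if a dependent .so is missing :pre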
Either flavor of library allows one or more LAMMPS objects to be
instantiated from the calling program.
When used from a C++ program, all of LAMMPS is wrapped in a LAMMPS_NS
namespace; you can safely use any of its classes and methods from
within the calling code, as needed.
When used from a C or Fortran program or a scripting language like
Python, the library has a simple function-style interface, provided in
src/library.cpp and src/library.h.
See the sample codes in examples/COUPLE/simple for examples of C++ and
C codes that invoke LAMMPS thru its library interface. There are
other examples as well in the COUPLE directory which are discussed in
"Section_howto 10"_Section_howto.html#howto_10 of the manual. See
"Section_python"_Section_python.html of the manual for a description
of the Python wrapper provided with LAMMPS that operates through the
LAMMPS library interface.
The files src/library.cpp and library.h contain the C-style interface
to LAMMPS. See "Section_howto 19"_Section_howto.html#howto_19 of the

View File

@ -21,15 +21,17 @@
<LI>input = one or more attributes
<PRE>  possible attributes = natom1 natom2 ntype1 ntype2
patom1 patom2 ptype1 ptype2
batom1 batom2 btype
aatom1 aatom2 aatom3 atype
datom1 datom2 datom3 dtype
iatom1 iatom2 iatom3 itype
</PRE>
<PRE> natom1, natom2 = IDs of 2 atoms in each pair (within neighbor cutoff)
ntype1, ntype2 = type of 2 atoms in each pair (within neighbor cutoff)
patom1, patom2 = IDs of 2 atoms in each pair (within force cutoff)
ptype1, ptype2 = type of 2 atoms in each pair (within force cutoff)
batom1, batom2 = IDs of 2 atoms in each bond
btype = bond type of each bond
aatom1, aatom2, aatom3 = IDs of 3 atoms in each angle
@ -91,7 +93,9 @@ local</A> command in a consistent way.
</P>
<P>The <I>natom1</I> and <I>natom2</I>, or <I>patom1</I> and <I>patom2</I> attributes refer
to the atom IDs of the 2 atoms in each pairwise interaction computed
by the <A HREF = "pair_style.html">pair_style</A> command.  The <I>ntype1</I> and
<I>ntype2</I>, or <I>ptype1</I> and <I>ptype2</I> attributes refer to the atom types
of the 2 atoms in each pairwise interaction.
</P>
<P>IMPORTANT NOTE: For pairs, if two atoms I,J are involved in 1-2, 1-3,
1-4 interactions within the molecular topology, their pairwise
@ -107,9 +111,11 @@ command.
atoms in each <A HREF = "bond_style.html">bond</A>. The <I>btype</I> attribute refers to
the type of the bond, from 1 to Nbtypes = # of bond types. The number
of bond types is defined in the data file read by the
<A HREF = "read_data.html">read_data</A> command. The attributes that start with
"a", "d", "i", refer to similar values for <A HREF = "angle_style.html">angles</A>,
<A HREF = "dihedral_style.html">dihedrals</A>, and <A HREF = "improper_style.html">impropers</A>.
<A HREF = "read_data.html">read_data</A> command.
</P>
<P>The attributes that start with "a", "d", "i", refer to similar values
for <A HREF = "angle_style.html">angles</A>, <A HREF = "dihedral_style.html">dihedrals</A>, and
<A HREF = "improper_style.html">impropers</A>.
</P>
<P><B>Output info:</B>
</P>

View File

@ -15,15 +15,17 @@ compute ID group-ID property/local input1 input2 ... :pre
ID, group-ID are documented in "compute"_compute.html command :ulb,l
property/local = style name of this compute command :l
input = one or more attributes :l
possible attributes = natom1 natom2 ntype1 ntype2
patom1 patom2 ptype1 ptype2
batom1 batom2 btype
aatom1 aatom2 aatom3 atype
datom1 datom2 datom3 dtype
iatom1 iatom2 iatom3 itype :pre
natom1, natom2 = IDs of 2 atoms in each pair (within neighbor cutoff)
ntype1, ntype2 = type of 2 atoms in each pair (within neighbor cutoff)
patom1, patom2 = IDs of 2 atoms in each pair (within force cutoff)
ptype1, ptype2 = type of 2 atoms in each pair (within force cutoff)
batom1, batom2 = IDs of 2 atoms in each bond
btype = bond type of each bond
aatom1, aatom2, aatom3 = IDs of 3 atoms in each angle
@ -84,7 +86,9 @@ local"_dump.html command in a consistent way.
The {natom1} and {natom2}, or {patom1} and {patom2} attributes refer
to the atom IDs of the 2 atoms in each pairwise interaction computed
by the "pair_style"_pair_style.html command.
by the "pair_style"_pair_style.html command. The {ntype1} and
{ntype2}, or {ptype1} and {ptype2} attributes refer to the atom types
of the 2 atoms in each pairwise interaction.
IMPORTANT NOTE: For pairs, if two atoms I,J are involved in 1-2, 1-3,
1-4 interactions within the molecular topology, their pairwise
@ -100,9 +104,11 @@ The {batom1} and {batom2} attributes refer to the atom IDs of the 2
atoms in each "bond"_bond_style.html. The {btype} attribute refers to
the type of the bond, from 1 to Nbtypes = # of bond types. The number
of bond types is defined in the data file read by the
"read_data"_read_data.html command. The attributes that start with
"a", "d", "i", refer to similar values for "angles"_angle_style.html,
"dihedrals"_dihedral_style.html, and "impropers"_improper_style.html.
"read_data"_read_data.html command.
The attributes that start with "a", "d", "i", refer to similar values
for "angles"_angle_style.html, "dihedrals"_dihedral_style.html, and
"impropers"_improper_style.html.
[Output info:]

View File

@ -8,8 +8,8 @@ model a realistic problem.
See these sections of the LAMMPS manual for details:
2.5 Building LAMMPS as a library (doc/Section_start.html#start_5)
6.10 Coupling LAMMPS to other codes (doc/Section_howto.html#howto_10)
In all of the examples included here, LAMMPS must first be built as a
library. Basically, you type something like

View File

@ -0,0 +1,114 @@
units lj
dimension 2
atom_style atomic
read_data data.lammps
mass * 1.0
pair_style lj/cut 2.5
pair_coeff * * 1.0 1.2
pair_coeff 1 1 1.0 1.0
pair_coeff 2 2 1.0 1.0
pair_coeff 3 3 1.0 1.0
pair_coeff 4 4 1.0 1.0
pair_coeff 5 5 1.0 1.0
pair_coeff 6 6 1.0 1.0
pair_coeff 7 7 1.0 1.0
pair_coeff 8 8 1.0 1.0
pair_coeff 9 9 1.0 1.0
pair_coeff 10 10 1.0 1.0
pair_coeff 11 11 1.0 1.0
pair_coeff 12 12 1.0 1.0
pair_coeff 13 13 1.0 1.0
pair_coeff 14 14 1.0 1.0
pair_coeff 15 15 1.0 1.0
pair_coeff 16 16 1.0 1.0
pair_coeff 17 17 1.0 1.0
pair_coeff 18 18 1.0 1.0
pair_coeff 19 19 1.0 1.0
pair_coeff 20 20 1.0 1.0
pair_coeff 21 21 1.0 1.0
pair_coeff 22 22 1.0 1.0
pair_coeff 23 23 1.0 1.0
pair_coeff 24 24 1.0 1.0
pair_coeff 25 25 1.0 1.0
pair_coeff 26 26 1.0 1.0
pair_coeff 27 27 1.0 1.0
pair_coeff 28 28 1.0 1.0
pair_coeff 29 29 1.0 1.0
pair_coeff 30 30 1.0 1.0
pair_coeff 31 31 1.0 1.0
pair_coeff 32 32 1.0 1.0
pair_coeff 33 33 1.0 1.0
pair_coeff 34 34 1.0 1.0
pair_coeff 35 35 1.0 1.0
pair_coeff 36 36 1.0 1.0
pair_coeff 37 37 1.0 1.0
pair_coeff 38 38 1.0 1.0
pair_coeff 39 39 1.0 1.0
pair_coeff 40 40 1.0 1.0
pair_coeff 41 41 1.0 1.0
pair_coeff 42 42 1.0 1.0
pair_coeff 43 43 1.0 1.0
pair_coeff 44 44 1.0 1.0
pair_coeff 45 45 1.0 1.0
pair_coeff 46 46 1.0 1.0
pair_coeff 47 47 1.0 1.0
pair_coeff 48 48 1.0 1.0
pair_coeff 49 49 1.0 1.0
pair_coeff 50 50 1.0 1.0
pair_coeff 51 51 1.0 1.0
pair_coeff 52 52 1.0 1.0
pair_coeff 53 53 1.0 1.0
pair_coeff 54 54 1.0 1.0
pair_coeff 55 55 1.0 1.0
pair_coeff 56 56 1.0 1.0
pair_coeff 57 57 1.0 1.0
pair_coeff 58 58 1.0 1.0
pair_coeff 59 59 1.0 1.0
pair_coeff 60 60 1.0 1.0
pair_coeff 61 61 1.0 1.0
pair_coeff 62 62 1.0 1.0
pair_coeff 63 63 1.0 1.0
pair_coeff 64 64 1.0 1.0
pair_coeff 65 65 1.0 1.0
pair_coeff 66 66 1.0 1.0
pair_coeff 67 67 1.0 1.0
pair_coeff 68 68 1.0 1.0
pair_coeff 69 69 1.0 1.0
pair_coeff 70 70 1.0 1.0
pair_coeff 71 71 1.0 1.0
pair_coeff 72 72 1.0 1.0
pair_coeff 73 73 1.0 1.0
pair_coeff 74 74 1.0 1.0
pair_coeff 75 75 1.0 1.0
pair_coeff 76 76 1.0 1.0
pair_coeff 77 77 1.0 1.0
pair_coeff 78 78 1.0 1.0
pair_coeff 79 79 1.0 1.0
pair_coeff 80 80 1.0 1.0
pair_coeff 81 81 1.0 1.0
pair_coeff 82 82 1.0 1.0
pair_coeff 83 83 1.0 1.0
pair_coeff 84 84 1.0 1.0
pair_coeff 85 85 1.0 1.0
pair_coeff 86 86 1.0 1.0
pair_coeff 87 87 1.0 1.0
pair_coeff 88 88 1.0 1.0
pair_coeff 89 89 1.0 1.0
pair_coeff 90 90 1.0 1.0
pair_coeff 91 91 1.0 1.0
pair_coeff 92 92 1.0 1.0
pair_coeff 93 93 1.0 1.0
pair_coeff 94 94 1.0 1.0
pair_coeff 95 95 1.0 1.0
pair_coeff 96 96 1.0 1.0
pair_coeff 97 97 1.0 1.0
pair_coeff 98 98 1.0 1.0
pair_coeff 99 99 1.0 1.0
pair_coeff 100 100 1.0 1.0
compute da all displace/atom
dump 1 all atom 10 dump.md
thermo 1

View File

@ -26,13 +26,13 @@ This builds the C++ driver with the LAMMPS library using a C++ compiler:
g++ -I/home/sjplimp/lammps/src -c simple.cpp
g++ -L/home/sjplimp/lammps/src simple.o \
-llmp_g++ -lfftw -lmpich -lmpl -lpthread -o simpleCC
This builds the C driver with the LAMMPS library using a C compiler:
gcc -I/home/sjplimp/lammps/src -c simple.c
gcc -L/home/sjplimp/lammps/src simple.o \
-llmp_g++ -lfftw -lmpich -lmpl -lpthread -lstdc++ -o simpleC
This builds the Fortran wrapper and driver with the LAMMPS library
using a Fortran and C compiler:

View File

@ -1,12 +1,26 @@
LAMMPS example problems
There are 3 flavors of sub-directories in this directory, each with sample
problems you can run with LAMMPS.
lower-case directories = simple test problems for LAMMPS and its packages
upper-case directories = more complex problems
USER directory with its own sub-directories = tests for USER packages
Each is discussed below.
------------------------------------------
Lower-case directories
Each of these sub-directories contains a sample problem you can run
with LAMMPS. Most are 2d models so that they run quickly, requiring a
few seconds to a few minutes to run on a desktop machine. Each
problem has an input script (in.*) and produces a log file (log.*) and
(optionally) a dump file (dump.*) or image files (image.*) when it
runs. Some use a data file (data.*) of initial coordinates as
additional input.  Some require that you install one or more optional
LAMMPS packages.
A few sample log file outputs on different machines and different
numbers of processors are included in the directories to compare your
@ -77,12 +91,22 @@ create a GIF file suitable for viewing in a browser.
------------------------------------------
Upper-case directories
The COUPLE directory has examples of how to use LAMMPS as a library,
either by itself or in tandem with another code or library. See the
COUPLE/README file to get started.
The ELASTIC directory has an example script for computing elastic
constants, using a zero temperature Si example. See the
ELASTIC/in.elastic file for more info.
------------------------------------------
USER directory
The USER directory contains subdirectories of user-provided example
scripts for user packages.  See the README files in those directories
for more info. See the doc/Section_start.html file for more info
about installing and building user packages.

View File

@ -1,26 +1,48 @@
This directory contains Python code which wraps LAMMPS as a library
and allows the LAMMPS library interface to be invoked from Python,
either from a script or interactively.
Details on the Python interface to LAMMPS and how to build LAMMPS as a
shared library for use with Python are given in
doc/Section_python.html.
Basically you need to follow these 3 steps:
a) Add paths to environment variables in your shell script
For example, for csh or tcsh, add something like this to ~/.cshrc:
setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src/STUBS
The latter is only necessary if you will use the MPI stubs library
instead of an MPI installed on your machine.
b) Build LAMMPS as a dynamic library, including dynamic versions of
any libraries it includes for the packages you have installed,
e.g. STUBS, MPI, FFTW, JPEG, package libs.
From the src directory:
% make makeshlib
% make -f Makefile.shlib g++
If successful, this results in the file src/liblmp_g++.so
c) Launch Python and import the LAMMPS wrapper
% python
>>> from lammps import lammps
>>> lmp = lammps()
If that gives no errors, you have successfully wrapped LAMMPS with
Python.
-------------------------------------------------------------------
Once you have successfully wrapped LAMMPS, you can run the Python
scripts in the examples sub-directory:
trivial.py read/run a LAMMPS input script thru Python
demo.py invoke various LAMMPS library interface routines

View File

@ -26,28 +26,26 @@ LMPDPTRPTR = 4
LOCATION = os.path.dirname(__file__)
class lammps:
def __init__(self,name="",cmdlineargs=None):
# load liblmp.so by default
# if name = "g++", load liblmp_g++.so
try:
if not name: self.lib = CDLL("liblmp.so")
else: self.lib = CDLL("liblmp_%s.so" % name)
except:
raise OSError,"Could not load LAMMPS dynamic library"
# create an instance of LAMMPS
# don't know how to pass an MPI communicator from PyPar
# no_mpi call lets LAMMPS use MPI_COMM_WORLD
# cargs = array of C strings from args
if cmdlineargs:
cmdlineargs.insert(0,"lammps.py")
narg = len(cmdlineargs)
cargs = (c_char_p*narg)(*cmdlineargs)
self.lmp = c_void_p()
self.lib.lammps_open_no_mpi(narg,cargs,byref(self.lmp))
else:

View File

@ -1,39 +0,0 @@
#!/usr/local/bin/python
"""
setup.py file for LAMMPS with system MPICH library
"""
from distutils.core import setup, Extension
import os, glob
path = os.path.dirname(os.getcwd())
# list of src files for LAMMPS
libfiles = glob.glob("%s/src/*.cpp" % path)
lammps_library = Extension("_lammps",
sources = libfiles,
define_macros = [("MPICH_IGNORE_CXX_SEEK",1),
("LAMMPS_GZIP",1),
("FFT_NONE",1),],
# src files for LAMMPS
include_dirs = ["../src"],
# additional libs for MPICH on Linux
libraries = ["mpich","mpl","pthread"],
# where to find the MPICH lib on Linux
library_dirs = ["/usr/local/lib"],
# additional libs for MPI on Mac
# libraries = ["mpi"],
)
setup(name = "lammps",
version = "28Nov11",
author = "Steve Plimpton",
author_email = "sjplimp@sandia.gov",
url = "http://lammps.sandia.gov",
description = """LAMMPS molecular dynamics library - parallel""",
py_modules = ["lammps"],
ext_modules = [lammps_library]
)

View File

@ -1,34 +0,0 @@
#!/usr/local/bin/python
"""
setup_serial.py file for LAMMPS with dummy serial MPI library
"""
from distutils.core import setup, Extension
import os, glob
path = os.path.dirname(os.getcwd())
# list of src files for LAMMPS and MPI STUBS
libfiles = glob.glob("%s/src/*.cpp" % path) + \
glob.glob("%s/src/STUBS/*.c" % path)
lammps_library = Extension("_lammps_serial",
sources = libfiles,
define_macros = [("MPICH_IGNORE_CXX_SEEK",1),
("LAMMPS_GZIP",1),
("FFT_NONE",1),],
# src files for LAMMPS and MPI STUBS
include_dirs = ["../src", "../src/STUBS"]
)
setup(name = "lammps_serial",
version = "28Nov11",
author = "Steve Plimpton",
author_email = "sjplimp@sandia.gov",
url = "http://lammps.sandia.gov",
description = """LAMMPS molecular dynamics library - serial""",
py_modules = ["lammps"],
ext_modules = [lammps_library]
)

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = icc
CCFLAGS = -O2
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O2
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -9,7 +9,9 @@ SHELL = /bin/sh
CC = /opt/ibmcmp/vacpp/7.0/bin/blrts_xlC
CCFLAGS = -O3
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = /opt/ibmcmp/vacpp/7.0/bin/blrts_xlC
LINKFLAGS = -O \
-L/opt/ibmcmp/xlf/9.1/blrts_lib \
@ -18,9 +20,11 @@ LINKFLAGS = -O \
-L/bgl/local/bglfftwgel-2.1.5.pre5/lib
LIB = -lxlopt -lxlomp_ser -lxl -lxlfmath -lm \
-lmsglayer.rts -lrts.rts -ldevices.rts -lmassv
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -82,15 +86,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = mpicxx
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpicxx
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = mpicxx
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpicxx
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = c++
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = c++
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -34,8 +38,8 @@ LMP_INC = -DLAMMPS_GZIP
# LIB = name of MPI library
MPI_INC = -I../STUBS
MPI_PATH =
MPI_LIB = ../STUBS/libmpi.a
MPI_PATH = -L../STUBS
MPI_LIB = -lmpi_stubs
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = g++
CCFLAGS = -g -O # -Wunused
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -g -O
LIB =
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = g++
CCFLAGS = -g -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -g -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -25,13 +25,17 @@ SHELL = /bin/sh
CC = mpicxx
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpicxx
LINKFLAGS = -O
LIB = -lstdc++ -lm
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -93,15 +97,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CXX = CC
CCFLAGS = -g -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = $(CXX)
LINKFLAGS = -g -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,18 +80,22 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CXX) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CXX) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
# Individual dependencies

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = mpic++
CCFLAGS = -O3
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpic++
LINKFLAGS = -O3
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = icc
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = icc
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = c++
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = c++
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -34,8 +38,8 @@ LMP_INC = -DLAMMPS_GZIP
# LIB = name of MPI library
MPI_INC = -I../STUBS
MPI_PATH =
MPI_LIB = ../STUBS/libmpi.a
MPI_PATH = -L../STUBS
MPI_LIB = -lmpi_stubs
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -9,13 +9,17 @@ SHELL = /bin/sh
CC = ${MPI_GCC46_PATH}/mpic++
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = ${MPI_GCC46_PATH}/mpic++
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -79,15 +83,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -9,13 +9,17 @@ SHELL = /bin/sh
CC = i686-pc-mingw32-g++
CCFLAGS = -O3 -march=i686 -mtune=generic -mfpmath=387 -mpc64 \
-ffast-math -funroll-loops -fstrict-aliasing -Wall -W -Wno-uninitialized
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = i686-pc-mingw32-g++
LINKFLAGS = -O
LIB = -lwsock32 # -lwsock32 is needed for USER-IMD which uses tcp/ip sockets.
SIZE = i686-pc-mingw32-size
ARCHIVE = ar
ARFLAGS = -rcsv
SIZE = i686-pc-mingw32-size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -34,9 +38,9 @@ LMP_INC = -DLAMMPS_XDR # -DLAMMPS_GZIP -DMALLOC_MEMALIGN=64
# PATH = path for MPI library
# LIB = name of MPI library
MPI_INC = -I../STUBS
MPI_INC = -I../STUBS
MPI_PATH =
MPI_LIB = mpi.o
MPI_LIB = mpi.o
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)

View File

@ -14,13 +14,17 @@ SHELL = /bin/sh
CC = mpiicc
CCFLAGS = -O3 -fno-alias -ip -unroll0
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpiicc
LINKFLAGS = -O -L/opt/intel/mkl/10.0.011/lib/em64t
LIB = -lstdc++ -lpthread -lmkl_em64t -lguide
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -82,15 +86,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = g++
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK =		g++
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -10,13 +10,17 @@ CC = mpic++
CCFLAGS = -O2 -fomit-frame-pointer -fno-rtti -fno-exceptions -g \
-march=native -ffast-math -mpc64 -finline-functions \
-funroll-loops -fstrict-aliasing -Wall -W -Wno-uninitialized
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpic++
LINKFLAGS = -O -g -fno-rtti -fno-exceptions -mpc64
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rcsv
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -78,15 +82,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = pgCC
CCFLAGS = -fast
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = pgCC
LINKFLAGS =
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -9,13 +9,17 @@ SHELL = /bin/sh
CC = mpCC_r
CCFLAGS = -O3 -qnoipa -qlanglvl=oldmath
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpCC_r
LINKFLAGS = -O -qnoipa -qlanglvl=oldmath -bmaxdata:0x70000000
LIB = -lm
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -77,15 +81,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = mpiCC
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpiCC
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -36,13 +36,17 @@ SHELL = /bin/sh
CC = mpic++
CCFLAGS = -O2 -xsse4.2 -funroll-loops -fstrict-aliasing
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpic++
LINKFLAGS = -O -xsse4.2
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rcsv
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -104,15 +108,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -10,16 +10,20 @@ SHELL = /bin/sh
CC = blrts_xlC
CCFLAGS = -I/bgl/BlueLight/ppcfloor/bglsys/include \
-O2 -qarch=440 -qtune=440
SHFLAGS = -fPIC
DEPFLAGS = -M -qmakedep=gcc
LINK = blrts_xlC
LINKFLAGS = -O \
-L/bgl/BlueLight/ppcfloor/bglsys/lib \
-L/opt/ibmcmp/xlf/bg/10.1/blrts_lib \
-L/opt/ibmcmp/vacpp/bg/8.0/blrts_lib
LIB = -lm
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -81,15 +85,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -9,13 +9,17 @@ SHELL = /bin/sh
CC = mpCC_r
CCFLAGS = -O2 -qnoipa
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpCC_r
LINKFLAGS = -O -L/usr/lib
LIB = -lm
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@ -77,15 +81,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = g++
CCFLAGS = -O -g
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -O -g
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -34,8 +38,8 @@ LMP_INC = #-DLAMMPS_GZIP -DMALLOC_MEMALIGN=64
# LIB = name of MPI library
MPI_INC = -I../STUBS
MPI_PATH =
MPI_LIB = ../STUBS/libmpi.a
MPI_PATH = -L../STUBS
MPI_LIB = -lmpi_stubs
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
@@ -44,7 +48,7 @@ MPI_LIB = ../STUBS/libmpi.a
# PATH = path for FFT library
# LIB = name of FFT library
FFT_INC =
FFT_INC =
FFT_PATH =
FFT_LIB = -lfftw3f
@@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
View File
@@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = g++
CCFLAGS = -O0 -g -Wall -W -fstrict-aliasing
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = g++
LINKFLAGS = -O0 -g
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rcsv
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -34,8 +38,8 @@ LMP_INC = -DLAMMPS_GZIP
# LIB = name of MPI library
MPI_INC = -I../STUBS
MPI_PATH =
MPI_LIB = ../STUBS/libmpi.a
MPI_PATH = -L../STUBS
MPI_LIB = -lmpi_stubs
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
@@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
View File
@@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = CC
CCFLAGS = -64 -O -mp
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = CC
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
View File
@@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = c++
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = c++
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -34,8 +38,8 @@ LMP_INC = -DLAMMPS_GZIP
# LIB = name of MPI library
MPI_INC = -I../STUBS
MPI_PATH =
MPI_LIB = ../STUBS/libmpi.a
MPI_PATH = -L../STUBS
MPI_LIB = -lmpi_stubs
# FFT library, OPTIONAL
# see discussion in doc/Section_start.html#2_2 (step 6)
@@ -76,15 +80,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
View File
@@ -14,13 +14,17 @@ SHELL = /bin/sh
CC = mpicxx
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpicxx
LINKFLAGS = -O
LIB = -lm
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -84,15 +88,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
View File
@@ -9,13 +9,17 @@ SHELL = /bin/sh
CC = CC
CCFLAGS = -fastsse
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = CC
LINKFLAGS = -O
LIB = -lstdc++
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -77,11 +81,15 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
.cpp.o:
View File
@@ -8,13 +8,17 @@ SHELL = /bin/sh
CC = mpiCC
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpiCC
LINKFLAGS = -O
LIB =
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -79,15 +83,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

View File

@@ -27,13 +27,17 @@ SHELL = /bin/sh
CC = mpicxx
CCFLAGS = -O
SHFLAGS = -fPIC
DEPFLAGS = -M
LINK = mpicxx
LINKFLAGS = -O
LIB = -lstdc++ -lm
SIZE = size
ARCHIVE = ar
ARFLAGS = -rc
SIZE = size
SHLIBFLAGS = -shared
# ---------------------------------------------------------------------
# LAMMPS-specific settings
@@ -95,15 +99,19 @@ $(EXE): $(OBJ)
$(LINK) $(LINKFLAGS) $(EXTRA_PATH) $(OBJ) $(EXTRA_LIB) $(LIB) -o $(EXE)
$(SIZE) $(EXE)
# Library target
# Library targets
lib: $(OBJ)
$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)
shlib: $(OBJ)
$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
$(OBJ) $(EXTRA_LIB) $(LIB)
# Compilation rules
%.o:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) -c $<
$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@
Some files were not shown because too many files have changed in this diff.
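Taken together, the hunks above apply one pattern to every machine Makefile: a SHFLAGS = -fPIC flag that is now passed to every compile, archive settings (ARCHIVE, ARFLAGS), and a SHLIBFLAGS = -shared setting that feeds a new shlib target next to the renamed lib target. The condensed Makefile sketch below restates that pattern outside the diff for readability; the g++ and -O values are illustrative placeholders, while OBJ, EXE, LIB, EXTRA_INC, EXTRA_PATH, and EXTRA_LIB are the variables each machine Makefile already defines.

# Condensed sketch of the pattern shared by the updated Makefiles
# (illustrative compiler settings, not a real machine file)
CC         = g++        # placeholder compiler
CCFLAGS    = -O
SHFLAGS    = -fPIC      # position-independent code, required for a shared lib
DEPFLAGS   = -M
ARCHIVE    = ar
ARFLAGS    = -rc
SHLIBFLAGS = -shared

# Library targets
lib: $(OBJ)             # static library, built with ar
	$(ARCHIVE) $(ARFLAGS) $(EXE) $(OBJ)

shlib: $(OBJ)           # shared library, built by the compiler driver
	$(CC) $(CCFLAGS) $(SHFLAGS) $(SHLIBFLAGS) $(EXTRA_PATH) -o $(EXE) \
	$(OBJ) $(EXTRA_LIB) $(LIB)

# Compilation rules: -fPIC is applied unconditionally, so the same .o
# files can be linked into the executable, the .a archive, or the .so
%.o:%.cpp
	$(CC) $(CCFLAGS) $(SHFLAGS) $(EXTRA_INC) -c $<
%.d:%.cpp
	$(CC) $(CCFLAGS) $(EXTRA_INC) $(DEPFLAGS) $< > $@

The STUBS change in the serial Makefiles follows the same logic: replacing the hard-coded archive path ../STUBS/libmpi.a with MPI_PATH = -L../STUBS and MPI_LIB = -lmpi_stubs lets the linker resolve either a static or a shared MPI stub library from the same line.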