git-svn-id: svn://svn.icms.temple.edu/lammps-ro/trunk@8604 f3b2605a-c512-4ea7-a41b-209d697bcdaa

This commit is contained in:
sjplimp
2012-08-11 19:05:13 +00:00
parent ee7f12334d
commit 58443251c3
65 changed files with 112430 additions and 0 deletions


@ -0,0 +1,78 @@
This directory has a simple C, C++, and Fortran code that shows how
LAMMPS can be linked to a driver application as a library.  The purpose
is to illustrate how another code could perform computations while
using LAMMPS to perform MD on all or a subset of the processors, or
how an umbrella code or script could call both LAMMPS and some other
code to perform a coupled calculation.

simple.cpp     is the C++ driver
simple.c       is the C driver
simple.f90     is the Fortran driver
libfwrapper.c  is the Fortran-to-C wrapper

The 3 codes do the same thing, so you can compare them to see how to
drive LAMMPS in this manner.  The C driver is similar in spirit to what
one could use to write a scripting language interface.  The Fortran
driver additionally requires a wrapper library that interfaces the C
interface of the LAMMPS library to Fortran and also translates the MPI
communicator from Fortran to C.
You can build any of the three driver codes with a compile line
something like the following, which includes paths to the LAMMPS
library interface, MPI, and FFTW (assuming you built LAMMPS as a
library with its PPPM solver).
This builds the C++ driver with the LAMMPS library using a C++ compiler:

g++ -I/home/sjplimp/lammps/src -c simple.cpp
g++ -L/home/sjplimp/lammps/src simple.o \
    -llmp_g++ -lfftw -lmpich -lmpl -lpthread -o simpleCC

This builds the C driver with the LAMMPS library using a C compiler:

gcc -I/home/sjplimp/lammps/src -c simple.c
gcc -L/home/sjplimp/lammps/src simple.o \
    -llmp_g++ -lfftw -lmpich -lmpl -lpthread -lstdc++ -o simpleC

This builds the Fortran wrapper and driver with the LAMMPS library
using a Fortran and C compiler:

cp ../fortran/libfwrapper.c .
gcc -I/home/sjplimp/lammps/src -c libfwrapper.c
gfortran -I/home/sjplimp/lammps/src -c simple.f90
gfortran -L/home/sjplimp/lammps/src simple.o libfwrapper.o \
    -llmp_g++ -lfftw -lfmpich -lmpich -lpthread -lstdc++ -o simpleF
You then run simpleCC, simpleC, or simpleF on a parallel machine
on some number of processors Q with 2 arguments:

mpirun -np Q simpleCC P in.lj

P is the number of procs you want LAMMPS to run on (must be <= Q) and
in.lj is a LAMMPS input script.
The driver will launch LAMMPS on P procs, read the input script a line
at a time, and pass each command line to LAMMPS.  The final line of
the script is a "run" command, so LAMMPS will run the problem.

The driver then requests all the atom coordinates from LAMMPS, moves
one of the atoms a small amount "epsilon", passes the coordinates back
to LAMMPS, and runs LAMMPS again.  If you look at the output, you
should see a small energy change between runs, due to the moved atom.
The C driver calls the C-style routines in the src/library.cpp file of
LAMMPS.  You can add any functions you wish to this file to manipulate
LAMMPS data however you need.

The Fortran driver uses the same C-style routines, but requires an
additional wrapper to make them Fortran-callable.  Only a subset of the
library functions are currently wrapped, but it should be clear how to
extend the wrapper if desired.

The C++ driver does the same thing, except that it instantiates LAMMPS
as an object first.  Some of the functions in src/library.cpp can be
invoked directly as methods within the appropriate LAMMPS classes,
which is what the driver does.  Any public LAMMPS class method could be
called from the driver this way.  However, the get/put functions are
only implemented in src/library.cpp, so the C++ driver calls them as
C-style functions.


@ -0,0 +1,24 @@
# 3d Lennard-Jones melt
units lj
atom_style atomic
atom_modify map array
lattice fcc 0.8442
region box block 0 4 0 4 0 4
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 10


@ -0,0 +1,150 @@
LAMMPS (14 Aug 2012)
# 3d Lennard-Jones melt
units lj
atom_style atomic
atom_modify map array
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 4 0 4 0 4
create_box 1 box
Created orthogonal box = (0 0 0) to (6.71838 6.71838 6.71838)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 256 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 10
Memory usage per processor = 1.82446 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6218056 -5.0244179
10 1.1298532 -6.3095502 0 -4.6213906 -2.6058175
Loop time of 0.00204515 on 1 procs for 10 steps with 256 atoms
Pair time (%) = 0.00186634 (91.2567)
Neigh time (%) = 0 (0)
Comm time (%) = 0.000108242 (5.29261)
Outpt time (%) = 7.86781e-06 (0.384705)
Other time (%) = 6.27041e-05 (3.06598)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1431 ave 1431 max 1431 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9984 ave 9984 max 9984 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9984
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds = 0
run 10
Memory usage per processor = 1.82446 Mbytes
Step Temp E_pair E_mol TotEng Press
10 1.1298532 -6.3095502 0 -4.6213906 -2.6058175
20 0.6239063 -5.557644 0 -4.6254403 0.97451173
Loop time of 0.00205898 on 1 procs for 10 steps with 256 atoms
Pair time (%) = 0.00188446 (91.5239)
Neigh time (%) = 0 (0)
Comm time (%) = 0.000113249 (5.50023)
Outpt time (%) = 7.86781e-06 (0.382121)
Other time (%) = 5.34058e-05 (2.59379)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1431 ave 1431 max 1431 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9952 ave 9952 max 9952 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9952
Ave neighs/atom = 38.875
Neighbor list builds = 0
Dangerous builds = 0
print 'Hello'
Hello
run 100
Memory usage per processor = 1.82446 Mbytes
Step Temp E_pair E_mol TotEng Press
20 0.6239063 -5.557644 0 -4.6254403 0.97451173
120 0.76618392 -5.7755399 0 -4.6307534 0.043194007
Loop time of 0.0224361 on 1 procs for 100 steps with 256 atoms
Pair time (%) = 0.01788 (79.6927)
Neigh time (%) = 0.00273871 (12.2067)
Comm time (%) = 0.00126839 (5.65332)
Outpt time (%) = 1.00136e-05 (0.0446315)
Other time (%) = 0.000539064 (2.40266)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1403 ave 1403 max 1403 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9634 ave 9634 max 9634 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9634
Ave neighs/atom = 37.6328
Neighbor list builds = 5
Dangerous builds = 0
run 1
Memory usage per processor = 1.82446 Mbytes
Step Temp E_pair E_mol TotEng Press
120 0.76618392 -5.7490597 0 -4.6042731 0.15552489
121 0.7701256 -5.7552684 0 -4.6045924 0.12926088
Loop time of 0.000268936 on 1 procs for 1 steps with 256 atoms
Pair time (%) = 0.000243902 (90.6915)
Neigh time (%) = 0 (0)
Comm time (%) = 1.09673e-05 (4.07801)
Outpt time (%) = 7.15256e-06 (2.65957)
Other time (%) = 6.91414e-06 (2.57092)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1403 ave 1403 max 1403 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9635 ave 9635 max 9635 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9635
Ave neighs/atom = 37.6367
Neighbor list builds = 0
Dangerous builds = 0


@ -0,0 +1,124 @@
LAMMPS (20 Sep 2010)
# 3d Lennard-Jones melt
units lj
atom_style atomic
atom_modify map array
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 4 0 4 0 4
create_box 1 box
Created orthogonal box = (0 0 0) to (6.71838 6.71838 6.71838)
1 by 1 by 1 processor grid
create_atoms 1 box
Created 256 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 10
Memory usage per processor = 1.50139 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6218056 -5.0244179
10 1.1298532 -6.3095502 0 -4.6213906 -2.6058175
Loop time of 0.00370193 on 1 procs for 10 steps with 256 atoms
Pair time (%) = 0.00340414 (91.9559)
Neigh time (%) = 0 (0)
Comm time (%) = 0.000165701 (4.47607)
Outpt time (%) = 2.31266e-05 (0.624718)
Other time (%) = 0.000108957 (2.94326)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1431 ave 1431 max 1431 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9984 ave 9984 max 9984 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9984
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds = 0
run 10
Memory usage per processor = 1.50139 Mbytes
Step Temp E_pair E_mol TotEng Press
10 1.1298532 -6.3095502 0 -4.6213906 -2.6058175
20 0.6239063 -5.557644 0 -4.6254403 0.97451173
Loop time of 0.00365806 on 1 procs for 10 steps with 256 atoms
Pair time (%) = 0.0033741 (92.2375)
Neigh time (%) = 0 (0)
Comm time (%) = 0.000161886 (4.42547)
Outpt time (%) = 1.09673e-05 (0.299811)
Other time (%) = 0.000111103 (3.03722)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1431 ave 1431 max 1431 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9952 ave 9952 max 9952 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9952
Ave neighs/atom = 38.875
Neighbor list builds = 0
Dangerous builds = 0
run 1
Memory usage per processor = 1.50139 Mbytes
Step Temp E_pair E_mol TotEng Press
20 0.6239063 -5.5404291 0 -4.6082254 1.0394285
21 0.63845863 -5.5628733 0 -4.6089263 0.99398278
Loop time of 0.000490904 on 1 procs for 1 steps with 256 atoms
Pair time (%) = 0.000452042 (92.0835)
Neigh time (%) = 0 (0)
Comm time (%) = 1.69277e-05 (3.44828)
Outpt time (%) = 1.00136e-05 (2.03983)
Other time (%) = 1.19209e-05 (2.42836)
Nlocal: 256 ave 256 max 256 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 1431 ave 1431 max 1431 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 9705 ave 9705 max 9705 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9705
Ave neighs/atom = 37.9102
Neighbor list builds = 0
Dangerous builds = 0


@ -0,0 +1,124 @@
LAMMPS (20 Sep 2010)
# 3d Lennard-Jones melt
units lj
atom_style atomic
atom_modify map array
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 4 0 4 0 4
create_box 1 box
Created orthogonal box = (0 0 0) to (6.71838 6.71838 6.71838)
1 by 2 by 2 processor grid
create_atoms 1 box
Created 256 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 10
Memory usage per processor = 1.48354 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6218056 -5.0244179
10 1.1298532 -6.3095502 0 -4.6213906 -2.6058175
Loop time of 0.00202775 on 4 procs for 10 steps with 256 atoms
Pair time (%) = 0.00085938 (42.381)
Neigh time (%) = 0 (0)
Comm time (%) = 0.00108671 (53.592)
Outpt time (%) = 2.79546e-05 (1.3786)
Other time (%) = 5.37038e-05 (2.64844)
Nlocal: 64 ave 64 max 64 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Nghost: 843 ave 843 max 843 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Neighs: 2496 ave 2496 max 2496 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 9984
Ave neighs/atom = 39
Neighbor list builds = 0
Dangerous builds = 0
run 10
Memory usage per processor = 1.48354 Mbytes
Step Temp E_pair E_mol TotEng Press
10 1.1298532 -6.3095502 0 -4.6213906 -2.6058175
20 0.6239063 -5.557644 0 -4.6254403 0.97451173
Loop time of 0.00224167 on 4 procs for 10 steps with 256 atoms
Pair time (%) = 0.000862718 (38.4855)
Neigh time (%) = 0 (0)
Comm time (%) = 0.00127524 (56.888)
Outpt time (%) = 5.19753e-05 (2.31859)
Other time (%) = 5.17368e-05 (2.30796)
Nlocal: 64 ave 69 max 59 min
Histogram: 1 0 0 0 1 0 1 0 0 1
Nghost: 843 ave 848 max 838 min
Histogram: 1 0 0 0 1 0 1 0 0 1
Neighs: 2488 ave 2745 max 2319 min
Histogram: 1 0 1 1 0 0 0 0 0 1
Total # of neighbors = 9952
Ave neighs/atom = 38.875
Neighbor list builds = 0
Dangerous builds = 0
run 1
Memory usage per processor = 1.48354 Mbytes
Step Temp E_pair E_mol TotEng Press
20 0.6239063 -5.5404291 0 -4.6082254 1.0394285
21 0.63845863 -5.5628733 0 -4.6089263 0.99398278
Loop time of 0.000325441 on 4 procs for 1 steps with 256 atoms
Pair time (%) = 0.000120759 (37.1062)
Neigh time (%) = 0 (0)
Comm time (%) = 0.000165045 (50.7143)
Outpt time (%) = 2.86698e-05 (8.80952)
Other time (%) = 1.09673e-05 (3.36996)
Nlocal: 64 ave 70 max 58 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Nghost: 843 ave 849 max 837 min
Histogram: 1 0 0 0 1 1 0 0 0 1
Neighs: 2426.25 ave 2704 max 2229 min
Histogram: 1 0 1 1 0 0 0 0 0 1
Total # of neighbors = 9705
Ave neighs/atom = 37.9102
Neighbor list builds = 0
Dangerous builds = 0


@ -0,0 +1,116 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   www.cs.sandia.gov/~sjplimp/lammps.html
   Steve Plimpton, sjplimp@sandia.gov, Sandia National Laboratories

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

/* c_driver = simple example of how an umbrella program
   can invoke LAMMPS as a library on some subset of procs
   Syntax: c_driver P in.lammps
           P = # of procs to run LAMMPS on
               must be <= # of procs the driver code itself runs on
           in.lammps = LAMMPS input script
   See README for compilation instructions */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
#include "library.h"        /* this is a LAMMPS include file */

int main(int narg, char **arg)
{
  /* setup MPI and various communicators
     driver runs on all procs in MPI_COMM_WORLD
     comm_lammps only has 1st P procs (could be all or any subset) */

  MPI_Init(&narg,&arg);

  if (narg != 3) {
    printf("Syntax: c_driver P in.lammps\n");
    exit(1);
  }

  int me,nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  MPI_Comm_size(MPI_COMM_WORLD,&nprocs);

  int nprocs_lammps = atoi(arg[1]);
  if (nprocs_lammps > nprocs) {
    if (me == 0)
      printf("ERROR: LAMMPS cannot use more procs than available\n");
    MPI_Abort(MPI_COMM_WORLD,1);
  }

  int lammps;
  if (me < nprocs_lammps) lammps = 1;
  else lammps = MPI_UNDEFINED;
  MPI_Comm comm_lammps;
  MPI_Comm_split(MPI_COMM_WORLD,lammps,0,&comm_lammps);

  /* open LAMMPS input script on driver proc 0 */

  FILE *fp = NULL;
  if (me == 0) {
    fp = fopen(arg[2],"r");
    if (fp == NULL) {
      printf("ERROR: Could not open LAMMPS input script\n");
      MPI_Abort(MPI_COMM_WORLD,1);
    }
  }

  /* run the input script thru LAMMPS one line at a time until end-of-file
     driver proc 0 reads a line, Bcasts it to all procs
     (could just send it to proc 0 of comm_lammps and let it Bcast)
     all LAMMPS procs call lammps_command() on the line */

  void *ptr;
  if (lammps == 1) lammps_open(0,NULL,comm_lammps,&ptr);

  int n;
  char line[1024];
  while (1) {
    if (me == 0) {
      if (fgets(line,1024,fp) == NULL) n = 0;
      else n = strlen(line) + 1;
      if (n == 0) fclose(fp);
    }
    MPI_Bcast(&n,1,MPI_INT,0,MPI_COMM_WORLD);
    if (n == 0) break;
    MPI_Bcast(line,n,MPI_CHAR,0,MPI_COMM_WORLD);
    if (lammps == 1) lammps_command(ptr,line);
  }

  /* run 10 more steps
     get coords from LAMMPS
     change coords of 1st atom
     put coords back into LAMMPS
     run a single step with changed coords */

  if (lammps == 1) {
    lammps_command(ptr,"run 10");

    int natoms = lammps_get_natoms(ptr);
    double *x = (double *) malloc(3*natoms*sizeof(double));
    lammps_get_coords(ptr,x);
    double epsilon = 0.1;
    x[0] += epsilon;
    lammps_put_coords(ptr,x);
    free(x);

    lammps_command(ptr,"run 1");
  }

  if (lammps == 1) lammps_close(ptr);

  /* close down MPI */

  MPI_Finalize();
  return 0;
}


@ -0,0 +1,121 @@
/* ----------------------------------------------------------------------
   LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
   www.cs.sandia.gov/~sjplimp/lammps.html
   Steve Plimpton, sjplimp@sandia.gov, Sandia National Laboratories

   Copyright (2003) Sandia Corporation.  Under the terms of Contract
   DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
   certain rights in this software.  This software is distributed under
   the GNU General Public License.

   See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */

// c++_driver = simple example of how an umbrella program
//              can invoke LAMMPS as a library on some subset of procs
// Syntax: c++_driver P in.lammps
//         P = # of procs to run LAMMPS on
//             must be <= # of procs the driver code itself runs on
//         in.lammps = LAMMPS input script
// See README for compilation instructions

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

#include "lammps.h"         // these are LAMMPS include files
#include "input.h"
#include "atom.h"
#include "library.h"

using namespace LAMMPS_NS;

int main(int narg, char **arg)
{
  // setup MPI and various communicators
  // driver runs on all procs in MPI_COMM_WORLD
  // comm_lammps only has 1st P procs (could be all or any subset)

  MPI_Init(&narg,&arg);

  if (narg != 3) {
    printf("Syntax: c++_driver P in.lammps\n");
    exit(1);
  }

  int me,nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD,&me);
  MPI_Comm_size(MPI_COMM_WORLD,&nprocs);

  int nprocs_lammps = atoi(arg[1]);
  if (nprocs_lammps > nprocs) {
    if (me == 0)
      printf("ERROR: LAMMPS cannot use more procs than available\n");
    MPI_Abort(MPI_COMM_WORLD,1);
  }

  int lammps;
  if (me < nprocs_lammps) lammps = 1;
  else lammps = MPI_UNDEFINED;
  MPI_Comm comm_lammps;
  MPI_Comm_split(MPI_COMM_WORLD,lammps,0,&comm_lammps);

  // open LAMMPS input script on driver proc 0

  FILE *fp = NULL;
  if (me == 0) {
    fp = fopen(arg[2],"r");
    if (fp == NULL) {
      printf("ERROR: Could not open LAMMPS input script\n");
      MPI_Abort(MPI_COMM_WORLD,1);
    }
  }

  // run the input script thru LAMMPS one line at a time until end-of-file
  // driver proc 0 reads a line, Bcasts it to all procs
  // (could just send it to proc 0 of comm_lammps and let it Bcast)
  // all LAMMPS procs call input->one() on the line

  LAMMPS *lmp = NULL;
  if (lammps == 1) lmp = new LAMMPS(0,NULL,comm_lammps);

  int n;
  char line[1024];
  while (1) {
    if (me == 0) {
      if (fgets(line,1024,fp) == NULL) n = 0;
      else n = strlen(line) + 1;
      if (n == 0) fclose(fp);
    }
    MPI_Bcast(&n,1,MPI_INT,0,MPI_COMM_WORLD);
    if (n == 0) break;
    MPI_Bcast(line,n,MPI_CHAR,0,MPI_COMM_WORLD);
    if (lammps == 1) lmp->input->one(line);
  }

  // run 10 more steps
  // get coords from LAMMPS
  // change coords of 1st atom
  // put coords back into LAMMPS
  // run a single step with changed coords

  if (lammps == 1) {
    lmp->input->one("run 10");

    int natoms = static_cast<int> (lmp->atom->natoms);
    double *x = new double[3*natoms];
    lammps_get_coords(lmp,x);   // no LAMMPS class function for this
    double epsilon = 0.1;
    x[0] += epsilon;
    lammps_put_coords(lmp,x);   // no LAMMPS class function for this
    delete [] x;

    lmp->input->one("run 1");
  }

  if (lammps == 1) delete lmp;

  // close down MPI

  MPI_Finalize();
  return 0;
}


@ -0,0 +1,135 @@
! LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
! www.cs.sandia.gov/~sjplimp/lammps.html
! Steve Plimpton, sjplimp@sandia.gov, Sandia National Laboratories
!
! Copyright (2003) Sandia Corporation.  Under the terms of Contract
! DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
! certain rights in this software.  This software is distributed under
! the GNU General Public License.
!
! See the README file in the top-level LAMMPS directory.

! f_driver = simple example of how an umbrella program
!            can invoke LAMMPS as a library on some subset of procs
! Syntax: f_driver P in.lammps
!         P = # of procs to run LAMMPS on
!             must be <= # of procs the driver code itself runs on
!         in.lammps = LAMMPS input script
! See README for compilation instructions

PROGRAM f_driver
  IMPLICIT NONE
  INCLUDE 'mpif.h'

  INTEGER, PARAMETER :: fp=20
  INTEGER :: n, narg, ierr, me, nprocs, natoms
  INTEGER :: lammps, nprocs_lammps, comm_lammps
  INTEGER (kind=8) :: ptr
  REAL (kind=8), ALLOCATABLE :: x(:)
  REAL (kind=8), PARAMETER :: epsilon=0.1
  CHARACTER (len=64) :: arg
  CHARACTER (len=1024) :: line

  ! setup MPI and various communicators
  ! driver runs on all procs in MPI_COMM_WORLD
  ! comm_lammps only has 1st P procs (could be all or any subset)

  CALL mpi_init(ierr)

  narg = command_argument_count()
  IF (narg /= 2) THEN
     PRINT *, 'Syntax: f_driver P in.lammps'
     CALL mpi_abort(MPI_COMM_WORLD,1,ierr)
  END IF

  CALL mpi_comm_rank(MPI_COMM_WORLD,me,ierr)
  CALL mpi_comm_size(MPI_COMM_WORLD,nprocs,ierr)

  CALL get_command_argument(1,arg)
  READ (arg,'(I10)') nprocs_lammps
  IF (nprocs_lammps > nprocs) THEN
     IF (me == 0) THEN
        PRINT *, 'ERROR: LAMMPS cannot use more procs than available'
        CALL mpi_abort(MPI_COMM_WORLD,2,ierr)
     END IF
  END IF

  lammps = 0
  IF (me < nprocs_lammps) THEN
     lammps = 1
  ELSE
     lammps = MPI_UNDEFINED
  END IF
  CALL mpi_comm_split(MPI_COMM_WORLD,lammps,0,comm_lammps,ierr)

  ! open LAMMPS input script on driver proc 0

  IF (me == 0) THEN
     CALL get_command_argument(2,arg)
     OPEN(UNIT=fp, FILE=arg, ACTION='READ', STATUS='OLD', IOSTAT=ierr)
     IF (ierr /= 0) THEN
        PRINT *, 'ERROR: Could not open LAMMPS input script'
        CALL mpi_abort(MPI_COMM_WORLD,3,ierr)
     END IF
  END IF

  ! run the input script thru LAMMPS one line at a time until end-of-file
  ! driver proc 0 reads a line, Bcasts it to all procs
  ! (could just send it to proc 0 of comm_lammps and let it Bcast)
  ! all LAMMPS procs call lammps_command() on the line

  IF (lammps == 1) CALL lammps_open(comm_lammps,ptr)

  n = 0
  DO
     IF (me == 0) THEN
        READ (UNIT=fp, FMT='(A)', IOSTAT=ierr) line
        n = 0
        IF (ierr == 0) THEN
           n = LEN(TRIM(line))
           IF (n == 0) THEN
              line = ' '
              n = 1
           END IF
        END IF
     END IF
     CALL mpi_bcast(n,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
     IF (n == 0) EXIT
     CALL mpi_bcast(line,n,MPI_CHARACTER,0,MPI_COMM_WORLD,ierr)
     IF (lammps == 1) CALL lammps_command(ptr,line,n)
  END DO
  IF (me == 0) CLOSE(UNIT=fp)

  ! run 10 more steps
  ! get coords from LAMMPS
  ! change coords of 1st atom
  ! put coords back into LAMMPS
  ! run a single step with changed coords

  IF (lammps == 1) THEN
     CALL lammps_command(ptr,'run 10',6)

     CALL lammps_get_natoms(ptr,natoms)
     ALLOCATE(x(3*natoms))
     CALL lammps_get_coords(ptr,x)
     x(1) = x(1) + epsilon
     CALL lammps_put_coords(ptr,x)
     DEALLOCATE(x)

     CALL lammps_command(ptr,'run 1',5)
  END IF

  ! free LAMMPS object

  IF (lammps == 1) CALL lammps_close(ptr)

  ! close down MPI

  CALL mpi_finalize(ierr)

END PROGRAM f_driver