Merge pull request #330 from akohlmey/collected-small-bugfixes

Collected small bugfixes
This commit is contained in:
sjplimp
2017-01-17 09:08:00 -07:00
committed by GitHub
20 changed files with 125 additions and 108 deletions

View File

@ -1153,7 +1153,7 @@ Package, Description, Author(s), Doc page, Example, Pic/movie, Library
"USER-MISC"_#USER-MISC, single-file contributions, USER-MISC/README, USER-MISC/README, -, -, -
"USER-MANIFOLD"_#USER-MANIFOLD, motion on 2d surface, Stefan Paquay (Eindhoven U of Technology), "fix manifoldforce"_fix_manifoldforce.html, USER/manifold, "manifold"_manifold, -
"USER-MOLFILE"_#USER-MOLFILE, "VMD"_VMD molfile plug-ins, Axel Kohlmeyer (Temple U), "dump molfile"_dump_molfile.html, -, -, VMD-MOLFILE
"USER-NC-DUMP"_#USER-NC-DUMP, dump output via NetCDF, Lars Pastewka (Karlsruhe Institute of Technology, KIT), "dump nc, dump nc/mpiio"_dump_nc.html, -, -, lib/netcdf
"USER-NC-DUMP"_#USER-NC-DUMP, dump output via NetCDF, Lars Pastewka (Karlsruhe Institute of Technology, KIT), "dump nc / dump nc/mpiio"_dump_nc.html, -, -, lib/netcdf
"USER-OMP"_#USER-OMP, OpenMP threaded styles, Axel Kohlmeyer (Temple U), "Section 5.3.4"_accelerate_omp.html, -, -, -
"USER-PHONON"_#USER-PHONON, phonon dynamical matrix, Ling-Ti Kong (Shanghai Jiao Tong U), "fix phonon"_fix_phonon.html, USER/phonon, -, -
"USER-QMMM"_#USER-QMMM, QM/MM coupling, Axel Kohlmeyer (Temple U), "fix qmmm"_fix_qmmm.html, USER/qmmm, -, lib/qmmm
@ -1610,11 +1610,12 @@ and a "dump nc/mpiio"_dump_nc.html command to output LAMMPS snapshots
in this format. See src/USER-NC-DUMP/README for more details.
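As an illustrative sketch (added here; the argument list is assumed to follow the dump custom style described in dump_nc.html, so check that page for the authoritative syntax):

dump trj all nc 100 traj.nc id type x y z :pre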
NetCDF files can be directly visualized with the following tools:
Ovito (http://www.ovito.org/). Ovito supports the AMBER convention
and all of the above extensions. :ulb,l
VMD (http://www.ks.uiuc.edu/Research/vmd/) :l
AtomEye (http://www.libatoms.org/). The libAtoms version of AtomEye contains
a NetCDF reader that is not present in the standard distribution of AtomEye :l,ule
The person who created these files is Lars Pastewka at
Karlsruhe Institute of Technology (lars.pastewka at kit.edu).

View File

@ -1727,7 +1727,7 @@ thermodynamic state and a total run time for the simulation. It then
appends statistics about the CPU time and storage requirements for the
simulation. An example set of statistics is shown here:
Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms
Loop time of 2.81192 on 4 procs for 300 steps with 2004 atoms :pre
Performance: 18.436 ns/day 1.302 hours/ns 106.689 timesteps/s
97.0% CPU use with 4 MPI tasks x no OpenMP threads :pre
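As a consistency check (a worked calculation added here; the 2 fs timestep is inferred from these numbers, not stated on this page):

300 steps / 2.81192 s        = 106.689 timesteps/s
106.689 steps/s * 2 fs/step  = 213.4 fs of simulated time per second
213.4 fs/s * 86400 s/day     = 18.436 ns/day
24 h / (18.436 ns/day)       = 1.302 hours/ns :pre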
@ -1757,14 +1757,14 @@ Ave special neighs/atom = 2.34032
Neighbor list builds = 26
Dangerous builds = 0 :pre
The first section provides a global loop timing summary. The loop time
The first section provides a global loop timing summary. The {loop time}
is the total wall time for the section. The {Performance} line is
provided for convenience to help predict the number of loop
continuations required and for comparing performance with other
similar MD codes. The CPU use line provides the CPU utilzation per
continuations required and for comparing performance with other,
similar MD codes. The {CPU use} line provides the CPU utilization per
MPI task; it should be close to 100% times the number of OpenMP
threads (or 1). Lower numbers correspond to delays due to file I/O or
insufficient thread utilization.
threads (or 1 if no OpenMP). Lower numbers correspond to delays due
to file I/O or insufficient thread utilization.
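Read as a formula (an interpretation added here, assuming {CPU use} reports consumed CPU time relative to elapsed wall time, averaged over tasks):

CPU use = 100% * (total CPU time) / (wall time * MPI tasks) :pre

So the 97.0% above, with 4 MPI tasks and no OpenMP threads, means each task kept its core busy about 97% of the loop time.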
The MPI task section gives the breakdown of the CPU run time (in
seconds) into major categories:
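The usual category names (listed here as a sketch, not copied from this page) are:

Pair | Bond | Kspace | Neigh | Comm | Output | Modify | Other :pre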
@ -1791,7 +1791,7 @@ is present that also prints the CPU utilization in percent. In
addition, when {timer full} and the "package omp"_package.html
command are active, a similar timing summary of time spent in threaded
regions to monitor thread utilization and load balance is provided. A
new entry is the {Reduce} section, which lists the time spend in
new entry is the {Reduce} section, which lists the time spent in
reducing the per-thread data elements to the storage for non-threaded
computation. These thread timings are taken from the first MPI rank
only and thus, as the breakdown for MPI tasks can change from MPI

View File

@ -91,6 +91,7 @@ Commands :h1
suffix
tad
temper
temper_grem
thermo
thermo_modify
thermo_style

View File

@ -35,6 +35,7 @@ Computes :h1
compute_erotate_sphere_atom
compute_event_displace
compute_fep
compute_global_atom
compute_group_group
compute_gyration
compute_gyration_chunk

View File

@ -29,7 +29,7 @@ fix fxgREM all grem 502 -0.15 -80000 fxnvt :pre
[Description:]
This fix implements the molecular dynamics version of the generalized
replica exchange method (gREM) originally developed by "(Kim)"_#Kim,
replica exchange method (gREM) originally developed by "(Kim)"_#Kim2010,
which uses non-Boltzmann ensembles to sample over first order phase
transitions. This is done by defining replicas with an
enthalpy-dependent effective temperature
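In the linear gREM formulation of "(Kim)"_#Kim2010 this effective temperature is (reconstructed here from the published method, since the formula itself is not shown in this excerpt; {lambda}, {eta}, and {H0} are the three numeric arguments of the fix, e.g. 502, -0.15, and -80000 above):

Teff = lambda + eta*(H - H0) :pre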
@ -103,7 +103,7 @@ npt"_fix_nh.html, "thermo_modify"_thermo_modify.html
:line
:link(Kim)
:link(Kim2010)
[(Kim)] Kim, Keyes, Straub, J. Chem. Phys., 132, 224107 (2010).
:link(Malolepsza)

View File

@ -23,6 +23,7 @@ Section_history.html
tutorial_drude.html
tutorial_github.html
tutorial_pylammps.html
body.html
manifolds.html
@ -113,6 +114,7 @@ special_bonds.html
suffix.html
tad.html
temper.html
temper_grem.html
thermo.html
thermo_modify.html
thermo_style.html

View File

@ -32,7 +32,7 @@ Run a parallel tempering or replica exchange simulation in LAMMPS
partition mode using multiple generalized replicas (ensembles) of a
system defined by "fix grem"_fix_grem.html, which stands for the
generalized replica exchange method (gREM) originally developed by
"(Kim)"_#Kim. It uses non-Boltzmann ensembles to sample over first
"(Kim)"_#KimStraub. It uses non-Boltzmann ensembles to sample over first
order phase transitions. This is done by defining replicas with an
enthalpy-dependent effective temperature
@ -105,5 +105,5 @@ This command must be used with "fix grem"_fix_grem.html.
[Default:] none
:link(Kim)
:link(KimStraub)
[(Kim)] Kim, Keyes, Straub, J. Chem. Phys., 132, 224107 (2010).

View File

@ -33,14 +33,14 @@ timer loop :pre
Select the level of detail at which LAMMPS performs its CPU timings.
Multiple keywords can be specified with the {timer} command. For
keywords that are mutually exclusive, the last one specified takes
effect.
precedence.
During a simulation run LAMMPS collects information about how much
time is spent in different sections of the code and thus can provide
information for identifying performance and load-imbalance problems.
This can be done at different levels of detail and accuracy. For more
information about the timing output, see this "discussion of screen
output"_Section_start.html#start_8.
output in Section 2.8"_Section_start.html#start_8.
The {off} setting will turn all time measurements off. The {loop}
setting will only measure the total time for a run and not collect any
@ -52,20 +52,22 @@ processors. The {full} setting adds information about CPU
utilization and thread utilization, when multi-threading is enabled.
With the {sync} setting, all MPI tasks are synchronized at each timer
call which meaures load imbalance more accuractly, though it can also
slow down the simulation. Using the {nosync} setting (which is the
default) turns off this synchronization.
call which measures load imbalance for each section more accurately,
though it can also slow down the simulation by prohibiting overlapping
independent computations on different MPI ranks. Using the {nosync}
setting (which is the default) turns this synchronization off.
With the {timeout} keyword a walltime limit can be imposed that
With the {timeout} keyword a walltime limit can be imposed that
affects the "run"_run.html and "minimize"_minimize.html commands.
This can be convenient when runs have to confirm to time limits,
e.g. when running under a batch system and you want to maximize
the utilization of the batch time slot, especially when the time
per timestep varies and is thus difficult to predict how many
steps a simulation can perform, or for difficult to converge
minimizations. The timeout {elapse} value should be somewhat smaller
than the time requested from the batch system, as there is usually
some overhead to launch jobs, and it may be advisable to write
This can be convenient when calculations have to comply with execution
time limits, e.g. when running under a batch system and you want to
maximize the utilization of the batch time slot, especially for runs
where the time per timestep varies a lot and it is thus difficult to
predict how many steps a simulation can perform within a given
walltime limit. The same applies to difficult-to-converge minimizations.
The timeout {elapse} value should be somewhat smaller than the maximum
wall time requested from the batch system, as there is usually
some overhead to launch jobs, and it is advisable to write
out a restart after terminating a run due to a timeout.
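A minimal usage sketch (illustrative values, added here; see the syntax summary above for the full keyword list):

timer full sync                   # most detailed timing, synchronized sections
timer timeout 0:55:00 every 100   # stop cleanly well before a 1-hour batch limit :pre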
The timeout timer starts when the command is issued. When the time

src/.gitignore
View File

@ -374,6 +374,8 @@
/fix_meso.h
/fix_meso_stationary.cpp
/fix_meso_stationary.h
/fix_mscg.cpp
/fix_mscg.h
/fix_msst.cpp
/fix_msst.h
/fix_neb.cpp
@ -394,6 +396,8 @@
/fix_nph_body.h
/fix_npt_body.cpp
/fix_npt_body.h
/fix_nvk.cpp
/fix_nvk.h
/fix_nvt_body.cpp
/fix_nvt_body.h
/fix_nve_body.cpp

View File

@ -545,7 +545,7 @@ void FixPour::pre_exchange()
delx = coords[m][0] - xnear[i][0];
dely = coords[m][1] - xnear[i][1];
delz = coords[m][2] - xnear[i][2];
domain->minimum_image(delx,dely,delz);
rsq = delx*delx + dely*dely + delz*delz;
radsum = coords[m][3] + xnear[i][3];
if (rsq <= radsum*radsum) break;
@ -650,9 +650,9 @@ void FixPour::pre_exchange()
atom->radius[n] = radtmp;
atom->rmass[n] = 4.0*MY_PI/3.0 * radtmp*radtmp*radtmp * denstmp;
} else {
onemols[imol]->quat_external = quat;
atom->add_molecule_atom(onemols[imol],m,n,maxtag_all);
}
onemols[imol]->quat_external = quat;
atom->add_molecule_atom(onemols[imol],m,n,maxtag_all);
}
modify->create_attribute(n);
}

View File

@ -149,7 +149,7 @@ FixWallGran::FixWallGran(LAMMPS *lmp, int narg, char **arg) :
if (narg < iarg+2) error->all(FLERR,"Illegal fix wall/gran command");
wallstyle = ZCYLINDER;
lo = hi = 0.0;
cylradius = force->numeric(FLERR,arg[iarg+3]);
cylradius = force->numeric(FLERR,arg[iarg+1]);
iarg += 2;
} else if (strcmp(arg[iarg],"region") == 0) {
if (narg < iarg+2) error->all(FLERR,"Illegal fix wall/gran command");

View File

@ -51,6 +51,8 @@ PairBuckLongCoulLong::PairBuckLongCoulLong(LAMMPS *lmp) : Pair(lmp)
dispersionflag = ewaldflag = pppmflag = 1;
respa_enable = 1;
writedata = 1;
ftable = NULL;
fdisptable = NULL;
}
/* ----------------------------------------------------------------------
@ -230,7 +232,27 @@ void PairBuckLongCoulLong::init_style()
// require an atom style with charge defined
if (!atom->q_flag && (ewald_order&(1<<1)))
error->all(FLERR,"Pair style buck/long/coul/long requires atom attribute q");
error->all(FLERR,
"Invoking coulombic in pair style buck/long/coul/long requires atom attribute q");
// ensure use of KSpace long-range solver, set two g_ewalds
if (force->kspace == NULL)
error->all(FLERR,"Pair style requires a KSpace style");
if (ewald_order&(1<<1)) g_ewald = force->kspace->g_ewald;
if (ewald_order&(1<<6)) g_ewald_6 = force->kspace->g_ewald_6;
// set rRESPA cutoffs
if (strstr(update->integrate_style,"respa") &&
((Respa *) update->integrate)->level_inner >= 0)
cut_respa = ((Respa *) update->integrate)->cutoff;
else cut_respa = NULL;
// setup force tables
if (ncoultablebits && (ewald_order&(1<<1))) init_tables(cut_coul,cut_respa);
if (ndisptablebits && (ewald_order&(1<<6))) init_tables_disp(cut_buck_global);
// request regular or rRESPA neighbor lists if neighrequest_flag != 0
@ -271,24 +293,6 @@ void PairBuckLongCoulLong::init_style()
}
cut_coulsq = cut_coul * cut_coul;
// set rRESPA cutoffs
if (strstr(update->integrate_style,"respa") &&
((Respa *) update->integrate)->level_inner >= 0)
cut_respa = ((Respa *) update->integrate)->cutoff;
else cut_respa = NULL;
// ensure use of KSpace long-range solver, set two g_ewalds
if (force->kspace == NULL)
error->all(FLERR,"Pair style requires a KSpace style");
if (ewald_order&(1<<1)) g_ewald = force->kspace->g_ewald;
if (ewald_order&(1<<6)) g_ewald_6 = force->kspace->g_ewald_6;
// setup force tables
if (ncoultablebits && (ewald_order&(1<<1))) init_tables(cut_coul,cut_respa);
if (ndisptablebits && (ewald_order&(1<<6))) init_tables_disp(cut_buck_global);
}
/* ----------------------------------------------------------------------

View File

@ -81,10 +81,12 @@ void PairLJLongCoulLong::settings(int narg, char **arg)
{
if (narg != 3 && narg != 4) error->all(FLERR,"Illegal pair_style command");
ewald_off = 0;
ewald_order = 0;
options(arg, 6);
options(++arg, 1);
ewald_off = 0;
options(arg,6);
options(++arg,1);
if (!comm->me && ewald_order == ((1<<1) | (1<<6)))
error->warning(FLERR,"Using largest cutoff for lj/long/coul/long");
if (!*(++arg))
@ -226,7 +228,26 @@ void PairLJLongCoulLong::init_style()
if (!atom->q_flag && (ewald_order&(1<<1)))
error->all(FLERR,
"Invoking coulombic in pair style lj/coul requires atom attribute q");
"Invoking coulombic in pair style lj/long/coul/long requires atom attribute q");
// ensure use of KSpace long-range solver, set two g_ewalds
if (force->kspace == NULL)
error->all(FLERR,"Pair style requires a KSpace style");
if (ewald_order&(1<<1)) g_ewald = force->kspace->g_ewald;
if (ewald_order&(1<<6)) g_ewald_6 = force->kspace->g_ewald_6;
// set rRESPA cutoffs
if (strstr(update->integrate_style,"respa") &&
((Respa *) update->integrate)->level_inner >= 0)
cut_respa = ((Respa *) update->integrate)->cutoff;
else cut_respa = NULL;
// setup force tables
if (ncoultablebits && (ewald_order&(1<<1))) init_tables(cut_coul,cut_respa);
if (ndisptablebits && (ewald_order&(1<<6))) init_tables_disp(cut_lj_global);
// request regular or rRESPA neighbor lists if neighrequest_flag != 0
@ -265,27 +286,8 @@ void PairLJLongCoulLong::init_style()
} else irequest = neighbor->request(this,instance_me);
}
cut_coulsq = cut_coul * cut_coul;
// set rRESPA cutoffs
if (strstr(update->integrate_style,"respa") &&
((Respa *) update->integrate)->level_inner >= 0)
cut_respa = ((Respa *) update->integrate)->cutoff;
else cut_respa = NULL;
// ensure use of KSpace long-range solver, set g_ewald
if (force->kspace == NULL)
error->all(FLERR,"Pair style requires a KSpace style");
if (force->kspace) g_ewald = force->kspace->g_ewald;
if (force->kspace) g_ewald_6 = force->kspace->g_ewald_6;
// setup force tables
if (ncoultablebits && (ewald_order&(1<<1))) init_tables(cut_coul,cut_respa);
if (ndisptablebits && (ewald_order&(1<<6))) init_tables_disp(cut_lj_global);
}
/* ----------------------------------------------------------------------

View File

@ -54,7 +54,7 @@ MPI_LIB =
FFT_INC = -DFFT_MKL -DFFT_SINGLE
FFT_PATH =
FFT_LIB = -L$MKLROOT/lib/intel64/ -lmkl_intel_ilp64 \
FFT_LIB = -L$(MKLROOT)/lib/intel64/ -lmkl_intel_ilp64 \
-lmkl_sequential -lmkl_core
# JPEG and/or PNG library

View File

@ -54,7 +54,7 @@ MPI_LIB =
FFT_INC = -DFFT_MKL -DFFT_SINGLE
FFT_PATH =
FFT_LIB = -L$MKLROOT/lib/intel64/ -lmkl_intel_ilp64 \
FFT_LIB = -L$(MKLROOT)/lib/intel64/ -lmkl_intel_ilp64 \
-lmkl_sequential -lmkl_core
# JPEG and/or PNG library

View File

@ -54,7 +54,7 @@ MPI_LIB =
FFT_INC = -DFFT_MKL -DFFT_SINGLE
FFT_PATH =
FFT_LIB = -L$MKLROOT/lib/intel64/ -lmkl_intel_ilp64 \
FFT_LIB = -L$(MKLROOT)/lib/intel64/ -lmkl_intel_ilp64 \
-lmkl_sequential -lmkl_core
# JPEG and/or PNG library

View File

@ -493,7 +493,7 @@ ComputeChunkAtom::~ComputeChunkAtom()
{
// check nfix in case all fixes have already been deleted
if (modify->nfix) modify->delete_fix(id_fix);
if (id_fix && modify->nfix) modify->delete_fix(id_fix);
delete [] id_fix;
memory->destroy(chunk);

View File

@ -247,7 +247,7 @@ void Finish::end(int flag)
}
}
// PRD stats using PAIR,BOND,KSPACE for dephase,dynamics,quench
// PRD stats
if (prdflag) {
if (me == 0) {
@ -329,7 +329,7 @@ void Finish::end(int flag)
}
}
// TAD stats using PAIR,BOND,KSPACE for neb,dynamics,quench
// TAD stats
if (tadflag) {
if (me == 0) {
@ -415,7 +415,7 @@ void Finish::end(int flag)
}
}
// HYPER stats using PAIR,BOND,KSPACE for dynamics,quench
// HYPER stats
if (hyperflag) {
if (me == 0) {
@ -912,7 +912,7 @@ void mpi_timings(const char *label, Timer *t, enum Timer::ttype tt,
time_cpu = tmp/nprocs*100.0;
// % variance from the average as measure of load imbalance
if ((time_sq/time - time) > 1.0e-10)
if ((time > 0.001) && ((time_sq/time - time) > 1.0e-10))
time_sq = sqrt(time_sq/time - time)*100.0;
else
time_sq = 0.0;
@ -964,7 +964,7 @@ void omp_times(FixOMP *fix, const char *label, enum Timer::ttype which,
time_std /= nthreads;
time_total /= nthreads;
if ((time_std/time_avg -time_avg) > 1.0e-10)
if ((time_avg > 0.001) && ((time_std/time_avg -time_avg) > 1.0e-10))
time_std = sqrt(time_std/time_avg - time_avg)*100.0;
else
time_std = 0.0;

View File

@ -43,10 +43,10 @@ # General
# Many thanks to Paul S. Crozier for checking script validity
# against his projects.
# Also thanks to Xiaohu Hu (hux2@ornl.gov) and Robert A. Latour
# (latourr@clemson.edu), David Hyde-Volpe, and Tigran Abramyan,
# Clemson University and Chris Lorenz (chris.lorenz@kcl.ac.uk),
# King's College London for their efforts to add CMAP sections,
# which is implemented using the option flag "-cmap".
# Initialization