diff --git a/doc/Section_accelerate.html b/doc/Section_accelerate.html index 296219f985..88d03984b3 100644 --- a/doc/Section_accelerate.html +++ b/doc/Section_accelerate.html @@ -176,8 +176,8 @@ discussed below. package. These styles support vectorized single and mixed precision calculations, in addition to full double precision. In extreme cases, this can provide speedups over 3.5x on CPUs. The package also -supports acceleration with offload to Intel corprocessors (Xeon -Phi). This can result in additional speedup over 2x depending on the +supports acceleration with offload to Intel(R) Xeon Phi(TM) coprocessors. +This can result in additional speedup over 2x depending on the hardware configuration.
Styles with a "kk" suffix are part of the KOKKOS package, and can be @@ -977,10 +977,10 @@ LAMMPS.
The USER-INTEL package was developed by Mike Brown at Intel Corporation. It provides a capability to accelerate simulations by -offloading neighbor list and non-bonded force calculations to Intel -coprocessors (Xeon Phi). Additionally, it supports running +offloading neighbor list and non-bonded force calculations to Intel(R) +Xeon Phi(TM) coprocessors. Additionally, it supports running simulations in single, mixed, or double precision with vectorization, -even if a coprocessor is not present, i.e. on an Intel CPU. The same +even if a coprocessor is not present, i.e. on an Intel(R) CPU. The same C++ code is used for both cases. When offloading to a coprocessor, the routine is run twice, once with an offload flag.
@@ -1004,21 +1004,25 @@ flags to enable OpenMP support (-openmp) to both the CCFLAGS and LINKFLAGS variables. You also need to add -DLAMMPS_MEMALIGN=64 and -restrict to CCFLAGS. +Note that currently you must use the Intel C++ compiler (icc/icpc) to +build the package. In the future, using other compilers (e.g. g++) +may be possible. +
If you are compiling on the same architecture that will be used for the runs, adding the flag -xHost will enable vectorization with the -Intel compiler. In order to build with support for an Intel +Intel(R) compiler. In order to build with support for an Intel(R) coprocessor, the flag -offload should be added to the LINKFLAGS line and the flag -DLMP_INTEL_OFFLOAD should be added to the CCFLAGS line.
The files src/MAKE/Makefile.intel and src/MAKE/Makefile.intel_offload are included in the src/MAKE directory with options that perform well -with the Intel compiler. The latter Makefile has support for offload +with the Intel(R) compiler. The latter Makefile has support for offload to coprocessors and the former does not.
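As a reference point, the flags listed above combine into make variables along these lines (a sketch only, with an assumed -O3 optimization level; consult src/MAKE/Makefile.intel_offload for the actual settings, and drop -offload and -DLMP_INTEL_OFFLOAD for a CPU-only build):

CCFLAGS =	-O3 -openmp -restrict -DLAMMPS_MEMALIGN=64 -xHost -DLMP_INTEL_OFFLOAD
LINKFLAGS =	-O3 -openmp -offload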
-It is recommended that Intel Compiler 2013 SP1 update 1 be used for +
It is recommended that Intel(R) Compiler 2013 SP1 update 1 be used for compiling. Newer versions have some performance issues that are being -addressed. If using Intel MPI, version 5 or higher is recommended. +addressed. If using Intel(R) MPI, version 5 or higher is recommended.
The rest of the compilation is the same as for any other package that has no additional library dependencies, e.g. @@ -1034,7 +1038,7 @@ them.
The total number of MPI tasks used by LAMMPS (one or multiple per compute node) is set in the usual manner via the mpirun or mpiexec -commands, and is independent of the Intel package. +commands, and is independent of the USER-INTEL package.
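For example, a 16-task run is launched in the usual way, with no USER-INTEL-specific arguments needed on the launch line (the executable name and input file below are placeholders):

mpirun -np 16 lmp_intel -in in.script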
Input script requirements to run using pair styles with a intel suffix are as follows: @@ -1054,10 +1058,10 @@ use all single or all double precision, the package intel command must be used in the input script with a "single" or "double" keyword specified.
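An illustrative input-script sketch of these requirements (the pair style is just one example of an intel-suffixed style, and the "package intel * ..." form follows the default settings quoted later for the command-line switch):

package		intel * double
pair_style	lj/cut/intel 2.5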
-Running with an Intel coprocessor: +
Running with an Intel(R) coprocessor:
The USER-INTEL package supports offload of a fraction of the work to -Intel coprocessors (Xeon Phi). This is accomplished by setting a +Intel(R) Xeon Phi(TM) coprocessors. This is accomplished by setting a balance fraction on the package intel command. A balance of 0 runs all calculations on the CPU. A balance of 1 runs all calculations on the coprocessor. A balance of 0.5 runs half of @@ -1075,8 +1079,8 @@ adding a short warm-up run (10-20 steps) will allow the load-balancer to find a setting that will carry over to additional runs.
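For instance, either of the following sets the balance fraction on the package intel command (values are illustrative; -1 matches the default quoted later and defers to the load-balancer):

package		intel * mixed balance 0.5    # offload roughly half of the work
package		intel * mixed balance -1     # let the load-balancer choose the fraction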
The default for the package intel command is to have -all the MPI tasks on a given compute node use a single coprocessor -(Xeon Phi). In general, running with a large number of MPI tasks on +all the MPI tasks on a given compute node use a single Xeon Phi(TM) coprocessor. +In general, running with a large number of MPI tasks on each node will perform best with offload. Each MPI task will automatically get affinity to a subset of the hardware threads available on the coprocessor. For example, if your card has 61 cores, @@ -1087,7 +1091,7 @@ tuning of the number of threads to use per MPI task or the number of threads to use per core can be accomplished with keywords to the package intel command.
-If LAMMPS is using offload to a coprocessor (Xeon Phi), a diagnostic +
If LAMMPS is using offload to an Intel(R) Xeon Phi(TM) coprocessor, a diagnostic line during the setup for a run is printed to the screen (not to log files) indicating that offload is being used and the number of coprocessor threads per MPI task. Additionally, an offload timing @@ -1095,7 +1099,7 @@ summary is printed at the end of each run. When using offload, the sort frequency for atom data is changed to 1 so that the per-atom data is sorted every neighbor build.
-To use multiple coprocessors (Xeon Phis) on each compute node, the +
To use multiple coprocessors on each compute node, the offload_cards keyword can be specified with the package intel command to specify the number of coprocessors to use. diff --git a/doc/Section_accelerate.txt b/doc/Section_accelerate.txt index 6618ed05af..4e4d54513c 100644 --- a/doc/Section_accelerate.txt +++ b/doc/Section_accelerate.txt @@ -172,8 +172,8 @@ Styles with an "intel" suffix are part of the USER-INTEL package. These styles support vectorized single and mixed precision calculations, in addition to full double precision. In extreme cases, this can provide speedups over 3.5x on CPUs. The package also -supports acceleration with offload to Intel corprocessors (Xeon -Phi). This can result in additional speedup over 2x depending on the +supports acceleration with offload to Intel(R) Xeon Phi(TM) coprocessors. +This can result in additional speedup over 2x depending on the hardware configuration. Styles with a "kk" suffix are part of the KOKKOS package, and can be @@ -976,10 +976,10 @@ LAMMPS. The USER-INTEL package was developed by Mike Brown at Intel Corporation. It provides a capability to accelerate simulations by -offloading neighbor list and non-bonded force calculations to Intel -coprocessors (Xeon Phi). Additionally, it supports running +offloading neighbor list and non-bonded force calculations to Intel(R) +Xeon Phi(TM) coprocessors. Additionally, it supports running simulations in single, mixed, or double precision with vectorization, -even if a coprocessor is not present, i.e. on an Intel CPU. The same +even if a coprocessor is not present, i.e. on an Intel(R) CPU. The same C++ code is used for both cases. When offloading to a coprocessor, the routine is run twice, once with an offload flag. @@ -1003,21 +1003,25 @@ flags to enable OpenMP support ({-openmp}) to both the CCFLAGS and LINKFLAGS variables. You also need to add -DLAMMPS_MEMALIGN=64 and -restrict to CCFLAGS. +Note that currently you must use the Intel C++ compiler (icc/icpc) to +build the package. In the future, using other compilers (e.g. g++) +may be possible. + If you are compiling on the same architecture that will be used for the runs, adding the flag {-xHost} will enable vectorization with the -Intel compiler. In order to build with support for an Intel +Intel(R) compiler. In order to build with support for an Intel(R) coprocessor, the flag {-offload} should be added to the LINKFLAGS line and the flag {-DLMP_INTEL_OFFLOAD} should be added to the CCFLAGS line. The files src/MAKE/Makefile.intel and src/MAKE/Makefile.intel_offload are included in the src/MAKE directory with options that perform well -with the Intel compiler. The latter Makefile has support for offload +with the Intel(R) compiler. The latter Makefile has support for offload to coprocessors and the former does not. -It is recommended that Intel Compiler 2013 SP1 update 1 be used for +It is recommended that Intel(R) Compiler 2013 SP1 update 1 be used for compiling. Newer versions have some performance issues that are being -addressed. If using Intel MPI, version 5 or higher is recommended. +addressed. If using Intel(R) MPI, version 5 or higher is recommended. The rest of the compilation is the same as for any other package that has no additional library dependencies, e.g. @@ -1033,7 +1037,7 @@ them. The total number of MPI tasks used by LAMMPS (one or multiple per compute node) is set in the usual manner via the mpirun or mpiexec -commands, and is independent of the Intel package. +commands, and is independent of the USER-INTEL package. 
Input script requirements to run using pair styles with a {intel} suffix are as follows: @@ -1053,10 +1057,10 @@ use all single or all double precision, the "package intel"_package.html command must be used in the input script with a "single" or "double" keyword specified. -[Running with an Intel coprocessor:] +[Running with an Intel(R) coprocessor:] The USER-INTEL package supports offload of a fraction of the work to -Intel coprocessors (Xeon Phi). This is accomplished by setting a +Intel(R) Xeon Phi(TM) coprocessors. This is accomplished by setting a balance fraction on the "package intel"_package.html command. A balance of 0 runs all calculations on the CPU. A balance of 1 runs all calculations on the coprocessor. A balance of 0.5 runs half of @@ -1074,8 +1078,8 @@ adding a short warm-up run (10-20 steps) will allow the load-balancer to find a setting that will carry over to additional runs. The default for the "package intel"_package.html command is to have -all the MPI tasks on a given compute node use a single coprocessor -(Xeon Phi). In general, running with a large number of MPI tasks on +all the MPI tasks on a given compute node use a single Xeon Phi(TM) coprocessor. +In general, running with a large number of MPI tasks on each node will perform best with offload. Each MPI task will automatically get affinity to a subset of the hardware threads available on the coprocessor. For example, if your card has 61 cores, @@ -1086,7 +1090,7 @@ tuning of the number of threads to use per MPI task or the number of threads to use per core can be accomplished with keywords to the "package intel"_package.html command. -If LAMMPS is using offload to a coprocessor (Xeon Phi), a diagnostic +If LAMMPS is using offload to an Intel(R) Xeon Phi(TM) coprocessor, a diagnostic line during the setup for a run is printed to the screen (not to log files) indicating that offload is being used and the number of coprocessor threads per MPI task. Additionally, an offload timing @@ -1094,7 +1098,7 @@ summary is printed at the end of each run. When using offload, the "sort"_atom_modify.html frequency for atom data is changed to 1 so that the per-atom data is sorted every neighbor build. -To use multiple coprocessors (Xeon Phis) on each compute node, the +To use multiple coprocessors on each compute node, the {offload_cards} keyword can be specified with the "package intel"_package.html command to specify the number of coprocessors to use. diff --git a/doc/Section_example.html index dfc356a3d4..7683b56578 100644 --- a/doc/Section_example.html +++ b/doc/Section_example.html @@ -59,7 +59,7 @@ section of the LAMMPS WWW Site.
This package provides options for performing neighbor list and +non-bonded force calculations in single, mixed, or double precision +and also a capability for accelerating calculations with an +Intel(R) Xeon Phi(TM) coprocessor. +
+See this section of the manual to get started: +
+ +The person who created this package is W. Michael Brown at Intel +(michael.w.brown at intel.com). Contact him directly if you have questions. +
+This package contains a LAMMPS implementation of a background diff --git a/doc/Section_packages.txt b/doc/Section_packages.txt index ea0810e923..1936eea35b 100644 --- a/doc/Section_packages.txt +++ b/doc/Section_packages.txt @@ -117,7 +117,7 @@ USER-COLVARS, collective variables, Fiorin & Henin & Kohlmeyer (3), "fix colvars USER-CUDA, NVIDIA GPU styles, Christian Trott (U Tech Ilmenau), "Section accelerate"_Section_accelerate.html#acc_7, USER/cuda, -, lib/cuda USER-EFF, electron force field, Andres Jaramillo-Botero (Caltech), "pair_style eff/cut"_pair_eff.html, USER/eff, "eff"_eff, - USER-FEP, free energy perturbation, Agilio Padua (U Blaise Pascal Clermont-Ferrand), "fix adapt/fep"_fix_adapt.html, USER/fep, -, - -USER-INTEL, Vectorized CPU and Intel coprocessor styles, W. Michael Brown (Intel), "Section accelerate"_Section_accelerate.html#acc_9, examples/intel, -, - +USER-INTEL, Vectorized CPU and Intel(R) coprocessor styles, W. Michael Brown (Intel), "Section accelerate"_Section_accelerate.html#acc_9, examples/intel, -, - USER-LB, Lattice Boltzmann fluid, Colin Denniston (U Western Ontario), "fix lb/fluid"_fix_lb_fluid.html, USER/lb, -, - USER-MISC, single-file contributions, USER-MISC/README, USER-MISC/README, -, -, - USER-MOLFILE, "VMD"_VMD molfile plug-ins, Axel Kohlmeyer (Temple U), "dump molfile"_dump_molfile.html, -, -, VMD-MOLFILE @@ -377,6 +377,22 @@ Contact him directly if you have questions. :line +USER-INTEL package :h4 + +This package provides options for performing neighbor list and +non-bonded force calculations in single, mixed, or double precision +and also a capability for accelerating calculations with an +Intel(R) Xeon Phi(TM) coprocessor. + +See this section of the manual to get started: + +"Section_accelerate"_Section_accelerate.html#acc_9 + +The person who created this package is W. Michael Brown at Intel +(michael.w.brown at intel.com). Contact him directly if you have questions. + +:line + USER-LB package :h4 This package contains a LAMMPS implementation of a background diff --git a/doc/Section_start.html b/doc/Section_start.html index 67da141f95..6448a72a76 100644 --- a/doc/Section_start.html +++ b/doc/Section_start.html @@ -1493,8 +1493,8 @@ default GPU settings, as if the command "package gpu force/neigh 0 0 changed by using the package gpu command in your script if desired.
-For the Intel package, using this command-line switch also invokes the -default Intel settings, as if the command "package intel * mixed +
For the USER-INTEL package, using this command-line switch also invokes the +default USER-INTEL settings, as if the command "package intel * mixed balance -1" were used at the top of your input script. These settings can be changed by using the package intel command in your script if desired. If the USER-OMP package is installed, the diff --git a/doc/Section_start.txt b/doc/Section_start.txt index 3f6a52180e..89270d4846 100644 --- a/doc/Section_start.txt +++ b/doc/Section_start.txt @@ -1487,8 +1487,8 @@ default GPU settings, as if the command "package gpu force/neigh 0 0 changed by using the "package gpu"_package.html command in your script if desired. -For the Intel package, using this command-line switch also invokes the -default Intel settings, as if the command "package intel * mixed +For the USER-INTEL package, using this command-line switch also invokes the +default USER-INTEL settings, as if the command "package intel * mixed balance -1" were used at the top of your input script. These settings can be changed by using the "package intel"_package.html command in your script if desired. If the USER-OMP package is installed, the diff --git a/doc/fix_langevin.html b/doc/fix_langevin.html index 36ffc14e3c..bc35b2181f 100644 --- a/doc/fix_langevin.html +++ b/doc/fix_langevin.html @@ -239,20 +239,29 @@ group. As a result, the center-of-mass of a system with zero initial momentum will not drift over time.
The keyword gjf can be used to run the Gronbech-Jensen/Farago - time-discretization of the Langevin model. The -effective random force is composed of the average of two random forces -representing half-contributions from the previous and current time -intervals. This discretization has been shown to be consistent with -the underlying physical model of Langevin dynamics and produces the -correct Boltzmann distribution of positions for large timesteps, -up to the numerical stability limit. In common with all -methods based on Verlet integration, the discretized velocities -generated by the time integration scheme are not exactly conjugate -to the positions. As a result the temperature computed from the -discretized velocities will be systematically lower than the -target temperature, by an amount that grows with the timestep. -Nonetheless, the distribution of positions will be consistent -with the target temperature. + time-discretization of the Langevin model. As +described in the papers cited below, the purpose of this method is to +enable longer timesteps to be used (up to the numerical stability +limit of the integrator), while still producing the correct Boltzmann +distribution of atom positions. It is implemented within LAMMPS by +changing how the random force is applied so that it is composed of +the average of two random forces representing half-contributions from +the previous and current time intervals. In common with all methods +based on Verlet integration, the discretized velocities generated by +this method in conjunction with velocity-Verlet time integration are +not exactly conjugate to the positions. As a result the temperature +(computed from the discretized velocities) will be systematically +lower than the target temperature, by a small amount which grows with +the timestep. Nonetheless, the distribution of atom positions will +still be consistent with the target temperature. For molecules containing +C-H bonds, configurational properties generated with dt = 2.5 fs and +tdamp = 100 fs are indistinguishable from dt = 0.5 fs. +Because the velocity distribution systematically narrows with increasing +timestep, the method should not be used to +generate properties that depend on the velocity distribution, such as +the velocity autocorrelation function (VACF). In the above example, the +velocity distribution at dt = 2.5 fs corresponds to an average temperature of 220 K, +instead of 300 K.
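A minimal usage sketch consistent with the numbers quoted above (real units, dt = 2.5 fs, tdamp = 100 fs, a 300 K target; the group, random seed, and the yes/no form of the gjf keyword value are assumptions here, and a separate integrator fix such as nve is still required):

units		real
timestep	2.5
fix		1 all nve
fix		2 all langevin 300.0 300.0 100.0 48279 gjf yes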
Additional keyword-value pairs are available that are used to -determine how work is offloaded to an Intel coprocessor. If LAMMPS is +determine how work is offloaded to an Intel(R) coprocessor. If LAMMPS is built without offload support, these values are ignored. The additional settings are as follows:
diff --git a/doc/package.txt b/doc/package.txt index 7640d335c0..be263abb16 100644 --- a/doc/package.txt +++ b/doc/package.txt @@ -244,7 +244,7 @@ terms and single precision for everything else), or {double} (intel styles use double precision for all calculations). Additional keyword-value pairs are available that are used to -determine how work is offloaded to an Intel coprocessor. If LAMMPS is +determine how work is offloaded to an Intel(R) coprocessor. If LAMMPS is built without offload support, these values are ignored. The additional settings are as follows: diff --git a/doc/suffix.html b/doc/suffix.html index 479a9bcd29..c1aae192de 100644 --- a/doc/suffix.html +++ b/doc/suffix.html @@ -51,7 +51,7 @@ run on one or more GPUs or multicore CPU/GPU nodes