Syntax:
package style args
cuda args = Ngpu keyword value ...
Ngpu = # of GPUs per node
zero or more keyword/value pairs may be appended
keywords = gpuID or timing or test or thread
gpuID values = gpu1 .. gpuN
gpu1 .. gpuN = IDs of the Ngpu GPUs to use
timing values = none
test values = id
id = atom-ID of a test particle
thread = auto or tpa or bpa
auto = test whether tpa or bpa is faster
tpa = one thread per atom
bpa = one block per atom
gpu args = Ngpu keyword value ...
Ngpu = # of GPUs per node
zero or more keyword/value pairs may be appended
keywords = neigh or split or gpuID or tpa or binsize or device
neigh value = yes or no
yes = neighbor list build on GPU (default)
no = neighbor list build on CPU
split = fraction
fraction = fraction of atoms assigned to GPU (default = 1.0)
gpuID values = first last
first = ID of first GPU to be used on each node
last = ID of last GPU to be used on each node
tpa value = Nthreads
Nthreads = # of GPU threads used per atom
binsize value = size
size = bin size for neighbor list construction (distance units)
device value = device_type
device_type = kepler or fermi or cypress or generic
intel args = Nthreads precision keyword value ...
Nthreads = # of OpenMP threads to associate with each MPI process on host
precision = single or mixed or double
keywords = balance or offload_cards or offload_ghost or offload_tpc or offload_threads
balance value = split
split = fraction of work to offload to coprocessor, -1 for dynamic
offload_cards value = ncops
ncops = number of coprocessors to use on each node
offload_ghost value = offload_type
offload_type = 1 to include ghost atoms for offload, 0 for local only
offload_tpc value = tpc
tpc = number of threads to use on each core of coprocessor
offload_threads value = tptask
tptask = max number of threads to use on coprocessor for each MPI task
kokkos args = keyword value ...
one or more keyword/value pairs may be appended
keywords = neigh or comm/exchange or comm/forward
neigh value = full or half/thread or half or n2 or full/cluster
comm/exchange value = no or host or device
comm/forward value = no or host or device
omp args = Nthreads keyword value ...
Nthreads = # of OpenMP threads to associate with each MPI process
zero or more keyword/value pairs may be appended
keywords = neigh
neigh value = yes or no
yes = threaded neighbor list build (default)
no = non-threaded neighbor list build
Examples:
package gpu 1
package gpu 1 split 0.75
package gpu 2 split -1.0
package cuda 2 gpuID 0 2
package cuda 1 test 3948
package kokkos neigh half/thread comm/forward device
package omp 0 neigh yes
package omp 4
package intel * mixed balance -1
Description:
This command invokes package-specific settings. Currently the following packages use it: USER-CUDA, GPU, USER-INTEL, KOKKOS, and USER-OMP.
To use the accelerated GPU and USER-OMP styles, the package command is required. However, as described in the "Defaults" section below, if you use the "-sf gpu" or "-sf omp" command-line options to enable these styles, default package settings are applied automatically. In that case you only need to use the package command if you want to change the defaults.
To use the accelerated USER-CUDA and KOKKOS styles, the package command is not required as defaults are assigned internally. You only need to use the package command if you want to change the defaults.
See Section_accelerate of the manual for more details about using these various packages for accelerating LAMMPS calculations.
Note that the GPU package always sets the newton pair setting to off; this is not the case for the USER-CUDA package.
The cuda style invokes settings associated with the use of the USER-CUDA package.
The Ngpu argument sets the number of GPUs per node. There must be exactly one MPI task per GPU, as set by the mpirun or mpiexec command.
Optional keyword/value pairs can also be specified. Each has a default value as listed below.
The gpuID keyword allows selection of which GPUs on each node will be used for a simulation. GPU IDs range from 0 to N-1, where N is the physical number of GPUs per node. An ID is specified for each of the Ngpu GPUs being used. For example, if you have three GPUs on a machine, one of which is used for the X-Server (the GPU with ID 1) while the others (with IDs 0 and 2) are used for computations, you would specify:
package cuda 2 gpuID 0 2
The purpose of the gpuID keyword is to allow two (or more) simulations to be run on one workstation. In that case one could set the first simulation to use GPU 0 and the second to use GPU 1. This is not necessary, however, if the GPUs are in what is called compute exclusive mode. With that setting, every process automatically gets its own GPU. Compute exclusive mode can be set as root using the nvidia-smi tool, which is part of the CUDA installation.
Also note that if the gpuID keyword is not used, the USER-CUDA package sorts the existing GPUs on each node according to their number of multiprocessors. This way, compute GPUs are prioritized over X-Server GPUs.
If the timing keyword is specified, detailed timing information for various subroutines will be output.
If the test keyword is specified, information for the particle with the specified atom-ID will be output at several points during each timestep. This is mainly useful for debugging purposes. Note that the simulation slows down dramatically if this option is used.
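For example, to trace a particle with a (hypothetical) atom-ID of 100 during a single-GPU run, one could specify:
package cuda 1 test 100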
The thread keyword can be used to specify how GPU threads are assigned work during pair style force evaluation. If the value is tpa, one thread per atom is used. If the value is bpa, one block per atom is used. If the value is auto, a short test is performed at the beginning of each run to determine whether the tpa or bpa mode is faster. The result of this test is output. Since auto is the default value, it is usually not necessary to use this keyword.
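For example, to skip the automatic test and force one-block-per-atom mode on a single GPU, one could specify:
package cuda 1 thread bpa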
The gpu style invokes settings associated with the use of the GPU package.
The Ngpu argument sets the number of GPUs per node. There must be at least as many MPI tasks per node as GPUs, as set by the mpirun or mpiexec command. If there are more MPI tasks (per node) than GPUs, multiple MPI tasks will share each GPU.
Optional keyword/value pairs can also be specified. Each has a default value as listed below.
The neigh keyword specifies where neighbor lists for pair style computation will be built. If neigh is yes, which is the default, neighbor list building is performed on the GPU. If neigh is no, neighbor list building is performed on the CPU. GPU neighbor list building currently cannot be used with a triclinic box or with hybrid pair styles. GPU neighbor lists are also not compatible with commands that are not GPU-enabled; when a non-GPU-enabled command requires a neighbor list, it will also be built on the CPU. In these cases, it is typically more efficient to use only CPU neighbor list builds.
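For example, to force CPU neighbor list builds (e.g. when using a triclinic box) on a single GPU, one could specify:
package gpu 1 neigh no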
The split keyword can be used for load balancing force calculations between CPU and GPU cores in GPU-enabled pair styles. If 0 < split < 1.0, a fixed fraction of particles is offloaded to the GPU while force calculation for the other particles occurs simultaneously on the CPU. If split < 0.0, the optimal fraction (based on CPU and GPU timings) is calculated every 25 timesteps. If split = 1.0, all force calculations for GPU-accelerated pair styles are performed on the GPU. In this case, other hybrid pair interactions, bond, angle, dihedral, improper, and long-range calculations can be performed on the CPU while the GPU is performing force calculations for the GPU-enabled pair style. If all CPU force computations complete before the GPU completes, LAMMPS will block until the GPU has finished before continuing the timestep.
As an example, if you have two GPUs per node and 8 CPU cores per node, and would like to run on 4 nodes (32 cores) with dynamic balancing of force calculation across CPU and GPU cores, you could specify
mpirun -np 32 lmp_machine -sf gpu -in in.script    # launch command
package gpu 2 split -1                             # input script command
In this case, all CPU cores and GPU devices on the nodes would be utilized. Each GPU device would be shared by 4 CPU cores. The CPU cores would perform force calculations for some fraction of the particles at the same time the GPUs performed force calculation for the other particles.
The gpuID keyword allows selection of which GPUs on each node will be used for a simulation. The first and last values specify the GPU IDs to use (from 0 to Ngpu-1). By default, first = 0 and last = Ngpu-1, so that all GPUs are used, assuming Ngpu is set to the number of physical GPUs. If you only wish to use a subset, set Ngpu to a smaller number and first/last to a sub-range of the available GPUs.
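For example, on a node with two physical GPUs where only the second device (ID 1) should be used, one could specify:
package gpu 1 gpuID 1 1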
The tpa keyword sets the number of GPU threads per atom used to perform force calculations. With the default value of 1, the number of threads is chosen based on the pair style; however, the value can be set explicitly with this keyword to fine-tune performance. For large cutoffs or for a small number of particles per GPU, increasing the value can improve performance. The number of threads per atom must be a power of 2 and currently cannot be greater than 32.
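For example, to try 8 threads per atom for a pair style with a large cutoff (8 is an arbitrary power of 2 chosen here for illustration), one could specify:
package gpu 1 tpa 8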
The binsize keyword sets the size of bins used to bin atoms in neighbor list builds. Setting this value is normally not needed; the optimal value is close to the default, which is set equal to the cutoff distance for the short-range interactions plus the neighbor skin. Note that this is 2x larger than the default bin size for neighbor list builds on the CPU. This is because GPUs can perform efficiently with much larger cutoffs than CPUs. Larger cutoffs can be used to reduce the time required for long-range calculations or, in some cases, to eliminate them with pair style models such as coul/wolf or coul/dsf. For very large cutoffs, it can be more efficient to use smaller values for binsize in parallel simulations. For example, with a cutoff of 20*sigma in LJ units and a neighbor skin distance of sigma, a binsize = 5.25*sigma can be more efficient than the default.
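For the case above (20*sigma cutoff in LJ reduced units, where sigma = 1), this would be specified as:
package gpu 1 binsize 5.25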
The device keyword can be used to tune parameters optimized for a specific accelerator, when using OpenCL. For CUDA, the device keyword is ignored. Currently, the device type is limited to NVIDIA Kepler, NVIDIA Fermi, AMD Cypress, or a generic device. More devices may be added later. The default device type can be specified when building LAMMPS with the GPU library, via settings in the lib/gpu/Makefile that is used.
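For example, when building with OpenCL and running on an NVIDIA Fermi card, one could specify:
package gpu 1 device fermi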
The intel style invokes options associated with the use of the USER-INTEL package.
The Nthreads argument allows one to explicitly set the number of OpenMP threads to be allocated for each MPI process. An Nthreads value of '*' instructs LAMMPS to use whatever is the default for the given OpenMP environment. This is usually determined via the OMP_NUM_THREADS environment variable or the compiler runtime.
The precision argument determines the precision mode to use and can take values of single (intel styles use single precision for all calculations), mixed (intel styles use double precision for accumulation and storage of forces, torques, energies, and virial terms and single precision for everything else), or double (intel styles use double precision for all calculations).
Additional keyword-value pairs are available that are used to determine how work is offloaded to an Intel(R) coprocessor. If LAMMPS is built without offload support, these values are ignored. The additional settings are as follows:
The balance setting is used to set the fraction of work offloaded to the coprocessor for an intel style (in the inclusive range 0.0 to 1.0). While this fraction of work is running on the coprocessor, other calculations will run on the host, including neighbor and pair calculations that are not offloaded, angle, bond, dihedral, kspace, and some MPI communications. If the balance is set to -1, the fraction of work is dynamically adjusted automatically throughout the run. This can typically give performance within 5 to 10 percent of the optimal fixed fraction.
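For example, to offload a fixed 60% of the work (an arbitrary illustrative fraction) while running 2 threads per MPI task in mixed precision, one could specify:
package intel 2 mixed balance 0.6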
The offload_cards setting determines the number of coprocessors to use on each node.
Additional options for fine tuning performance with offload are as follows:
The offload_ghost setting determines whether or not ghost atoms, atoms at the borders between MPI tasks, are offloaded for neighbor and force calculations. When set to "0", ghost atoms are not offloaded. This option can reduce the amount of data transfer with the coprocessor and also can overlap MPI communication of forces with computation on the coprocessor when the newton pair setting is "on". When set to "1", ghost atoms are offloaded. In some cases this can provide better performance, especially if the offload fraction is high.
The offload_tpc option sets the maximum number of threads that will run on each core of the coprocessor.
The offload_threads option sets the maximum number of threads that will be used on the coprocessor for each MPI task. This option and the offload_tpc setting are the only ways to change the number of threads used on the coprocessor; the OMP_NUM_THREADS environment variable and the Nthreads argument only affect threads on the host.
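As an illustration combining these offload options (with values matching the package defaults listed below), one could specify:
package intel * mixed balance -1 offload_cards 1 offload_tpc 4 offload_threads 240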
The kokkos style invokes options associated with the use of the KOKKOS package.
The neigh keyword determines what kinds of neighbor lists are built. A value of half uses half neighbor lists, the same as used by most pair styles in LAMMPS. A value of half/thread uses a thread-safe variant of the half neighbor list; it should be used instead of half when running with threads on a CPU. A value of full uses a full neighbor list, i.e. both f_ij and f_ji are calculated. This performs twice as much computation as the half option; however, it can be a win because it is thread-safe and does not require atomic operations. A value of full/cluster is an experimental neighbor style, where particles interact with all particles within a small cluster if at least one of the cluster's particles is within the neighbor cutoff range. This potentially allows for better vectorization on architectures such as the Intel Phi. It also reduces the size of the neighbor list by roughly a factor of the cluster size, thus reducing the total memory footprint considerably.
The comm/exchange and comm/forward keywords determine whether the host or device performs the packing and unpacking of data when communicating information between processors. "Exchange" communication happens only on timesteps when neighbor lists are rebuilt; the data is only for atoms that migrate to new processors. "Forward" communication happens every timestep; the data is for atom coordinates and any other atom properties that need to be updated for ghost atoms owned by each processor.
The value options for these keywords are no or host or device. A value of no means to use the standard non-KOKKOS method of packing/unpacking data for the communication. A value of host means to use the host, typically a multi-core CPU, and perform the packing/unpacking in parallel with threads. A value of device means to use the device, typically a GPU, to perform the packing/unpacking operation.
The optimal choice for these keywords depends on the input script and the hardware used. The no value is useful for verifying that the Kokkos code is working correctly. It may also be the fastest choice when using Kokkos styles in MPI-only mode (i.e. with a thread count of 1). When running on CPUs or Xeon Phi, the host and device values work identically. When using GPUs, the device value will typically be optimal if all the styles used in your input script are supported by the KOKKOS package; in this case data can stay on the GPU for many timesteps without being moved between the host and GPU. This requires that your MPI is able to access GPU memory directly, which is currently true for OpenMPI 1.8 (or later), Mvapich2 1.9 (or later), and CrayMPI. If your script uses styles (e.g. fixes) which are not yet supported by the KOKKOS package, then data has to be moved between the host and device anyway, so it is typically faster to let the host handle communication by using the host value. Using host instead of no enables the use of multiple threads to pack/unpack the communicated data.
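For example, on a GPU run where all styles in the input script are KOKKOS-enabled, one could let the device pack and unpack data for both communication types:
package kokkos neigh full comm/exchange device comm/forward device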
The omp style invokes settings associated with the use of the USER-OMP package.
The Nthread argument sets the number of OpenMP threads allocated for each MPI task. For example, if your system has nodes with dual quad-core processors, it has a total of 8 cores per node. You could use two MPI tasks per node (e.g. using the -ppn option of the mpirun command), and set Nthreads = 4. This would use all 8 cores on each node. Note that the product of MPI tasks * threads/task should not exceed the physical number of cores (on a node), otherwise performance will suffer.
Setting Nthread = 0 instructs LAMMPS to use whatever value is the default for the given OpenMP environment. This is usually determined via the OMP_NUM_THREADS environment variable or the compiler runtime. Note that in most cases the default for OpenMP capable compilers is to use one thread for each available CPU core when OMP_NUM_THREADS is not explicitly set, which can lead to poor performance.
Here are examples of how to set the environment variable when launching LAMMPS:
env OMP_NUM_THREADS=4 lmp_machine -sf omp -in in.script
env OMP_NUM_THREADS=2 mpirun -np 2 lmp_machine -sf omp -in in.script
mpirun -x OMP_NUM_THREADS=2 -np 2 lmp_machine -sf omp -in in.script
or you can set it permanently in your shell's start-up script. All three of these examples use a total of 4 CPU cores.
Note that different MPI implementations have different ways of passing the OMP_NUM_THREADS environment variable to all MPI processes. The 2nd example line above is for MPICH; the 3rd example line with -x is for OpenMPI. Check your MPI documentation for additional details.
Which combination of threads and MPI tasks gives the best performance is difficult to predict and can depend on many components of your input. Not all features of LAMMPS support OpenMP threading via the USER-OMP package, and the parallel efficiency can differ significantly as well.
Optional keyword/value pairs can also be specified. Each has a default value as listed below.
The neigh keyword specifies whether neighbor list building will be multi-threaded in addition to force calculations. If neigh is set to no, neighbor list calculation is performed only by MPI tasks with no OpenMP threading. If neigh is yes (the default), a multi-threaded neighbor list build is used. Using neigh = yes is almost always faster and should produce identical neighbor lists, at the expense of using more memory. Specifically, neighbor list pages are allocated for all threads at the same time and each thread works within its own pages.
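For example, to run 4 OpenMP threads per MPI task while keeping the neighbor list build non-threaded, one could specify:
package omp 4 neigh no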
Restrictions:
This command cannot be used after the simulation box is defined by a read_data or create_box command.
The cuda style of this command can only be invoked if LAMMPS was built with the USER-CUDA package. See the Making LAMMPS section for more info.
The gpu style of this command can only be invoked if LAMMPS was built with the GPU package. See the Making LAMMPS section for more info.
The intel style of this command can only be invoked if LAMMPS was built with the USER-INTEL package. See the Making LAMMPS section for more info.
The kokkos style of this command can only be invoked if LAMMPS was built with the KOKKOS package. See the Making LAMMPS section for more info.
The omp style of this command can only be invoked if LAMMPS was built with the USER-OMP package. See the Making LAMMPS section for more info.
Related commands:
suffix, "-pk" command-line setting
Default:
To use the USER-CUDA package, the package command must be invoked explicitly, either via the "-pk cuda" command-line switch or by invoking the package cuda command in your input script. This will set the # of GPUs/node. The options defaults are gpuID = 0 to Ngpu-1, timing not enabled, test not enabled, and thread = auto.
For the GPU package, the default is Ngpu = 1 and the option defaults are neigh = yes, split = 1.0, gpuID = 0 to Ngpu-1, tpa = 1, binsize = pair cutoff + neighbor skin, device = not used. These settings are made if the "-sf gpu" command-line switch is used. If it is not used, you must invoke the package gpu command in your input script.
The default settings for the USER-INTEL package are "package intel * mixed balance -1 offload_cards 1 offload_tpc 4 offload_threads 240". The offload_ghost default setting is determined by the intel style being used. The value used is output to the screen in the offload report at the end of each run.
The default settings for the KOKKOS package are "package kokkos neigh full comm/exchange host comm/forward host". This is the case whether the "-sf kk" command-line switch is used or not.
For the USER-OMP package, the default is Nthreads = 0 and the option default is neigh = yes. These settings are made if the "-sf omp" command-line switch is used. If it is not used, you must invoke the package omp command in your input script.