The precision (single, mixed, double) refers to the GPU and USER-CUDA
package precision. See the README files in the lib/gpu and lib/cuda
directories for instructions on how to build the packages with
different precisions. The GPU and USER-CUDA sub-sections of the
doc/Section_accelerate.html file also describe this process.
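
For the GPU package, for instance, the precision is typically chosen
by editing the CUDA_PRECISION setting in the lib/gpu Makefile before
building the library. A sketch, assuming the stock Makefile.linux
(see lib/gpu/README for the authoritative procedure):

cd lib/gpu
# in Makefile.linux, pick one of:
#   CUDA_PRECISION = -D_SINGLE_SINGLE    (single)
#   CUDA_PRECISION = -D_SINGLE_DOUBLE    (mixed)
#   CUDA_PRECISION = -D_DOUBLE_DOUBLE    (double)
make -f Makefile.linux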

------------------------------------------------------------------------

If the script has "cpu" in its name, it is meant to be run in CPU-only
|
|
|
|
To run on just CPUs (without using the GPU or USER-CUDA styles),
|
|
|
|
mode (without using the GPU or USER-CUDA styles). For example:
|
|
|
|
do something like the following:

mpirun -np 1 lmp_linux_double -v x 8 -v y 8 -v z 8 -v t 100 < in.lj
mpirun -np 12 lmp_linux_double -v x 16 -v y 16 -v z 16 -v t 100 < in.lj

The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps.
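
Each "-v" switch presets an index-style variable that the input
script expands. A minimal sketch of how in.lj might consume these
variables (the lattice and region lines are illustrative assumptions,
not copied from the actual script):

variable x index 1                 # overridden by "-v x 8" on the command line
variable y index 1
variable z index 1
variable t index 100               # overridden by "-v t 100"

lattice  fcc 0.8442                # illustrative; see the actual in.lj
region   box block 0 $x 0 $y 0 $z  # problem size scales with x, y, z
# ... create_box, create_atoms, pair_style, etc. ...
run      $t                        # number of timesteps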
These mpirun commands run on a single node. To run on multiple
nodes, scale up the "-np" setting.
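
For example, a hypothetical run on 4 nodes with 12 cores each:

mpirun -np 48 lmp_linux_double -v x 32 -v y 32 -v z 32 -v t 100 < in.lj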

------------------------------------------------------------------------

If the script has "gpu" in its name, it is meant to be run using
|
|
|
|
To run with the GPU package, do something like the following:
|
|
|
|
the GPU package. For example:

mpirun -np 12 lmp_linux_single -sf gpu -pk gpu 1 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj
mpirun -np 8 lmp_linux_mixed -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj

The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps. The "np" setting determines how
many MPI tasks (per node) the problem will run on. The numeric
argument to the "-pk" setting is the number of GPUs (per node). Note
that you can use more MPI tasks than GPUs (per node) with the GPU
package.
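
For example, this (hypothetical) variation of the commands above
shares 2 GPUs among 12 MPI tasks on a single node:

mpirun -np 12 lmp_linux_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj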
These mpirun commands run on a single node. To run on multiple
nodes, scale up the "-np" setting, and control the number of
MPI tasks per node via a "-ppn" setting.
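
For example, a hypothetical 4-node run with 12 tasks and 2 GPUs per
node ("-ppn" is the MPICH-style flag; Open MPI spells it
"--npernode"):

mpirun -np 48 -ppn 12 lmp_linux_single -sf gpu -pk gpu 2 -v x 32 -v y 32 -v z 64 -v t 100 < in.lj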

------------------------------------------------------------------------

To run with the USER-CUDA package, do something like the following:
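
The command lines below are a sketch, assuming the in.lj.cuda script
reads a "g" variable for the GPU count and that the "-c on" switch
enables the package; note that "-np" must match "g":

mpirun -np 1 ../lmp_linux_single -c on -sf cuda -v g 1 -v x 16 -v y 16 -v z 16 -v t 100 < in.lj.cuda
mpirun -np 2 ../lmp_linux_mixed -c on -sf cuda -v g 2 -v x 32 -v y 64 -v z 64 -v t 100 < in.lj.cuda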
The "xyz" settings determine the problem size. The "t" setting
determines the number of timesteps. The "np" setting determines how
many MPI tasks per compute node the problem will run on, and the "g"
setting determines how many GPUs per compute node the problem will run
on, i.e. 1 or 2 in this case. For the USER-CUDA package, the number
of MPI tasks and GPUs (both per compute node) must be equal.
These mpirun commands run on a single node. To run on multiple
nodes, scale up the "-np" setting.

------------------------------------------------------------------------

If the script has "titan" in its name, it was run on the Titan
supercomputer at ORNL.