/* ----------------------------------------------------------------------
LAMMPS - Large-scale Atomic/Molecular Massively Parallel Simulator
http://lammps.sandia.gov, Sandia National Laboratories
Steve Plimpton, sjplimp@sandia.gov
Copyright (2003) Sandia Corporation. Under the terms of Contract
DE-AC04-94AL85000 with Sandia Corporation, the U.S. Government retains
certain rights in this software. This software is distributed under
the GNU General Public License.
See the README file in the top-level LAMMPS directory.
------------------------------------------------------------------------- */
/* ----------------------------------------------------------------------
Contributing authors: Mike Brown (ORNL), brownw@ornl.gov
Peng Wang (Nvidia), penwang@nvidia.com
Paul Crozier (SNL), pscrozi@sandia.gov
------------------------------------------------------------------------- */
GENERAL NOTES
This library, libgpu.a, provides routines for GPU acceleration
of LAMMPS pair styles. Compilation of this library requires
installing the CUDA GPU driver and CUDA toolkit for your operating
system. In addition to the library, the binary nvc_get_devices is
also built; it can be used to query the names and properties of the
GPU devices on your system. A Makefile for OpenCL compilation is
provided, but the developers do not currently support OpenCL use.
NOTE: Installation of the CUDA SDK is not required.
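A quick way to confirm that the toolkit and driver are visible before
building (a minimal sketch; assumes both are installed in standard
locations and on your PATH):

  nvcc --version   # prints the installed CUDA toolkit release
  nvidia-smi       # lists the driver version and visible GPU devices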
Current pair styles supporting GPU acceleration:
1. lj/cut/gpu
2. lj/cut/coul/cut/gpu
3. lj/cut/coul/long/gpu
4. lj96/cut/gpu
5. gayberne/gpu
MULTIPLE LAMMPS PROCESSES
Multiple LAMMPS MPI processes can share GPUs on the system, but a
single MPI process cannot use multiple GPUs. In many cases, the best
performance will be obtained by running as many MPI processes as there
are CPU cores available, with the condition that the number of MPI
processes is an integer multiple of the number of GPUs being used. See
the LAMMPS user manual for details on running with GPU acceleration.
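For example, on a node with 8 CPU cores and 2 GPUs, invocations
consistent with this rule might look like the following (a sketch
only; lmp_linux is the executable produced by the build process below,
and in.lj is a hypothetical input script that uses a gpu pair style):

  mpirun -np 8 ./lmp_linux -in in.lj   # 8 processes, 4 sharing each GPU
  mpirun -np 4 ./lmp_linux -in in.lj   # 4 processes, 2 sharing each GPU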
BUILDING AND PRECISION MODES
To build, edit the CUDA_ARCH, CUDA_PRECISION, CUDA_INCLUDE, CUDA_LIB,
and CUDA_OPTS values in one of the Makefiles. CUDA_ARCH should be set
based on the compute capability of your GPU; the compute capability
can be verified by running the nvc_get_devices executable after the
build is complete.
Additionally, the GPU package must be installed and
compiled for LAMMPS. This may require editing the gpu_SYSPATH variable
in the LAMMPS makefile.
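For illustration only, the relevant lines in Makefile.linux might
look like the following for a compute capability 1.3 card with the
toolkit installed under /usr/local/cuda; the paths, optimization
flags, and -arch value for your system may differ:

  CUDA_ARCH      = -arch=sm_13
  CUDA_PRECISION = -D_SINGLE_DOUBLE
  CUDA_INCLUDE   = -I/usr/local/cuda/include
  CUDA_LIB       = -L/usr/local/cuda/lib64
  CUDA_OPTS      = -O3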
The library supports 3 precision modes as determined by
the CUDA_PRECISION variable:

  CUDA_PRECISION = -D_SINGLE_SINGLE # Single precision for all calculations
  CUDA_PRECISION = -D_DOUBLE_DOUBLE # Double precision for all calculations
  CUDA_PRECISION = -D_SINGLE_DOUBLE # Accumulation of forces, etc. in double
NOTE: Double precision is only supported on GPUs with compute
capability >= 1.3.
NOTE: For Tesla and other graphics cards with compute capability >= 1.3,
make sure that -arch=sm_13 is set on the CUDA_ARCH line.
NOTE: For Fermi, make sure that -arch=sm_20 is set on the CUDA_ARCH line.
NOTE: The gayberne/gpu pair style will only be installed if the ASPHERE
package has been installed before installing the GPU package in LAMMPS.
EXAMPLE BUILD PROCESS
cd ~/lammps/lib/gpu            # assumes LAMMPS is unpacked in ~/lammps
emacs Makefile.linux           # set CUDA_ARCH, CUDA_PRECISION, and paths
make -f Makefile.linux         # builds libgpu.a and nvc_get_devices
./nvc_get_devices              # verify GPU names and compute capability
cd ../../src
emacs ./MAKE/Makefile.linux    # set gpu_SYSPATH if needed
make yes-asphere               # ASPHERE must precede GPU for gayberne/gpu
make yes-gpu                   # install the GPU package
make linux                     # build the LAMMPS executable