-Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0
-Go to http://www.nvidia.com/object/cuda_get.html
-Install a driver and toolkit appropriate for your system (SDK is not necessary)
-Follow the instructions in README in lammps/lib/gpu to build the library.
-Run lammps/lib/gpu/nvc_get_devices to list supported devices and properties
+Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0 Go
+to http://www.nvidia.com/object/cuda_get.html Install a driver and
+toolkit appropriate for your system (SDK is not necessary) Follow the
+instructions in README in lammps/lib/gpu to build the library. Run
+lammps/lib/gpu/nvc_get_devices to list supported devices and
+properties
GPU configuration
When using GPUs, you are restricted to one physical GPU per LAMMPS
-process. Multiple processes can share a single GPU and in many cases it
-will be more efficient to run with multiple processes per GPU. Any GPU
-accelerated style requires that fix gpu be used in the
-input script to select and initialize the GPUs. The format for the fix
-is:
+process. Multiple processes can share a single GPU and in many cases
+it will be more efficient to run with multiple processes per GPU. Any
+GPU accelerated style requires that fix gpu be used in
+the input script to select and initialize the GPUs. The format for the
+fix is:
fix name all gpu mode first last split
where name is the name for the fix. The gpu fix must be the first
-fix specified for a given run, otherwise the program will exit
-with an error. The gpu fix will not have any effect on runs
-that do not use GPU acceleration; there should be no problem
-with specifying the fix first in any input script.
+fix specified for a given run, otherwise the program will exit with an
+error. The gpu fix will not have any effect on runs that do not use
+GPU acceleration; there should be no problem with specifying the fix
+first in any input script.
-mode can be either "force" or "force/neigh". In the former,
-neighbor list calculation is performed on the CPU using the
-standard LAMMPS routines. In the latter, the neighbor list
-calculation is performed on the GPU. The GPU neighbor list
-can be used for better performance, however, it
-should not be used with a triclinic box.
+mode can be either "force" or "force/neigh". In the former, neighbor
+list calculation is performed on the CPU using the standard LAMMPS
+routines. In the latter, the neighbor list calculation is performed on
+the GPU. The GPU neighbor list can be used for better performance,
+however, it cannot be used with a triclinic box or with
+hybrid pair styles.
-There are cases when it might be more efficient to select the CPU for neighbor
-list builds. If a non-GPU enabled style requires a neighbor list, it will also
-be built using CPU routines. Redundant CPU and GPU neighbor list calculations
-will typically be less efficient. For hybrid pair
-styles, GPU calculated neighbor lists might be less efficient because
-no particles will be skipped in a given neighbor list.
+There are cases when it might be more efficient to select the CPU for
+neighbor list builds. If a non-GPU enabled style requires a neighbor
+list, it will also be built using CPU routines. Redundant CPU and GPU
+neighbor list calculations will typically be less efficient.
-first is the ID (as reported by lammps/lib/gpu/nvc_get_devices)
-of the first GPU that will be used on each node. last is the
-ID of the last GPU that will be used on each node. If you have
-only one GPU per node, first and last will typically both be
-0. Selecting a non-sequential set of GPU IDs (e.g. 0,1,3)
-is not currently supported.
+first is the ID (as reported by lammps/lib/gpu/nvc_get_devices) of
+the first GPU that will be used on each node. last is the ID of the
+last GPU that will be used on each node. If you have only one GPU per
+node, first and last will typically both be 0. Selecting a
+non-sequential set of GPU IDs (e.g. 0,1,3) is not currently supported.
-split is the fraction of particles whose forces, torques,
-energies, and/or virials will be calculated on the GPU. This
-can be used to perform CPU and GPU force calculations
-simultaneously. If split is negative, the software will
-attempt to calculate the optimal fraction automatically
-every 25 timesteps based on CPU and GPU timings. Because the GPU speedups
-are dependent on the number of particles, automatic calculation of the
-split can be less efficient, but typically results in loop times
-within 20% of an optimal fixed split.
+split is the fraction of particles whose forces, torques, energies,
+and/or virials will be calculated on the GPU. This can be used to
+perform CPU and GPU force calculations simultaneously. If split is
+negative, the software will attempt to calculate the optimal fraction
+automatically every 25 timesteps based on CPU and GPU timings. Because
+the GPU speedups are dependent on the number of particles, automatic
+calculation of the split can be less efficient, but typically results
+in loop times within 20% of an optimal fixed split.
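
The two choices for split described above can be sketched as input-script fragments, assuming a single GPU per node (so first = last = 0); these lines are illustrative, not taken from a specific input deck:

```
# fixed split: the GPU computes forces for 75% of the particles
fix 0 all gpu force/neigh 0 0 0.75

# negative split: rebalance the CPU/GPU fraction automatically
# every 25 timesteps based on CPU and GPU timings
fix 0 all gpu force/neigh 0 0 -1
```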
-If you have two GPUs per node, 8 CPU cores per node, and
-would like to run on 4 nodes with dynamic balancing of
-force calculation across CPU and GPU cores, the fix
-might be
+If you have two GPUs per node, 8 CPU cores per node, and would like to
+run on 4 nodes with dynamic balancing of force calculation across CPU
+and GPU cores, the fix might be
fix 0 all gpu force/neigh 0 1 -1
-with LAMMPS run on 32 processes. In this case, all
-CPU cores and GPU devices on the nodes would be utilized.
-Each GPU device would be shared by 4 CPU cores. The
-CPU cores would perform force calculations for some
-fraction of the particles at the same time the GPUs
-performed force calculation for the other particles.
+with LAMMPS run on 32 processes. In this case, all CPU cores and GPU
+devices on the nodes would be utilized. Each GPU device would be
+shared by 4 CPU cores. The CPU cores would perform force calculations
+for some fraction of the particles at the same time the GPUs performed
+force calculation for the other particles.
-Because of the large number of cores on each GPU
-device, it might be more efficient to run on fewer
-processes per GPU when the number of particles per process
-is small (100's of particles); this can be necessary
-to keep the GPU cores busy.
+Because of the large number of cores on each GPU device, it might be
+more efficient to run on fewer processes per GPU when the number of
+particles per process is small (100's of particles); this can be
+necessary to keep the GPU cores busy.
GPU input script
-In order to use GPU acceleration in LAMMPS,
-fix_gpu
-should be used in order to initialize and configure the
-GPUs for use. Additionally, GPU enabled styles must be
-selected in the input script. Currently,
-this is limited to a few pair styles.
-Some GPU-enabled styles have additional restrictions
-listed in their documentation.
+In order to use GPU acceleration in LAMMPS, fix_gpu
+should be used to initialize and configure the GPUs for use.
+Additionally, GPU enabled styles must be selected in the input
+script. Currently, this is limited to a few pair
+styles and PPPM. Some GPU-enabled styles have
+additional restrictions listed in their documentation.
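
A minimal sketch of such an input script follows; the pair style name lj/cut/gpu and its coefficients are assumptions for illustration — substitute whichever GPU-enabled pair style your build actually provides:

```
# the gpu fix must be the first fix specified for the run
fix        0 all gpu force/neigh 0 0 1.0
# a GPU-enabled pair style must be selected explicitly (name assumed)
pair_style lj/cut/gpu 2.5
pair_coeff * * 1.0 1.0
```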
GPU asynchronous pair computation
-The GPU accelerated pair styles can be used to perform
-pair style force calculation on the GPU while other
-calculations are
-performed on the CPU. One method to do this is to specify
-a split in the gpu fix as described above. In this case,
-force calculation for the pair style will also be performed
-on the CPU.
+The GPU accelerated pair styles can be used to perform pair style
+force calculation on the GPU while other calculations are performed on
+the CPU. One method to do this is to specify a split in the gpu fix
+as described above. In this case, force calculation for the pair
+style will also be performed on the CPU.
-When the CPU work in a GPU pair style has finished,
-the next force computation will begin, possibly before the
-GPU has finished. If split is 1.0 in the gpu fix, the next
-force computation will begin almost immediately. This can
-be used to run a hybrid GPU pair style at
-the same time as a hybrid CPU pair style. In this case, the
-GPU pair style should be first in the hybrid command in order to
-perform simultaneous calculations. This also
-allows bond, angle,
-dihedral, improper,
-and long-range force
-computations to be run simultaneously with the GPU pair style.
-Once all CPU force computations have completed, the gpu fix
-will block until the GPU has finished all work before continuing
-the run.
+When the CPU work in a GPU pair style has finished, the next force
+computation will begin, possibly before the GPU has finished. If
+split is 1.0 in the gpu fix, the next force computation will begin
+almost immediately. This can be used to run a
+hybrid GPU pair style at the same time as a hybrid
+CPU pair style. In this case, the GPU pair style should be first in
+the hybrid command in order to perform simultaneous calculations. This
+also allows bond, angle,
+dihedral, improper, and
+long-range force computations to be run
+simultaneously with the GPU pair style. Once all CPU force
+computations have completed, the gpu fix will block until the GPU has
+finished all work before continuing the run.
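
A sketch of this simultaneous hybrid setup, with the GPU pair style listed first and split = 1.0 so the CPU style starts almost immediately; the style names, cutoffs, and coefficients are assumptions for illustration. Note that mode "force" is used here because the GPU neighbor list cannot be used with hybrid pair styles:

```
# GPU pair style first in the hybrid command for simultaneous calculation
fix        0 all gpu force 0 0 1.0
pair_style hybrid lj/cut/gpu 2.5 lj/cut 2.5
pair_coeff 1 1 lj/cut/gpu 1.0 1.0
pair_coeff 2 2 lj/cut 1.0 1.0
pair_coeff 1 2 lj/cut 1.0 1.0
```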
GPU timing
GPU accelerated pair styles can perform computations asynchronously
-with CPU computations. The "Pair" time reported by LAMMPS
-will be the maximum of the time required to complete the CPU
-pair style computations and the time required to complete the GPU
-pair style computations. Any time spent for GPU-enabled pair styles
-for computations that run simultaneously with bond,
-angle, dihedral,
-improper, and long-range calculations
-will not be included in the "Pair" time.
+with CPU computations. The "Pair" time reported by LAMMPS will be the
+maximum of the time required to complete the CPU pair style
+computations and the time required to complete the GPU pair style
+computations. Any time spent for GPU-enabled pair styles for
+computations that run simultaneously with bond,
+angle, dihedral,
+improper, and long-range
+calculations will not be included in the "Pair" time.
-When mode for the gpu fix is force/neigh,
-the time for neighbor list calculations on the GPU will be added
-into the "Pair" time, not the "Neigh" time. A breakdown of the
-times required for various tasks on the GPU (data copy, neighbor
-calculations, force computations, etc.) are output only
-with the LAMMPS screen output at the end of each run. These timings represent
-total time spent on the GPU for each routine, regardless of asynchronous
-CPU calculations.
+When mode for the gpu fix is force/neigh, the time for neighbor list
+calculations on the GPU will be added into the "Pair" time, not the
+"Neigh" time. A breakdown of the times required for various tasks on
+the GPU (data copy, neighbor calculations, force computations, etc.)
+is output only with the LAMMPS screen output at the end of each
+run. These timings represent total time spent on the GPU for each
+routine, regardless of asynchronous CPU calculations.
GPU single vs double precision
-See the lammps/lib/gpu/README file for instructions on how to build
-the LAMMPS gpu library for single, mixed, and double precision. The latter
-requires that your GPU card supports double precision.
+See the lammps/lib/gpu/README file for instructions on how to build
+the LAMMPS gpu library for single, mixed, and double precision. The
+latter requires that your GPU card supports double precision.
diff --git a/doc/Section_start.txt b/doc/Section_start.txt
index 49e1b5f5cf..fbdd015ab4 100644
--- a/doc/Section_start.txt
+++ b/doc/Section_start.txt
@@ -984,141 +984,130 @@ processing units (GPUs). We plan to add more over time. Currently,
they only support NVIDIA GPU cards. To use them you need to install
certain NVIDIA CUDA software on your system:
-Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0
-Go to http://www.nvidia.com/object/cuda_get.html
-Install a driver and toolkit appropriate for your system (SDK is not necessary)
-Follow the instructions in README in lammps/lib/gpu to build the library.
-Run lammps/lib/gpu/nvc_get_devices to list supported devices and properties :ul
+Check if you have an NVIDIA card: cat /proc/driver/nvidia/cards/0 Go
+to http://www.nvidia.com/object/cuda_get.html Install a driver and
+toolkit appropriate for your system (SDK is not necessary) Follow the
+instructions in README in lammps/lib/gpu to build the library. Run
+lammps/lib/gpu/nvc_get_devices to list supported devices and
+properties :ul
GPU configuration :h4
When using GPUs, you are restricted to one physical GPU per LAMMPS
-process. Multiple processes can share a single GPU and in many cases it
-will be more efficient to run with multiple processes per GPU. Any GPU
-accelerated style requires that "fix gpu"_fix_gpu.html be used in the
-input script to select and initialize the GPUs. The format for the fix
-is:
+process. Multiple processes can share a single GPU and in many cases
+it will be more efficient to run with multiple processes per GPU. Any
+GPU accelerated style requires that "fix gpu"_fix_gpu.html be used in
+the input script to select and initialize the GPUs. The format for the
+fix is:
fix {name} all gpu {mode} {first} {last} {split} :pre
where {name} is the name for the fix. The gpu fix must be the first
-fix specified for a given run, otherwise the program will exit
-with an error. The gpu fix will not have any effect on runs
-that do not use GPU acceleration; there should be no problem
-with specifying the fix first in any input script.
+fix specified for a given run, otherwise the program will exit with an
+error. The gpu fix will not have any effect on runs that do not use
+GPU acceleration; there should be no problem with specifying the fix
+first in any input script.
-{mode} can be either "force" or "force/neigh". In the former,
-neighbor list calculation is performed on the CPU using the
-standard LAMMPS routines. In the latter, the neighbor list
-calculation is performed on the GPU. The GPU neighbor list
-can be used for better performance, however, it
-cannot not be used with a triclinic box or with "hybrid"_pair_hybrid.html
-pair styles.
+{mode} can be either "force" or "force/neigh". In the former, neighbor
+list calculation is performed on the CPU using the standard LAMMPS
+routines. In the latter, the neighbor list calculation is performed on
+the GPU. The GPU neighbor list can be used for better performance,
+however, it cannot be used with a triclinic box or with
+"hybrid"_pair_hybrid.html pair styles.
-There are cases when it might be more efficient to select the CPU for neighbor
-list builds. If a non-GPU enabled style requires a neighbor list, it will also
-be built using CPU routines. Redundant CPU and GPU neighbor list calculations
-will typically be less efficient.
+There are cases when it might be more efficient to select the CPU for
+neighbor list builds. If a non-GPU enabled style requires a neighbor
+list, it will also be built using CPU routines. Redundant CPU and GPU
+neighbor list calculations will typically be less efficient.
-{first} is the ID (as reported by lammps/lib/gpu/nvc_get_devices)
-of the first GPU that will be used on each node. {last} is the
-ID of the last GPU that will be used on each node. If you have
-only one GPU per node, {first} and {last} will typically both be
-0. Selecting a non-sequential set of GPU IDs (e.g. 0,1,3)
-is not currently supported.
+{first} is the ID (as reported by lammps/lib/gpu/nvc_get_devices) of
+the first GPU that will be used on each node. {last} is the ID of the
+last GPU that will be used on each node. If you have only one GPU per
+node, {first} and {last} will typically both be 0. Selecting a
+non-sequential set of GPU IDs (e.g. 0,1,3) is not currently supported.
-{split} is the fraction of particles whose forces, torques,
-energies, and/or virials will be calculated on the GPU. This
-can be used to perform CPU and GPU force calculations
-simultaneously. If {split} is negative, the software will
-attempt to calculate the optimal fraction automatically
-every 25 timesteps based on CPU and GPU timings. Because the GPU speedups
-are dependent on the number of particles, automatic calculation of the
-split can be less efficient, but typically results in loop times
-within 20% of an optimal fixed split.
+{split} is the fraction of particles whose forces, torques, energies,
+and/or virials will be calculated on the GPU. This can be used to
+perform CPU and GPU force calculations simultaneously. If {split} is
+negative, the software will attempt to calculate the optimal fraction
+automatically every 25 timesteps based on CPU and GPU timings. Because
+the GPU speedups are dependent on the number of particles, automatic
+calculation of the split can be less efficient, but typically results
+in loop times within 20% of an optimal fixed split.
-If you have two GPUs per node, 8 CPU cores per node, and
-would like to run on 4 nodes with dynamic balancing of
-force calculation across CPU and GPU cores, the fix
-might be
+If you have two GPUs per node, 8 CPU cores per node, and would like to
+run on 4 nodes with dynamic balancing of force calculation across CPU
+and GPU cores, the fix might be
fix 0 all gpu force/neigh 0 1 -1 :pre
-with LAMMPS run on 32 processes. In this case, all
-CPU cores and GPU devices on the nodes would be utilized.
-Each GPU device would be shared by 4 CPU cores. The
-CPU cores would perform force calculations for some
-fraction of the particles at the same time the GPUs
-performed force calculation for the other particles.
+with LAMMPS run on 32 processes. In this case, all CPU cores and GPU
+devices on the nodes would be utilized. Each GPU device would be
+shared by 4 CPU cores. The CPU cores would perform force calculations
+for some fraction of the particles at the same time the GPUs performed
+force calculation for the other particles.
-Because of the large number of cores on each GPU
-device, it might be more efficient to run on fewer
-processes per GPU when the number of particles per process
-is small (100's of particles); this can be necessary
-to keep the GPU cores busy.
+Because of the large number of cores on each GPU device, it might be
+more efficient to run on fewer processes per GPU when the number of
+particles per process is small (100's of particles); this can be
+necessary to keep the GPU cores busy.
GPU input script :h4
-In order to use GPU acceleration in LAMMPS,
-"fix_gpu"_fix_gpu.html
-should be used in order to initialize and configure the
-GPUs for use. Additionally, GPU enabled styles must be
-selected in the input script. Currently, this is limited
-to a few "pair styles"_pair_style.html and PPPM.
-Some GPU-enabled styles have additional restrictions
-listed in their documentation.
+In order to use GPU acceleration in LAMMPS, "fix_gpu"_fix_gpu.html
+should be used to initialize and configure the GPUs for use.
+Additionally, GPU enabled styles must be selected in the input
+script. Currently, this is limited to a few "pair
+styles"_pair_style.html and PPPM. Some GPU-enabled styles have
+additional restrictions listed in their documentation.
GPU asynchronous pair computation :h4
-The GPU accelerated pair styles can be used to perform
-pair style force calculation on the GPU while other
-calculations are performed on the CPU. One method to do this
-is to specify a {split} in the gpu fix as described above.
-In this case, force calculation for the pair style will also
-be performed on the CPU.
+The GPU accelerated pair styles can be used to perform pair style
+force calculation on the GPU while other calculations are performed on
+the CPU. One method to do this is to specify a {split} in the gpu fix
+as described above. In this case, force calculation for the pair
+style will also be performed on the CPU.
-When the CPU work in a GPU pair style has finished,
-the next force computation will begin, possibly before the
-GPU has finished. If {split} is 1.0 in the gpu fix, the next
-force computation will begin almost immediately. This can
-be used to run a "hybrid"_pair_hybrid.html GPU pair style at
-the same time as a hybrid CPU pair style. In this case, the
-GPU pair style should be first in the hybrid command in order to
-perform simultaneous calculations. This also
-allows "bond"_bond_style.html, "angle"_angle_style.html,
-"dihedral"_dihedral_style.html, "improper"_improper_style.html,
-and "long-range"_kspace_style.html force
-computations to be run simultaneously with the GPU pair style.
-Once all CPU force computations have completed, the gpu fix
-will block until the GPU has finished all work before continuing
-the run.
+When the CPU work in a GPU pair style has finished, the next force
+computation will begin, possibly before the GPU has finished. If
+{split} is 1.0 in the gpu fix, the next force computation will begin
+almost immediately. This can be used to run a
+"hybrid"_pair_hybrid.html GPU pair style at the same time as a hybrid
+CPU pair style. In this case, the GPU pair style should be first in
+the hybrid command in order to perform simultaneous calculations. This
+also allows "bond"_bond_style.html, "angle"_angle_style.html,
+"dihedral"_dihedral_style.html, "improper"_improper_style.html, and
+"long-range"_kspace_style.html force computations to be run
+simultaneously with the GPU pair style. Once all CPU force
+computations have completed, the gpu fix will block until the GPU has
+finished all work before continuing the run.
GPU timing :h4
GPU accelerated pair styles can perform computations asynchronously
-with CPU computations. The "Pair" time reported by LAMMPS
-will be the maximum of the time required to complete the CPU
-pair style computations and the time required to complete the GPU
-pair style computations. Any time spent for GPU-enabled pair styles
-for computations that run simultaneously with "bond"_bond_style.html,
-"angle"_angle_style.html, "dihedral"_dihedral_style.html,
-"improper"_improper_style.html, and "long-range"_kspace_style.html calculations
-will not be included in the "Pair" time.
+with CPU computations. The "Pair" time reported by LAMMPS will be the
+maximum of the time required to complete the CPU pair style
+computations and the time required to complete the GPU pair style
+computations. Any time spent for GPU-enabled pair styles for
+computations that run simultaneously with "bond"_bond_style.html,
+"angle"_angle_style.html, "dihedral"_dihedral_style.html,
+"improper"_improper_style.html, and "long-range"_kspace_style.html
+calculations will not be included in the "Pair" time.
-When {mode} for the gpu fix is force/neigh,
-the time for neighbor list calculations on the GPU will be added
-into the "Pair" time, not the "Neigh" time. A breakdown of the
-times required for various tasks on the GPU (data copy, neighbor
-calculations, force computations, etc.) are output only
-with the LAMMPS screen output at the end of each run. These timings represent
-total time spent on the GPU for each routine, regardless of asynchronous
-CPU calculations.
+When {mode} for the gpu fix is force/neigh, the time for neighbor list
+calculations on the GPU will be added into the "Pair" time, not the
+"Neigh" time. A breakdown of the times required for various tasks on
+the GPU (data copy, neighbor calculations, force computations, etc.)
+is output only with the LAMMPS screen output at the end of each
+run. These timings represent total time spent on the GPU for each
+routine, regardless of asynchronous CPU calculations.
GPU single vs double precision :h4
-See the lammps/lib/gpu/README file for instructions on how to build
-the LAMMPS gpu library for single, mixed, and double precision. The latter
-requires that your GPU card supports double precision.
+See the lammps/lib/gpu/README file for instructions on how to build
+the LAMMPS gpu library for single, mixed, and double precision. The
+latter requires that your GPU card supports double precision.
:line
diff --git a/doc/compute_temp_asphere.html b/doc/compute_temp_asphere.html
index 3b29b68e74..daaad528a9 100644
--- a/doc/compute_temp_asphere.html
+++ b/doc/compute_temp_asphere.html
@@ -13,16 +13,29 @@
Syntax:
-compute ID group-ID temp/asphere bias-ID
+compute ID group-ID temp/asphere keyword value ...
-
- ID, group-ID are documented in compute command
-
- temp/asphere = style name of this compute command
-
- bias-ID = ID of a temperature compute that removes a velocity bias (optional)
+
Examples:
compute 1 all temp/asphere
-compute myTemp mobile temp/asphere tempCOM
+compute myTemp mobile temp/asphere bias tempCOM
+compute myTemp mobile temp/asphere dof rotate
Description:
@@ -75,15 +88,6 @@ vector are ordered xx, yy, zz, xy, xz, yz.
constant for the duration of the run; use the dynamic option of the
compute_modify command if this is not the case.
-If a bias-ID is specified it must be the ID of a temperature compute
-that removes a "bias" velocity from each atom. This allows compute
-temp/sphere to compute its thermal temperature after the translational
-kinetic energy components have been altered in a prescribed way,
-e.g. to remove a velocity profile. Thermostats that use this compute
-will work with this bias term. See the doc pages for individual
-computes that calculate a temperature and the doc pages for fixes that
-perform thermostatting for more details.
-
This compute subtracts out translational degrees-of-freedom due to
fixes that constrain molecular motion, such as fix
shake and fix rigid. This means the
@@ -96,6 +100,26 @@ be altered using the extra option of the
discussion of different ways to compute temperature and perform
thermostatting.
+
+
+The keyword/value option pairs are used in the following ways.
+
+For the bias keyword, bias-ID refers to the ID of a temperature
+compute that removes a "bias" velocity from each atom. This allows
+compute temp/asphere to compute its thermal temperature after the
+translational kinetic energy components have been altered in a
+prescribed way, e.g. to remove a velocity profile. Thermostats that
+use this compute will work with this bias term. See the doc pages for
+individual computes that calculate a temperature and the doc pages for
+fixes that perform thermostatting for more details.
+
+For the dof keyword, a setting of all calculates a temperature
+that includes both translational and rotational degrees of freedom. A
+setting of rotate calculates a temperature that includes only
+rotational degrees of freedom.
+
+
+
Output info:
This compute calculates a global scalar (the temperature) and a global
diff --git a/doc/compute_temp_asphere.txt b/doc/compute_temp_asphere.txt
index b22256fbe8..cdd8870981 100755
--- a/doc/compute_temp_asphere.txt
+++ b/doc/compute_temp_asphere.txt
@@ -10,16 +10,24 @@ compute temp/asphere command :h3
[Syntax:]
-compute ID group-ID temp/asphere bias-ID :pre
+compute ID group-ID temp/asphere keyword value ... :pre
-ID, group-ID are documented in "compute"_compute.html command
-temp/asphere = style name of this compute command
-bias-ID = ID of a temperature compute that removes a velocity bias (optional) :ul
+ID, group-ID are documented in "compute"_compute.html command :ulb,l
+temp/asphere = style name of this compute command :l
+zero or more keyword/value pairs may be appended :l
+keyword = {bias} or {dof} :l
 {bias} value = bias-ID
+ bias-ID = ID of a temperature compute that removes a velocity bias
+ {dof} value = {all} or {rotate}
+ all = compute temperature of translational and rotational degrees of freedom
+ rotate = compute temperature of just rotational degrees of freedom :pre
+:ule
[Examples:]
compute 1 all temp/asphere
-compute myTemp mobile temp/asphere tempCOM :pre
+compute myTemp mobile temp/asphere bias tempCOM
+compute myTemp mobile temp/asphere dof rotate :pre
[Description:]
@@ -72,15 +80,6 @@ The number of atoms contributing to the temperature is assumed to be
constant for the duration of the run; use the {dynamic} option of the
"compute_modify"_compute_modify.html command if this is not the case.
-If a {bias-ID} is specified it must be the ID of a temperature compute
-that removes a "bias" velocity from each atom. This allows compute
-temp/sphere to compute its thermal temperature after the translational
-kinetic energy components have been altered in a prescribed way,
-e.g. to remove a velocity profile. Thermostats that use this compute
-will work with this bias term. See the doc pages for individual
-computes that calculate a temperature and the doc pages for fixes that
-perform thermostatting for more details.
-
This compute subtracts out translational degrees-of-freedom due to
fixes that constrain molecular motion, such as "fix
shake"_fix_shake.html and "fix rigid"_fix_rigid.html. This means the
@@ -93,6 +92,26 @@ See "this howto section"_Section_howto.html#4_16 of the manual for a
discussion of different ways to compute temperature and perform
thermostatting.
+:line
+
+The keyword/value option pairs are used in the following ways.
+
+For the {bias} keyword, {bias-ID} refers to the ID of a temperature
+compute that removes a "bias" velocity from each atom. This allows
+compute temp/asphere to compute its thermal temperature after the
+translational kinetic energy components have been altered in a
+prescribed way, e.g. to remove a velocity profile. Thermostats that
+use this compute will work with this bias term. See the doc pages for
+individual computes that calculate a temperature and the doc pages for
+fixes that perform thermostatting for more details.
+
+For the {dof} keyword, a setting of {all} calculates a temperature
+that includes both translational and rotational degrees of freedom. A
+setting of {rotate} calculates a temperature that includes only
+rotational degrees of freedom.
+
+:line
+
[Output info:]
This compute calculates a global scalar (the temperature) and a global
diff --git a/doc/compute_temp_sphere.html b/doc/compute_temp_sphere.html
index 31e73a05f5..23e18d16b5 100644
--- a/doc/compute_temp_sphere.html
+++ b/doc/compute_temp_sphere.html
@@ -13,16 +13,29 @@
Syntax:
-compute ID group-ID temp/sphere bias-ID
+compute ID group-ID temp/sphere keyword value ...
-
- ID, group-ID are documented in compute command
-
- temp/sphere = style name of this compute command
-
- bias-ID = ID of a temperature compute that removes a velocity bias (optional)
+
Examples:
compute 1 all temp/sphere
-compute myTemp mobile temp/sphere tempCOM
+compute myTemp mobile temp/sphere bias tempCOM
+compute myTemp mobile temp/sphere dof rotate
Description:
@@ -66,15 +79,6 @@ the vector are ordered xx, yy, zz, xy, xz, yz.
constant for the duration of the run; use the dynamic option of the
compute_modify command if this is not the case.
-If a bias-ID is specified it must be the ID of a temperature compute
-that removes a "bias" velocity from each atom. This allows compute
-temp/sphere to compute its thermal temperature after the translational
-kinetic energy components have been altered in a prescribed way,
-e.g. to remove a velocity profile. Thermostats that use this compute
-will work with this bias term. See the doc pages for individual
-computes that calculate a temperature and the doc pages for fixes that
-perform thermostatting for more details.
-
This compute subtracts out translational degrees-of-freedom due to
fixes that constrain molecular motion, such as fix
shake and fix rigid. This means the
@@ -87,6 +91,26 @@ be altered using the extra option of the
discussion of different ways to compute temperature and perform
thermostatting.
+
+
+The keyword/value option pairs are used in the following ways.
+
+For the bias keyword, bias-ID refers to the ID of a temperature
+compute that removes a "bias" velocity from each atom. This allows
+compute temp/sphere to compute its thermal temperature after the
+translational kinetic energy components have been altered in a
+prescribed way, e.g. to remove a velocity profile. Thermostats that
+use this compute will work with this bias term. See the doc pages for
+individual computes that calculate a temperature and the doc pages for
+fixes that perform thermostatting for more details.
+
+For the dof keyword, a setting of all calculates a temperature
+that includes both translational and rotational degrees of freedom. A
+setting of rotate calculates a temperature that includes only
+rotational degrees of freedom.
+
+
+
Output info:
This compute calculates a global scalar (the temperature) and a global
@@ -116,6 +140,8 @@ particles with radius = 0.0.
compute temp, compute
temp/asphere
-Default: none
+Default:
+
+The option defaults are no bias and dof = all.