Some more code-blocks instead of parsed-literal

commit cdec46ba6a
parent e4d6214d3b
Author: Axel Kohlmeyer
Date: 2020-03-13 18:38:47 -04:00

17 changed files with 43 additions and 43 deletions

@@ -11,7 +11,7 @@ The :doc:`Build basics <Build_basics>` doc page explains how to build
 LAMMPS as either a shared or static library. This results in one of
 these 2 files:
 
-.. parsed-literal::
+.. code-block:: bash
 
    liblammps.so   # shared library
    liblammps.a    # static library

@@ -73,7 +73,7 @@ in the LAMMPS distribution. Typing "make machine" uses
 use Makefile.serial and Makefile.mpi, respectively. Other makefiles
 are in these directories:
 
-.. parsed-literal::
+.. code-block:: bash
 
    OPTIONS    # Makefiles which enable specific options
    MACHINES   # Makefiles for specific machines

@@ -155,7 +155,7 @@ Lowercase directories
 Here is how you can run and visualize one of the sample problems:
 
-.. parsed-literal::
+.. code-block:: bash
 
    cd indent
    cp ../../src/lmp_linux .   # copy LAMMPS executable to this dir
@@ -177,9 +177,9 @@ like ImageMagick or QuickTime or various Windows-based tools. See the
 Imagemagick command would create a GIF file suitable for viewing in a
 browser.
 
-.. parsed-literal::
+.. code-block:: bash
 
-   % convert -loop 1 \*.jpg foo.gif
+   % convert -loop 1 *.jpg foo.gif
 
 ----------

@@ -83,7 +83,7 @@ You can use the *polarizer* tool (Python script distributed with the
 USER-DRUDE package) to convert a non-polarizable data file (here
 *data.102494.lmp*\ ) to a polarizable data file (\ *data-p.lmp*\ )
 
-.. parsed-literal::
+.. code-block:: bash
 
    polarizer -q -f phenol.dff data.102494.lmp data-p.lmp

@@ -5,7 +5,7 @@ Depending on how you obtained LAMMPS, the doc directory has up
 to 6 sub-directories, 2 Nroff files, and optionally 2 PDF files
 plus 2 e-book format files:
 
-.. parsed-literal::
+.. code-block:: bash
 
    src    # content files for LAMMPS documentation
    html   # HTML version of the LAMMPS manual (see html/Manual.html)

@@ -59,7 +59,7 @@ new potential.
 To use any of these commands, you only need to build LAMMPS with the
 PYTHON package installed:
 
-.. parsed-literal::
+.. code-block:: bash
 
    make yes-python
    make machine

@@ -31,7 +31,7 @@ If you set the paths to these files as environment variables, you only
 have to do it once. For the csh or tcsh shells, add something like
 this to your ~/.cshrc file, one line for each of the two files:
 
-.. parsed-literal::
+.. code-block:: csh
 
    setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python
    setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src

@@ -18,7 +18,7 @@ LAMMPS instances on subsets of the total MPI ranks.
 To install mpi4py (version mpi4py-3.0.3 as of Nov 2019), unpack it
 and from its main directory, type
 
-.. parsed-literal::
+.. code-block:: bash
 
    python setup.py build
    sudo python setup.py install
@@ -27,27 +27,27 @@ Again, the "sudo" is only needed if required to copy mpi4py files into
 your Python distribution's site-packages directory. To install with
 user privilege into the user local directory type
 
-.. parsed-literal::
+.. code-block:: bash
 
    python setup.py install --user
 
 If you have successfully installed mpi4py, you should be able to run
 Python and type
 
-.. parsed-literal::
+.. code-block:: python
 
    from mpi4py import MPI
 
 without error. You should also be able to run python in parallel
 on a simple test script
 
-.. parsed-literal::
+.. code-block:: bash
 
    % mpirun -np 4 python test.py
 
 where test.py contains the lines
 
-.. parsed-literal::
+.. code-block:: python
 
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
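As an editorial aside (not part of the commit): a complete test.py in the spirit of the two-line snippet above might look like the following sketch. The ImportError fallback is an assumption added here so the script also runs serially on a machine without mpi4py installed:

```python
# Hypothetical minimal test.py; the try/except fallback is an addition
# so the sketch also runs where mpi4py is absent (as a single "rank").
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, nprocs = comm.Get_rank(), comm.Get_size()
except ImportError:
    rank, nprocs = 0, 1  # behave like a single-rank run

print("Hello from rank %d of %d" % (rank, nprocs))
```

Launched as `mpirun -np 4 python test.py`, each of the 4 ranks prints its own line.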

@@ -12,7 +12,7 @@ wrap LAMMPS. On Linux this is a library file that ends in ".so", not
 From the src directory, type
 
-.. parsed-literal::
+.. code-block:: bash
 
    make foo mode=shlib
@@ -37,7 +37,7 @@ Build LAMMPS as a shared library using CMake
 When using CMake the following two options are necessary to generate the LAMMPS
 shared library:
 
-.. parsed-literal::
+.. code-block:: bash
 
    -D BUILD_LIB=on            # enable building LAMMPS as a library
    -D BUILD_SHARED_LIBS=on    # enable building of LAMMPS shared library (both options are needed!)
@@ -50,7 +50,7 @@ library path (e.g. /usr/lib64/) or in the LD_LIBRARY_PATH.
 If you want to use the shared library with Python the recommended way is to create a virtualenv and use it as
 CMAKE_INSTALL_PREFIX.
 
-.. parsed-literal::
+.. code-block:: bash
 
    # create virtualenv
    virtualenv --python=$(which python3) myenv3
@@ -69,7 +69,7 @@ This will also install the Python module into your virtualenv. Since virtualenv
 doesn't change your LD_LIBRARY_PATH, you still need to add its lib64 folder to
 it, which contains the installed liblammps.so.
 
-.. parsed-literal::
+.. code-block:: bash
 
    export LD_LIBRARY_PATH=$VIRTUAL_ENV/lib64:$LD_LIBRARY_PATH

@@ -49,7 +49,7 @@ interactively from the bench directory:
 Or put the same lines in the file test.py and run it as
 
-.. parsed-literal::
+.. code-block:: bash
 
    % python test.py
@@ -68,7 +68,7 @@ To run LAMMPS in parallel, assuming you have installed the
 `PyPar <https://github.com/daleroberts/pypar>`_ package as discussed
 above, create a test.py file containing these lines:
 
-.. parsed-literal::
+.. code-block:: python
 
    import pypar
    from lammps import lammps
@@ -81,7 +81,7 @@ To run LAMMPS in parallel, assuming you have installed the
 `mpi4py <https://bitbucket.org/mpi4py/mpi4py>`_ package as discussed
 above, create a test.py file containing these lines:
 
-.. parsed-literal::
+.. code-block:: python
 
    from mpi4py import MPI
    from lammps import lammps
@@ -94,13 +94,13 @@ above, create a test.py file containing these lines:
 You can either script in parallel as:
 
-.. parsed-literal::
+.. code-block:: bash
 
    % mpirun -np 4 python test.py
 
 and you should see the same output as if you had typed
 
-.. parsed-literal::
+.. code-block:: bash
 
    % mpirun -np 4 lmp_g++ -in in.lj
@@ -124,7 +124,7 @@ Running Python scripts:
 Note that any Python script (not just for LAMMPS) can be invoked in
 one of several ways:
 
-.. parsed-literal::
+.. code-block:: bash
 
    % python foo.script
    % python -i foo.script
@@ -133,7 +133,7 @@ one of several ways:
 The last command requires that the first line of the script be
 something like this:
 
-.. parsed-literal::
+.. code-block:: bash
 
    #!/usr/local/bin/python
    #!/usr/local/bin/python -i
@@ -141,7 +141,7 @@ something like this:
 where the path points to where you have Python installed, and that you
 have made the script file executable:
 
-.. parsed-literal::
+.. code-block:: bash
 
    % chmod +x foo.script

@@ -14,7 +14,7 @@ Note that the serial executable includes support for multi-threading
 parallelization from the styles in the USER-OMP packages. To run with
 4 threads, you can type this:
 
-.. parsed-literal::
+.. code-block:: bash
 
    lmp_serial -in in.lj -pk omp 4 -sf omp
@@ -43,7 +43,7 @@ into the MPICH2 installation directory, then into the sub-directory
 Then type something like this:
 
-.. parsed-literal::
+.. code-block:: bash
 
    mpiexec -localonly 4 lmp_mpi -in in.file
    mpiexec -np 4 lmp_mpi -in in.file
@@ -58,13 +58,13 @@ patient before the output shows up.
 The parallel executable can also run on a single processor by typing
 something like this:
 
-.. parsed-literal::
+.. code-block:: bash
 
    lmp_mpi -in in.lj
 
 Note that the parallel executable also includes OpenMP
 multi-threading, which can be combined with MPI using something like:
 
-.. parsed-literal::
+.. code-block:: bash
 
    mpiexec -localonly 2 lmp_mpi -in in.lj -pk omp 2 -sf omp

@@ -76,7 +76,7 @@ automatically append "gpu" to styles that support it. Use the "-pk
 gpu Ng" :doc:`command-line switch <Run_options>` to set Ng = # of
 GPUs/node to use.
 
-.. parsed-literal::
+.. code-block:: bash
 
    lmp_machine -sf gpu -pk gpu 1 -in in.script                # 1 MPI task uses 1 GPU
    mpirun -np 12 lmp_machine -sf gpu -pk gpu 2 -in in.script  # 12 MPI tasks share 2 GPUs on a single 16-core (or whatever) node
@@ -106,7 +106,7 @@ and use of multiple MPI tasks/GPU is the same.
 Use the :doc:`suffix gpu <suffix>` command, or you can explicitly add an
 "gpu" suffix to individual styles in your input script, e.g.
 
-.. parsed-literal::
+.. code-block:: LAMMPS
 
    pair_style lj/cut/gpu 2.5

@@ -205,7 +205,7 @@ For building with make, several example Makefiles for building with
 the Intel compiler are included with LAMMPS in the src/MAKE/OPTIONS/
 directory:
 
-.. parsed-literal::
+.. code-block:: bash
 
    Makefile.intel_cpu_intelmpi   # Intel Compiler, Intel MPI, No Offload
    Makefile.knl                  # Intel Compiler, Intel MPI, No Offload

@@ -23,7 +23,7 @@ instructions.
 These examples assume one or more 16-core nodes.
 
-.. parsed-literal::
+.. code-block:: bash
 
    env OMP_NUM_THREADS=16 lmp_omp -sf omp -in in.script   # 1 MPI task, 16 threads according to OMP_NUM_THREADS
    lmp_mpi -sf omp -in in.script                          # 1 MPI task, no threads, optimized kernels
@@ -60,7 +60,7 @@ and threads/MPI task is the same.
 Use the :doc:`suffix omp <suffix>` command, or you can explicitly add an
 "omp" suffix to individual styles in your input script, e.g.
 
-.. parsed-literal::
+.. code-block:: LAMMPS
 
    pair_style lj/cut/omp 2.5

@@ -17,7 +17,7 @@ See the :ref:`Build extras <opt>` doc page for instructions.
 **Run with the OPT package from the command line:**
 
-.. parsed-literal::
+.. code-block:: bash
 
    lmp_mpi -sf opt -in in.script                # run in serial
    mpirun -np 4 lmp_mpi -sf opt -in in.script   # run in parallel
@@ -30,7 +30,7 @@ automatically append "opt" to styles that support it.
 Use the :doc:`suffix opt <suffix>` command, or you can explicitly add an
 "opt" suffix to individual styles in your input script, e.g.
 
-.. parsed-literal::
+.. code-block:: LAMMPS
 
    pair_style lj/cut/opt 2.5

@@ -132,7 +132,7 @@ packages. As an example, here is a command that builds with all the
 GPU related packages installed (GPU, KOKKOS with Cuda), including
 settings to build the needed auxiliary GPU libraries for Kepler GPUs:
 
-.. parsed-literal::
+.. code-block:: bash
 
    Make.py -j 16 -p omp gpu kokkos -cc nvcc wrap=mpi -gpu mode=double arch=35 -kokkos cuda arch=35 lib-all file mpi

@@ -99,7 +99,7 @@ binary2txt tool
 The file binary2txt.cpp converts one or more binary LAMMPS dump file
 into ASCII text files. The syntax for running the tool is
 
-.. parsed-literal::
+.. code-block:: bash
 
    binary2txt file1 file2 ...
@@ -149,7 +149,7 @@ chains and solvent atoms can strongly overlap, so LAMMPS needs to run
 the system initially with a "soft" pair potential to un-overlap it.
 The syntax for running the tool is
 
-.. parsed-literal::
+.. code-block:: bash
 
    chain < def.chain > data.file
@@ -178,11 +178,11 @@ Version 20110511
 .. parsed-literal::
 
-   Syntax: ./abf_integrate < filename > [-n < nsteps >] [-t < temp >] [-m [0\|1] (metadynamics)] [-h < hill_height >] [-f < variable_hill_factor >]
+   ./abf_integrate < filename > [-n < nsteps >] [-t < temp >] [-m [0\|1] (metadynamics)] [-h < hill_height >] [-f < variable_hill_factor >]
 
 The LAMMPS interface to the colvars collective variable library, as
 well as these tools, were created by Axel Kohlmeyer (akohlmey at
-gmail.com) at ICTP, Italy.
+gmail.com) while at ICTP, Italy.
 
 ----------
@@ -427,7 +427,7 @@ atoms can strongly overlap, so LAMMPS needs to run the system
 initially with a "soft" pair potential to un-overlap it. The syntax
 for running the tool is
 
-.. parsed-literal::
+.. code-block:: bash
 
    micelle2d < def.micelle2d > data.file