Some more code-blocks instead of parsed-literal
@@ -11,7 +11,7 @@ The :doc:`Build basics <Build_basics>` doc page explains how to build
 LAMMPS as either a shared or static library. This results in one of
 these 2 files:

-.. parsed-literal::
+.. code-block:: bash

    liblammps.so # shared library
    liblammps.a # static library

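(Aside, not part of the diff: a minimal sketch of how such a shared library can be loaded from Python via ctypes, assuming liblammps.so is visible to the dynamic loader, e.g. through LD_LIBRARY_PATH.)

.. code-block:: python

   # Sketch: load the LAMMPS shared library directly with ctypes.
   # Assumes liblammps.so can be found by the dynamic loader.
   from ctypes import CDLL, RTLD_GLOBAL

   lib = CDLL("liblammps.so", RTLD_GLOBAL)  # RTLD_GLOBAL helps when other modules need the same symbols
   print("loaded:", lib)
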
@@ -73,7 +73,7 @@ in the LAMMPS distribution. Typing "make machine" uses
 use Makefile.serial and Makefile.mpi, respectively. Other makefiles
 are in these directories:

-.. parsed-literal::
+.. code-block:: bash

    OPTIONS # Makefiles which enable specific options
    MACHINES # Makefiles for specific machines

@@ -155,7 +155,7 @@ Lowercase directories

 Here is how you can run and visualize one of the sample problems:

-.. parsed-literal::
+.. code-block:: bash

    cd indent
    cp ../../src/lmp_linux . # copy LAMMPS executable to this dir

@@ -177,9 +177,9 @@ like ImageMagick or QuickTime or various Windows-based tools. See the
 Imagemagick command would create a GIF file suitable for viewing in a
 browser.

-.. parsed-literal::
+.. code-block:: bash

-   % convert -loop 1 \*.jpg foo.gif
+   % convert -loop 1 *.jpg foo.gif

 ----------

@@ -83,7 +83,7 @@ You can use the *polarizer* tool (Python script distributed with the
 USER-DRUDE package) to convert a non-polarizable data file (here
 *data.102494.lmp*\ ) to a polarizable data file (\ *data-p.lmp*\ )

-.. parsed-literal::
+.. code-block:: bash

    polarizer -q -f phenol.dff data.102494.lmp data-p.lmp

@@ -5,7 +5,7 @@ Depending on how you obtained LAMMPS, the doc directory has up
 to 6 sub-directories, 2 Nroff files, and optionally 2 PDF files
 plus 2 e-book format files:

-.. parsed-literal::
+.. code-block:: bash

    src # content files for LAMMPS documentation
    html # HTML version of the LAMMPS manual (see html/Manual.html)

@@ -59,7 +59,7 @@ new potential.
 To use any of these commands, you only need to build LAMMPS with the
 PYTHON package installed:

-.. parsed-literal::
+.. code-block:: bash

    make yes-python
    make machine

@@ -31,7 +31,7 @@ If you set the paths to these files as environment variables, you only
 have to do it once. For the csh or tcsh shells, add something like
 this to your ~/.cshrc file, one line for each of the two files:

-.. parsed-literal::
+.. code-block:: csh

    setenv PYTHONPATH ${PYTHONPATH}:/home/sjplimp/lammps/python
    setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/home/sjplimp/lammps/src

@@ -18,7 +18,7 @@ LAMMPS instances on subsets of the total MPI ranks.
 To install mpi4py (version mpi4py-3.0.3 as of Nov 2019), unpack it
 and from its main directory, type

-.. parsed-literal::
+.. code-block:: bash

    python setup.py build
    sudo python setup.py install
@@ -27,27 +27,27 @@ Again, the "sudo" is only needed if required to copy mpi4py files into
 your Python distribution's site-packages directory. To install with
 user privilege into the user local directory type

-.. parsed-literal::
+.. code-block:: bash

    python setup.py install --user

 If you have successfully installed mpi4py, you should be able to run
 Python and type

-.. parsed-literal::
+.. code-block:: python

    from mpi4py import MPI

 without error. You should also be able to run python in parallel
 on a simple test script

-.. parsed-literal::
+.. code-block:: bash

    % mpirun -np 4 python test.py

 where test.py contains the lines

-.. parsed-literal::
+.. code-block:: python

    from mpi4py import MPI
    comm = MPI.COMM_WORLD

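(Aside, not part of the diff, which cuts off the test.py listing above: a hypothetical minimal test.py along those lines; the file in the manual may differ.)

.. code-block:: python

   # Hypothetical minimal mpi4py check; run with: mpirun -np 4 python test.py
   from mpi4py import MPI

   comm = MPI.COMM_WORLD
   print("Proc %d out of %d procs" % (comm.Get_rank(), comm.Get_size()))
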
@@ -12,7 +12,7 @@ wrap LAMMPS. On Linux this is a library file that ends in ".so", not

 From the src directory, type

-.. parsed-literal::
+.. code-block:: bash

    make foo mode=shlib

@@ -37,7 +37,7 @@ Build LAMMPS as a shared library using CMake
 When using CMake the following two options are necessary to generate the LAMMPS
 shared library:

-.. parsed-literal::
+.. code-block:: bash

    -D BUILD_LIB=on # enable building LAMMPS as a library
    -D BUILD_SHARED_LIBS=on # enable building of LAMMPS shared library (both options are needed!)

@@ -50,7 +50,7 @@ library path (e.g. /usr/lib64/) or in the LD_LIBRARY_PATH.
 If you want to use the shared library with Python the recommended way is to create a virtualenv and use it as
 CMAKE_INSTALL_PREFIX.

-.. parsed-literal::
+.. code-block:: bash

    # create virtualenv
    virtualenv --python=$(which python3) myenv3

@@ -69,7 +69,7 @@ This will also install the Python module into your virtualenv. Since virtualenv
 doesn't change your LD_LIBRARY_PATH, you still need to add its lib64 folder to
 it, which contains the installed liblammps.so.

-.. parsed-literal::
+.. code-block:: bash

    export LD_LIBRARY_PATH=$VIRTUAL_ENV/lib64:$LD_LIBRARY_PATH

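(Aside, not part of the diff: a quick way to check the virtualenv setup described above is to import the LAMMPS Python module; this sketch assumes the module and liblammps.so were installed into the activated virtualenv.)

.. code-block:: python

   # Sketch: verify that the lammps module and liblammps.so are found
   # from inside the activated virtualenv.
   from lammps import lammps

   lmp = lammps()                      # loads liblammps.so
   print("LAMMPS version:", lmp.version())
   lmp.close()
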
@@ -49,7 +49,7 @@ interactively from the bench directory:

 Or put the same lines in the file test.py and run it as

-.. parsed-literal::
+.. code-block:: bash

    % python test.py

@@ -68,7 +68,7 @@ To run LAMMPS in parallel, assuming you have installed the
 `PyPar <https://github.com/daleroberts/pypar>`_ package as discussed
 above, create a test.py file containing these lines:

-.. parsed-literal::
+.. code-block:: python

    import pypar
    from lammps import lammps

@@ -81,7 +81,7 @@ To run LAMMPS in parallel, assuming you have installed the
 `mpi4py <https://bitbucket.org/mpi4py/mpi4py>`_ package as discussed
 above, create a test.py file containing these lines:

-.. parsed-literal::
+.. code-block:: python

    from mpi4py import MPI
    from lammps import lammps

@@ -94,13 +94,13 @@ above, create a test.py file containing these lines:

 You can either script in parallel as:

-.. parsed-literal::
+.. code-block:: bash

    % mpirun -np 4 python test.py

 and you should see the same output as if you had typed

-.. parsed-literal::
+.. code-block:: bash

    % mpirun -np 4 lmp_g++ -in in.lj

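(Aside, not part of the diff, which shows only the first lines of this test.py: a hypothetical complete script combining mpi4py with the LAMMPS Python wrapper; it assumes the benchmark input in.lj is in the working directory and may differ from the file in the manual.)

.. code-block:: python

   # Hypothetical test.py: each MPI rank joins one parallel LAMMPS instance.
   # Run with: mpirun -np 4 python test.py
   from mpi4py import MPI
   from lammps import lammps

   lmp = lammps()            # all ranks collectively create the instance
   lmp.file("in.lj")         # run the Lennard-Jones benchmark input
   me = MPI.COMM_WORLD.Get_rank()
   nprocs = MPI.COMM_WORLD.Get_size()
   print("Proc %d out of %d procs is done" % (me, nprocs))
   lmp.close()
   MPI.Finalize()
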
@@ -124,7 +124,7 @@ Running Python scripts:
 Note that any Python script (not just for LAMMPS) can be invoked in
 one of several ways:

-.. parsed-literal::
+.. code-block:: bash

    % python foo.script
    % python -i foo.script

@@ -133,7 +133,7 @@ one of several ways:
 The last command requires that the first line of the script be
 something like this:

-.. parsed-literal::
+.. code-block:: bash

    #!/usr/local/bin/python
    #!/usr/local/bin/python -i

@@ -141,7 +141,7 @@ something like this:
 where the path points to where you have Python installed, and that you
 have made the script file executable:

-.. parsed-literal::
+.. code-block:: bash

    % chmod +x foo.script

@@ -14,7 +14,7 @@ Note that the serial executable includes support for multi-threading
 parallelization from the styles in the USER-OMP packages. To run with
 4 threads, you can type this:

-.. parsed-literal::
+.. code-block:: bash

    lmp_serial -in in.lj -pk omp 4 -sf omp

@@ -43,7 +43,7 @@ into the MPICH2 installation directory, then into the sub-directory

 Then type something like this:

-.. parsed-literal::
+.. code-block:: bash

    mpiexec -localonly 4 lmp_mpi -in in.file
    mpiexec -np 4 lmp_mpi -in in.file

@@ -58,13 +58,13 @@ patient before the output shows up.
 The parallel executable can also run on a single processor by typing
 something like this:

-.. parsed-literal::
+.. code-block:: bash

    lmp_mpi -in in.lj

 Note that the parallel executable also includes OpenMP
 multi-threading, which can be combined with MPI using something like:

-.. parsed-literal::
+.. code-block:: bash

    mpiexec -localonly 2 lmp_mpi -in in.lj -pk omp 2 -sf omp

@@ -76,7 +76,7 @@ automatically append "gpu" to styles that support it. Use the "-pk
 gpu Ng" :doc:`command-line switch <Run_options>` to set Ng = # of
 GPUs/node to use.

-.. parsed-literal::
+.. code-block:: bash

    lmp_machine -sf gpu -pk gpu 1 -in in.script # 1 MPI task uses 1 GPU
    mpirun -np 12 lmp_machine -sf gpu -pk gpu 2 -in in.script # 12 MPI tasks share 2 GPUs on a single 16-core (or whatever) node

@@ -106,7 +106,7 @@ and use of multiple MPI tasks/GPU is the same.
 Use the :doc:`suffix gpu <suffix>` command, or you can explicitly add an
 "gpu" suffix to individual styles in your input script, e.g.

-.. parsed-literal::
+.. code-block:: LAMMPS

    pair_style lj/cut/gpu 2.5

@@ -205,7 +205,7 @@ For building with make, several example Makefiles for building with
 the Intel compiler are included with LAMMPS in the src/MAKE/OPTIONS/
 directory:

-.. parsed-literal::
+.. code-block:: bash

    Makefile.intel_cpu_intelmpi # Intel Compiler, Intel MPI, No Offload
    Makefile.knl # Intel Compiler, Intel MPI, No Offload

@@ -23,7 +23,7 @@ instructions.

 These examples assume one or more 16-core nodes.

-.. parsed-literal::
+.. code-block:: bash

    env OMP_NUM_THREADS=16 lmp_omp -sf omp -in in.script # 1 MPI task, 16 threads according to OMP_NUM_THREADS
    lmp_mpi -sf omp -in in.script # 1 MPI task, no threads, optimized kernels

@@ -60,7 +60,7 @@ and threads/MPI task is the same.
 Use the :doc:`suffix omp <suffix>` command, or you can explicitly add an
 "omp" suffix to individual styles in your input script, e.g.

-.. parsed-literal::
+.. code-block:: LAMMPS

    pair_style lj/cut/omp 2.5

@@ -17,7 +17,7 @@ See the :ref:`Build extras <opt>` doc page for instructions.

 **Run with the OPT package from the command line:**

-.. parsed-literal::
+.. code-block:: bash

    lmp_mpi -sf opt -in in.script # run in serial
    mpirun -np 4 lmp_mpi -sf opt -in in.script # run in parallel

@@ -30,7 +30,7 @@ automatically append "opt" to styles that support it.
 Use the :doc:`suffix opt <suffix>` command, or you can explicitly add an
 "opt" suffix to individual styles in your input script, e.g.

-.. parsed-literal::
+.. code-block:: LAMMPS

    pair_style lj/cut/opt 2.5

@@ -132,7 +132,7 @@ packages. As an example, here is a command that builds with all the
 GPU related packages installed (GPU, KOKKOS with Cuda), including
 settings to build the needed auxiliary GPU libraries for Kepler GPUs:

-.. parsed-literal::
+.. code-block:: bash

    Make.py -j 16 -p omp gpu kokkos -cc nvcc wrap=mpi -gpu mode=double arch=35 -kokkos cuda arch=35 lib-all file mpi

@@ -99,7 +99,7 @@ binary2txt tool
 The file binary2txt.cpp converts one or more binary LAMMPS dump file
 into ASCII text files. The syntax for running the tool is

-.. parsed-literal::
+.. code-block:: bash

    binary2txt file1 file2 ...

@@ -149,7 +149,7 @@ chains and solvent atoms can strongly overlap, so LAMMPS needs to run
 the system initially with a "soft" pair potential to un-overlap it.
 The syntax for running the tool is

-.. parsed-literal::
+.. code-block:: bash

    chain < def.chain > data.file

@@ -178,11 +178,11 @@ Version 20110511

 .. parsed-literal::

-   Syntax: ./abf_integrate < filename > [-n < nsteps >] [-t < temp >] [-m [0\|1] (metadynamics)] [-h < hill_height >] [-f < variable_hill_factor >]
+   ./abf_integrate < filename > [-n < nsteps >] [-t < temp >] [-m [0\|1] (metadynamics)] [-h < hill_height >] [-f < variable_hill_factor >]

 The LAMMPS interface to the colvars collective variable library, as
 well as these tools, were created by Axel Kohlmeyer (akohlmey at
-gmail.com) at ICTP, Italy.
+gmail.com) while at ICTP, Italy.

 ----------

@@ -427,7 +427,7 @@ atoms can strongly overlap, so LAMMPS needs to run the system
 initially with a "soft" pair potential to un-overlap it. The syntax
 for running the tool is

-.. parsed-literal::
+.. code-block:: bash

    micelle2d < def.micelle2d > data.file