
processors command

Syntax:

processors Px Py Pz keyword args ... 

Examples:

processors 2 4 4
processors * * 5
processors * * * grid xyz
processors * * * numa
processors * * * part 1 2 multiple 

Description:

Specify how processors are mapped as a 3d logical grid to the global simulation box. This involves 2 steps. First, if there are P processors, a factorization P = Px by Py by Pz is chosen, so that there are Px processors in the x dimension, and similarly for the y and z dimensions. Second, the P processors (with MPI ranks 0 to P-1) are mapped to the logical grid so that each grid cell is a processor. The arguments to this command control each of these 2 steps.

The Px, Py, Pz parameters affect the factorization. Any of the 3 parameters can be specified with an asterisk "*", which means LAMMPS will choose the number of processors in that dimension. It will do this based on the size and shape of the global simulation box so as to minimize the surface-to-volume ratio of each processor's sub-domain.
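The automatic choice can be sketched as a brute-force search over factorizations of P, scoring each by the surface area of one sub-domain. This is an illustrative sketch with hypothetical helper names, not the actual LAMMPS heuristic:

```python
def factor3(p):
    """Yield every factorization (px, py, pz) with px * py * pz == p."""
    for px in range(1, p + 1):
        if p % px:
            continue
        q = p // px
        for py in range(1, q + 1):
            if q % py:
                continue
            yield px, py, q // py

def best_grid(p, lx, ly, lz):
    """Pick the factorization minimizing the surface area of one
    processor's sub-domain of an lx by ly by lz box, a stand-in for
    the surface-to-volume criterion described above."""
    def surface(grid):
        px, py, pz = grid
        sx, sy, sz = lx / px, ly / py, lz / pz  # sub-domain edge lengths
        return sx * sy + sy * sz + sx * sz
    return min(factor3(p), key=surface)
```

For a cubic box and 8 processors this picks a 2x2x2 grid; for a box stretched 4x in the x dimension and 4 processors it slices only along x, which matches the intuition that long thin sub-domains are penalized.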

Since LAMMPS does not load-balance by changing the grid of 3d processors on-the-fly, choosing explicit values for Px or Py or Pz can be used to override the LAMMPS default if it is known to be sub-optimal for a particular problem, e.g. a problem where the extent of atoms will change dramatically in a particular dimension over the course of the simulation.

The product of Px, Py, Pz must equal P, the total # of processors LAMMPS is running on. For a 2d simulation, Pz must equal 1. If multiple partitions are being used then P is the number of processors in this partition; see this section for an explanation of the -partition command-line switch.

Note that if you run on a large, prime number of processors P, then a grid such as 1 x P x 1 will be required, which may incur extra communication costs due to the high surface area of each processor's sub-domain.


The grid keyword affects how processor IDs are mapped to the 3d grid of processors.

The cart style uses the family of MPI Cartesian functions to do this, namely MPI_Cart_create(), MPI_Cart_get(), MPI_Cart_shift(), and MPI_Cart_rank(). It invokes the MPI_Cart_create() function with its reorder flag = 0, so that MPI is not free to reorder the processors.

The cart/reorder style does the same thing as the cart style except it sets the reorder flag to 1, so that MPI is free to reorder processors if it desires.

The xyz, xzy, yxz, yzx, zxy, and zyx styles are all similar. If the style is IJK, then it explicitly maps the P processors to the grid so that the processor ID in the I direction varies fastest, the processor ID in the J direction varies next fastest, and the processor ID in the K direction varies slowest. For example, if you select style xyz and you have a 2x2x2 grid of 8 processors, the assignments of the 8 octants of the simulation domain will be:

proc 0 = lo x, lo y, lo z octant
proc 1 = hi x, lo y, lo z octant
proc 2 = lo x, hi y, lo z octant
proc 3 = hi x, hi y, lo z octant
proc 4 = lo x, lo y, hi z octant
proc 5 = hi x, lo y, hi z octant
proc 6 = lo x, hi y, hi z octant
proc 7 = hi x, hi y, hi z octant 
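The fastest/slowest-varying rule behind this table can be sketched as an ordinary mixed-radix index decomposition. This is a sketch of the xyz style only, not the LAMMPS source:

```python
def map_xyz(rank, px, py, pz):
    """xyz style: the x index varies fastest with rank, the y index
    next fastest, and the z index slowest (0 = lo, max = hi)."""
    i = rank % px                # x direction, fastest
    j = (rank // px) % py        # y direction, next fastest
    k = rank // (px * py)        # z direction, slowest
    return i, j, k
```

Applied to the 2x2x2 example above, rank 1 maps to (1, 0, 0), i.e. the hi-x, lo-y, lo-z octant, and rank 4 maps to (0, 0, 1), the lo-x, lo-y, hi-z octant, in agreement with the table. The other five IJK styles permute which of the three indices plays each role.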

Note that, in principle, an MPI implementation on a particular machine should be aware of both the machine's network topology and the specific subset of processors and nodes that were assigned to your simulation. Thus its MPI_Cart calls can optimize the assignment of MPI processes to the 3d grid to minimize communication costs. However in practice, few if any MPI implementations actually do this. So it is likely that the cart and cart/reorder styles simply give the same result as one of the IJK styles.


The numa keyword affects both the factorization of P into Px,Py,Pz and the mapping of processors to the 3d grid.

It will perform a two-level factorization of the simulation box to minimize inter-node communication. This can improve parallel efficiency by reducing network traffic. When this keyword is set, the simulation box is first divided across nodes. Then within each node, the subdomain is further divided between the cores of each node.
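The two-level idea can be sketched as factoring the node count into a node grid, then factoring the cores-per-node count within one node's sub-box, and combining the two element-wise. This is a hypothetical illustration of the concept; the actual numa logic in LAMMPS may differ:

```python
def factorize(p, lx, ly, lz):
    """Smallest-surface factorization of p over an lx by ly by lz box
    (same brute-force idea as for the top-level grid)."""
    best, best_s = None, float("inf")
    for px in range(1, p + 1):
        if p % px:
            continue
        for py in range(1, p // px + 1):
            if (p // px) % py:
                continue
            pz = p // (px * py)
            sx, sy, sz = lx / px, ly / py, lz / pz
            s = sx * sy + sy * sz + sx * sz
            if s < best_s:
                best, best_s = (px, py, pz), s
    return best

def numa_grid(nnodes, ncores, lx, ly, lz):
    """Two-level sketch: divide the box across nodes first, then divide
    one node's sub-box across that node's cores; the final Px,Py,Pz is
    the element-wise product of the two grids."""
    ngrid = factorize(nnodes, lx, ly, lz)
    sub = (lx / ngrid[0], ly / ngrid[1], lz / ngrid[2])
    cgrid = factorize(ncores, *sub)
    return tuple(n * c for n, c in zip(ngrid, cgrid))
```

For 2 nodes of 4 cores on a cubic box this yields a 2x2x2 grid of 8 processes, with the node boundary aligned to one grid plane so that most neighbor communication stays on-node.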

The numa setting will be ignored if (a) there are fewer than 4 cores per node, or (b) the number of MPI processes is not divisible by the number of cores used per node, or (c) only 1 node is allocated, or (d) any of the Px, Py, or Pz values is greater than 1.


The part keyword can be useful when running in multi-partition mode, e.g. with the run_style verlet/split command. It specifies a dependency between a sending partition Psend and a receiving partition Precv, which is enforced when each partition sets up its own mapping of its processors to the simulation box. Each of Psend and Precv must be an integer from 1 to Np, where Np is the number of partitions you have defined via the -partition command-line switch.

A "dependency" means that the sending partition will create its 3d logical grid as Px by Py by Pz and after it has done this, it will send the Px,Py,Pz values to the receiving partition. The receiving partition will wait to receive these values before creating its own 3d logical grid and will use the sender's Px,Py,Pz values as a constraint. The nature of the constraint is determined by the cstyle argument.

For a cstyle of multiple, each dimension of the sender's processor grid is required to be an integer multiple of the corresponding dimension in the receiver's processor grid. This is a requirement of the run_style verlet/split command.

For example, assume the sending partition creates a 4x6x10 = 240 processor grid. If the receiving partition is running on 80 processors, it could create a 4x2x10 grid, but it will not create a 2x4x10 grid, since in the y-dimension, 6 is not an integer multiple of 4.
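The multiple constraint reduces to a per-dimension divisibility check, which can be expressed as follows (illustrative helper, not a LAMMPS API):

```python
def compatible(send_grid, recv_grid):
    """cstyle 'multiple': each dimension of the sender's processor grid
    must be an integer multiple of the receiver's corresponding dimension."""
    return all(s % r == 0 for s, r in zip(send_grid, recv_grid))
```

With the grids from the example above, a 4x6x10 sender is compatible with a 4x2x10 receiver but not with a 2x4x10 one, since 6 is not a multiple of 4.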


Note that you can use the partition command to specify different processor grids for different partitions, e.g.

partition yes 1 processors 4 4 4
partition yes 2 processors 2 3 2 

IMPORTANT NOTE: If you use the partition command to invoke different "processors" commands on different partitions, and you also use the part keyword, then you must ensure that both the sending and receiving partitions invoke the "processors" command that connects the 2 partitions via the part keyword. LAMMPS cannot easily check for this, but your simulation will likely hang in its setup phase if this error has been made.


Restrictions:

This command cannot be used after the simulation box is defined by a read_data or create_box command. It can be used before a restart file is read to change the 3d processor grid from what is specified in the restart file.

The numa keyword cannot be used with the part keyword, or with any grid setting other than cart.

Related commands: none

Default:

The option defaults are Px Py Pz = * * *, grid = cart, numa = 0.