- split/duplicate functionality
- rework inter-node/intra-node handling to allow selection of the
  splitting strategy: 'shared' or hostname (default).
- always creates node communicators at startup:
* commInterNode() - between nodes
* commLocalNode() - within a node
- world-comm is now always a duplicate of MPI_COMM_WORLD to provide
better separation from other processes.
NB:
the inter-node comm is a slight exception to other communicators
in that we always retain its list of (global) ranks, even if the
local process is not in that communicator.
This can help when constructing topology-aware patterns.
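  For illustration, a minimal sketch of using the node communicators
  (the reduce/master calls here are assumed usage, not part of this change):

      // Sum a value within each node, then across the node masters
      const label localNode = UPstream::commLocalNode();
      const label interNode = UPstream::commInterNode();

      scalar value = localContribution;   // hypothetical local value

      reduce(value, sumOp<scalar>(), UPstream::msgType(), localNode);

      if (UPstream::master(localNode))
      {
          reduce(value, sumOp<scalar>(), UPstream::msgType(), interNode);
      }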
FIX: non-participating ranks still had knowledge of their potential siblings
- after creation by group, the procIDs_ of non-participating ranks
  should be empty (except for the inter-node exception above)
- UPstream::Communicator is similar to UPstream::Request to
wrap/unwrap MPI_Comm. Provides a 'lookup' method to transcribe
the internal OpenFOAM communicator tracking to the opaque wrapped
version.
- provide an 'openfoam_mpi.H' interfacing file, which includes
the <mpi.h> as well as casting routines.
Example (caution: ugly!)
MPI_Comm myComm =
PstreamUtils::Cast::to_mpi
(
UPstream::Communicator::lookup(UPstream::worldComm)
);
- MPI_THREAD_MULTIPLE is usually undesirable for performance reasons,
but in some cases may be necessary if a linked library expects it.
Provide a '-mpi-threads' option to explicitly request it.
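  For example (hypothetical invocation):

      mpirun -np 4 simpleFoam -parallel -mpi-threads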
ENH: consolidate some looping logic within argList
- added UPstream::allGatherValues() with a direct call to MPI_Allgather.
  This enables possible benefits from a variety of internal MPI
  algorithms and simplifies the calling code.
Old:
labelList nPerProc
(
UPstream::listGatherValues<label>(patch_.size(), myComm)
);
Pstream::broadcast(nPerProc, myComm);
New:
const labelList nPerProc
(
UPstream::allGatherValues<label>(patch_.size(), myComm)
);
- Pstream::allGatherList uses MPI_Allgather for contiguous values
instead of the hand-rolled tree walking involved with
gatherList/scatterList.
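  A sketch of the typical calling pattern (each rank fills its own slot;
  afterwards every rank holds the complete list):

      labelList sizes(UPstream::nProcs());
      sizes[UPstream::myProcNo()] = patch_.size();
      Pstream::allGatherList(sizes);   // contiguous => MPI_Allgather path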
- simplified the calling parameters for mpiGather/mpiScatter.
Since send/recv data types are identical, the send/recv count
is also always identical. Eliminates the possibility of any
discrepancies.
Since this is a low-level call, it does not affect much code.
Currently just Foam::profilingPstream and a UPstream internal.
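  A hedged sketch of the simplified form (parameter names assumed):

      // send/recv types are identical, so a single count suffices
      UPstream::mpiGather(sendData, recvData, count, comm);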
BUG: call to MPI_Allgather had hard-coded MPI_BYTE (not the data type)
- a latent bug, since only char data are currently passed anyhow
- this refinement of commit 81807646ca makes these methods
consistent with other objects/containers.
The 'unsigned char' access is still available via cdata()
- simplifies communication structuring with intra-host communicators.
  Can be used for IO only, or for specialised communication.
  Demand-driven construction. Gathers the SHA1 of host names when
  determining the connectivity. Internally uses an MPI_Gather of the
  digests and an MPI_Bcast of the unique host indices.
NOTE:
  does not use MPI_Comm_split or MPI_Comm_split_type, since these
  return MPI_COMM_NULL on non-participating processes, which does not
  fit easily into the OpenFOAM framework.
Additionally, if using the caching version of
UPstream::commInterHost() and UPstream::commIntraHost()
the topology is determined simultaneously
(ie, equivalent or potentially lower communication).
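  For illustration (accessor names as above, usage assumed):

      // Demand-driven: the first call determines the host topology
      const label intraHost = UPstream::commIntraHost();
      const label interHost = UPstream::commInterHost();

      if (UPstream::master(intraHost))
      {
          // eg, perform the IO for this host on the host master,
          // coordinating across hosts via interHost
      }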
- make sizing of the commsStruct List demand-driven as well,
  for more robustness and fewer unneeded allocations.
- fix potential latent bug with allBelow/allNotBelow for proc 0
(linear communication).
ENH: remove unused/unusable UPstream::communicator optional parameter
- had a constructor option to avoid constructing the MPI backend,
  but this is not useful and is inconsistent with what reset and
  the destructor expect.
STYLE: local use of UPstream::communicator
- automatically frees communicator when it leaves scope
- these are primarily when encountering sparse (eg, inter-host)
communicators. Additional UPstream convenience methods:
    is_rank(comm)
    => True if the process corresponds to a rank in the communicator.
       Can be a master rank or a sub-rank.
    is_parallel(comm)
    => True if a parallel algorithm or exchange is used on the process.
       Same as
       (parRun() && (nProcs(comm) > 1) && is_rank(comm))
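  A minimal sketch combining the scoped communicator with these helpers
  (constructor arguments and method names are assumptions):

      {
          // Scoped: the communicator is freed when leaving the block
          UPstream::communicator subComm
          (
              UPstream::worldComm,
              subRanks    // hypothetical list of participating ranks
          );

          if (UPstream::is_parallel(subComm.comm()))
          {
              // parallel exchange restricted to the sub-ranks
          }
          else if (UPstream::is_rank(subComm.comm()))
          {
              // in the communicator, but no parallel exchange required
          }
      }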
- not currently used, but it is possible that communicator allocation
modifies the list of sub-ranks. Ensure that the correct size is used
when (re)initialising the linear/tree structures.
STYLE: adjust MPI test applications
- remove some clutter and unneeded grouping.
Some ideas for host-only communicators
- similar to UPstream::parRun(), the setter returns the previous value.
The accessors are prefixed with 'comm':
Eg, commGlobal(), commWarn(), commWorld(), commSelf().
This distinguishes them from any existing variables (eg, worldComm)
  and is arguably more similar to MPI_COMM_WORLD etc.
If demand-driven communicators are added in the future, the function
call syntax can help encapsulate that.
Previously:
const label oldWarnComm = UPstream::warnComm;
const label oldWorldComm = UPstream::worldComm;
UPstream::warnComm = myComm;
UPstream::worldComm = myComm;
...
UPstream::warnComm = oldWarnComm;
UPstream::worldComm = oldWorldComm;
Now:
const label oldWarnComm = UPstream::commWarn(myComm);
const label oldWorldComm = UPstream::commWorld(myComm);
...
UPstream::commWarn(oldWarnComm);
UPstream::commWorld(oldWorldComm);
STYLE: check (warnComm >= 0) instead of (warnComm != -1)
- permits distinction between communicators/groups that were
user-created (eg, MPI_Comm_create) versus those queried from MPI.
  Previously this simply relied on non-null values, but that is too fragile.
ENH: support List<Request> version of UPstream::finishedRequests
- allows more independent algorithms
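  A sketch of the intended usage (assumed calling pattern):

      DynamicList<UPstream::Request> requests;

      // ... initiate non-blocking operations that append to 'requests'

      while (!UPstream::finishedRequests(requests))
      {
          // do other work while waiting
      }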
ENH: added UPstream::probeMessage(...) for blocking or non-blocking
  probing of pending messages.
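  A hedged sketch of a non-blocking probe, assuming the result carries
  the source rank and the message size in bytes:

      auto probed = UPstream::probeMessage
      (
          UPstream::commsTypes::nonBlocking,
          -1,                    // any source rank
          UPstream::msgType(),   // message tag
          comm
      );

      if (probed.first >= 0)
      {
          // probed.second bytes pending from rank probed.first
      }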