- UPstream::Communicator is similar to UPstream::Request: it wraps/unwraps
MPI_Comm. Provides a 'lookup' method to transcribe the internal OpenFOAM
communicator tracking into the opaque wrapped form.
- provide an 'openfoam_mpi.H' interfacing file, which includes <mpi.h>
as well as the casting routines.
Example (caution: ugly!)

    MPI_Comm myComm =
        PstreamUtils::Cast::to_mpi
        (
            UPstream::Communicator::lookup(UPstream::worldComm)
        );
- this simplifies polling receives and allows them to be handled
separately from the sends
ENH: add UPstream::removeRequests(pos, len)
- cancel/free outstanding requests and remove the segment from the
internal list of outstanding requests
- useful when speculative receives have been initiated but are no
longer required.
Combines MPI_Cancel() + MPI_Request_free() for consistent resource
management. Currently no feedback is provided on whether the request was
satisfied by a completed send/recv or by cancellation (can be added
later if required).
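Example (sketch only: 'neighbours' and 'recvBufs' are placeholder names
and the speculative receive pattern is an assumption),

    // Record where this batch of requests begins
    const label firstRequest = UPstream::nRequests();

    // Initiate speculative non-blocking receives from the neighbours
    for (const label proci : neighbours)
    {
        UIPstream::read
        (
            UPstream::commsTypes::nonBlocking,
            proci,
            recvBufs[proci].data_bytes(),
            recvBufs[proci].size_bytes(),
            UPstream::msgType(),
            UPstream::worldComm
        );
    }

    // ... later: the speculative receives are no longer required
    UPstream::removeRequests(firstRequest, UPstream::nRequests() - firstRequest);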
ENH: support transfer from a wrapped MPI request to the global list
- allows coding with a list of UPstream::Request and subsequently either
retaining that list or transferring it into the global list.
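Example (sketch only: UPstream::addRequest() is assumed to be the transfer
method, and the non-blocking calls populating 'requests' are not shown),

    // Requests collected independently of the global list
    List<UPstream::Request> requests = ...;

    // Transfer into the global list and track by position instead
    const label startRequest = UPstream::nRequests();
    forAll(requests, i)
    {
        UPstream::addRequest(requests[i]);  // assumed transfer method
    }

    UPstream::waitRequests(startRequest);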
- for querying all outstanding requests:
      if (UPstream::finishedRequests(startRequest)) ...
      if (UPstream::finishedRequests(startRequest, -1)) ...
- for querying a slice of outstanding requests:
      if (UPstream::finishedRequests(startRequest, 10)) ...
- checks requests for completion, returning true when some requests
have completed and false when there are no active requests.
This allows it to be used in a polling loop to progress MPI
and then respond as requests become satisfied.
When used as part of a dispatch loop, waitSomeRequests() is
probably more efficient than calling waitAnyRequest() and can help
avoid biasing which client requests are serviced.
It takes an optional return parameter to retrieve the indices and,
more importantly, to avoid inner-loop reallocations.
Example,

    DynamicList<int> indices;
    while (UPstream::waitSomeRequests(startRequest, &indices))
    {
        // Dispatch something ....
    }

    // Reset list of outstanding requests with 'Waitall' for safety
    UPstream::waitRequests(startRequest);
---
If only dealing with single items and an index is required for
dispatching, it can be better to use a list of UPstream::Request
instead.
Example,

    List<UPstream::Request> requests = ...;

    label index = -1;
    while ((index = UPstream::waitAnyRequest(requests)) >= 0)
    {
        // Do something at index
    }
ENH: pair-wise wrappers for MPI_Test or MPI_Wait
- for send/recv pairs of requests, both can be bundled together and
handled with a single MPI_Testsome or MPI_Waitall instead of two
individual calls.
- previously had an additional stack for freedRequests_,
which was used to 'remember' locations into the list of
outstandingRequests_ that were handled by 'waitRequest()'.
This was principally done for sanity checks on shutdown,
but we now just test for any outstanding requests that
are *not* MPI_REQUEST_NULL instead (much simpler).
The framework with freedRequests_ also had a provision to 'recycle'
them by popping from that stack, but this is rather fragile since it
would only be triggered by some collectives
(MPI_Iallreduce, MPI_Ialltoall, MPI_Igather, MPI_Iscatter)
with no guarantee that these would all be properly removed again.
There was also no pruning of extraneous indices.
ENH: consolidate internal reset/push of requests
- replace duplicate code with inline functions
reset_request(), push_request()
ENH: null out trailing requests
- extra safety (paranoia) for the UPstream::Request versions
of finishedRequests(), waitAnyRequest()
CONFIG: document nPollProcInterfaces in etc/controlDict
- still experimental, but at least make the keyword known
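A minimal sketch of the corresponding entry (the placement under
OptimisationSwitches and the value 0 reflect the existing defaults, not
anything changed here),

    OptimisationSwitches
    {
        // Processor interface polling (experimental): 0 = disabled
        nPollProcInterfaces 0;
    }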
- waits for completion of any of the listed requests and returns the
corresponding index into the list.
This allows, for example, dispatching of data when the receive is
completed.
- permits distinction between communicators/groups that were
user-created (eg, MPI_Comm_create) versus those queried from MPI.
Previously this simply relied on non-null values, but that is too fragile.
ENH: support List<Request> version of UPstream::finishedRequests
- allows more independent algorithms
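A minimal sketch (assuming the overload simply takes the list of wrapped
requests; how the list is populated is not shown),

    List<UPstream::Request> requests = ...;

    if (UPstream::finishedRequests(requests))
    {
        // Requests have completed - the associated buffers can be used
    }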
ENH: added UPstream::probeMessage(...). Blocking or non-blocking
- UPstream::Request wrapping class provides an opaque wrapper for
vendor MPI_Request values, independent of global lists.
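A possible probeMessage() usage sketch (the parameter order and the
(rank, bytes) return pairing are assumptions based on the description),

    // Non-blocking probe for any incoming message on worldComm
    auto probed = UPstream::probeMessage
    (
        UPstream::commsTypes::nonBlocking,
        -1,                     // any processor
        UPstream::msgType(),
        UPstream::worldComm
    );

    if (probed.first >= 0)
    {
        // probed.first  : originating rank
        // probed.second : message size in bytes
        // ... post the matching receive here
    }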
ENH: support for MPI barrier (blocking or non-blocking)
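A minimal barrier sketch (the non-blocking overload taking a
UPstream::Request pointer, and waitRequest() on a wrapped request, are
assumptions based on the description),

    // Blocking barrier on the world communicator
    UPstream::barrier(UPstream::worldComm);

    // Non-blocking variant: capture the request and complete it later
    UPstream::Request req;
    UPstream::barrier(UPstream::worldComm, &req);

    // ... other work ...

    UPstream::waitRequest(req);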
ENH: support for MPI sync-send variants
STYLE: deprecate waitRequests() without a position parameter
- in many cases this can indicate a problem in the program logic since
normally the startOfRequests should be tracked locally.
- now simply a no-op for out-of-range values (instead of an error),
which simplifies the calling code.
Previously
==========

    if (request_ >= 0 && request_ < UPstream::nRequests())
    {
        UPstream::waitRequest(request_);
    }

Updated
=======

    UPstream::waitRequest(request_);
- when 'recycling' freed request indices, ensure they are actually
within the currently addressable range
- MPI finalization now checks outstanding requests against
MPI_REQUEST_NULL to verify that they have been waited on or tested.
Previously it simply checked against the freed request indices.
ENH: consistent initialisation of send/receive bookkeeping