- simplifies code by avoiding code duplication:
* parLagrangianDistributor
* meshToMesh (processorLOD and AABBTree methods)
BUG: inconsistent mapping when using processorLOD boxes (fixes #2932)
- internally the processorLODs createMap() method used a 'localFirst'
layout whereas a 'linear' order is what is actually expected for the
meshToMesh mapping. This would cause incorrect behaviour
if using processorLOD instead of AABBTree.
A dormant bug since processorLOD is not currently selectable.
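As an illustration only (plain C++ with invented values, not the actual
createMap() code), the two layouts order the same gathered elements
differently, which is why assuming the wrong one scrambles the mapping:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main()
    {
        const int myRank = 1;   // viewpoint of rank 1 in a 3-rank run

        // elements contributed by ranks 0..2
        const std::vector<std::vector<int>> perProc
        {
            {10, 11}, {20, 21, 22}, {30}
        };

        // 'linear': concatenate in rank order - what meshToMesh expects
        std::vector<int> linear;
        for (const auto& elems : perProc)
        {
            linear.insert(linear.end(), elems.begin(), elems.end());
        }

        // 'localFirst': own elements first, then the other ranks in order
        std::vector<int> localFirst(perProc[myRank]);
        for (std::size_t proci = 0; proci < perProc.size(); ++proci)
        {
            if (int(proci) != myRank)
            {
                const auto& elems = perProc[proci];
                localFirst.insert(localFirst.end(), elems.begin(), elems.end());
            }
        }

        for (int v : linear) std::cout << v << ' ';      // 10 11 20 21 22 30
        std::cout << '\n';
        for (int v : localFirst) std::cout << v << ' ';  // 20 21 22 10 11 30
        std::cout << '\n';
    }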
- individual processor Time databases are purely for internal logistics
and should not introduce any new library symbols: these will
already have been loaded in the outer loop.
- MPI_THREAD_MULTIPLE is usually undesirable for performance reasons,
but in some cases may be necessary if a linked library expects it.
Provide a '-mpi-threads' option to explicitly request it.
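A minimal sketch, in plain MPI rather than the UPstream layer, of what that
request amounts to (the check on 'provided' is illustrative):

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        int provided = 0;

        // Without the option a lower thread level suffices; with
        // '-mpi-threads' the full MPI_THREAD_MULTIPLE level is requested.
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE)
        {
            std::printf("MPI_THREAD_MULTIPLE not provided (got %d)\n", provided);
        }

        MPI_Finalize();
        return 0;
    }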
ENH: consolidate some looping logic within argList
- primarily relevant for finite-area meshes, in which case they can be
considered to be an additional, detailed diagnostic that is
normally not needed (clutters the file system).
Writing can be enabled with the `-write-edges` option.
- fatten the interface to continue allowing write control with a bool
or with a dedicated file handler. This may slim down in the future.
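A rough, self-contained sketch of the overload pattern being kept; the names
here are invented and are not the actual interface:

    #include <iostream>

    struct fileHandler { bool writeOnProc = true; };  // stand-in, not the real handler

    void writeAreaMesh(bool doWrite)                  // write control via a bool
    {
        if (doWrite) std::cout << "writing (bool control)\n";
    }

    void writeAreaMesh(const fileHandler& handler)    // or via a dedicated handler
    {
        if (handler.writeOnProc) std::cout << "writing (handler control)\n";
    }

    int main()
    {
        writeAreaMesh(true);
        writeAreaMesh(fileHandler{});
    }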
Co-authored-by: mattijs <mattijs>
- added UPstream::allGatherValues() with a direct call to MPI_Allgather.
This enables possible benefit from a variety of internal algorithms
and simplifies the caller.
Old:
labelList nPerProc
(
UPstream::listGatherValues<label>(patch_.size(), myComm)
);
Pstream::broadcast(nPerProc, myComm);
New:
const labelList nPerProc
(
UPstream::allGatherValues<label>(patch_.size(), myComm)
);
- Pstream::allGatherList uses MPI_Allgather for contiguous values
instead of the hand-rolled tree walking involved with
gatherList/scatterList.
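In plain MPI terms (a sketch, not the Pstream code itself), a single
allgather of one contiguous value per rank replaces the gather-to-master
plus redistribution:

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int nProcs = 0, myValue = 0;
        MPI_Comm_size(MPI_COMM_WORLD, &nProcs);
        MPI_Comm_rank(MPI_COMM_WORLD, &myValue);  // stand-in for any per-rank value

        std::vector<int> allValues(nProcs);

        // every rank receives the complete list in rank order,
        // with no explicit tree walking
        MPI_Allgather(&myValue, 1, MPI_INT, allValues.data(), 1, MPI_INT,
                      MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }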
- simplified the calling parameters for mpiGather/mpiScatter.
Since send/recv data types are identical, the send/recv count
is also always identical, which eliminates the possibility of any
discrepancies.
Since this is a low-level call, it does not affect much code.
Currently just Foam::profilingPstream and a UPstream internal.
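For illustration (plain MPI, not the actual UPstream wrapper signatures):
with identical send/recv types the per-rank counts must match, so a single
count parameter is sufficient:

    #include <mpi.h>

    // one 'count' serves both sides; the recv count is per-rank, so it is
    // necessarily the same value when the send/recv data types are identical
    void gatherChars(const char* sendBuf, char* recvBuf, int count,
                     MPI_Comm comm)
    {
        MPI_Gather(sendBuf, count, MPI_BYTE, recvBuf, count, MPI_BYTE, 0, comm);
    }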
BUG: call to MPI_Allgather had hard-coded MPI_BYTE (not the data type)
- a latent bug, since only char data is currently passed anyhow
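The fix, sketched in plain MPI terms (not the exact UPstream code): pass the
caller's datatype instead of the hard-coded MPI_BYTE, which only happened to
be harmless because char data was the sole user:

    #include <mpi.h>

    void allGather(const void* sendBuf, void* recvBuf, int count,
                   MPI_Datatype dataType, MPI_Comm comm)
    {
        // before: ... count, MPI_BYTE, recvBuf, count, MPI_BYTE, ...
        MPI_Allgather(sendBuf, count, dataType, recvBuf, count, dataType, comm);
    }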
- use typeHeaderOk<regIOobject>(false) for some generic file existence
checks. Often had something like labelIOField as a placeholder, but
that may be construed to imply a particular content type.
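A hedged sketch of such a check (the object name and location are
illustrative only, not taken from the commit):

    // generic "does a valid header exist?" test: regIOobject with
    // checkType=false does not imply any particular content type
    IOobject io
    (
        "cellLevel",
        mesh.facesInstance(),
        polyMesh::meshSubDir,
        mesh
    );

    const bool found = io.typeHeaderOk<regIOobject>(false);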
- coupled patches are treated distinctly and independently of
internalFacesOnly, so it makes little sense to report them with a
warning about turning off boundary faces (which would not work
anyhow).
STYLE: update code style for createBaffles
- this complements the whichPatch(meshFacei) method [binary search]
and the list of patchID() by adding internal range checks.
eg,
Before
~~~~~~
if (facei >= mesh.nInternalFaces() && facei < mesh.nFaces())
{
patchi = pbm.patchID()[facei - mesh.nInternalFaces()];
...
}
After
~~~~~
patchi = pbm.patchID(facei);
if (patchi >= 0)
{
...
}
ENH: add pTraits and IO for std::int8_t
STYLE: cull some implicitly available includes
- pTraits.H is included by label/scalar etc
- zero.H is included by UList
STYLE: cull redundant forward declarations for Istream/Ostream
- newer naming allows for less confusing code.
Eg,
max(lower) -> clamp_min(lower)
min(upper) -> clamp_max(upper)
- prefer the combined method, for fewer operations.
Eg,
max(lower) + min(upper) -> clamp_range(lower, upper)
The updated naming also helps avoid some obvious coding errors.
Eg, writing
    Re.min(1200.0);
    Re.max(18800.0);
(which actually forces all of Re to 18800) instead of the intended
    Re.clamp_range(1200.0, 18800.0);
- can also use implicit conversion of zero_one to MinMax<Type> for
this type of code:
lambda_.clamp_range(zero_one{});