- as part of the cleanup of dictionary access methods (c6520033c9),
  the dictionary class is now single-inheritance from IDLList<entry>.
  This eliminates ambiguities for iterators and allows simple use of
  range-for loops.
  Eg,
      for (const entry& e : topDict)
      {
          Info<< "entry:" << e.keyword() << " is dict:" << e.isDict() << nl;
      }
  vs
      forAllConstIter(dictionary, topDict, iter)
      {
          Info<< "entry:" << iter().keyword()
              << " is dict:" << iter().isDict() << nl;
      }
- use the keyType::option enum to consolidate search options.
These enumeration names should be more intuitive to use
and improve code readability.
Eg, lookupEntry(key, keyType::REGEX);
vs lookupEntry(key, false, true);
or
Eg, lookupEntry(key, keyType::LITERAL_RECURSIVE);
vs lookupEntry(key, true, false);
- new findEntry(), findDict(), findScoped() methods with consolidated
  search options, providing shorter names and access patterns more
  closely aligned with other components. They behave similarly to the
  lookupEntryPtr(), subDictPtr(), lookupScopedEntryPtr() methods,
  respectively. Default search parameters are consistent with lookupEntry().
Eg, const entry* e = dict.findEntry(key);
vs const entry* e = dict.lookupEntryPtr(key, false, true);
- added '*' and '->' dereference operators to dictionary searchers.
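  For illustration, a minimal sketch of the dereferencing (this assumes the
  searcher is obtained via the dictionary csearch() method; 'topDict' and
  the "solver" key are placeholders):
      const auto finder = topDict.csearch("solver");

      if (finder.found())
      {
          // '->' and '*' give direct access to the underlying entry
          Info<< "entry:" << finder->keyword()
              << " is dict:" << (*finder).isDict() << nl;
      }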
- use the dictionary 'get' methods instead of readScalar for
additional checking
Unchecked: readScalar(dict.lookup("key"));
Checked: dict.get<scalar>("key");
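  A short sketch of the checked form in context (the keywords and the
  'dict' reference are placeholders):
      const scalar rho  = dict.get<scalar>("rho");
      const label nCorr = dict.get<label>("nCorrectors");
      const vector U0   = dict.get<vector>("U0");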
- In templated classes that also inherit from a dictionary, an additional
'template' keyword will be required. Eg,
this->coeffsDict().template get<scalar>("key");
For this common use case, the predefined getXXX shortcuts may be
useful. Eg,
this->coeffsDict().getScalar("key");
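  A minimal sketch of the usage inside a templated class (the class,
  member and keyword names are placeholders):
      template<class Type>
      void MyModel<Type>::read()
      {
          // dependent name: 'template' is needed to disambiguate get<scalar>
          k_ = this->coeffsDict().template get<scalar>("k");

          // equivalent, using the predefined shortcut
          k_ = this->coeffsDict().getScalar("k");
      }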
- nBoundaryFaces() is often used and is identical to
(nFaces() - nInternalFaces()).
- forward the mesh nInternalFaces() and nBoundaryFaces() to
  polyBoundaryMesh as start() and nFaces(), respectively,
for use when operating on a polyBoundaryMesh.
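  A small sketch of the intended equivalences (assuming 'mesh' is a polyMesh
  and 'pbm' its boundaryMesh()):
      const polyBoundaryMesh& pbm = mesh.boundaryMesh();

      const label nBnd = mesh.nBoundaryFaces(); // == nFaces() - nInternalFaces()
      const label beg  = pbm.start();           // == mesh.nInternalFaces()
      const label len  = pbm.nFaces();          // == mesh.nBoundaryFaces()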
STYLE:
- use the identity() function with a starting offset when creating boundary maps.
labelList map
(
identity(mesh.nBoundaryFaces(), mesh.nInternalFaces())
);
vs.
labelList map(mesh.nBoundaryFaces());
forAll(map, i)
{
map[i] = mesh.nInternalFaces() + i;
}
Within decomposeParDict, it is now possible to specify a different
decomposition method, method coefficients, or number of subdomains
for each region individually.
The top-level numberOfSubdomains remains mandatory, since this
specifies the number of domains for the entire simulation.
The individual regions may use the same number or fewer domains.
Any optional method coefficients can be specified in a general
"coeffs" entry or a method-specific one, eg "metisCoeffs".
For multiLevel, only the method-specific "multiLevelCoeffs" dictionary
is used, and is also mandatory.
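A sketch of a possible region-wise specification (this assumes the per-region
settings are grouped under a "regions" sub-dictionary, as in the shipped
example decomposeParDict files; the region name and values are placeholders):
    numberOfSubdomains  16;     // mandatory: domains for the entire simulation
    method              metis;

    regions
    {
        heater
        {
            numberOfSubdomains  2;
            method              scotch;
        }
    }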
----
ENH: shortcut specification for multiLevel.
In addition to the longer dictionary form, it is also possible to
use a shorter notation for multiLevel decomposition when the same
decomposition method applies to each level.
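A sketch of the two forms (the level sizes are placeholders; the shortcut
keyword is assumed to be "domains", following the shipped example
dictionaries):
    // longer dictionary form: one sub-dictionary per level
    multiLevelCoeffs
    {
        level0
        {
            numberOfSubdomains  4;
            method              scotch;
        }
        level1
        {
            numberOfSubdomains  8;
            method              scotch;
        }
    }

    // shortcut form: same method at each level
    multiLevelCoeffs
    {
        method   scotch;
        domains  (4 8);
    }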
splitMeshRegions: handle flipping of faces for surface fields
subsetMesh: subset dimensionedFields
decomposePar: use run-time selection of decomposition constraints. These are
    used to keep cells on particular processors. See the decomposeParDict in
    $FOAM_UTILITIES/parallel/decomposePar (a sketch of the constraint syntax
    follows the list below):
- preserveBaffles: keep baffle faces on same processor
- preserveFaceZones: keep faceZones owner and neighbour on same processor
- preservePatches: keep owner and neighbour on same processor. Note: not
suitable for cyclicAMI since these are not coupled on the patch level
- singleProcessorFaceSets: keep complete faceSet on a single processor
- refinementHistory: keep cells originating from a single cell on the
same processor.
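A sketch of the constraint specification (this assumes the constraints are
listed in a "constraints" sub-dictionary of decomposeParDict, each selected
by a "type" entry; the patch and set names are placeholders):
    constraints
    {
        patches
        {
            type     preservePatches;
            patches  (cyclicLeft cyclicRight);
        }

        baffles
        {
            type     preserveBaffles;
        }

        processors
        {
            type     singleProcessorFaceSets;
            singleProcessorFaceSets  ((inletFaces -1));
        }
    }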
decomposePar: clean up decomposition of refinement data from snappyHexMesh
reconstructPar: reconstruct refinement data (refineHexMesh, snappyHexMesh)
reconstructParMesh: reconstruct refinement data (refineHexMesh, snappyHexMesh)
redistributePar:
- corrected mapping of surfaceFields
- added processor patches in an order consistent with decomposePar
argList: check that slaves are running same version as master
fvMeshSubset: move to dynamicMesh library
fvMeshDistribute:
- support for mapping dimensionedFields
- corrected mapping of surfaceFields
parallel routines: allow parallel running on a single processor
Field: support for
- distributed mapping
- mapping with flipping
mapDistribute: support for flipping
AMIInterpolation: avoid constructing localPoints
- redistributePar now provides almost the complete functionality of decomposePar+reconstructPar
- low-level distributed Field mapping
- support for mapping surfaceFields (including flipping faces)
- support for decomposing/reconstructing refinement data