The most frequent changes were of the following form:
from:

    tmp<scalarField> tuTau(new scalarField(patch().size(), Zero));
    scalarField& uTau = tuTau.ref();

to:

    auto tuTau = tmp<scalarField>::New(patch().size(), Zero);
    auto& uTau = tuTau.ref();
- Other changes involved the addition, wherever appropriate, of:
    const
    noexcept
    auto
  as sketched below.
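A minimal sketch (plain C++, not actual OpenFOAM code) illustrating the
kinds of qualifiers involved: 'const' and 'noexcept' on accessors that
neither modify state nor throw, and 'auto' where the type is evident
from the initializer:

    #include <cstddef>
    #include <vector>

    class Patch
    {
        std::vector<double> values_;

    public:
        // const + noexcept: queries that cannot modify state or throw
        std::size_t size() const noexcept { return values_.size(); }
        bool empty() const noexcept { return values_.empty(); }
    };

    int main()
    {
        Patch patch;

        // auto: the type is clear from the right-hand side
        const auto n = patch.size();

        return static_cast<int>(n);
    }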
Previously, a reference to a nutWallFunctionFvPatchScalarField had to
be created in the epsilon, k and omega wall functions to fetch the
common wall-function coefficients needed to carry out the local
operations inside these wall functions.
However, this arrangement required the use of a nut wall function even
when unnecessary, whenever any of the non-nut wall functions was used.
Users were therefore needlessly constrained, and confronted with
rather obscure casting-error messages.
Also, the wall-function coefficients Cmu, kappa and E were obtained
from the specified nutWallFunction in order to ensure that each patch
possesses the same set of values for these coefficients.
Although the motivation sounds reasonable, this also placed redundant
restraints on users and disregarded the specifics of each wall function.
For example, the variation of epsilon in near-wall regions is usually
very steep and non-monotonic; an expert user may therefore want to use
an epsilon-specific coefficient, and this was not allowed by the
previous arrangement.
This commit introduces a new class (i.e. wallFunctionCoefficients) comprising
all common wall-function coefficients and yPlus calculations.
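A self-contained sketch (plain C++; the member names Cmu/kappa/E and
the yPlusLam iteration are assumptions based on the description above,
not the actual class) of the pattern: each wall function owns its
coefficients directly instead of casting to the nut wall function of
another field:

    #include <cmath>
    #include <iostream>

    struct WallCoefficients
    {
        double Cmu = 0.09;
        double kappa = 0.41;
        double E = 9.8;

        // intersection of the viscous and log-law layers, found by
        // fixed-point iteration of yPlusLam = log(E*yPlusLam)/kappa
        double yPlusLam() const
        {
            double ypl = 11.0;
            for (int i = 0; i < 10; ++i)
            {
                ypl = std::log(E*ypl)/kappa;
            }
            return ypl;
        }
    };

    int main()
    {
        // an epsilon wall function could now carry epsilon-specific
        // values without affecting the nut wall function
        WallCoefficients epsilonCoeffs;
        epsilonCoeffs.kappa = 0.40;

        std::cout << "yPlusLam = " << epsilonCoeffs.yPlusLam() << '\n';
        return 0;
    }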
Previously, a number of wall functions were not writing
their boundary-condition entries in the de facto order
(i.e. from type to value) when writing a field. For example:
    <patchName>
    {
        lowReCorrection 1;
        blending        stepwise;
        n               2;
        type            epsilonWallFunction;  // expected to be the first entry
        value           uniform 1;            // expected to be the last entry
    }
Also, various wall functions were writing out entries that are
not actually used by the wall function. For example:
    <patchName>
    {
        type      nutUSpaldingWallFunction;
        ...
        blending  stepwise;  // no blending treatment in nutUSpaldingWF
        ...
    }
Additionally, various derived wall functions (e.g. atmOmegaWallFunction)
failed to write some of the inherited entries even though these
entries are used in carrying out the wall-function calculations.
Taking these points into consideration, the wall functions have been
reworked to write their traits in a reliable and consistent way when a
field is written out.
- writeLocalEntries uses writeIfDifferent if constructed with getOrDefault.
ENH: simple faMeshSubset (zero-sized meshes only)
ENH: additional access methods for faMesh, primitive geometry mode
- wrapped walking of boundary edgeLabels as list of list
(similar to edgeFaces).
- primitive finiteArea geometry mode with reduced communication:
primarily interesting for decomposition/redistribution (#2436)
ENH: extra vtk debug outputs for checkFaMesh
- report per-processor sizes in the mesh summary
- similar functionality to newMesh etc.
Relocated to finiteVolume since there are no dynamicMesh dependencies.
- use simpler procAddressing (with updated mapDistributeBase).
separated from redistributePar
- returns UPtrList view (read-only or read/write) of the objects
- shorter names for IOobject checks: hasHeaderClass(), isHeaderClass()
- remove unused IOobject::isHeaderClassName(const word&) method.
The typed versions are preferable/recommended, but can still check
directly if needed:
(io.headerClassName() == "foo")
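For illustration, a hedged usage sketch of the checks named above
(exact signatures are assumptions inferred from the names, not
verified):

    // typed check (preferred)
    if (io.isHeaderClass<labelIOList>())
    {
        ...
    }

    // direct string comparison (still possible)
    if (io.headerClassName() == "labelList")
    {
        ...
    }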
- additional distribute/reverseDistribute with specified commsType.
  Improves flexibility (see the sketch after this list).
- distribute with nullValue
- support move construct mapDistribute from mapDistributeBase
- refactor handling of schedules (as whichSchedule method) to
simplify code.
- renumberMap helper for working with compact sub maps
and renumberVisit for handling walk-ordered compaction.
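A hedged usage sketch for the commsType overloads mentioned above
(argument order and enum spelling are assumptions, not verified
against the actual signatures):

    labelList values = ...;

    // explicitly select the communication type for this exchange
    map.distribute(Pstream::commsTypes::scheduled, values);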
COMP: make mapDistributeBase data private
- accessor methods are available - direct access is unnecessary
- mapDistribute : inherit mapDistributeBase constructors
STYLE: use List<labelPair>::null() for schedule placeholders
- clearer that they are doing nothing
- for int64 compilations this disambiguates between '0' as int32 (size)
or as bool 'false' for local processor validity
Eg,
    IOList list(io, 0);     // with label-size 64: is this bool or label?
    IOList list(io, Zero);  // size = 0 (int32/int64), not a bool
- for indirect lists we use element-wise output streaming and read
back as a regular list. This approach cannot however work with
non-blocking mode - the receive buffers will simply not be filled
before attempting to read from them.
For contiguous data, the lowest overhead solution is to locally
flatten the indirect list and use the regular gather routines
for non-blocking mode. For non-contiguous data, can continue to
use the element-wise output, but cannot use non-blocking for it.
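A generic sketch (plain C++, not the OpenFOAM routines) of the
"flatten, then gather" idea for contiguous data: copy the indirectly
addressed elements into a flat buffer that a regular non-blocking
gather can send directly:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    std::vector<double> flatten
    (
        const std::vector<double>& values,
        const std::vector<std::size_t>& addr   // indirect addressing
    )
    {
        std::vector<double> flat;
        flat.reserve(addr.size());
        for (const auto idx : addr)
        {
            flat.push_back(values[idx]);   // local copy, low overhead
        }
        return flat;   // contiguous buffer, safe for non-blocking sends
    }

    int main()
    {
        const std::vector<double> values{1.5, 2.5, 3.5, 4.5};
        const std::vector<std::size_t> addr{3, 0, 2};

        for (const double v : flatten(values, addr))
        {
            std::cout << v << ' ';   // 4.5 1.5 3.5
        }
        std::cout << '\n';
        return 0;
    }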
STYLE: use non-blocking consistently as default for globalIndex gather(s)
- most of the front-facing code was already using non-blocking,
but there were a few low-level routines defaulting to scheduled
(but never relied upon in the code).
- previously filtered on the existence of area fields, but with
faMesh::TryNew this is not required anymore.
STYLE: enable -verbose for various parallel utilities (consistency)
- introduced UList<bool>::operator()(label) as part of bf0b3d8872
but with gcc-4.8.5 this participates in operator resolution even
for non-bool lists!!
Partial revert until this predicate handling is really required.
- use DynamicList instead of List in the cache, which reduces the
  number of allocations occurring each time.
- since the cached times are stored in sorted order, first check if the
new time is greater than the last list entry. Can then simply append
without performing a binary search and can obviously also skip any
subsequent sorting.
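A generic sketch (plain C++) of the cache update just described:
because the cache is kept sorted, a time greater than the last entry
is simply appended, skipping the binary search and any re-sort:

    #include <algorithm>
    #include <vector>

    void addTime(std::vector<double>& cache, const double t)
    {
        // common case: monotonically increasing time; append directly
        if (cache.empty() || t > cache.back())
        {
            cache.push_back(t);
            return;
        }

        // otherwise insert at the sorted position (if not present)
        const auto it = std::lower_bound(cache.begin(), cache.end(), t);
        if (it == cache.end() || *it != t)
        {
            cache.insert(it, t);
        }
    }

    int main()
    {
        std::vector<double> cache;
        for (const double t : {0.1, 0.2, 0.3, 0.25})
        {
            addTime(cache, t);   // only the last value needs a search
        }
        return static_cast<int>(cache.size());   // 4
    }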
STYLE: add noexcept to Instant methods, declare in header (like Tuple2)
- as part of #2358 the writing was changed to be lazy, meaning that
  files are only created just before they are actually written.
  This helps avoid flooding the filesystem if sample-only is required,
  and also handles cases such as "rho.*" where the sampled fields are
  not known from the objectRegistry at startup.
- now create any new files using the startTime value, which means they
are easier to find but still retains the lazy construct.
Don't expect any file collisions with this, but there could be some
corner cases where the user has edited the case to remove fields
(during runtime) and then re-edits to add them back in. In this case
the file pointers would be closed but later reopened, overwriting
the old probed values. This could be considered a feature or a bug.
BUG: bad indexing for streamlines (fixes #2454)
- a cut-and-paste error
- only wrap compiler calls (not things like flex/bison)
- avoid single quoted '&&' (causes syntax errors)
STYLE: report WM_COMPILE_CONTROL value in top-level Allwmake
- relocate templating to factory method 'New'.
Adds provisions for more general re-use.
- expose processor topology in globalMesh as topology()
- wrap proc->patch lookup as processorTopology::procPatchLookup method
(failsafe). May consider using Map<label> for its storage in the
future.
- Uses a refPtr to reference external content.
Useful (for example) when writing data without copying.
Reading into external locations is not implemented
(no current requirement for that).
* IOFieldRef -> IOField
* IOListRef -> IOList
* IOmapDistributePolyMeshRef -> IOmapDistributePolyMesh
Eg,
    labelList addressing = ...;
    io.rename("cellProcAddressing");
    IOListRef<label>(io, addressing).write();

Or,
    primitivePatch patch = ...;
    IOFieldRef<vector>(io, patch.localPoints()).write();
- the values from non-overlapping blocks were simply ignored,
which meant that ('111111111111' & '111111') would not mask out
the unset values at all.
- similar oddities in other operations (|=, ^= etc)
where the original implementation tried hard to avoid touching the
sizing at all, but now better resolved as follows:
- '|=' : Set may grow to accommodate new 'on' bits.
- '^=' : Set may grow to accommodate new 'on' bits.
- '-=' : Never changes the original set size.
- '&=' : Never changes the original set size.
Non-overlapping elements are considered 'off'.
These definitions are consistent with HashSet behaviour
and also ensures that (a & b) == (b & a)
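A generic demonstration (plain C++ with std::vector<bool> standing in
for bitSet) of the corrected '&=' semantics: the original size is kept
and non-overlapping positions are treated as 'off', so (a & b) and
(b & a) agree on the 'on' positions:

    #include <cassert>
    #include <cstddef>
    #include <vector>

    std::vector<bool> andOp(std::vector<bool> a, const std::vector<bool>& b)
    {
        for (std::size_t i = 0; i < a.size(); ++i)
        {
            // positions beyond b's size count as 'off'
            a[i] = a[i] && (i < b.size() && b[i]);
        }
        return a;   // the size of 'a' never changes
    }

    int main()
    {
        const std::vector<bool> a(12, true);   // '111111111111'
        const std::vector<bool> b(6, true);    // '111111'

        const auto ab = andOp(a, b);   // 12 bits, first 6 set
        const auto ba = andOp(b, a);   // 6 bits, all set

        for (std::size_t i = 0; i < 6; ++i)  assert(ab[i] && ba[i]);
        for (std::size_t i = 6; i < 12; ++i) assert(!ab[i]);

        return 0;
    }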
ENH: improve short-circuiting within bitSet ops
- in a few places can optimise by checking for none() instead of
empty() and avoid unnecessary block operations.
ENH: added bitSet::resize_last() method
- as the name says: resizes to the last bit set.
A friendlier way of writing `resize(find_last()+1)`
- uniq() : creates an IndirectList with duplicated entries
filtered out
- subset() : creates an IndirectList with positions that satisfy
a condition predicate.
- subset_if() : creates an IndirectList with values that satisfy a
given predicate.
An indirect subset will be cheaper than creating a subset copy
of the original data, and also allows modification.
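A generic sketch (plain C++, not the actual OpenFOAM functions) of the
indirect-subset idea: store indices into the original data instead of
copying the selected values, which is cheaper and still allows
modification of the underlying elements:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    template<class T, class Pred>
    std::vector<std::size_t> subsetIndices
    (
        const std::vector<T>& values,
        Pred pred
    )
    {
        std::vector<std::size_t> addr;
        for (std::size_t i = 0; i < values.size(); ++i)
        {
            if (pred(values[i])) addr.push_back(i);
        }
        return addr;   // the "indirect list": indices, not copies
    }

    int main()
    {
        std::vector<int> values{3, -1, 4, -1, 5};

        // 'subset_if'-like selection of positive values
        const auto addr =
            subsetIndices(values, [](int v) { return v > 0; });

        for (const auto i : addr)
        {
            values[i] *= 2;   // modification reaches the original data
        }

        for (const int v : values) std::cout << v << ' ';
        std::cout << '\n';   // 6 -1 8 -1 10
        return 0;
    }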
STYLE: combine the UIndirectList headers into UIndirectList.H (reduce file clutter)
- the sorted() method fills a UPtrList with sorted entries. In some
  places this can provide a more convenient means of traversing a
  HashTable in consistent order, without the extra step of creating
  a sortedToc(). The sorted() method with a UPtrList also has a lower
  overhead than creating any sortedToc() or toc(), since it is a list
  of pointers and not full copies of the keys.
Instead of this:

    HashTable<someType> table = ...;

    for (const word& key : table.sortedToc())
    {
        Info<< key << " => " << table[key] << nl;
    }

can write this:

    for (const auto& iter : table.sorted())
    {
        Info<< iter.key() << " => " << iter.val() << nl;
    }
STYLE:
- declare hash entry key 'const' since it is immutable
- local writeHeaderEntry helper was not marked as file-scope static.
- use do/while to simplify handling of padding spaces
ENH: IOobject - copy construct, resetting name and local component
- when copying with a new local component, this is simpler than
constructing from all of the components, which was previously the
only possibility for setting a new local component.
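A hedged usage sketch (the exact constructor signature is an
assumption based on the description above, not verified):

    // copy an existing IOobject, resetting only name and local path
    IOobject procIO(io, "cellProcAddressing", "polyMesh");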
- commonly used, only depends on routines defined in UList
(don't need the rest of ListOps for it).
ENH: implement boolList::operator() const
- allows use as a predicate functor, as per bitSet and labelHashSet
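A short usage sketch (the out-of-range behaviour is an assumption,
by analogy with bitSet: queries beyond the list size return false):

    boolList isActive(mesh.nCells(), false);
    ...
    if (isActive(celli))   // predicate-style query
    {
        ...
    }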
GIT: combine SubList, UList into List directory (intertwined concepts)
STYLE: default initialize DynamicList instead of with size 0
- specifies the number of consecutive cells to assign to the same
  randomly chosen processor. Can be used to obtain a less extreme
  random distribution for testing possible breaking points.
Eg,
    method random;

    coeffs
    {
        agglom  4;
    }
- Add finiteArea cellID (actually face ids) / faceLabel and procID
for foamToVTK with -write-ids. Useful when this type of information
is needed.
- An arbitrary number of outlets can be connected to a single inlet
- Each inlet can be connected to a different, arbitrary
  combination of outlets
- Each outlet-inlet connection has:
  - Optional filtration fraction as a Function1 type
  - Optional offset as a Function1 type (i.e. adding/subtracting a substance)
  - Optional time delay (from outlet to inlet) as a Function1 type
- Each inlet has an optional base inlet-field as a PatchFunction1 type
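A purely hypothetical dictionary sketch of one such connection; every
keyword below is an illustrative assumption, not the actual interface:

    <inletPatchName>
    {
        // hypothetical keywords for illustration only
        type            outletMappedInlet;
        outletPatches   (outlet1 outlet2);
        fraction        constant 0.5;  // Function1: filtration fraction
        offset          constant 0.0;  // Function1: added/subtracted substance
        delay           constant 1.0;  // Function1: outlet-to-inlet time delay
        value           uniform 0;
    }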