- old logic (v2206 and earlier) always disabled writing on non-master,
but other parts of the code were more recently updated to use lazy
evaluation of surface data (with parallel communication)
- now retain full write/no-write logic identically on all ranks. Take
care of master/non-master at the final output stage.
It has been observed that the finite-area framework is prone to numerical
issues when zero-valued edge lengths, edge/face normals and face areas exist.
To improve exception handling at the identified code sections and gracefully
overcome math errors, the problematic entities are lower-bounded by SMALL.
Specified using the optional 'omega' entry (Function1 type), e.g. for a constant
value:
omega 12.56;
Note that the swirl contribution is applied in addition to the velocity set by
the 'flowType' option. For example, for the 'constantVelocity' option, parcels
are initially assigned a velocity according to the UMag and direction/cone angle;
the swirl velocity is then added.
- similar to surface writing formats, also support optional
dictionary of reading options. The main beneficiary of this is the
ensight surface reader:
readOptions
{
ensight
{
masterOnly true;
}
}
This will restrict reading to the master rank. Surfaces and values
read will be broadcast to the other ranks, with the intention of
reducing load on the filesystem.
ENH: add writing of Dimensioned fields for areaWrite functionObject
- can be useful for examining finite-area source terms
- flowRate: volume flow-rate through given patches
- flowRatePartition: distribution of the inlet flow-rate to certain
outlet patches, with given percentages
- uniformityPatch: uniformity of the velocity field at given (outlet) patches,
expressed as (half) the variance of the velocity field
- uniformityCellZone: same as uniformityPatch, but defined over
cellZones
- powerDissipation: the fluid power dissipation taking place within
given cellZones. In the absence of viscous stress at the "inlets" and
"outlets" of the cellZones, this corresponds to the volume flow-rate
weighted total pressure losses through the cellZones
ENH: updated nutSqr so it can be used with adjointkOmegaSST too
to help allocate pointers related to contributions to the adjoint
turbulence model PDEs, populate them and check the validity of the
cellZones provided for cellZone-based objectives
This pack adds a new entry 'parcelTypes' which can specify the list of
parcel type IDs interacting with a surface film. If the entry
is omitted, all particle types are considered.
```
surfaceFilmModel kinematicSurfaceFilm;
kinematicSurfaceFilmCoeffs
{
interactionType absorb;
// Optional list of participating parcel IDs
parcelTypes (10);
}
```
To set the parcel type by injector, the 'injectorID' entry can be used
when specifying the injection models, e.g.
```
injectionModels
{
model1
{
type <injectionModelType>;
// Optional injector ID
// - if omitted, parcels use '-1'
injectorID 10;
...
}
}
```
- make fileHandler deletion mechanism more
transparent by providing a nullptr signature. A nullptr parameter
is already being used in the argList destructor for shutdown, but that
relied on an implicit conversion to autoPtr to trigger things.
- improved handling of file handler replacement.
Previously had a very basic check on old vs new handlers using their
type() values (string comparison!!), which would unfortunately
prevent proper swapping of the contents.
Check the actual pointers instead.
As part of the change, treat any empty autoPtr as no-op instead of as
deletion (which is handled explicitly as nullptr instead).
In addition to making the internal logic simpler, it means that the
current file handler always changes to a valid state without
inadvertently removing everything and falling back to creating a new
default handler (again).
This handling of no-ops also simplifies call code. For example,
<code>
autoPtr<fileOperation> oldHandler;
autoPtr<fileOperation> writeHandler;
word handlerName;
if (arg.readIfPresent("writeHandler", handlerName))
{
writeHandler = fileOperation::New(handlerName);
}
oldHandler = fileHandler(std::move(writeHandler));
... do something
writeHandler = fileHandler(std::move(oldHandler));
</code>
If the "writeHandler" is not specified, each call is a no-op.
If it is specified, the handlers are swapped out each time.
- the management of the fileHandler communicators is now encapsulated
privately (managedComm_) with the final layer being responsible for
cleaning up after itself. This makes delegation/inheritance clearer
and avoids the risk of freeing an MPI communicator twice.
STYLE: uniformFile static check relocated to fileOperation layer
- UPstream::globalComm constant always refers to MPI_COMM_WORLD but
UPstream::worldComm could be MPI_COMM_WORLD (single world)
or a dedicated local communicator (for multi-world).
- provide a Pstream wrapped version of MPI_COMM_SELF,
referenced as UPstream::selfComm
- UPstream::isUserComm(label)
test for additional user-defined communicators
- recover the target of symbolic links.
This is needed when re-creating a file tree on another rank.
ENH: handle checkGzip, followLink flags in fileHandler filePath()
- previously just relied on the backend defaults, now pass through
- separate init(...) for common constructor init steps
- was previously populated with "IOobject" (the typeName), which made it
hard to detect whether the object had actually been read.
Also clear the headerClassName on a failed read
BUG: parallel inconsistency in regIOobject::readHeaderOk
- headerOk() checked with master, but possible parallel operations
within it
- comprises a few different elements:
FilterField (currently packaged in PatchFunction1Types namespace)
~~~~~~~~~~~
The FilterField helper class provides a multi-sweep median filter
for a Field of data associated with a geometric point cloud.
The points can be freestanding or the faceCentres (or points)
of a meshedSurface, for example.
Using an initial specified search radius, the nearest point
neighbours are gathered and addressing/weights are built for them.
This currently uses an area-weighted, linear RBF interpolator
with provision for quadratic RBF interpolator etc.
After the weights and addressing are established,
the evaluate() method can be called to apply a median filter
to data fields, with a specified number of sweeps.
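As a rough standalone illustration of the multi-sweep idea (not the FilterField implementation itself; the neighbour addressing is assumed to have been gathered beforehand and the RBF weighting is omitted):
```
#include <algorithm>
#include <cstddef>
#include <vector>

// One median sweep over scalar data, given per-point neighbour addressing
// (each entry lists the point indices gathered within the search radius,
// including the point itself)
std::vector<double> medianSweep
(
    const std::vector<double>& values,
    const std::vector<std::vector<std::size_t>>& neighbours
)
{
    std::vector<double> result(values);

    for (std::size_t i = 0; i < values.size(); ++i)
    {
        std::vector<double> local;
        for (std::size_t j : neighbours[i])
        {
            local.push_back(values[j]);
        }

        if (!local.empty())
        {
            // Median of the gathered neighbourhood
            std::nth_element
            (
                local.begin(),
                local.begin() + local.size()/2,
                local.end()
            );
            result[i] = local[local.size()/2];
        }
    }
    return result;
}

// Apply a specified number of sweeps, as per the evaluate() description
std::vector<double> medianFilter
(
    std::vector<double> values,
    const std::vector<std::vector<std::size_t>>& neighbours,
    int nSweeps
)
{
    for (int sweep = 0; sweep < nSweeps; ++sweep)
    {
        values = medianSweep(values, neighbours);
    }
    return values;
}
```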
boundaryDataSurfaceReader
~~~~~~~~~~~~~~~~~~~~~~~~~
- a surfaceReader (similar to ensightSurfaceReader) when a general
point data reader is needed.
MappedFile
~~~~~~~~~~
- has been extended to support alternative surface reading formats.
This allows, for example, sampled ensight data to be reused for
mapping. Caveat: multi-patch entries may still need some work.
- additional multi-sweep median filtering of the input data.
This can be used to remove higher spatial frequencies when
sampling onto a coarse mesh.
smoothSurfaceData
~~~~~~~~~~~~~~~~~
- standalone application for testing of filter radii/sweeps
Changes / Improvements
- more consistent subsetting, interface
* Extend the use of subset and non-subset collections with uniform
internal getters to ensure that the subset/non-subset versions
are robustly handled.
* operator[](label) and objectIndex(label) for standardized access
to the underlying item, or the original index, regardless of
subsetting or not.
* centres() and centre(label) for representative point cloud
information.
* nDim() returns the object dimensionality (0: point, 1: line, etc).
This can be used to determine how 'fat' each shape may be
and whether bounds(labelList) may contribute any useful information.
* bounds(labelList) to return the full bound box required for
specific items. Eg, the overall bounds for various 3D cells.
- easier construction of non-caching versions. The bounding boxes are
rarely cached, so simpler constructors without the caching bool
are provided.
- expose findNearest (bound sphere) method to allow general use
since this does not actually need a tree.
- static helpers
The boxes() static methods can be used by callers that need to build
their own treeBoundBoxList of common shapes (edge, face, cell)
that are also available as treeData types.
The bounds() static methods can be used by callers to determine the
overall bound-box size prior to constructing an indexedOctree
without writing ad hoc code in place.
Not implemented for treeDataPrimitivePatch since similar
functionality is available directly from the PrimitivePatch::box()
method with less typing.
========
BREAKING: cellLabels(), faceLabels(), edgeLabels() access methods
- it was always unsafe to use the treeData xxxLabels() methods without
subsetting elements. However, since the various classes
(treeDataCell, treeDataEdge, etc) automatically provided
an identity lookup, this problem was not apparent.
Use objectIndex(label) to safely de-reference to the original index
and operator[](index) to de-reference to the original object.
- more memory efficient within loops
- octree/boundBox overlaps().
Like findBox(), findSphere() but with early exit if any shapes overlap
(see the sketch below).
ENH: additional query for nLeafs()
- don't need separate scratch arrays (avoids possible reallocations
when split is imbalanced)
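A minimal sketch of the early-exit overlap test mentioned above (plain C++, not the indexedOctree code): rather than collecting all candidates as findBox()/findSphere() do, the search can stop at the first overlapping bound.
```
#include <algorithm>
#include <vector>

// Minimal axis-aligned bounding box (illustrative only)
struct Box
{
    double min[3], max[3];

    bool overlaps(const Box& b) const
    {
        for (int d = 0; d < 3; ++d)
        {
            if (b.max[d] < min[d] || b.min[d] > max[d]) return false;
        }
        return true;
    }
};

// True as soon as any shape bound overlaps the search box (early exit)
bool anyOverlap(const std::vector<Box>& shapeBounds, const Box& searchBox)
{
    return std::any_of
    (
        shapeBounds.begin(),
        shapeBounds.end(),
        [&searchBox](const Box& b) { return searchBox.overlaps(b); }
    );
}
```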
ENH: upgrade dynamicIndexedOctree to use DynamicList directly
- with C++11 move semantics don't need lists of autoPtr
for efficient transfers
- use default initialize boundBox instead of invertedBox
- reset() instead of assigning from invertedBox
- extend (three parameter version) and grow method
- inflate(Random) instead of extend + re-assigning
- null() static method
* as const reference to the invertedBox with the appropriate casting.
- boundBox inflate(random)
* refactored from treeBoundBox::extend, but allows in-place modification
- boundBox::hexFaces() instead of boundBox::faces
* rarely used, but avoids confusion with treeBoundBox::faces
and reuses hexCell face definitions without code duplication
- boundBox::hexCorners() for corner points corresponding to a hexCell.
Can also be accessed from a treeBoundBox without ambiguity with
points(), which could be hex corners (boundBox) or octant corners
(treeBoundBox)
- boundBox::add with pairs of points
* convenient (for example) when adding edges or a 'box' that has
been extracted from a primitive mesh shape.
- declare boundBox nPoints(), nFaces(), nEdges() as per hexCell
ENH: return invertedBox instead of FatalError for empty trees
- similar to #2612
ENH: cellShape(HEX, ...) + boundBox hexCorners for block meshes
STYLE: cellModel::ref(...) instead of de-reference cellModel::ptr(...)
- the boundBox for a given cell, using the cheapest calculation:
- cellPoints if already available, since this will involve the
fewest min/max comparisons.
- otherwise walk the cell faces: via the cell box() method
to avoid creating demand-driven cellPoints etc.
ENH: use direct access to pointHit as point(), use dist(), distSqr()
- if the pointHit has already been checked for hit(), can/should
simply use point() noexcept access subsequently to avoid redundant
checks. Using vector distSqr() methods provides a minor optimization
(no intermediate temporary), but can also make for clearer code.
ENH: copy construct pointIndexHit with different index
- symmetric with constructing from a pointHit with an index
STYLE: prefer pointHit point() instead of rawPoint()
ENH: use DynamicList instead of List + size for point wave
- consistent with previous updates for the other algorithms
STYLE: unique_ptr instead of raw pointer in wave algorithms
- provides fast compile-time indexing for FixedList
(invalid indices trigger a compiler error).
This enables noexcept access, which can propagate into various
other uses (eg, triFace, triPoints, ...)
ENH: add triangle edge vectors
- traditionally used first(), last() methods,
but front(), back() are well-known from std::vector etc
which makes the access more familiar.
- support push_back() method for containers that already had append().
This increases name familiarity and can help when porting between
different C++ code bases.
- support pop_back() method for List containers.
This is similar to std::vector
- ie, front(), back(), push_front(), push_back(), pop_front()
ENH: add CircularBuffer flattening operator() and list() method
- useful if assigning content to a List etc
BUG: CircularBuffer find() did not return logical index
Grid-independence studies and grid adaptation for implicit LES/DES are
nontrivial, and can be intractable, due to the inherent coupling between
spatial resolution and subgrid-scale modelling.
To enable assessments for LES/DES resolution, a function object of
single-mesh resolution index with three submodels is introduced.
- replaced PstreamBuffers mechanism with globalIndex for both gather
and scatter operations. Scheduled communication is used by default,
but this is selectable.
- reduced communication with ensemble averaging and no-write
- pattern as per surfaceFieldValue::setFaceZoneFaces()
1. define faceId, facePatchId assuming an internal face
2. if actually a boundary face:
- get facePatchId
- ignore if emptyPolyPatch or coupledPolyPatch (neighbour side)
- get patch relative faceId
This currently seems to be the least amount of code clutter.
ENH: recover some memory by shrinking lists in fluxSummary
BUG: potentially trailing rubbish in the heatExchangerModel lists
- the final resize to length actually used was missing.
Does not affect any released versions
- in makeFaMesh, the serial fields are now only read on the master
process and broadcast to the other ranks. The read+distribute is
almost identical to that used in redistributePar, except that in
this case entire fields are sent and not a zero-sized subset.
- improved internal faMesh checking for files so that the TryNew
method works with distributed roots.
- if the volume faceProcAddressing is missing, it is not readily
possible to determine equivalent area procAddressing.
Instead of throwing an error, be more fault-tolerant by having it
create with READ_IF_PRESENT and then detect and warn
if there are problems.
- accept IOobjectOption::registerOption with (MUST_READ, NO_WRITE)
being implicit. Direct handling of IOobjectOption itself, for
consistency with IOobject.
The disabling of object registration is currently the only case
where IOobjectList doesn't use default construction parameters,
but it was previously a bit awkward to specify.
- for repeated tests (eg, during bisection) can be used to preserve
the existing directory as tutorialsTest.bak01,
tutorialsTest.bak02, ... (max of 10); see the sketch below.
- preserve the commit information as tutorialsTest/commit-info
to help document the current or backup test results.
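The backup rotation referred to above amounts to something like the following (a sketch using std::filesystem, not the actual test script; the directory name follows the tutorialsTest example):
```
#include <cstdio>
#include <filesystem>

namespace fs = std::filesystem;

// Preserve an existing test directory as .bak01 .. .bak10 before reuse
bool backupTestDir(const fs::path& dir, int maxBackups = 10)
{
    if (!fs::exists(dir)) return true;   // nothing to preserve

    for (int i = 1; i <= maxBackups; ++i)
    {
        char suffix[16];
        std::snprintf(suffix, sizeof(suffix), ".bak%02d", i);

        fs::path backup(dir);
        backup += suffix;                // eg, tutorialsTest.bak01

        if (!fs::exists(backup))
        {
            fs::rename(dir, backup);
            return true;
        }
    }
    return false;                        // all backup slots taken
}
```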
- had an off-by-one in the accounting for some corner cases,
partly because the logic was a bit convoluted
ENH: improved string wrapping (#2625)
- reworked logic (like a state machine) to handle backtracking
with fallback of splitting near punctuation characters.
Still doesn't compete with nroff or TeX, but does avoid long lines
and many funny splits. With this change the help for mapFieldsPar
now looks like this:
=====
Specify the mapping method
(direct|mapNearest|cellVolumeWeight|
correctedCellVolumeWeight)
=====
Since the list of options is very long without any spaces, it takes
'|' as the best split point, which definitely reads better
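The backtracking idea can be sketched in isolation (simplified, not the argList wrapping code): when a space-free token exceeds the line width, step back from the width limit to the nearest punctuation character and break there.
```
#include <cctype>
#include <string>
#include <vector>

// Split an over-long, space-free token, preferring to break just after a
// punctuation character (eg, '|') rather than at an arbitrary position
std::vector<std::string> splitLongToken(const std::string& tok, std::size_t width)
{
    std::vector<std::string> lines;
    std::size_t pos = 0;

    while (width > 0 && tok.size() - pos > width)
    {
        std::size_t cut = width;   // fallback: hard split at the width limit

        // Backtrack to the nearest punctuation character
        for (std::size_t i = width; i > 1; --i)
        {
            if (std::ispunct(static_cast<unsigned char>(tok[pos + i - 1])))
            {
                cut = i;           // break immediately after the punctuation
                break;
            }
        }
        lines.push_back(tok.substr(pos, cut));
        pos += cut;
    }
    lines.push_back(tok.substr(pos));
    return lines;
}
```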
- functionality similar to that provided by foamToEnsight, foamToVTK
which allows blocking out patches (eg, outer walls, inlet/outlet)
that are not particularly interesting to visualize
- since ensight format is always float and also always written
component-wise, perform the double -> float narrowing when
extracting the components. This reduces the amount of data
transferred between processors.
ENH: avoid vtk/ensight parallel communication of empty messages
- since ensight writes by element type (eg, tet, hex, polyhedral) the
individual written field sections will tend to be relatively sparse.
Skip zero-size messages, which should help reduce some of the
synchronization bottlenecks.
ENH: use 'data chunking' when writing ensight files in parallel
- since ensight fields are written on a per-element basis, the
corresponding segment can become rather sparsely distributed. With
'data chunking', we attempt to get as many send/recv messages in
before flushing the buffer for writing. This should make the
sequential send/recv less affected by the IO time.
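Stripped of the MPI details, the 'data chunking' amounts to buffering several received segments and only flushing when a chunk threshold is reached, roughly as in this standalone sketch:
```
#include <cstddef>
#include <ostream>
#include <vector>

// Accumulate sparse per-rank segments and write them in larger chunks,
// so the sequential send/recv is less dominated by the IO time
class ChunkedWriter
{
    std::ostream& os_;
    std::vector<float> buffer_;
    std::size_t capacity_;

public:
    ChunkedWriter(std::ostream& os, std::size_t capacity)
    :
        os_(os),
        capacity_(capacity)
    {
        buffer_.reserve(capacity_);
    }

    ~ChunkedWriter() { flush(); }

    void flush()
    {
        if (!buffer_.empty())
        {
            os_.write
            (
                reinterpret_cast<const char*>(buffer_.data()),
                static_cast<std::streamsize>(buffer_.size()*sizeof(float))
            );
            buffer_.clear();
        }
    }

    // Add one received segment; flush only when the chunk would overflow
    void add(const std::vector<float>& segment)
    {
        if (buffer_.size() + segment.size() > capacity_)
        {
            flush();
        }
        buffer_.insert(buffer_.end(), segment.begin(), segment.end());
    }
};
```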
ENH: allow use of an external buffer when writing ensight components
STYLE: remove last vestiges of autoPtr<ensightFile> for output routines
- with ATOMIC, an intermediary file is created - eg, (fileAbc~tmp~)
where all of the output is written to. When the stream goes out of
scope, this intermediary file is moved/renamed to the actually
output name - eg, (fileAbc~tmp~) -> (fileAbc).
This adds some safety if the simulation crashes while writing the
file, since the partial (corrupt) file will be left behind as
(fileAbc~tmp~) and not as (fileAbc), which means it will be treated
as a backup file and not loaded again on restart.
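This is the familiar write-to-temporary-then-rename pattern; a standalone sketch (using std::filesystem rather than the actual OFstream internals, with the '~tmp~' suffix described above):
```
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Write content to "<name>~tmp~" and only rename to <name> on success,
// so a crash mid-write never leaves a truncated <name> behind
bool atomicWrite(const fs::path& name, const std::string& content)
{
    fs::path tmp(name);
    tmp += "~tmp~";

    {
        std::ofstream os(tmp, std::ios::binary);
        if (!os) return false;
        os << content;
        if (!os) return false;
    }

    fs::rename(tmp, name);   // atomic replace on POSIX filesystems
    return true;
}
```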
ENH: provided enumeration for APPEND/NON_APPEND
- clearer than using bool (with comments).
Since append mode is primarily only used by masterOFstream etc
this change is unlikely to affect user coding.
ENH: use file atomic for ensight file creation
- avoids corrupt (truncated) files being referenced by the ensight
case file if the simulation crashes while writing the ensight file.
- eg, for partially incomplete systems (without libz devel header)
ENH: clearer binding of dummy Pstream in OpenFOAM/Make/options
- link of dummy stub Pstream now contingent on linking libOpenFOAM as
well. This makes the purpose slightly clearer
ENH: cleaner option naming/handling in wmake script
- allow special purpose -no-openfoam option.
Eg, compiling test programs without OpenFOAM and Pstream libraries
but using the rest of the wmake system.
ENH: add +openmp support into WM_COMPILE_CONTROL (#2633)
- this adds compile/link flags for openmp.
For single-use, can also use 'wmake -openmp'.
If both +openmp and ~openmp are specified in WM_COMPILE_CONTROL
the ~openmp will have priority.
This is actually done indirectly since ~openmp will set empty
COMP_OPENMP, LINK_OPENMP internal variables, which the +openmp then
adds to the c++FLAGS and linkexe targets (ie, won't actually add
anything).
ENH: add +ccache or ccache=... support into WM_COMPILE_CONTROL (#2633)
- with the first version (+ccache), simply use ccache from the path
without any extra options.
- with the second version (ccache=...), can be more specific about
what is called.
Using "+ccache" is identical to "ccache=ccache", but the later could
be used in other ways. For example,
ccache=/strange/install/path/ccache
ccache=</path/my-tooling --option>
There is the choice of unquoted, single- or double-quoted, or '< >' quoted forms
STYLE: relocate FOAM_EXTRA_LDFLAGS in general makefile
- removes clutter for different linkers (eg, gold, mold, lld)
making it easier to extend for other linkers.
STYLE: protect makefile checks with 'strip' function
- consistent with sumOp
ENH: globalIndex with gatherNonLocal tag, and use leading dispatch tags
- useful for gather/write where the master data can be written
separately. Leading vs trailing dispatch tags for more similarity to
other C++ conventions.
- new submodels:
- 'equalBinWidth': groups data into bins of equal widths (previous behaviour)
- 'unequalBinWidth': groups data into bins of unequal widths
- output files per time-step are replaced with a single output file
- silently deprecates the input entries: 'setFormat' and 'formatOptions'
The improvements include:
- Allowing overset patches to be displaced outside the background domain.
- The approach does not support overlapping of multiple inset meshes
on top of the background domain.
- Allowing fringe faces to walk away from hole cells in the background domain.
- The approach has not been extensively tested with overlapping patches.
- Improving mass conservation.
- Various experimental entries are removed: massFluxInterpolation, ddtCorr.
- New entries:
- oversetAdjustPhi: adds a flux correction outside the pressure equation.
- massCorrection: adds an implicit correction.
- replaced ad hoc handling of formatOptions with coordSetWriter and
surfaceWriter helpers.
Accompanying this change, it is now possible to specify "default"
settings to be inherited, format-specific settings and have a
similar layering with surface-specific overrides.
- snappyHexMesh now conforms to setFormats
Eg,
formatOptions
{
default
{
verbose true;
format binary;
}
vtk
{
precision 10;
}
}
surfaces
{
surf1
{
...
formatOptions
{
ensight
{
scale 1000;
}
}
}
}
- for later reuse with fields (for example)
ENH: use 'scheduled' for surfaceWriter field merging (#2402)
- in tests with merging fields (surfaceWriter), 'scheduled' was
generally faster than 'nonBlocking' for scalars, and marginally faster for
vectors.
Thus make 'scheduled' the default for the surfaceWriter but with a
user-option to adjust as required. Previously simply relied on
whichever default globalIndex had (currently nonBlocking).
Reuse globalIndex information from mergedSurf instead of
globalIndex::gatherOp to avoid an extra MPI call to gather sizes
each time.
These changes will not be noticeable unless surface sampling is done
very frequently (eg, every iteration) and with large core counts.
- support globalIndex for points/faces as an output parameter,
which allows reuse in subsequent field merge operations.
- make pointMergeMap an optional parameter. This information is not
always required. Eg, if only using gatherAndMerge to combine faces
but without any point fields.
ENH: make globalIndex() noexcept, add globalIndex::clear() method
- end_value() corresponds to the infrequently used after() method, but
with naming that corresponds better to iterator naming conventions.
Eg,
List<Type> list = ...;
labelRange range = ...;
std::transform
(
(list.data() + range.begin_value()),
(list.data() + range.end_value()),
outIter,
op
);
- promote min()/max() methods from labelRange to IntRange base class
STYLE: change timeSelector from "is-a" to "has-a" scalarRanges.
- resets min/max to be identical to the specified value,
which can be more convenient (and slightly more efficient) than doing
a full reset followed by add()
- additional MinMax intersects() query, which works like overlaps()
but with exclusive checks at the ends
- provide MinMax::operator&=() to replace (unused) intersect() method
ENH: single/double value reset method for boundBox
- boundBox::operator&=() to replace (rarely used) intersect() method.
Deprecate boundBox::intersect() to avoid confusion with various
intersects() method
COMP: provide triangleFwd.H
- background: for some applications it can be useful to have fully
sorted points. i.e., sorted by x, followed by y, followed by z.
The default VectorSpace 'operator<' compares *all*
components. This is seen by the following comparisons
1. a = (-2.2 -3.3 -4.4)
b = (-1.1 -2.2 3.3)
(a < b) : True
Each 'a' component is less than the corresponding 'b' component
2. a = (-2.2 -3.3 -4.4)
b = (-2.2 3.3 4.4)
(a < b) : False
The a.x() is not less than b.x()
The static definitions 'less_xyz', 'less_yzx', 'less_zxy'
instead use comparison of the next components as tie breakers
(like a lexicographic sort).
- same type of definition that Pair and Tuple2 use.
a = (-2.2 -3.3 -4.4)
b = (-2.2 3.3 4.4)
vector::less_xyz(a, b) : True
The a.x() == b.x(), but a.y() < b.y()
They can be used directly as comparators:
pointField points = ...;
std::sort(points.begin(), points.end(), vector::less_zxy);
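For reference, the tie-breaking comparison corresponds to a plain lexicographic comparator of the following form (illustrative only, not the VectorSpace source):
```
struct Vec { double x, y, z; };

// Lexicographic x-y-z comparison: later components act as tie breakers
inline bool less_xyz(const Vec& a, const Vec& b)
{
    if (a.x != b.x) return a.x < b.x;
    if (a.y != b.y) return a.y < b.y;
    return a.z < b.z;
}
```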
ENH: make VectorSpace named access methods noexcept.
Since the addressing range is restricted to enumerated offsets
(eg, X/Y/Z) into storage, access always remains in range.
Possible to make constexpr with future C++ versions.
STYLE: VectorSpace 'operator>' defined using 'operator<'
- standard rewriting rule
- useful when a characteristic per-face search dimension is required.
With PrimitivePatch we are certain to have consistent evaluations
of the face centre.
STYLE: tag PrimitivePatch compatibility headers as such
STYLE: combine templated/non-templated headers (reduced clutter)
STYLE: use hitPoint(const point&) combined setter
- same as setHit() + setPoint(const point&)
ENH: expose and use labelOctBits::pack method for addressing
- the old List_FOR_ALL macro only remained in use in relatively few
places. Replace with the expanded equivalent and move the looping
parameter out of the macro and give an explicit name (eg, loopLen)
which simplifies the addition of any loop pragmas in the various
TFOR_ALL... macros (for example).
- in places where direct reading from the std::stream is used,
this method can be used to ensure that the OpenFOAM Sstream state
is properly updated from the std::stream.
ENH: restrict stream renaming to ISstream
- non-const access was previously declared at the top-level (IOstream)
but that not only allowed potentially odd setting of the static
fileName, but also meant that the OFstream name() could potentially
be altered after opening a file and thus be inconsistent with the
underlying file that had been opened.
Now restrict name modification to ISstream (and ITstream
counterpart). Does not affect any existing valid code.
STYLE: non-default OFstream destructor (for future file staging)
- construct boundBox from Pair<point> of min/max limits,
make sortable
- additional bounding box intersections (linePointRef), add noexcept
- templated access for boundBox hex-corners
(used to avoid temporary point field).
Eg, unrolled plane/bound-box intersection with early exit
- bounding box grow() to expand box by absolute amounts
Eg,
bb.grow(ROOTVSMALL); // Or: bb.grow(point::uniform(ROOTVSMALL));
vs
bb.min() -= point::uniform(ROOTVSMALL);
bb.max() += point::uniform(ROOTVSMALL);
- treeBoundBox bounding box extend with two or three parameters.
The three parameter version includes grow(...) for reduced writing.
Eg,
bb = bb.extend(rndGen, 1e-4, ROOTVSMALL);
vs
bb = bb.extend(rndGen, 1e-4);
bb.min() -= point::uniform(ROOTVSMALL);
bb.max() += point::uniform(ROOTVSMALL);
This also permits use as const variables or parameter passing.
Eg,
const treeBoundBox bb
(
treeBoundBox(some_points).extend(rndGen, 1e-4, ROOTVSMALL)
);
- box method on meshShapes (cell,edge,face,triangle,...)
returns a Pair<point>.
Can be used directly without dependency on boundBox,
but the limits can also passed through to boundBox.
- Direct box calculation for cell, which walks the cell-faces and
mesh-faces. Direct calculation for face (#2609)
- with geometryOrder=1, calculate the edge normals from the adjacent
faces (area-weighted, inverse distance squared) and also
use that for the Le() calculation.
Includes the contributions from processor edge neighbours, so it
should be consistent on both sides.
This new method (consider as 'beta') contrasts with the current
standard method that first calculates area-weighted point normals
and uses the average of them for the edge normal.
Enable for testing either with a controlDict OptimisationSwitch entry
"fa:geometryOrder", or on the command-line:
solverName -opt-switch=fa:geometryOrder=1
- the Le vector is calculated from (edgeVec ^ edgeNorm)
and should be oriented in direction (faceCentre -> edgeCentre).
If, however, the edgeNorm value is bad for any reason, the
cross-product degenerates and the Le vector is calculated as a
zero vector!
For these cases, revert to using (faceCentre -> edgeCentre)
as a better approximation than a zero vector.
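A minimal standalone sketch of just this fallback (not the faMesh code; the degeneracy tolerance is an arbitrary illustration):
```
struct Vec3 { double x, y, z; };

inline Vec3 cross(const Vec3& a, const Vec3& b)
{
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

inline double magSqr(const Vec3& a) { return a.x*a.x + a.y*a.y + a.z*a.z; }

// Direction of the Le vector: (edgeVec ^ edgeNorm), reverting to the
// (faceCentre -> edgeCentre) direction if the cross-product degenerates
inline Vec3 LeDirection
(
    const Vec3& edgeVec,
    const Vec3& edgeNorm,
    const Vec3& faceCentre,
    const Vec3& edgeCentre
)
{
    Vec3 dir = cross(edgeVec, edgeNorm);

    if (magSqr(dir) < 1e-30)
    {
        // Bad edge normal: use (faceCentre -> edgeCentre) instead of zero
        dir = Vec3{edgeCentre.x - faceCentre.x,
                   edgeCentre.y - faceCentre.y,
                   edgeCentre.z - faceCentre.z};
    }
    return dir;
}
```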
In the future, will very likely switch calculating the edge normals
directly from the attached faces, instead of from the attached
points as is currently done, which should improve robustness.
ENH: expose fa:geometryOrder as a registered OptimisationSwitch
ENH: reuse polyMesh data (eg, faceCentres) if possible in faMesh
STYLE: add code lambdas and static functions to isolate logic
ENH: extend rmDir to handle removal of empty directories only
- recursively remove directories that only contain other directories
but no other contents. Treats dead links as non-content.
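A sketch of the intended recursion (std::filesystem, not the OpenFOAM rmDir code; the dead-link handling here is a simplified interpretation):
```
#include <filesystem>

namespace fs = std::filesystem;

// Recursively remove directories that contain only other (removable)
// directories or dead links. Returns true if 'dir' itself was removed.
bool removeEmptyDirs(const fs::path& dir)
{
    bool hasContent = false;

    for (const auto& entry : fs::directory_iterator(dir))
    {
        if (entry.is_directory() && !entry.is_symlink())
        {
            if (!removeEmptyDirs(entry.path()))
            {
                hasContent = true;   // sub-directory could not be removed
            }
        }
        else if (entry.is_symlink() && !fs::exists(entry.path()))
        {
            // dead link: treated as non-content, does not block removal
        }
        else
        {
            hasContent = true;       // file, live link, etc
        }
    }

    if (!hasContent)
    {
        fs::remove_all(dir);   // removes the directory and any dead links
        return true;
    }
    return false;
}
```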
- stem(), replace_name(), replace_ext(), remove_ext() etc
- string::contains() method - similar to C++23 method
Eg,
if (keyword.contains('/')) ...
vs
if (keyword.find('/') != std::string::npos) ...
- construct based on db and mesh information from an existing field
- check movable() instead of isTmp() when reusing fields
STYLE: isolate check for reuse GeometricField into Detail namespace
- code remnant from separate lookup + construct of coordinateSystem
(7b2bcfda0b).
Apply consistent use of coordinateSystem::NewIfPresent to avoid
these types of coding mishaps
- in continuation of #2565 (rotationCentre for surface output formats)
it is helpful to also support READ_IF_PRESENT behaviour for the
'origin' keyword.
This can be safely used wherever the coordinate system definition
is embedded within a sub-dictionary scope.
Eg,
dict1
{
coordinateSystem
{
origin (0 0 0); // now optional here
rotation ...;
}
}
but remains mandatory if constructed without a sub-dict:
dict2
{
origin (0 0 0); // still mandatory
e1 (1 0 0);
e3 (0 0 1);
}
With this change, the "transform" sub-dictionary can be written
more naturally:
formatOptions
{
vtk
{
scale 1000; // m -> mm
transform
{
rotationCentre (1 0 0);
rotation axisAngle;
axis (0 0 1);
angle -45;
}
}
}
ENH: simplify handling of "coordinateSystem" dictionary lookups
- coordinateSystems::NewIfPresent method for optional entries:
coordSysPtr_ = coordinateSystem::NewIfPresent(mesh, dict);
Instead of
if (dict.found(coordinateSystem::typeName, keyType::LITERAL))
{
coordSysPtr_ =
coordinateSystem::New
(
mesh_,
dict,
coordinateSystem::typeName
);
}
else
{
coordSysPtr_.reset();
}
ENH: more consistent handling of priorities for binModels, forces (#2598)
- if the dictionaries are overspecified, give a 'coordinateSystem'
entry a higher priority than the 'CofR' shortcuts.
Was previously slightly inconsistent between the different models.
- previously had 'mandatory' (bool) for advanced control of reading
dictionary entries but its meaning was unclear in the calling code
without extra code comments.
Now use IOobjectOption::readOption instead, which allows further
options (ie, NO_READ) and is more transparent as to its purpose in
the code than a true/false bool flag was.
This is a minor breaking change (infrequent, advanced usage only)
- minor code cleanup in dictionary lookup methods
- with IOstreamOption there are no cases where we need to construct
top-level streams (eg, IFstream, OFstream) with additional information
about the internal IOstream 'version' (eg, version: 2.0).
Makes it more convenient to open files with a specified
format/compression combination - no clutter of specifying the
version
- avoids redundant dictionary searching
STYLE: remove dictionary lookupOrDefaultCompat wrapper
- deprecated and replaced by getOrDefaultCompat (2019-05).
The function is usually specific to internal keyword upgrading
(version compatibility) and unlikely to exist in any user code.
- read construct from dictionary.
Calling syntax similar to dimensionedType, dimensionedSet,...
Replaces the older getEntry(), getOptional() static methods
- support readIfPresent
- in expressions BCs in particular, there is various logic handling
for if value/refValue/refGradient etc are found or not.
Handle the lookups as findEntry and branch to use Field assign
or other handling, depending on its existence.
STYLE: use wordList instead of wordRes for copy/filter dictionary
- noexcept on some Time methods
ENH: pass through is_oriented() method for clearer coding
- use logical and/or/xor instead of bitwise versions (clearer intent)
Header information now includes, e.g.
f [Hz] vs P(f) [Pa]
Lower frequency: 2.500000e+01
Upper frequency: 5.000000e+03
Window model: Hanning
Window number: 2
Window samples: 512
Window overlap %: 5.000000e+01
dBRef : 2.000000e-05
Area average: false
Area sum : 6.475194e-04
Number of faces: 473
Note: output files now have .dat extension
- for example,
defaultFieldValues
(
areaScalarFieldValue h 0.00014
);
regions
(
clipPlaneToFace
{
point (0 0 0);
normal (1 0 0);
fieldValues
(
areaScalarFieldValue h 0.00015
);
}
);
ENH: additional clipPlaneTo{Cell,Face,Point} topo sets
- less cumbersome than defining a semi-infinite bounding box
- remedy by performing the attach() action sequentially (as per
stitchMesh changes). This ensures that the current point addressing
is always used and avoids references to the already-merged points
(which is what causes the failure).
ENH: improve handling of empty patch removal
- only remove empty *merged* patches, but leave any other empty
patches untouched since they may be intentional placeholders for other
parts of a workflow.
- remove any empty point/face zones created for patch merging
- commonly used calculations
ENH: add faPatch::patchRawSlice method
- slices using the nEdges() instead of the virtual size(),
which provides similar functionality as finite-volume has with
its distinction between polyPatch vs fvPatch patchSlice
- use patchInternal for obtaining faPatch, fvPatch information
- similar to boundaryFieldRef(), primitiveFieldRef() for providing
write access. Complementary naming to internalField(). Identical to
ref() but more explicitly named, and less likely to be confused with
a tmp::ref(), for example.
- prefer .primitiveFieldRef() over .ref().field()
- mark some access methods noexcept
- these were previously constructing from an fvPatch (for simpler
integration with regionFaModel) but this unnecessarily restricts
the finiteArea to a single volume patch.
- adjusted derived faOptions to support multiple patches
- list of faces() was using mesh-faces, not area-faces
ENH: provision for patch and faceSet selection in fa::faceSetOption
- adjust most of the faOptions to respect subset of faces
ENH: support Function1 for externalHeatFluxSource
BUG: incorrect handling of fixedPower (externalHeatFluxSource)
- used local areas instead of global total area
- old constructor interface allowed arbitrary strings to specify the
method enumeration. If actually used at runtime, they could/would
raise a FatalError (unknown enumeration).
Define a simpler default constructor instead.
- whichPolyPatches() = the polyPatches related to the areaMesh.
This helps when pre-calculating (and caching) any patch-specific
content.
- whichPatchFaces() = the poly-patch/patch-face for each of the faceLabels.
This allows more convenient lookups and, since the list is cached on
the area mesh, reduces the number of calls to whichPatch() etc.
- whichFace() = the area-face corresponding to the given mesh-face
ENH: more flexible/consistent volume->area mapper functions
- whichPatchFace() returns the (patchi, patchFacei) tuple,
whichPatch() simply wraps whichPatchFace()
- groupNames() : similar to zones
ENH: simplify calls to faPatch/fvPatch patchField, lookupPatchField
- make second (unused) template parameter optional.
Was previously needed for old compilers (2008 and earlier).
- simplifies construction/inheritance
ENH: add {fa,fv}PatchField::zeroGradientType() static
- can be used to avoid literal "zeroGradient" in places
STYLE: adjust naming of pointPatch runtime selection table
- simply use 'patch' as per fa/fv fields
STYLE: add zero-size guard to patch constraintType(const word&)
For example, instead of
if (dict.found("value"))
{
fvScalarField::operator=
(
Field<scalar>("value", dict, p.size())
);
}
can use more precise specifications, and also eliminate searching
the dictionary multiple times:
const auto* eptr = dict.findEntry("value", keyType::LITERAL);
//or: dict.findCompat("value", {{"oldName" ... }}, keyType::LITERAL);
if (eptr)
{
fvScalarField::assign(*eptr, p.size());
}
STYLE: combine declaration of FieldBase into Field.H
- include -no-libs option by default, similar to '-lib',
which makes it available to all solvers/utilities.
Add argList allowLibs() method to query it.
- relocate with/no functionObjects logic from Time to argList
itself as argList allowFunctionObjects()
- add libs/functionObjects override handling to decomposePar etc
ENH: report the stream relativeName for IOerrors (see c9333a5ac8)
- clearer coding intent. Mark operator() as 'deprecated'
- add bounds checking to get(label) and set(label) methods.
This gives failsafe behaviour for get() that is symmetric with
HashPtrTable, autoPtr etc and aligns the set(label) methods
for UPtrList, PtrList and PtrDynList.
- use top-level PtrList::clone() instead of cloning individual elements
ENH: support HashPtrTable set with refPtr/tmp (flexibility)
- define returnReduce *after* defining all specializations for reduce
so that the compiler does not take the generic templated reduce.
ENH: add UPstream::reduceAnd, UPstream::reduceOr
- direct wrapper of MPI_LAND, MPI_LOR intrinsics
ENH: provide special purpose returnReduce for logical operations
- returnReduceAnd(bool), returnReduceOr(bool) as inline wrappers
for returnReduce with andOp<bool>(), orOp<bool>() operators,
respectively.
These forms are more succinct and force casting of the parameter
into a bool. Using MPI bool operations allows vendor/hardware MPI
optimisations.
* Test for existence on any rank:
1. if (returnReduceOr(list.size())) { ... }
1b. if (returnReduceOr(!list.empty())) { ... }
2. if (returnReduce(bool(list.size()), orOp<bool>())) { ... }
3. if (returnReduce(list.size(), sumOp<label>()) != 0) { ... }
3b. if (returnReduce(list.size(), sumOp<label>()) > 0) { ... }
* Test for non-existence on all ranks:
1. if (returnReduceAnd(list.empty())) { ... }
1b. if (!returnReduceOr(list.size())) { ... }
2. if (returnReduce(list.empty(), andOp<bool>())) { ... }
3. if (returnReduce(list.size(), sumOp<label>()) == 0) { ... }
Notes:
Form 1. succinct
Form 2. may require explicit bool() for correct dispatch
Form 3. more expensive sumOp<label> just for testing size!
There are also some places using maxOp<label> instead of sumOp<label>
- simplifies coding
* finishedRequest(), waitRequest(), waitRequests() with parRun guards
* nRequests() is noexcept
- more consistent use of UPstream::defaultCommsType in branching
- uses '-g -DFULLDEBUG' (like Debug), but with -O3 (like Opt).
This adds in debug symbols and FULLDEBUG code segments (good for
code development) but retains -O3 optimizations and code paths and
avoids the much slower -O0 associated with 'Debug'.
- add in central wmake/General/common/{c,c++}XXX tuning,
which helps reduce the number of nearly identical files
ENH: add support for wmake -debug-Og
- previously threw FatalError, which downgrades to a Warning only when
loading the functionObject. Now throw a FatalIOError so that missing
control files are treated as a critical error.
- this is especially evident in -reconstruct mode when
the fields have several processor boundaries.
Testing for an existing patch edge mapping must use the `test`
method (with range-checking) instead of the more common `set`
method since the source field will likely have many more boundaries
than physical edge mappings.
- rename effectivenessHeatExchangerSource -> heatExchangerSource
- introduce submodels:
- effectivenessTable (previous behaviour)
- referenceTemperature
- the referenceTemperature submodel uses a reference temperature
which is either a scalar or calculated from a 2D interpolation
table in order to calculate the heat exchange.
- the cpp command is used to process Make/{files,options}, but builtin
defines such as `linux` will cause problems (macro replacement) if
they are present in the Make/{files,options} content.
Solve by undefining these macros (-Ulinux, -Uunix), which leaves directory
names such as "/usr/lib/x86_64-linux-gnu/..." intact.
Directories with _linux, __linux__ content (for example) could
still pose future issues.
- as an alternative output transform (supplementary to the regular
coordinate system specification - issue #2505) it is now possible to
specify the rotation centre directly.
Example:
formatOptions
{
vtk
{
scale 1000; // m -> mm
transform
{
origin (0 0 0);
rotationCentre (1 0 0);
rotation axisAngle;
axis (0 0 1);
angle -45;
}
}
}
This behaves like the transformPoints and surfaceTransformPoints
'-centre' option (formerly '-origin') in that it removes the
specified amount from the point locations, applies the rotation and
finally adds the specified amount back to the newly rotated point
locations.
The results of specifying a `rotationCentre` and a non-zero
coordinate system `origin` may not be intuitively evident.
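In other words, each point is transformed roughly as sketched below (illustrative only; the rotation is hard-wired to the axisAngle example above, and the write 'scale' is applied last, cf. the #2566 note further on):
```
#include <cmath>

struct Vec3 { double x, y, z; };

// Rotation about the z-axis by 'angle' degrees, matching the axisAngle
// example above with axis (0 0 1)
inline Vec3 rotateZ(const Vec3& p, double angleDeg)
{
    const double a = angleDeg*std::acos(-1.0)/180.0;
    const double c = std::cos(a), s = std::sin(a);
    return {c*p.x - s*p.y, s*p.x + c*p.y, p.z};
}

// rotationCentre behaviour: subtract the centre, rotate, add the centre
// back, then apply the output write scaling
inline Vec3 transformPoint
(
    const Vec3& p,
    const Vec3& centre,
    double angleDeg,
    double scale
)
{
    Vec3 q{p.x - centre.x, p.y - centre.y, p.z - centre.z};
    q = rotateZ(q, angleDeg);
    q = Vec3{q.x + centre.x, q.y + centre.y, q.z + centre.z};
    return {scale*q.x, scale*q.y, scale*q.z};
}
```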
- introduce a FOAM_LD_LIBRARY_PATH variable to shadow
DYLD_LIBRARY_PATH on MacOS.
The DYLD_LIBRARY_PATH and LD_LIBRARY_PATH cannot be modified via sub
shells etc when SIP is active. This helps circumvent these
restrictions, which is obviously a hack, but seems to be required.
COMP: disable -ftrapping-math in geompack for MacOS
- the output write scaling should be applied *after* undoing the
effects of the specified rotation centre. Fixes #2566
ENH: update option names for transformPoints and surfaceTransformPoints
- prefer '-auto-centre' and '-centre', but also accept the previous
options '-auto-origin' and '-origin' as aliases.
Changing to '-centre' avoids possible confusion with
coordinate system origin().
When a finite-area case could not find an entry for "lnGradSchemes"
in the "faSchemes" file, the "corrected" scheme was picked up
by default. Therefore, any changes in the "snGradSchemes" entry would
not be read by finite-area models.
- consistent with defining IO of int32_t/int64_t and with recent
changes to ensightFile. Using the primitives directly instead of
typedefs to them makes the code somewhat less opaque.
- parse out symbols and use abi::__cxa_demangle for more readable
names in safePrintStack.
- shorten prefixed /path/openfoam/platforms/lib/... to start
with "platforms/lib/..." to avoid unreadably long lines.
- improved file-scope localization of helper functions.
STYLE: use std::ios_base::basefield instead of dec|oct|hex for masking
- align timeVaryingMappedFixedValuePointPatchField keywords with
MappedFile
STYLE: minor cleanup of pointToPointPlanarInterpolation
BUG: incorrect keyword for timeVaryingMappedFixedValuePointPatchField
- lookup should be "fieldTable" (not "fieldTableName") for consistency
with the output and other BCs. (Bug introduced by a623ab42a3)
- some central (core) bits under fileFormats,
- general surface reading relocated from sampling to surfMesh since it
does not use any sampling-specific components and will permit
re-use in meshTools (for example)
- remove old mask, subDir methods from ensightFile which were
previously relocated to ensightCase
- improve handling of 'undef' values when generating and reading,
respect Ensight component ordering when reading.
STYLE: qualify format/version/compression with IOstreamOption not IOstream
STYLE: reduce number of lookups when scanning {fa,fv}Solution
STYLE: call IOobject::writeEndDivider as static
- Ensight places restrictions both on variable names and on file
names. When generating the variable to file name correspondence for
use in the Ensight case file, previously used the less stringent
variable name for both sides of the variable table.
This would lead to situations where the (valid) variable name
referred to the wrong file name. Now apply the file-name restriction
consistently when creating the variable table. This is especially
necessary since the stem of the filename additionally has
specific characters (eg, ":<>[]") that can be problematic for the
shell or file-system.
ENH: avoid repeated '_' in qualified ensight names.
- when replacing undesirable characters (eg, ":<>[]") with '_', avoid
duplicates.
Eg, "PaSR<psiReactionThermo>:Qdot" becomes
"PaSR_psiReactionThermo_Qdot" instead of
"PaSR_psiReactionThermo__Qdot"
ENH: additional ensightCase::padded static method
For example,
T
{
solver PBiCGStab;
preconditioner DILU;
tolerance 1e-6;
norm none;
}
STYLE: define defaultMaxIter, defaultTolerance directly in lduMatrix
- in situations where the simulation diverges, the ensight writing can
be incomplete. If the case file is updated prior to writing geometry
or fields, the generated case may refer to incomplete entries (which
make loading problematic).
NOTE: if multiple fields are sampled and written, this change cannot
entirely prevent case files addressing corrupt fields. For example,
1a. write U field, update case file with new times/fields
1b. write p field, update case file with new times/fields
2a. write U field, update case file with new times
2b. write p field, but fails
Since 2a already updates the case file with a new time-step entry
(for the U field), the case glob patterns will automatically include
the not-yet-written 'p' field. If this write fails with an
incomplete/corrupt field, the case file will still be addressing it!
- barycentric coordinates in interpolation (instead of x/y/z)
- ease U (velocity) requirement.
Needn't be named in the sampled fields.
- default tracking direction is 'forward'
movePoints had some duplicated code but did not update the
lower-level (polyPatch) areas. This caused scaling to be applied
multiple times (so only a scale factor of 1.0 would be unaffected).
- the file removal cleanup, which makes reasonable sense for
redistribute mode, always forced the removal of the reconstructed
lagrangian fields (since all of the non-master fields are empty by
definition)!
Detect reconstruct mode (by using constructSize from the map) to
circumvent this logic.
phaseSystemModels function objects are relocated within
functionObjects in order to enable broader usage.
ENH: multiphaseInterHtcModel: new heatTransferCoeff function object model
COMP: createExternalCoupledPatchGeometry: add new dependencies
COMP: alphaContactAngle: avoid duplicate entries between multiphaseEuler and reactingEuler
TUT: damBreak4Phase: rename alphaContactAngle as multiphaseEuler::alphaContactAngle
thermoTools is a relocation of various existing tools:
- src/TurbulenceModels/compressible/turbulentFluidThermoModels/derivedFvPatchFields/
- src/semiPermeableBaffle/derivedFvPatchFields/
- src/thermophysicalModels/thermophysicalPropertiesFvPatchFields/liquidProperties/
ENH: Allwmake: reordering various compilation steps
Co-authored-by: Kutalmis Bercin <kutalmis.bercin@esi-group.com>
This occurs with
- incompressible/pimpleFoam/laminar/mixerVesselAMI2D/mixerVesselAMI2D-topologyChange
- redistributePar -reconstruct
where fvMesh::updateMesh does an early trigger of the
mesh.phi() calculation.
Specific to the VOF-to-Lagrangian FO is that it generates particles
which potentially do not relate to the mesh. So here they
are preserved instead of trying to locate them on the
reconstructed mesh. Note: this has the same effect
as actually copying the file...
speciesSorption is a zeroGradient BC which absorbs mass according to a
first-order time derivative, an absorption rate and an equilibrium value
calculated based on internal species values next to the wall.
patchCellsSource is a source fvOption which applies to the corresponding
species and applies the source calculated by the speciesSorption BC.
A new abstract virtual class was created to group BCs which
do not introduce a source to the matrix (i.e. zeroGradient) but calculate
a mass sink/source which should be introduced into the matrix. This
is done through the fvOption patchCellsSource.
- this allows the "relocation" of sampled surfaces. For example,
to reposition into a different coordinate system for importing
into CAD.
- incorporate output scaling for all surface writer types.
This was previously done on an ad hoc basis for different writers,
but is now included at the base level so that all writers
can automatically use scale + transform.
Example:
formatOptions
{
vtk
{
scale 1000; // m -> mm
transform
{
origin (0.05 0 0);
rotation axisAngle;
axis (0 0 1);
angle -45;
}
}
}
in RASModelVariables were doing this by checking whether the
corresponding pointer was allocated. In some cases, however, even if the
field does not exist, the pointer is not null, leading to the wrong
output. Made the corresponding functions virtual and overrode their
return values in the derived classes. Kept the initial implementation in
the base class to facilitate the clone function.
in cases with more than one primal or adjoint solver.
TUT: removed all occurrences of useSolverNameForFields
from the optimisation tutorials since it is now set
automatically.
in the sensitivity patches, symmetry::evaluate() needs access to the
internalField, which does not exist in this context, leading to wrong
memory access.
Fixed by specifying a calculated type fvPatchField for all patches when
creating a boundaryField<Type>.
Using a symmetry(Plane) as a sensitivity patch is quite rare and
borderline wrong, but this provides a fix nonetheless.
The multiplier of grad(dxdb) is a volTensorField which, by itself, is
memory consuming. The function computing it though was sloppy in terms
of memory management, constituting the peak memory consumption during an
adjoint optimisation. Initial changes to remedy the problem include the
deallocation of some of the volTensorFields included in the computation
of grad(dxdb) once unneeded, the utilisation of volSymmTensorFields
instead of volTensorFields where possible and avoiding allocating some
unnecessary intermediate fields.
Actions to further reduce memory consumption:
- For historical reasons, the code computes/stores the transpose of
grad(dxdb), which is then transposed when used in the computation of
the FI or the ESI sensitivity derivatives. This redundant
transposition can be avoided, saving the allocation of an additional
volTensorField, but the changes need to permeate a number of places in
the code that contribute to grad(dxdb) (e.g. ATC, adjoint turbulence
models, adjoint MRF, etc).
- Allocation of unnecessary pointers in the objective class should be
avoided.
- ATCstandard, ATCUaGradU:
the ATC is now added as a dimensioned field and not as an fvMatrix
to UaEqn. This gets rid of many unnecessary allocations.
- ATCstandard:
gradU is cached within the class to avoid its re-computation in
every adjoint iteration of the steady state solver.
- Inlined a number of functions within the primal and adjoint solvers.
This probably has a negligible effect since they likely were inlined
by the compiler either way.
- The momentum diffusivity at the boundary, used by the adjoint boundary
conditions, was computed for the entire field and, then, only the
boundary field of each adjoint boundary condition was used. If many
outlet boundaries exist, the entire nuEff field would be computed as
many times as the number of boundaries, leading to an unnecessary
computational overhead.
- Outlet boundary conditions (both pressure and velocity) use the local
patch gradient to compute their fluxes. This patch gradient requires
the computation of the adjacent cell gradient, which is done on the
fly, on a per patch basis. To compute this patch adjacent gradient
however, the field under the grad sign is interpolated on the entire
mesh. If many outlets exist, this leads to a huge computational
overhead. Solved by caching the interpolated field to the database and
re-using it, in a way similar to the caching of gradient fields (see
fvc::grad).
WIP: functions returning references to primal and adjoint boundary
fields within boundaryAdjointContributions seem to have a non-negligible
overhead for cases with many patches. No easy work-around here since
these are virtual and cannot be inlined.
WIP: introduced the code structure for caching the contributions to
the adjoint boundary conditions that depend only on the primal fields
and reusing. The process needs to be completed and evaluated, to make
sure that the extra code complexity is justified by gains in
performance.
is now appended with the name of the adjoint solver, if more than one
exist. This was necessary for an accurate continuation since, before
these changes, only the ma field of the last solver was written. As a
result, when restarting the first adjoint solver was reading the ma
field of the last one. No changes are needed in fvSolution and fvSchemes
w.r.t. the previous code version.
as a step towards machine-accuracy continuation of the optimisation
loop.
Additionally, control points are now written under the time/uniform
folder, to be in-line with rest of the code structure for continuation.
As a side-effect, the controlPointsDefinition in
constant/dynamicMeshDict does not need to be changed to 'fromFile'
anymore in order to perform the continuation. The 'fromFile' option is
still valid if the user wants to supply the control points manually but,
as with all other controlPointsDefinitions, it will be disregarded if the
proper file exists under the time/uniform/volumetricBSplines folder.
Before the commit, the sensitivity classes were receiving references of
the (incompressible) primal and adjoint variables. However, if
additional physics was added (energy equation, multiphase, etc), the
infrastructure wasn't convenient for accommodating (new terms in the FI
and E-SI formulations, new terms in the sensitivity map, etc).
Now, the sensitivity classes receive a reference to an
incompressibleAdjointSolver and receive the terms for the FI and
sensitivity maps through there. The latter is still WIP.
Modified adjointSimple to incorporate these changes as well.
Each solver now writes its sensitivity derivatives to its dictionary,
enabling also a binary format. If present, the sensitivities are then
re-read from the dictionary, avoiding thus possible loss of information
due to re-computation.
As a side-effect, sensitivities are computed after the completion of
each adjoint solver, instead of being computed after all adjoint solvers
have been completed.
for incompressible flows. The typical convention of appending the primal
field name with 'a' to form the adjoint field is followed for the
adjoint turbulent kinetic energy (i.e. 'ka') but since this would produce
an ugly variable name for the adjoint to omega (i.e. omegaa), the latter
is abbreviated to 'wa'.
The work is based on
\verbatim
Kavvadias, I., Papoutsis-Kiachagias, E.,
Dimitrakopoulos, G., & Giannakoglou, K. (2014).
The continuous adjoint approach to the k–ω SST turbulence model
with applications in shape optimization.
Engineering Optimization, 47(11), 1523-1542.
https://doi.org/10.1080/0305215X.2014.979816
\endverbatim
with changes in the discretisation of
a number of differential operators and the formulation of the adjoint to
the wall functions employed by the primal model.
Regarding the latter, the code assumes (and differentiates) the default
behaviour of nutkWallFunction (i.e. nutWallFunction::blendingType::STEPWISE)
and omegaWallFunction (i.e. omegaWallFunction::blendingType::BINOMIAL2).
Due to the availability of a number of terms required for the
formulation of the wall function for ka, the latter is implemented
within adjointkOmegaSST itself, with contributions from objective functions
implemented within kaqRWallFunction. Wall functions for wa are
implemented within waWallFunction.
The initial implementation of the above-mentioned reference was
performed by Dr. Ioannis Kavvadias
the Jacobian of an objective function, defined at the boundary, wrt nut
and gradU. Also modified the current objectives that include such
contributions
- update the area-centres processor/processor information as part of
faMesh::init() after all of the global data and geometry data are
set up.
- improve flattenEdgeField helper to properly handle empty patches.
This change removes the false failures when testing edge-centre
redistribution (FULLDEBUG mode).
TUT: add filmPanel (rivulet) tutorial
- include constant/faMesh cleanup (cleanFaMesh) as part of standard
cleanCase
- simplify cleanPolyMesh function to now just warn about old
constant/polyMesh/blockMeshDict but not try to remove anything
- cleanup cellDist.vtu (decomposePar -dry-run) as well
ENH: foamRunTutorials - fallback to Allrun-parallel, Allrun-serial
TUT: call m4 with file argument instead of redirected stdin
TUT: adjust suffixes on decomposeParDict variants
- enables runtime selection of operand coefficients by 'coefficients' entry
- removes binning - now handled using the new 'binField' FO
Co-authored-by: Kutalmis Bercin <kutalmis.bercin@esi-group.com>
The new 'binField' function object calculates binned data,
where specified patches are divided into segments according to
various input bin characteristics, so that spatially-localised
information can be output for each segment.
Co-authored-by: Kutalmis Bercin <kutalmis.bercin@esi-group.com>
- simpler to write for sampled cutting planes etc.
For example,
slice
{
type cuttingPlane;
point (0 0 0);
normal (0 0 1);
interpolate true;
}
instead of
slice
{
type cuttingPlane;
planeType pointAndNormal;
pointAndNormalDict
{
point (0 0 0);
normal (0 0 1);
}
interpolate true;
}
STYLE: add noexcept to some plane methods
- Previous state of the condition was largely inoperative
due to bugs and lack of functionalities
- New state of the condition is more versatile, elegant, robust and faster
ENH: turbulentDigitalFilter: add new scalar-based synthetic turbulence condition
- Realistic temperature and/or concentration fluctuations
can be generated based on given input statistics
- can specify rotations that are not "axes" in a compact form:
transform
{
origin (0 0 0);
rotation none;
}
transform
{
origin (0 0 0);
rotation axisAngle;
axis (0 0 1);
angle 45;
}
An expanded dictionary form also remains possible:
transform
{
origin (0 0 0);
rotation
{
type axisAngle;
axis (0 0 1);
angle 45;
}
}
STYLE: verbose deprecation for "coordinateRotation" keyword
- the "coordinateRotation" keyword was replaced by the "rotation"
keyword (OpenFOAM-v1812 and later) but was handled silently.
Now elevated to non-silent.
STYLE: alias lookups "axesRotation", "EulerRotation", "STARCDRotation"
- these warn and report the equivalent short form, which aids in
upgrading. Previously had silent lookups.
- append single character
- make append() methods void: methods are never chained anyhow
- refactor digest comparison (code reduction)
COMP: add overflow handling for OSHA1stream
- add overflow() method to the SHA1 streambuf. Previously could rely
on xsputn for adding to sha1 content, but streams now check pptr()
first to test for the buffering range and thus overflow() is needed.
- can be more intuitive to specify for some cases:
rotation
{
type euler;
order rollPitchYaw;
angles (0 20 45);
}
- refactor starcd rotation to reuse Euler ZXY ordering
(code reduction)
ENH: add -rotate-x, -rotate-y, -rotate-z for transformPoints etc
- easier to specify for simple rotations
- aligns calling signatures with wordList, for possible future
replacement
- drop construct from const char** (can use initializer_list instead)
ENH: replace hashedWordList with plain wordList in triSurfaceLoader
- additional hashing optimisation (and overhead) is not worth it for
the comparatively small lists of surfaces used.
- catch extra punctuation tokens in chemical equations
- catch unknown species
- simplify generation of reaction string (output)
ENH: allow access of solid concentrations from sub-classes (#2441)
- ensightWrite, vtkWrite, fv::cellSetOption
ENH: additional topoSet "ignore" action
- this no-op can be used to skip an action step, instead of removing
the entire entry
- this allows more flexibility when defining the location or intensity
of sources.
For example,
{
type scalarSemiImplicitSource;
volumeMode specific;
selectionMode all;
sources
{
tracer0
{
explicit
{
type exprField;
functions<scalar>
{
square
{
type square;
scale 0.0025;
level 0.0025;
frequency 10;
}
}
expression
#{
(hypot(pos().x() + 0.025, pos().y()) < 0.01)
? fn:square(time())
: 0
#};
}
}
}
}
ENH: SemiImplicitSource: handle "sources" with explicit/implicit entries
- essentially the same as injectionRateSuSp with Su/Sp,
but potentially clearer in purpose.
ENH: add Function1 good() method to define if function can be evaluated
- for example, provides a programmatic means of avoiding the 'none'
function (see the sketch after this list)
- avoid any operations for zero sources
- explicit sources that apply to the entire mesh can be added directly,
without an intermediate DimensionedField
- update some legacy faMatrix/fvMatrix methods that used Istream
instead of dictionary or dimensionSet for their parameters.
Simplify handling of tmps.
- align faMatrix methods with their updated fvMatrix counterparts
(eg, DimensionedField instead of GeometricField for sources)
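For the good() check mentioned above, a minimal sketch (hypothetical
entry name 'scale'; assuming the usual Function1<scalar>::New(entryName,
dict) factory and value(scalar) evaluation):
```
#include "Function1.H"

// Return a time-dependent scale factor, skipping evaluation when the
// function cannot be evaluated (eg, it is the 'none' function)
Foam::scalar sourceScale(const Foam::dictionary& dict, const Foam::scalar t)
{
    using namespace Foam;

    autoPtr<Function1<scalar>> scalePtr(Function1<scalar>::New("scale", dict));

    if (scalePtr && scalePtr->good())
    {
        return scalePtr->value(t);
    }

    return 1;   // neutral scaling when no usable function is given
}
```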
- similar to the geometric decomposition constraint,
allows a compositing selection of cells based on topoSet sources
which also include various searchableSurface mechanisms.
This makes for potentially easier placement of sources without
resorting to defining a cellSet.
ENH: support zone group selection for fv::cellSetOption and fa::faceSetOption
- select motion for the entire mesh, or restrict to a subset
of points based on a specified cellSet or cellZone(s).
Can now combine cellSet and cellZone specifications
(uses an 'or' combination).
- more consistent use of keyType and wordRe to allow regex selection,
possibly using zone groups
STYLE: remove duplicate code in solidBodyMotionSolver
- shorter lookup names for more consistency
ENH: accept point1/point2 as alternative to p1/p2 for sources
- better alignment with searchable specification
- refactor so that cylinderAnnulus sources derive directly from
cylinder sources (which handle an annulus as well).
Accept radius or outerRadius as synonyms.
STYLE: noexcept on topoBitSet access methods
DOC: update description for geometricConstraint
- in various situations with mesh regions it is also useful to
filter out or remove the defaultRegion name (ie, "region0").
Can now do that conveniently from the polyMesh itself or as a static
function. Simply use this
const word& regionDir = polyMesh::regionName(regionName);
OR mesh.regionName()
instead of
const word& regionDir =
(
regionName != polyMesh::defaultRegion
? regionName
: word::null
);
Additionally, since the string '/' join operator filters out empty
strings, the following will work correctly:
(polyMesh::regionName(regionName)/polyMesh::meshSubDir)
(mesh.regionName()/polyMesh::meshSubDir)
Reports cloud information for particles passing through a specified cell
zone.
Example usage:
cloudFunctions
{
particleZoneInfo1
{
type particleZoneInfo;
cellZone leftFluid;
// Optional entries
//writer vtk;
}
}
Results are written to file:
- <case>/postProcessing/lagrangian/<cloudName>/<functionName>/<time>
# cellZone : leftFluid
# time : 1.0000000000e+00
#
# origID origProc (x y z) time0 age d0 d mass0 mass
Where
- origID : particle ID
- origProc : processor ID
- (x y z) : Cartesian co-ordinates
- time0 : time particle enters the cellZone
- age : time spent in the cellZone
- d0 : diameter on entry to the cellZone
- d : current diameter
- mass0 : mass on entry to the cellZone
- mass : current mass
If the optional 'writer' entry is supplied, cloud data is written in the
specified format.
During the run, output statistics are reported after the cloud solution,
e.g.:
particleZoneInfo:
Cell zone = leftFluid
Contributions = 257
Here, 'Contributions' refers to the number of incremental particle-move
contributions recorded during this time step. At write times, the output
is extended, e.g.:
particleZoneInfo:
Cell zone = leftFluid
Contributions = 822
Number of particles = 199
Written data to "postProcessing/lagrangian/reactingCloud1/
TUT: filter: add an example for the particleZoneInfo function object
- Previously, the multiFieldValue function object was limited to operate on
lists of fieldValue function objects.
- Any function objects that generate results can now be used, e.g.
pressureAverage
{
type multiFieldValue;
libs (fieldFunctionObjects);
operation average;
functions
{
inlet
{
type surfaceFieldValue;
operation areaAverage;
regionType patch;
name inlet;
fields (p);
writeFields no;
writeToFile no;
log no;
resultFields (areaAverage(inlet,p));
}
outlet
{
type surfaceFieldValue;
operation areaAverage;
regionType patch;
name outlet;
fields (p);
writeFields no;
writeToFile no;
log no;
}
average
{
type valueAverage;
functionObject testSample1;
fields (average(p));
writeToFile no;
log no;
}
}
}
TUT: cavity: add an example for the multiFieldValue function object
- now have compactData(), compactLocalData() and compactRemoteData()
depending on where the compaction information is actually known.
The compactData() performs a consistent union of local and remote
values, which eliminates the danger of mapping to non-existent
locations but does require a double communication to setup.
Typically needed for point maps (for example).
The compactLocalData() and compactRemoteData() work on the
assumption that the source or target values are sufficient for
creating unique compact maps.
Can be used, for example, when compacting cell maps since there is
no possibility of a source cell being represented on different
target processors (ie, each cell is unique and only occurs once).
The existing compact() is equivalent to compactRemoteData()
and is now simply a redirect.
- use bitSet for defining compaction, but the existing compact()
continues to use a boolList (for code compatibility).
BUG: compaction in non-parallel mode didn't compact anything.
STYLE: compact ascii output for procAddressing
- simplify procAddressing read/write
- avoid accessing points in faMeshReconstructor.
Can rely on the patch meshPoints (labelList), which does not need
access to a pointField
- report number of points on decomposed mesh.
Can be useful additional information.
Additional statistics for finite area decomposition
- provide bundled reconstructAllFields for various reconstructors
- remove reconstructPar checks for very old face addressing
(from foam2.0 - ie, older than OpenFOAM itself)
- bundle all reading into fieldsDistributor tools,
where it can be reused by various utilities as required.
- combine decomposition fields as respective fieldsCache
which eliminates most of the clutter from decomposePar
and simplifies reuse in the future.
STYLE: remove old wordHashSet selection (deprecated in 2018)
BUG: incorrect face flip handling for faMeshReconstructor
- a latent bug which is not yet triggered since the faMesh faces are
currently only definable on boundary faces (which never flip)
Geometry calculation scheme that performs geometry updates only in regions
where the mesh has changed, identified by comparing current and old points.
Example usage in fvSchemes:
geometry
{
type solidBody;
// Optional entries
// If set to false, update the entire mesh
partialUpdate yes;
// Cache the motion addressing (changed points, faces, cells etc)
cacheMotion yes;
}
The most frequent changes have been as follows.
from:
tmp<scalarField> tuTau(new scalarField(patch().size(), Zero));
scalarField& uTau = tuTau.ref();
to:
auto tuTau = tmp<scalarField>::New(patch().size(), Zero);
auto& uTau = tuTau.ref();
- Other changes involved the addition of - wherever appropriate -:
const
noexcept
auto
Previously, a nutWallFunctionFvPatchScalarField reference had to be
created in the epsilon, k and omega wall functions to fetch the
common wall-function coefficients needed to carry out and complete
local operations inside these wall functions.
However, this arrangement required the use of a nut wall function
even when unnecessary, whenever any non-nut wall function was used,
so users were needlessly constrained and confronted with rather
obscure casting-error messages.
Also, the wall-function coefficients Cmu, kappa and E were obtained
from the specified nutWallFunction in order to ensure that each patch
possesses the same set of values for these coefficients.
Although the motivation sounds reasonable, this also put redundant
restraints on users and disregarded the specifics of each wall
function. For example, the variation of epsilon in near-wall regions
is usually very steep and non-monotonic - an expert user may
therefore want to use an epsilon-specific coefficient, and this was
not allowed by the previous arrangement.
This commit introduces a new class (i.e. wallFunctionCoefficients)
comprising all common wall-function coefficients and yPlus
calculations.
Previously, a number of wall functions were not writing
their boundary-condition entries in the de facto order
(i.e. from type to value) when writing a field. For example:
<patchName>
{
lowReCorrection 1;
blending stepwise;
n 2;
type epsilonWallFunction; <!-- expected to be the first entry
value uniform 1; <!-- expected to be the last entry
}
Also, various wall functions have been writing out entries that
are not actually used by the wall function. For example:
<patchName>
{
type nutUSpaldingWallFunction;
...
blending stepwise; <!-- no blending treatment in nutUSpaldingWF
...
}
Additionally, various derived wall functions (e.g. atmOmegaWallFunction)
have been failing to write some of the inherited entries even though
these entries are used in the wall-function calculations.
Taking these points into consideration, the wall functions have been
reworked to write their traits in a reliable and consistent way when
writing out a field.
- writeLocalEntries uses writeIfDifferent if constructed with getOrDefault.
ENH: simple faMeshSubset (zero-sized meshes only)
ENH: additional access methods for faMesh, primitive geometry mode
- wrapped walking of boundary edgeLabels as list of list
(similar to edgeFaces).
- primitive finiteArea geometry mode with reduced communication:
primarily interesting for decomposition/redistribution (#2436)
ENH: extra vtk debug outputs for checkFaMesh
- report per-processor sizes in the mesh summary
- similar functionality as newMesh etc.
Relocated to finiteVolume since there are no dynamicMesh dependencies.
- use simpler procAddressing (with updated mapDistributeBase).
separated from redistributePar
- returns UPtrList view (read-only or read/write) of the objects
- shorter names for IOobject checks: hasHeaderClass(), isHeaderClass()
- remove unused IOobject::isHeaderClassName(const word&) method.
The typed versions are preferable/recommended, but can still check
directly if needed:
(io.headerClassName() == "foo")
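A small hedged sketch of the typed check (assuming the templated
isHeaderClass<Type>() form replaces the removed string-based method):
```
// Prefer the typed header check over a string comparison
if (io.isHeaderClass<IOdictionary>())
{
    Info<< io.name() << " has an IOdictionary header" << nl;
}
```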
- additional distribute/reverseDistribute with specified commsType.
Improves flexibility.
- distribute with nullValue
- support move construct mapDistribute from mapDistributeBase
- refactor handling of schedules (as whichSchedule method) to
simplify code.
- renumberMap helper for working with compact sub maps
and renumberVisit for handling walk-ordered compaction.
COMP: make mapDistributeBase data private
- accessor methods are available - direct access is unnecessary
- mapDistribute : inherit mapDistributeBase constructors
STYLE: use List<labelPair>::null() for schedule placeholders
- clearer that they are doing nothing
- for int64 compilations this disambiguates between '0' as int32 (size)
or as bool 'false' for local processor validity
Eg,
IOList list(io, 0); <- With label-size 64: is this bool or label?
IOList list(io, Zero); <- Size = 0 (int32/int64), not a bool
- for indirect lists we use element-wise output streaming and read
back as a regular list. This approach cannot however work with
non-blocking mode - the receive buffers will simply not be filled
before attempting to read from them.
For contiguous data, the lowest overhead solution is to locally
flatten the indirect list and use the regular gather routines
for non-blocking mode. For non-contiguous data, can continue to
use the element-wise output, but cannot use non-blocking for it.
STYLE: use non-blocking consistently as default for globalIndex gather(s)
- most of the front-facing code was already using non-blocking,
but there were a few low-level routines defaulting to scheduled
(but never relied upon in the code).
- previously filtered on the existence of area fields, but with
faMesh::TryNew this is not required anymore.
STYLE: enable -verbose for various parallel utilities (consistency)
- introduced UList<bool>::operator()(label) as part of bf0b3d8872
but with gcc-4.8.5 this participates in operator resolution even
for non-bool lists!!
Partial revert until this predicate handling is really required.
- use DynamicList instead of List in the cache, which reduces the
number of allocations occurring each time.
- since the cached times are stored in sorted order, first check if the
new time is greater than the last list entry. Can then simply append
without performing a binary search and can obviously also skip any
subsequent sorting.
STYLE: add noexcept to Instant methods, declare in header (like Tuple2)
- as part of #2358 the writing was changed to be lazy, which means
that files are only created just before they are actually written.
This helps avoid flooding the filesystem if sample-only is required
and also handles cases such as "rho.*" where the sampled fields are
not known from the objectRegistry at startup.
- now create any new files using the startTime value, which means they
are easier to find but still retains the lazy construct.
Don't expect any file collisions with this, but there could be some
corner cases where the user has edited to remove fields (during
runtime) and then re-edits to add them back in. In this case the
file pointers would be closed but later reopened, overwriting the
old probed values. This could be considered a feature or a bug.
BUG: bad indexing for streamlines (fixes #2454)
- a cut-and-paste error
- only wrap compiler calls (not things like flex/bison)
- avoid single quoted '&&' (causes syntax errors)
STYLE: report WM_COMPILE_CONTROL value in top-level Allwmake
- relocate templating to factory method 'New'.
Adds provisions for more general re-use.
- expose processor topology in globalMesh as topology()
- wrap proc->patch lookup as processorTopology::procPatchLookup method
(failsafe). May consider using Map<label> for its storage in the
future.
- Uses a refPtr to reference external content.
Useful (for example) when writing data without copying.
Reading into external locations is not implemented
(no current requirement for that).
* IOFieldRef -> IOField
* IOListRef -> IOList
* IOmapDistributePolyMeshRef -> IOmapDistributePolyMesh
Eg,
labelList addressing = ...;
io.rename("cellProcAddressing");
IOListRef<label>(io, addressing).write();
Or,
primitivePatch patch = ...;
IOFieldRef<vector>(io, patch.localPoints()).write();
- the values from non-overlapping blocks were simply ignored,
which meant that ('111111111111' & '111111') would not mask out
the unset values at all.
- similar oddities in other operations (|=, ^= etc)
where the original implementation tried hard to avoid touching the
sizing at all, but now better resolved as follows:
- '|=' : Set may grow to accommodate new 'on' bits.
- '^=' : Set may grow to accommodate new 'on' bits.
- '-=' : Never changes the original set size.
- '&=' : Never changes the original set size.
Non-overlapping elements are considered 'off'.
These definitions are consistent with HashSet behaviour
and also ensures that (a & b) == (b & a)
ENH: improve short-circuiting within bitSet ops
- in a few places can optimise by checking for none() instead of
empty() and avoid unnecessary block operations.
ENH: added bitSet::resize_last() method
- as the name says: resizes to the last bit set.
A friendlier way of writing `resize(find_last()+1)`
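A tiny sketch of resize_last() (sizes purely illustrative):
```
bitSet marked(100);          // 100 bits, all unset
marked.set(3);
marked.set(42);

marked.resize_last();        // same as marked.resize(marked.find_last()+1)
Info<< marked.size() << nl;  // now 43
```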
- uniq() : creates an IndirectList with duplicated entries
filtered out
- subset() : creates an IndirectList with positions that satisfy
a condition predicate.
- subset_if() : creates an IndirectList with values that satisfy a
given predicate.
An indirect subset will be cheaper than creating a subset copy
of the original data, and also allows modification.
STYLE: combine UIndirectList.H into UIndirectList.H (reduce file clutter)
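A hedged sketch of the new factories; the static-method signatures
shown here are an assumption based on the description above:
```
labelList values({4, 2, 4, 7, 2, 9});

// addressing into 'values' with duplicate entries filtered out
IndirectList<label> unique(IndirectList<label>::uniq(values));

// addressing for the values satisfying a predicate (here: even numbers)
IndirectList<label> evens
(
    IndirectList<label>::subset_if(values, [](label x) { return !(x % 2); })
);
```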
- the sorted() method fills a UPtrList with sorted entries. In some
places this can provide a more convenient means of traversing a
HashTable in consistent order, without the extra step of creating
a sortedToc(). The sorted() method with a UPtrList will also have
a lower overhead than creating any sortedToc() or toc() since it is
a list of pointers and not full copies of the keys.
Instead of this:
HashTable<someType> table = ...;
for (const word& key : table.sortedToc())
{
Info<< key << " => " << table[key] << nl;
}
can write this:
for (const auto& iter : table.sorted())
{
Info<< iter.key() << " => " << iter.val() << nl;
}
STYLE:
- declare hash entry key 'const' since it is immutable
- local writeHeaderEntry helper was not marked as file-scope static.
- use do/while to simplify handling of padding spaces
ENH: IOobject - copy construct, resetting name and local component
- when copying with a new local component, this is simpler than
constructing from all of the components, which was previously the
only possibility for setting a new local component.
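A hedged sketch; the exact argument order (new name, then new local
component) is an assumption based on the description above:
```
// 'io' is an existing IOobject providing db, instance and rw options;
// copy it, resetting only the name and the local component
IOobject localIO(io, "procAddressing", polyMesh::meshSubDir);
```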
- commonly used, only depends on routines defined in UList
(don't need the rest of ListOps for it).
ENH: implement boolList::operator() const
- allows use as a predicate functor, as per bitSet and labelHashSet
GIT: combine SubList, UList into List directory (intertwined concepts)
STYLE: default initialize DynamicList instead of with size 0
- specifies the number of consecutive cells to assign to the same
randomly chosen processor. Can be used to obtain a less extreme
random distribution for testing possible breaking points.
Eg,
method random;
coeffs
{
agglom 4;
}
- Add finiteArea cellID (actually face ids) / faceLabel and procID
for foamToVTK with -write-ids. Useful when this type of information
is needed.
- Arbitrary number of outlets can be connected to a single inlet
- Each inlet can be connected to different and arbitrary
combination of outlets
- Each outlet-inlet connection has:
- Optional filtration fraction as a Function1 type
- Optional offset as a Function1 type (i.e. adding/subtracting a substance)
- Optional time delay (from outlet to inlet) as a Function1 type
- Each inlet has an optional base inlet-field as a PatchFunction1 type
The blendingFactor function object overwrites the DEShybrid:Factor
field internally when blendedSchemeBase debug flag is active.
However, users are allowed to write out the original DEShybrid:Factor
field by executing the writeObjects function object before
any blendingFactor function object execution.
- direct construct and reset method for creating a zero-sized (dummy)
subMesh. Has no exposed faces and no parallel synchronization
required.
- core mapping (interpolate) functionality with direct handling
of subsetting in fvMeshSubset (src/finiteVolume).
Does not use dynamicMesh topology changes
- two-step subsetting as fvMeshSubsetter (src/dynamicMesh).
Does use dynamicMesh topology changes.
This is apparently only needed by the subsetMesh application itself.
DEFEATURE: remove deprecated setLargeCellSubset() method
- was deprecated JUL-2018, now removed (see issue #951)
- allows restricted evaluation to specific coupled patch types.
Code relocated/refactored from redistributePar.
STYLE: ensure use of waitRequests() also corresponds to nonBlocking
ENH: additional copy/move construct GeometricField from DimensionedField
STYLE: processorPointPatch owner()/neighbour() as per processorPolyPatch
STYLE: orientedType with bool cast operator and noexcept
- move construct from components. Construct with optional IO control
- separate init() method (as per polyMesh) to delay evaluation of
globalData and base geometry.
- faMesh removeFiles method
ENH: faBoundaryMeshEntries for reading faBoundary files without a mesh
ENH: adjust debug output for {fa,fae,fv,fvs}patchField::New
- add alternative constraint type selection for faePatchField.
- unify handling of "patchType" reading.
Make less noisy when reporting dictionary defaults.
- allows reuse by finiteArea, for example.
- simplify edge looping with face thisLabel/nextLabel method
ENH: additional storage checks for mesh weights (faMesh + fvMesh)
- allow finite-area field decomposition without edge weights.
STYLE: use tmp New in various places. Simpler updateGeom check
STYLE: remove spurious (no-op) processor boundary evaluations
- boundary fields for faceAreaCentres and edgeCentres had no-op
initEvaluate/evaluate pair on processor boundaries.
Now consistent with each other and with how finiteVolume is defined.
STYLE: add comments about which private methods trigger communication
- reduce the amount of communication when checking zones and patches
by performing the synchronization check on the gathered strings
(master only) and reduce or broadcast the result.
STYLE: simplify coupled() checks depending only on parRun
* lessEqOp -> lessEqualOp
* greaterEqOp -> greaterEqualOp
to avoid ambiguity with other forms such as 'plusEqOp' where the
'Eq' implies an assignment. The name change also aligns better with
C++ <functional> names such as std::less_equal, std::greater_equal
ENH: simple labelRange predicates gt0/ge0/lt0/le0
- mirrors scalarRange tests.
Lower overhead than using labelMinMax::ge(0) etc since it does not
create an intermediate (is stateless) and can be used as a constexpr
- was in fvMotionSolver, but only requires PatchFunction1 capabilities
(from within meshTools).
GIT: relocate IOmapDistributePolyMesh (from dynamicMesh to OpenFOAM)
- adds handling of negative start times for masterUncollatedFileOperation
as well (#1112).
- handle failures *after* restoring non-parRun mode.
This ensures exit(FatalError) will exit MPI properly as well.
STYLE: replace "polyMesh" with polyMesh::meshSubDir
STYLE: adjust IOobject read/write enumerated values
- provision for possible bitwise handling
- additional Pstream::broadcasts() method to serialize/deserialize
multiple items (see the sketch after this list).
- revoke the broadcast specialisations for std::string and List(s) and
use a generic broadcasting template. In most cases, the previous
specialisations would have required two broadcasts:
(1) for the size
(2) for the contiguous content.
Now favour reduced communication over potential local (intermediate)
storage that would have only benefited a few select cases.
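A hedged sketch of the broadcasts() call noted above (assuming the
variadic form takes the communicator first):
```
label nTotal = 0;
scalar maxCo = 0;
wordList fieldNames;

if (Pstream::master())
{
    nTotal = 100;
    maxCo = 0.8;
    fieldNames = {"p", "U"};
}

// serialize/deserialize all three items with a single call
Pstream::broadcasts(UPstream::worldComm, nTotal, maxCo, fieldNames);
```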
ENH: refine PstreamBuffers access methods
- replace 'bool hasRecvData(label)' with 'label recvDataCount(label)'
to recover the number of unconsumed receive bytes from a specified
processor. Can use 'labelList recvDataCounts()' to recover the
number of unconsumed receive bytes from all processors.
- additional peekRecvData() method (for transcribing contiguous data)
ENH: globalIndex whichProcID - check for isLocal first
- reasonable to assume that local items are searched for more
frequently, so do preliminary check for isLocal before performing
a more costly binary search of globalIndex offsets
ENH: masterUncollatedFileOperation - bundled scatter of status
Eg,
export WM_COMPILER=Clang130
export WM_COMPILE_CONTROL="version=13.0 +lld"
- also support the mold linker (+mold) for clang
STYLE: report as 'link' stage instead of 'ld' in short messages
- use vector::removeCollinear a few places
COMP: incorrect initialization order in edgeFaceCirculator
COMP: Silence boost bind deprecation warnings (before CGAL-5.2.1)
- for most field types this is a no-op, but for a field of floatVector
or doubleVector (eg, vector and solveVector) it will normalise each
element with divide-by-zero protection.
More reliable and efficient than dividing a field by the mag of itself
(even with VSMALL protection).
Applied to FieldField and GeometricField as well.
Eg,
fld.normalise();
vs.
fld /= mag(fld) + VSMALL;
ENH: support optional tolerance for vector::normalise
- for cases where tolerances larger than ROOTVSMALL are preferable.
Not currently available for the field method (a templating question).
ENH: vector::removeCollinear method
- when working with geometries it is frequently necessary to have a
normal vector without any collinear components. The removeCollinear
method provides clearer, more compact code.
Eg,
vector edgeNorm = ...;
const vector edgeDirn = e.unitVec(points());
edgeNorm.removeCollinear(edgeDirn);
edgeNorm.normalise();
vs.
vector edgeNorm = ...;
const vector edgeDirn = e.unitVec(points());
edgeNorm -= edgeDirn*(edgeDirn & edgeNorm);
edgeNorm /= mag(edgeNorm);
- for obtaining set entries from a boolList
- BitOps::select to mirror bitSet constructor but returning a boolList
- BitOps::set/unset for boolList
ENH: construct bitSet from a labelRange
- useful, for example, when marking up patch slices
ENH: ListOps methods
- ListOps::count_if to mirror std::count_if but with list indexing.
- ListOps::find_if to mirror std::find_if but with list indexing.
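A hedged sketch of the list-indexed helpers (find_if is assumed to
return the index of the first match, or -1 when nothing matches):
```
labelList values({-3, 5, 0, 8, -1});

const label nNeg = ListOps::count_if(values, [](label x) { return x < 0; });
const label firstBig = ListOps::find_if(values, [](label x) { return x > 4; });

Info<< nNeg << " negative entries, first entry > 4 at index "
    << firstBig << nl;   // 2 and 1
```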
ENH: UPtrList::test() method.
- includes bounds checks, which means it can be used in more places
(eg, even if the storage is empty).
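A short sketch of the bounds-checked test() (here via PtrList, which
inherits it):
```
PtrList<scalarField> fields(4);            // all slots initially unset
fields.set(1, new scalarField(10, Zero));

if (fields.test(1))   { Info<< "slot 1 is set" << nl; }
if (!fields.test(99)) { Info<< "out-of-range is simply false, not an error" << nl; }
```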
Previous commit solved: "mixture rho to volume-based in rhoThermo."
This proved to work correctly for rho=constant EoS but not for
idealGas. Fixes #2304. The previous gitlab issue was #1812.
- `functions<scalar>` and `functions<vector>` were erroneously
documented in header as `lookup<scalar>` etc.
INT: handle fluent square brackets (fixes#2429)
- patch applied from openfoam.org
- support direct processing of CompactListList instead of requiring
a conversion to labelListList for bandCompression and renumbering
methods.
- manage FIFO with CircularBuffer instead of SLList (avoids
allocations in inner loops). Invert logic to use a bitSet of
unvisited cells, which improves looping as the matrix becomes more
sparse.
- fix missed weighting in bandCompression (same as #1376).
In polyTopoChange, handle removed cells immediately to simplify
the logic and align more closely with bandCompression.
STYLE: enclose bandCompression within meshTools namespace
ENH: PrimitivePatch pointFaces with DynamicList instead of SLList
- MPI_Gatherv requires contiguous data, but a byte-wise transfer can
quickly exceed the 'int' limits used for MPI sizes/offsets. Thus
gather label/scalar components when possible to increase the
effective size limit.
For non-contiguous types (or large contiguous data) it now also
reverts to manual handling.
ENH: handle contiguous data in GAMGAgglomeration gather values
- delegate to globalIndex::gatherValues static method (new)
- bundles frequently used 'gather/scatter' patterns more consistently.
- combineAllGather -> combineGather + broadcast
- listCombineAllGather -> listCombineGather + broadcast
- mapCombineAllGather -> mapCombineGather + broadcast
- allGatherList -> gatherList + scatterList
- reduce -> gather + broadcast (ie, allreduce)
- The allGatherList currently wraps gatherList/scatterList, but may be
replaced with a different algorithm in the future.
STYLE: PstreamCombineReduceOps.H is mostly unneeded now
STYLE: LduInterfaceFieldPtrsList as alias instead of a class
STYLE: define patch lists typedefs when defining the base patch
- eg, polyPatchList typedef within polyPatch.H
INT: relocate GeometricField::Boundary -> GeometricBoundaryField
- was internal to GeometricField but moving it outside simplifies
forward declarations etc. Code adapted from openfoam.org
Two problems:
- flipping inside snappyHexMesh is not done in a parallel-consistent
way, e.g. the octree-cached inside/outside information has already
been calculated. For now, flipping of distributedTriSurfaceMesh is
disabled.
- octree-cached inside/outside information was using already
cached information and would only work for outwards pointing
volumes
- percent of cells is taken relative to selection size.
- percent of faces is taken relative to the number of boundary faces
that do not fix velocity themselves.
ENH: avoid correctBoundaryConditions() if values were not limited
- when writing surface formats (eg, vtk, ensight etc) the sampled
surfaces merge the faces/points originating from different
processors into a single surface (ie, patch gatherAndMerge).
Previous versions of mergePoints simply merged all points possible,
which proves to be rather slow for larger meshes. This has now been
modified to only consider boundary points, which reduces the number
of points to consider. As part of this change, the reference point
is now always equivalent to the min of the bounding box, which
reduces the number of search loops. The merged points retain their
original order.
- inplaceMergePoints version to simplify use and improve code
robustness and efficiency.
ENH: make PrimitivePatch::boundaryPoints() less costly
- if edge addressing does not already exist, it will now simply walk
the local face edges directly to define the boundary points.
This avoids a rather large overhead of the full faceFaces,
edgeFaces, faceEdges addressing.
This operation is now more important since it is used in the revised
patch gatherAndMerge.
ENH: topological merge for mesh-based surfaces in surfaceFieldValue
- lower memory overhead, simpler code and eliminates need for
ListListOps::combineOffset()
- optional handling of local faces/points for re-using in different
contexts
STYLE: labelUList instead of labelList for globalMesh mergePoints
STYLE: adjust verbose information from mergePoints
- also report the current new-point location
- also disables PointData if manifold cells are detected.
This is a partial workaround for volPointInterpolation problems
with handling manifold cells.
- additional verbosity option for conversions
- ignore old `-finite-area` option and always convert available
finiteArea mesh/fields unless `-no-finite-area` is specified (#2374)
ENH: simplify point offset handling for ensight output
- extend writing to include compact face/cell lists
- a try/catch approach is not really robust enough (or even possible)
since read failures likely do not occur on all ranks simultaneously.
This leads to situations where the master has thrown an exception
(and thus exiting the current routine) while other ranks are still
waiting to receive data and the program blocks completely.
Since this primarily affects data conversion routines such as
foamToEnsight etc, treat similarly to lagrangian: check for the
existence of essential files before proceeding or not. This is
wrapped into a TryNew factory method:
autoPtr<faMesh> faMeshPtr(faMesh::TryNew(mesh));
if (faMeshPtr) ...
- gather/scatter types of operations can avoid AllToAll communication
and use simple MPI gather (or scatter) to establish the receive sizes.
New methods: finishedGathers() / finishedScatters()
BUG: masterUncollatedFileOperation checking of file-size
- used Foam::fileSize check to decide on scheduled/nonBlocking but this
was being done on all ranks and subsequently broadcast.
Now avoid unnecessary filesystem access on non-master ranks.
- both schemes and solutions data are treated as MUST_READ_IF_MODIFIED
even if the requested readOption is nominally MUST_READ or
READ_IF_PRESENT, but now delay this change.
- do not need construct or move assign from SortableList.
Rarely (never) used, and can simply be treated like a normal list
by applying shrink beforehand.
- make append() methods return void instead of returning self, which
makes it easier to derive from. Having them return self was a bit of
an original design mistake.
Chained appends do not actually occur anywhere. Even if they were
to be used, would not want to rely on them (fear of slicing on any
derived classes).
BUG: IndirectList iterator comparison loses constness
- eliminate redundant size_ accounting
- drop extra 'Container' template parameter and replace functionality
with more flexible pack/unpack methods.
There is also a pack() method that handles indirect lists of lists
that can be used, for example, to pack a patch slice of faces.
Drop the 'operator()' method in favour of unpack to expose and properly
document the conversion. Should revisit the corresponding code in
some places for optimization potential.
- align some method names with globalIndex:
totalSize(), maxSize() etc
- less communication than gatherList/scatterList
ENH: refine send granularity in Pstream::exchange
STYLE: ensure PstreamBuffers and defaultCommsType agree
- simpler loops for lduSchedule
- can restrict calculation of D32 and other spray properties to a
subset of parcels. Uses a predicate selection mechanism similar to
vtkCloud etc.
ENH: code cleanup in scalar predicates
- pass by value not reference in predicates
- additional assign() method to refactor common code
- with the special setFormat "probes", all of the sampled sets are
treated more similarly to probes, with an ensemble output to raw
probed format.
This is of course less useful when the number of sampled points
becomes very large.
- can now specify sampled sets as dictionary entries instead of a list
entry.
can now use: sets { ... }
instead of: sets ( ... );
This is similar to sampled surfaces and makes it easier to
manage with dictionary manipulation tools.
TUT: update to use writeTime instead of outputTime
- in v2112 the functionObject results were only delivering values from
the last set listed (ie, overwritten).
Now that the values are properly scoped by the name of the set itself,
e.g. `average(lines,p)` for the average of the 'lines' set, existing
workflows will break.
It thus makes reasonable sense to also handle results without a
qualifier as ensemble values.
average(p) // Ensemble average of all listed sets
- the very old 'writer' class was fully stateless and always templated
on a particular output type.
This is now replaced with a 'coordSetWriter' with similar concepts
as previously introduced for surface writers (#1206).
- writers change from being a generic state-less set of routines to
more properly conforming to the normal notion of a writer.
- Parallel data is done *outside* of the writers, since they are used
in a wide variety of contexts and the caller is currently still in
a better position for deciding how to combine parallel data.
ENH: update sampleSets to sample on per-field basis (#2347)
- sample/write a field in a single step.
- support for 'sampleOnExecute' to obtain values at execution
intervals without writing.
- support 'sets' input as a dictionary entry (as well as a list),
which is similar to the changes for sampled-surface and permits use
of changeDictionary to modify content.
- globalIndex for gather to reduce parallel communication, less code
- qualify the sampleSet results (properties) with the name of the set.
The sample results were previously without a qualifier, which meant
that only the last property value was actually saved (previous ones
overwritten).
For example,
```
sample1
{
scalar
{
average(line,T) 349.96521;
min(line,T) 349.9544281;
max(line,T) 350;
average(cells,T) 349.9854619;
min(cells,T) 349.6589286;
max(cells,T) 350.4967271;
average(line,epsilon) 0.04947733869;
min(line,epsilon) 0.04449639927;
max(line,epsilon) 0.06452856475;
}
label
{
size(line,T) 79;
size(cells,T) 1720;
size(line,epsilon) 79;
}
}
```
ENH: update particleTracks application
- use globalIndex to manage original parcel addressing and
for gathering. Simplify code by introducing a helper class,
storing intermediate fields in hash tables instead of
separate lists.
ADDITIONAL NOTES:
- the regionSizeDistribution largely retains separate writers since
the utility of placing sum/dev/count for all fields into a single file
is questionable.
- the streamline writing remains a "soft" upgrade, which means that
scalar and vector fields are still collected a priori and not
on-the-fly. This is due to how the streamline infrastructure is
currently handled (should be upgraded in the future).
Automatic hole closure:
- introduces 'holeToFace' topoSet source
- used when detecting a 'leak-path'
- creates additional baffles to close the leak
Multi-stage layer addition:
- Can add layers in multiple passes
See issues: #2403, #2404
- for metis-like graphs there is no guarantee that a zero-sized graph
has an offsets list with size 1 or size 0, so always use
numCells = max(0, xadj.size()-1)
this was already done in most places, but missed in the
decomposeGeneral method
STYLE: use sumOp<label>() instead of plusOp<label>()
- the internal data are contiguous so can broadcast size and internals
directly without an intermediate stream.
ENH: split out broadcast time for profilingPstream information
STYLE: minor Pstream cleanup
- UPstream::commsType_ from protected to private, since it already has
inlined noexcept getters/setters that should be used.
- don't pass an unused/unneeded tag into low-level MPI reduction templates.
Document where tags are not needed
- had Pstream::broadcast instead of UPstream::broadcast in internals
- used Pstream::maxCommsSize (bytes) for the lower limit when sending.
This would have sent more data on each iteration than expected based
on maxCommsSize and finished with a number of useless iterations.
Was generally not a serious bug since maxCommsSize (if used) was
likely still far away from the MPI limits and exchange() is primarily
harnessed by PstreamBuffers, which is sending character data
(ie, number of elements and number of bytes is identical).
- For v2112 and earlier: pre-assembled lists of particles
to be transferred and target patch on a per processor basis.
Apart from the memory overhead of assembling the lists, this adds
allocations/de-allocations when building linked-lists.
- Now stream particle transfer tuples directly into PstreamBuffers.
Use a local cache of UOPstream wrappers for the formatters
(since there are potentially many particles being shifted about).
On the receiving side, read out tuple-wise.
- Communication on transfers now restricted to the immediate
neighbours instead of using an all-to-all to exchange sizes.
Applied to Cloud::move and RecycleInteraction
- now largely encapsulated using PstreamBuffers methods,
which makes it simpler to centralize and maintain
- avoid building intermediate structures when sending data,
remove unused methods/data
TUT: parallel version of depthCharge2D
STYLE: minor update in ProcessorTopology
- PstreamBuffers nProcs() and allProcs() methods to recover the rank
information consistent with the communicator used for construction
- allowClearRecv() methods for more control over buffer reuse
For example,
pBufs.allowClearRecv(false);
forAll(particles, particlei)
{
pBufs.clear();
fill...
read via IPstream(..., pBufs);
}
This preserves the receive buffers memory allocation between calls.
- finishedNeighbourSends() method as compact wrapper for
finishedSends() when send/recv ranks are identical
(eg, neighbours)
- hasSendData()/hasRecvData() methods for PstreamBuffers.
Can be useful for some situations to skip reading entirely.
For example,
pBufs.finishedNeighbourSends(neighProcs);
if (!returnReduce(pBufs.hasRecvData(), orOp<bool>()))
{
// Nothing to do
continue;
}
...
On an individual basis:
for (const int proci : pBufs.allProcs())
{
if (pBufs.hasRecvData(proci))
{
...
}
}
Also conceivable to do the following instead (nonBlocking only):
if (!returnReduce(pBufs.hasSendData(), orOp<bool>()))
{
// Nothing to do
pBufs.clear();
continue;
}
pBufs.finishedNeighbourSends(neighProcs);
...
- a somewhat specialized use case, but can be useful when there are
many ranks with sparse communication but for which the access
pattern is established during inner loops.
PstreamBuffers pBufs(Pstream::commsTypes::nonBlocking);
pBufs.allowClearRecv(false);
PtrList<OPstream> output(Pstream::nProcs());
while (condition)
{
// Rewind existing streams
forAll(output, proci)
{
auto* osptr = output.get(proci);
if (osptr)
{
(*osptr).rewind();
}
}
for (Particle& p : myCloud)
{
label toProci = ...;
// Get or create output stream
auto* osptr = output.get(toProci);
if (!osptr)
{
osptr = new OPstream(toProci, pBufs);
output.set(toProci, osptr);
}
// Append more data...
(*osptr) << p;
}
pBufs.finishedSends();
... reads
}
- split off a Pstream::genericBroadcast() which uses UOPBstream during
serialization and UIPBstream during de-serialization.
This function will not normally be used directly by callers, but
provides a base layer for higher-level broadcast calls.
- low-level UPstream broadcast of string content.
Since std::string has length and contiguous content, it is possible
to handle directly by the following:
1. broadcast size
2. resize
3. broadcast content when size != 0
Although this is a similar amount of communication as the generic
streaming version (min 1, max 2 broadcasts) it is more efficient
by avoiding serialization/de-serialization overhead.
- handle broadcast of List content distinctly.
Allows an optimized path for contiguous data, similar to how
std::string is handled (broadcast size, resize container, broadcast
content when size != 0), but can revert to genericBroadcast (streamed)
for non-contiguous data.
- make various scatter variants simple aliases for broadcast, since
that is what they are doing behind the scenes anyhow:
* scatter()
* combineScatter()
* listCombineScatter()
* mapCombineScatter()
Except scatterList() which remains somewhat different.
Beyond the additional (size == nProcs) check, the only difference to
using broadcast(List<T>&) or a regular scatter(List<T>&) is that
processor-local data is skipped. So leave this variant as-is.
STYLE: rename/prefix implementation code with 'Pstream'
- better association with its purpose and provides a unique name
- reduces later surprises and simplifies effort for the caller
- more flexible globalIndex scatter with auto-sized return field.
- Avoid communication for scattering into zero-sized fields.
- the data front for isoAdvection can be particularly sparse and at
higher processor counts there is an advantage to avoiding all-to-all
communication for the PstreamBuffers exchange
Based on code changes from T.Aoyagi(RIST), A.Azami(RIST)
- use MPI_Bcast intrinsic instead of manual tree to reduce the overall
number of messages.
Old behaviour can be re-enabled with
`#define Foam_Pstream_scatter_nobroadcast`
- The idea of broadcast streams is to replace multiple master to
subProcs communications with a single MPI_Bcast.
if (Pstream::master())
{
OPBstream toAll(Pstream::masterNo());
toAll << data;
}
else
{
IPBstream fromMaster(Pstream::masterNo());
fromMaster >> data;
}
// vs.
if (Pstream::master())
{
for (const int proci : Pstream::subProcs())
{
OPstream os(Pstream::commsTypes::scheduled, proci);
os << data;
}
}
else
{
IPstream is(Pstream::commsTypes::scheduled, Pstream::masterNo());
is >> data;
}
Can simply use UPstream::broadcast() directly for contiguous data
with known lengths.
Based on ideas from T.Aoyagi(RIST), A.Azami(RIST)
- native MPI min/max/sum reductions for float/double
irrespective of WM_PRECISION_OPTION
- native MPI min/max/sum reductions for (u)int32_t/(u)int64_t types,
irrespective of WM_LABEL_SIZE
- replace rarely used vector2D sum reduction with FixedList as a
indicator of its intent and also generalizes to different lengths.
OLD:
vector2D values; values.x() = ...; values.y() = ...;
reduce(values, sumOp<vector2D>());
NEW:
FixedList<scalar,2> values; values[0] = ...; values[1] = ...;
reduce(values, sumOp<scalar>());
- allow returnReduce() to use native reductions. Previous code (with
linear/tree selector) would have bypassed them inadvertently.
ENH: added support for MPI broadcast (for a memory span)
ENH: select communication schedule as a static method
- UPstream::whichCommunication(comm) to select linear/tree
communication instead of ternary or
if (Pstream::nProcs() < Pstream::nProcsSimpleSum) ...
STYLE: align nProcsSimpleSum static value with etc/controlDict override
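A hedged sketch (assuming whichCommunication() returns the commsStruct
list, as linearCommunication()/treeCommunication() do):
```
// Select the linear or tree schedule for this communicator in one call
const List<UPstream::commsStruct>& comms = UPstream::whichCommunication(comm);

const UPstream::commsStruct& myComm = comms[UPstream::myProcNo(comm)];
// ... receive from myComm.below(), send to myComm.above()
```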
- refactor as an MPI-independent base class.
Add bufferIPC{send,recv} private methods for construct/destruct.
Eliminates code duplication from two constructor forms and reduces
additional constructor definitions in dummy library.
- add PstreamBuffers access methods, refactor common finish sends
code, tweak member packing
ENH: resize_nocopy for processorLduInterface buffers
- content is immediately overwritten
STYLE: cull unneeded includes in processorFa*
- handled by processorLduInterface
- this can be used to apply a uniform field level to remove from
a sampled field. For example,
fieldLevel
{
"p.*" 1e5; // Absolute -> gauge [Pa]
T 273.15; // [K] -> [C]
U #eval{ 10/sqrt(3) }; // Uniform mag(U)=10
}
After the fieldLevel has been removed, any fieldScale is applied.
For example
fieldScale
{
"p.*" 0.01; // [Pa] -> [mbar]
}
The fieldLevel for vector and tensor fields may still need some
further refinement.
The runTimeControl function object can activate further function objects using
triggers. Previously the trigger index could only advance; this change set
allows users to set smaller values to enable function object recycling, e.g.
Repeat for N cycles:
1. average the pressure at a point in space
2. when the average stabilises, run for a further 100 iterations
3. set a new patch inlet velocity
- back to (1)
- Removes old default behaviour that only permitted an increase in the
trigger level. This type of 'ratcheting' mechanism (if required) is
now the responsibility of the derived function object.
- notably affects writing continuous data in binary. If generating a
compound token (eg, List<label>), need to add in the size prefix
otherwise it cannot actually be parsed properly as a List.
BUG: bad fallthrough for compound reading (FixedList)
- the branch was likely never reached, but would have attempted to
read twice due to a bad fall-through condition.
GIT: relocate globalIndex (is independent of mesh)
STYLE: include label/scalar Fwd in contiguous.H
STYLE: unneeded commSchedule include in GeometricField
- as a side-effect of changes to probes, the file pointers are not
automatically created when reading the dictionary, but delayed
until prepare(WRITE_ACTION) is called.
This nuance was missed in thermoCoupleProbes.
- added in special handling for monitoring controlDict.
Since controlDict is an unwatchedIOdictionary (not IOdictionary) and
not registered either, the usual objectRegistry caching is not
available. Instead, access directly from Time.
Left the balance of the file handling largely intact (for handling
unregistered dictionaries) but could potentially revisit in the
future and attempt master-only file access if required. However,
most other IOdictionary types will be registered, otherwise the
READ_IF_MODIFIED mechanism would not really work properly.
- when used for example with wallShearStress, the stress field is
initially created as incompressible but later updated with the
correct compressible/incompressible dimensions.
If this field is sampled as a surface and stored on the registry
the dimensions should be reset() and not '=' assigned, since that
causes a dimension check which will obviously fail.
- add writer support for VERTICES
- updated use of globalIndex
ENH: add base vtk writer for points/verts/lines
STYLE: noexcept, explicit constructors etc
- when used with *any* alphaField and normalised (the usual case),
this would largely give a 0-1 range corresponding to the min/max of
the first component, but could also yield negative values.
- if the alpha field corresponds identically to the colour field, it is
readily possible to combine them into RGBA sequences. However, if the
fields are different it potentially means referencing an opacity
field that has not yet been sampled. This impedes using the format
for a streaming sampler without additional overhead and/or rewriting
the alpha channel later.
- scene
- write with fileName, additional getMesh accessor
- addColourToMesh accepts an alpha field size 1 as a constant
alpha value
- sceneWriter wrapper
ENH: improve gltf handling of colour and alpha specification
- accept plain input directly.
Eg,
colour (1 0 1);
vs
colour uniform;
colourValue (1 0 1);
- use field magnitude for colouring of non-scalar fields.
Eg, having three different colour maps for a vector field simply
does not help much with visualisation.
- meshTools is the first layer in which coordSet is actually needed
STYLE: rename writer implementations in advance of upcoming changes (#2347)
- simplifies tracing of code changes (git blame)
- supports sampling/probing of values to obtain min/max/average/size
at execution intervals without writing any output or generating
output directories.
- 'verbose' option for additional output
- min, max, average and sample size results now stored in
functionObjectProperties similar to sampledSets, e.g. for field p
- min(p)
- max(p)
- average(p)
- size(p)
ENH: provide fieldTypes::surface names (as per fieldTypes::volume)
ENH: reduce number of files for surface fields
- combine face and point field declarations/definitions,
simplify typeName definitions
- used low-level MPI gather, but the wrapping routine contains an
additional safety check for is_contiguous which is not defined for
various std::pair<..> combination.
So std::pair<label,vector> (which is actually contiguous, but not
declared as is_contiguous) would falsely trip the check.
Avoid by simply gathering unbundled values instead.
- do not need STRINGIFY macros in ragel code
- remove wordPairHashTable.H and use equivalent wordPairHashes.H instead
STYLE: replace addDictOption with explicit option
- the usage text is otherwise misleading
GIT: combine Pair/Tuple2 directories
- unused in regular OpenFOAM code
- POSIX version uses deprecated gethostbyname()
- Windows version never worked
COMP: localize, noexcept on internal OSspecific methods
STYLE: support fileName::Type SYMLINK and LINK as synonyms
The logic was not maintaining consistent sets of constraints
on different processors. A single processor with a full
match (very easy with 0 local faces) would invalidate
adding the constraint.
- for contiguous data, added mpiGatherOp() to complement the
gatherOp() static method
- the gather ops (static methods) populate the globalIndex on the
master only (not needed on other procs) for reduced communication
- rename inplace gather methods to include 'inplace' in their name.
Regular gather methods return the gathered data directly, which
allows the following:
const scalarField mergedWeights(globalFaces().gather(wghtSum));
vs.
scalarField mergedWeights;
globalFaces().gather(wghtSum, mergedWeights);
or even:
scalarField mergedWeights;
List<scalarField> allWeights(Pstream::nProcs());
allWeights[Pstream::myProcNo()] = wghtSum;
Pstream::gatherList(allWeights);
if (Pstream::master())
{
mergedWeights =
ListListOps::combine<scalarField>
(
allWeights, accessOp<scalarField>()
);
}
- add parRun guards on various globalIndex gather methods
(simple copies or no-ops in serial) to simplify the effort for callers.
ENH: reduce code effort for clearing linked-lists
ENH: adjust linked-list method name
- complement linked-list append() method with prepend() method
instead of 'insert', which is not very descriptive
Assumes that a gap is formed where both surfaces agree, i.e.
it takes the minimum distance of the two. This means that
any wave only needs to be propagated according to the
originating surface.
- set() was silently deprecated in favour of reset() FEB-2018
since the original additional check for overwriting an existing
pointer was never used. The reset(...) name is more consistent
with unique_ptr, tmp etc.
Now emit deprecations for set().
- use direct test for autoPtr, tmp instead of valid() method.
More consistent with unique_ptr etc.
STYLE: eliminate redundant ptr() use on cloned quantities
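A short sketch of the preferred idioms (file name purely illustrative):
```
autoPtr<OFstream> osPtr;

osPtr.reset(new OFstream("output.dat"));   // reset() rather than deprecated set()

if (osPtr)                                 // direct bool test instead of valid()
{
    *osPtr << "ready" << nl;
}
```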
- occurs with newer gcc on ubuntu impish (gcc-11.2.0), but may perhaps
actually be related to `-flto=auto` or to the destruction order of
the static variables (race condition?).
Leaving the compat table around for automatic cleanup does not
impact on other lookups (which are nullptr checked anyhow).
- partial revert for 13740de427 (#2158)
MS-MPI does not currently have a MPI_Comm_create_group(),
so keep using MPI_Comm_create() there.
Only affects multi-world simulations.
CONFIG: retain dummy version of libPstream.dll
- retain as libPstream.dll-dummy so that it is available for
manual replacement of the regular libPstream.dll (#2290)
Keep extra copy of libPstream.dll as libPstream.dll-msmpi
(for example) for manual replacement.
- this is now consistent with what the internal
"get(Vol|Surface|Point)Field" methods deliver
(ie, zero-gradient for volume, calculated otherwise).
Still some slight inconsistencies with what the internal
"new(Vol|Surface|Point)Field" methods deliver however.
There they are always "calculated"
Enables particles to interact with mesh faces (described using faceZones).
faceInteraction1
{
type faceInteraction;
faceZones
(
(blockageFaces stick)
// (blockageFaces escape)
// (blockageFaces rebound) // not applicable for this test case (!)
);
dMin 0;
dMax 1;
}
The faceZones entry is a list of (faceZoneName interactionType), where
interaction type is either stick, escape or rebound.
The parcel initial velocity can now be set using the new `velocityType`
entry, taking one of the following options:
- fixedValue : (default) same as earlier versions, requires U0
- patchValue : velocity set to seed patch face value
- zeroGradient : velocity set to seed patch face adjacent cell value
Example usage:
model1
{
type patchInjection;
massTotal 1;
SOI 0;
parcelBasisType mass;
patch cylinder;
duration 10;
parcelsPerSecond 100;
velocityType patchValue;
//velocityType zeroGradient;
//U0 (-10 0 0);
flowRateProfile constant 1;
sizeDistribution
{
type normal;
normalDistribution
{
expectation 1e-3;
variance 1e-4;
minValue 1e-5;
maxValue 2e-3;
}
}
}
See the new $FOAM_TUTORIALS/lagrangian/kinematicParcelFoam/spinningDisk tutorial
The turbulentTemperatureCoupledBaffleMixed boundary condition
has been superseded by the turbulentTemperatureRadCoupledMixed condition
TUT: injectorPipe: remove an unused entry
TUT: waveMakerFlap: remove uncompressed entry
ENH: Copy the alphatLiquid value to alphatVapour for boiling regimes.
When using correlations for the boiling regimes, the phases next to
the wall are not relevant to them; the alphat is therefore copied
from the liquid alphat.
Only in the sub-cooling RPI model is the partition of heat flux
between vapour and liquid considered.
Calculates propeller performance and wake field properties.
Controlled by executeControl:
- Propeller performance
- Thrust coefficient, Kt
- Torque coefficient, 10*Kq
- Advance coefficient, J
- Open water efficiency, etaO
- Written to postProcessing/<name>/<time>/propellerPerformance.dat
Controlled by writeControl:
- Wake field text file
- Wake: 1 - UzMean/URef
- Velocity in cylindrical coordinates at xyz locations
- Written to postProcessing/<name>/<time>/wake.dat
- Axial wake field text file
- 1 - Uz/URef at r/R and angle
- Written to postProcessing/<name>/<time>/axialWake.dat
- Velocity surface
- Written to postProcessing/<name>/surfaces/<time>/disk.<fileType>
Usage
Example of function object specification:
\verbatim
propellerInfo1
{
type propellerInfo;
libs (forces);
writeControl writeTime;
patches ("propeller.*");
URef 5; // Function1 type; 'constant' form shown here
rho rhoInf; // incompressible
rhoInf 1.2;
// Optionally write propeller performance data
writePropellerPerformance yes;
// Propeller data:
// Radius
radius 0.1;
rotationMode specified; // specified | MRF
// rotationMode = specified:
origin (0 -0.1 0);
n 25.15;
axis (0 1 0);
// Optional reference direction for angle (alpha) = 0
alphaAxis (1 0 0);
//// rotationMode = mrf
//// MRF MRFZoneName;
//// (origin, n and axis retrieved from MRF model)
// Optionally write wake text files
// Note: controlled by writeControl
writeWakeFields yes;
// Sample plane (disk) properties
// Note: controlled by writeControl
sampleDisk
{
surfaceWriter vtk;
r1 0.05;
r2 0.2;
nTheta 36;
nRadial 10;
interpolationScheme cellPoint;
errorOnPointNotFound false;
}
}
\endverbatim
Where the entries comprise:
\table
Property | Description | Required | Deflt value
type | Type name: propellerInfo | yes |
log | Write to standard output | no | no
patches | Patches included in the forces calculation | yes |
p | Pressure field name | no | p
U | Velocity field name | no | U
rho | Density field name | no | rho
URef | Reference velocity | yes |
rotationMode | Rotation mode (see below) | yes |
origin | Sample disk centre | no* |
n | Revolutions per second | no* |
axis | Propeller axis | no* |
alphaAxis | Axis that defines alpha=0 dir | no |
MRF | Name of MRF zone | no* |
originOffset | Origin offset for MRF mode | no | (0 0 0)
writePropellerPerformance| Write propeller performance text file | yes |
writeWakeFields | Write wake field text files | yes |
surfaceWriter | Sample disk surface writer | no* |
r1 | Sample disk inner radius | no | 0
r2 | Sample disk outer radius | no* |
nTheta | Divisions in theta direction | no* |
nRadial | Divisions in radial direction | no* |
interpolationScheme | Sampling interpolation scheme | no* | cell
\endtable
Note
- URef is a scalar Function1 type, i.e. supports constant, table, lookup values
- rotationMode is used to set the origin, axis and revolutions per second
- if set to 'specified' all 3 entries are required
- note: origin is the sample disk origin
- if set to 'MRF' only the MRF entry is required
- to move the sample disk away from the MRF origin, use the originOffset
- if writePropellerPerformance is set to on|true:
- propellerPerformance text file will be written
- if writeWakeFields is set to on|true:
- wake and axialWake text files will be written
- if the surfaceWriter entry is set, the sample disk surface will be written
- extents set according to the r1 and r2 entries
- discretised according to the nTheta and nRadial entries
- provides a simple means of defining/modifying fields. For example,
```
<name1>
{
type exprField;
libs (fieldFunctionObjects);
field pTotal;
expression "p + 0.5*(rho*magSqr(U))";
dimensions [ Pa ];
}
```
It is also possible to modify an existing field.
For example, to modify the previous one:
```
<name2>
{
type exprField;
libs (fieldFunctionObjects);
field pTotal;
action modify;
// Static pressure only in these regions
fieldMask
#{
(mag(pos()) < 0.05) && (pos().y() > 0)
|| cellZone(inlet)
#};
expression "p";
}
```
To use as a simple post-process calculator, simply avoid storing the
result and only generate on write:
```
<name2>
{
store false;
executionControl none;
writeControl writeTime;
...
}
```
- literal lookups only for expression strings
- code reduction for setExprFields.
- changed keyword "condition" to "fieldMask" (option -field-mask).
This is a better description of its purpose and avoids possible
naming ambiguities with functionObject triggers (for example)
if we apply similar syntax elsewhere.
BUG: erroneous check in volumeExpr::parseDriver::isResultType()
- not triggered since this method is not used anywhere
(may remove in future version)
Based on:
Cao, L., Sun, F., Chen, T., Tang, Y., & Liao, D. (2018).
Quantitative prediction of oxide inclusion defects inside
the casting and on the walls during cast-filling processes.
International Journal of Heat and Mass Transfer, 119, 614-623.
DOI:10.1016/j.ijheatmasstransfer.2017.11.127
Co-authored-by: Kutalmis Bercin <kutalmis.bercin@esi-group.com>
- this refines commit c233961d45, which added prefix scoping.
Default is now off (v2106 behaviour).
The 'useNamePrefix' keyword can be specified on a per function basis
or at the top-level of "functions".
```
functions
{
errors warn;
useNamePrefix true;
func1
{
type ...;
useNamePrefix false;
}
func2
{
type ...;
// Uses current default for useNamePrefix
}
}
```
- at the moment there is no significant difference since FieldBase is
essentially just a refCount anyhow, but changing the inheritance
ensures that reinterpret casting from SubField -> Field will
continue to work if FieldBase is changed in the future.
A Helmholtz-like filter is applied to the original field of sensitivity
derivatives. The corresponding PDE is solved on the sensitivity patches,
using the finite area infrastructure. A smoothing radius is needed,
which is computed based on the average 'length' of the boundary faces,
if not provided by the user explicitly.
If an faMesh is provided, it will be used; otherwise it will be created
on the fly based on either an faMeshDefinition dictionary in system or
one constructed internally based on the sensitivity patches.
Surface gradient scheme with under-/over-relaxed
full or limited explicit non-orthogonal correction.
A minimal example by using system/fvSchemes:
snGradSchemes
{
snGrad(<term>) relaxed;
}
and by using system/fvSolution:
relaxationFactors
{
fields
{
snGrad(<term>) <relaxation factor>;
}
}
A second-order gradient scheme using face-interpolation,
Gauss' theorem and iterative skew correction.
Minimal example by using system/fvSchemes:
gradSchemes
{
grad(<term>) iterativeGauss <interpolation scheme> <number of iters>;
}
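For instance, an illustrative concrete entry following the syntax above, using linear interpolation and three skew-correction iterations:
```
gradSchemes
{
    grad(U)         iterativeGauss linear 3;
}
```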
- fix overly aggressive match in the API value
- allow `INTELMPI*` generic value, this can be used to specify something
like INTELMPI_custom and populate the corresponding wmake rule
manually
STYLE: mention FOAM_BUILDROOT in wmake -help-full output
STYLE: adjust openfoam shell session welcome information
- adjust internal variable names to reduce collision potential
- improve handling of openfoam -etc=...
Description
Writes point data in glTF v2 format
Two files are generated:
- filename.bin : a binary file containing all scene entities
- filename.gltf : a JSON file that ties fields to the binary data
The output can contain both geometry and fields, with additional support
for colours using a user-supplied colour map, and animation of particle
tracks.
Controls are provided via the optional formatOptions dictionary.
For non-particle track data:
\verbatim
formatOptions
{
// Apply colours flag (yes | no ) [optional]
colours yes;
// List of options per field
fieldInfo
{
p
{
// Colour map [optional]
colourMap <colourMap>;
// Colour map minimum and maximum limits [optional]
// Uses field min and max if not specified
min 0;
max 1;
// Alpha channel [optional] (uniform | field)
alpha uniform;
alphaValue 0.5;
//alpha field;
//alphaField T;
//normalise yes;
}
}
}
\endverbatim
For particle tracks:
\verbatim
formatOptions
{
// Apply colours flag (yes | no) [optional]
colours yes;
// Animate tracks (yes | no) [optional]
animate yes;
// Animation properties [optional]
animationInfo
{
// Colour map [optional]
colourMap <colourMap>;
// Colour [optional] (uniform | field)
colour uniform;
colourValue (1 0 0); // RGB in range [0-1]
//colour field;
//colourField d;
// Colour map minimum and maximum limits [optional]
// Note: for colour = field option
// Uses field min and max if not specified
min 0;
max 1;
// Alpha channel [optional] (uniform | field)
alpha uniform;
alphaValue 0.5;
//alpha field;
//alphaField T;
//normalise yes;
}
}
\endverbatim
Note
When writing particle animations, the particle field and colour properties
correspond to initial particle state (first data point) and cannot be
animated (limitation of the file format).
For more information on the specification see
https://www.khronos.org/registry/glTF/
The utility will now add field data to all tracks (the previous version
only created the geometry).
The new 'fields' entry can be used to output specific fields.
Example
cloud reactingCloud1;
sampleFrequency 1;
maxPositions 1000000;
fields (d U); // includes wildcard support
STYLE: minor typo fix
- specify any of these
./Allwmake -build-root=...
wmake -build-root=...
FOAM_BUILDROOT=... wmake
these specify an alternative root where build artifacts are to land.
Currently only used as an alternative for the 'build/' hierarchy
since the 'platforms/' target normally includes inputs as well.
Possible use:
```
(
export WM_MPLIB="%{foam_mplib}"
export FOAM_MPI="%{foam_mpi}"
export MPI_ARCH_PATH="%{mpi_prefix}"
export FOAM_BUILDROOT=/tmp/mpibuild
export FOAM_MPI_LIBBIN="$FOAM_BUILDROOT/platforms/$WM_OPTIONS/lib/$FOAM_MPI"
src/Pstream/Allwmake-mpi
)
```
- exposed by the new embedded function handling.
Requires local copies of dictionary content instead
(similar to coded BCs handling)
BUG: incorrect formatting for expression function output
ENH: simpler copyDict version taking wordList instead of wordRes
- corresponds to the most common use case at the moment
ENH: expression string writeEntry method
- write as verbatim for better readability
- this revises the changes made in 95cd8ee75c to replace the
SFINAE-type of handling of string hashes with direct definitions.
This places a bit more burden on the developer if creating hashable
classes derived from std::string or variants of Foam::string, but
improves reliability when linking.
STYLE: drop template key defaulting from HashSet
- this was never used and `HashSet<>` is much less transparent
than writing `HashSet<word>` or `wordHashSet`
- Generic thermophysical properties class for a liquid in which the
functions and coefficients for each property are run-time selected.
Code adapted from openfoam.org
- had lookups into the merge-point map instead of
determining/remapping the duplicate points directly.
The result was a jumble of face/point addressing.
STYLE: additional debug/verbosity comment for mergePoints
- marks whether the value is considered to be independent of 'x'.
Propagate into PatchFunction1 instead of ad hoc checks there.
- adjust method name in PatchFunction1 to 'whichDb()' to reflect
final changes in Function1 method names.
ENH: add a Function1 'none' placeholder function
- This is principally useful for interfaces that expect a Function1
but where it is not necessarily used by a particular submodel.
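A sketch of such a placeholder entry (the keyword name is illustrative):
```
scaleFactor     none;   // placeholder Function1; not evaluated by this submodel
```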
TUT: update Function1 creation to use objectRegistry
- allows an additional HashTable of pointers to reference external
content which is not otherwise directly available via an
objectRegistry.
This could typically be used to provide a function-local "rho"
to the expression evaluation.
- for cell quantities, these evaluate on the faceCells associated with
that patch to produce a field of true/false values
- for face quantities, these simply correspond to the mesh faces
associated with that patch to produce a field of true/false values
- similar idea to swak timelines/lookuptables but combined together
and based on Function1 for more flexibility.
Specified as 'functions<scalar>' or 'functions<vector>'.
For example,
functions<scalar>
{
intakeType table ((0 0) (10 1.2));
p_inlet
{
type sine;
frequency 3000;
scale 50;
level 101325;
}
}
These can be referenced in the expressions as a nullary function or a
unary function.
Within the parser, the names are prefixed with "fn:" (function).
It is thus possible to define "fn:sin()" that is distinct from
the builtin "sin()" function.
* A nullary call uses time value
- Eg, fn:p_inlet()
* A unary call acts as a remapper function.
- Eg, fn:intakeType(6.25)
- previously simply reused the scan token, which works fine for
non-nested tokenizations but becomes too fragile with nesting.
Now changed to use tagged unions that can be copied about
and still retain some rudimentary knowledge of their types,
which can be manually triggered with a destroy() call.
- provide an 'identifier' non-terminal as an additional catch
to avoid potential leakage on parsing failure.
- adjust lemon rules and infrastructure:
- use %token to predefine standard tokens.
Will reduce some noise on the generated headers by retaining the
order on the initial token names.
- Define BIT_NOT, internal token rename NOT -> LNOT
- handle non-terminal vector values.
Support vector::x, vector::y and vector::z constants
- permit fieldExpr access to time().
Probably not usable or useful for an '#eval' expression,
but useful for a Function1.
- provisioning for hooks into function calls. Establishes token
names for next commit(s).
Returns a 0/1 value corresponding to function object trigger levels.
Usage:
\verbatim
<entryName> functionObjectTrigger;
<entryName>Coeffs
{
triggers (1 3 5);
defaultValue false; // Default when no triggers activated
}
\endverbatim
ENH: add reset() method for Constant Function1
ENH: allow forced change of trigger index
- trigger indices normally only increase,
but this can now optionally be overridden
Description
Function1 wrapper that maps the input value prior to it being used by
another Function1.
Example usage for limiting a polynomial:
\verbatim
<entryName>
{
type inputValueMapper;
mode minMax;
min 0.4;
max 1.4;
value polynomial
(
(5 1)
(-2 2)
(-2 3)
(1 4)
);
}
\endverbatim
Here the return value will be:
- poly(0.4) for x <= 0.4;
- poly(1.4) for x >= 1.4; and
- poly(x) for 0.4 < x < 1.4.
Example usage for supplying a patch mass flux for a table lookup:
\verbatim
<entryName>
{
type inputValueMapper;
mode function;
function
{
type functionObjectValue;
functionObject surfaceFieldValue1;
functionObjectResult sum(outlet,phi);
}
value
{
type table;
file "<system>/fanCurve.txt";
}
}
\endverbatim
Where:
\table
Property | Description | Required
mode | Mapping mode (see below) | yes
function | Mapping Function1 | no*
min | Minimum input value | no*
max | Maximum input value | no*
value | Function of type Function1<Type> | yes
\endtable
Mapping modes include
- none : the input value is simply passed to the 'value' Function1
- function : the input value is passed through the 'function' Function1
before being passed to the 'value' Function1
- minMax : limits the input value to 'min' and 'max' values before being
passed to the 'value' Function1
Note
Replaces the LimitRange Function1 (v2106 and earlier)
Returns a value retrieved from a function object result.
Usage:
<entryName> functionObjectValue;
<entryName>Coeffs
{
functionObject <name>;
functionObjectResult <function object result field name>;
}
Function1 can now be created with an object registry, e.g. time or mesh
database. This enables access to other stored objects, e.g. fields,
dictionaries etc. making Function1 much more flexible.
Note: will allow TimeFunction1 to be deprecated
- created new functionObjects::properties class derived from IOdictionary
- replaces raw state IOdictionary owned by functionObjectList
- state dictionary access/manipulators moved from stateFunctionObject
- stateFunctionObject now acts as a light wrapper around
functionObjects::properties
- updated dependent code
- more closely reflect what the binaries report
- report the installation path
- change PS1 case/separator to roughly correspond to package names
STYLE: adjust README to mention upcoming v2112
- use `#word` to concatenate, expand content with the resulting string
being treated as a word token. Can be used in dictionary or
primitive context.
In dictionary context, it fills the gap for constructing dictionary
names on-the-fly. For example,
```
#word "some_prefix_solverInfo_${application}"
{
type solverInfo;
libs (utilityFunctionObjects);
...
}
```
The '#word' directive will automatically squeeze out non-word
characters. In the block content form, it will also strip out
comments. This means that this type of content should also work:
```
#word {
some_prefix_solverInfo
/* Appended with application name (if defined) */
${application:+_} // Use '_' separator
${application} // The application
}
{
type solverInfo;
libs (utilityFunctionObjects);
...
}
```
This is admittedly quite ugly, but illustrates its capabilities.
- use `#message` to report expanded string content to stderr.
For example,
```
T
{
solver PBiCG;
preconditioner DILU;
tolerance 1e-10;
relTol 0;
#message "using solver: $solver"
}
```
Only reports on the master node.
- use FACE_DATA (was SURFACE_DATA) for similarity with polySurface
ENH: add expression value enumerations and traits
- simple enumeration of standard types (bool, label, scalar, vector)
that can be used as a value type-code for internal bookkeeping.
GIT: relocate pTraits into general traits/ directory
- releases ownership of the pointer. A no-op (and returns nullptr)
for references.
Naming consistent with unique_ptr and autoPtr.
DOC: adjust wording for memory-related classes
- add is_const() method for tmp, refPtr.
Drop the (unused and confusing-looking) isTmp method from refPtr
in favour of is_pointer() or movable() checks
ENH: noexcept for some pTraits methods, remove redundant 'inline'
- test for const first for tmp/refPtr (simpler logic)
- previously had codeAddSup used for both incompressible and
compressible source terms. However, it was not actually possible to
use it for compressible sources since any references to the 'rho'
parameter would cause a compilation error for the incompressible case.
Added 'codeAddSupRho' to distinguish the compressible case.
User must supply one or both of them on input.
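A hedged dictionary sketch of how the two hooks can be supplied in a coded fvOption (the source name, selection and field are illustrative; other code entries such as codeCorrect/codeConstrain are omitted for brevity):
```
momentumSource
{
    type            vectorCodedSource;
    name            momentumSource;
    selectionMode   all;
    fields          (U);

    codeAddSup
    #{
        // incompressible form: no rho parameter available
    #};

    codeAddSupRho
    #{
        // compressible form: the rho parameter is available here
    #};
}
```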
- decomposePar: -no-fields to suppress decomposition of fields
- makeFaMesh: -no-decompose to suppress creation of *ProcAddressing
and fields, -no-fields to suppress decomposition of fields only
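For example (illustrative invocations):
```
decomposePar -no-fields      # decompose the mesh only, skip the fields
makeFaMesh -no-decompose     # no *ProcAddressing or field decomposition
makeFaMesh -no-fields        # skip field decomposition only
```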
- switch from default topology merge to point merge if degenerate
blocks are detected. This should alleviate the problems noted in
#1862.
NB: this detection only works for blocks with duplicate vertex
indices, not ones with geometrically duplicate points.
ENH: add patch block/face summary in blockMesh generation
- add blockMesh -verbose option to override the static or dictionary
settings. The -verbose option can be used multiple times to increase
the verbosity.
ENH: extend hexCell handling with more cellShape-type methods
- allows better reuse in blockMesh.
Remove blockMesh-local hex edge definitions that shadowed the
hexCell values.
ENH: simplify some of the block-edge internals
- similar to -dry-run handling, can be interrogated from argList,
which makes it simpler to add into utilities.
- support multiple uses of -dry-run and -verbose to increase the
level. For example, could have
someApplication -verbose -verbose
and inside of the application:
if (args.verbose() > 2) ...
BUG: error with empty distributed roots specification (fixes #2196)
- previously used the size of the distributed roots to transmit whether the
case was running in distributed mode, but this behaves rather poorly
with bad input. Specifically, the following questionable setup:
distributed true;
roots ( /*none*/ );
Now transmit the ParRunControl distributed() value instead,
and also emit a gentle warning for the user:
WARNING: running distributed but did not specify roots!
COMP: implicit cast scope name to C++-string in IOobject::scopedName
- handles 'const char*' and allows a check for an empty scope name
COMP: avoid potential name conflict in local function (Istream)
- reportedly some resolution issues (unconfirmed) with Fujitsu clang