- combines region-based and proximity-based filtering
proximityRegions (post-filter):
Checks the distance of the resulting faces against the original
search surface. Filters based on the area-weighted distance
of each topologically connected region.
If the area-weighted distance of a region is greater than
\c absProximity, the entire region is rejected (see the example below).
STYLE: 'proximityFaces' as newer synonym for the 'proximity' filter
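Eg, a minimal sampledSurfaces sketch using the new region filter (the
`topology` keyword and the surface/file names are assumptions here;
`absProximity` is the threshold described above):
```
surf1
{
    type            distanceSurface;
    surfaceType     triSurfaceMesh;
    surfaceName     blob.stl;
    distance        0;
    signed          true;

    // reject any connected region whose area-weighted distance
    // from the search surface exceeds absProximity
    topology        proximityRegions;
    absProximity    1e-5;
}
```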
- snGrad, internalField, neighbourField.
Functional use as per swak: "... + internalField(T) ..."
ENH: additional volume/patch expressions
- deltaT()
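Eg, a hedged sketch of how such expressions could appear in an
expression-based boundary condition (assuming the exprFixedValue
condition with its valueExpr entry; field and patch names are placeholders):
```
outlet
{
    type            exprFixedValue;
    value           $internalField;

    // blend patch-internal and coupled-neighbour values;
    // snGrad(...) and deltaT() can be used in the same manner
    valueExpr       "0.5*(internalField(T) + neighbourField(T))";
}
```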
STYLE: rename exprDriverWriter -> fvExprDriverWriter
- the original class name was a misnomer since it holds a reference
to fvExprDriver
BUG: expression faceToPoint/pointToFace definitions were flipped
ENH: refactor expression hierarchy and code style
- handle TimeState reference at the top-level for simpler derivations
- unified internal search parameters (cruft)
- support wordRes for selecting patch names
- ownerPolyPatch specification is now optional, which simplifies input
and also supports a faMesh spanning different patches but with a
single boundary condition.
Alternatively, more granularity can be specified if required:
```
polyMeshPatches  ( "top.*" );

boundary
{
    inlet1
    {
        type                patch;
        ownerPolyPatch      top1;    // <- specific to this portion
        neighbourPolyPatch  inlet;
    }

    inlet2
    {
        type                patch;
        ownerPolyPatch      top2;    // <- specific to this portion
        neighbourPolyPatch  inlet;
    }

    outlet
    {
        type                patch;
        neighbourPolyPatch  outflow;
    }

    bound
    {
        type                symmetry;
        neighbourPolyPatch  bound;
    }
}
```
Prior to the commit, initial residual fields were registered by
the `setResidualField()` function of a linear solver with a field name
prefixed by `residual:`. However, the `solverInfo` FO could only access
initial residual fields prefixed by `initialResidual:`.
Due to this discrepancy, using the `solverInfo` FO with the
`writeResidualFields=true` option resulted in empty residual fields
being written.
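For reference, a typical solverInfo entry exercising this path (a sketch;
the field selection and the entry name are placeholders):
```
solverInfo1
{
    type                    solverInfo;
    libs                    (utilityFunctionObjects);
    fields                  (U p);

    // residual fields are now found and written as expected
    writeResidualFields     true;
}
```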
- ENH: turbulentDFSEMInlet: add normalisation factors for
input Reynolds stresses, mean velocity and integral-length
scales as entries `Uref` and `Lref`.
- ENH: turbulentDFSEMInlet: add scaling factor entries, `scale`
and `m`, to enable users to tune the C1 normalisation coefficient
if need be (see the sketch after this list).
- BUG: turbulentDFSEM: (fixes #1004, #1744, #2089)
- see #2090 for theoretical issues related to the DFSEM method.
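A sketch combining the new entries with the usual inputs (only `Uref`,
`Lref`, `scale` and `m` belong to this change; the remaining entries and
all values are illustrative assumptions):
```
inlet
{
    type            turbulentDFSEMInlet;
    delta           0.1;
    R               <PatchFunction1>;
    U               <PatchFunction1>;
    L               <PatchFunction1>;

    // normalisation of the input R, U and L data
    Uref            1;
    Lref            1;

    // tuning of the C1 normalisation coefficient
    scale           1;
    m               0.5;

    value           uniform (0 0 0);
}
```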
three macros:
- makeParcelCloudFunctionObjects for kinematic parcels
- makeThermoParcelCloudFunctionObjects for thermo parcels
- makeReactingParcelCloudFunctionObjects for reacting parcels
code style and quality improvements
renamed recon::centre to interfaceCentre.{groupName}
renamed recon::normal to interfaceNormal.{groupName}
centre and normal fields are not written by default
- makes it easier to use for local or alternative storage.
Eg,
```
// Reuse a registered field if it exists, otherwise create a new one
tmp<volScalarField> tfld;
tfld.cref(obj.cfindObject<volScalarField>("name"));

if (!tfld)
{
    tfld = volScalarField::New("name", ...);
}
```
- in some cases, additional dictionary inputs are useful for extending
the input parameters or functionality of dynamic coded conditions.
Typically this can be used to provide a simple set of dictionary
inputs that are used to drive specific code, but allows changing the
inputs without causing a recompilation.
Accessed with this type of code:
```
const dictionary& dict = this->codeContext();
```
boundary conditions and function objects:
* specify an additional codeContext dictionary entry:
```
codeContext
{
    ...
}
```
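Eg, a codedFixedValue condition carrying its own context (the condition
name and the codeContext entries are hypothetical):
```
inlet
{
    type            codedFixedValue;
    value           uniform 0;
    name            rampedInlet;

    // user inputs, changeable without recompilation
    codeContext
    {
        rampTime    10;
        maxValue    5;
    }

    code
    #{
        const dictionary& ctx = this->codeContext();
        const scalar rampTime = ctx.get<scalar>("rampTime");
        const scalar maxValue = ctx.get<scalar>("maxValue");
        ...
    #};
}
```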
PatchFunction1:
* The code context dictionary is simply the dictionary used to specify
the PatchFunction1 coefficients.
To replicate persistent data, use local member static data.
Eg,
```
code
#{
    // Persistent (Member) Data
    static autoPtr<Function1<scalar>> baseVel;
    static autoPtr<Function1<vector>> baseDir;
    ...
#}
```
fvOptions:
* currently not applicable
- meshTools include/library for many (most) coded items
- add PatchFunction1 include for coded BCs to provide ready access
to Function1 and PatchFunction1
- resizes to current fieldNames_ size and assigns everything to
false to avoid any "stickiness" if the field ordering changes
between reads.
ENH: additional debugging faOption/fvOption (#2110)
- aids tracing which sources are being used/ignored
- update code style
STYLE: rename CodedSource -> CodedFvSource
- avoid future name clashes with CodedFaSource
- with the changes in vtkCellArray, the legacy files now have
OFFSET, CONNECTIVITY information.
- support reading of both versions.
- continue to generate legacy format 2.0, since this is what
many programs still expect
- consider the neighbour polyPatch addressing on the connecting edge,
even when the neighbouring processor does not have a corresponding
section of the finiteArea mesh.
These "dangling" edges now propagate their real connectivity across.
- A bare-bones reconstructor for finiteArea meshes when processor
meshes are available (in parallel) but an equivalent serial faMesh
is needed for reconstruction or decomposition.
In these situations, a serial version of the faMesh is needed,
but preferably without reconstructing the entire volume mesh.
It uses the finiteVolume faceProcAddressing in addition to
the geometric information available from the underlying polyMesh.
The resulting equivalent faMesh can be used for basic operations,
but caution should be exercised before attempting large operations.
- improved separation of the patch creation logic; it is now
parallel-aware, which allows creation in parallel
- memory-safe use of PtrList for adding patches, with a more generalized
faPatchData helper
- use uindirectPrimitivePatch instead of indirectPrimitivePatch
for internal patch handling.
- align boundary methods with polyMesh equivalents
- system/faMeshDefinition instead of constant/faMesh/faMeshDefinition
as per blockMesh convention. Easier to manage definitions, easier
for cleanup.
- drop inheritance from GeoMesh.
- depending on how the finiteArea is split up across processors,
it is possible that some processors have failed to register
fields in their object registry.
Now ensure that the field names are synchronized in parallel before
attempting a write. Replace locally missing fields with a dummy
zero-sized field.
- refine definition of patch boundary faces to distinguish between
boundaryFaces() and uniqBoundaryFaces().
* boundaryFaces() for edge to face lookup on boundary edges.
* uniqBoundaryFaces() for accessing quantities such as face areas
or eroding an outer layer
ENH: LabelledItem container, replaces unused 'Keyed' container
- method names in alignment with objectHit, pointIndexHit etc.
Top-level name aligns with labelledTri.
This is a partial fix for #2103. If there are no points
extruded for a stand-alone mesh (so not adding to mesh)
it should still include the original patch points. Not
doing so would generate illegal faces (also copiedPatchPoints
would not get set).
* removed internal upper limit on word/string length for parsed input.
- Although it has not caused many problems, there is no reason to
retain these limits.
- simplify some of the internal logic for reading string-like items.
- localize parsers for better separation from the header
- expose new function seekCommentEnd_Cstyle(), as useful
handler of C-style comments
* exclude imbalanced closing ')' from word/variable
- previously the ')' was included in the word/variable, but it makes
more sense to leave it for the parser as the start of the following token.
This prevents content like 'vector (10 20 $zmax);' from being parsed
as '$zmax)' instead of as '$zmax' followed by a ')'.
There is no conceivable reason that the former would actually be
desirable, but it can still be obtained with brace notation: Eg, '${zmax)}'
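Eg, with illustrative values:
```
zmax    10;
pt      vector (10 20 $zmax);    // '$zmax' expands cleanly; ')' closes the vector
// a variable literally named 'zmax)' can still be referenced as ${zmax)}
```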
* consistent handling of ${{ ... }} expressions
- within dictionary content, the following construct was
incorrectly processed:
    value ${{2*sqrt(0.5)}};
complaining about a missing dictionary/env variable "{2*sqrt(0.5)}".
Now trap expressions directly and assign them their own token type
while reading. Later expansion can then be properly passed to
the exprDriver (evalEntry) instead of incorrectly attempting
variable expansion.
Does not alter the use of expressions embedded within other
expansions. Eg, "file${{10*2}}"
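Eg, written out in dictionary form:
```
value   ${{2*sqrt(0.5)}};     // now trapped as an expression; evaluates to 1.41421...
```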
* improve #eval { ... } brace slurping
- the initial implementation of this was rudimentary and simply
grabbed everything until the next '}'. Now continue to grab
content until the braces are properly balanced.
Eg, the content: value #eval{${radius}*2};
would previously have terminated prematurely, with "${radius" as
the expression!
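Eg, with illustrative values:
```
radius      0.05;
diameter    #eval{${radius}*2};    // now captures the full '${radius}*2' expression
```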
NOTE:
both the ${{ expr }} parsed input and the #eval { ... } input
discard C/C++ comments during reading to reduce intermediate
overhead for content that will be discarded before evaluation
anyhow.
* tighten recognition of verbatim strings and expressions.
- the parser was previously sloppy and would have accepted content such
as "# { ..." (for example) as a verbatim string introducer.
Now only accept the parse if no intermediate characters were discarded.
- minor simplification of #if/#endif handling
ENH: improve input robustness with negative-prefixed expansions (#2095)
- especially in blockMeshDict it is useful to negate a value directly.
Eg,
```
xmax 100;
xmin -$xmax;
```
However, this fails since the dictionary expansion is a two-step
process of tokenization followed by expansion. After the expansion
the given input would now be the following:
```
xmax 100;
xmin - 100;
```
and retrieving a scalar value for 'xmin' fails.
Counteract this by being more generous on tokenized input when
attempting to retrieve a label or scalar value.
If a '-' is found where a number is expected, use it to negate the
subsequent value.
The previous solution was to invoke an 'eval':
```
xmax 100;
xmin #eval{-$xmax};
```
which adds additional clutter.