For compatibility with all the mesh and related classes in OpenFOAM, the
'normal' function of the 'triangle', 'triFace' and 'face' classes now returns
the unit normal vector rather than the vector area, which is now provided by
the 'area' function.
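A minimal sketch of the new convention (the exact argument lists are assumed
from the usual face interface, not quoted from the library):

// Sketch (assumed argument lists): normal() returns the unit normal,
// area() returns the vector area whose magnitude is the face area.
#include "face.H"
#include "pointField.H"

using namespace Foam;

void faceGeometry(const face& f, const pointField& points)
{
    const vector n  = f.normal(points);  // unit normal vector
    const vector Sf = f.area(points);    // vector area
    const scalar A  = mag(Sf);           // scalar face area
}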
In early versions of OpenFOAM the scalar limits were simple macro replacements
and the names were capitalized to indicate this. The scalar limits are now
static constants, which is a significant improvement on the use of macros, and
for consistency the names have been changed to camel-case to indicate this and
to improve the readability of the code:
GREAT -> great
ROOTGREAT -> rootGreat
VGREAT -> vGreat
ROOTVGREAT -> rootVGreat
SMALL -> small
ROOTSMALL -> rootSmall
VSMALL -> vSmall
ROOTVSMALL -> rootVSmall
The original capitalized names are still supported but their use is
deprecated.
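A minimal usage sketch (behaviour is unchanged; only the names differ, assuming
the constants are provided by scalar.H as before):

// Sketch: the camel-case constants replace the old macro names.
#include "scalar.H"

using namespace Foam;

bool nearlyEqual(const scalar a, const scalar b)
{
    return mag(a - b) < small;  // previously: mag(a - b) < SMALL
}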
This change tests all edges when breaking strings, not just those
connected to collapsing cells. In rare cases a cell can collapse despite
none of its connected edges being marked as collapsing, because enough
of its points collapse together via other edges.
In the event that matching centroids across a coupled patch pair fails,
we fall back to matching the face point average. The latter can be
obtained more reliably on degenerate faces as the calculation does not
involve division by the face area.
This fallback was already implemented as part of processorPolyPatch.
This change also applies it to the faceCoupleInfo class used by
reconstructParMesh.
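For illustration, the face point average is simply the arithmetic mean of the
face vertices (a hypothetical helper, not the library implementation):

// Hypothetical helper: the point average avoids the area-weighted centroid
// calculation and so remains well defined on degenerate (zero-area) faces.
#include "face.H"
#include "pointField.H"

using namespace Foam;

point facePointAverage(const face& f, const pointField& points)
{
    point sum(Zero);
    forAll(f, fp)
    {
        sum += points[f[fp]];
    }
    return sum/f.size();
}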
When an OpenFOAM simulation runs in parallel, the data for decomposed fields and
mesh(es) has historically been stored in multiple files within separate
directories for each processor. Processor directories are named 'processorN',
where N is the processor number.
This commit introduces an alternative "collated" file format where the data for
each decomposed field (and mesh) is collated into a single file, which is
written and read on the master processor. The files are stored in a single
directory named 'processors'.
The new format produces significantly fewer files - one per field, instead of N
per field. For large parallel cases, this avoids the restriction on the number
of open files imposed by the operating system.
The file writing can be threaded, allowing the simulation to continue running
while the data is being written to file. NFS (Network File System) is not
needed when using the collated format and, additionally, there is an option
to run without NFS with the original uncollated approach, known as
"masterUncollated".
The controls for the file handling are in the OptimisationSwitches of
etc/controlDict:
OptimisationSwitches
{
    ...

    //- Parallel IO file handler
    //  uncollated (default), collated or masterUncollated
    fileHandler uncollated;

    //- collated: thread buffer size for queued file writes.
    //  If set to 0, or not sufficient for the file size, threading is not used.
    //  Default: 2e9
    maxThreadFileBufferSize 2e9;

    //- masterUncollated: non-blocking buffer size.
    //  If the file exceeds this buffer size, scheduled transfer is used.
    //  Default: 2e9
    maxMasterFileBufferSize 2e9;
}
When using the collated file handling, memory is allocated for the data in the
thread. maxThreadFileBufferSize sets the maximum size of memory in bytes that
is allocated. If the data exceeds this size, the write does not use threading.
When using the masterUncollated file handling, non-blocking MPI communication
requires a sufficiently large memory buffer on the master node.
maxMasterFileBufferSize sets the maximum size in bytes of the buffer. If the
data exceeds this size, the system uses scheduled communication.
The installation defaults for the fileHandler choice, maxThreadFileBufferSize
and maxMasterFileBufferSize (set in etc/controlDict) can be overridden in the
case controlDict file, like other parameters. Additionally, the fileHandler
can be set by:
- the "-fileHandler" command line argument;
- a FOAM_FILEHANDLER environment variable.
A foamFormatConvert utility allows users to convert files between the collated
and uncollated formats, e.g.
mpirun -np 2 foamFormatConvert -parallel -fileHandler uncollated
An example case demonstrating the file handling methods is provided in:
$FOAM_TUTORIALS/IO/fileHandling
The work was undertaken by Mattijs Janssens, in collaboration with Henry Weller.
Tet-cutting is now possible with level-sets as well as planes. Removed the
tetPoints class as this wasn't really used anywhere except for the old
tet-cutting routines. Restored tetPointRef.H to be consistent with other
primitive shapes. Re-wrote the tet-overlap mapping in terms of the new cutting.
Particle tracking has been rewritten in terms of the local barycentric
coordinates of the current tetrahedron, rather than the global coordinate
system.
Barycentric tracking works on any mesh, irrespective of mesh quality.
Particles do not get "lost", and tracking does not require ad-hoc
"corrections" or "rescues" to function robustly, because the calculation
of particle-face intersections is unambiguous and reproducible, even at
small angles of incidence.
Each particle position is defined by topology (i.e. the decomposed tet
cell it is in) and geometry (i.e. where it is in the cell). No search
operations are needed on restart or reconstruct, unlike when particle
positions are stored in the global coordinate system.
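For illustration, a particle's barycentric coordinates map back to a global
position as a convex combination of the vertices of its tetrahedron (a
hypothetical helper, not the library implementation):

// Hypothetical helper: map barycentric coordinates (b0, b1, b2, b3), which
// sum to one, back to a global position within the tetrahedron (a, b, c, d).
#include "vector.H"

using namespace Foam;

vector globalPosition
(
    const vector& a,
    const vector& b,
    const vector& c,
    const vector& d,
    const scalar b0,
    const scalar b1,
    const scalar b2,
    const scalar b3
)
{
    return b0*a + b1*b + b2*c + b3*d;
}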
The particle positions file now contains particles' local coordinates
and topology, rather than the global coordinates and cell. This change
to the output format is not backwards compatible. Existing cases with
Lagrangian data will not restart, but they will still run from time
zero without any modification. This change was necessary in order to
guarantee that the loaded particle is valid, and therefore
fundamentally prevent "loss" and "search-failure" type bugs (e.g.,
2517, 2442, 2286, 1836, 1461, 1341, 1097).
The tracking functions have also been converted to function in terms
of displacement, rather than end position. This helps remove floating
point error issues, particularly towards the end of a tracking step.
Wall bounded streamlines have been removed. The implementation proved
incompatible with the new tracking algorithm. ParaView has a surface
LIC plugin which provides equivalent, or better, functionality.
Additionally, bug report <https://bugs.openfoam.org/view.php?id=2517>
is resolved by this change.
Model coefficients may now be specified in an optional "<type>Coeffs"
sub-dictionary or read directly from the parent dictionary; all models have
been updated to this convention except turbulence and lagrangian, which will
also be updated shortly.
For example, in the nonNewtonianIcoFoam offsetCylinder tutorial the viscosity
model coefficients may be specified in the corresponding "<type>Coeffs"
sub-dictionary:
transportModel CrossPowerLaw;
CrossPowerLawCoeffs
{
    nu0         [0 2 -1 0 0 0 0] 0.01;
    nuInf       [0 2 -1 0 0 0 0] 10;
    m           [0 0 1 0 0 0 0] 0.4;
    n           [0 0 0 0 0 0 0] 3;
}

BirdCarreauCoeffs
{
    nu0         [0 2 -1 0 0 0 0] 1e-06;
    nuInf       [0 2 -1 0 0 0 0] 1e-06;
    k           [0 0 1 0 0 0 0] 0;
    n           [0 0 0 0 0 0 0] 1;
}
which allows a quick change between models. Alternatively, the simpler form

transportModel  CrossPowerLaw;

nu0             [0 2 -1 0 0 0 0] 0.01;
nuInf           [0 2 -1 0 0 0 0] 10;
m               [0 0 1 0 0 0 0] 0.4;
n               [0 0 0 0 0 0 0] 3;

may be used if quick switching between models is not required.
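A sketch of how a model might resolve its coefficients under this convention
(the helper and its name are assumptions for illustration, not the actual
library call):

// Assumed helper: return the "<type>Coeffs" sub-dictionary if present,
// otherwise fall back to the parent dictionary so coefficients can be
// specified directly.
#include "dictionary.H"

using namespace Foam;

const dictionary& modelCoeffs
(
    const dictionary& dict,
    const word& modelType
)
{
    const word coeffsName(modelType + "Coeffs");
    return dict.isDict(coeffsName) ? dict.subDict(coeffsName) : dict;
}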
To support this more convenient parameter specification the inconsistent
specification of seedSampleSet in the streamLine and wallBoundedStreamLine
functionObjects had to be corrected from
// Seeding method.
seedSampleSet   uniform;  //cloud; //triSurfaceMeshPointSet;

uniformCoeffs
{
    type        uniform;
    axis        x;  //distance;

    // Note: tracks slightly offset so as not to be on a face
    start       (-1.001 -0.05 0.0011);
    end         (-1.001 -0.05 1.0011);
    nPoints     20;
}
to the simpler
// Seeding method.
seedSampleSet
{
    type        uniform;
    axis        x;  //distance;

    // Note: tracks slightly offset so as not to be on a face
    start       (-1.001 -0.05 0.0011);
    end         (-1.001 -0.05 1.0011);
    nPoints     20;
}
which also supports the "<type>Coeffs" form
// Seeding method.
seedSampleSet
{
    type        uniform;

    uniformCoeffs
    {
        axis        x;  //distance;

        // Note: tracks slightly offset so as not to be on a face
        start       (-1.001 -0.05 0.0011);
        end         (-1.001 -0.05 1.0011);
        nPoints     20;
    }
}
The files relating to hex refinement are written out explicitly by both
snappyHexMesh and dynamicRefineFvMesh and hence should be set to "NO_WRITE"
rather than "AUTO_WRITE" to avoid writing them twice. This change corrects
the handling of the "refinementHistory" file, which should not be written by
snappyHexMesh.
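For illustration, a minimal sketch of registering such an object with NO_WRITE
(the read option and location are assumptions, not the actual snappyHexMesh
code):

// Sketch: an object registered with NO_WRITE is skipped by automatic
// database writes and must be written explicitly, avoiding duplicate output.
#include "fvMesh.H"
#include "IOobject.H"

using namespace Foam;

IOobject refinementHistoryIO(const fvMesh& mesh)
{
    return IOobject
    (
        "refinementHistory",
        mesh.facesInstance(),
        polyMesh::meshSubDir,
        mesh,
        IOobject::READ_IF_PRESENT,
        IOobject::NO_WRITE
    );
}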
The new dynamicMotionSolverListFvMesh dynamic mesh supports a list of motion
solvers applied in sequence; e.g. the motion of two counter-rotating AMI
regions could be defined:
dynamicFvMesh   dynamicMotionSolverListFvMesh;

solvers
(
    rotor1
    {
        solver          solidBody;
        cellZone        rotor1;
        solidBodyMotionFunction rotatingMotion;
        rotatingMotionCoeffs
        {
            origin      (0 0 0);
            axis        (0 0 1);
            omega       6.2832; // rad/s
        }
    }

    rotor2
    {
        solver          solidBody;
        cellZone        rotor2;
        solidBodyMotionFunction rotatingMotion;
        rotatingMotionCoeffs
        {
            origin      (0 0 0);
            axis        (0 0 1);
            omega       -6.2832; // rad/s
        }
    }
);
Any combination of motion solvers may be selected but there is no special
handling of motion interaction; the motions are applied sequentially and
potentially cumulatively.
To support this new general framework the solidBodyMotionFvMesh and
multiSolidBodyMotionFvMesh dynamicFvMeshes have been converted into the
corresponding motionSolvers solidBody and multiSolidBody, and the tutorials
updated to reflect this change; e.g. the motion in the mixerVesselAMI2D
tutorial is now defined thus:
dynamicFvMesh   dynamicMotionSolverFvMesh;

solver          solidBody;

solidBodyCoeffs
{
    cellZone        rotor;
    solidBodyMotionFunction rotatingMotion;
    rotatingMotionCoeffs
    {
        origin      (0 0 0);
        axis        (0 0 1);
        omega       6.2832; // rad/s
    }
}
splitMeshRegions: handle flipping of faces for surface fields
subsetMesh: subset dimensionedFields
decomposePar: use run-time selection of decomposition constraints. Used to
keep cells on particular processors. See the decomposeParDict in
$FOAM_UTILITIES/parallel/decomposePar:
- preserveBaffles: keep baffle faces on same processor
- preserveFaceZones: keep faceZones owner and neighbour on same processor
- preservePatches: keep owner and neighbour on same processor. Note: not
suitable for cyclicAMI since these are not coupled on the patch level
- singleProcessorFaceSets: keep complete faceSet on a single processor
- refinementHistory: keep cells originating from a single cell on the
same processor.
decomposePar: clean up decomposition of refinement data from snappyHexMesh
reconstructPar: reconstruct refinement data (refineHexMesh, snappyHexMesh)
reconstructParMesh: reconstruct refinement data (refineHexMesh, snappyHexMesh)
redistributePar:
- corrected mapping of surfaceFields
- processor patches added in an order consistent with decomposePar
argList: check that slaves are running same version as master
fvMeshSubset: move to dynamicMesh library
fvMeshDistribute:
- support for mapping dimensionedFields
- corrected mapping of surfaceFields
parallel routines: allow parallel running on single processor
Field: support for
- distributed mapping
- mapping with flipping
mapDistribute: support for flipping
AMIInterpolation: avoid constructing localPoints
The GeometricField access functions internalField() and
dimensionedInternalField() have been renamed primitiveField() and
internalField() respectively, with corresponding non-const versions
primitiveFieldRef() and internalFieldRef(). These new names are more
consistent and logical because:
primitiveField():
primitiveFieldRef():
    Provides low-level access to the Field<Type> (primitive field)
    without dimension or mesh-consistency checking. This should only be
    used in low-level functions where dimensional consistency is
    ensured by careful programming and computational efficiency is
    paramount.

internalField():
internalFieldRef():
    Provides access to the DimensionedField<Type, GeoMesh> of values on
    the internal mesh-type for which the GeometricField is defined and
    supports dimension checking and mesh-consistency checking.
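A brief sketch of the two access levels on a volScalarField (the field name is
an assumption for illustration):

// Sketch: the two levels of access described above, on a field p.
#include "volFields.H"

using namespace Foam;

void accessExample(volScalarField& p)
{
    // Low-level values only, no dimension or mesh-consistency checking
    const scalarField& rawValues = p.primitiveField();
    scalarField& rawValuesRef = p.primitiveFieldRef();

    // Dimensioned internal field, with dimension and mesh-consistency checking
    const volScalarField::Internal& dimValues = p.internalField();
}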
In order to simplify expressions involving the dimensioned internal field it
is preferable to use a simpler access convention. Given that
GeometricField is derived from DimensionedField it is simply a matter of
de-referencing this underlying type, unlike the boundary field which is
peripheral information. For consistency with the new convention in
"tmp", "dimensionedInternalFieldRef()" has been renamed "ref()".
Non-const access to the internal field is now obtained from a specifically
named access function, consistent with the new names for non-const access
to the boundary field, boundaryFieldRef(), and the dimensioned internal
field, dimensionedInternalFieldRef().
See also commit a4e2afa4b3
When the GeometricBoundaryField template class was originally written it
was a separate class in the Foam namespace rather than a sub-class of
GeometricField as it is now. Without loss of clarity, and to simplify
code which accesses the boundary field of GeometricFields, it is better
that GeometricBoundaryField be renamed Boundary, for consistency with the
new naming convention for the type of the dimensioned internal field:
Internal, see commit a25a449c9e.
This is a very simple text substitution change which can be applied to
any code which compiles with the OpenFOAM-dev libraries.
Given that the type of the dimensioned internal field is encapsulated in
the GeometricField class, the name need not include "Field"; the type
name is "Internal", so
volScalarField::DimensionedInternalField -> volScalarField::Internal
In addition to the ".dimensionedInternalField()" access function, the
simpler "()" de-reference operator is also provided to greatly simplify
FV equation source term expressions which need not evaluate boundary
conditions. To demonstrate this, kEpsilon.C has been updated to use
dimensioned internal field expressions in the k and epsilon equation
source terms.
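An illustrative sketch of such a source term (the argument names are
assumptions; this is not the actual kEpsilon.C source):

// Sketch: the "()" de-reference operator returns the dimensioned internal
// field, so this source term never evaluates boundary conditions.
#include "fvCFD.H"

tmp<volScalarField::Internal> epsilonSource
(
    const dimensionedScalar& C1,
    const volScalarField::Internal& G,  // turbulence production (assumed)
    const volScalarField& k,
    const volScalarField& epsilon
)
{
    return C1*G*epsilon()/k();
}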
The motion of the bodies is integrated using the rigidBodyDynamics
library with joints, restraints and external forces.
The mesh-motion is interpolated using septernion averaging.
This development is sponsored by Carnegie Wave Energy Ltd.