The subtract option in mapFieldsPar was not implemented correctly, and the
significant complexity required in meshToMesh to support it created an
unwarranted maintenance overhead. The equivalent functionality is now provided
by the more flexible, convenient and simpler subtract functionObject.
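For example, the difference of two fields can be computed by configuring the
subtract functionObject in the functions sub-dictionary of system/controlDict;
the sketch below is illustrative only, and the function name pPrime and the
fields p and pMean are placeholders:

functions
{
    pPrime
    {
        type            subtract;
        libs            ("libfieldFunctionObjects.so");

        fields          (p pMean);
    }
}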
When snappyHexMesh is run in parallel it re-balances the mesh during refinement
and layer addition by redistribution, which requires a decomposition method
that operates in parallel, e.g. hierarchical or ptscotch. decomposePar uses a
decomposition method which operates in serial, e.g. hierarchical but NOT
ptscotch. Previously, running decomposePar followed by snappyHexMesh in
parallel required changing the method specified in decomposeParDict; this is
now avoided by specifying the decomposition and distribution methods
separately, e.g. in the incompressible/simpleFoam/motorBike case:
numberOfSubdomains 6;

decomposer      hierarchical;
distributor     ptscotch;

hierarchicalCoeffs
{
    n           (3 2 1);
    order       xyz;
}
The distributor entry is also used for run-time mesh redistribution; e.g. in
the multiphase/interFoam/RAS/floatingObject case, redistribution for
load-balancing is enabled in constant/dynamicMeshDict:
distributor
{
    type            distributor;
    libs            ("libfvMeshDistributors.so");

    redistributionInterval 10;
}
which uses the distributor specified in system/decomposeParDict:
distributor hierarchical;
This rationalisation provides the structure for development of mesh
redistribution and load-balancing.
Mesh motion and topology change are now combinable, run-time selectable options
within fvMesh, replacing the restrictive dynamicFvMesh which supported only
motion OR topology change.
All solvers which instantiated a dynamicFvMesh now instantiate an fvMesh which
reads the optional constant/dynamicMeshDict to construct an fvMeshMover and/or
an fvMeshTopoChanger. These are specified within the optional mover and
topoChanger sub-dictionaries of dynamicMeshDict.
When the fvMesh is updated, the fvMeshTopoChanger is executed first; it can
change the mesh topology in any way, adding or removing points as required, for
example for automatic mesh refinement/unrefinement, and all registered fields
are mapped onto the updated mesh. The fvMeshMover is then executed, which moves
the points only and calculates the cell volume change and corresponding
mesh-fluxes for conservative moving-mesh transport. If multiple topological
changes or movements are required these would be combined into special
fvMeshMovers and fvMeshTopoChangers which handle the processing of a list of
changes, e.g. solidBodyMotionFunctions::multiMotion.
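For example, several solid-body motions might be combined with multiMotion
along the following lines. This is an illustrative sketch only: the
sub-dictionary names (rotation, heave) and all coefficient values are
placeholders, and the exact placement of the sub-dictionaries should be checked
against the multiMotion source and tutorials:

mover
{
    type            motionSolver;
    libs            ("libfvMeshMovers.so" "libfvMotionSolvers.so");

    motionSolver    solidBody;

    solidBodyMotionFunction multiMotion;

    // Each sub-dictionary specifies one solidBodyMotionFunction;
    // the resulting motions are composed by multiMotion
    rotation
    {
        solidBodyMotionFunction rotatingMotion;
        origin          (0 0 0);
        axis            (0 0 1);
        omega           0.5;        // rad/s
    }

    heave
    {
        solidBodyMotionFunction oscillatingLinearMotion;
        amplitude       (0 0 0.1); // m
        omega           2;         // rad/s
    }
}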
The tutorials/multiphase/interFoam/laminar/sloshingTank3D3DoF case has been
updated to demonstrate this new functionality by combining solid-body motion
with mesh refinement/unrefinement:
/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  dev
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    format      ascii;
    class       dictionary;
    location    "constant";
    object      dynamicMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
mover
{
    type            motionSolver;
    libs            ("libfvMeshMovers.so" "libfvMotionSolvers.so");

    motionSolver    solidBody;

    solidBodyMotionFunction SDA;

    CofG            (0 0 0);
    lamda           50;
    rollAmax        0.2;
    rollAmin        0.1;
    heaveA          4;
    swayA           2.4;
    Q               2;
    Tp              14;
    Tpn             12;
    dTi             0.06;
    dTp             -0.001;
}
topoChanger
{
    type            refiner;
    libs            ("libfvMeshTopoChangers.so");

    // How often to refine
    refineInterval  1;

    // Field to base refinement on
    field           alpha.water;

    // Refine field in between lower..upper
    lowerRefineLevel 0.001;
    upperRefineLevel 0.999;

    // Have slower than 2:1 refinement
    nBufferLayers   1;

    // Refine cells only up to maxRefinement levels
    maxRefinement   1;

    // Stop refinement if maxCells reached
    maxCells        200000;

    // Flux field and corresponding velocity field. Fluxes on changed
    // faces get recalculated by interpolating the velocity. Use 'none'
    // on surfaceScalarFields that do not need to be reinterpolated.
    correctFluxes
    (
        (phi none)
        (nHatf none)
        (rhoPhi none)
        (alphaPhi.water none)
        (meshPhi none)
        (meshPhi_0 none)
        (ghf none)
    );

    // Write the refinement level as a volScalarField
    dumpLevel       true;
}
// ************************************************************************* //
Note that this is currently the only working combination of mesh motion with
topology change within the new framework. Further development is required to
update the set of topology changers so that topology changes with mapping are
separated from the mesh motion, allowing them to be combined with any of the
other movements or topology changes in any manner.
All of the solvers and tutorials have been updated to use the new form of
dynamicMeshDict; backward compatibility was not practical due to the complete
reorganisation of the mesh change structure.
The new 'typeIOobject' can be used to check the existence of and open an object
file, and to read and check the header, without constructing the object.
'typeIOobject' operates in an equivalent and consistent manner to
'regIOobject', but the type information is provided by the template argument
rather than via virtual functions, which would require the derived object to be
constructed, as is the case for 'regIOobject'.
'typeIOobject' replaces the previous separate functions 'typeHeaderOk' and
'typeFilePath' with a single consistent interface.
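For example, the following sketch checks for an optional dictionary and only
constructs it if the header is valid; it assumes the usual IOobject constructor
arguments and the headerOk() query, and the dictionary name 'customProperties'
is a placeholder:

typeIOobject<IOdictionary> dictHeader
(
    "customProperties",     // placeholder name
    runTime.constant(),
    runTime,
    IOobject::MUST_READ,
    IOobject::NO_WRITE
);

// Reads and checks the header against IOdictionary without
// constructing the dictionary
if (dictHeader.headerOk())
{
    IOdictionary properties(dictHeader);
    // ... use properties
}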
Now all path functions in 'IOobject' are either templated on the type or
require a 'globalFile' argument to specify whether the type is case-global,
e.g. 'IOdictionary', or decomposed in parallel, e.g. almost everything else.
The 'global()' and 'globalFile()' virtual functions are now in the
'regIOobject' abstract base-class and overridden as required by derived
classes. The path functions which use 'global()' and 'globalFile()' to
differentiate between global and processor-local objects are now also in
'regIOobject' rather than 'IOobject', to ensure the path returned is absolutely
consistent with the type.
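As a sketch of the pattern (mirroring what case-global classes such as
'IOdictionary' do), a derived class declares itself case-global by overriding
the virtual function:

    //- Return true: this object is case-global, i.e. read from and
    //  written to the undecomposed case directory rather than the
    //  processor directories
    virtual bool global() const
    {
        return true;
    }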
Unfortunately there is still potential for unexpected IO behaviour,
inconsistent with the global/local nature of the type, because the
'fileOperation' classes search the processor directory for case-global objects
before searching the case directory. This approach appears to be a work-around
for incomplete integration with and rationalisation of 'IOobject', but with the
changes above it is no longer necessary. However, this "up" searching is baked
in at a low level and mixed up with various complex ways of picking the
processor directory name out of the object path, so it will take some
unravelling; this work will be undertaken as time allows.
for:
    db/functionObjects/timeControl/timeControl.H:   timeControls
    primitives/bools/Switch/Switch.H:               class switchType
    primitives/strings/fileName/fileName.H:         fileType
    primitives/strings/wordRe/wordRe.H:             compOption
Tracking data classes are no longer templated on the derived cloud type.
The advantage of this is that they can now be passed to sub models. This
should allow continuous phase data to be removed from the parcel
classes. The disadvantage is that every function which once took a
templated TrackData argument now needs an additional TrackCloudType
argument in order to perform the necessary down-casting.
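As a sketch of the resulting member-function pattern (the exact signatures vary
between the parcel classes), a tracking function now receives the cloud
alongside the tracking data:

    //- Move the particle over the given track time; the cloud is passed
    //  explicitly so that it can be forwarded to sub-models and used for
    //  the necessary down-casting
    template<class TrackCloudType>
    bool move(TrackCloudType& cloud, trackingData& td, const scalar trackTime);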
When an OpenFOAM simulation runs in parallel, the data for decomposed fields and
mesh(es) has historically been stored in multiple files within separate
directories for each processor. Processor directories are named 'processorN',
where N is the processor number.
This commit introduces an alternative "collated" file format where the data for
each decomposed field (and mesh) is collated into a single file, which is
written and read on the master processor. The files are stored in a single
directory named 'processors'.
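As an illustration of the two layouts (the field name p and the time directory
0.1 are arbitrary):

uncollated (default), one directory and one file per field per processor:

    processor0/0.1/p
    processor1/0.1/p
    ...

collated, a single file per field in a single 'processors' directory:

    processors/0.1/p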
The new format produces significantly fewer files - one per field, instead of N
per field. For large parallel cases, this avoids the restriction on the number
of open files imposed by the operating system limits.
The file writing can be threaded, allowing the simulation to continue running
while the data is being written to file. NFS (Network File System) is not
needed when using the collated format and, additionally, there is an option
to run without NFS with the original uncollated approach, known as
"masterUncollated".
The controls for the file handling are in the OptimisationSwitches of
etc/controlDict:
OptimisationSwitches
{
    ...

    //- Parallel IO file handler
    //  uncollated (default), collated or masterUncollated
    fileHandler uncollated;

    //- collated: thread buffer size for queued file writes.
    //  If set to 0 or not sufficient for the file size threading is not used.
    //  Default: 2e9
    maxThreadFileBufferSize 2e9;

    //- masterUncollated: non-blocking buffer size.
    //  If the file exceeds this buffer size scheduled transfer is used.
    //  Default: 2e9
    maxMasterFileBufferSize 2e9;
}
When using the collated file handling, memory is allocated for the data in the
thread. maxThreadFileBufferSize sets the maximum size of memory in bytes that
is allocated. If the data exceeds this size, the write does not use threading.
When using the masterUncollated file handling, non-blocking MPI communication
requires a sufficiently large memory buffer on the master node.
maxMasterFileBufferSize sets the maximum size in bytes of the buffer. If the
data exceeds this size, the system uses scheduled communication.
The installation defaults for the fileHandler choice, maxThreadFileBufferSize
and maxMasterFileBufferSize (set in etc/controlDict) can be overridden within
the case controlDict file, like other parameters. Additionally the fileHandler
can be set by:
- the "-fileHandler" command line argument;
- a FOAM_FILEHANDLER environment variable.
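For example, to select the collated handler for a parallel run (the solver and
processor count here are illustrative only), either on the command line:

    mpirun -np 6 interFoam -parallel -fileHandler collated

or via the environment:

    export FOAM_FILEHANDLER=collated
    mpirun -np 6 interFoam -parallel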
A foamFormatConvert utility allows users to convert files between the collated
and uncollated formats, e.g.
mpirun -np 2 foamFormatConvert -parallel -fileHandler uncollated
An example case demonstrating the file handling methods is provided in:
$FOAM_TUTORIALS/IO/fileHandling
The work was undertaken by Mattijs Janssens, in collaboration with Henry Weller.
The particle tracking algorithm has been rewritten to function in terms of the
local barycentric coordinates of the current tetrahedron, rather than the
global coordinate system.
Barycentric tracking works on any mesh, irrespective of mesh quality.
Particles do not get "lost", and tracking does not require ad-hoc
"corrections" or "rescues" to function robustly, because the calculation
of particle-face intersections is unambiguous and reproducible, even at
small angles of incidence.
Each particle position is defined by topology (i.e. the decomposed tet
cell it is in) and geometry (i.e. where it is in the cell). No search
operations are needed on restart or reconstruct, unlike when particle
positions are stored in the global coordinate system.
The particle positions file now contains particles' local coordinates
and topology, rather than the global coordinates and cell. This change
to the output format is not backwards compatible. Existing cases with
Lagrangian data will not restart, but they will still run from time
zero without any modification. This change was necessary in order to
guarantee that the loaded particle is valid, and therefore
fundamentally prevent "loss" and "search-failure" type bugs (e.g.,
2517, 2442, 2286, 1836, 1461, 1341, 1097).
The tracking functions have also been converted to function in terms
of displacement, rather than end position. This helps remove floating
point error issues, particularly towards the end of a tracking step.
Wall bounded streamlines have been removed. The implementation proved
incompatible with the new tracking algorithm. ParaView has a surface
LIC plugin which provides equivalent, or better, functionality.
Additionally, bug report <https://bugs.openfoam.org/view.php?id=2517>
is resolved by this change.
This version of mapFields is very inefficient in parallel and does not provide
the -parallelSource or -parallelTarget options, which will need to be
reinstated in the future; alternatively, mapFields could be reverted to the
OpenFOAM-2.2 version.