Geometric point merging has an inherent chance of failure that occurs
when a mesh contains valid distinct points that are closer together than
the supplied tolerance. It is beneficial to avoid such merging whenever
possible.
reconstructParMesh no longer needs explicit point merging. Points may be
temporarily duplicated when processor meshes which share points and edges,
but not faces, are combined. Ultimately, however, reconstructParMesh
reconstructs the entire mesh, so everything eventually becomes
face-connected and all point duplications are resolved.
fvMeshDistribute requires point merging, as the entire mesh is not
constructed. However, since 5d4c8f5d, this process has been purely
topological and has not relied on any of the geometric merging performed
by the code it uses.
As such, all geometric point merging operations and tolerances have been
removed from these two implementations, as well as from the lower-level
code in faceCoupleInfo and polyMeshAdder. faceCoupleInfo has also had
support for face and edge splits removed, as this was not being used. This
change improves the robustness of both reconstruction and redistribution
and greatly reduces the total amount of code involved.
The only geometric tolerance-based matching still being performed by
either of these processes is as a result of coupled patch ordering in
fvMeshDistribute. It is possible that this is not necessary either
(though at present coupled patch ordering is certainly needed
elsewhere). This warrants further investigation.
For many information and diagnostic messages the absolute path of the object is
not required and the local path relative to the current case is sufficient; the
new localObjectPath() member function of IOobject provides a convenient way of
printing this.
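A minimal sketch of how this might be used when reporting on an object
(assuming an IOobject named 'io'; the snippet is illustrative, not taken
from the library):

    // Report the object's location relative to the current case rather
    // than its absolute path
    Info<< "Reading " << io.name()
        << " from " << io.localObjectPath() << endl;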
The calculation and input/output of transformations has been rewritten
for all coupled patches. This replaces multiple duplicated, inconsistent
and incomplete implementations of transformation handling which were
spread across the different coupled patch types.
Transformations are now calculated or specified once, typically during
mesh construction or manipulation, and are written out with the boundary
data. They are never re-calculated. Mesh changes should not change the
transformation across a coupled interface; to do so would invalidate the
transformation.
Transformations are now calculated using integral properties of the
patches. This is more numerically stable than the previous methods, which
operated in terms of individual faces. The new routines are also able
to automatically calculate non-zero centres of rotation.
The user input of transformations is backwards compatible, and permits
the user to manually specify varying amounts of the transformation
geometry. Anything left unspecified gets automatically computed from the
patch geometry. Supported specifications are:
1) No specification. Transformations on cyclics are automatically
generated, and cyclicAMI-type patches assume no transformation. For
example (in system/blockMeshDict):
    cyclicLeft
    {
        type cyclic;
        neighbourPatch cyclicRight;
        faces ((0 1 2 3));
    }

    cyclicRight
    {
        type cyclic;
        neighbourPatch cyclicLeft;
        faces ((4 5 6 7));
    }
2) Partial specification. The type of transformation is specified
by the user, as well as the coordinate system if the transform is
rotational. The rotation angle or separation vector is still
automatically generated. This form is useful as the signs of the
angle and separation are opposite on different sides of an interface
and can be difficult to specify correctly. For example:
    cyclicLeft
    {
        type cyclic;
        neighbourPatch cyclicRight;
        transformType translational;
        faces ((0 1 2 3));
    }

    cyclicRight
    {
        type cyclic;
        neighbourPatch cyclicLeft;
        transformType translational;
        faces ((4 5 6 7));
    }

    cyclicAMILeft
    {
        type cyclicAMI;
        neighbourPatch cyclicAMIRight;
        transformType rotational;
        rotationAxis (0 0 1);
        rotationCentre (0.05 -0.01 0);
        faces ((8 9 10 11));
    }

    cyclicAMIRight
    {
        type cyclicAMI;
        neighbourPatch cyclicAMILeft;
        transformType rotational;
        rotationAxis (0 0 1);
        rotationCentre (0.05 -0.01 0);
        faces ((12 13 14 15));
    }
3) Full specification. All parameters of the transformation are
given. For example:
    cyclicLeft
    {
        type cyclic;
        neighbourPatch cyclicRight;
        transformType translational;
        separation (-0.01 0 0);
        faces ((0 1 2 3));
    }

    cyclicRight
    {
        type cyclic;
        neighbourPatch cyclicLeft;
        transformType translational;
        separation (0.01 0 0);
        faces ((4 5 6 7));
    }

    cyclicAMILeft
    {
        type cyclicAMI;
        neighbourPatch cyclicAMIRight;
        transformType rotational;
        rotationAxis (0 0 1);
        rotationCentre (0.05 -0.01 0);
        rotationAngle 60;
        faces ((8 9 10 11));
    }

    cyclicAMIRight
    {
        type cyclicAMI;
        neighbourPatch cyclicAMILeft;
        transformType rotational;
        rotationAxis (0 0 1);
        rotationCentre (0.05 -0.01 0);
        rotationAngle 60;
        faces ((12 13 14 15));
    }
Automatic ordering of faces and points across coupled patches has also
been rewritten, again replacing multiple unsatisfactory implementations.
The new ordering method is more robust on poor meshes as it
geometrically matches only a single face (per contiguous region of the
patch) in order to perform the ordering, and this face is chosen to be
the one with the highest quality. A failure in ordering now only occurs
if the best face in the patch cannot be geometrically matched, whereas
previously the worst face could cause the algorithm to fail.
The oldCyclicPolyPatch has been removed, and the mesh converters which
previously used it now all generate ordered cyclic and baffle patches
directly. This removes the need to run foamUpgradeCyclics after
conversion. In addition the fluent3DMeshToFoam converter now supports
conversion of periodic/shadow pairs to OpenFOAM cyclic patches.
Currently these deleted function declarations are still in the private section
of the class declarations but will be moved by hand to the public section over
time as this is too complex to automate reliably.
Replaced all uses of the complex Xfer class with C++11 "move" constructors
and assignment operators. Removed the now redundant Xfer class.
This substantial change improves consistency between OpenFOAM and the C++11
STL containers and algorithms, reduces memory allocation and copy overhead
when returning containers from functions, and significantly simplifies
maintenance of the core libraries.
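As an illustration of the pattern this enables (a minimal sketch; the
helper function is hypothetical and not part of the library), a container
can now simply be returned by value and is moved rather than copied:

    #include "labelList.H"

    using namespace Foam;

    // Hypothetical helper: build a list and return it by value. With C++11
    // move semantics the result is moved (or elided), not deep-copied, so
    // no Xfer wrapper is needed.
    labelList makeSequence(const label n)
    {
        labelList result(n);
        forAll(result, i)
        {
            result[i] = i;
        }
        return result;
    }

    // At the call site the returned list is move-constructed:
    //     labelList seq(makeSequence(10));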
Using the new field mapper framework it is now possible to create specialised
mappers rather than continually extending the interface of the base mapper.
This approach is far more extensible, comprehensible and maintainable.
This clarifies the purpose, which is to indicate that the object should be read
or written on this particular processor, rather than that it is or is not valid.
Registration occurs when the temporary field is transferred to a non-temporary
field via a constructor or if explicitly transferred to the database via the
regIOobject "store" methods.
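For example (a minimal sketch; 'tp' is assumed to be a tmp<volScalarField>
returned by some calculation, and the field name is illustrative):

    // Constructing a named, non-temporary field from the temporary
    // transfers its contents and registers the new field on the database
    volScalarField p2("p2", tp);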
for:
- db/functionObjects/timeControl/timeControl.H: timeControls
- primitives/bools/Switch/Switch.H: class switchType
- primitives/strings/fileName/fileName.H: fileType
- primitives/strings/wordRe/wordRe.H: compOption
This method waits until all the threads have completed IO operations and
then clears any cached information about the files on disk. This replaces
the previous workaround of deactivating threading, by setting the buffer
size to zero, when a file is written and then read in sequence. It also
allows paraFoam to update the list of available times.
Patch contributed by Mattijs Janssens
Resolves bug report https://bugs.openfoam.org/view.php?id=2962
Improvements to existing functionality
--------------------------------------
- MPI is initialised without thread support if it is not needed, e.g. when
  running uncollated.
- Use native C++11 threading; avoids problems with static destruction order.
- etc/cellModels now only read if needed.
- etc/controlDict can now be read from the environment variable FOAM_CONTROLDICT
- Uniform files (e.g. '0/uniform/time') are now read once, on the master only
  (with the masterUncollated or collated file handlers).
- collated format writes to 'processorsNNN' instead of 'processors'. The file
format is unchanged.
- Thread buffer and file buffer size are no longer limited to 2GB.
The global controlDict file contains parameters for file handling. Under some
circumstances, e.g. running in parallel on a system without NFS, the user may
need to set some parameters, e.g. fileHandler, before the global controlDict
file is read from file. To support this, OpenFOAM now allows the global
controlDict to be read as a string set to the FOAM_CONTROLDICT environment
variable.
The FOAM_CONTROLDICT environment variable can be set to the content of the
global controlDict file, e.g. from a sh/bash shell:
export FOAM_CONTROLDICT=$(foamDictionary $FOAM_ETC/controlDict)
FOAM_CONTROLDICT can then be passed to mpirun using the -x option, e.g.:
mpirun -np 2 -x FOAM_CONTROLDICT simpleFoam -parallel
Note that, while this avoids the need for NFS to read the OpenFOAM
configuration, the executable still needs to load shared libraries, which must
either be copied locally or be available via NFS or equivalent.
New: Multiple IO ranks
----------------------
The masterUncollated and collated fileHandlers can now use multiple ranks for
writing e.g.:
mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)'
In this example ranks 0 ('processor0') and 3 ('processor3') now handle all the
I/O: rank 0 handles ranks 0-2 and rank 3 handles ranks 3-5. The set of I/O
ranks should always include 0 as its first element and be sorted in increasing
order.
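The resulting mapping from a processor rank to its I/O rank can be pictured
as follows (a minimal, illustrative sketch only, not the actual
implementation):

    // Return the I/O rank responsible for a given processor rank, where
    // ioRanks is sorted and starts at 0. With ioRanks = (0 3), ranks 0-2
    // map to 0 and ranks 3-5 map to 3.
    Foam::label ioRankFor(const Foam::labelList& ioRanks, const Foam::label rank)
    {
        Foam::label io = ioRanks.first();
        forAll(ioRanks, i)
        {
            if (ioRanks[i] <= rank)
            {
                io = ioRanks[i];
            }
        }
        return io;
    }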
The collated fileHandler uses the directory naming processorsNNN_XXX-YYY, where
NNN is the total number of processors and XXX and YYY are the first and last
processor handled by that I/O rank, e.g. in the above example the directories
would be

    processors6_0-2
    processors6_3-5

and each of the collated files in these directories contains data for the
local ranks only. The same naming also applies when e.g. running decomposePar:
decomposePar -fileHandler collated -ioRanks '(0 3)'
New: Distributed data
---------------------
The individual root directories can be placed on different hosts with different
paths if necessary. In the current framework it is necessary to specify the
root per slave process, but this has been simplified with the option of
specifying the root per host using the -hostRoots command line option:
mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)' \
-hostRoots '("machineA" "/tmp/" "machineB" "/tmp")'
The hostRoots option is followed by a list of machine name + root directory
pairs; the machine name can contain regular expressions.
New: hostCollated
-----------------
The new hostCollated fileHandler automatically sets the 'ioRanks' according to
the host, using the lowest rank on each host, e.g. to run simpleFoam on 6
processors with ranks 0-2 on machineA and ranks 3-5 on machineB with the
machines specified in the hostfile:
mpirun -np 6 --hostfile hostfile simpleFoam -parallel -fileHandler hostCollated
This is equivalent to
mpirun -np 6 --hostfile hostfile simpleFoam -parallel -fileHandler collated -ioRanks '(0 3)'
This example will write directories:
processors6_0-2/
processors6_3-5/
A typical example would use distributed data, e.g. on two nodes, machineA and
machineB, each with three processes:
decomposePar -fileHandler collated -case cavity
# Copy case (constant/*, system/*, processors6/) to master:
rsync -a cavity machineA:/tmp/
# Create root on slave:
ssh machineB mkdir -p /tmp/cavity
# Run
mpirun --hostfile hostfile icoFoam \
-case /tmp/cavity -parallel -fileHandler hostCollated \
-hostRoots '("machineA" "/tmp" "machineB" "/tmp")'
Contributed by Mattijs Janssens
When an OpenFOAM simulation runs in parallel, the data for decomposed fields and
mesh(es) has historically been stored in multiple files within separate
directories for each processor. Processor directories are named 'processorN',
where N is the processor number.
This commit introduces an alternative "collated" file format where the data for
each decomposed field (and mesh) is collated into a single file, which is
written and read on the master processor. The files are stored in a single
directory named 'processors'.
The new format produces significantly fewer files - one per field, instead of N
per field. For large parallel cases, this avoids the restriction on the number
of open files imposed by the operating system limits.
The file writing can be threaded, allowing the simulation to continue running
while the data is being written to file. NFS (Network File System) is not
needed when using the collated format and, additionally, there is an option
to run without NFS with the original uncollated approach, known as
"masterUncollated".
The controls for the file handling are in the OptimisationSwitches of
etc/controlDict:
    OptimisationSwitches
    {
        ...

        //- Parallel IO file handler
        //  uncollated (default), collated or masterUncollated
        fileHandler uncollated;

        //- collated: thread buffer size for queued file writes.
        //  If set to 0 or not sufficient for the file size, threading is not used.
        //  Default: 2e9
        maxThreadFileBufferSize 2e9;

        //- masterUncollated: non-blocking buffer size.
        //  If the file exceeds this buffer size, scheduled transfer is used.
        //  Default: 2e9
        maxMasterFileBufferSize 2e9;
    }
When using the collated file handling, memory is allocated for the data in the
thread. maxThreadFileBufferSize sets the maximum size of memory in bytes that
is allocated. If the data exceeds this size, the write does not use threading.
When using the masterUncollated file handling, non-blocking MPI communication
requires a sufficiently large memory buffer on the master node.
maxMasterFileBufferSize sets the maximum size in bytes of the buffer. If the
data exceeds this size, the system uses scheduled communication.
The installation defaults for the fileHandler choice, maxThreadFileBufferSize
and maxMasterFileBufferSize (set in etc/controlDict) can be overridden within
the case controlDict file, like other parameters. Additionally, the fileHandler
can be set by:
- the "-fileHandler" command line argument;
- a FOAM_FILEHANDLER environment variable.
A foamFormatConvert utility allows users to convert files between the collated
and uncollated formats, e.g.
mpirun -np 2 foamFormatConvert -parallel -fileHandler uncollated
An example case demonstrating the file handling methods is provided in:
$FOAM_TUTORIALS/IO/fileHandling
The work was undertaken by Mattijs Janssens, in collaboration with Henry Weller.
Particle tracking has been rewritten to work in terms of the local barycentric
coordinates of the current tetrahedron, rather than the global coordinate
system.
Barycentric tracking works on any mesh, irrespective of mesh quality.
Particles do not get "lost", and tracking does not require ad-hoc
"corrections" or "rescues" to function robustly, because the calculation
of particle-face intersections is unambiguous and reproducible, even at
small angles of incidence.
Each particle position is defined by topology (i.e. the decomposed tet
cell it is in) and geometry (i.e. where it is in the cell). No search
operations are needed on restart or reconstruct, unlike when particle
positions are stored in the global coordinate system.
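To illustrate the idea (a minimal sketch, not the actual tracking code), a
particle's global position can always be recovered from the vertices of its
current tetrahedron and its stored barycentric coordinates:

    // Recover a global position from tet vertices (a, b, c, d) and
    // barycentric coordinates (y1, y2, y3), with y0 = 1 - y1 - y2 - y3
    Foam::point globalPosition
    (
        const Foam::point& a,
        const Foam::point& b,
        const Foam::point& c,
        const Foam::point& d,
        const Foam::scalar y1,
        const Foam::scalar y2,
        const Foam::scalar y3
    )
    {
        return (1 - y1 - y2 - y3)*a + y1*b + y2*c + y3*d;
    }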
The particle positions file now contains particles' local coordinates
and topology, rather than the global coordinates and cell. This change
to the output format is not backwards compatible. Existing cases with
Lagrangian data will not restart, but they will still run from time
zero without any modification. This change was necessary in order to
guarantee that the loaded particle is valid, and therefore
fundamentally prevent "loss" and "search-failure" type bugs (e.g.,
2517, 2442, 2286, 1836, 1461, 1341, 1097).
The tracking functions have also been converted to work in terms of
displacement, rather than end position. This helps remove floating
point error issues, particularly towards the end of a tracking step.
Wall bounded streamlines have been removed. The implementation proved
incompatible with the new tracking algorithm. ParaView has a surface
LIC plugin which provides equivalent, or better, functionality.
Additionally, bug report <https://bugs.openfoam.org/view.php?id=2517>
is resolved by this change.
Using
decomposePar -copyZero
the mesh is decomposed as usual, but the '0' directory is copied recursively to
the 'processor.*' directories instead of the fields being decomposed. This is a
convenient option to handle cases where the initial field files are generic and
can be used for serial or parallel running. See for example the
incompressible/simpleFoam/motorBike tutorial case.