The majority of input parameters now support automatic unit conversion.
Units are specified within square brackets, either before or after the
value. Primitive parameters (e.g., scalars, vectors, tensors, ...),
dimensioned types, fields, Function1-s and Function2-s all support unit
conversion in this way.
Unit conversion occurs on input only. OpenFOAM writes out all fields and
parameters in standard units. It is recommended to use '.orig' files in
the 0 directory to preserve user-readable input if those files are being
modified by pre-processing applications (e.g., setFields).
For example, to specify a volumetric flow rate inlet boundary in litres
per second [l/s], rather than metres-cubed per second [m^3/s], in 0/U:
boundaryField
{
    inlet
    {
        type            flowRateInletVelocity;
        volumetricFlowRate 0.1 [l/s];
        value           $internalField;
    }
    ...
}
Or, to specify the pressure field in bar, in 0/p:
internalField uniform 1 [bar];
Or, to convert the parameters of an Arrhenius reaction rate from a
cm-mol-kcal unit system, in constant/chemistryProperties:
reactions
{
    methaneReaction
    {
        type     irreversibleArrhenius;
        reaction "CH4^0.2 + 2O2^1.3 = CO2 + 2H2O";
        A        6.7e12 [(mol/cm^3)^-0.5/s];
        beta     0;
        Ea       48.4 [kcal/mol];
    }
}
Or, to define a time-varying outlet pressure using a CSV file in which
the pressure column is in mega-pascals [MPa], in 0/p:
boundaryField
{
    outlet
    {
        type uniformFixedValue;
        value
        {
            type                table;
            format              csv;
            nHeaderLine         1;
            units               ([s] [MPa]); // <-- new units entry
            columns             (0 1);
            mergeSeparators     no;
            file                "data/pressure.csv";
            outOfBounds         clamp;
            interpolationScheme linear;
        }
    }
    ...
}
(Note also that a new 'columns' entry replaces the old 'refColumn' and
'componentColumns' entries. This is considered to be more intuitive, and has
a consistent syntax with the new 'units' entry. 'refColumn' and
'componentColumns' have been retained for backwards compatibility and
will continue to work for the time being.)
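For reference, a CSV file consistent with the entries above might look like
the following (the values are purely illustrative): per 'columns (0 1)',
column 0 is the time in seconds and column 1 is the pressure in mega-pascals,
and the single header line is skipped as specified by 'nHeaderLine 1'.

    time,pressure
    0.0,0.10
    0.5,0.12
    1.0,0.15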
Unit definitions can be added in the global or case controlDict files.
See UnitConversions in $WM_PROJECT_DIR/etc/controlDict for examples.
Currently available units include:
Standard: kg m s K kmol A Cd
Derived: Hz N Pa J W g um mm cm km l ml us ms min hr mol
rpm bar atm kPa MPa cal kcal cSt cP % rad rot deg
A user-time unit is also provided if user-time is in operation. This makes
it possible to specify locally whether a parameter relates to real-time or
to user-time. For example, to define a mass source that
ramps up from a given engine-time (in crank-angle-degrees [CAD]) over a
duration in real-time, in constant/fvModels:
massSource1
{
    type massSource;
    points ((1 2 3));

    massFlowRate
    {
        type     scale;
        scale    linearRamp;
        start    20 [CAD];
        duration 50 [ms];
        value    0.1 [g/s];
    }
}
Specified units will be checked against the parameter's dimensions where
possible, and an error generated if they are not consistent. For the
dimensions to be available for this check, the code requires
modification, and work propagating this change across OpenFOAM is
ongoing. Unit conversions are still possible without these changes, but
the validity of such conversions will not be checked.
Units are no longer permitted in 'dimensions' entries in field files.
These 'dimensions' entries can now, instead, take the names of
dimensions. The names of the available dimensions are:
Standard: mass length time temperature
moles current luminousIntensity
Derived: area volume rate velocity momentum acceleration density
force energy power pressure kinematicPressure
compressibility gasConstant specificHeatCapacity
kinematicViscosity dynamicViscosity thermalConductivity
volumetricFlux massFlux
So, for example, a 0/epsilon file might specify the dimensions as
follows:
dimensions [energy/mass/time];
And a 0/alphat file might have:
dimensions [thermalConductivity/specificHeatCapacity];
*** Development Notes ***
A unit conversion can construct trivially from a dimension set,
resulting in a "standard" unit with a conversion factor of one. This
means the functions which perform unit conversion on read can be
provided dimension sets or unit conversion objects interchangeably.
A basic `dict.lookup<vector>("Umean")` call will do unit conversion, but
it does not know the parameter's dimensions, so it cannot check the
validity of the supplied units. A corresponding lookup function has been
added in which the dimensions or units can be provided; in this case the
corresponding call would be `dict.lookup<vector>("Umean", dimVelocity)`.
This function enables additional checking and should be used wherever
possible.
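As an illustration, a model constructor could use the checked lookups as
follows (the class and member names are hypothetical; the lookup calls are as
described above):

    // Hypothetical constructor reading dimensioned parameters. Supplying
    // the dimensions lets the lookup verify any units given by the user,
    // e.g. "Umean (10 0 0) [km/hr];" or "p0 1 [bar];"
    myModel::myModel(const dictionary& dict)
    :
        Umean_(dict.lookup<vector>("Umean", dimVelocity)),
        p0_(dict.lookup<scalar>("p0", dimPressure))
    {}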
Function1-s and Function2-s have had their constructors and selectors
changed so that dimensions/units must be specified by calling code. In
the case of Function1, two unit arguments must be given; one for the
x-axis and one for the value-axis. For Function2-s, three must be
provided.
In some cases, it is desirable (or at least established practice), that
a given non-standard unit be used in the absence of specific
user-defined units. Commonly this includes reading angles in degrees
(rather than radians) and reading times in user-time (rather than
real-time). The primitive lookup functions and Function1 and Function2
selectors both support specifying a non-standard default unit. For
example, `theta_ = dict.lookup<scalar>("theta", unitDegrees)` will read
an angle in degrees by default. If this is done within a model which
also supports writing then the write call must be modified accordingly
so that the data is also written out in degrees. Overloads of writeEntry
have been created for this purpose. In this case, the angle theta should
be written out with `writeEntry(os, "theta", unitDegrees, theta_)`.
Function1-s and Function2-s behave similarly, but with greater numbers
of dimensions/units arguments as before.
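For example, a model that reads an angle in degrees by default and writes it
back out in degrees would pair the two calls as follows (a sketch using the
calls described above; the surrounding member names are hypothetical):

    // In the constructor or read function: the angle is interpreted as
    // degrees unless the user supplies explicit units
    theta_ = dict.lookup<scalar>("theta", unitDegrees);

    // In the corresponding write function: convert back so that the
    // entry is also written out in degrees
    writeEntry(os, "theta", unitDegrees, theta_);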
The non-standard user-time unit can be accessed by a `userUnits()`
method that has been added to Time. Use of this user-time unit in the
construction of Function1-s should prevent the need for explicit
user-time conversion in boundary conditions and sub-models and similar.
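As a sketch, a sub-model constructing a time-varying mass flow rate could
pass the user-time unit for the x-axis and the appropriate dimensions for the
value-axis. Note that the selector signature shown here (name, x-axis units,
value units, dictionary) is an assumption for illustration only:

    // Sketch only: x-axis in user-time (e.g. CAD when an engine time is
    // in operation), value-axis a mass flow rate
    massFlowRate_ = Function1<scalar>::New
    (
        "massFlowRate",
        mesh.time().userUnits(),  // x-axis unit: user-time
        dimMass/dimTime,          // value-axis dimensions: mass flow rate
        dict
    );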
Some models might contain non-typed stream-based lookups of the form
`dict.lookup("p0") >> p0_` (e.g., in a re-read method), or
`Umean_(dict.lookup("Umean"))` (e.g., in an initialiser list). These
calls cannot facilitate unit conversion and are therefore discouraged.
They should be replaced with
`p0_ = dict.lookup<scalar>("p0", dimPressure)` and
`Umean_(dict.lookup<vector>("Umean", dimVelocity))` and similar whenever
they are found.
The HashTable underlying PtrListDictionary is now used for zone lookup by
name, which is much faster than the linear search used previously when there
are many zones.
faceZones are now handled directly by the applications and the new
faceZone::topoChange function, so that any face can be in any number of
zones, significantly increasing the flexibility and usefulness of faceZones.
This completes the generalisation of cellZone, faceZone and pointZone to support
multiple zones for each cell, face or point respectively. The next step will be
to make zones polymorphic and run-time selectable so that they can alter during
the run and adapt to moving meshes, for example.
The surface mesh setPoints function resets the points without caching the old
points or swept areas, so it is the equivalent of polyMesh::setPoints rather
than movePoints.
The concept of cell and face inflation proved unworkable in general and has been
replaced by the more flexible and robust cell-splitting combined with
conservative interpolative mapping and mesh morphing as appropriate.
Zones are now completely dynamic, i.e. the number of zones of each type can
change during the run, e.g. by run-time mesh-to-mesh mapping onto meshes with
different zones used to control mesh motion. This means that the index of each
zone may change during the run, so it is better that zones do not cache their
own index but instead look it up from the zone list using findIndex when
required.
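So code that previously cached a zone index should now look the index up when
it is needed, along the lines of the following sketch (the variable names are
illustrative):

    // Look up the zone index by name each time it is required, rather
    // than caching it, since the index may change during the run
    const label zonei = mesh.cellZones().findIndex(zoneName);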
The mergePatchPairs functionality in blockMesh also now uses patchIntersection.
The new mergePatchPairs and patchIntersection replace the old, fragile and
practically unusable polyTopoChanger::slidingInterface functionality, the
removal of which has allowed the deletion of a lot of other ancient and
otherwise unused clutter, including polyTopoChanger, polyMeshModifier,
polyTopoChange::setAction and the associated addObject/*, modifyObject/* and
removeObject/*. This rationalisation paves the way for completing the update
of zone handling, allowing mesh points, faces and cells to exist in multiple
zones, which is currently not supported with mesh topology change.
Application
stitchMesh
Description
Utility to stitch or conform pairs of patches,
converting the patch faces either into internal faces
or into conformal faces of another patch.
Usage
\b stitchMesh (\<list of patch pairs\>)
E.g. to stitch patches \c top1 to \c top2 and \c bottom1 to \c bottom2
stitchMesh "((top1 top2) (bottom1 bottom2))"
Options:
- \par -overwrite \n
Replace the old mesh with the new one, rather than writing the new one
into a separate time directory
- \par -region \<name\>
Specify an alternative mesh region.
- \par -fields
Update vol and point fields
- \par -tol
Merge tolerance relative to local edge length (default 1e-4)
See also
Foam::mergePatchPairs
Replaces MeshObject, providing a formalised method for creating demand-driven
mesh objects, optionally supporting update functions called by the mesh
following mesh changes.
Class
Foam::DemandDrivenMeshObject
Description
Templated abstract base-class for demand-driven mesh objects used to
automate their allocation to the mesh database and the mesh-modifier
event-loop.
DemandDrivenMeshObject is templated on the type of mesh it is allocated
to, the type of the mesh object (TopologicalMeshObject, GeometricMeshObject,
MoveableMeshObject, DistributeableMeshObject, UpdateableMeshObject) and the
type of the actual object it is created for, for example:
\verbatim
    class leastSquaresVectors
    :
        public DemandDrivenMeshObject
        <
            fvMesh,
            MoveableMeshObject,
            leastSquaresVectors
        >
    {
        .
        .
        .
        //- Update the least square vectors when the mesh moves
        virtual bool movePoints();
    };
\endverbatim
MeshObject types:
- TopologicalMeshObject: mesh object to be deleted on topology change
- GeometricMeshObject: mesh object to be deleted on geometry change
- MoveableMeshObject: mesh object to be updated in movePoints
- UpdateableMeshObject: mesh object to be updated in topoChange or
movePoints
- PatchMeshObject: mesh object to be additionally updated for patch changes
DemandDrivenMeshObject should always be constructed and accessed via the New
methods provided so that they are held and maintained by the objectRegistry.
To ensure this, the constructors of the concrete derived types should be
private or protected, and friendship with the DemandDrivenMeshObject
base-class declared so that the New functions can call the constructors.
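For example, client code obtains the leastSquaresVectors for a given fvMesh
via its New method, which constructs and registers the object on first call
and returns the registered instance thereafter:

    // Construct on first call and register on the mesh database;
    // subsequent calls return the registered instance
    const leastSquaresVectors& lsv = leastSquaresVectors::New(mesh);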
Additionally, the mesh-object types (TopologicalMeshObject, GeometricMeshObject,
MoveableMeshObject, DistributeableMeshObject, UpdateableMeshObject) can now be
used as mix-in types for normally allocated objects, providing the same
interface to mesh-change update functions; see the Fickian fluid
thermophysicalTransportModel or the anisotropic solid thermophysicalTransportModel.
This new approach to adding mesh-update functions to classes will be applied to
other existing classes and future developments to simplify the support and
maintenance of run-time mesh changes, in particular mesh refinement/unrefinement
and mesh-to-mesh mapping.
The timeName() function simply returns dimensionedScalar::name(), which holds
the user-time name of the current time, and now that timeName() is no longer
virtual, dimensionedScalar::name() can be called directly. The timeName()
function implementation is maintained for backward compatibility.
This is a map data structure rather than a class or function which performs the
mapping operation, so polyMeshDistributionMap is a more logical and
comprehensible name than mapDistributePolyMesh.
Sampled sets and streamlines now write all their fields to the same
file. This prevents excessive duplication of the geometry and makes
post-processing tasks more convenient.
"axis" entries are now optional in sampled sets and streamlines. When
omitted, a default entry will be used, which is chosen appropriately for
the coordinate set and the write format. Some combinations are not
supported. For example, a scalar ("x", "y", "z" or "distance") axis
cannot be used to write in the vtk format, as vtk requires 3D locations
with which to associate data. Similarly, a point ("xyz") axis cannot be
used with the gnuplot format, as gnuplot needs a single scalar to
associate with the x-axis.
Streamlines can now write out fields of any type, not just scalars and
vectors, and there is no longer a strict requirement for velocity to be
one of the fields.
Streamlines now output to postProcessing/<functionName>/time/<file> in
the same way as other functions. The additional "sets" subdirectory has
been removed.
The raw set writer now aligns columns correctly.
The handling of segments in coordSet and sampledSet has been
fixed/completed. Segments mean that a coordinate set can represent a
number of contiguous lines, disconnected points, or some combination
thereof. This works in parallel; segments remain contiguous across
processor boundaries. Set writers now only need one write method, as the
previous "writeTracks" functionality is now handled by streamlines
providing the writer with the appropriate segment structure.
Coordinate sets and set writers now have a convenient programmatic
interface. To write a graph of A and B against some coordinate X, in
gnuplot format, we can call the following:
setWriter::New("gnuplot")->write
(
    directoryName,
    graphName,
    coordSet(true, "X", X), // <-- "true" indicates a contiguous
    "A",                    //     line, "false" would mean
    A,                      //     disconnected points
    "B",
    B
);
This write function is variadic. It supports any number of
field-name-field pairs, and they can be of any primitive type.
Support for Jplot and Xmgrace formats has been removed. Raw, CSV,
Gnuplot, VTK and Ensight formats are all still available.
The old "graph" functionality has been removed from the code, with the
exception of the randomProcesses library and associated applications
(noise, DNSFoam and boxTurb). The intention is that these should also
eventually be converted to use the setWriters. For now, so that it is
clear that the "graph" functionality is not to be used elsewhere, it has
been moved into a subdirectory of the randomProcesses library.
This prevents excessive duplication of surface geometry and makes
post-processing tasks in ParaView more convenient.
The Nastran and Star-CD surface formats were found not to work, so
support for these output types has been removed. Raw, VTK, Foam and
Ensight formats are all still available.
'typeIOobject' is used to check the existence of and open an object file, and
to read and check the header, without constructing the object.
'typeIOobject' operates in an equivalent and consistent manner to 'regIOobject'
but the type information is provided by the template argument rather than via
virtual functions for which the derived object would need to be constructed,
which is the case for 'regIOobject'.
'typeIOobject' replaces the previous separate functions 'typeHeaderOk' and
'typeFilePath' with a single consistent interface.
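For example, an application can test whether a field file of the expected type
exists and has a valid header before deciding to construct it, along the lines
of the following sketch (the headerOk() check is assumed here to be the
relevant query, consistent with the description above):

    // Check for an optional "T" field without constructing the field;
    // only the header is read and checked
    typeIOobject<volScalarField> Theader
    (
        "T",
        runTime.timeName(),
        mesh,
        IOobject::MUST_READ
    );

    if (Theader.headerOk())
    {
        // ... safe to construct the volScalarField from Theader
    }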
All path functions in 'IOobject' are now either templated on the type or
require a 'globalFile' argument to specify whether the type is case-global
(e.g. 'IOdictionary') or decomposed in parallel (e.g. almost everything else).
The 'global()' and 'globalFile()' virtual functions are now in 'regIOobject'
abstract base-class and overridden as required by derived classes. The path
functions using 'global()' and 'globalFile()' to differentiate between global
and processor local objects are now also in 'regIOobject' rather than 'IOobject'
to ensure the path returned is absolutely consistent with the type.
Unfortunately there is still potential for unexpected IO behaviour,
inconsistent with the global/local nature of the type, due to the
'fileOperation' classes searching the processor directory for case-global
objects before searching the case directory. This approach appears to be a
work-around for incomplete integration with, and rationalisation of,
'IOobject', but with the changes above it is no longer necessary.
Unfortunately this "up" searching is baked-in at a low level and mixed up with
various complex ways of picking the processor directory name out of the object
path, so it will take some unravelling; this work will be undertaken as time
allows.
The FOAM file format has not changed from version 2.0 in many years, so there
is no longer a need for the 'version' entry in the FoamFile header to be
required. To reduce unnecessary clutter it is now optional, defaulting to the
current file format version 2.0.
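For example, a minimal FoamFile header can now omit the 'version' entry:

    FoamFile
    {
        format      ascii;
        class       volScalarField;
        object      p;
    }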
This makes usage of transformPoints the same as for
surfaceTransformPoints. Transformations are supplied as a string and are
applied in sequence.
Usage
transformPoints "\<transformations\>" [OPTION]
Supported transformations:
- "translate=<translation vector>"
Translational transformation by given vector
- "rotate=(<n1 vector> <n2 vector>)"
Rotational transformation from unit vector n1 to n2
- "Rx=<angle [deg] about x-axis>"
Rotational transformation by given angle about x-axis
- "Ry=<angle [deg] about y-axis>"
Rotational transformation by given angle about y-axis
- "Rz=<angle [deg] about z-axis>"
Rotational transformation by given angle about z-axis
- "Ra=<axis vector> <angle [deg] about axis>"
Rotational transformation by given angle about given axis
- "scale=<x-y-z scaling vector>"
Anisotropic scaling by the given vector in the x, y, z
coordinate directions
Example usage:
    transformPoints \
        "translate=(-0.05 -0.05 0), \
         Rz=45, \
         translate=(0.05 0.05 0)"
Description
Transform (translate, rotate, scale) a surface.
Usage
\b surfaceTransformPoints "\<transformations\>" \<input\> \<output\>
Supported transformations:
- \par translate=<translation vector>
Translational transformation by given vector
- \par rotate=(\<n1 vector\> \<n2 vector\>)
Rotational transformation from unit vector n1 to n2
- \par Rx=\<angle [deg] about x-axis\>
Rotational transformation by given angle about x-axis
- \par Ry=\<angle [deg] about y-axis\>
Rotational transformation by given angle about y-axis
- \par Rz=\<angle [deg] about z-axis\>
Rotational transformation by given angle about z-axis
- \par Ra=\<axis vector\> \<angle [deg] about axis\>
Rotational transformation by given angle about given axis
- \par scale=\<x-y-z scaling vector\>
Anisotropic scaling by the given vector in the x, y, z
coordinate directions
Example usage:
    surfaceTransformPoints \
        "translate=(-0.586 0 -0.156), \
         Ry=3.485, \
         translate=(0.586 0 0.156)" \
        constant/geometry/w3_orig.stl constant/geometry/w3.stl
The transformation sequence is specified like a substitution string used by
Description
Transform (translate, rotate, scale) a surface.
The rollPitchYaw option takes three angles (degrees):
- roll (rotation about x) followed by
- pitch (rotation about y) followed by
- yaw (rotation about z)
The yawPitchRoll option applies yaw followed by pitch followed by roll.
Usage
\b surfaceTransformPoints "\<transformations\>" \<input\> \<output\>
Example usage:
    surfaceTransformPoints \
        "translate=(-0.586 0 -0.156), \
         rollPitchYaw=(0 -3.485 0), \
         translate=(0.586 0 0.156)" \
        constant/geometry/w3_orig.stl constant/geometry/w3.stl
Specifying a plane with which to subset feature edges is now done using
the same dictionary syntax used elsewhere in OpenFOAM. For example, in
system/surfaceFeaturesDict:
subsetFeatures
{
    // Include only edges that intersect the plane
    plane
    {
        planeType pointAndNormal;
        point     (0 0 0);
        normal    (1 0 0);
    }
    ...
}
Originally the only supported geometry specifications were triangulated
surfaces, hence the name of the directory: constant/triSurface. However, now
that other surface specifications are supported and provided, it is much more
logical for the directory to be named accordingly: constant/geometry. All
tutorial and template cases have been updated.
Note that backward compatibility is provided such that if the constant/geometry
directory does not exist but constant/triSurface does then the geometry files
are read from there.
To handle the additional optional specification for the closeness calculation,
these settings are now in a sub-dictionary of surfaceFeaturesDict, e.g.
closeness
{
    // Output the closeness of surface elements to other surface elements.
    faceCloseness  no;

    // Output the closeness of surface points to other surface elements.
    pointCloseness yes;

    // Optional maximum angle between opposite points considered close
    internalAngleTolerance 80;
    externalAngleTolerance 80;
}