openfoam/applications/utilities/parallelProcessing/decomposePar/decomposePar.C
commit 8959b8e00a (Henry Weller): ENH: Improvements to the fileHandler and collated IO
Improvements to existing functionality
--------------------------------------
  - MPI is initialised without thread support if it is not needed, e.g. for
    uncollated running.
  - Use native C++11 threading; avoids problems with static destruction order.
  - etc/cellModels now only read if needed.
  - etc/controlDict can now be read from the environment variable FOAM_CONTROLDICT
  - Uniform files (e.g. '0/uniform/time') are now read once, on the master only
    (with the masterUncollated or collated file handlers)
  - collated format writes to 'processorsNNN' instead of 'processors'.  The file
    format is unchanged.
  - Thread buffer and file buffer size are no longer limited to 2 GB.

The global controlDict file contains parameters for file handling.  Under some
circumstances, e.g. running in parallel on a system without NFS, the user may
need to set some parameters, e.g. fileHandler, before the global controlDict
file is read from disk.  To support this, OpenFOAM now allows the global
controlDict to be supplied as a string in the FOAM_CONTROLDICT environment
variable.

The FOAM_CONTROLDICT environment variable can be set to the content of the
global controlDict file, e.g. from a sh/bash shell:

    export FOAM_CONTROLDICT=$(foamDictionary $FOAM_ETC/controlDict)

FOAM_CONTROLDICT can then be passed to mpirun using the -x option (Open MPI
syntax; other MPI implementations provide equivalent flags), e.g.:

    mpirun -np 2 -x FOAM_CONTROLDICT simpleFoam -parallel

Note that while this avoids the need for NFS to read the OpenFOAM
configuration, the executable still needs to load shared libraries, which must
either be copied locally or be available via NFS or equivalent.

New: Multiple IO ranks
----------------------
The masterUncollated and collated fileHandlers can now use multiple ranks for
writing, e.g.:

    mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)'

In this example ranks 0 ('processor0') and 3 ('processor3') handle all the
I/O: rank 0 writes the data of ranks 0-2 and rank 3 writes the data of ranks
3-5.  The set of IO ranks must include 0 as its first element and be sorted in
increasing order.
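
To make the grouping concrete, here is a minimal C++ sketch (illustrative
only; ioRankFor is a hypothetical helper, not OpenFOAM API) of how a rank
maps to the IO rank that writes its data:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Given the sorted IO ranks (first element must be 0), return the
    // IO rank responsible for 'rank': the largest IO rank <= rank.
    int ioRankFor(int rank, const std::vector<int>& ioRanks)
    {
        auto it = std::upper_bound(ioRanks.begin(), ioRanks.end(), rank);
        return *(it - 1);   // well-defined because ioRanks[0] == 0
    }

    int main()
    {
        const std::vector<int> ioRanks{0, 3};   // as in -ioRanks '(0 3)'
        assert(ioRankFor(2, ioRanks) == 0);     // ranks 0-2 -> IO rank 0
        assert(ioRankFor(5, ioRanks) == 3);     // ranks 3-5 -> IO rank 3
        return 0;
    }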

The collated fileHandler uses the directory naming processorsNNN_XXX-YYY,
where NNN is the total number of processors and XXX and YYY are the first and
last processor in the group, e.g. in the above example the directories would be

    processors6_0-2
    processors6_3-5

and each of the collated files in these directories contains data for its
local ranks only. The same naming applies when running e.g. decomposePar:

    decomposePar -fileHandler collated -ioRanks '(0 3)'

New: Distributed data
---------------------

The individual root directories can be placed on different hosts with
different paths if necessary.  Previously the root had to be specified per
slave process; this has been simplified so that the root can be specified per
host with the -hostRoots command line option:

    mpirun -np 6 simpleFoam -parallel -ioRanks '(0 3)' \
        -hostRoots '("machineA" "/tmp/" "machineB" "/tmp")'

The hostRoots option is followed by a list of machine-name/root-directory
pairs; the machine name can contain regular expressions.
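
For example (illustrative; the pattern follows OpenFOAM's regular-expression
support), a single entry can cover both machines:

    -hostRoots '("machine[AB]" "/tmp")'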

New: hostCollated
-----------------

The new hostCollated fileHandler automatically sets 'ioRanks' to the lowest
rank on each host, e.g. to run simpleFoam on 6 processors with ranks 0-2 on
machineA and ranks 3-5 on machineB, with the machines specified in the
hostfile:

    mpirun -np 6 --hostfile hostfile simpleFoam -parallel -fileHandler hostCollated

This is equivalent to:

    mpirun -np 6 --hostfile hostfile simpleFoam -parallel -fileHandler collated -ioRanks '(0 3)'

This example will write directories:

    processors6_0-2/
    processors6_3-5/

A typical example would use distributed data, e.g. on two nodes, machineA and
machineB, each with three processes:

    decomposePar -fileHandler collated -case cavity

    # Copy case (constant/*, system/*, processors6/) to master:
    rsync -a cavity machineA:/tmp/

    # Create root on slave:
    ssh machineB mkdir -p /tmp/cavity

    # Run
    mpirun --hostfile hostfile icoFoam \
        -case /tmp/cavity -parallel -fileHandler hostCollated \
        -hostRoots '("machineA" "/tmp" "machineB" "/tmp")'

Contributed by Mattijs Janssens
2018-03-21 12:42:22 +00:00

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | Copyright (C) 2011-2017 OpenFOAM Foundation
     \\/     M anipulation  | Copyright (C) 2016-2017 OpenCFD Ltd.
-------------------------------------------------------------------------------
License
This file is part of OpenFOAM.
OpenFOAM is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with OpenFOAM. If not, see <http://www.gnu.org/licenses/>.
Application
decomposePar
Group
grpParallelUtilities
Description
Automatically decomposes a mesh and fields of a case for parallel
execution of OpenFOAM.
Usage
\b decomposePar [OPTION]
Options:
- \par -cellDist
Write the cell distribution as a labelList, for use with 'manual'
decomposition method or as a volScalarField for post-processing.
- \par -region \<regionName\>
Decompose named region. Does not check for existence of processor*.
- \par -allRegions
Decompose all regions in regionProperties. Does not check for
existence of processor*.
- \par -copyZero
Copy \a 0 directory to processor* rather than decompose the fields.
- \par -copyUniform
Copy any \a uniform directories too.
- \par -constant
- \par -time xxx:yyy
Override controlDict settings and decompose selected times. Does not
re-decompose the mesh i.e. does not handle moving mesh or changing
mesh cases.
- \par -fields
Use existing geometry decomposition and convert fields only.
- \par -noSets
Skip decomposing cellSets, faceSets, pointSets.
- \par -force
Remove any existing \a processor subdirectories before decomposing the
geometry.
- \par -ifRequired
Only decompose the geometry if the number of domains has changed from a
previous decomposition. No \a processor subdirectories will be removed
unless the \a -force option is also specified. This option can be used
to avoid redundant geometry decomposition (eg, in scripts), but should
be used with caution when the underlying (serial) geometry or the
decomposition method etc. have been changed between decompositions.
\*---------------------------------------------------------------------------*/
#include "OSspecific.H"
#include "fvCFD.H"
#include "IOobjectList.H"
#include "domainDecomposition.H"
#include "labelIOField.H"
#include "labelFieldIOField.H"
#include "scalarIOField.H"
#include "scalarFieldIOField.H"
#include "vectorIOField.H"
#include "vectorFieldIOField.H"
#include "sphericalTensorIOField.H"
#include "sphericalTensorFieldIOField.H"
#include "symmTensorIOField.H"
#include "symmTensorFieldIOField.H"
#include "tensorIOField.H"
#include "tensorFieldIOField.H"
#include "pointFields.H"
#include "regionProperties.H"
#include "readFields.H"
#include "dimFieldDecomposer.H"
#include "fvFieldDecomposer.H"
#include "pointFieldDecomposer.H"
#include "lagrangianFieldDecomposer.H"
#include "decompositionModel.H"
#include "collatedFileOperation.H"
#include "faCFD.H"
#include "emptyFaPatch.H"
#include "faMeshDecomposition.H"
#include "faFieldDecomposer.H"
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
namespace Foam
{
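// Return the proci'th addressing list with the given name, reading and
// caching it on first use so that later times reuse the same addressing.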
const labelIOList& procAddressing
(
const PtrList<fvMesh>& procMeshList,
const label proci,
const word& name,
PtrList<labelIOList>& procAddressingList
)
{
const fvMesh& procMesh = procMeshList[proci];
if (!procAddressingList.set(proci))
{
procAddressingList.set
(
proci,
new labelIOList
(
IOobject
(
name,
procMesh.facesInstance(),
procMesh.meshSubDir,
procMesh,
IOobject::MUST_READ,
IOobject::NO_WRITE,
false
)
)
);
}
return procAddressingList[proci];
}
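// Copy or link any undecomposed 'uniform' directory (e.g. 0/uniform/time)
// into the processor time directory: copied when -copyUniform is given or
// the case is distributed, otherwise linked with a relative path.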
void decomposeUniform
(
const bool copyUniform,
const domainDecomposition& mesh,
const Time& processorDb,
const word& regionDir = word::null
)
{
const Time& runTime = mesh.time();
// Any uniform data to copy/link?
const fileName uniformDir(regionDir/"uniform");
if (fileHandler().isDir(runTime.timePath()/uniformDir))
{
Info<< "Detected additional non-decomposed files in "
<< runTime.timePath()/uniformDir
<< endl;
const fileName timePath =
fileHandler().filePath(processorDb.timePath());
// If no fields have been decomposed the destination
// directory will not have been created so make sure.
mkDir(timePath);
if (copyUniform || mesh.distributed())
{
if (!fileHandler().exists(timePath/uniformDir))
{
fileHandler().cp
(
runTime.timePath()/uniformDir,
timePath/uniformDir
);
}
}
else
{
// Link with relative paths
string parentPath = string("..")/"..";
if (regionDir != word::null)
{
parentPath = parentPath/"..";
}
fileName currentDir(cwd());
chDir(timePath);
if (!fileHandler().exists(uniformDir))
{
fileHandler().ln
(
parentPath/runTime.timeName()/uniformDir,
uniformDir
);
}
chDir(currentDir);
}
}
}
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
int main(int argc, char *argv[])
{
argList::addNote
(
"decompose a mesh and fields of a case for parallel execution"
);
argList::noParallel();
argList::addOption
(
"decomposeParDict",
"file",
"read decomposePar dictionary from specified location"
);
#include "addRegionOption.H"
argList::addBoolOption
(
"allRegions",
"operate on all regions in regionProperties"
);
argList::addBoolOption
(
"cellDist",
"write cell distribution as a labelList - for use with 'manual' "
"decomposition method or as a volScalarField for post-processing."
);
argList::addBoolOption
(
"copyZero",
"copy the '0' directory to processor* rather than decompose the fields"
);
argList::addBoolOption
(
"copyUniform",
"copy any uniform/ directories too"
);
argList::addBoolOption
(
"fields",
"use existing geometry decomposition and convert fields only"
);
argList::addBoolOption
(
"noSets",
"skip decomposing cellSets, faceSets, pointSets"
);
argList::addBoolOption
(
"force",
"remove existing processor*/ subdirs before decomposing the geometry"
);
argList::addBoolOption
(
"ifRequired",
"only decompose geometry if the number of domains has changed"
);
// Include the explicit 'constant' option; no 'zero' in the time range
timeSelector::addOptions(true, false);
#include "setRootCase.H"
const bool optRegion = args.found("region");
const bool allRegions = args.found("allRegions");
const bool writeCellDist = args.found("cellDist");
const bool copyZero = args.found("copyZero");
const bool copyUniform = args.found("copyUniform");
const bool decomposeSets = !args.found("noSets");
const bool ifRequiredDecomposition = args.found("ifRequired");
bool decomposeFieldsOnly = args.found("fields");
bool forceOverwrite = args.found("force");
// Set time from database
#include "createTime.H"
// Allow override of time
instantList times = timeSelector::selectIfPresent(runTime, args);
// Allow override of decomposeParDict location
fileName decompDictFile;
args.readIfPresent("decomposeParDict", decompDictFile);
wordList regionNames;
if (allRegions)
{
Info<< "Decomposing all regions in regionProperties" << nl << endl;
regionProperties rp(runTime);
wordHashSet names;
forAllConstIters(rp, iter)
{
names.insertMany(iter.object());
}
regionNames = names.sortedToc();
}
else
{
regionNames = {fvMesh::defaultRegion};
args.readIfPresent("region", regionNames[0]);
}
forAll(regionNames, regioni)
{
const word& regionName = regionNames[regioni];
const word& regionDir =
regionName == fvMesh::defaultRegion ? word::null : regionName;
Info<< "\n\nDecomposing mesh " << regionName << nl << endl;
// Determine the existing processor count directly
label nProcs = fileHandler().nProcs(runTime.path(), regionDir);
// Get requested numberOfSubdomains directly from the dictionary.
// Note: have no mesh yet so cannot use decompositionModel::New
const label nDomains = decompositionMethod::nDomains
(
IOdictionary
(
decompositionModel::selectIO
(
IOobject
(
"decomposeParDict",
runTime.time().system(),
regionDir, // use region if non-standard
runTime,
IOobject::MUST_READ,
IOobject::NO_WRITE,
false
),
decompDictFile
)
)
);
// Give file handler a chance to determine the output directory
const_cast<fileOperation&>(fileHandler()).setNProcs(nDomains);
if (decomposeFieldsOnly)
{
// Sanity check on previously decomposed case
if (nProcs != nDomains)
{
FatalErrorInFunction
<< "Specified -fields, but the case was decomposed with "
<< nProcs << " domains"
<< nl
<< "instead of " << nDomains
<< " domains as specified in decomposeParDict" << nl
<< exit(FatalError);
}
}
else if (nProcs)
{
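// Existing processor directories found: reuse them (-ifRequired), tolerate
// them (region options), remove them (-force), or exit with an error.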
bool procDirsProblem = true;
if (ifRequiredDecomposition && nProcs == nDomains)
{
// we can reuse the decomposition
decomposeFieldsOnly = true;
procDirsProblem = false;
forceOverwrite = false;
Info<< "Using existing processor directories" << nl;
}
if (allRegions || optRegion)
{
procDirsProblem = false;
forceOverwrite = false;
}
if (forceOverwrite)
{
Info<< "Removing " << nProcs
<< " existing processor directories" << endl;
// Remove existing processors directory
fileNameList dirs
(
fileHandler().readDir
(
runTime.path(),
fileName::Type::DIRECTORY
)
);
forAllReverse(dirs, diri)
{
const fileName& d = dirs[diri];
// Starts with 'processors'
if (d.find("processors") == 0)
{
if (fileHandler().exists(d))
{
fileHandler().rmDir(d);
}
}
// Starts with 'processor'
if (d.find("processor") == 0)
{
// Check for an integer after 'processor'
fileName num(d.substr(9));
label proci = -1;
if (Foam::read(num.c_str(), proci))
{
if (fileHandler().exists(d))
{
fileHandler().rmDir(d);
}
}
}
}
procDirsProblem = false;
}
if (procDirsProblem)
{
FatalErrorInFunction
<< "Case is already decomposed with " << nProcs
<< " domains, use the -force option or manually" << nl
<< "remove processor directories before decomposing. e.g.,"
<< nl
<< " rm -rf " << runTime.path().c_str() << "/processor*"
<< nl
<< exit(FatalError);
}
}
Info<< "Create mesh" << endl;
domainDecomposition mesh
(
IOobject
(
regionName,
runTime.timeName(),
runTime,
IOobject::NO_READ,
IOobject::NO_WRITE,
false
),
decompDictFile
);
// Decompose the mesh
if (!decomposeFieldsOnly)
{
// Disable buffering when writing mesh since we need to read
// it later on when decomposing the fields
float bufSz =
fileOperations::collatedFileOperation::maxThreadFileBufferSize;
fileOperations::collatedFileOperation::maxThreadFileBufferSize = 0;
mesh.decomposeMesh();
mesh.writeDecomposition(decomposeSets);
if (writeCellDist)
{
const labelList& procIds = mesh.cellToProc();
// Write the decomposition as labelList for use with 'manual'
// decomposition method.
labelIOList cellDecomposition
(
IOobject
(
"cellDecomposition",
mesh.facesInstance(),
mesh,
IOobject::NO_READ,
IOobject::NO_WRITE,
false
),
procIds
);
cellDecomposition.write();
Info<< nl << "Wrote decomposition to "
<< cellDecomposition.objectPath()
<< " for use in manual decomposition." << endl;
// Write as volScalarField for postprocessing.
volScalarField cellDist
(
IOobject
(
"cellDist",
runTime.timeName(),
mesh,
IOobject::NO_READ,
IOobject::AUTO_WRITE
),
mesh,
dimensionedScalar("cellDist", dimless, -1),
zeroGradientFvPatchScalarField::typeName
);
forAll(procIds, celli)
{
cellDist[celli] = procIds[celli];
}
cellDist.correctBoundaryConditions();
cellDist.write();
Info<< nl << "Wrote decomposition as volScalarField to "
<< cellDist.name() << " for use in postprocessing."
<< endl;
}
fileOperations::collatedFileOperation::maxThreadFileBufferSize =
bufSz;
}
if (copyZero)
{
// Copy the 0 directory into each of the processor directories
fileName prevTimePath;
for (label proci = 0; proci < mesh.nProcs(); ++proci)
{
Time processorDb
(
Time::controlDictName,
args.rootPath(),
args.caseName()/fileName(word("processor") + name(proci))
);
processorDb.setTime(runTime);
if (fileHandler().isDir(runTime.timePath()))
{
// Get corresponding directory name (to handle processors/)
const fileName timePath
(
fileHandler().objectPath
(
IOobject
(
"",
processorDb.timeName(),
processorDb
),
word::null
)
);
if (timePath != prevTimePath)
{
Info<< "Processor " << proci
<< ": copying " << runTime.timePath() << nl
<< " to " << timePath << endl;
fileHandler().cp(runTime.timePath(), timePath);
prevTimePath = timePath;
}
}
}
}
else
{
// Decompose the field files
// Cached processor meshes and maps. These are only preserved if
// running with multiple times.
PtrList<Time> processorDbList(mesh.nProcs());
PtrList<fvMesh> procMeshList(mesh.nProcs());
PtrList<labelIOList> faceProcAddressingList(mesh.nProcs());
PtrList<labelIOList> cellProcAddressingList(mesh.nProcs());
PtrList<labelIOList> boundaryProcAddressingList(mesh.nProcs());
PtrList<fvFieldDecomposer> fieldDecomposerList(mesh.nProcs());
PtrList<dimFieldDecomposer> dimFieldDecomposerList(mesh.nProcs());
PtrList<labelIOList> pointProcAddressingList(mesh.nProcs());
PtrList<pointFieldDecomposer> pointFieldDecomposerList
(
mesh.nProcs()
);
// Loop over all times
forAll(times, timeI)
{
runTime.setTime(times[timeI], timeI);
Info<< "Time = " << runTime.timeName() << endl;
// Search for list of objects for this time
IOobjectList objects(mesh, runTime.timeName());
// Construct the vol fields
// ~~~~~~~~~~~~~~~~~~~~~~~~
PtrList<volScalarField> volScalarFields;
readFields(mesh, objects, volScalarFields, false);
PtrList<volVectorField> volVectorFields;
readFields(mesh, objects, volVectorFields, false);
PtrList<volSphericalTensorField> volSphericalTensorFields;
readFields(mesh, objects, volSphericalTensorFields, false);
PtrList<volSymmTensorField> volSymmTensorFields;
readFields(mesh, objects, volSymmTensorFields, false);
PtrList<volTensorField> volTensorFields;
readFields(mesh, objects, volTensorFields, false);
// Construct the dimensioned fields
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PtrList<DimensionedField<scalar, volMesh>> dimScalarFields;
readFields(mesh, objects, dimScalarFields);
PtrList<DimensionedField<vector, volMesh>> dimVectorFields;
readFields(mesh, objects, dimVectorFields);
PtrList<DimensionedField<sphericalTensor, volMesh>>
dimSphericalTensorFields;
readFields(mesh, objects, dimSphericalTensorFields);
PtrList<DimensionedField<symmTensor, volMesh>>
dimSymmTensorFields;
readFields(mesh, objects, dimSymmTensorFields);
PtrList<DimensionedField<tensor, volMesh>> dimTensorFields;
readFields(mesh, objects, dimTensorFields);
// Construct the surface fields
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PtrList<surfaceScalarField> surfaceScalarFields;
readFields(mesh, objects, surfaceScalarFields, false);
PtrList<surfaceVectorField> surfaceVectorFields;
readFields(mesh, objects, surfaceVectorFields, false);
PtrList<surfaceSphericalTensorField>
surfaceSphericalTensorFields;
readFields(mesh, objects, surfaceSphericalTensorFields, false);
PtrList<surfaceSymmTensorField> surfaceSymmTensorFields;
readFields(mesh, objects, surfaceSymmTensorFields, false);
PtrList<surfaceTensorField> surfaceTensorFields;
readFields(mesh, objects, surfaceTensorFields, false);
// Construct the point fields
// ~~~~~~~~~~~~~~~~~~~~~~~~~~
const pointMesh& pMesh = pointMesh::New(mesh);
PtrList<pointScalarField> pointScalarFields;
readFields(pMesh, objects, pointScalarFields, false);
PtrList<pointVectorField> pointVectorFields;
readFields(pMesh, objects, pointVectorFields, false);
PtrList<pointSphericalTensorField> pointSphericalTensorFields;
readFields(pMesh, objects, pointSphericalTensorFields, false);
PtrList<pointSymmTensorField> pointSymmTensorFields;
readFields(pMesh, objects, pointSymmTensorFields, false);
PtrList<pointTensorField> pointTensorFields;
readFields(pMesh, objects, pointTensorFields, false);
// Construct the Lagrangian fields
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
fileNameList cloudDirs
(
fileHandler().readDir
(
runTime.timePath()/cloud::prefix,
fileName::DIRECTORY
)
);
// Particles
PtrList<Cloud<indexedParticle>> lagrangianPositions
(
cloudDirs.size()
);
// Particles per cell
PtrList<List<SLList<indexedParticle*>*>> cellParticles
(
cloudDirs.size()
);
PtrList<PtrList<labelIOField>> lagrangianLabelFields
(
cloudDirs.size()
);
PtrList<PtrList<labelFieldCompactIOField>>
lagrangianLabelFieldFields
(
cloudDirs.size()
);
PtrList<PtrList<scalarIOField>> lagrangianScalarFields
(
cloudDirs.size()
);
PtrList<PtrList<scalarFieldCompactIOField>>
lagrangianScalarFieldFields
(
cloudDirs.size()
);
PtrList<PtrList<vectorIOField>> lagrangianVectorFields
(
cloudDirs.size()
);
PtrList<PtrList<vectorFieldCompactIOField>>
lagrangianVectorFieldFields
(
cloudDirs.size()
);
PtrList<PtrList<sphericalTensorIOField>>
lagrangianSphericalTensorFields
(
cloudDirs.size()
);
PtrList<PtrList<sphericalTensorFieldCompactIOField>>
lagrangianSphericalTensorFieldFields(cloudDirs.size());
PtrList<PtrList<symmTensorIOField>> lagrangianSymmTensorFields
(
cloudDirs.size()
);
PtrList<PtrList<symmTensorFieldCompactIOField>>
lagrangianSymmTensorFieldFields
(
cloudDirs.size()
);
PtrList<PtrList<tensorIOField>> lagrangianTensorFields
(
cloudDirs.size()
);
PtrList<PtrList<tensorFieldCompactIOField>>
lagrangianTensorFieldFields
(
cloudDirs.size()
);
label cloudI = 0;
for (const fileName& cloudDir : cloudDirs)
{
IOobjectList sprayObjs
(
mesh,
runTime.timeName(),
cloud::prefix/cloudDir,
IOobject::MUST_READ,
IOobject::NO_WRITE,
false
);
// Note: look up "positions" (older format) for backwards compatibility
IOobject* positionsPtr =
sprayObjs.lookup(word("positions"));
IOobject* coordsPtr = sprayObjs.lookup(word("coordinates"));
if (positionsPtr || coordsPtr)
{
// Read lagrangian particles
// ~~~~~~~~~~~~~~~~~~~~~~~~~
Info<< "Identified lagrangian data set: "
<< cloudDir << endl;
lagrangianPositions.set
(
cloudI,
new Cloud<indexedParticle>
(
mesh,
cloudDir,
false
)
);
// Sort particles per cell
// ~~~~~~~~~~~~~~~~~~~~~~~
cellParticles.set
(
cloudI,
new List<SLList<indexedParticle*>*>
(
mesh.nCells(),
static_cast<SLList<indexedParticle*>*>(nullptr)
)
);
label i = 0;
forAllIter
(
Cloud<indexedParticle>,
lagrangianPositions[cloudI],
iter
)
{
iter().index() = i++;
label celli = iter().cell();
// Check
if (celli < 0 || celli >= mesh.nCells())
{
FatalErrorInFunction
<< "Illegal cell number " << celli
<< " for particle with index "
<< iter().index()
<< " at position "
<< iter().position() << nl
<< "Cell number should be between 0 and "
<< mesh.nCells()-1 << nl
<< "On this mesh the particle should"
<< " be in cell "
<< mesh.findCell(iter().position())
<< exit(FatalError);
}
if (!cellParticles[cloudI][celli])
{
cellParticles[cloudI][celli] =
new SLList<indexedParticle*>();
}
cellParticles[cloudI][celli]->append(&iter());
}
// Read fields
// ~~~~~~~~~~~
IOobjectList lagrangianObjects
(
mesh,
runTime.timeName(),
cloud::prefix/cloudDirs[cloudI],
IOobject::MUST_READ,
IOobject::NO_WRITE,
false
);
lagrangianFieldDecomposer::readFields
(
cloudI,
lagrangianObjects,
lagrangianLabelFields
);
lagrangianFieldDecomposer::readFieldFields
(
cloudI,
lagrangianObjects,
lagrangianLabelFieldFields
);
lagrangianFieldDecomposer::readFields
(
cloudI,
lagrangianObjects,
lagrangianScalarFields
);
lagrangianFieldDecomposer::readFieldFields
(
cloudI,
lagrangianObjects,
lagrangianScalarFieldFields
);
lagrangianFieldDecomposer::readFields
(
cloudI,
lagrangianObjects,
lagrangianVectorFields
);
lagrangianFieldDecomposer::readFieldFields
(
cloudI,
lagrangianObjects,
lagrangianVectorFieldFields
);
lagrangianFieldDecomposer::readFields
(
cloudI,
lagrangianObjects,
lagrangianSphericalTensorFields
);
lagrangianFieldDecomposer::readFieldFields
(
cloudI,
lagrangianObjects,
lagrangianSphericalTensorFieldFields
);
lagrangianFieldDecomposer::readFields
(
cloudI,
lagrangianObjects,
lagrangianSymmTensorFields
);
lagrangianFieldDecomposer::readFieldFields
(
cloudI,
lagrangianObjects,
lagrangianSymmTensorFieldFields
);
lagrangianFieldDecomposer::readFields
(
cloudI,
lagrangianObjects,
lagrangianTensorFields
);
lagrangianFieldDecomposer::readFieldFields
(
cloudI,
lagrangianObjects,
lagrangianTensorFieldFields
);
cloudI++;
}
}
lagrangianPositions.setSize(cloudI);
cellParticles.setSize(cloudI);
lagrangianLabelFields.setSize(cloudI);
lagrangianLabelFieldFields.setSize(cloudI);
lagrangianScalarFields.setSize(cloudI);
lagrangianScalarFieldFields.setSize(cloudI);
lagrangianVectorFields.setSize(cloudI);
lagrangianVectorFieldFields.setSize(cloudI);
lagrangianSphericalTensorFields.setSize(cloudI);
lagrangianSphericalTensorFieldFields.setSize(cloudI);
lagrangianSymmTensorFields.setSize(cloudI);
lagrangianSymmTensorFieldFields.setSize(cloudI);
lagrangianTensorFields.setSize(cloudI);
lagrangianTensorFieldFields.setSize(cloudI);
Info<< endl;
// split the fields over processors
for (label proci = 0; proci < mesh.nProcs(); ++proci)
{
Info<< "Processor " << proci << ": field transfer" << endl;
// open the database
if (!processorDbList.set(proci))
{
processorDbList.set
(
proci,
new Time
(
Time::controlDictName,
args.rootPath(),
args.caseName()
/fileName(word("processor") + name(proci))
)
);
}
Time& processorDb = processorDbList[proci];
processorDb.setTime(runTime);
// read the mesh
if (!procMeshList.set(proci))
{
procMeshList.set
(
proci,
new fvMesh
(
IOobject
(
regionName,
processorDb.timeName(),
processorDb
)
)
);
}
const fvMesh& procMesh = procMeshList[proci];
const labelIOList& faceProcAddressing = procAddressing
(
procMeshList,
proci,
"faceProcAddressing",
faceProcAddressingList
);
const labelIOList& cellProcAddressing = procAddressing
(
procMeshList,
proci,
"cellProcAddressing",
cellProcAddressingList
);
const labelIOList& boundaryProcAddressing = procAddressing
(
procMeshList,
proci,
"boundaryProcAddressing",
boundaryProcAddressingList
);
// FV fields
{
if (!fieldDecomposerList.set(proci))
{
fieldDecomposerList.set
(
proci,
new fvFieldDecomposer
(
mesh,
procMesh,
faceProcAddressing,
cellProcAddressing,
boundaryProcAddressing
)
);
}
const fvFieldDecomposer& fieldDecomposer =
fieldDecomposerList[proci];
fieldDecomposer.decomposeFields(volScalarFields);
fieldDecomposer.decomposeFields(volVectorFields);
fieldDecomposer.decomposeFields
(
volSphericalTensorFields
);
fieldDecomposer.decomposeFields(volSymmTensorFields);
fieldDecomposer.decomposeFields(volTensorFields);
fieldDecomposer.decomposeFields(surfaceScalarFields);
fieldDecomposer.decomposeFields(surfaceVectorFields);
fieldDecomposer.decomposeFields
(
surfaceSphericalTensorFields
);
fieldDecomposer.decomposeFields
(
surfaceSymmTensorFields
);
fieldDecomposer.decomposeFields(surfaceTensorFields);
if (times.size() == 1)
{
// Clear cached decomposer
fieldDecomposerList.set(proci, nullptr);
}
}
// Dimensioned fields
{
if (!dimFieldDecomposerList.set(proci))
{
dimFieldDecomposerList.set
(
proci,
new dimFieldDecomposer
(
mesh,
procMesh,
faceProcAddressing,
cellProcAddressing
)
);
}
const dimFieldDecomposer& dimDecomposer =
dimFieldDecomposerList[proci];
dimDecomposer.decomposeFields(dimScalarFields);
dimDecomposer.decomposeFields(dimVectorFields);
dimDecomposer.decomposeFields(dimSphericalTensorFields);
dimDecomposer.decomposeFields(dimSymmTensorFields);
dimDecomposer.decomposeFields(dimTensorFields);
if (times.size() == 1)
{
dimFieldDecomposerList.set(proci, nullptr);
}
}
// Point fields
if
(
pointScalarFields.size()
|| pointVectorFields.size()
|| pointSphericalTensorFields.size()
|| pointSymmTensorFields.size()
|| pointTensorFields.size()
)
{
const labelIOList& pointProcAddressing = procAddressing
(
procMeshList,
proci,
"pointProcAddressing",
pointProcAddressingList
);
const pointMesh& procPMesh = pointMesh::New(procMesh);
if (!pointFieldDecomposerList.set(proci))
{
pointFieldDecomposerList.set
(
proci,
new pointFieldDecomposer
(
pMesh,
procPMesh,
pointProcAddressing,
boundaryProcAddressing
)
);
}
const pointFieldDecomposer& pointDecomposer =
pointFieldDecomposerList[proci];
pointDecomposer.decomposeFields(pointScalarFields);
pointDecomposer.decomposeFields(pointVectorFields);
pointDecomposer.decomposeFields
(
pointSphericalTensorFields
);
pointDecomposer.decomposeFields(pointSymmTensorFields);
pointDecomposer.decomposeFields(pointTensorFields);
if (times.size() == 1)
{
pointProcAddressingList.set(proci, nullptr);
pointFieldDecomposerList.set(proci, nullptr);
}
}
// If there is lagrangian data write it out
forAll(lagrangianPositions, cloudI)
{
if (lagrangianPositions[cloudI].size())
{
lagrangianFieldDecomposer fieldDecomposer
(
mesh,
procMesh,
faceProcAddressing,
cellProcAddressing,
cloudDirs[cloudI],
lagrangianPositions[cloudI],
cellParticles[cloudI]
);
// Lagrangian fields
{
fieldDecomposer.decomposeFields
(
cloudDirs[cloudI],
lagrangianLabelFields[cloudI]
);
fieldDecomposer.decomposeFieldFields
(
cloudDirs[cloudI],
lagrangianLabelFieldFields[cloudI]
);
fieldDecomposer.decomposeFields
(
cloudDirs[cloudI],
lagrangianScalarFields[cloudI]
);
fieldDecomposer.decomposeFieldFields
(
cloudDirs[cloudI],
lagrangianScalarFieldFields[cloudI]
);
fieldDecomposer.decomposeFields
(
cloudDirs[cloudI],
lagrangianVectorFields[cloudI]
);
fieldDecomposer.decomposeFieldFields
(
cloudDirs[cloudI],
lagrangianVectorFieldFields[cloudI]
);
fieldDecomposer.decomposeFields
(
cloudDirs[cloudI],
lagrangianSphericalTensorFields[cloudI]
);
fieldDecomposer.decomposeFieldFields
(
cloudDirs[cloudI],
lagrangianSphericalTensorFieldFields[cloudI]
);
fieldDecomposer.decomposeFields
(
cloudDirs[cloudI],
lagrangianSymmTensorFields[cloudI]
);
fieldDecomposer.decomposeFieldFields
(
cloudDirs[cloudI],
lagrangianSymmTensorFieldFields[cloudI]
);
fieldDecomposer.decomposeFields
(
cloudDirs[cloudI],
lagrangianTensorFields[cloudI]
);
fieldDecomposer.decomposeFieldFields
(
cloudDirs[cloudI],
lagrangianTensorFieldFields[cloudI]
);
}
}
}
// Decompose the "uniform" directory in the time region
// directory
decomposeUniform(copyUniform, mesh, processorDb, regionDir);
// For a multi-region case, also decompose the "uniform"
// directory in the time directory
if (regionNames.size() > 1 && regioni == 0)
{
decomposeUniform(copyUniform, mesh, processorDb);
}
// We have cached all the constant mesh data for the current
// processor. This is only important if running with
// multiple times, otherwise it is just extra storage.
if (times.size() == 1)
{
boundaryProcAddressingList.set(proci, nullptr);
cellProcAddressingList.set(proci, nullptr);
faceProcAddressingList.set(proci, nullptr);
procMeshList.set(proci, nullptr);
processorDbList.set(proci, nullptr);
}
}
// Finite area mesh and field decomposition
IOobject faMeshBoundaryIOobj
(
"faBoundary",
mesh.time().findInstance
(
mesh.dbDir()/polyMesh::meshSubDir,
"boundary"
),
faMesh::meshSubDir,
mesh,
IOobject::READ_IF_PRESENT,
IOobject::NO_WRITE
);
if (faMeshBoundaryIOobj.typeHeaderOk<faBoundaryMesh>(true))
{
Info << "\nFinite area mesh decomposition" << endl;
faMeshDecomposition aMesh(mesh);
aMesh.decomposeMesh();
aMesh.writeDecomposition();
// Construct the area fields
// ~~~~~~~~~~~~~~~~~~~~~~~~
PtrList<areaScalarField> areaScalarFields;
readFields(aMesh, objects, areaScalarFields);
PtrList<areaVectorField> areaVectorFields;
readFields(aMesh, objects, areaVectorFields);
PtrList<areaSphericalTensorField> areaSphericalTensorFields;
readFields(aMesh, objects, areaSphericalTensorFields);
PtrList<areaSymmTensorField> areaSymmTensorFields;
readFields(aMesh, objects, areaSymmTensorFields);
PtrList<areaTensorField> areaTensorFields;
readFields(aMesh, objects, areaTensorFields);
// Construct the edge fields
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PtrList<edgeScalarField> edgeScalarFields;
readFields(aMesh, objects, edgeScalarFields);
Info << endl;
// Split the fields over processors
for (label procI = 0; procI < mesh.nProcs(); procI++)
{
Info<< "Processor " << procI
<< ": finite area field transfer" << endl;
// open the database
Time processorDb
(
Time::controlDictName,
args.rootPath(),
args.caseName()/
fileName(word("processor") + name(procI))
);
processorDb.setTime(runTime);
// Read the mesh
fvMesh procFvMesh
(
IOobject
(
regionName,
processorDb.timeName(),
processorDb
)
);
faMesh procMesh(procFvMesh);
// // Does not work. HJ, 15/Aug/2017
// const labelIOList& faceProcAddressing =
// procAddressing
// (
// procMeshList,
// procI,
// "faceProcAddressing",
// faceProcAddressingList
// );
// const labelIOList& boundaryProcAddressing =
// procAddressing
// (
// procMeshList,
// procI,
// "boundaryProcAddressing",
// boundaryProcAddressingList
// );
labelIOList faceProcAddressing
(
IOobject
(
"faceProcAddressing",
"constant",
procMesh.meshSubDir,
procFvMesh,
IOobject::MUST_READ,
IOobject::NO_WRITE
)
);
labelIOList boundaryProcAddressing
(
IOobject
(
"boundaryProcAddressing",
"constant",
procMesh.meshSubDir,
procFvMesh,
IOobject::MUST_READ,
IOobject::NO_WRITE
)
);
// FA fields
if
(
areaScalarFields.size()
|| areaVectorFields.size()
|| areaSphericalTensorFields.size()
|| areaSymmTensorFields.size()
|| areaTensorFields.size()
|| edgeScalarFields.size()
)
{
labelIOList edgeProcAddressing
(
IOobject
(
"edgeProcAddressing",
"constant",
procMesh.meshSubDir,
procFvMesh,
IOobject::MUST_READ,
IOobject::NO_WRITE
)
);
faFieldDecomposer fieldDecomposer
(
aMesh,
procMesh,
edgeProcAddressing,
faceProcAddressing,
boundaryProcAddressing
);
fieldDecomposer.decomposeFields(areaScalarFields);
fieldDecomposer.decomposeFields(areaVectorFields);
fieldDecomposer.decomposeFields(areaSphericalTensorFields);
fieldDecomposer.decomposeFields(areaSymmTensorFields);
fieldDecomposer.decomposeFields(areaTensorFields);
fieldDecomposer.decomposeFields(edgeScalarFields);
}
}
}
}
}
}
Info<< "\nEnd\n" << endl;
return 0;
}
// ************************************************************************* //