Some stuff

Author: henry
Date:   2009-01-15 16:45:44 +00:00

515 changed files with 11670 additions and 7079 deletions


@ -121,7 +121,8 @@ scalar StCoNum = 0.0;
// Test : disable refinement for some cells // Test : disable refinement for some cells
PackedList<1>& protectedCell = PackedList<1>& protectedCell =
refCast<dynamicRefineFvMesh>(mesh).protectedCell(); refCast<dynamicRefineFvMesh>(mesh).protectedCell();
if (protectedCell.size() == 0)
if (protectedCell.empty())
{ {
protectedCell.setSize(mesh.nCells()); protectedCell.setSize(mesh.nCells());
protectedCell = 0; protectedCell = 0;
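Many hunks in this commit replace explicit size() == 0 / size() > 0 tests with empty() or a plain truth test on size(). A minimal stand-alone C++ sketch of the idiom, using std::vector as a stand-in for the Foam list and PackedList types (assumed here to expose the same empty()/size() interface):

// Illustrative sketch only; std::vector stands in for PackedList<1>.
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> protectedCell;          // stand-in for PackedList<1>

    // Preferred: ask the container directly whether it has elements.
    if (protectedCell.empty())
    {
        protectedCell.resize(10, 0);         // analogous to setSize(...) then "= 0"
    }

    // For "has at least one element", testing size() as a boolean reads the
    // same way as the post-commit OpenFOAM code: if (list.size()) ...
    if (protectedCell.size())
    {
        std::cout << "cells protected: " << protectedCell.size() << '\n';
    }

    return 0;
}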


@ -26,7 +26,8 @@ Class
Foam::PDRDragModel Foam::PDRDragModel
Description Description
Base-class for sub-grid obstacle drag models. Base-class for sub-grid obstacle drag models. The available drag model is at
\link basic.H \endlink.
SourceFiles SourceFiles
PDRDragModel.C PDRDragModel.C


@ -70,7 +70,7 @@ Description
\f$ T \f$ is a tensor in the file CT. \f$ T \f$ is a tensor in the file CT.
The term \f$ G_{R} \f$ is treated explicitly in the \f$ \kappa-\epsilon The term \f$ G_{R} \f$ is treated explicitly in the \f$ \kappa-\epsilon
\f$ Eqs in the PDRkEpsilon.C file. \f$ Eqs in the \link PDRkEpsilon.C \endlink file.
SourceFiles SourceFiles


@ -27,7 +27,7 @@ Class
Description Description
Standard k-epsilon turbulence model with additional source terms Standard k-epsilon turbulence model with additional source terms
corresponding to PDR basic drag model (basic.H) corresponding to PDR basic drag model (\link basic.H \endlink)
The turbulence source term \f$ G_{R} \f$ appears in the The turbulence source term \f$ G_{R} \f$ appears in the
\f$ \kappa-\epsilon \f$ equation for the generation of turbulence due to \f$ \kappa-\epsilon \f$ equation for the generation of turbulence due to


@ -26,6 +26,10 @@ Class
Foam::XiEqModels::SCOPEBlend Foam::XiEqModels::SCOPEBlend
Description Description
Simple SCOPEBlendXiEq model for XiEq based on SCOPEXiEqs correlation
with a linear correction function to give a plausible profile for XiEq.
See @link SCOPELaminarFlameSpeed.H @endlink for details on the SCOPE
laminar flame speed model.
SourceFiles SourceFiles
SCOPEBlend.C SCOPEBlend.C


@ -28,6 +28,8 @@ Class
Description Description
Simple SCOPEXiEq model for XiEq based on SCOPEXiEqs correlation Simple SCOPEXiEq model for XiEq based on SCOPEXiEqs correlation
with a linear correction function to give a plausible profile for XiEq. with a linear correction function to give a plausible profile for XiEq.
See \link SCOPELaminarFlameSpeed.H \endlink for details on the SCOPE laminar
flame speed model.
SourceFiles SourceFiles
SCOPEXiEq.C SCOPEXiEq.C


@ -27,6 +27,12 @@ Class
Description Description
Base-class for all XiEq models used by the b-XiEq combustion model. Base-class for all XiEq models used by the b-XiEq combustion model.
The available models are :
\link basicXiSubXiEq.H \endlink
\link Gulder.H \endlink
\link instabilityXiEq.H \endlink
\link SCOPEBlendXiEq.H \endlink
\link SCOPEXiEq.H \endlink
SourceFiles SourceFiles
XiEqModel.C XiEqModel.C


@ -26,6 +26,9 @@ Class
Foam::XiEqModels::instability Foam::XiEqModels::instability
Description Description
This is the equilibrium level of the flame wrinkling generated by
instability. It is a constant (default 2.5). It is used in
@link XiModel.H @endlink.
SourceFiles SourceFiles
instability.C instability.C


@ -26,7 +26,8 @@ Class
Foam::XiGModels::KTS Foam::XiGModels::KTS
Description Description
Simple Kolmogorov time-scale model for the flame-wrinling generation rate. Simple Kolmogorov time-scale (KTS) model for the flame-wrinling generation
rate.
SourceFiles SourceFiles
KTS.C KTS.C


@ -27,6 +27,9 @@ Class
Description Description
Base-class for all Xi generation models used by the b-Xi combustion model. Base-class for all Xi generation models used by the b-Xi combustion model.
See Technical Report SH/RE/01R for details on the PDR modelling. For details
on the use of XiGModel see \link XiModel.H \endlink. The model available is
\link instabilityG.H \endlink
SourceFiles SourceFiles
XiGModel.C XiGModel.C


@ -26,7 +26,10 @@ Class
Foam::XiGModels::instabilityG Foam::XiGModels::instabilityG
Description Description
Flame-surface instabilityG flame-wrinking generation rate coefficient model. Flame-surface instabilityG flame-wrinking generation rate coefficient model
used in \link XiModel.H \endlink.
See Technical Report SH/RE/01R for details on the PDR modelling.
SourceFiles SourceFiles
instabilityG.C instabilityG.C


@ -29,8 +29,10 @@ Description
Base-class for all Xi models used by the b-Xi combustion model. Base-class for all Xi models used by the b-Xi combustion model.
See Technical Report SH/RE/01R for details on the PDR modelling. See Technical Report SH/RE/01R for details on the PDR modelling.
Xi is given through an algebraic expression (algebraic.H), Xi is given through an algebraic expression (\link algebraic.H \endlink),
by solving a transport equation (transport.H) or a fixed value (fixed.H). by solving a transport equation (\link transport.H \endlink) or a
fixed value (\link fixed.H \endlink).
See report TR/HGW/10 for details on the Weller two equations model. See report TR/HGW/10 for details on the Weller two equations model.
In the algebraic and transport methods \f$\Xi_{eq}\f$ is calculated in In the algebraic and transport methods \f$\Xi_{eq}\f$ is calculated in
@ -45,6 +47,8 @@ Description
\f$ \dwea{b} \f$ is the regress variable. \f$ \dwea{b} \f$ is the regress variable.
\f$ \Xi_{coeff} \f$ is a model constant.
\f$ \Xi^* \f$ is the total equilibrium wrinkling combining the effects \f$ \Xi^* \f$ is the total equilibrium wrinkling combining the effects
of the flame inestability and turbulence interaction and is given by of the flame inestability and turbulence interaction and is given by


@ -28,6 +28,8 @@ Class
Description Description
Simple algebraic model for Xi based on Gulders correlation Simple algebraic model for Xi based on Gulders correlation
with a linear correction function to give a plausible profile for Xi. with a linear correction function to give a plausible profile for Xi.
See report TR/HGW/10 for details on the Weller two equations model.
See \link XiModel.H \endlink for more details on flame wrinkling modelling.
SourceFiles SourceFiles
algebraic.C algebraic.C


@ -26,7 +26,8 @@ Class
Foam::XiModels::fixed Foam::XiModels::fixed
Description Description
Fixed value model for Xi. Fixed value model for Xi. See \link XiModel.H \endlink for more details
on flame wrinkling modelling.
SourceFiles SourceFiles
fixed.C fixed.C


@ -28,6 +28,8 @@ Class
Description Description
Simple transport model for Xi based on Gulders correlation Simple transport model for Xi based on Gulders correlation
with a linear correction function to give a plausible profile for Xi. with a linear correction function to give a plausible profile for Xi.
See report TR/HGW/10 for details on the Weller two equations model.
See \link XiModel.H \endlink for more details on flame wrinkling modelling.
SourceFiles SourceFiles
transport.C transport.C


@ -129,9 +129,20 @@ void Foam::solidWallMixedTemperatureCoupledFvPatchScalarField::updateCoeffs()
{ {
Info<< "solidWallMixedTemperatureCoupledFvPatchScalarField::" Info<< "solidWallMixedTemperatureCoupledFvPatchScalarField::"
<< "updateCoeffs() :" << "updateCoeffs() :"
<< " patch:" << patch().name()
<< " walltemperature " << " walltemperature "
<< " min:" << gMin(*this) << " min:"
<< " max:" << gMax(*this) << returnReduce
(
(this->size() > 0 ? min(*this) : VGREAT),
minOp<scalar>()
)
<< " max:"
<< returnReduce
(
(this->size() > 0 ? max(*this) : -VGREAT),
maxOp<scalar>()
)
<< " avg:" << gAverage(*this) << " avg:" << gAverage(*this)
<< endl; << endl;
} }
@ -163,7 +174,9 @@ void Foam::solidWallMixedTemperatureCoupledFvPatchScalarField::updateCoeffs()
label nTotSize = returnReduce(this->size(), sumOp<label>()); label nTotSize = returnReduce(this->size(), sumOp<label>());
Info<< "solidWallMixedTemperatureCoupledFvPatchScalarField::" Info<< "solidWallMixedTemperatureCoupledFvPatchScalarField::"
<< "updateCoeffs() : Out of " << nTotSize << "updateCoeffs() :"
<< " patch:" << patch().name()
<< " out of:" << nTotSize
<< " fixedBC:" << nFixed << " fixedBC:" << nFixed
<< " gradient:" << nTotSize-nFixed << endl; << " gradient:" << nTotSize-nFixed << endl;
} }
@ -213,9 +226,22 @@ void Foam::solidWallMixedTemperatureCoupledFvPatchScalarField::evaluate
if (debug) if (debug)
{ {
Info<< "Setting master and slave to wall temperature " Info<< "solidWallMixedTemperatureCoupledFvPatchScalarField::"
<< " min:" << gMin(*this) << "updateCoeffs() :"
<< " max:" << gMax(*this) << " patch:" << patch().name()
<< " setting master and slave to wall temperature "
<< " min:"
<< returnReduce
(
(this->size() > 0 ? min(*this) : VGREAT),
minOp<scalar>()
)
<< " max:"
<< returnReduce
(
(this->size() > 0 ? max(*this) : -VGREAT),
maxOp<scalar>()
)
<< " avg:" << gAverage(*this) << " avg:" << gAverage(*this)
<< endl; << endl;
} }
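The logging changes above guard min(*this) and max(*this) against zero-sized local patch sections before reducing across processors, feeding VGREAT / -VGREAT as identity values. A serial C++ sketch of why that guard is needed, with a vector of per-processor value lists standing in for the distributed field and std::numeric_limits standing in for VGREAT (no Pstream calls; the reduction is simulated with a plain loop):

// Illustrative sketch only; the parallel reduction is simulated serially.
#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

int main()
{
    // Per-"processor" portions of a wall-temperature field; one is empty.
    std::vector<std::vector<double>> parts{{300.2, 301.5}, {}, {299.8}};

    const double great = std::numeric_limits<double>::max();   // ~ VGREAT

    double globalMin = great;     // identity element for min
    double globalMax = -great;    // identity element for max

    for (const auto& p : parts)
    {
        // An empty portion contributes the identity element, exactly like
        // (this->size() > 0 ? min(*this) : VGREAT) in the patch field code.
        const double localMin =
            p.empty() ? great : *std::min_element(p.begin(), p.end());
        const double localMax =
            p.empty() ? -great : *std::max_element(p.begin(), p.end());

        globalMin = std::min(globalMin, localMin);   // ~ returnReduce(.., minOp)
        globalMax = std::max(globalMax, localMax);   // ~ returnReduce(.., maxOp)
    }

    std::cout << "min:" << globalMin << " max:" << globalMax << '\n';
    return 0;
}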


@ -1,3 +0,0 @@
gnemdFoam.C
EXE = $(FOAM_APPBIN)/gnemdFoam


@ -1,6 +1,7 @@
EXE_INC = \ EXE_INC = \
-I$(LIB_SRC)/lagrangian/molecularDynamics/molecule/lnInclude \ -I$(LIB_SRC)/lagrangian/molecularDynamics/molecule/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/potential/lnInclude \ -I$(LIB_SRC)/lagrangian/molecularDynamics/potential/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/molecularMeasurements/lnInclude \
-I$(LIB_SRC)/finiteVolume/lnInclude \ -I$(LIB_SRC)/finiteVolume/lnInclude \
-I$(LIB_SRC)/lagrangian/basic/lnInclude \ -I$(LIB_SRC)/lagrangian/basic/lnInclude \
-I$(LIB_SRC)/meshTools/lnInclude -I$(LIB_SRC)/meshTools/lnInclude
@ -10,4 +11,6 @@ EXE_LIBS = \
-lfiniteVolume \ -lfiniteVolume \
-llagrangian \ -llagrangian \
-lmolecule \ -lmolecule \
-lpotential -lpotential \
-lmolecularMeasurements


@ -23,7 +23,7 @@ License
Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Application Application
mdEquilibrationFOAM mdEquilibrationFoam
Description Description
Equilibrates and/or preconditions MD systems Equilibrates and/or preconditions MD systems
@ -40,9 +40,9 @@ int main(int argc, char *argv[])
# include "createTime.H" # include "createTime.H"
# include "createMesh.H" # include "createMesh.H"
moleculeCloud molecules(mesh); potential pot(mesh);
molecules.removeHighEnergyOverlaps(); moleculeCloud molecules(mesh, pot);
# include "temperatureAndPressureVariables.H" # include "temperatureAndPressureVariables.H"
@ -60,7 +60,7 @@ int main(int argc, char *argv[])
Info << "Time = " << runTime.timeName() << endl; Info << "Time = " << runTime.timeName() << endl;
molecules.integrateEquationsOfMotion(); molecules.evolve();
# include "meanMomentumEnergyAndNMols.H" # include "meanMomentumEnergyAndNMols.H"


@ -1,4 +1,4 @@
Info<< "Reading MD Equilibration Dictionary" << nl << endl; Info<< nl << "Reading MD Equilibration Dictionary" << nl << endl;
IOdictionary mdEquilibrationDict IOdictionary mdEquilibrationDict
( (


@ -0,0 +1,3 @@
mdFoam.C
EXE = $(FOAM_APPBIN)/mdFoam


@ -1,6 +1,7 @@
EXE_INC = \ EXE_INC = \
-I$(LIB_SRC)/lagrangian/molecularDynamics/molecule/lnInclude \ -I$(LIB_SRC)/lagrangian/molecularDynamics/molecule/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/potential/lnInclude \ -I$(LIB_SRC)/lagrangian/molecularDynamics/potential/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/molecularMeasurements/lnInclude \
-I$(LIB_SRC)/finiteVolume/lnInclude \ -I$(LIB_SRC)/finiteVolume/lnInclude \
-I$(LIB_SRC)/lagrangian/basic/lnInclude \ -I$(LIB_SRC)/lagrangian/basic/lnInclude \
-I$(LIB_SRC)/meshTools/lnInclude -I$(LIB_SRC)/meshTools/lnInclude
@ -10,4 +11,6 @@ EXE_LIBS = \
-lfiniteVolume \ -lfiniteVolume \
-llagrangian \ -llagrangian \
-lmolecule \ -lmolecule \
-lpotential -lpotential \
-lmolecularMeasurements


@ -23,10 +23,10 @@ License
Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
Application Application
gnemdFOAM mdFoam
Description Description
MD for Fluid Mechanics and hybridising with a continuum solver. molecular dynamics solver for fluid dynamics
\*---------------------------------------------------------------------------*/ \*---------------------------------------------------------------------------*/
@ -40,11 +40,9 @@ int main(int argc, char *argv[])
# include "createTime.H" # include "createTime.H"
# include "createMesh.H" # include "createMesh.H"
moleculeCloud molecules(mesh); potential pot(mesh);
# include "createMDFields.H" moleculeCloud molecules(mesh, pot);
molecules.removeHighEnergyOverlaps();
# include "temperatureAndPressureVariables.H" # include "temperatureAndPressureVariables.H"
@ -60,20 +58,14 @@ int main(int argc, char *argv[])
Info << "Time = " << runTime.timeName() << endl; Info << "Time = " << runTime.timeName() << endl;
molecules.integrateEquationsOfMotion(); molecules.evolve();
# include "meanMomentumEnergyAndNMols.H" # include "meanMomentumEnergyAndNMols.H"
# include "temperatureAndPressure.H" # include "temperatureAndPressure.H"
# include "calculateMDFields.H"
# include "averageMDFields.H"
runTime.write(); runTime.write();
# include "resetMDFields.H"
if (runTime.outputTime()) if (runTime.outputTime())
{ {
nAveragingSteps = 0; nAveragingSteps = 0;
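Both MD solvers now construct a potential first, pass it to the moleculeCloud, and advance the system with a single evolve() call per time step. A compact stand-alone C++ sketch of that shape; the Potential and MoleculeCloud classes below are illustrative stand-ins, not the real OpenFOAM interfaces:

// Illustrative stand-ins only; not the Foam potential/moleculeCloud API.
#include <cstddef>
#include <iostream>
#include <vector>

struct Potential
{
    double cutOff = 1.0;   // hypothetical parameter read from the case
};

class MoleculeCloud
{
    const Potential& pot_;
    std::vector<double> x_, v_;

public:
    MoleculeCloud(const Potential& pot, std::size_t n)
    :
        pot_(pot), x_(n, 0.0), v_(n, 1.0)
    {}

    void evolve(double dt)
    {
        // A real cloud would evaluate forces from pot_ here; this placeholder
        // only drifts the positions inside a toy periodic box.
        for (std::size_t i = 0; i < x_.size(); ++i)
        {
            x_[i] += dt*v_[i];
            if (x_[i] > pot_.cutOff) x_[i] -= pot_.cutOff;
        }
    }

    double x0() const { return x_.empty() ? 0.0 : x_.front(); }
};

int main()
{
    Potential pot;                       // ~ potential pot(mesh);
    MoleculeCloud molecules(pot, 100);   // ~ moleculeCloud molecules(mesh, pot);

    for (int step = 0; step < 10; ++step)
    {
        molecules.evolve(0.01);          // ~ molecules.evolve();
    }

    std::cout << "x0 = " << molecules.x0() << '\n';
    return 0;
}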


@ -1,5 +1,3 @@
INTERFOAM = $(FOAM_SOLVERS)/multiphase/interFoam
EXE_INC = \ EXE_INC = \
-I$(LIB_SRC)/transportModels \ -I$(LIB_SRC)/transportModels \
-I$(LIB_SRC)/transportModels/incompressible/lnInclude \ -I$(LIB_SRC)/transportModels/incompressible/lnInclude \


@ -1,5 +1,3 @@
INTERFOAM = $(FOAM_SOLVERS)/multiphase/interFoam
EXE_INC = \ EXE_INC = \
-I$(LIB_SRC)/transportModels \ -I$(LIB_SRC)/transportModels \
-I$(LIB_SRC)/transportModels/incompressible/lnInclude \ -I$(LIB_SRC)/transportModels/incompressible/lnInclude \


@ -39,7 +39,7 @@ int main(int argc, char *argv[])
wordHashSet setA(0); wordHashSet setA(0);
HashTable<label, word> tableA; HashTable<label, word> tableA;
HashTable<empty> tableB; HashTable<nil> tableB;
Map<label> mapA; Map<label> mapA;
setA.insert("kjhk"); setA.insert("kjhk");
@ -49,9 +49,9 @@ int main(int argc, char *argv[])
tableA.insert("value2", 2); tableA.insert("value2", 2);
tableA.insert("value3", 3); tableA.insert("value3", 3);
tableB.insert("value4", empty()); tableB.insert("value4", nil());
tableB.insert("value5", empty()); tableB.insert("value5", nil());
tableB.insert("value6", empty()); tableB.insert("value6", nil());
mapA.set(1, 1); mapA.set(1, 1);
mapA.set(2, 2); mapA.set(2, 2);
@ -66,7 +66,7 @@ int main(int argc, char *argv[])
Info<< wordHashSet(setA) << endl; Info<< wordHashSet(setA) << endl;
Info<< "create from HashTable<T>: "; Info<< "create from HashTable<T>: ";
Info<< wordHashSet(tableA) << endl; Info<< wordHashSet(tableA) << endl;
Info<< "create from HashTable<empty>: "; Info<< "create from HashTable<nil>: ";
Info<< wordHashSet(tableB) << endl; Info<< wordHashSet(tableB) << endl;
Info<< "create from Map<label>: "; Info<< "create from Map<label>: ";


@ -100,12 +100,12 @@ int main()
<< "\ntable2" << table1 << nl << "\ntable2" << table1 << nl
<< "\ntable3" << table3 << nl; << "\ntable3" << table3 << nl;
Info<< "\ndelete table2" << nl; Info<< "\nerase table2 by iterator" << nl;
forAllIter(HASHTABLE_CLASS<double>, table2, iter) forAllIter(HASHTABLE_CLASS<double>, table2, iter)
{ {
Info<< "deleting " << iter.key() << " => " << iter() << " ... "; Info<< "erasing " << iter.key() << " => " << iter() << " ... ";
table2.erase(iter); table2.erase(iter);
Info<< "deleted" << endl; Info<< "erased" << endl;
} }
Info<< "\ntable1" << table1 << nl Info<< "\ntable1" << table1 << nl
@ -135,6 +135,24 @@ int main()
Info<< "removed an element - test table1 != table3 : " Info<< "removed an element - test table1 != table3 : "
<< (table1 != table3) << nl; << (table1 != table3) << nl;
// insert a few things into table2
table2.set("ada", 14.0);
table2.set("aeq", 15.0);
table2.set("aaw", 16.0);
table2.set("abs", 17.0);
table2.set("adx", 20.0);
Info<< "\ntable1" << table1 << nl
<< "\ntable2" << table2 << nl;
label nErased = table1.erase(table2);
Info<< "\nerase table2 keys from table1 (removed "
<< nErased << " elements)" << nl
<< "\ntable1" << table1 << nl
<< "\ntable2" << table2 << nl;
Info<< "\nclearStorage table3 ... "; Info<< "\nclearStorage table3 ... ";
table3.clearStorage(); table3.clearStorage();
Info<< table3 << nl; Info<< table3 << nl;


@ -100,12 +100,12 @@ int main()
<< "\ntable2" << table1 << nl << "\ntable2" << table1 << nl
<< "\ntable3" << table3 << nl; << "\ntable3" << table3 << nl;
Info<< "\ndelete table2" << nl; Info<< "\nerase table2 by iterator" << nl;
forAllIter(HASHTABLE_CLASS<double>, table2, iter) forAllIter(HASHTABLE_CLASS<double>, table2, iter)
{ {
Info<< "deleting " << iter.key() << " => " << iter() << " ... "; Info<< "erasing " << iter.key() << " => " << iter() << " ... ";
table2.erase(iter); table2.erase(iter);
Info<< "deleted" << endl; Info<< "erased" << endl;
} }
Info<< "\ntable1" << table1 << nl Info<< "\ntable1" << table1 << nl
@ -135,6 +135,24 @@ int main()
Info<< "removed an element - test table1 != table3 : " Info<< "removed an element - test table1 != table3 : "
<< (table1 != table3) << nl; << (table1 != table3) << nl;
// insert a few things into table2
table2.set("ada", 14.0);
table2.set("aeq", 15.0);
table2.set("aaw", 16.0);
table2.set("abs", 17.0);
table2.set("adx", 20.0);
Info<< "\ntable1" << table1 << nl
<< "\ntable2" << table2 << nl;
label nErased = table1.erase(table2);
Info<< "\nerase table2 keys from table1 (removed "
<< nErased << " elements)" << nl
<< "\ntable1" << table1 << nl
<< "\ntable2" << table2 << nl;
Info<< "\nclearStorage table3 ... "; Info<< "\nclearStorage table3 ... ";
table3.clearStorage(); table3.clearStorage();
Info<< table3 << nl; Info<< table3 << nl;
@ -144,5 +162,4 @@ int main()
return 0; return 0;
} }
// ************************************************************************* // // ************************************************************************* //
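The tests above exercise two operations: erasing entries through an iterator while walking the table, and table1.erase(table2), which removes from table1 every key present in table2 and reports how many were removed. A C++ sketch of the equivalent behaviour on std::unordered_map; note the it = erase(it) pattern, which the standard container needs even though the Foam test keeps using the loop iterator directly:

// Illustrative sketch only; std::unordered_map stands in for HashTable.
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>

using Table = std::unordered_map<std::string, double>;

// Remove from 'a' every key that also appears in 'b'; return number removed.
// This mirrors label nErased = table1.erase(table2) in the test.
std::size_t eraseKeysOf(Table& a, const Table& b)
{
    std::size_t nErased = 0;
    for (const auto& kv : b)
    {
        nErased += a.erase(kv.first);
    }
    return nErased;
}

int main()
{
    Table table1{{"aaa", 1.0}, {"aba", 2.0}, {"ada", 14.0}, {"aeq", 15.0}};
    Table table2{{"ada", 14.0}, {"aeq", 15.0}, {"adx", 20.0}};

    std::cout << "erased " << eraseKeysOf(table1, table2)
              << " elements from table1\n";

    // Erase table2 by iterator; erase() returns the next valid iterator,
    // so the loop advances via the return value.
    for (auto it = table2.begin(); it != table2.end(); )
    {
        std::cout << "erasing " << it->first << " => " << it->second << '\n';
        it = table2.erase(it);
    }

    std::cout << "table1 size: " << table1.size()
              << ", table2 size: " << table2.size() << '\n';
    return 0;
}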


@ -0,0 +1,3 @@
foamVersionString.C
EXE = $(FOAM_USER_APPBIN)/foamVersionString


@ -0,0 +1,2 @@
/* EXE_INC = -I$(LIB_SRC)/cfdTools/include */
/* EXE_LIBS = -lfiniteVolume */


@ -22,21 +22,26 @@ License
along with OpenFOAM; if not, write to the Free Software Foundation, along with OpenFOAM; if not, write to the Free Software Foundation,
Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
\*----------------------------------------------------------------------------*/ Application
foamVersionString.C
#include "moleculeCloud.H" Description
Print the OpenFOAM version strings.
Simultaneously the smallest possible program to use a minimal bit of
the OpenFOAM library
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * // \*---------------------------------------------------------------------------*/
void Foam::moleculeCloud::calculateExternalForce() #include <iostream>
#include "foamVersion.H"
int main()
{ {
iterator mol(this->begin()); std::cerr
<< "build " << Foam::FOAMbuild << "\n"
<< "version " << Foam::FOAMversion << "\n";
for (mol = this->begin(); mol != this->end(); ++mol) return 0;
{
mol().A() += gravity_;
}
} }
// ************************************************************************* // // ************************************************************************* //


@ -413,7 +413,7 @@ bool limitRefinementLevel
} }
} }
if (addCutCells.size() > 0) if (addCutCells.size())
{ {
// Add cells to cutCells. // Add cells to cutCells.
@ -479,7 +479,7 @@ void doRefinement
{ {
const labelList& added = addedCells[oldCellI]; const labelList& added = addedCells[oldCellI];
if (added.size() > 0) if (added.size())
{ {
// Give all cells resulting from split the refinement level // Give all cells resulting from split the refinement level
// of the master. // of the master.
@ -895,7 +895,7 @@ int main(int argc, char *argv[])
<< " Selected for refinement :" << cutCells.size() << nl << " Selected for refinement :" << cutCells.size() << nl
<< endl; << endl;
if (cutCells.size() == 0) if (cutCells.empty())
{ {
Info<< "Stopping refining since 0 cells selected to be refined ..." Info<< "Stopping refining since 0 cells selected to be refined ..."
<< nl << endl; << nl << endl;


@ -358,18 +358,21 @@ int main(int argc, char *argv[])
( (
dict.lookup("facesToTriangulate") dict.lookup("facesToTriangulate")
); );
bool cutBoundary = bool cutBoundary =
pointsToMove.size() > 0 (
|| edgesToSplit.size() > 0 pointsToMove.size()
|| facesToTriangulate.size() > 0; || edgesToSplit.size()
|| facesToTriangulate.size()
);
List<Pair<point> > edgesToCollapse(dict.lookup("edgesToCollapse")); List<Pair<point> > edgesToCollapse(dict.lookup("edgesToCollapse"));
bool collapseEdge = edgesToCollapse.size() > 0; bool collapseEdge = edgesToCollapse.size();
List<Pair<point> > cellsToPyramidise(dict.lookup("cellsToSplit")); List<Pair<point> > cellsToPyramidise(dict.lookup("cellsToSplit"));
bool cellsToSplit = cellsToPyramidise.size() > 0; bool cellsToSplit = cellsToPyramidise.size();
//List<Tuple<pointField,point> > //List<Tuple<pointField,point> >
// cellsToCreate(dict.lookup("cellsToCreate")); // cellsToCreate(dict.lookup("cellsToCreate"));
@ -523,7 +526,7 @@ int main(int argc, char *argv[])
Info<< nl << "There was a problem in one of the inputs in the" Info<< nl << "There was a problem in one of the inputs in the"
<< " dictionary. Not modifying mesh." << endl; << " dictionary. Not modifying mesh." << endl;
} }
else if (cellToPyrCentre.size() > 0) else if (cellToPyrCentre.size())
{ {
Info<< nl << "All input cells located. Modifying mesh." << endl; Info<< nl << "All input cells located. Modifying mesh." << endl;
@ -555,7 +558,7 @@ int main(int argc, char *argv[])
Info << "Writing modified mesh to time " << runTime.value() << endl; Info << "Writing modified mesh to time " << runTime.value() << endl;
mesh.write(); mesh.write();
} }
else if (edgeToPos.size() > 0) else if (edgeToPos.size())
{ {
Info<< nl << "All input edges located. Modifying mesh." << endl; Info<< nl << "All input edges located. Modifying mesh." << endl;


@ -336,7 +336,7 @@ int main(int argc, char *argv[])
) )
{} {}
if (refCells.size() > 0) if (refCells.size())
{ {
Info<< "Collected " << refCells.size() << " cells that need to be" Info<< "Collected " << refCells.size() << " cells that need to be"
<< " refined to get closer to overall 2:1 refinement level limit" << " refined to get closer to overall 2:1 refinement level limit"


@ -652,7 +652,7 @@ int main(int argc, char *argv[])
// Remove cut cells from cellsToCut (Note:only relevant if -readSet) // Remove cut cells from cellsToCut (Note:only relevant if -readSet)
forAll(cuts.cellLoops(), cellI) forAll(cuts.cellLoops(), cellI)
{ {
if (cuts.cellLoops()[cellI].size() > 0) if (cuts.cellLoops()[cellI].size())
{ {
//Info<< "Removing cut cell " << cellI << " from wishlist" //Info<< "Removing cut cell " << cellI << " from wishlist"
// << endl; // << endl;


@ -584,7 +584,7 @@ int main(int argc, char *argv[])
forAll (rawPatches, patchI) forAll (rawPatches, patchI)
{ {
if (rawPatches[patchI].size() > 0 && cfxPatchTypes[patchI] != "BLKBDY") if (rawPatches[patchI].size() && cfxPatchTypes[patchI] != "BLKBDY")
{ {
// Check if this name has been already created // Check if this name has been already created
label existingPatch = -1; label existingPatch = -1;


@ -1,4 +1,4 @@
/*---------------------------------------------------------------------------*\ /*--------------------------------*- C++ -*----------------------------------*\
========= | ========= |
\\ / F ield | OpenFOAM: The Open Source CFD Toolbox \\ / F ield | OpenFOAM: The Open Source CFD Toolbox
\\ / O peration | \\ / O peration |
@ -1486,7 +1486,7 @@ int main(int argc, char *argv[])
} }
defaultBoundaryFaces.shrink(); defaultBoundaryFaces.shrink();
if(defaultBoundaryFaces.size() != 0) if (defaultBoundaryFaces.size())
{ {
Warning << " fluent mesh has " << defaultBoundaryFaces.size() Warning << " fluent mesh has " << defaultBoundaryFaces.size()
<< " undefined boundary faces." << endl << " undefined boundary faces." << endl
@ -1695,7 +1695,7 @@ int main(int argc, char *argv[])
// soon negating the need for double output // soon negating the need for double output
if (writeSets) if (writeSets)
{ {
if (cellGroupZoneID.size() > 1 ) if (cellGroupZoneID.size() > 1)
{ {
Info<< "Writing cell sets" << endl; Info<< "Writing cell sets" << endl;


@ -667,7 +667,7 @@ void readCells
const labelList& zCells = zoneCells[zoneI]; const labelList& zCells = zoneCells[zoneI];
if (zCells.size() > 0) if (zCells.size())
{ {
Info<< " " << zoneI << '\t' << zCells.size() << endl; Info<< " " << zoneI << '\t' << zCells.size() << endl;
} }
@ -778,7 +778,7 @@ int main(int argc, char *argv[])
forAll(zoneCells, zoneI) forAll(zoneCells, zoneI)
{ {
if (zoneCells[zoneI].size() > 0) if (zoneCells[zoneI].size())
{ {
nValidCellZones++; nValidCellZones++;
} }
@ -910,7 +910,7 @@ int main(int argc, char *argv[])
const labelList& zFaces = zoneFaces[zoneI]; const labelList& zFaces = zoneFaces[zoneI];
if (zFaces.size() > 0) if (zFaces.size())
{ {
nValidFaceZones++; nValidFaceZones++;
@ -940,7 +940,7 @@ int main(int argc, char *argv[])
forAll(zoneCells, zoneI) forAll(zoneCells, zoneI)
{ {
if (zoneCells[zoneI].size() > 0) if (zoneCells[zoneI].size())
{ {
label physReg = zoneToPhys[zoneI]; label physReg = zoneToPhys[zoneI];
@ -979,7 +979,7 @@ int main(int argc, char *argv[])
forAll(zoneFaces, zoneI) forAll(zoneFaces, zoneI)
{ {
if (zoneFaces[zoneI].size() > 0) if (zoneFaces[zoneI].size())
{ {
label physReg = zoneToPhys[zoneI]; label physReg = zoneToPhys[zoneI];
@ -1011,7 +1011,7 @@ int main(int argc, char *argv[])
} }
} }
if (cz.size() > 0 || fz.size() > 0) if (cz.size() || fz.size())
{ {
mesh.addZones(List<pointZone*>(0), fz, cz); mesh.addZones(List<pointZone*>(0), fz, cz);
} }


@ -752,7 +752,7 @@ int main(int argc, char *argv[])
List<faceList> patchFaceVerts; List<faceList> patchFaceVerts;
if (dofVertIndices.size() > 0) if (dofVertIndices.size())
{ {
// Use the vertex constraints to patch. Is of course bit dodgy since // Use the vertex constraints to patch. Is of course bit dodgy since
// face goes on patch if all its vertices are on a constraint. // face goes on patch if all its vertices are on a constraint.


@ -242,7 +242,7 @@ int main(int argc, char *argv[])
} }
if (vertsToBoundary.size() > 0) if (vertsToBoundary.size())
{ {
// Didn't find cells connected to boundary faces. // Didn't find cells connected to boundary faces.
WarningIn(args.executable()) WarningIn(args.executable())


@ -229,7 +229,7 @@ void simpleMarkFeatures
if (doNotPreserveFaceZones) if (doNotPreserveFaceZones)
{ {
if (faceZones.size() > 0) if (faceZones.size())
{ {
WarningIn("simpleMarkFeatures(..)") WarningIn("simpleMarkFeatures(..)")
<< "Detected " << faceZones.size() << "Detected " << faceZones.size()
@ -239,7 +239,7 @@ void simpleMarkFeatures
} }
else else
{ {
if (faceZones.size() > 0) if (faceZones.size())
{ {
Info<< "Detected " << faceZones.size() Info<< "Detected " << faceZones.size()
<< " faceZones. Preserving these by marking their" << " faceZones. Preserving these by marking their"


@ -38,10 +38,7 @@ Class
// * * * * * * * * * * * * * * Static Data Members * * * * * * * * * * * * * // // * * * * * * * * * * * * * * Static Data Members * * * * * * * * * * * * * //
namespace Foam defineTypeNameAndDebug(Foam::meshDualiser, 0);
{
defineTypeNameAndDebug(meshDualiser, 0);
}
// * * * * * * * * * * * * * Private Member Functions * * * * * * * * * * * // // * * * * * * * * * * * * * Private Member Functions * * * * * * * * * * * //
@ -1083,7 +1080,7 @@ void Foam::meshDualiser::setRefinement
{ {
label pointI = multiCellFeaturePoints[i]; label pointI = multiCellFeaturePoints[i];
if (pointToDualCells_[pointI].size() > 0) if (pointToDualCells_[pointI].size())
{ {
FatalErrorIn FatalErrorIn
( (
@ -1133,7 +1130,7 @@ void Foam::meshDualiser::setRefinement
// Normal points // Normal points
forAll(mesh_.points(), pointI) forAll(mesh_.points(), pointI)
{ {
if (pointToDualCells_[pointI].size() == 0) if (pointToDualCells_[pointI].empty())
{ {
pointToDualCells_[pointI].setSize(1); pointToDualCells_[pointI].setSize(1);
pointToDualCells_[pointI][0] = meshMod.addCell pointToDualCells_[pointI][0] = meshMod.addCell


@ -136,7 +136,7 @@ void sammMesh::readCouples()
forAll (curFaces, faceI) forAll (curFaces, faceI)
{ {
if (curFaces[faceI].size() == 0) if (curFaces[faceI].empty())
{ {
zeroSizeFound++; zeroSizeFound++;
} }
@ -153,7 +153,7 @@ void sammMesh::readCouples()
forAll (oldFaces, faceI) forAll (oldFaces, faceI)
{ {
if (oldFaces[faceI].size() > 0) if (oldFaces[faceI].size())
{ {
curFaces[nFaces] = oldFaces[faceI]; curFaces[nFaces] = oldFaces[faceI];


@ -45,8 +45,7 @@ void starMesh::createCoupleMatches()
// existing points list // existing points list
// Estimate the number of cells affected by couple matches // Estimate the number of cells affected by couple matches
const label cellMapSize = const label cellMapSize = min
min
( (
cellShapes_.size()/10, cellShapes_.size()/10,
couples_.size()*2 couples_.size()*2
@ -1097,7 +1096,7 @@ void starMesh::createCoupleMatches()
<< "edges to consider: " << edgesToConsider << endl; << "edges to consider: " << edgesToConsider << endl;
# endif # endif
if (edgesToConsider.size() == 0) if (edgesToConsider.empty())
{ {
FatalErrorIn("void starMesh::createCoupleMatches()") FatalErrorIn("void starMesh::createCoupleMatches()")
<< setprecision(12) << setprecision(12)
@ -1420,7 +1419,7 @@ void starMesh::createCoupleMatches()
} // end of arbitrary match } // end of arbitrary match
} }
if (couples_.size() > 0) if (couples_.size())
{ {
// Loop through all cells and reset faces for removal to zero size // Loop through all cells and reset faces for removal to zero size
const labelList crfToc = cellRemovedFaces.toc(); const labelList crfToc = cellRemovedFaces.toc();
@ -1442,7 +1441,7 @@ void starMesh::createCoupleMatches()
cellFaces_[curCell][curRemovedFacesIter()].setSize(0); cellFaces_[curCell][curRemovedFacesIter()].setSize(0);
} }
if (curRemovedFaces.size() > 0) if (curRemovedFaces.size())
{ {
// reset the shape pointer to unknown // reset the shape pointer to unknown
cellShapes_[curCell] = cellShape(*unknownPtr_, labelList(0)); cellShapes_[curCell] = cellShape(*unknownPtr_, labelList(0));
@ -1468,7 +1467,7 @@ void starMesh::createCoupleMatches()
// copy original faces that have not been removed // copy original faces that have not been removed
forAll (oldFaces, faceI) forAll (oldFaces, faceI)
{ {
if (oldFaces[faceI].size() > 0) if (oldFaces[faceI].size())
{ {
newFaces[nNewFaces] = oldFaces[faceI]; newFaces[nNewFaces] = oldFaces[faceI];
nNewFaces++; nNewFaces++;
@ -1491,7 +1490,7 @@ void starMesh::createCoupleMatches()
// reset the size of the face list // reset the size of the face list
newFaces.setSize(nNewFaces); newFaces.setSize(nNewFaces);
if (curAddedFaces.size() > 0) if (curAddedFaces.size())
{ {
// reset the shape pointer to unknown // reset the shape pointer to unknown
cellShapes_[curCell] = cellShape(*unknownPtr_, labelList(0)); cellShapes_[curCell] = cellShape(*unknownPtr_, labelList(0));


@ -264,7 +264,7 @@ starMesh::starMesh
readCouples(); readCouples();
if (couples_.size() > 0) if (couples_.size())
{ {
createCoupleMatches(); createCoupleMatches();
} }


@ -160,7 +160,7 @@ int main(int argc, char *argv[])
{ {
nodeStream.getLine(line); nodeStream.getLine(line);
} }
while((line.size() > 0) && (line[0] == '#')); while (line.size() && line[0] == '#');
IStringStream nodeLine(line); IStringStream nodeLine(line);
@ -193,7 +193,7 @@ int main(int argc, char *argv[])
{ {
nodeStream.getLine(line); nodeStream.getLine(line);
if ((line.size() > 0) && (line[0] != '#')) if (line.size() && line[0] != '#')
{ {
IStringStream nodeLine(line); IStringStream nodeLine(line);
@ -237,7 +237,7 @@ int main(int argc, char *argv[])
{ {
eleStream.getLine(line); eleStream.getLine(line);
} }
while((line.size() > 0) && (line[0] == '#')); while (line.size() && line[0] == '#');
IStringStream eleLine(line); IStringStream eleLine(line);
@ -281,7 +281,7 @@ int main(int argc, char *argv[])
{ {
eleStream.getLine(line); eleStream.getLine(line);
if ((line.size() > 0) && (line[0] != '#')) if (line.size() && line[0] != '#')
{ {
IStringStream eleLine(line); IStringStream eleLine(line);
@ -356,7 +356,7 @@ int main(int argc, char *argv[])
{ {
faceStream.getLine(line); faceStream.getLine(line);
} }
while((line.size() > 0) && (line[0] == '#')); while (line.size() && line[0] == '#');
IStringStream faceLine(line); IStringStream faceLine(line);
@ -398,7 +398,7 @@ int main(int argc, char *argv[])
{ {
faceStream.getLine(line); faceStream.getLine(line);
if ((line.size() > 0) && (line[0] != '#')) if (line.size() && line[0] != '#')
{ {
IStringStream faceLine(line); IStringStream faceLine(line);
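The tetgen reader changes above all use the same pattern: read lines and treat lines starting with '#' as comments to be skipped. A self-contained C++ sketch of such a comment-skipping reader over an in-memory stream (it also skips blank lines, which the converter handles slightly differently):

// Illustrative sketch only; std::istringstream stands in for the node file.
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::istringstream nodeStream(
        "# tetgen node file\n"
        "4 3 0 0\n"
        "# coordinates follow\n"
        "1 0.0 0.0 0.0\n");

    std::string line;

    // Skip blank lines and '#' comment lines, in the spirit of the
    // while (line.size() && line[0] == '#') loops after each getLine().
    while (std::getline(nodeStream, line))
    {
        if (line.empty() || line[0] == '#')
        {
            continue;
        }
        std::cout << "data line: " << line << '\n';
    }
    return 0;
}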


@ -118,7 +118,7 @@ Foam::label Foam::blockMesh::numZonedBlocks() const
forAll(*this, blockI) forAll(*this, blockI)
{ {
if (operator[](blockI).blockDef().zoneName().size() > 0) if (operator[](blockI).blockDef().zoneName().size())
{ {
num++; num++;
} }


@ -278,7 +278,7 @@ int main(int argc, char *argv[])
const labelListList& blockCells = b.cells(); const labelListList& blockCells = b.cells();
const word& zoneName = b.blockDef().zoneName(); const word& zoneName = b.blockDef().zoneName();
if (zoneName.size() > 0) if (zoneName.size())
{ {
HashTable<label>::const_iterator iter = zoneMap.find(zoneName); HashTable<label>::const_iterator iter = zoneMap.find(zoneName);


@ -1,6 +1,6 @@
Info<< "Creating merge patch pairs" << nl << endl; Info<< "Creating merge patch pairs" << nl << endl;
if (mergePatchPairs.size() > 0) if (mergePatchPairs.size())
{ {
// Create and add point and face zones and mesh modifiers // Create and add point and face zones and mesh modifiers
List<pointZone*> pz(mergePatchPairs.size()); List<pointZone*> pz(mergePatchPairs.size());


@ -177,7 +177,7 @@ Foam::label Foam::checkTopology
primitivePatch::surfaceTopo pTyp = pp.surfaceType(); primitivePatch::surfaceTopo pTyp = pp.surfaceType();
if (pp.size() == 0) if (pp.empty())
{ {
Pout<< setw(34) << "ok (empty)"; Pout<< setw(34) << "ok (empty)";
} }
@ -232,7 +232,7 @@ Foam::label Foam::checkTopology
Pout<< endl; Pout<< endl;
} }
if (points.size() > 0) if (points.size())
{ {
Pout<< " <<Writing " << points.size() Pout<< " <<Writing " << points.size()
<< " conflicting points to set " << " conflicting points to set "


@ -172,7 +172,7 @@ void filterPatches(polyMesh& mesh)
if (isA<processorPolyPatch>(pp)) if (isA<processorPolyPatch>(pp))
{ {
if (pp.size() > 0) if (pp.size())
{ {
allPatches.append allPatches.append
( (
@ -586,7 +586,7 @@ int main(int argc, char *argv[])
// 1. Add all new patches // 1. Add all new patches
// ~~~~~~~~~~~~~~~~~~~~~~ // ~~~~~~~~~~~~~~~~~~~~~~
if (patchSources.size() > 0) if (patchSources.size())
{ {
// Old and new patches. // Old and new patches.
DynamicList<polyPatch*> allPatches(patches.size()+patchSources.size()); DynamicList<polyPatch*> allPatches(patches.size()+patchSources.size());


@ -34,10 +34,7 @@ License
// * * * * * * * * * * * * * * Static Data Members * * * * * * * * * * * * * // // * * * * * * * * * * * * * * Static Data Members * * * * * * * * * * * * * //
namespace Foam defineTypeNameAndDebug(Foam::mergePolyMesh, 1);
{
defineTypeNameAndDebug(mergePolyMesh, 1);
}
// * * * * * * * * * * * * * Private Member Functions * * * * * * * * * * * // // * * * * * * * * * * * * * Private Member Functions * * * * * * * * * * * //
@ -142,7 +139,7 @@ Foam::mergePolyMesh::mergePolyMesh(const IOobject& io)
// Point zones // Point zones
wordList curPointZoneNames = pointZones().names(); wordList curPointZoneNames = pointZones().names();
if (curPointZoneNames.size() > 0) if (curPointZoneNames.size())
{ {
pointZoneNames_.setCapacity(2*curPointZoneNames.size()); pointZoneNames_.setCapacity(2*curPointZoneNames.size());
} }
@ -155,7 +152,7 @@ Foam::mergePolyMesh::mergePolyMesh(const IOobject& io)
// Face zones // Face zones
wordList curFaceZoneNames = faceZones().names(); wordList curFaceZoneNames = faceZones().names();
if (curFaceZoneNames.size() > 0) if (curFaceZoneNames.size())
{ {
faceZoneNames_.setCapacity(2*curFaceZoneNames.size()); faceZoneNames_.setCapacity(2*curFaceZoneNames.size());
} }
@ -167,7 +164,7 @@ Foam::mergePolyMesh::mergePolyMesh(const IOobject& io)
// Cell zones // Cell zones
wordList curCellZoneNames = cellZones().names(); wordList curCellZoneNames = cellZones().names();
if (curCellZoneNames.size() > 0) if (curCellZoneNames.size())
{ {
cellZoneNames_.setCapacity(2*curCellZoneNames.size()); cellZoneNames_.setCapacity(2*curCellZoneNames.size());
} }


@ -48,7 +48,7 @@ string getLine(std::ifstream& is)
{ {
std::getline(is, line); std::getline(is, line);
} }
while((line.size() > 0) && (line[0] == '#')); while (line.size() && line[0] == '#');
return line; return line;
} }
@ -60,13 +60,13 @@ labelList parseVertices(const string& line)
DynamicList<label> verts; DynamicList<label> verts;
// Assume 'l' is followed by space. // Assume 'l' is followed by space.
label endNum = 1; string::size_type endNum = 1;
do do
{ {
label startNum = line.find_first_not_of(' ', endNum); string::size_type startNum = line.find_first_not_of(' ', endNum);
if (startNum == label(string::npos)) if (startNum == string::npos)
{ {
break; break;
} }
@ -74,7 +74,7 @@ labelList parseVertices(const string& line)
endNum = line.find(' ', startNum); endNum = line.find(' ', startNum);
string vertexSpec; string vertexSpec;
if (endNum != label(string::npos)) if (endNum != string::npos)
{ {
vertexSpec = line.substr(startNum, endNum-startNum); vertexSpec = line.substr(startNum, endNum-startNum);
} }
@ -83,10 +83,10 @@ labelList parseVertices(const string& line)
vertexSpec = line.substr(startNum, line.size() - startNum); vertexSpec = line.substr(startNum, line.size() - startNum);
} }
label slashPos = vertexSpec.find('/'); string::size_type slashPos = vertexSpec.find('/');
label vertI = 0; label vertI = 0;
if (slashPos != label(string::npos)) if (slashPos != string::npos)
{ {
IStringStream intStream(vertexSpec.substr(0, slashPos)); IStringStream intStream(vertexSpec.substr(0, slashPos));
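The parseVertices() fixes replace label with string::size_type for everything returned by find() and find_first_not_of(), so results compare against string::npos without a lossy cast. A small stand-alone C++ example of the corrected pattern, tokenising an OBJ-style face line:

// Illustrative sketch only, using std::string directly.
#include <iostream>
#include <string>

int main()
{
    const std::string line = "f 1/1 2/4 3/7";

    // find() and find_first_not_of() return std::string::size_type;
    // npos only compares correctly against that type, not a narrower label.
    std::string::size_type endNum = 1;   // skip the leading 'f'

    while (true)
    {
        const std::string::size_type startNum =
            line.find_first_not_of(' ', endNum);

        if (startNum == std::string::npos)
        {
            break;                       // nothing left on the line
        }

        endNum = line.find(' ', startNum);

        const std::string vertexSpec =
            (endNum != std::string::npos)
          ? line.substr(startNum, endNum - startNum)
          : line.substr(startNum);

        // Keep only the vertex index before any '/'.
        const std::string::size_type slashPos = vertexSpec.find('/');
        std::cout << "vertex: "
                  << (slashPos != std::string::npos
                      ? vertexSpec.substr(0, slashPos)
                      : vertexSpec) << '\n';
    }
    return 0;
}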


@ -485,7 +485,7 @@ int main(int argc, char *argv[])
{ {
const labelList& added = oldToNew[oldCellI]; const labelList& added = oldToNew[oldCellI];
if (added.size() > 0) if (added.size())
{ {
forAll(added, i) forAll(added, i)
{ {


@ -89,7 +89,7 @@ void backup
const word& toName const word& toName
) )
{ {
if (fromSet.size() > 0) if (fromSet.size())
{ {
Pout<< " Backing up " << fromName << " into " << toName << endl; Pout<< " Backing up " << fromName << " into " << toName << endl;
@ -284,7 +284,7 @@ void printAllSets(const polyMesh& mesh, Ostream& os)
polyMesh::meshSubDir/"sets" polyMesh::meshSubDir/"sets"
); );
IOobjectList cellSets(objects.lookupClass(cellSet::typeName)); IOobjectList cellSets(objects.lookupClass(cellSet::typeName));
if (cellSets.size() > 0) if (cellSets.size())
{ {
os << "cellSets:" << endl; os << "cellSets:" << endl;
forAllConstIter(IOobjectList, cellSets, iter) forAllConstIter(IOobjectList, cellSets, iter)
@ -294,7 +294,7 @@ void printAllSets(const polyMesh& mesh, Ostream& os)
} }
} }
IOobjectList faceSets(objects.lookupClass(faceSet::typeName)); IOobjectList faceSets(objects.lookupClass(faceSet::typeName));
if (faceSets.size() > 0) if (faceSets.size())
{ {
os << "faceSets:" << endl; os << "faceSets:" << endl;
forAllConstIter(IOobjectList, faceSets, iter) forAllConstIter(IOobjectList, faceSets, iter)
@ -304,7 +304,7 @@ void printAllSets(const polyMesh& mesh, Ostream& os)
} }
} }
IOobjectList pointSets(objects.lookupClass(pointSet::typeName)); IOobjectList pointSets(objects.lookupClass(pointSet::typeName));
if (pointSets.size() > 0) if (pointSets.size())
{ {
os << "pointSets:" << endl; os << "pointSets:" << endl;
forAllConstIter(IOobjectList, pointSets, iter) forAllConstIter(IOobjectList, pointSets, iter)
@ -347,7 +347,7 @@ bool doCommand
bool ok = true; bool ok = true;
// Set to work on // Set to work on
autoPtr<topoSet> currentSetPtr(NULL); autoPtr<topoSet> currentSetPtr;
word sourceType; word sourceType;
@ -383,7 +383,7 @@ bool doCommand
currentSet.resize(max(currentSet.size(), typSize)); currentSet.resize(max(currentSet.size(), typSize));
} }
if (!currentSetPtr.valid()) if (currentSetPtr.empty())
{ {
Pout<< " Cannot construct/load set " Pout<< " Cannot construct/load set "
<< topoSet::localPath(mesh, setName) << endl; << topoSet::localPath(mesh, setName) << endl;
@ -522,7 +522,7 @@ bool doCommand
Pout<< fIOErr.message().c_str() << endl; Pout<< fIOErr.message().c_str() << endl;
if (sourceType.size() != 0) if (sourceType.size())
{ {
Pout<< topoSetSource::usage(sourceType).c_str(); Pout<< topoSetSource::usage(sourceType).c_str();
} }
@ -533,7 +533,7 @@ bool doCommand
Pout<< fErr.message().c_str() << endl; Pout<< fErr.message().c_str() << endl;
if (sourceType.size() != 0) if (sourceType.size())
{ {
Pout<< topoSetSource::usage(sourceType).c_str(); Pout<< topoSetSource::usage(sourceType).c_str();
} }
@ -571,7 +571,7 @@ commandStatus parseType
IStringStream& is IStringStream& is
) )
{ {
if (setType.size() == 0) if (setType.empty())
{ {
Pout<< "Type 'help' for usage information" << endl; Pout<< "Type 'help' for usage information" << endl;
@ -689,7 +689,7 @@ commandStatus parseAction(const word& actionName)
{ {
commandStatus stat = INVALID; commandStatus stat = INVALID;
if (actionName.size() != 0) if (actionName.size())
{ {
try try
{ {
@ -792,7 +792,7 @@ int main(int argc, char *argv[])
std::getline(*fileStreamPtr, rawLine); std::getline(*fileStreamPtr, rawLine);
if (rawLine.size() > 0) if (rawLine.size())
{ {
Pout<< "Doing:" << rawLine << endl; Pout<< "Doing:" << rawLine << endl;
} }
@ -821,7 +821,7 @@ int main(int argc, char *argv[])
# endif # endif
} }
if (rawLine.size() == 0 || rawLine[0] == '#') if (rawLine.empty() || rawLine[0] == '#')
{ {
continue; continue;
} }
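The topoSet driver now default-constructs autoPtr<topoSet> currentSetPtr; instead of passing NULL, and tests it with empty() rather than !valid(). The equivalent pattern sketched with std::unique_ptr (an analogue only; Foam::autoPtr has its own interface):

// Illustrative sketch only; std::unique_ptr stands in for autoPtr.
#include <iostream>
#include <memory>
#include <string>

struct TopoSet           // minimal stand-in for Foam::topoSet
{
    std::string name;
};

int main()
{
    // Default construction: the pointer starts out empty, no NULL argument.
    std::unique_ptr<TopoSet> currentSetPtr;

    // ... some command may or may not allocate the set ...
    const bool haveSet = true;
    if (haveSet)
    {
        currentSetPtr.reset(new TopoSet{"c0"});
    }

    // "if (currentSetPtr.empty())" in the diff corresponds to a simple
    // boolean test on the smart pointer here.
    if (!currentSetPtr)
    {
        std::cout << "Cannot construct/load set\n";
        return 1;
    }

    std::cout << "working on set " << currentSetPtr->name << '\n';
    return 0;
}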


@ -100,10 +100,10 @@ void checkPatch(const polyBoundaryMesh& bMesh, const word& name)
<< exit(FatalError); << exit(FatalError);
} }
if (bMesh[patchI].size() != 0) if (bMesh[patchI].size())
{ {
FatalErrorIn("checkPatch(const polyBoundaryMesh&, const word&)") FatalErrorIn("checkPatch(const polyBoundaryMesh&, const word&)")
<< "Patch " << name << " is present but not of zero size" << "Patch " << name << " is present but non-zero size"
<< exit(FatalError); << exit(FatalError);
} }
} }


@ -488,17 +488,18 @@ labelList getNonRegionCells(const labelList& cellRegion, const label regionI)
} }
// Get per region-region interface the sizes. // Get per region-region interface the sizes. If sumParallel sums sizes.
// If sumParallel does merge. // Returns interfaces as straight list for looping in identical order.
EdgeMap<label> getInterfaceSizes void getInterfaceSizes
( (
const polyMesh& mesh, const polyMesh& mesh,
const labelList& cellRegion, const labelList& cellRegion,
const bool sumParallel const bool sumParallel,
edgeList& interfaces,
EdgeMap<label>& interfaceSizes
) )
{ {
EdgeMap<label> interfaceSizes;
forAll(mesh.faceNeighbour(), faceI) forAll(mesh.faceNeighbour(), faceI)
{ {
label ownRegion = cellRegion[mesh.faceOwner()[faceI]]; label ownRegion = cellRegion[mesh.faceOwner()[faceI]];
@ -585,7 +586,12 @@ EdgeMap<label> getInterfaceSizes
} }
} }
return interfaceSizes; // Make sure all processors have interfaces in same order
interfaces = interfaceSizes.toc();
if (sumParallel)
{
Pstream::scatter(interfaces);
}
} }
@ -705,11 +711,7 @@ autoPtr<mapPolyMesh> createRegionMesh
if (otherRegion != -1) if (otherRegion != -1)
{ {
edge interface edge interface(regionI, otherRegion);
(
min(regionI, otherRegion),
max(regionI, otherRegion)
);
// Find the patch. // Find the patch.
if (regionI < otherRegion) if (regionI < otherRegion)
@ -848,6 +850,7 @@ void createAndWriteRegion
const polyBoundaryMesh& newPatches = newMesh().boundaryMesh(); const polyBoundaryMesh& newPatches = newMesh().boundaryMesh();
newPatches.checkParallelSync(true);
// Delete empty patches // Delete empty patches
// ~~~~~~~~~~~~~~~~~~~~ // ~~~~~~~~~~~~~~~~~~~~
@ -863,22 +866,21 @@ void createAndWriteRegion
{ {
const polyPatch& pp = newPatches[patchI]; const polyPatch& pp = newPatches[patchI];
if if (!isA<processorPolyPatch>(pp))
( {
!isA<processorPolyPatch>(pp) if (returnReduce(pp.size(), sumOp<label>()) > 0)
&& returnReduce(pp.size(), sumOp<label>()) > 0
)
{ {
oldToNew[patchI] = newI++; oldToNew[patchI] = newI++;
} }
} }
}
// Same for processor patches (but need no reduction) // Same for processor patches (but need no reduction)
forAll(newPatches, patchI) forAll(newPatches, patchI)
{ {
const polyPatch& pp = newPatches[patchI]; const polyPatch& pp = newPatches[patchI];
if (isA<processorPolyPatch>(pp) && pp.size() > 0) if (isA<processorPolyPatch>(pp) && pp.size())
{ {
oldToNew[patchI] = newI++; oldToNew[patchI] = newI++;
} }
@ -983,10 +985,15 @@ void createAndWriteRegion
} }
// Create for every region-region interface with non-zero size two patches.
// First one is for minimumregion to maximumregion.
// Note that patches get created in same order on all processors (if parallel)
// since looping over synchronised 'interfaces'.
EdgeMap<label> addRegionPatches EdgeMap<label> addRegionPatches
( (
fvMesh& mesh, fvMesh& mesh,
const regionSplit& cellRegion, const regionSplit& cellRegion,
const edgeList& interfaces,
const EdgeMap<label>& interfaceSizes, const EdgeMap<label>& interfaceSizes,
const wordList& regionNames const wordList& regionNames
) )
@ -998,15 +1005,12 @@ EdgeMap<label> addRegionPatches
EdgeMap<label> interfaceToPatch(cellRegion.nRegions()); EdgeMap<label> interfaceToPatch(cellRegion.nRegions());
// Keep start of added patches for later. forAll(interfaces, interI)
label minAddedPatchI = labelMax;
forAllConstIter(EdgeMap<label>, interfaceSizes, iter)
{ {
if (iter() > 0) const edge& e = interfaces[interI];
{
const edge& e = iter.key();
if (interfaceSizes[e] > 0)
{
label patchI = addPatch label patchI = addPatch
( (
mesh, mesh,
@ -1025,12 +1029,9 @@ EdgeMap<label> addRegionPatches
<< " " << mesh.boundaryMesh()[patchI].name() << " " << mesh.boundaryMesh()[patchI].name()
<< endl; << endl;
interfaceToPatch.insert(iter.key(), patchI); interfaceToPatch.insert(e, patchI);
minAddedPatchI = min(minAddedPatchI, patchI);
} }
} }
//Info<< "minAddedPatchI:" << minAddedPatchI << endl;
return interfaceToPatch; return interfaceToPatch;
} }
@ -1049,7 +1050,7 @@ label findCorrespondingZone
labelList regionCells = findIndices(cellRegion, regionI); labelList regionCells = findIndices(cellRegion, regionI);
if (regionCells.size() == 0) if (regionCells.empty())
{ {
// My local portion is empty. Maps to any empty cellZone. Mark with // My local portion is empty. Maps to any empty cellZone. Mark with
// special value which can get overwritten by other processors. // special value which can get overwritten by other processors.
@ -1200,7 +1201,7 @@ int main(int argc, char *argv[])
boolList blockedFace; boolList blockedFace;
// Read from faceSet // Read from faceSet
if (blockedFacesName.size() > 0) if (blockedFacesName.size())
{ {
faceSet blockedFaceSet(mesh, blockedFacesName); faceSet blockedFaceSet(mesh, blockedFacesName);
Info<< "Read " << returnReduce(blockedFaceSet.size(), sumOp<label>()) Info<< "Read " << returnReduce(blockedFaceSet.size(), sumOp<label>())
@ -1348,24 +1349,26 @@ int main(int argc, char *argv[])
// Sizes of interface between regions. From pair of regions to number of // Sizes of interface between regions. From pair of regions to number of
// faces. // faces.
EdgeMap<label> interfaceSizes edgeList interfaces;
( EdgeMap<label> interfaceSizes;
getInterfaceSizes getInterfaceSizes
( (
mesh, mesh,
cellRegion, cellRegion,
true // sum in parallel? true, // sum in parallel?
)
interfaces,
interfaceSizes
); );
Info<< "Region\tRegion\tFaces" << nl Info<< "Region\tRegion\tFaces" << nl
<< "------\t------\t-----" << endl; << "------\t------\t-----" << endl;
forAllConstIter(EdgeMap<label>, interfaceSizes, iter) forAll(interfaces, interI)
{ {
const edge& e = iter.key(); const edge& e = interfaces[interI];
Info<< e[0] << '\t' << e[1] << '\t' << iter() << nl; Info<< e[0] << '\t' << e[1] << '\t' << interfaceSizes[e] << nl;
} }
Info<< endl; Info<< endl;
@ -1511,6 +1514,7 @@ int main(int argc, char *argv[])
( (
mesh, mesh,
cellRegion, cellRegion,
interfaces,
interfaceSizes, interfaceSizes,
regionNames regionNames
) )
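getInterfaceSizes() now also returns the interfaces as an explicit edgeList and, in parallel, scatters that list so every processor loops over the region-region interfaces in the same order, which lets addRegionPatches() create patches consistently everywhere. A serial C++ sketch of the underlying issue and fix: hash-map iteration order is unspecified, so the keys are pulled out into a list with one agreed ordering before anything order-dependent is done (sorting stands in here for the master-build-and-scatter used in the diff):

// Illustrative sketch only; std::pair/unordered_map stand in for edge/EdgeMap.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>
#include <unordered_map>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;   // (regionA, regionB)

struct EdgeHash
{
    std::size_t operator()(const Edge& e) const
    {
        return std::hash<std::int64_t>()(
            (static_cast<std::int64_t>(e.first) << 32) ^ e.second);
    }
};

int main()
{
    // Interface -> number of faces, as accumulated on one processor.
    std::unordered_map<Edge, int, EdgeHash> interfaceSizes
    {
        {{0, 1}, 12}, {{0, 2}, 7}, {{1, 2}, 3}
    };

    // Iterating the hash map directly gives an unspecified order, which may
    // differ between processors. Extract the keys and impose one ordering.
    std::vector<Edge> interfaces;
    for (const auto& entry : interfaceSizes)
    {
        interfaces.push_back(entry.first);
    }
    std::sort(interfaces.begin(), interfaces.end());   // agreed order

    // Every "processor" numbering patches in this loop now agrees.
    std::cout << "Region\tRegion\tFaces\n";
    for (const Edge& e : interfaces)
    {
        std::cout << e.first << '\t' << e.second << '\t'
                  << interfaceSizes[e] << '\n';
    }
    return 0;
}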


@ -82,7 +82,7 @@ void checkPatch(const polyBoundaryMesh& bMesh, const word& name)
<< exit(FatalError); << exit(FatalError);
} }
if (bMesh[patchI].size() == 0) if (bMesh[patchI].empty())
{ {
FatalErrorIn("checkPatch(const polyBoundaryMesh&, const word&)") FatalErrorIn("checkPatch(const polyBoundaryMesh&, const word&)")
<< "Patch " << name << " is present but zero size" << "Patch " << name << " is present but zero size"


@ -152,7 +152,7 @@ int main(int argc, char *argv[])
); );
if (args.options().size() == 0) if (args.options().empty())
{ {
FatalErrorIn(args.executable()) FatalErrorIn(args.executable())
<< "No options supplied, please use one or more of " << "No options supplied, please use one or more of "


@ -42,6 +42,7 @@ int main(int argc, char *argv[])
{ {
# include "addTimeOptions.H" # include "addTimeOptions.H"
# include "addRegionOption.H"
# include "setRootCase.H" # include "setRootCase.H"
# include "createTime.H" # include "createTime.H"
@ -53,7 +54,7 @@ int main(int argc, char *argv[])
runTime.setTime(Times[startTime], startTime); runTime.setTime(Times[startTime], startTime);
# include "createMesh.H" # include "createNamedMesh.H"
for (label i=startTime; i<endTime; i++) for (label i=startTime; i<endTime; i++)
{ {


@ -104,7 +104,7 @@ void domainDecomposition::distributeCells()
} }
} }
if (sameProcFaces.size() > 0) if (sameProcFaces.size())
{ {
Info<< "Selected " << sameProcFaces.size() Info<< "Selected " << sameProcFaces.size()
<< " faces whose owner and neighbour cell should be kept on the" << " faces whose owner and neighbour cell should be kept on the"
@ -123,7 +123,7 @@ void domainDecomposition::distributeCells()
*this *this
); );
if (sameProcFaces.size() == 0) if (sameProcFaces.empty())
{ {
cellToProc_ = decomposePtr().decompose(cellCentres()); cellToProc_ = decomposePtr().decompose(cellCentres());
} }


@ -100,7 +100,7 @@ int main(int argc, char *argv[])
args args
); );
if (!timeDirs.size()) if (timeDirs.empty())
{ {
FatalErrorIn(args.executable()) FatalErrorIn(args.executable())
<< "No times selected" << "No times selected"
@ -113,20 +113,10 @@ int main(int argc, char *argv[])
{ {
regionPrefix = regionName; regionPrefix = regionName;
} }
// Set all times (on reconstructed mesh and on processor meshes)
runTime.setTime(timeDirs[0], 0);
mesh.readUpdate();
forAll (databases, procI)
{
databases[procI].setTime(timeDirs[0], 0);
}
// Read all meshes and addressing to reconstructed mesh // Read all meshes and addressing to reconstructed mesh
processorMeshes procMeshes(databases, regionName); processorMeshes procMeshes(databases, regionName);
// check face addressing for meshes that have been decomposed // check face addressing for meshes that have been decomposed
// with a very old foam version // with a very old foam version
# include "checkFaceAddressingComp.H" # include "checkFaceAddressingComp.H"
@ -319,7 +309,7 @@ int main(int argc, char *argv[])
} }
if (cloudObjects.size() > 0) if (cloudObjects.size())
{ {
// Pass2: reconstruct the cloud // Pass2: reconstruct the cloud
forAllConstIter(HashTable<IOobjectList>, cloudObjects, iter) forAllConstIter(HashTable<IOobjectList>, cloudObjects, iter)


@ -558,7 +558,7 @@ void ensightFieldAscii
GeometricField<Type, fvPatchField, volMesh> vf(fieldObject, mesh); GeometricField<Type, fvPatchField, volMesh> vf(fieldObject, mesh);
if (!patchNames.size()) if (patchNames.empty())
{ {
if (Pstream::master()) if (Pstream::master())
{ {
@ -633,7 +633,7 @@ void ensightFieldAscii
const word& patchName = iter.key(); const word& patchName = iter.key();
const labelList& patchProcessors = iter(); const labelList& patchProcessors = iter();
if (!patchNames.size() || patchNames.found(patchName)) if (patchNames.empty() || patchNames.found(patchName))
{ {
if (patchIndices.found(patchName)) if (patchIndices.found(patchName))
{ {
@ -734,7 +734,7 @@ void ensightFieldBinary
GeometricField<Type, fvPatchField, volMesh> vf(fieldObject, mesh); GeometricField<Type, fvPatchField, volMesh> vf(fieldObject, mesh);
if (!patchNames.size()) if (patchNames.empty())
{ {
if (Pstream::master()) if (Pstream::master())
{ {
@ -805,7 +805,7 @@ void ensightFieldBinary
const word& patchName = iter.key(); const word& patchName = iter.key();
const labelList& patchProcessors = iter(); const labelList& patchProcessors = iter();
if (!patchNames.size() || patchNames.found(patchName)) if (patchNames.empty() || patchNames.found(patchName))
{ {
if (patchIndices.found(patchName)) if (patchIndices.found(patchName))
{ {


@ -129,7 +129,7 @@ Foam::ensightMesh::ensightMesh
{ {
wordList patchNameList(IStringStream(args.options()["patches"])()); wordList patchNameList(IStringStream(args.options()["patches"])());
if (!patchNameList.size()) if (patchNameList.empty())
{ {
patchNameList = allPatchNames_.toc(); patchNameList = allPatchNames_.toc();
} }
@ -163,7 +163,7 @@ Foam::ensightMesh::ensightMesh
label nHexes = 0; label nHexes = 0;
label nPolys = 0; label nPolys = 0;
if (!patchNames_.size()) if (patchNames_.empty())
{ {
forAll(cellShapes, celli) forAll(cellShapes, celli)
{ {
@ -267,7 +267,7 @@ Foam::ensightMesh::ensightMesh
const word& patchName = iter.key(); const word& patchName = iter.key();
nFacePrimitives nfp; nFacePrimitives nfp;
if (!patchNames_.size() || patchNames_.found(patchName)) if (patchNames_.empty() || patchNames_.found(patchName))
{ {
if (patchIndices_.found(patchName)) if (patchIndices_.found(patchName))
{ {
@ -403,7 +403,7 @@ void Foam::ensightMesh::writePrimsBinary
numElem = cellShapes.size(); numElem = cellShapes.size();
if (cellShapes.size() > 0) if (cellShapes.size())
{ {
// All the cellShapes have the same number of elements! // All the cellShapes have the same number of elements!
int numIntElem = cellShapes.size()*cellShapes[0].size(); int numIntElem = cellShapes.size()*cellShapes[0].size();
@ -917,7 +917,7 @@ void Foam::ensightMesh::writeAscii
labelList pointOffsets(Pstream::nProcs(), 0); labelList pointOffsets(Pstream::nProcs(), 0);
if (!patchNames_.size()) if (patchNames_.empty())
{ {
label nPoints = points.size(); label nPoints = points.size();
Pstream::gather(nPoints, sumOp<label>()); Pstream::gather(nPoints, sumOp<label>());
@ -1044,7 +1044,7 @@ void Foam::ensightMesh::writeAscii
{ {
const labelList& patchProcessors = iter(); const labelList& patchProcessors = iter();
if (!patchNames_.size() || patchNames_.found(iter.key())) if (patchNames_.empty() || patchNames_.found(iter.key()))
{ {
const word& patchName = iter.key(); const word& patchName = iter.key();
const nFacePrimitives& nfp = nPatchPrims_.find(patchName)(); const nFacePrimitives& nfp = nPatchPrims_.find(patchName)();
@ -1244,7 +1244,7 @@ void Foam::ensightMesh::writeBinary
labelList pointOffsets(Pstream::nProcs(), 0); labelList pointOffsets(Pstream::nProcs(), 0);
if (!patchNames_.size()) if (patchNames_.empty())
{ {
label nPoints = points.size(); label nPoints = points.size();
Pstream::gather(nPoints, sumOp<label>()); Pstream::gather(nPoints, sumOp<label>());
@ -1373,7 +1373,7 @@ void Foam::ensightMesh::writeBinary
iCount ++; iCount ++;
const labelList& patchProcessors = iter(); const labelList& patchProcessors = iter();
if (!patchNames_.size() || patchNames_.found(iter.key())) if (patchNames_.empty() || patchNames_.found(iter.key()))
{ {
const word& patchName = iter.key(); const word& patchName = iter.key();
const nFacePrimitives& nfp = nPatchPrims_.find(patchName)(); const nFacePrimitives& nfp = nPatchPrims_.find(patchName)();


@ -57,7 +57,7 @@ void writeEnsDataBinary
std::ofstream& ensightFile std::ofstream& ensightFile
) )
{ {
if (sf.size() > 0) if (sf.size())
{ {
List<float> temp(sf.size()); List<float> temp(sf.size());


@ -77,9 +77,9 @@ forAllIter(HashTable<HashTable<word> >, cloudFields, cloudIter)
} }
} }
if (!cloudIter().size()) if (cloudIter().empty())
{ {
Info<< "removing cloud " << cloudName<< endl; Info<< "removing cloud " << cloudName << endl;
cloudFields.erase(cloudIter); cloudFields.erase(cloudIter);
} }
} }

View File

@ -227,7 +227,7 @@ int main(int argc, char *argv[])
# include "getFieldNames.H" # include "getFieldNames.H"
bool hasLagrangian = false; bool hasLagrangian = false;
if ((sprayScalarNames.size() > 0) || (sprayVectorNames.size() > 0)) if (sprayScalarNames.size() || sprayVectorNames.size())
{ {
hasLagrangian = true; hasLagrangian = true;
} }

View File

@ -80,7 +80,7 @@ for(label i=0; i < nTypes; i++)
wordList lagrangianScalarNames = objects.names("scalarField"); wordList lagrangianScalarNames = objects.names("scalarField");
wordList lagrangianVectorNames = objects.names("vectorField"); wordList lagrangianVectorNames = objects.names("vectorField");
if (particles.size() > 0) if (particles.size())
{ {
# include "gmvOutputLagrangian.H" # include "gmvOutputLagrangian.H"
} }

View File

@ -49,16 +49,11 @@ forAll(lagrangianScalarNames, i)
) )
); );
if (s.size() != 0) if (s.size())
{ {
gmvFile << name << nl; gmvFile << name << nl;
for for (label n = 0; n < s.size(); n++)
(
label n = 0;
n < s.size();
n++
)
{ {
gmvFile << s[n] << token::SPACE; gmvFile << s[n] << token::SPACE;
} }
@ -85,16 +80,11 @@ forAll(lagrangianVectorNames, i)
) )
); );
if (v.size() != 0) if (v.size())
{ {
gmvFile << name + "x" << nl; gmvFile << name + "x" << nl;
for for (label n = 0; n < v.size(); n++)
(
label n = 0;
n < v.size();
n++
)
{ {
gmvFile << v[n].x() << token::SPACE; gmvFile << v[n].x() << token::SPACE;
} }
@ -102,12 +92,7 @@ forAll(lagrangianVectorNames, i)
gmvFile << name + "y" << nl; gmvFile << name + "y" << nl;
for for (label n = 0; n < v.size(); n++)
(
label n = 0;
n < v.size();
n++
)
{ {
gmvFile << v[n].y() << token::SPACE; gmvFile << v[n].y() << token::SPACE;
} }
@ -115,19 +100,13 @@ forAll(lagrangianVectorNames, i)
gmvFile << name + "z" << nl; gmvFile << name + "z" << nl;
for for (label n = 0; n < v.size(); n++)
(
label n = 0;
n < v.size();
n++
)
{ {
gmvFile << v[n].z() << token::SPACE; gmvFile << v[n].z() << token::SPACE;
} }
gmvFile << nl; gmvFile << nl;
} }
} }

View File

@ -47,16 +47,11 @@ forAll(lagrangianScalarNames, i)
) )
); );
if (s.size() != 0) if (s.size())
{ {
gmvFile << name << nl; gmvFile << name << nl;
for for (label n = 0; n < s.size(); n++)
(
label n = 0;
n < s.size();
n++
)
{ {
gmvFile << s[n] << token::SPACE; gmvFile << s[n] << token::SPACE;
} }

View File

@ -339,7 +339,7 @@ int main(int argc, char *argv[])
( (
args.options().found("time") args.options().found("time")
|| args.options().found("latestTime") || args.options().found("latestTime")
|| cellSetName.size() > 0 || cellSetName.size()
|| regionName != polyMesh::defaultRegion || regionName != polyMesh::defaultRegion
) )
{ {

View File

@ -58,7 +58,7 @@ void readFields
++iter ++iter
) )
{ {
if (!selectedFields.size() || selectedFields.found(iter()->name())) if (selectedFields.empty() || selectedFields.found(iter()->name()))
{ {
fields.set fields.set
( (

View File

@ -47,7 +47,7 @@ Foam::vtkMesh::vtkMesh
subsetter_(baseMesh), subsetter_(baseMesh),
setName_(setName) setName_(setName)
{ {
if (setName.size() > 0) if (setName.size())
{ {
// Read cellSet using whole mesh // Read cellSet using whole mesh
cellSet currentSet(baseMesh_, setName_); cellSet currentSet(baseMesh_, setName_);
@ -71,7 +71,7 @@ Foam::polyMesh::readUpdateState Foam::vtkMesh::readUpdate()
topoPtr_.clear(); topoPtr_.clear();
if (setName_.size() > 0) if (setName_.size())
{ {
Info<< "Subsetting mesh based on cellSet " << setName_ << endl; Info<< "Subsetting mesh based on cellSet " << setName_ << endl;

View File

@ -105,13 +105,13 @@ public:
//- Check if running subMesh //- Check if running subMesh
bool useSubMesh() const bool useSubMesh() const
{ {
return setName_.size() > 0; return setName_.size();
} }
//- topology //- topology
const vtkTopo& topo() const const vtkTopo& topo() const
{ {
if (!topoPtr_.valid()) if (topoPtr_.empty())
{ {
topoPtr_.reset(new vtkTopo(mesh())); topoPtr_.reset(new vtkTopo(mesh()));
} }

View File

@ -154,6 +154,11 @@ class vtkPV3Foam
return size_; return size_;
} }
bool empty() const
{
return (size_ == 0);
}
void reset() void reset()
{ {
start_ = -1; start_ = -1;
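Besides the call-site changes, the vtkPV3Foam hunk above adds an empty() accessor to a private start/size bookkeeping helper so that the same convention can be applied to it. The sketch below reproduces that pattern in a self-contained form as an illustration only: the class name arrayRange and the main() harness are hypothetical, and only the size_/start_ members and the body of empty() follow the hunk.

    #include <iostream>

    class arrayRange                      // hypothetical name for the helper
    {
        int start_;
        int size_;

    public:
        arrayRange()
        :
            start_(-1),
            size_(0)
        {}

        int start() const
        {
            return start_;
        }

        int size() const
        {
            return size_;
        }

        bool empty() const
        {
            return (size_ == 0);          // the accessor added in the hunk above
        }

        void reset()
        {
            start_ = -1;
            size_ = 0;
        }
    };

    int main()
    {
        arrayRange cells;
        std::cout << std::boolalpha << cells.empty() << std::endl;   // prints: true
        return 0;
    }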

View File

@ -147,14 +147,14 @@ int USERD_set_filenames
bool lagrangianNamesFound = false; bool lagrangianNamesFound = false;
label n = 0; label n = 0;
while ((!lagrangianNamesFound) && (n<Num_time_steps)) while (!lagrangianNamesFound && n < Num_time_steps)
{ {
runTime.setTime(TimeList[n+1], n+1); runTime.setTime(TimeList[n+1], n+1);
Cloud<passiveParticle> lagrangian(*meshPtr); Cloud<passiveParticle> lagrangian(*meshPtr);
n++; n++;
if (lagrangian.size()>0) if (lagrangian.size())
{ {
lagrangianNamesFound = true; lagrangianNamesFound = true;
} }

View File

@ -4,7 +4,6 @@ nVar -= Num_variables - nSprayVariables;
if (nVar >= 0) if (nVar >= 0)
{ {
word name = lagrangianScalarNames[nVar]; word name = lagrangianScalarNames[nVar];
IOField<scalar> s IOField<scalar> s
@ -20,15 +19,9 @@ if (nVar >= 0)
) )
); );
if (s.size() != 0) if (s.size())
{ {
for (label n = 0; n < s.size(); n++)
for
(
label n = 0;
n < s.size();
n++
)
{ {
var_array[n+1] = s[n]; var_array[n+1] = s[n];
} }
@ -36,7 +29,7 @@ if (nVar >= 0)
} }
else else
{ {
//Info << "getLagrangianScalar: nVar = " << nVar << endl; // Info << "getLagrangianScalar: nVar = " << nVar << endl;
return Z_UNDEF; return Z_UNDEF;
} }

View File

@ -21,16 +21,10 @@ if (nVar >= 0)
) )
); );
if (v.size() != 0) if (v.size())
{ {
for for (label n = 0; n < v.size(); n++)
(
label n = 0;
n < v.size();
n++
)
{ {
if (component == 0) if (component == 0)
{ {
var_array[n+1] = v[n].x(); var_array[n+1] = v[n].x();

View File

@ -235,7 +235,7 @@ static void createFieldNames
HashSet<word> surfScalarHash; HashSet<word> surfScalarHash;
HashSet<word> surfVectorHash; HashSet<word> surfVectorHash;
if (setName.size() == 0) if (setName.empty())
{ {
forAll(Times, timeI) forAll(Times, timeI)
{ {
@ -536,13 +536,12 @@ void user_query_file_function
fileName caseName(rootAndCase.name()); fileName caseName(rootAndCase.name());
// handle trailing '/' // handle trailing '/'
if (caseName.size() == 0) if (caseName.empty())
{ {
caseName = rootDir.name(); caseName = rootDir.name();
rootDir = rootDir.path(); rootDir = rootDir.path();
} }
Info<< "rootDir : " << rootDir << endl Info<< "rootDir : " << rootDir << endl
<< "caseName : " << caseName << endl << "caseName : " << caseName << endl
<< "setName : " << setName << endl; << "setName : " << setName << endl;

View File

@ -150,7 +150,7 @@ const Foam::fvMesh& Foam::readerDatabase::mesh() const
<< "No mesh set" << abort(FatalError); << "No mesh set" << abort(FatalError);
} }
if (setName_.size() == 0) if (setName_.empty())
{ {
return *meshPtr_; return *meshPtr_;
} }
@ -265,7 +265,7 @@ void Foam::readerDatabase::loadMesh()
IOobject::AUTO_WRITE IOobject::AUTO_WRITE
); );
if (setName_.size() != 0) if (setName_.size())
{ {
Info<< "Subsetting mesh based on cellSet " << setName_ << endl; Info<< "Subsetting mesh based on cellSet " << setName_ << endl;
@ -294,7 +294,7 @@ Foam::polyMesh::readUpdateState Foam::readerDatabase::setTime
// Update loaded mesh // Update loaded mesh
meshChange = meshPtr_->readUpdate(); meshChange = meshPtr_->readUpdate();
if ((setName_.size() != 0) && (meshChange != polyMesh::UNCHANGED)) if (setName_.size() && meshChange != polyMesh::UNCHANGED)
{ {
Info<< "Subsetting mesh based on " << setName_ << endl; Info<< "Subsetting mesh based on " << setName_ << endl;

View File

@ -206,7 +206,7 @@ void mapLagrangian(const meshToMesh& meshToMeshInterp)
// Do closer inspection for unmapped particles // Do closer inspection for unmapped particles
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if (unmappedSource.size() > 0) if (unmappedSource.size())
{ {
meshSearch targetSearcher(meshTarget, false); meshSearch targetSearcher(meshTarget, false);
@ -237,7 +237,7 @@ void mapLagrangian(const meshToMesh& meshToMeshInterp)
Info<< " after additional mesh searching found " Info<< " after additional mesh searching found "
<< targetParcels.size() << " parcels in target mesh." << endl; << targetParcels.size() << " parcels in target mesh." << endl;
if (addParticles.size() > 0) if (addParticles.size())
{ {
IOPosition<passiveParticle>(targetParcels).write(); IOPosition<passiveParticle>(targetParcels).write();

View File

@ -0,0 +1,3 @@
mdInitialise.C
EXE = $(FOAM_APPBIN)/mdInitialise

View File

@ -0,0 +1,18 @@
EXE_INC = \
-I$(LIB_SRC)/meshTools/lnInclude \
-I$(LIB_SRC)/dynamicMesh/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/molecule/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/potential/lnInclude \
-I$(LIB_SRC)/lagrangian/molecularDynamics/molecularMeasurements/lnInclude \
-I$(LIB_SRC)/lagrangian/basic/lnInclude \
-I$(LIB_SRC)/finiteVolume/lnInclude
EXE_LIBS = \
-lmeshTools \
-ldynamicMesh \
-lfiniteVolume \
-llagrangian \
-lmolecule \
-lpotential \
-lmolecularMeasurements

View File

@ -0,0 +1,95 @@
/*---------------------------------------------------------------------------*\
========= |
\\ / F ield | OpenFOAM: The Open Source CFD Toolbox
\\ / O peration |
\\ / A nd | Copyright (C) 1991-2008 OpenCFD Ltd.
\\/ M anipulation |
-------------------------------------------------------------------------------
License
This file is part of OpenFOAM.
OpenFOAM is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the
Free Software Foundation; either version 2 of the License, or (at your
option) any later version.
OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
for more details.
You should have received a copy of the GNU General Public License
along with OpenFOAM; if not, write to the Free Software Foundation,
Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
\*---------------------------------------------------------------------------*/
#include "md.H"
#include "fvCFD.H"
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
int main(int argc, char *argv[])
{
# include "setRootCase.H"
# include "createTime.H"
# include "createMesh.H"
IOdictionary mdInitialiseDict
(
IOobject
(
"mdInitialiseDict",
runTime.system(),
runTime,
IOobject::MUST_READ,
IOobject::NO_WRITE,
false
)
);
IOdictionary idListDict
(
IOobject
(
"idList",
mesh.time().constant(),
mesh,
IOobject::NO_READ,
IOobject::AUTO_WRITE
)
);
potential pot(mesh, mdInitialiseDict, idListDict);
moleculeCloud molecules(mesh, pot, mdInitialiseDict);
label totalMolecules = molecules.size();
if (Pstream::parRun())
{
reduce(totalMolecules, sumOp<label>());
}
Info<< nl << "Total number of molecules added: " << totalMolecules
<< nl << endl;
IOstream::defaultPrecision(15);
if (!mesh.write())
{
FatalErrorIn(args.executable())
<< "Failed writing moleculeCloud."
<< nl << exit(FatalError);
}
Info<< nl << "ClockTime = " << runTime.elapsedClockTime() << " s"
<< nl << endl;
Info << nl << "End\n" << endl;
return 0;
}
// ************************************************************************* //

View File

@ -2,7 +2,7 @@
========= | ========= |
\\ / F ield | OpenFOAM: The Open Source CFD Toolbox \\ / F ield | OpenFOAM: The Open Source CFD Toolbox
\\ / O peration | \\ / O peration |
\\ / A nd | Copyright (C) 1991-2009 OpenCFD Ltd. \\ / A nd | Copyright (C) 1991-2008 OpenCFD Ltd.
\\/ M anipulation | \\/ M anipulation |
------------------------------------------------------------------------------- -------------------------------------------------------------------------------
License License

Some files were not shown because too many files have changed in this diff.