#2771
jb
Member

Hi Albert,

Thanks again.
Except for a small difference on the line “int mult = 2 / (d…”, I didn’t find anything wrong. I actually think the problem is not with the slip-free boundary condition but potentially with the pressure outlet.

Because I want to run high-Reynolds-number cases, I have opted for a pressure-driven, periodic domain approach, and I have implemented a fringe region to set the inflow.

However, I have two questions:

1. When I try to run my simulations with MPI, they fail on the following line of code (which I use to initialize the fringe region). For a simulation on 2 processors, the code fails when iCloc = 1.

Code:
for (int iCloc = 0; iCloc < noOfCuboids; iCloc++) {
    BlockGeometryStructure2D<T>& tmp = superGeometry.getBlockGeometry(iCloc);
    dom_origin = tmp.getOrigin();
}

Something goes wrong when I ask for the origin. Any suggestion what could be wrong here?

2.
I have tried to make the code as fast as possible by computing as much of the fringe region as possible only once, in the initialization phase. However, depending on the number of lattice nodes included in the fringe region, the code slows down by 40–50%. I use the following function in the file superLattice2D.hh. Any suggestion on how I can reduce the computational cost of the fringe region?

Code:
template<typename T, template<typename U> class Lattice>
void SuperLattice2D<T,Lattice>::defineFringe( int* fringe_iCloc, int* fringe_iX, int* fringe_iY,
                                              T* fringe_weight, int fieldBeginsAt, int sizeOfField,
                                              T velocity, int fringe_N )
{
  T fringeF[1];
  T output[2];
  for (int iR = 0; iR < fringe_N; ++iR) {
    _extendedBlockLattices[ fringe_iCloc[iR] ].get( fringe_iX[iR], fringe_iY[iR] ).computeU(output);
    fringeF[0] = fringe_weight[iR] * (output[0] - velocity);
    _extendedBlockLattices[ fringe_iCloc[iR] ].get( fringe_iX[iR], fringe_iY[iR] ).defineExternalField(
        fieldBeginsAt, sizeOfField, fringeF );
  }
}

Thanks a lot, your help is really appreciated,
Juliaan