Overlap for the FD BC and the simultaneous use of Bouzidi and a Thermal lattice
This topic has 4 replies, 3 voices, and was last updated 3 years, 7 months ago by mathias.
July 20, 2021 at 5:27 pm · #5830 · vaillant (Participant)
Dear developers,
I think I spotted two bugs in the olb-1.3r1 version, and from what I can tell they are still present in olb-1.4r0, but I am not certain, since I am still working with the previous version.
First, when the FD boundary condition is used, `setInterpolatedVelocityBoundary` is called and sets the overlap with `int _overlap = 1;`. It then calls `addPoints2CommBC(sLattice, std::forward<decltype(indicator)>(indicator), _overlap);`. If I understand the algorithm correctly, `addPoints2CommBC` adds to `_commBC` those overlap cells that neighbor a cell with a material number different from zero. With an overlap of 1, only the first overlap layer can be added.
But then, when `OuterVelocityEdgeProcessor3D` is used for external cells, it calls `interpolateGradients`, which itself uses `interpolateVector`. This last function reads the value of a cell two lattice spacings away with:

`blockLattice.get( iX+(direction==0 ? (-2*orientation):0), iY+(direction==1 ? (-2*orientation):0), iZ+(direction==2 ? (-2*orientation):0) ).computeU(u2);`

With an overlap of 1, this cell is not known when the external cell lies near the cuboid border. Consequently, it should be `int _overlap = 2;`.

This issue caused a crash in some of my computations. At first I managed to make it work by calling `slattice.communicate();` (which communicates with an overlap of 2) in the main file, without understanding where the bug came from. But after some digging, I am pretty sure it comes from `int _overlap = 1;`.
The second bug occurs when you use the Bouzidi BC together with a thermal lattice. In `NavierStokesAdvectionDiffusionCouplingPostProcessor3D`, when `processSubDomain` is called, the velocity of the first lattice is written into the second lattice via:

`auto u = tPartner->get(iX,iY,iZ).template getField<descriptors::VELOCITY>(); blockLattice.get(iX,iY,iZ).computeU(u.data());`

But for a boundary cell, `computeU(u.data())` computes a wrong value, since the dynamics with the Bouzidi BC is `NoDynamics`, whose `computeU` does:

`for (int iD=0; iD<DESCRIPTOR::d; ++iD) { u[iD] = T(); }`
Best regards
July 21, 2021 at 3:10 pm · #5834 · Adrian (Keymaster)

Thank you very much for the bug reports!
I will take a closer look at how to fix this in the general case, but from your description this is definitely something that should be handled differently. An overlap of 1 is correct for the interpolated velocity boundary by itself, but it doesn't take into account the possibility of neighbor momenta requiring their own neighborhood; in the worst case this relationship could extend arbitrarily far. In a different context we are already investigating whether to communicate the momenta data from neighboring blocks instead of enabling only-partial computation from the overlap.
July 21, 2021 at 3:13 pm · #5835 · Adrian (Keymaster)

Regarding the second bug: could you provide more details on the context in which you use this? `NoDynamics` for inactive boundary cells should be correct, as Bouzidi only reconstructs the populations streaming out of the boundary, not those of the non-fluid cells themselves.
July 23, 2021 at 6:18 pm · #5850 · vaillant (Participant)

For the second bug, let's take one boundary cell. To initialize Bouzidi, `setBouzidiVelocityBoundary` is called, which adds a `VelocityBouzidiLinearPostProcessorGenerator3D` to the neighboring fluid cell. This post-processor performs the Bouzidi algorithm directly on the fluid node, so I understand that `NoDynamics` is correct for the boundary node, since it just stores the populations. This works well with one lattice. But if you add another lattice to solve the advection-diffusion equation, there is a problem.
If we take the same boundary cell again, we initialize it with `setAdvectionDiffusionTemperatureBoundary`, and if we assume for example that the boundary cell is an edge, so `discreteNormal[0] == 3`, the dynamics is then `AdvectionDiffusionEdgesDynamics` and the momenta is `EquilibriumBM`.
When the moment comes to collide this boundary cell, `collide` of `AdvectionDiffusionEdgesDynamics` needs the velocity of the fluid to compute the equilibrium, so it calls:

`auto u = cell.template getField<descriptors::VELOCITY>();`
Let us look at what has been stored in this field. It is set in `NavierStokesAdvectionDiffusionCouplingPostProcessor3D`, in `processSubDomain`, with:

`auto u = tPartner->get(iX,iY,iZ).template getField<descriptors::VELOCITY>(); blockLattice.get(iX,iY,iZ).computeU(u.data()); tPartner->get(iX,iY,iZ).template setField<descriptors::VELOCITY>(u);`

The `blockLattice` here is the first lattice. So on the boundary cell the dynamics is `NoDynamics`, and `computeU(u.data())` does:

`for (int iD=0; iD<DESCRIPTOR::d; ++iD) { u[iD] = T(); }`
So even if a velocity is applied on the boundary cell, the value stored in the field of the second lattice will be 0. Therefore in `AdvectionDiffusionEdgesDynamics`, when the equilibrium is computed with `lbH::equilibriumFirstOrder(unknownIndexes[iPop], temperature, u.data())`, `u` comes from the field and is 0, and the equilibrium takes a value accordingly. Meanwhile, the neighboring fluid cell retrieves the correct velocity from its field, since `computeU(u.data())` of, for example, `BGKdynamics` would have given it the right value.
To sum up: for the thermal lattice, the boundary cell "sees" no velocity while its fluid neighbor "sees" a velocity. This is why, I think, it crashes when I try it. After correcting the value of the velocity in the field, it works fine.

August 6, 2021 at 5:03 pm · #5897 · mathias (Keymaster)

Bouzidi for the ADE is currently not supported.