
Overlap for the FD BC and the simultaneous use of Bouzidi and a Thermal lattice

    #5830
    vaillant
    Participant

    Dear developers,

    I think I spotted two bugs in olb-1.3r1, and from what I can see they are still there in olb-1.4r0, but I am not sure, since I am still working with the previous version.

    First of all, when the FD boundary condition is used, setInterpolatedVelocityBoundary is called and sets the overlap with int _overlap = 1;.
    It then calls addPoints2CommBC(sLattice, std::forward<decltype(indicator)>(indicator), _overlap);. If I understand this algorithm correctly, addPoints2CommBC adds to _commBC the overlap cells that are neighbours of a cell whose material is not equal to zero. With an overlap of 1, only the first layer of the overlap can be added.
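
    To illustrate, here is a minimal sketch of the behaviour as I understand it (not the actual addPoints2CommBC code; collectCommCells, materialAt and the loop bounds are hypothetical): with an overlap of 1, only the first ring of cells outside the cuboid is ever visited, so nothing two cells away can be registered.

      #include <array>
      #include <functional>
      #include <vector>

      using CellIndex = std::array<int,3>;

      // Collect the overlap cells that have at least one direct neighbour
      // whose material number is different from zero. materialAt is assumed
      // to return 0 outside the geometry. With overlap == 1 only the first
      // layer around the cuboid core is scanned at all.
      std::vector<CellIndex> collectCommCells(
          std::function<int(int,int,int)> materialAt,
          int nx, int ny, int nz, int overlap)
      {
        std::vector<CellIndex> commBC;
        auto inCore = [&](int x, int y, int z) {
          return x >= 0 && x < nx && y >= 0 && y < ny && z >= 0 && z < nz;
        };
        for (int iX = -overlap; iX < nx + overlap; ++iX)
        for (int iY = -overlap; iY < ny + overlap; ++iY)
        for (int iZ = -overlap; iZ < nz + overlap; ++iZ) {
          if (inCore(iX, iY, iZ)) continue;   // keep only overlap cells
          bool hasMaterialNeighbour = false;
          for (int dX = -1; dX <= 1 && !hasMaterialNeighbour; ++dX)
          for (int dY = -1; dY <= 1 && !hasMaterialNeighbour; ++dY)
          for (int dZ = -1; dZ <= 1 && !hasMaterialNeighbour; ++dZ) {
            hasMaterialNeighbour = (materialAt(iX+dX, iY+dY, iZ+dZ) != 0);
          }
          if (hasMaterialNeighbour) {
            commBC.push_back({iX, iY, iZ});   // cell will be communicated
          }
        }
        return commBC;
      }
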
    But then, when OuterVelocityEdgeProcessor3D is used for external cells, it calls interpolateGradients, which itself uses interpolateVector. This last function reads the value of cells two lattice steps away with:

        blockLattice.get (
          iX+(direction==0 ? (-2*orientation):0),
          iY+(direction==1 ? (-2*orientation):0),
          iZ+(direction==2 ? (-2*orientation):0) ).computeU(u2);

    But with an overlap of 1, this cell is not known when the external cell is near the cuboid border.
    Consequently, it should be int _overlap = 2;.

    This issue caused a crash in some of my computations. At first I managed to make it work by calling sLattice.communicate(); (which communicates with an overlap of 2) in the main file, without understanding where the bug came from. But after some digging, I am pretty sure it comes from int _overlap = 1;.
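
    For reference, the workaround looked roughly like this in the main time loop (a sketch; maxTimeSteps, setBoundaryValues, converter and superGeometry stand in for the usual application-level setup):

      // Workaround sketch: force a full overlap communication every step
      // before collideAndStream, so the boundary post processors find
      // valid neighbour data even two cells into the overlap.
      for (int iT = 0; iT < maxTimeSteps; ++iT) {
        setBoundaryValues(sLattice, converter, iT, superGeometry);
        sLattice.communicate();       // refreshes the full overlap (width 2)
        sLattice.collideAndStream();  // collision, streaming, post processing
      }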

    The second bug appears when you use the Bouzidi BC and want to use a thermal lattice at the same time.
    In NavierStokesAdvectionDiffusionCouplingPostProcessor3D, when processSubDomain is called, the velocity of the first lattice is written into the second lattice with:

      auto u = tPartner->get(iX,iY,iZ).template getField<descriptors::VELOCITY>();
      blockLattice.get(iX,iY,iZ).computeU(u.data());

    But for a boundary cell, computeU(u.data()) computes a wrong value, since the dynamics with the Bouzidi BC is NoDynamics, and its computeU does:

      for (int iD=0; iD<DESCRIPTOR::d; ++iD) {
        u[iD] = T();
      }

    Best regards

    #5834
    Adrian
    Keymaster

    Thank you very much for the bug reports!

    I will take a closer look at how to fix this in the general case, but from your description this is definitely something that should be handled differently. An overlap of 1 is correct for the interpolated velocity boundary by itself, but it doesn't take into account the possibility of neighbour momenta requiring their own neighbourhood – in the worst case this relationship could extend arbitrarily far. In a different context we are already investigating whether to communicate the momenta data from neighboring blocks instead of enabling only-partial computation from the overlap.

    #5835
    Adrian
    Keymaster

    w.r.t. the second bug: could you provide more details on the context in which you use this? NoDynamics for inactive boundary cells should be correct, as Bouzidi only reconstructs the populations streaming out of the boundary and not those of the non-fluid cells themselves.

    #5850
    vaillant
    Participant

    For the second bug, let's take one boundary cell. To initialize Bouzidi, setBouzidiVelocityBoundary is called, which adds a VelocityBouzidiLinearPostProcessorGenerator3D to the neighbouring fluid cell. This post processor performs the Bouzidi algorithm directly on the fluid node, so I understand that NoDynamics is correct for the boundary node, since it just stores the populations.
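
    For context, the rule applied on the fluid node is essentially the linear Bouzidi interpolation (a simplified, textbook-style sketch with hypothetical names; the moving-wall velocity term is omitted):

      // Linear Bouzidi bounce-back evaluated on the fluid node x_f for one
      // link i that crosses the wall (sketch, not the OpenLB implementation).
      // q           : normalized wall distance along the link (0 < q <= 1)
      // fFluid_i    : post-collision population in direction i on x_f
      // fBehind_i   : post-collision population in direction i on the next
      //               node away from the wall
      // fFluid_iOpp : post-collision population in the opposite direction on x_f
      // The result replaces the population that would otherwise stream in from
      // the solid node, which is why the solid node only has to store data and
      // NoDynamics is sufficient there.
      double bouzidiLinear(double q, double fFluid_i,
                           double fBehind_i, double fFluid_iOpp)
      {
        if (q < 0.5) {
          return 2.0*q*fFluid_i + (1.0 - 2.0*q)*fBehind_i;
        }
        return fFluid_i/(2.0*q) + (2.0*q - 1.0)/(2.0*q)*fFluid_iOpp;
      }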

    This works well with one lattice. But if you add another lattice to solve the advection-diffusion equation, there is a problem.

    If we still take the same boundary cell, we initialize it with setAdvectionDiffusionTemperatureBoundary, and if we assume, for example, that the boundary cell is an edge, so that discreteNormal[0] == 3, then the dynamics is AdvectionDiffusionEdgesDynamics and the momenta is EquilibriumBM.
    When the moment comes to collide this boundary cell, the collide of AdvectionDiffusionEdgesDynamics needs the velocity of the fluid to compute the equilibrium, so it calls:
    auto u = cell.template getField<descriptors::VELOCITY>();
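
    To see why this velocity matters: the first-order equilibrium used for the advection-diffusion lattice is essentially the textbook form below (a sketch, ignoring OpenLB's internal population shift); the advective part is entirely carried by u.

      // Textbook first-order advection-diffusion equilibrium (sketch, not
      // OpenLB's exact helper): g_i^eq = w_i * T * (1 + invCs2 * (c_i . u)).
      // With u == 0 the advective contribution vanishes entirely.
      double adEquilibriumFirstOrder(double w_i, double temperature,
                                     const double c_i[3], const double u[3],
                                     double invCs2)
      {
        const double cu = c_i[0]*u[0] + c_i[1]*u[1] + c_i[2]*u[2];
        return w_i * temperature * (1.0 + invCs2 * cu);
      }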

    Let's look at what has been stored in this field. It is set in NavierStokesAdvectionDiffusionCouplingPostProcessor3D, in processSubDomain, with:

    auto u = tPartner->get(iX,iY,iZ).template getField<descriptors::VELOCITY>();
    blockLattice.get(iX,iY,iZ).computeU(u.data());
    tPartner->get(iX,iY,iZ).template setField<descriptors::VELOCITY>(u);

    The blockLattice here belongs to the first lattice. So on the boundary cell the dynamics is NoDynamics, and computeU(u.data()) will do:

      for (int iD=0; iD<DESCRIPTOR::d; ++iD) {
        u[iD] = T();
      }

    So even if a velocity is applied on the boundary cell, the value stored in the field of the second lattice will be 0. Hence, in AdvectionDiffusionEdgesDynamics, when the equilibrium is computed with lbH::equilibriumFirstOrder(unknownIndexes[iPop], temperature, u.data()), u comes from the field and is therefore 0, and the equilibrium takes its value accordingly. In the meantime, the neighbouring fluid cell retrieves the correct velocity from its field, since the computeU(u.data()) of, for example, BGKdynamics would have given it the right value.
    To summarize: for the thermal lattice, the boundary cell 'sees' no velocity while its fluid neighbour 'sees' a velocity. This is why, I think, it crashes when I try it. After correcting the value of the velocity stored in the field, it works fine.
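
    For illustration, one way to apply such a correction could look roughly like the following (a sketch; sLatticeAD, superGeometry, boundaryMaterial and wallVelocityF are placeholders for the application setup, wallVelocityF being an analytical functor returning the prescribed wall velocity, and it is assumed that defineField can set the VELOCITY field by material number):

      // Sketch of a possible correction: after the coupling post processor
      // has run, overwrite the VELOCITY field of the AD lattice on the
      // Bouzidi boundary cells with the prescribed wall velocity, so that
      // AdvectionDiffusionEdgesDynamics no longer sees u = 0 there. Since
      // the coupling overwrites the field every step, this has to be
      // re-applied inside the time loop.
      sLatticeAD.defineField<descriptors::VELOCITY>(
        superGeometry, boundaryMaterial, wallVelocityF);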

    #5897
    mathias
    Keymaster

    Bouzidi for ADE is currently not supported.
