
Modifying field values for neighboring cells inside a postprocessor


    #9322
    Danial.Khazaeipoul
    Participant

    Thank you, Mathias, for the explanation. The code I’ve implemented works well in a single partition setup where localCellID == globalCellID. However, in configurations with multiple partitions or blocks, localCellID != globalCellID, leading to non-unique IDs.

    Local cell IDs, which are of type CellID (i.e. std::uint32_t), have an advantage in CCL algorithms: within a block they are unique and remain consistent throughout the simulation. Because of this, I am looking for a way to generate global cell IDs with the same characteristics. For example, I am considering a method like the following, which creates a unique global cell ID for each cell:

    std::uint32_t globalCellID = blockID * cellsPerBlock + localCellID;

    #9326
    Adrian
    Keymaster

    Yes, the cell IDs are block-local. Computing a global ID in the way you posted will work if you use the maximum global block size for the multiplier.
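    For illustration, a sketch of that computation (variable names such as maxCellsPerBlock, globalCuboidID and localCellID are illustrative, and getLatticeVolume() is assumed here to return the number of cells of a cuboid):

    // Determine the largest block size over all global cuboids
    std::size_t maxCellsPerBlock = 0;
    auto& cGeometry = sLattice.getCuboidGeometry();
    for (int iC = 0; iC < cGeometry.getNc(); ++iC) {
      const std::size_t nCells = cGeometry.get(iC).getLatticeVolume();
      if (nCells > maxCellsPerBlock) {
        maxCellsPerBlock = nCells;
      }
    }

    // Collision-free global ID: global cuboid index times the maximum block
    // size plus the block-local cell ID (e.g. loadBalancer.glob(iC) and the
    // block-local CellID). std::size_t avoids overflow for large domains.
    std::size_t globalCellID = globalCuboidID * maxCellsPerBlock + localCellID;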

    #9333
    Danial.Khazaeipoul
    Participant

    Just to confirm my understanding, by

    if you use the maximum global block size

    Are you referring to the fact that cuboids may not have an equal number of associated cells, and that using the maximum block size can prevent overlapping global cell IDs?

    #9334
    Adrian
    Keymaster

    Yes, the cuboids commonly do not have exactly the same number of cells. You can also see this in the cuboid geometry output.
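    If it helps, that output can be produced directly from the cuboid geometry; a minimal sketch, assuming its print() method and the getCuboidGeometry() accessor used elsewhere in this thread:

    // Print the cuboid decomposition, including the extent and
    // cell count of each cuboid
    sLattice.getCuboidGeometry().print();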

    #9362
    Danial.Khazaeipoul
    Participant

    Using the cell interface, is there any method that can be used to identify the global cuboid number to which the current cell belongs?

    #9363
    Adrian
    Keymaster

    No, this is not intended, as the individual threads of a kernel only ever process cells of the same block (all LBM processing is split into per-block operators). You should be able to handle the cuboid number / cell ID mapping at the same place where you perform the inter-block communication of your algorithm.

    However, in principle nothing prevents you from passing the cuboid number as a block-specific parameter to your operator (with application scope PerCellWithParameters).
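    A minimal sketch of such an operator, assuming the OperatorScope::PerCellWithParameters scope and a hypothetical CUBOID_ID parameter field (names are illustrative, not part of OpenLB):

    // Hypothetical field carrying the global cuboid number of a block
    struct CUBOID_ID : public descriptors::TYPED_FIELD_BASE<std::size_t,1> { };

    // Sketch of a cell operator that reads the cuboid number as a parameter
    struct CCLOperator {
      static constexpr OperatorScope scope = OperatorScope::PerCellWithParameters;

      using parameters = meta::list<CUBOID_ID>;

      int getPriority() const {
        return 0;
      }

      template <typename CELL, typename PARAMETERS>
      void apply(CELL& cell, PARAMETERS& params) any_platform {
        // Block-specific value, set per block via setParameter
        const std::size_t cuboidID = params.template get<CUBOID_ID>();
        // ... combine cuboidID with the block-local cell ID ...
      }
    };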

    #9474
    Danial.Khazaeipoul
    Participant

    Hello Adrian,

    Is my understanding correct regarding the use of a block-specific parameter passed to an operator? Suppose I have an offset vector, synchronized across all processors, which records the cumulative number of cells in all preceding blocks, as shown below.

    // Block-specific parameter field holding the cell-count offset of a block
    struct CELL_COUNT : public descriptors::TYPED_FIELD_BASE<std::size_t,1> { };
    // One offset entry per global cuboid
    std::vector<std::size_t> offset(_sLattice.getCuboidGeometry().getNc(), 0);

    The size of the offset vector matches the number of global cuboids. For a domain with 4 blocks, the resulting offset vector might, for example, look like {0, 431433, 855297, 1279161}. Given this setup, will the following method yield the expected results for CELL_COUNT, as shown in the image?

    // Set the block-specific offset on every local block, indexed by the
    // global cuboid number obtained from the load balancer
    for (int iC = 0; iC < _sLattice.getLoadBalancer().size(); ++iC)
    {
      auto& block = _sLattice.getBlock(iC);
      block.template setParameter<FreeSurface::CELL_COUNT>(offset[_sLattice.getLoadBalancer().glob(iC)]);
    }

    Please note that the image is only for demonstration purposes. When an operator processes a cell, will reading the parameter return the correct value of CELL_COUNT for the block in which the cell resides?
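    For reference, here is how the offset vector itself is filled, as a prefix sum of the per-cuboid cell counts (again assuming getLatticeVolume() returns the number of cells of a cuboid):

    // offset[iC] = total number of cells in the global cuboids 0 .. iC-1
    auto& cGeometry = _sLattice.getCuboidGeometry();
    for (int iC = 1; iC < cGeometry.getNc(); ++iC) {
      offset[iC] = offset[iC-1] + cGeometry.get(iC-1).getLatticeVolume();
    }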

    #9478
    Adrian
    Keymaster

    Yes, this should work as you convert the local cuboid index iC to the global one for accessing the offset array.

    #9541
    Danial.Khazaeipoul
    Participant

    Dear Adrian,

    I believe I’ve identified the issue with the CCL implementation when MPI is enabled. Thank you for your insights so far in working toward a robust solution. Since you’ve already reviewed the code, you’re familiar with how cell IDs are used to assign bubble IDs and to navigate between cells in each kernel to access their bubble IDs.

    I’ll do my best to clearly explain the problem. The CCL algorithm assumes that cell IDs follow a consistent order in each direction within a block, which works fine for a single block. However, when MPI is enabled and the domain is divided into multiple blocks, this consistency breaks down for neighboring cells located in different blocks. As a result, the bubble ID assignment becomes incorrect for blocks other than Block 0. In the worst-case scenario, this can lead to illegal memory access, especially since blocks may have an unequal number of cells; see the picture below.

    When the orange cell in Block #2 is being processed, it accesses the bubble ID of the red cell, as the red cell is a valid neighbor. However, since the red cell belongs to a different block, its cell ID does not follow the block-local order of cell IDs in Block #2. When the orange cell adopts the bubble ID of the red cell, it may incorrectly reference a cell within Block #2, as indicated by the blue arrow. This can result in either assigning an incorrect bubble ID to the orange cell or, in the worst-case scenario, causing illegal memory access. This happens because Block #2 may not contain a cell with the cell ID from Block #0.

    Is it possible to defer the automatic communication between blocks for a specific global data structure until it is explicitly triggered when needed?

    Regards,
    Danial

    #9555
    Adrian
    Keymaster

    Sure, the overlap communication only happens at discrete and configurable stages, i.e. you can move all communication related to the CCL implementation into one (or more) distinct communicator stages fully within your control. See e.g. User Guide Section 2.2.3.

    You can create a new communicator stage for CCL simply by requesting it:

    
    // Tag type identifying a custom communication stage for CCL
    namespace stage { struct CCL { }; }
    
    {
      // Set up the communicator for the CCL stage: request the fields and
      // overlap width to be exchanged, then finalize the requests
      auto& comm = sLattice.getCommunicator(stage::CCL{});
      comm.template requestField<descriptors::POPULATION>();
      comm.requestOverlap(1);
      comm.exchangeRequests();
    }
    

    This stage will only be triggered if you explicitly call sLattice.getCommunicator(stage::CCL{}).communicate(). It is also possible to request more than one field, or other subsets of the overlap than just a constant width, if necessary.
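    For example, placed at the point in your CCL loop where the exchanged data is needed:

    // Explicitly trigger the exchange for the CCL stage; nothing is
    // communicated for this stage unless this call is made
    sLattice.getCommunicator(stage::CCL{}).communicate();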
