
load coarse mesh data to a fine mesh



  • #10968
    nipinl
    Participant

    What would be the simplest way to load coarse simulation data as a starting point for a simulation with a fine mesh? I did not find any straightforward way of doing this. Both the coarse and the fine mesh are uniform. For simplicity, think of, say, N = 50 to N = 100 with exactly the same geometry.

    #10969
    Adrian
    Keymaster

    You can use the AnalyticalFfromSuperF interpolation functor to interpolate data (exposed by functors, e.g. SuperLatticeFieldF) between resolutions. The geometry setup should be separate and will need special handling depending on the case, as there is no generic way to transfer the boundary setup between resolutions for arbitrary BCs.
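
    Conceptually, such an interpolation functor wraps the discrete coarse lattice data in a continuous function that interpolates between cell centres, which can then be evaluated at the physical positions of the fine cells. A minimal self-contained sketch of that idea in 1D (plain C++ with illustrative names such as `interpolateCoarse` and `refine`; not OpenLB API, which does this trilinearly in 3D):

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Linearly interpolate a coarse 1D field at an arbitrary physical position x.
    double interpolateCoarse(const std::vector<double>& coarseData,
                             double dxCoarse, double x)
    {
      const double s = x / dxCoarse;                  // position in coarse cell units
      const std::size_t i0 = static_cast<std::size_t>(std::floor(s));
      const std::size_t i1 = std::min(i0 + 1, coarseData.size() - 1);
      const double w = s - static_cast<double>(i0);   // interpolation weight in [0,1)
      return (1.0 - w) * coarseData[i0] + w * coarseData[i1];
    }

    // Initialize a fine field (here: half the spacing) by evaluating the
    // interpolant at every fine cell position.
    std::vector<double> refine(const std::vector<double>& coarseData,
                               double dxCoarse, std::size_t nFine)
    {
      const double dxFine = dxCoarse / 2.0;
      std::vector<double> fineData(nFine);
      for (std::size_t j = 0; j < nFine; ++j) {
        fineData[j] = interpolateCoarse(coarseData, dxCoarse, j * dxFine);
      }
      return fineData;
    }
    ```

    In OpenLB itself the analogous step is constructing `AnalyticalFfromSuperF3D` from a lattice functor and passing it to `defineRhoU` / `iniEquilibrium` on the fine lattice, as shown later in this thread.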

    #10971
    nipinl
    Participant

    I made coarse and fine versions of superGeometry and SuperLattice based on resolutions Ncoarse and Nfine, and enforced the same boundary conditions for both lattices separately. The simulation is planned as: coarse run –> interpolate –> fine run, all in a single .cpp file. A helper function (provided below) initializes the fine lattice from the coarse one.
    However, I’m unsuccessful in carrying out the initialization. The provided version only partially copies data to the fine lattice. If I change “…uAnalytical(coarseU, false, false)” to “…uAnalytical(coarseU, true, true)” to enable communication, MPI throws the following error (the line “coarseU.getSuperStructure().communicate();” is commented out when the flags change from false to true). I was wondering where exactly I’m making a mistake here.

    ERROR:

    [initializeFineFromCoarse] Interpolating coarse lattice onto fine lattice …
    [initializeFineFromCoarse] Defined rhoAnalytical …
    [tri-login02:00000] *** An error occurred in MPI_Bcast
    [tri-login02:00000] *** reported by process [2194538497,169]
    [tri-login02:00000] *** on communicator MPI_COMM_WORLD
    [tri-login02:00000] *** MPI_ERR_TRUNCATE: message truncated
    [tri-login02:00000] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
    [tri-login02:00000] *** and MPI will try to terminate your MPI job as well)
    --------------------------------------------------------------------------
    prterun has exited due to process rank 169 with PID 0 on node tri-login02 calling "abort".
    This may have caused other processes in the application to be terminated
    by signals sent by prterun (as reported here).

    // Interpolate a coarse simulation onto the current (fine) lattice
    template <typename T, typename DESCRIPTOR>
    void initializeFineFromCoarse(
      SuperLattice<T, DESCRIPTOR>& sLatticeFine,
      SuperGeometry<T,3>&          superGeometryFine,
      SuperLattice<T, DESCRIPTOR>& sLatticeCoarse,
      SuperGeometry<T,3>&          superGeometryCoarse)
    {
      OstreamManager clout(std::cout, "initializeFineFromCoarse");
    
      SuperLatticeVelocity3D<T, DESCRIPTOR> coarseU(sLatticeCoarse);
    
      coarseU.getSuperStructure().communicate();
    
      AnalyticalFfromSuperF3D<T,T> uAnalytical(coarseU, false, false);
    
      AnalyticalConst3D<T,T> rhoAnalytical(1.);
    
      sLatticeFine.defineRhoU(superGeometryFine, 1, rhoAnalytical, uAnalytical);
      sLatticeFine.iniEquilibrium(superGeometryFine, 1, rhoAnalytical, uAnalytical);
    
      sLatticeFine.defineRhoU(superGeometryFine, 2, rhoAnalytical, uAnalytical);
      sLatticeFine.iniEquilibrium(superGeometryFine, 2, rhoAnalytical, uAnalytical);
    
      sLatticeFine.initialize();
    }
    #10974
    Adrian
    Keymaster

    You should not need the communicateToAll flag here; overlap communication (to ensure the interpolation stencil has the correct neighbor information) is enough. This all relies on the exact same block decomposition being used between the resolutions (s.t. both the coarse and fine information for any given cell reside on the same process). Can you confirm whether your code works (for a smaller resolution) on a single process, and detail the way you execute this?

    #10975
    nipinl
    Participant

    Hi Adrian,
    First of all, thank you for helping me out here.
    When run in serial, the current code works almost correctly, except that it does not copy at the x = xMax plane and at the edge (xMin, yMin): https://www.dropbox.com/scl/fi/dcyze2b6b745c389vldld/serialCopy.png?rlkey=wobmt9paxqqc0awidnfannffn&st=pjacrhww&dl=0

    However, when using AnalyticalFfromSuperF3D<T,T> uAnalytical(coarseU, /*communicateToAll =*/ false, /*communicateOverlap =*/ true); for a serial run, it fails with “double free or corruption (!prev)” followed by “[tri0570:1320121] *** Process received signal ***”.

    Regarding the decomposition, I use

    CuboidDecomposition3D<T> cuboidDecompositionCoarse(
        cuboidCoarse, converterCoarse.getPhysDeltaX(), noOfCuboids);
    CuboidDecomposition3D<T> cuboidDecompositionFine(
        cuboidFine, converterFine.getPhysDeltaX(), noOfCuboids);

    I was wondering: how can we use the same decomposition for both coarse and fine, since N and PhysDeltaX are different?

    #10976
    nipinl
    Participant

    Image: https://www.dropbox.com/scl/fi/dcyze2b6b745c389vldld/serialCopy.png?rlkey=wobmt9paxqqc0awidnfannffn&st=agsktu3f&dl=0

    [Left (top and bottom) is the converged coarse run; right is the vtm file for the fine lattice at time = 0.]

    #10982
    Adrian
    Keymaster

    On not copying the “edges”: you may need to trigger an additional overlap communication, as each process only updates its own local data with the (interpolated) values.
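
    A toy 1D illustration of this point (plain C++, not OpenLB; `Block`, `updateInterior`, and `communicateOverlap` are made-up names): each block writes new values only into its interior cells, so its overlap (halo) cells keep stale data until they are explicitly exchanged with the neighbor.

    ```cpp
    #include <cstddef>
    #include <vector>

    // A block with one halo cell at each end: [halo, interior..., halo].
    struct Block {
      std::vector<double> cells;
    };

    // Each process/block updates only its own interior cells,
    // e.g. with interpolated values; the halos are left untouched.
    void updateInterior(Block& b, double value) {
      for (std::size_t i = 1; i + 1 < b.cells.size(); ++i) {
        b.cells[i] = value;
      }
    }

    // The "overlap communication": copy the neighbor's adjacent interior
    // cells into the halos so boundary stencils see current data.
    void communicateOverlap(Block& left, Block& right) {
      left.cells.back()   = right.cells[1];                      // right's first interior cell
      right.cells.front() = left.cells[left.cells.size() - 2];   // left's last interior cell
    }
    ```

    Until `communicateOverlap` runs, the halo cells still hold their old values, which is the 1D analogue of the uninitialized x = xMax plane and (xMin, yMin) edge seen in the fine-lattice output.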

    On the memory corruption: did you maybe accidentally destruct the old lattice before or during the interpolation? From the message alone I can only guess.

    The decomposition will be the same if you use the default settings (which you do) and the same number of processes load balanced by the same load balancer.

    #10987
    nipinl
    Participant

    Hi Adrian,
    Thank you very much for the suggestions.

      AnalyticalFfromSuperF3D<T,T> uAnalytical(coarseU, /*communicateToAll =*/ false, /*communicateOverlap =*/ true);

    together with using the load balancer of the coarse lattice for decomposing the fine one worked!
    Best,
    Nipin

    For future readers, following was employed for decomposition:
    Coarse:

    IndicatorCuboid3D<T> cuboidCoarse(extendCoarse, originCoarse);
    CuboidDecomposition3D<T> cuboidDecompositionCoarse(
        cuboidCoarse, converterCoarse.getPhysDeltaX(), noOfCuboids);
    
    HeuristicLoadBalancer<T> loadBalancerCoarse(cuboidDecompositionCoarse);
    
    SuperGeometry<T,3> superGeometryCoarse(
        cuboidDecompositionCoarse, loadBalancerCoarse, 4);
    

    Fine:

    IndicatorCuboid3D<T> cuboidFine(extendFine, originFine);
    CuboidDecomposition3D<T> cuboidDecompositionFine(
        cuboidFine, converterFine.getPhysDeltaX(), noOfCuboids);
    SuperGeometry<T,3> superGeometryFine(
        cuboidDecompositionFine, loadBalancerCoarse, 4);