load coarse mesh data to a fine mesh
This topic has 7 replies, 2 voices, and was last updated 2 weeks ago by nipinl.
November 14, 2025 at 7:37 am · #10968 · nipinl (Participant)
What would be the simplest way to load coarse simulation data as a starting point for a simulation on a finer mesh? I did not find any straightforward way of doing this. Both the coarse and fine meshes are uniform. For simplicity, think of, say, N = 50 to N = 100 with exactly the same geometry.
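[Editor's aside, for intuition only: on uniform meshes with identical geometry, the coarse-to-fine transfer amounts to (tri)linear interpolation in physical space. A minimal self-contained 1D sketch in plain C++; `refineLinear` is a made-up helper, not the OpenLB API:]

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Linearly interpolate a coarse uniform-grid field onto a finer uniform grid.
// Both grids span the same physical domain; node i of an n-node grid sits at
// i * dx with dx = L / (n - 1). (Hypothetical helper, for illustration only.)
std::vector<double> refineLinear(const std::vector<double>& coarse,
                                 std::size_t nFine)
{
  const std::size_t nCoarse = coarse.size();
  std::vector<double> fine(nFine);
  for (std::size_t i = 0; i < nFine; ++i) {
    // Position of fine node i expressed in coarse index coordinates
    const double x = static_cast<double>(i) * (nCoarse - 1) / (nFine - 1);
    // Left coarse neighbor, clamped so i0 + 1 stays in range
    const std::size_t i0 = std::min(static_cast<std::size_t>(x), nCoarse - 2);
    const double w = x - static_cast<double>(i0);  // weight of right neighbor
    fine[i] = (1.0 - w) * coarse[i0] + w * coarse[i0 + 1];
  }
  return fine;
}
```

For a linear field the transfer is exact, e.g. refining {0, 1, 2} to 5 nodes yields {0, 0.5, 1, 1.5, 2}; the 3D case used below applies the same idea per coordinate direction.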
November 14, 2025 at 11:39 am · #10969 · Adrian (Keymaster)

You can use the AnalyticalFfromSuperF interpolation functor to interpolate data (exposed by functors, e.g. SuperLatticeFieldF) between resolutions. The geometry setup should be separate and will need special handling depending on the case, as there is no generic way to transfer the boundary setup between resolutions for arbitrary BCs.

November 15, 2025 at 8:38 am · #10971 · nipinl (Participant)

I made coarse and fine versions of SuperGeometry and SuperLattice based on resolutions Ncoarse and Nfine, and enforced the same boundary conditions for both lattices separately. The simulation is planned as: coarse run -> interpolate -> fine run, all in a single .cpp file. A helper function (provided below) initializes the fine lattice from the coarse one.
However, I am unsuccessful in carrying out the initialization. The provided version only partially copies data to the fine lattice. If I change "…uAnalytical(coarseU, false, false)" to "…uAnalytical(coarseU, true, true)" to enable communication, MPI throws the following error (the line "coarseU.getSuperStructure().communicate();" is commented out when false is changed to true). I was wondering where exactly I am making a mistake here.

ERROR:

```
[initializeFineFromCoarse] Interpolating coarse lattice onto fine lattice …
[initializeFineFromCoarse] Defined rhoAnalytical …
[tri-login02:00000] *** An error occurred in MPI_Bcast
[tri-login02:00000] *** reported by process [2194538497,169]
[tri-login02:00000] *** on communicator MPI_COMM_WORLD
[tri-login02:00000] *** MPI_ERR_TRUNCATE: message truncated
[tri-login02:00000] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[tri-login02:00000] *** and MPI will try to terminate your MPI job as well)
--------------------------------------------------------------------------
prterun has exited due to process rank 169 with PID 0 on node tri-login02 calling "abort".
This may have caused other processes in the application to be terminated by
signals sent by prterun (as reported here).
```

```cpp
// Interpolate a coarse simulation onto the current (fine) lattice
template <typename T, typename DESCRIPTOR>
void initializeFineFromCoarse(
    SuperLattice<T, DESCRIPTOR>& sLatticeFine,
    SuperGeometry<T,3>&          superGeometryFine,
    SuperLattice<T, DESCRIPTOR>& sLatticeCoarse,
    SuperGeometry<T,3>&          superGeometryCoarse)
{
  OstreamManager clout(std::cout, "initializeFineFromCoarse");

  SuperLatticeVelocity3D<T, DESCRIPTOR> coarseU(sLatticeCoarse);
  coarseU.getSuperStructure().communicate();
  AnalyticalFfromSuperF3D<T,T> uAnalytical(coarseU, false, false);
  AnalyticalConst3D<T,T> rhoAnalytical(1.);

  sLatticeFine.defineRhoU(superGeometryFine, 1, rhoAnalytical, uAnalytical);
  sLatticeFine.iniEquilibrium(superGeometryFine, 1, rhoAnalytical, uAnalytical);
  sLatticeFine.defineRhoU(superGeometryFine, 2, rhoAnalytical, uAnalytical);
  sLatticeFine.iniEquilibrium(superGeometryFine, 2, rhoAnalytical, uAnalytical);

  sLatticeFine.initialize();
}
```

November 17, 2025 at 10:50 am · #10974 · Adrian (Keymaster)

You should not need the communicateToAll flags here; overlap communication (to ensure the interpolation stencil has the correct neighbor information) is enough. This all relies on the exact same block decomposition being used between the resolutions (s.t. both the coarse and fine information for any given cell reside on the same process). Can you confirm whether your code works (for a smaller resolution) on a single process, and detail the way you execute this?

November 17, 2025 at 11:57 am · #10975 · nipinl (Participant)

Hi Adrian,
First of all, thank you for helping me out here.
When run in serial, the current code works almost correctly, except that it did not copy at the x = xMax plane and at the edge (xMin, yMin): https://www.dropbox.com/scl/fi/dcyze2b6b745c389vldld/serialCopy.png?rlkey=wobmt9paxqqc0awidnfannffn&st=pjacrhww&dl=0

However, when using

```cpp
AnalyticalFfromSuperF3D<T,T> uAnalytical(coarseU, /*communicateToAll =*/ false, /*communicateOverlap =*/ true);
```

for a serial run, it fails citing "double free or corruption (!prev) [tri0570:1320121] *** Process received signal ***".
Regarding the decomposition, I use

```cpp
CuboidDecomposition3D<T> cuboidDecompositionCoarse(
    cuboidCoarse, converterCoarse.getPhysDeltaX(), noOfCuboids);
CuboidDecomposition3D<T> cuboidDecompositionFine(
    cuboidFine, converterFine.getPhysDeltaX(), noOfCuboids);
```

I was wondering: how can we use the same decomposition for both coarse and fine, since N and PhysDx are different?
November 17, 2025 at 11:59 am · #10976 · nipinl (Participant)

[Left (top and bottom) is the converged coarse run; right is the vtm file for the fine lattice at time = 0.]
November 20, 2025 at 2:08 pm · #10982 · Adrian (Keymaster)

On not copying the "edges": you may need to trigger an additional overlap communication, as each process only updates its own local data with the (interpolated) values.
On the memory corruption: Did you maybe accidentally destruct the old lattice before / during interpolation? Seeing just the message I can only guess.
The decomposition will be the same if you use the default settings (which you do) and the same number of processes load balanced by the same load balancer.
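[Editor's aside: a toy way to see why the same noOfCuboids gives physically matching blocks at both resolutions. With an even split, each block covers a fixed fraction of the domain regardless of dx. Plain C++ sketch; `blockExtent` is a made-up helper, not the OpenLB decomposition code, and it assumes the block count divides both resolutions:]

```cpp
#include <cassert>
#include <cmath>

// Toy 1D version of an even cuboid split: physical extent [begin, end) of
// block b when a domain of physical length L, resolved with n cells, is cut
// into k equal blocks. (Illustration only; assumes k divides n.)
struct Extent { double begin; double end; };

Extent blockExtent(int n, double L, int k, int b)
{
  const double dx = L / n;          // cell spacing at this resolution
  const int cellsPerBlock = n / k;  // cells per block at this resolution
  return { b * cellsPerBlock * dx, (b + 1) * cellsPerBlock * dx };
}
```

Block b then spans the same physical region at N = 50 and N = 100, which is why reusing the coarse decomposition's load balancer keeps the coarse and fine data for any given region on the same process.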
November 24, 2025 at 7:44 am · #10987 · nipinl (Participant)

Hi Adrian,

Thank you very much for the suggestions.

```cpp
AnalyticalFfromSuperF3D<T,T> uAnalytical(coarseU, /*communicateToAll =*/ false, /*communicateOverlap =*/ true);
```

together with using the load balancer of the coarse decomposition for the fine one worked!

Best,
Nipin

For future readers, the following was employed for the decomposition:

Coarse:

```cpp
IndicatorCuboid3D<T> cuboidCoarse(extendCoarse, originCoarse);
CuboidDecomposition3D<T> cuboidDecompositionCoarse(
    cuboidCoarse, converterCoarse.getPhysDeltaX(), noOfCuboids);
HeuristicLoadBalancer<T> loadBalancerCoarse(cuboidDecompositionCoarse);
SuperGeometry<T,3> superGeometryCoarse(
    cuboidDecompositionCoarse, loadBalancerCoarse, 4);
```

Fine:

```cpp
IndicatorCuboid3D<T> cuboidFine(extendFine, originFine);
CuboidDecomposition3D<T> cuboidDecompositionFine(
    cuboidFine, converterFine.getPhysDeltaX(), noOfCuboids);
SuperGeometry<T,3> superGeometryFine(
    cuboidDecompositionFine, loadBalancerCoarse, 4);
```