Communication between BlockLattices
October 26, 2023 at 12:32 pm #7856
Hello, openLB team,
I am reading this paper (https://www.sciencedirect.com/science/article/pii/S0898122120301875?via%3Dihub).
I have a question about Section 3.2.4, Hybrid parallelization.
Does the communication between BlockLattices on the same execution process (for example, cuboid1 and cuboid2 in the paper) synchronize using MPI even though there is only one execution process?
October 26, 2023 at 12:43 pm #7857
Adrian
Keymaster
This depends on the specific platforms, as the communication subsystem can implement different requests for different pairings of platforms (e.g. CPU-CPU or CPU-GPU communication).
If we are talking only about CPU blocks then it currently doesn’t use MPI for intra-process synchronization of blocks. However, this is an implementation detail that may change in the future and has no impact on application / model code.
(The paper was published prior to the big GPU/SIMD refactor for release 1.5; you can look at the user guide for more up-to-date information.)
October 26, 2023 at 1:22 pm #7858
Thank you for your kind reply.
Sorry, I could not understand where you said "However, this is an implementation detail that may change in the future and has no impact on application / model code." What is "application / model code"?
And I have three other questions.
1. For CPU-only communication, I reckon that intra-process communication between BlockLattices works by referring to the memory of the overlapping cells in order to synchronize them. Is that right, and where is this code located in release 1.6?
2. Is inter-process communication between BlockLattices done via MPI on all platforms in release 1.6?
3. Does the number of MPI processes match the number of SuperLattices?
October 26, 2023 at 1:34 pm #7859
Sorry, I have an additional question.
4. Within a specific process (SuperLattice), can two BlockLattices in that process be assigned separately to OpenMP and CUDA for intra-block calculation?
October 26, 2023 at 3:24 pm #7860
Adrian
Keymaster
By application / model code I mean the code the users of OpenLB write (e.g. if they implement new boundary conditions, collision steps, … or new simulation setups).
1: By intra-process I mean communication between blocks that belong to the same process (i.e. communication inside a process). You can start by reading the SuperCommunicator implementation (although the only point of that would be if you want to modify the communication code; if you only want to use OpenLB and/or add new models to it, you do not need to understand any of these details – that is the entire point of having a framework 🙂 I only write this because I know that your research is on the model side and not on the HPC side of things).
2: Inter-process communication in OpenLB is and always was based on MPI. Only the specific usage of MPI has changed over time.
3: I am not sure what you mean. Each process has its own instance of the SuperLattice. Each of those instances holds the blocks assigned to the respective process.
4: Yes; again, this is heterogeneous processing (we previously talked about this).