Question about running warning
This topic has 11 replies, 3 voices, and was last updated 3 months, 2 weeks ago by Bobbie.
October 13, 2024 at 6:44 pm · #9364 · Bobbie (Participant)
Hello everyone, I received the following warning on the console while running my code. How should I adjust the code to resolve it?
The warning message: no discreteNormal is found.
Thank you sincerely for your help.

October 13, 2024 at 7:02 pm · #9365 · Adrian (Keymaster)
This points to an issue in the geometry setup: a discrete normal cannot be obtained for one or more cells, which means the boundary conditions likely won't work as intended. Have you already looked at the geometry in Paraview?
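For readers hitting the same warning: it usually traces back to boundary cells whose surrounding material numbers are inconsistent. A minimal sketch of the usual geometry cleanup calls in an OpenLB case (the `stlReader` object and the material numbers 0/1/2 are placeholders for your own setup; verify the names against your OpenLB version):

```cpp
// Sketch only -- fragment from a typical OpenLB geometry setup,
// not a complete program. 'stlReader' is a placeholder for your case.
superGeometry.rename( 0, 2 );            // outer layer -> boundary material
superGeometry.rename( 2, 1, stlReader ); // interior of the STL -> fluid
superGeometry.clean();                   // remove isolated boundary cells
superGeometry.innerClean();              // drop boundary cells without fluid neighbours
superGeometry.checkForErrors();          // report remaining inconsistencies
superGeometry.print();
```

If `checkForErrors()` still reports problems after cleaning, inspecting the material-number field in Paraview cell by cell is usually the fastest way to find the offending region.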
October 14, 2024 at 2:40 am · #9366 · Bobbie (Participant)
Thank you for your attention. I have reviewed the geometry in Paraview, and the details are clearly visible; however, I am unsure which step is causing the issue.
October 14, 2024 at 10:43 am · #9369 · Adrian (Keymaster)
Which case and boundary condition is this?
October 14, 2024 at 12:22 pm · #9380 · Bobbie (Participant)
Hello, Adrian. The code simulates multiphase flow inside a porous medium, so I combined a porous-media case (like resolvedRock3d) with a multiphase flow example (the FreeEnergy multiphase model). The issue above then occurred at run time.
However, when I changed noOfCuboids to 1, the code seemed to work, although the terminal still displayed the same warning (no discreteNormal is found).

October 14, 2024 at 12:25 pm · #9381 · Bobbie (Participant)
Although the code is running, the calculation takes very long. Is this related to the changed value of noOfCuboids?
October 21, 2024 at 1:02 pm · #9391 · Adrian (Keymaster)
The number of cuboids definitely plays an important role in overall simulation performance. However, its concrete impact depends heavily on the execution mode you are using, e.g. MPI-only vs. OMP vs. Hybrid vs. single GPU vs. multi-GPU.
For a start: which mode are you using on what hardware, and what throughput in MLUPs do you achieve? For which problem size?
October 29, 2024 at 7:58 am · #9446 · Bobbie (Participant)
Hello, Adrian. Thank you for your answer. Since my computer has multiple CPU cores and threads, I would like to use parallel CPU computation to accelerate the simulation. Is this what you meant by MPI (Message Passing Interface)? I am not very familiar with these concepts.
October 29, 2024 at 9:27 am · #9447 · mathias (Keymaster)
It's explained in the user guide!
October 29, 2024 at 10:27 am · #9449 · Bobbie (Participant)
Yes, Mathias. I found that information in Section 10.9 (Lesson 9: Run your Programs on a Parallel Machine) of the user guide. After following the instructions to modify the config.mk file, parallel computation across multiple CPU cores seems to work without any errors or warnings. Thank you once again for your assistance.
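For readers finding this later: the change described above amounts to switching the build to MPI in config.mk and launching through mpirun. Roughly (the exact variable names vary between OpenLB releases, and `./yourCase` is a placeholder for your executable, so check the user guide section cited above for your version):

```shell
# In config.mk (excerpt; verify names against your OpenLB release):
#   CXX           := mpic++
#   PARALLEL_MODE := MPI
make clean && make
# Launch with as many ranks as physical cores, e.g. 8:
mpirun -np 8 ./yourCase
```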
October 29, 2024 at 12:20 pm · #9450 · Adrian (Keymaster)
Good to hear! Depending on your specific simulation setup, you may gain additional performance on CPU by using OpenMP together with vectorization.
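A hedged note on what "OpenMP together with vectorization" maps to in the build configuration: in recent OpenLB releases this is controlled by config.mk settings along the following lines (variable names and platform identifiers differ between versions, so treat this as a sketch and confirm against your release's example config files):

```make
# config.mk (excerpt; sketch based on recent OpenLB releases -- verify for yours)
PARALLEL_MODE := HYBRID              # MPI ranks combined with OpenMP threads
PLATFORMS     := CPU_SISD CPU_SIMD   # enable the vectorized CPU backend
```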
October 29, 2024 at 1:59 pm · #9451 · Bobbie (Participant)
Yes, I'm truly grateful for your help; it has given me valuable insight into how to improve the efficiency of my calculations.