Mike (Participant)
April 23, 2021 at 3:24 pm #5618
I wonder whether OpenLB can do multi-mesh simulations now. For example, could we choose a higher resolution in the boundary-layer region than elsewhere? If so, how?
Also, I see that two different resolutions are defined in the bstep2d case, but are both actually used? I only observe the N=60 resolution being used.
Mike
April 28, 2021 at 11:03 am #5631
No, OpenLB currently doesn’t support local grid refinement. Various approaches exist as prototypes, but it is not clear when this will reach a stage where we can include it in a release, as most efforts are currently focused on improving general performance and supporting GPUs.
Thus the bstep2d case only uses a fixed resolution for the whole lattice. The other resolution is a remainder from development; thanks for pointing out this possible source of confusion!

Mike (Participant)
May 11, 2021 at 9:47 am #5668
Thanks for your reply!
Mike
June 29, 2021 at 2:41 pm #5757
In some example cases there is a class named “SuperLatticeRefinementMetricKnudsen3D”. Isn’t that a method to do grid refinement?
mengqiang
June 29, 2021 at 2:55 pm #5758
This is indeed related to grid refinement.
SuperLatticeRefinementMetricKnudsen3D implements a refinement criterion that helps determine where to refine and to what degree. It is one part of a prototype implementation that I pulled into the release because it is also useful in other contexts, e.g. to check whether a simulation is getting close to diverging or to identify problematic regions.
June 29, 2021 at 3:24 pm #5761
I wonder whether there is a version of OpenLB available for simulation on GPUs. I have tried parallel simulation on CPUs, but the efficiency increase was not significant (450 s serial versus 350 s parallel on 3 cores), although that may be because my case is not very complex and my CPU has few cores. I want to achieve real-time simulation with OpenLB, just like this paper: http://dx.doi.org/10.1016/j.buildenv.2017.08.048. As you can see, the efficiency of their model is very high. Is it possible to achieve that with OpenLB? Thanks.
mengqiang
June 29, 2021 at 3:45 pm #5762
No, currently there is no GPU support in OpenLB; however, this is the main focus of my work at the moment. The next release will also include support for vectorization on CPUs, which also delivers significant speedups.
Which parallel mode did you use, OpenMPI or OpenMP? With OpenMPI our efficiency is around ~80%, so you should be able to reach your desired time steps per second by parallel execution on CPUs. Of course, if you want to do this on a single host, there is no real way around GPUs, which is one of the reasons why I am focusing on them. E.g. a GPU LBM code of my own in some cases easily outperforms simulations on ~12 nodes of a small cluster using just a single GPU. This is also driven by the desire to fully utilize our new supercomputer at KIT.

June 29, 2021 at 3:58 pm #5763
Your work is great! I wish you success.
I have chosen the OpenMPI mode, but the efficiency only increased by around 25%. I run OpenLB in Cygwin on Windows 10; I am not sure whether this has an effect.
Besides, according to the technical report https://www.openlb.net/wp-content/uploads/2020/11/olb-tr4.pdf, parallel execution in Cygwin on Windows cannot be realized out of the box. I think that is because of changes in the latest OpenMPI package in Cygwin; I had to make some modifications myself, otherwise an error would be output to the console.
I hope you can further guide me on how to achieve the efficiency increase, thank you.
Furthermore, could you kindly tell me when the next version of OpenLB will be released? Really looking forward to it! 🙂