
Optimizing OpenLB for Large-Scale Simulations


  • #9971
    Chlnldoe
    Participant

    Hello

    I have been working with OpenLB for fluid dynamics simulations, but I’m running into performance bottlenecks when scaling up to larger grids. As my problem size increases, memory usage skyrockets and execution time becomes impractical. I’ve tried adjusting refinement levels and tuning parallelization settings, but I’m still struggling to achieve efficient performance on high-resolution cases.

    One concern is whether my current domain decomposition strategy is optimal for my hardware. I’ve seen discussions about improving load balancing and tweaking MPI settings, but I’m not sure which parameters would have the most impact. Additionally, I wonder if there are specific compiler optimizations or data structure modifications that could help reduce memory overhead without sacrificing accuracy.

    Has anyone successfully optimized OpenLB for large-scale problems? What strategies worked best for improving both memory efficiency and computational speed? I’d appreciate any insights or benchmarking results from others working on similar challenges.

    Thank you!

    #9974
    Adrian
    Keymaster

    Yes, we do this all the time. If you tell me details about your setup and environment I will be able to provide guidance.

    #9975
    sfraniatte
    Participant

    Hello,

    I achieved a significant performance gain by changing the number of cuboids. I started from the aorta3d example, which used too many cuboids (8 per CPU core in parallel), and I greatly reduced this number (1 per core with 32 cores).
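    For reference, the cuboid count is just the third argument of the cuboid geometry constructor. A rough sketch of the relevant lines (names such as extendedDomain, converter and the type T are placeholders from the aorta3d-style setup, and the exact converter method name may differ between OpenLB versions):

        // One cuboid per MPI rank instead of many small ones (placeholder names)
        const int noOfCuboids = singleton::mpi().getSize();   // e.g. 32 with 32 cores

        CuboidGeometry3D<T> cuboidGeometry( extendedDomain,
                                            converter.getConversionFactorLength(),
                                            noOfCuboids );

        // Distribute the cuboids over the MPI processes
        HeuristicLoadBalancer<T> loadBalancer( cuboidGeometry );

    Fewer, larger cuboids meant less communication and per-block overhead in my case, at the price of coarser load balancing, so one cuboid per core was a good compromise.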

    Hope this helps. Good luck!

    Sylvain
