Hello
I have been working with OpenLB for fluid dynamics simulations but I’m running into performance bottlenecks when scaling up to larger grids. As my problem size increases, memory usage skyrockets, and execution time becomes impractical. I’ve tried adjusting refinement levels and tuning parallelization settings, but I’m still struggling to achieve efficient performance on high-resolution cases.
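For concreteness, here is roughly how I'm timing my runs, following the Timer pattern from the bundled examples (this is a fragment, not a complete program; maxIter, superGeometry, and the loop body stand in for my actual setup, and the exact API may differ between OpenLB releases):

```cpp
// Timing fragment in the style of the bundled examples (olb 1.3/1.4).
// util::Timer reports MLUPs (mega lattice-cell updates per second),
// which makes runs at different resolutions directly comparable.
util::Timer<T> timer(maxIter, superGeometry.getStatistics().getNvoxel());
timer.start();

for (std::size_t iT = 0; iT < maxIter; ++iT) {
  // ...setBoundaryValues, collideAndStream, getResults...
  timer.update(iT);
}

timer.stop();
timer.printSummary();  // total runtime and average MLUPs
```

The MLUPs figure from printSummary is what I've been using to compare configurations, so numbers in that form would be easiest for me to benchmark against.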
One concern is whether my current domain decomposition strategy is optimal for my hardware. I’ve seen discussions about improving load balancing and tweaking MPI settings, but I’m not sure which parameters would have the most impact. Additionally, I wonder whether specific compiler optimizations or data-structure changes could reduce memory overhead without sacrificing accuracy.
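My decomposition currently follows the standard example pattern, sketched below (again a fragment; indicator and converter come from my own geometry setup, and class names are from olb 1.3/1.4, so newer releases may spell them differently):

```cpp
// Decomposition setup in the style of the bundled examples.
// Oversubscribing cuboids relative to MPI ranks gives the heuristic
// balancer something to redistribute; a factor of 2 is what the
// examples use.
#ifdef PARALLEL_MODE_MPI
const int noOfCuboids = 2 * singleton::mpi().getSize();
#else
const int noOfCuboids = 7;
#endif

CuboidGeometry3D<T> cuboidGeometry(indicator,
                                   converter.getConversionFactorLength(),
                                   noOfCuboids);

// HeuristicLoadBalancer weights cuboids by cell count rather than
// assigning them round-robin, which should help with irregular geometry.
HeuristicLoadBalancer<T> loadBalancer(cuboidGeometry);

SuperGeometry3D<T> superGeometry(cuboidGeometry, loadBalancer, 2);

// Print the decomposition to inspect cuboid sizes per rank.
cuboidGeometry.print();
```

Is a factor of 2 oversubscription reasonable at scale, or should the cuboid count be tuned differently for large rank counts?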
Has anyone successfully optimized OpenLB for large-scale problems? What strategies worked best for improving both memory efficiency and computational speed? I’d appreciate any insights or benchmarking results from others working on similar challenges.
Thank you!