OpenLB Release 1.6 available for download

The developer team is very happy to announce the release of the next version of OpenLB. The updated open-source Lattice Boltzmann (LB) code is now available for download.

Major new features include performance-optimized and GPU-enabled multi-lattice coupling alongside a new subgrid-scale particle system. This is augmented by a rich collection of bugfixes and general usability improvements.

Release notes

Major new features

  • New performance-optimized and GPU-enabled multi-lattice coupling
  • New subgrid-scale particle system

General improvements

  • New GPU-enabled Bouzidi implementation
  • Alternative handling of Bouzidi distances using new Yu post processor
  • GPU support for 3D free surface simulations
  • General usability improvements to the parameterization of dynamics, non-local post processors and coupling operators
  • Support for asynchronous background post-processing / VTK output in GPU-based simulations
  • Support for heterogeneous simulations
  • Mixed compilation mode enabling different compilers for SIMD / GPU platforms
  • Reproducible compilation environments declared using Nix Flakes

New examples

  • adsorption/adsorption3d
  • adsorption/microMixer3d
  • reaction/advectionDiffusionReaction2d(Solver)
  • reaction/reaction2d
  • optimization/domainIdentification3d
  • optimization/domainIdentificationPoiseuille2d
  • optimization/showcaseADf
  • optimization/showcaseRosenbrock
  • optimization/testFlowOpti3d
  • freeSurface/breakingDam3d

Examples with full GPU support

  • turbulence/nozzle3d
  • turbulence/aorta3d
  • turbulence/venturi3d
  • turbulence/tgv3d
  • laminar/powerLaw2d
  • laminar/poiseuille(2,3)d
  • laminar/bstep(2,3)d
  • laminar/cylinder(2,3)d
  • laminar/cavity(2,3)d
  • laminar/cavity3dBenchmark
  • laminar/poiseuille(2,3)dEoc
  • freeSurface/fallingDrop(2,3)d
  • freeSurface/deepFallingDrop2d
  • freeSurface/rayleighInstability3d
  • freeSurface/breakingDam(2,3)d
  • advectionDiffusionReaction/advectionDiffusion(1,2,3)d
  • multiComponent/phaseSeparation(2,3)d
  • multiComponent/rayleighTaylor(2,3)d
  • thermal/squareCavity(2,3)d
  • thermal/rayleighBenard(2,3)d

Coupling in Action

Analogously to lattice-local post processors, inter-lattice coupling operators may now be declared as plain classes consisting of an application scope, parameters and a generic apply method. For illustration, we can consider the coupling between two lattices, targeting the Navier-Stokes and advection-diffusion equations respectively, using the Boussinesq approximation:

struct NavierStokesAdvectionDiffusionCoupling {
  // Declare that we want cell-wise coupling with some global parameters
  static constexpr OperatorScope scope = OperatorScope::PerCellWithParameters;

  // Declare the two parameters custom to this coupling operator
  struct FORCE_PREFACTOR : public descriptors::FIELD_BASE<0,1> { };
  struct T0 : public descriptors::FIELD_BASE<1> { };

  // Declare which parameters are required
  using parameters = meta::list<FORCE_PREFACTOR,T0>;

  template <typename CELLS, typename PARAMETERS>
  void apply(CELLS& cells, PARAMETERS& parameters) any_platform
  {
    // Get the cell of the NavierStokes lattice
    auto& cellNSE = cells.template get<names::NavierStokes>();
    // Get the cell of the Temperature lattice
    auto& cellADE = cells.template get<names::Temperature>();

    // Computation of the Boussinesq force
    auto forcePrefactor = parameters.template get<FORCE_PREFACTOR>();
    auto temperatureDifference = cellADE.computeRho() - parameters.template get<T0>();
    auto boussinesqForce = forcePrefactor * temperatureDifference;
    cellNSE.template setField<descriptors::FORCE>(boussinesqForce);

    // Velocity coupling
    auto u = cellADE.template getField<descriptors::VELOCITY>();
    cellNSE.computeU(u.data());
    cellADE.template setField<descriptors::VELOCITY>(u);
  }
};

Coupling operators are instantiated using the SuperLatticeCoupling class template provided with a list of names and assigned lattices.

SuperLattice<T,DESCRIPTOR_NSE> sLatticeNSE(sGeometry);
SuperLattice<T,DESCRIPTOR_ADE> sLatticeADE(sGeometry);
// [...]
SuperLatticeCoupling coupling(
  NavierStokesAdvectionDiffusionCoupling{},
  names::NavierStokes{}, sLatticeNSE,  // `sLatticeNSE` will be referred to by `names::NavierStokes`
  names::Temperature{},  sLatticeADE); // `sLatticeADE` will be referred to by `names::Temperature`
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::T0>(...);
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::FORCE_PREFACTOR>(...);
// [...]
coupling.execute();

All coupling operators implemented in this new, more compact style work transparently on all of OpenLB’s target platforms, including GPUs.

Mixed compilation mode

In contrast to the initial GPU-supporting release OpenLB 1.5, where the entire code had to be compiled using nvcc and MPI support required manually defining the relevant include and linker flags, this new release offers a more fine-grained mixed compilation mode.

Specifically, it is possible to specify different compilers for the GPU_CUDA platform and the CPU-targeting platforms within the same build. This way, the GPU-side of things is automatically compiled into a separate shared library that is linked to the core application. Such separation is essential for fully supporting the vectorized CPU_SIMD platform alongside GPU_CUDA in a single heterogeneous executable.

Analogously to other compilation modes, example configs are provided in config/.

CXX             := mpic++
CC              := gcc

# Compiler flags for the core application and `CPU_*` platform support
CXXFLAGS        := -O3 -Wall -march=native -mtune=native
CXXFLAGS        += -std=c++17

# Parallel mode, one of `NONE`, `MPI` or `HYBRID`
PARALLEL_MODE   := MPI

# Platforms, optionally add `CPU_SIMD` for vectorized CPU execution
PLATFORMS       := CPU_SISD GPU_CUDA

# Compiler to use for the `GPU_CUDA` platform
CUDA_CXX        := nvcc
CUDA_CXXFLAGS   := -O3 -std=c++17
# Adjust to enable resolution of libcuda, libcudart, libcudadevrt
CUDA_LDFLAGS    := -L/run/opengl-driver/lib
# for e.g. RTX 30* (Ampere), see table in `rules.mk` for other options
CUDA_ARCH       := 86

# Default floating point type
FLOATING_POINT_TYPE := float

# Set to `OFF` if tinyxml and zlib are provided by the environment
USE_EMBEDDED_DEPENDENCIES := ON

The mixed mode is automatically enabled as soon as a separate CUDA compiler is specified using the CUDA_CXX environment variable. Once enabled, the compilation of the core library and of individual applications is identical from the user’s perspective.

One additional advantage is that the compilation-time-intensive GPU kernels do not need to be recompiled for every code change. Instead, make no-cuda-recompile allows compiling the core application without GPU re-instantiation as long as no new operators are introduced (e.g. if only the geometry setup, parameters or post processing is changed after an initial full compilation).

For convenience, various tested compilation environments are reproducibly declared using Nix Flakes. For example, instantiating a Multi-GPU compilation environment is as easy as removing the default config.mk and calling nix develop .#env-gcc-openmpi-cuda in the OpenLB root directory.

A guide for setting up (Multi-)GPU support for OpenLB on Windows WSL is also available (PDF).

Citation

If you want to cite OpenLB 1.6, you can use:

A. Kummerländer, S. Avis, H. Kusumaatmaja, F. Bukreev, M. Crocoll, D. Dapelo, N. Hafen, S. Ito, J. Jeßberger, J.E. Marquardt, J. Mödl, T. Pertzel, F. Prinz, F. Raichle, M. Schecher, S. Simonis, D. Teutscher, and M.J. Krause.

OpenLB Release 1.6: Open Source Lattice Boltzmann Code.

Version 1.6. Apr. 2023.

DOI: 10.5281/zenodo.7773497

General metadata is also available as a CITATION.cff file following the standard Citation File Format (CFF).

Supported Systems

OpenLB is able to utilize vectorization (AVX2/AVX-512) on x86 CPUs [1] and NVIDIA GPUs for block-local processing. CPU targets may additionally utilize OpenMP for shared memory parallelization while any communication between individual processes is performed using MPI.

It has been successfully employed for simulations on computers ranging from low-end smartphones up to supercomputers.

The present release has been explicitly tested in the following environments:

  • NixOS 22.11 and unstable (Nix Flake provided)
  • Ubuntu 20.04, 22.04
  • Red Hat Enterprise Linux 8.x (HoreKa, BwUniCluster2)
  • Windows 10, 11 (WSL)
  • macOS 13

as well as with the following compilers and MPI libraries:

  • GCC 9 and later
  • Clang 13 and later
  • Intel C++ 2021.4 and later
  • NVIDIA CUDA 11.4 and later
  • NVIDIA HPC SDK 21.3 and later
  • MPI libraries OpenMPI 3.1, 4.1 (CUDA-awareness required for Multi-GPU); Intel MPI 2021.3.0 and later

[1]: Other CPU targets are also supported, e.g. common smartphone ARM CPUs and Apple M1/M2.

Spring School 2023 in Greenwich/London (UK) – Register Now

Registration is now open for the Sixth Spring School on Lattice Boltzmann Methods with OpenLB Software Lab, which will be held in Greenwich/London, UK from 5th to 9th of June 2023. The spring school introduces scientists and applicants from industry to the theory of LBM and trains them on practical problems. The first half of the week is dedicated to the theoretical fundamentals of LBM, up to ongoing research on selected topics. This is followed in the second half by mentored training on case studies using OpenLB, where the participants gain deep insights into LBM and its applications. This educational concept offers a comprehensive and personally guided approach to LBM. Participants also benefit from the knowledge exchange during the poster session, coffee breaks, and the excursion. We look forward to your participation.

Keep in mind that the number of participants is limited and that registration follows a first-come, first-served principle.

On behalf of the spring school executive committee, Nicolas Hafen, Mathias J. Krause, Jan E. Marquardt, Timothy Reis, Choi-Hong Lai, Tao Gao, Andrew Kao

OpenLB Community YouTube Channel Update

We have just released a new video on our OpenLB YouTube Channel. 

Simulation of a breaking dam using the free surface model included in OpenLB 1.5.

Free surface implementation by Claudius Holeksa, example case and visualization by Maximilian Schecher.

Recent Performance Benchmarks of OpenLB 1.5 on the HoreKa Supercomputer at KIT

Following up on the performance-focused release of OpenLB 1.5, we updated our Performance showcases to include scalability plots on up to 128 CPU-only and multi-GPU nodes, respectively, of the HoreKa supercomputer at the Karlsruhe Institute of Technology (KIT). These results were presented at the 25th Results and Review Workshop of the HLRS this October and have been accepted for publication in the annual proceedings on High Performance Computing in Science and Engineering.

The following plots document the per-node performance in billions of cell updates per second (GLUPs) for various problem sizes of the established lid-driven cavity benchmark case. Highlights include weak scaling efficiencies of up to 1.01 for hybrid AVX-512-vectorized CPU execution and up to 0.9 for CUDA GPU execution, alongside a total peak performance of 1.33 trillion cell updates per second when using 512 NVIDIA A100 GPUs. Further details, including individual strong scaling values, are available in the performance section.

Scalability of OpenLB 1.5 on HoreKa using hybrid execution (MPI + OpenMP + AVX-512 Vectorization)

Scalability of OpenLB 1.5 on HoreKa using multi GPU execution (MPI + CUDA)

Plots, vectorization and CUDA GPU implementation contributed by Adrian Kummerländer.

A. Kummerländer, F. Bukreev, S. Berg, M. Dorn and M.J. Krause. Advances in Computational Process Engineering using Lattice Boltzmann Methods on High Performance Computers for Solving Fluid Flow Problems. In: High Performance Computing in Science and Engineering ’22 (accepted).

Highly-resolved Nozzle Simulation Performed Using Multi-GPU Support

We have just released a new video on our OpenLB YouTube Channel. 

In order to showcase the usability and performance of OpenLB’s GPU support, a turbulent nozzle flow was simulated on the HoreKa supercomputer (KIT, Germany). This case was adapted for higher resolution from the turbulence/nozzle3d example included in OpenLB 1.5. It utilizes a Smagorinsky LES model for the bulk flow and non-local interpolated boundaries for the in- and outflow conditions on top of a single-precision D3Q19 lattice. The simulation’s 2.5 billion cells were computed on 120 NVIDIA A100 GPUs distributed across 30 nodes, resulting in a performance of ~250 billion cell updates per second. ParaView was utilized to generate the visualization.

Simulation and Visualization: Adrian Kummerländer

LBM Spring School with OpenLB Software Lab in Kraków successfully finished

The executive committee announces the closing of the fifth LBM Spring School with OpenLB Software Lab. We were happy to host 51 participants from 8 countries, including 4 invited speakers, in Kraków, Poland. This year’s poster award goes to Pavel Eichler (Czech Technical University in Prague).

Next year, the 6th spring school is planned to take place at the University of Greenwich in England/UK from 5th to 9th of June 2023.

On behalf of the spring school executive committee, Nicolas Hafen, Mathias J. Krause, Jan E. Marquardt, Paweł Madejski, Tomasz Kuś, Navaneethan Subramanian, Maciej Bujalski, Karolina Chmiel.

2nd Call for the Fifth Spring School – Early Bird by 10th of May

Early bird registration is open until the 10th of May 2022 for the Fifth Spring School on Lattice Boltzmann Methods with OpenLB Software Lab. It will be held in Kraków, Poland, from 6th to 10th of June 2022. The school offers a special lecture on LBM on high performance computers and, for the first time, on using GPUs with OpenLB (v. 1.5).

On behalf of the spring school executive committee

OpenLB release 1.5 available for download

The developer team is very happy to announce the release of the next version of OpenLB. The updated open-source Lattice Boltzmann (LB) code is now available for download.

Major new features include support for GPUs using CUDA, vectorized collision steps on SIMD CPUs, a new implementation of our resolved particle system as well as the possibility of simulating free surface flows and reactions.

Release notes

Core changes and features:

  • Support for GPUs using CUDA
  • Support for SIMD collision steps (AVX2 / AVX-512)
  • New Dynamics concept (including Momenta)
  • New PostProcessor concept
  • New resolved particle system implementation
  • New automatic code generation of CSE-optimized functions

New physical models:

  • Reactions
  • Free surface flows

Other changes:

  • Solver class for structuring simulations
  • Forward Algorithmic Differentiation
  • Finite difference methods (FDM) and LBM for advection diffusion (reaction) equations (ADRE)

New examples:

  • Free surface flows:
    • fallingDrop(2,3)d
    • breakingDam2d
    • deepFallingDrop2d
    • rayleighInstability3d
  • New free energy examples:
    • binaryShearFlow2d
    • fourRollMill2d
  • New ADRE examples based on FDM and LBM:
    • advectionDiffusion3d
    • advectionDiffusionPipe3d
    • advectionDiffusionReaction2d
    • reactionFiniteDifferences2d

Compatibility tested on:

  • Systems
    • Various Linux distributions
      • NixOS 21.11
      • Ubuntu 20.04.4 LTS
      • Red Hat Enterprise Linux 8.2
    • Windows WSL 1 and 2
    • Mac OS 11.6
  • Compilers
    • GCC 9, 10, 11
    • Clang 13
    • Intel C++ 19, 2021.4
    • Nvidia CUDA 11.4
    • Nvidia HPC SDK 21.3
  • MPI
    • OpenMPI 3.1, 4.1
    • Intel MPI 2021.3.0

GPU support

Early benchmarks confirm good GPU utilization for the established lid-driven cavity benchmark with local velocity boundaries, including non-local edge and corner treatment. For example, a single-precision 1000^3 cavity is simulated at a cell throughput of 42.2 GLUPs on two HoreKa GPU nodes featuring four Nvidia A100 accelerators each, compared to 24.8 GLUPs on a single node. This yields a strong parallel efficiency of 85%.

The same benchmark on two CPU-only nodes utilizing AVX-512 and hybrid parallelization yields a performance of 2.7 GLUPs, leading to a speedup of 15.6 for the GPU code.
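
For reference, both figures follow directly from the reported throughputs:

strong parallel efficiency = 42.2 GLUPs / (2 × 24.8 GLUPs) ≈ 0.85
GPU-over-CPU speedup       = 42.2 GLUPs / 2.7 GLUPs        ≈ 15.6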

Other cases, such as a turbulent nozzle flow with non-local interpolated boundaries and a Smagorinsky BGK LES model, also perform well: for a nozzle flow resolved by 360 million cells and distributed across two GPU nodes, we obtain ~36 GLUPs.

As this is the first public release of GPU support in OpenLB, not all features are currently supported. However, due to extensive refactoring of our Dynamics and Post Processor concepts, the vast majority of local collision steps and a core set of non-local boundaries work transparently on GPUs. This includes support for large-scale simulations on multi-GPU clusters.

Existing legacy post processors are straightforward to adapt to the new approach, the core idea of which is to implement both local and non-local cell operations as abstract templates accepting the concept of a cell instead of a specific implementation thereof. For further details see the user guide, specifically the sections on Dynamics and Post Processors.
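
To give a flavour of what such an operator looks like, the following is a minimal sketch of a lattice-local post processor in the new style, modelled on the coupling operator shown in the 1.6 release notes above. The operator name is made up purely for illustration and it assumes a lattice descriptor that provides the descriptors::VELOCITY field:

struct StoreVelocityPostProcessor {
  // Declare plain cell-wise application without additional parameters
  static constexpr OperatorScope scope = OperatorScope::PerCell;

  template <typename CELL>
  void apply(CELL& cell) any_platform
  {
    // Work against the generic cell concept instead of a concrete
    // implementation so the same code runs on any supported platform
    auto u = cell.template getField<descriptors::VELOCITY>();
    // Compute the local velocity moment and cache it in the cell's
    // VELOCITY field
    cell.computeU(u.data());
    cell.template setField<descriptors::VELOCITY>(u);
  }
};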

Examples that work on GPUs without changing a single line of code include:

  • laminar/cavity(2,3)d
  • laminar/cavity3dBenchmark
  • laminar/cylinder(2,3)d
  • laminar/poiseuille(2,3)d
  • laminar/bstep(2,3)d
  • laminar/powerLaw2d
  • turbulence/nozzle3d
  • turbulence/venturi3d
  • turbulence/tgv3d
  • advectionDiffusionReaction/advectionDiffusion3d
  • freeSurface/(deep)fallingDrop2d
  • freeSurface/breakingDam2d

On CPUs, all existing post processors, dynamics and all other features continue to be supported.

We are hard at work expanding the list of GPU-aware features and examples, specifically full boundary condition coverage as well as particle and multi-lattice coupling. Feel free to contact us if you are interested in joining our efforts to further develop this or any other aspect of the open source framework OpenLB.

Citation

A. Kummerländer, S. Avis, H. Kusumaatmaja, F. Bukreev, D. Dapelo, S. Großmann, N. Hafen, C. Holeksa, A. Husfeldt, J. Jeßberger, L. Kronberg, J. Marquardt, J. Mödl, J. Nguyen, T. Pertzel, S. Simonis, L. Springmann, N. Suntoyo, D. Teutscher, M. Zhong and M.J. Krause.

OpenLB Release 1.5: Open Source Lattice Boltzmann Code. Version 1.5. Apr. 2022.

DOI: 10.5281/zenodo.6469606. URL: https://doi.org/10.5281/zenodo.6469606

PS: Please consider joining the developer team by contributing your code. Together we can strengthen the LB community by sharing our research in an open and reproducible way! Feel free to contact us here.