

LBM Spring School in Greenwich successfully finished

The executive committee is happy to announce the closing of the 6th LBM Spring School with OpenLB Software Lab. We hosted 50 participants from 15 countries this year. Congratulations to Martijn Gobes from the Netherlands for winning our poster award.

We are already busy planning next year's spring school. The 7th spring school is planned to take place in Heidelberg/Karlsruhe, Germany, from March 4th to 8th, 2024.

Thank you all for attending the 6th spring school in Greenwich!

On behalf of the spring school executive committee.

OpenLB Community YouTube Channel Update

We have just released a new video on our OpenLB YouTube Channel about Multi-GPU Simulation of Turbulent Mixing Using an LES Lattice Boltzmann Model and OpenLB.

Accurate simulations of species transport and mixing with reactions in fluids are a grand challenge in CFD because they require resolving the relevant turbulent structures down to the Batchelor scales. We present here first results of our approach to simulating a turbulent confined impinging jets (CIJ) micromixer [Johnson & Prud’homme 2003]. With the help of OpenLB (https://www.openlb.net/) it is now possible to perform an LES lattice Boltzmann simulation of this case with a newly developed stabilized species transport scheme. The two turbulent inlets are set up with the vortex method and the wall is mapped with a Bouzidi ansatz for higher precision. The simulation is meshed in parallel in OpenLB with 248 million cells, which are load-balanced and distributed to 24 A100 GPUs of the HoreKa cluster at KIT. The simulation took 40 hours to compute 5.4 ms of physical time (17.4 residence times). Two species are simulated but only one is visualized.

Simulation & Visualization: Fedor Bukreev, Adrian Kummerländer

OpenLB Release 1.6 available for download

The developer team is very happy to announce the release of the next version of OpenLB. The updated open-source Lattice Boltzmann (LB) code is now available for download.

Major new features include performance-optimized and GPU-enabled multi-lattice coupling alongside a new subgrid-scale particle system. This is augmented by a rich collection of bugfixes and general usability improvements.

Release notes

Major new features

  • New performance-optimized and GPU-enabled multi-lattice coupling
  • New subgrid-scale particle system

General improvements

  • New GPU-enabled Bouzidi implementation
  • Alternative handling of Bouzidi distances using new Yu post processor
  • GPU support for 3D free surface simulations
  • General usability improvements to the parameterization of dynamics, non-local and coupling operators
  • Support for asynchronous background post-processing / VTK output in GPU-based simulations
  • Support for heterogeneous simulations
  • Mixed compilation mode enabling different compilers for SIMD / GPU platforms
  • Reproducible compilation environments declared using Nix Flakes

New examples

  • adsorption/adsorption3d
  • adsorption/microMixer3d
  • reaction/advectionDiffusionReaction2d(Solver)
  • reaction/reaction2d
  • optimization/domainIdentification3d
  • optimization/domainIdentificationPoiseuille2d
  • optimization/showcaseADf
  • optimization/showcaseRosenbrock
  • optimization/testFlowOpti3d
  • freeSurface/breakingDam3d

Examples with full GPU support

  • turbulence/nozzle3d
  • turbulence/aorta3d
  • turbulence/venturi3d
  • turbulence/tgv3d
  • laminar/powerLaw2d
  • laminar/poiseuille(2,3)d
  • laminar/bstep(2,3)d
  • laminar/cylinder(2,3)d
  • laminar/cavity(2,3)d
  • laminar/cavity3dBenchmark
  • laminar/poiseuille(2,3)dEoc
  • freeSurface/fallingDrop(2,3)d
  • freeSurface/deepFallingDrop2d
  • freeSurface/rayleighInstability3d
  • freeSurface/breakingDam(2,3)d
  • advectionDiffusionReaction/advectionDiffusion(1,2,3)d
  • multiComponent/phaseSeparation(2,3)d
  • multiComponent/rayleighTaylor(2,3)d
  • thermal/squareCavity(2,3)d
  • thermal/rayleighBenard(2,3)d

Coupling in Action

Analogously to lattice-local post processors, inter-lattice coupling operators may now be declared as plain classes consisting of an application scope, parameters and a generic apply method. For illustration, consider the coupling between two lattices, targeting the Navier-Stokes and advection-diffusion equations respectively, using the Boussinesq approximation:

struct NavierStokesAdvectionDiffusionCoupling {
  // Declare that we want cell-wise coupling with some global parameters
  static constexpr OperatorScope scope = OperatorScope::PerCellWithParameters;

  // Declare the two parameters custom to this coupling operator
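  // (FIELD_BASE<0,1> declares a d-component vector field, FIELD_BASE<1> a single scalar)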
  struct FORCE_PREFACTOR : public descriptors::FIELD_BASE<0,1> { };
  struct T0 : public descriptors::FIELD_BASE<1> { };

  // Declare which parameters are required
  using parameters = meta::list<FORCE_PREFACTOR,T0>;

  template <typename CELLS, typename PARAMETERS>
  void apply(CELLS& cells, PARAMETERS& parameters) any_platform
  {
    // Get the cell of the NavierStokes lattice
    auto& cellNSE = cells.template get<names::NavierStokes>();
    // Get the cell of the Temperature lattice
    auto& cellADE = cells.template get<names::Temperature>();

    // Computation of the Boussinesq force
    auto forcePrefactor = parameters.template get<FORCE_PREFACTOR>();
    auto temperatureDifference = cellADE.computeRho() - parameters.template get<T0>();
    auto boussinesqForce = forcePrefactor * temperatureDifference;
    cellNSE.template setField<descriptors::FORCE>(boussinesqForce);

    // Velocity coupling
    auto u = cellADE.template getField<descriptors::VELOCITY>();
    cellNSE.computeU(u.data());
    cellADE.template setField<descriptors::VELOCITY>(u);
  }
};

Coupling operators are instantiated using the SuperLatticeCoupling class template, which is provided with a list of names and their assigned lattices.

SuperLattice<T,DESCRIPTOR_NSE> sLatticeNSE(sGeometry);
SuperLattice<T,DESCRIPTOR_ADE> sLatticeADE(sGeometry);
// [...]
SuperLatticeCoupling coupling(
  NavierStokesAdvectionDiffusionCoupling{},
  names::NavierStokes{}, sLatticeNSE,  // `sLatticeNSE` will be referred to by `names::NavierStokes`
  names::Temperature{},  sLatticeADE); // `sLatticeADE` will be referred to by `names::Temperature`
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::T0>(...);
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::FORCE_PREFACTOR>(...);
// [...]
coupling.execute();
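
The concrete parameter values depend on the simulation setup. As a minimal sketch with hypothetical numbers, assuming a 3D descriptor, that the vector-valued FORCE_PREFACTOR accepts an olb::Vector and that T0 is the reference temperature in lattice units, the two calls might look as follows:

// Hypothetical values for illustration only; a real setup derives these from the unit converter
Vector<T,3> forcePrefactor{0., -9.81 * 0.00021, 0.};  // assumed gravity times thermal expansion coefficient
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::FORCE_PREFACTOR>(forcePrefactor);
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::T0>(T{0.5});  // assumed reference temperature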

All coupling operators implemented in this new, more compact style work transparently on all of OpenLB’s target platforms, including GPUs.

Mixed compilation mode

In contrast to the initial GPU-supporting release OpenLB 1.5, where the entire code had to be compiled using nvcc and MPI support required manually defining the relevant include and linker flags, this new release offers a more fine-grained mixed compilation mode.

Specifically, it is possible to specify different compilers for the GPU_CUDA platform and the CPU-targeting platforms within the same build. This way, the GPU-side of things is automatically compiled into a separate shared library that is linked to the core application. Such separation is essential for fully supporting the vectorized CPU_SIMD platform alongside GPU_CUDA in a single heterogeneous executable.

Analogously to other compilation modes, example configs are provided in config/.

CXX             := mpic++
CC              := gcc

# Compiler flags for the core application and `CPU_*` platform support
CXXFLAGS        := -O3 -Wall -march=native -mtune=native
CXXFLAGS        += -std=c++17

# Parallel mode, one of `NONE`, `MPI` or `HYBRID`
PARALLEL_MODE   := MPI

# Platforms, optionally add `CPU_SIMD` for vectorized CPU execution
PLATFORMS       := CPU_SISD GPU_CUDA

# Compiler to use for the `GPU_CUDA` platform
CUDA_CXX        := nvcc
CUDA_CXXFLAGS   := -O3 -std=c++17
# Adjust to enable resolution of libcuda, libcudart, libcudadevrt
CUDA_LDFLAGS    := -L/run/opengl-driver/lib
# for e.g. RTX 30* (Ampere), see table in `rules.mk` for other options
CUDA_ARCH       := 86

# Default floating point type
FLOATING_POINT_TYPE := float

# Set to `OFF` if tinyxml and zlib are provided by the environment
USE_EMBEDDED_DEPENDENCIES := ON

The mixed mode is automatically enabled as soon as a separate CUDA compiler is specified using the CUDA_CXX environment variable. Following this, the compilation of the core library and individual applications is identical from the user’s perspective.

One additional advantage is that the compilation-time-intensive GPU kernels do not need to be recompiled for every code change. Instead, make no-cuda-recompile allows compiling the core application without GPU re-instantiation as long as no new operators are introduced (e.g. if only the geometry setup, parameters or post-processing is changed after an initial full compilation).

For convenience, various tested compilation environments are reproducibly declared using Nix Flakes. For example, instantiating a Multi-GPU compilation environment is as easy as removing the default config.mk and calling nix develop .#env-gcc-openmpi-cuda in the OpenLB root.

A guide for setting up (Multi-)GPU support for OpenLB on Windows WSL is also available (PDF).

Citation

If you want to cite OpenLB 1.6 you can use:

A. Kummerländer, S. Avis, H. Kusumaatmaja, F. Bukreev, M. Crocoll, D. Dapelo, N. Hafen, S. Ito, J. Jeßberger, J.E. Marquardt, J. Mödl, T. Pertzel, F. Prinz, F. Raichle, M. Schecher, S. Simonis, D. Teutscher, and M.J. Krause.

OpenLB Release 1.6: Open Source Lattice Boltzmann Code.

Version 1.6. Apr. 2023.

DOI: 10.5281/zenodo.7773497

General metadata is also available as a CITATION.cff file following the standard Citation File Format (CFF).


Supported Systems

OpenLB is able to utilize vectorization (AVX2/AVX-512) on x86 CPUs [1] and NVIDIA GPUs for block-local processing. CPU targets may additionally utilize OpenMP for shared memory parallelization while any communication between individual processes is performed using MPI.

It has been successfully employed for simulations on computers ranging from low-end smartphones up to supercomputers.

The present release has been explicitly tested in the following environments:

  • NixOS 22.11 and unstable (Nix Flake provided)
  • Ubuntu 20.04, 22.04
  • Red Hat Enterprise Linux 8.x (HoreKa, BwUniCluster2)
  • Windows 10, 11 (WSL)
  • MacOS 13

as well as with the following compilers and MPI libraries:

  • GCC 9 and later
  • Clang 13 and later
  • Intel C++ 2021.4 and later
  • NVIDIA CUDA 11.4 and later
  • NVIDIA HPC SDK 21.3 and later
  • MPI libraries OpenMPI 3.1, 4.1 (CUDA-awareness required for Multi-GPU); Intel MPI 2021.3.0 and later

[1]: Other CPU targets are also supported, e.g. common smartphone ARM CPUs and Apple M1/M2.

Spring School 2023 in Greenwich/London (UK) – Register Now

Registration is now open for the Sixth Spring School on Lattice Boltzmann Methods with OpenLB Software Lab, which will be held in Greenwich/London, UK from the 5th to the 9th of June 2023. The spring school introduces scientists and practitioners from industry to the theory of LBM and trains them on practical problems. The first half of the week is dedicated to the theoretical fundamentals of LBM, up to ongoing research on selected topics. This is followed by mentored training on case studies using OpenLB in the second half, where the participants gain deeper insights into LBM and its applications. This educational concept offers a comprehensive and personally guided approach to LBM. Participants also benefit from the knowledge exchange during the poster session, coffee breaks, and the excursion. We look forward to your participation.

Keep in mind that the number of participants is limited and that registration follows a first-come, first-served principle.

On behalf of the spring school executive committee, Nicolas Hafen, Mathias J. Krause, Jan E. Marquardt, Timothy Reis, Choi-Hong Lai, Tao Gao, Andrew Kao

OpenLB Community YouTube Channel Update

We have just released a new video on our OpenLB YouTube Channel. 

Simulation of a breaking dam using the free surface model included in OpenLB 1.5.

Free surface implementation by Claudius Holeksa, example case and visualization by Maximilian Schecher.

Recent Performance Benchmarks of OpenLB 1.5 on the HoreKa Supercomputer at KIT

Following up on the performance-focused release of OpenLB 1.5, we updated our Performance showcases to include scalability plots on up to 128 CPU-only and Multi-GPU nodes of the HoreKa supercomputer at the Karlsruhe Institute of Technology (KIT). These results were presented at the 25th Results and Review Workshop of the HLRS this October and are accepted for publication in the annual proceedings on High Performance Computing in Science and Engineering.

The following plots document the per-node performance in billions of cell updates per second (GLUPs) for various problem sizes of the established lid-driven cavity benchmark case. Highlights include weak scaling efficiencies of up to 1.01 for hybrid AVX-512-vectorized CPU execution and up to 0.9 for CUDA GPU execution, alongside a total peak performance of 1.33 trillion cell updates per second when using 512 NVIDIA A100 GPUs. Further details including individual strong scaling values are available in the performance section.

Scalability of OpenLB 1.5 on HoreKa using hybrid execution (MPI + OpenMP + AVX-512 Vectorization)

Scalability of OpenLB 1.5 on HoreKa using multi GPU execution (MPI + CUDA)

Plots, vectorization and CUDA GPU implementation contributed by Adrian Kummerländer.

A. Kummerländer, F. Bukreev, S. Berg, M. Dorn and M.J. Krause. Advances in Computational Process Engineering using Lattice Boltzmann Methods on High Performance Computers for Solving Fluid Flow Problems. In: High Performance Computing in Science and Engineering ’22 (accepted).

Highly-resolved Nozzle Simulation Performed Using Multi-GPU Support

We have just released a new video on our OpenLB YouTube Channel. 

In order to showcase the usability and performance of OpenLB’s GPU support, a turbulent nozzle flow was simulated on the HoreKa supercomputer (KIT, Germany). This case was adapted for higher resolution from the turbulence/nozzle3d example included in OpenLB 1.5. It utilizes a Smagorinsky LES model for the bulk flow and non-local interpolated boundaries for the in- and outflow conditions on top of a single-precision D3Q19 lattice. The simulation’s 2.5 billion cells were computed on 120 NVIDIA A100 GPUs distributed across 30 nodes. This resulted in a performance of ~250 billion cell updates per second. ParaView was used to generate the visualization.

Simulation and Visualization: Adrian Kummerländer

LBM Spring School with OpenLB Software Lab in Kraków successfully finished

The executive committee announces the closing of the fifth LBM Spring School with OpenLB Software Lab. We were happy to host 51 participants from 8 countries, including 4 invited speakers in Kraków, Poland. This year’s poster award goes to Pavel Eichler (Czech Technical University in Prague).

Next year, the 6th spring school is planned to take place at the University of Greenwich in England/UK from June 5th to 9th, 2023.

On behalf of the spring school executive committee, Nicolas Hafen, Mathias J. Krause, Jan E. Marquardt, Paweł Madejski, Tomasz Kuś, Navaneethan Subramanian, Maciej Bujalski, Karolina Chmiel.