OpenLB Release 1.6 available for download

The developer team is very happy to announce the release of the next version of OpenLB. The updated open-source Lattice Boltzmann (LB) code is now available for download.

Major new features include performance-optimized and GPU-enabled multi-lattice coupling alongside a new subgrid-scale particle system. This is augmented by a rich collection of bugfixes and general usability improvements.

Release notes

Major new features

  • New performance-optimized and GPU-enabled multi-lattice coupling
  • New subgrid-scale particle system

General improvements

  • New GPU-enabled Bouzidi implementation
  • Alternative handling of Bouzidi distances using new Yu post processor
  • GPU support for 3D free surface simulations
  • General usability improvements to dynamics, non-local and coupling operator parameterization
  • Support for asynchronous background post-processing / VTK output in GPU-based simulations
  • Support for heterogeneous simulations
  • Mixed compilation mode enabling different compilers for SIMD / GPU platforms
  • Reproducible compilation environments declared using Nix Flakes

New examples

  • adsorption/adsorption3d
  • adsorption/microMixer3d
  • reaction/advectionDiffusionReaction2d(Solver)
  • reaction/reaction2d
  • optimization/domainIdentification3d
  • optimization/domainIdentificationPoiseuille2d
  • optimization/showcaseADf
  • optimization/showcaseRosenbrock
  • optimization/testFlowOpti3d
  • freeSurface/breakingDam3d

Examples with full GPU support

  • turbulence/nozzle3d
  • turbulence/aorta3d
  • turbulence/venturi3d
  • turbulence/tgv3d
  • laminar/powerLaw2d
  • laminar/poiseuille(2,3)d
  • laminar/bstep(2,3)d
  • laminar/cylinder(2,3)d
  • laminar/cavity(2,3)d
  • laminar/cavity3dBenchmark
  • laminar/poiseuille(2,3)dEoc
  • freeSurface/fallingDrop(2,3)d
  • freeSurface/deepFallingDrop2d
  • freeSurface/rayleighInstability3d
  • freeSurface/breakingDam(2,3)d
  • advectionDiffusionReaction/advectionDiffusion(1,2,3)d
  • multiComponent/phaseSeparation(2,3)d
  • multiComponent/rayleighTaylor(2,3)d
  • thermal/squareCavity(2,3)d
  • thermal/rayleighBenard(2,3)d

Coupling in Action

Analogously to lattice-local post processors, inter-lattice coupling operators may now be declared as plain classes consisting of an application scope, parameters and a generic apply method. As an illustration, consider the coupling of two lattices, solving the Navier-Stokes and advection-diffusion equations respectively, via the Boussinesq approximation:

struct NavierStokesAdvectionDiffusionCoupling {
  // Declare that we want cell-wise coupling with some global parameters
  static constexpr OperatorScope scope = OperatorScope::PerCellWithParameters;

  // Declare the two parameters custom to this coupling operator
  struct FORCE_PREFACTOR : public descriptors::FIELD_BASE<0,1> { };
  struct T0 : public descriptors::FIELD_BASE<1> { };

  // Declare which parameters are required
  using parameters = meta::list<FORCE_PREFACTOR,T0>;

  template <typename CELLS, typename PARAMETERS>
  void apply(CELLS& cells, PARAMETERS& parameters) any_platform
  {
    // Get the cell of the NavierStokes lattice
    auto& cellNSE = cells.template get<names::NavierStokes>();
    // Get the cell of the Temperature lattice
    auto& cellADE = cells.template get<names::Temperature>();

    // Compute the Boussinesq force
    auto forcePrefactor = parameters.template get<FORCE_PREFACTOR>();
    auto temperatureDifference = cellADE.computeRho() - parameters.template get<T0>();
    auto boussinesqForce = forcePrefactor * temperatureDifference;
    cellNSE.template setField<descriptors::FORCE>(boussinesqForce);

    // Velocity coupling
    auto u = cellADE.template getField<descriptors::VELOCITY>();
    cellNSE.computeU(u.data());
    cellADE.template setField<descriptors::VELOCITY>(u);
  }
};

Coupling operators are instantiated using the SuperLatticeCoupling class template, which is given the operator together with a list of name tags and the lattices assigned to them.

SuperLattice<T,DESCRIPTOR_NSE> sLatticeNSE(sGeometry);
SuperLattice<T,DESCRIPTOR_ADE> sLatticeADE(sGeometry);
// [...]
SuperLatticeCoupling coupling(
  NavierStokesAdvectionDiffusionCoupling{},
  names::NavierStokes{}, sLatticeNSE,  // `sLatticeNSE` will be referred to by `names::NavierStokes`
  names::Temperature{},  sLatticeADE); // `sLatticeADE` will be referred to by `names::Temperature`
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::T0>(...);
coupling.setParameter<NavierStokesAdvectionDiffusionCoupling::FORCE_PREFACTOR>(...);
// [...]
coupling.execute();

All coupling operators implemented in this new, more compact style transparently work on all of OpenLB’s target platforms, including GPUs.

Mixed compilation mode

Unlike OpenLB 1.5, the initial GPU-supporting release, in which the entire code had to be compiled with nvcc and MPI support required manually defining the relevant include and linker flags, this release offers a more fine-grained mixed compilation mode.

Specifically, it is now possible to specify different compilers for the GPU_CUDA platform and the CPU-targeting platforms within the same build. The GPU code is then automatically compiled into a separate shared library that is linked against the core application. This separation is essential for fully supporting the vectorized CPU_SIMD platform alongside GPU_CUDA in a single heterogeneous executable.

Analogously to other compilation modes, example configs are provided in config/.

CXX             := mpic++
CC              := gcc

# Compiler flags for the core application and `CPU_*` platform support
CXXFLAGS        := -O3 -Wall -march=native -mtune=native
CXXFLAGS        += -std=c++17

# Parallel mode, one of `NONE`, `MPI` or `HYBRID`
PARALLEL_MODE   := MPI

# Platforms, optionally add `CPU_SIMD` for vectorized CPU execution
PLATFORMS       := CPU_SISD GPU_CUDA

# Compiler to use for the `GPU_CUDA` platform
CUDA_CXX        := nvcc
CUDA_CXXFLAGS   := -O3 -std=c++17
# Adjust to enable resolution of libcuda, libcudart, libcudadevrt
CUDA_LDFLAGS    := -L/run/opengl-driver/lib
# for e.g. RTX 30* (Ampere), see table in `rules.mk` for other options
CUDA_ARCH       := 86

# Default floating point type
FLOATING_POINT_TYPE := float

# Set to `OFF` if tinyxml and zlib are provided by the environment
USE_EMBEDDED_DEPENDENCIES := ON
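
Building on this config, a single heterogeneous executable combining vectorized CPU execution with GPU offloading should only require extending the platform list; this is a sketch based on the comment in the excerpt above, not a tested configuration:

# Heterogeneous build: SIMD-vectorized CPU blocks alongside CUDA GPU blocks
PLATFORMS       := CPU_SISD CPU_SIMD GPU_CUDA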

The mixed mode is enabled automatically as soon as a separate CUDA compiler is specified via the CUDA_CXX variable. Beyond that, compiling the core library and individual applications works exactly as before from the user’s perspective.

One additional advantage is that the compile-time-intensive GPU kernels do not need to be rebuilt for every code change: as long as no new operators are introduced (e.g. if only the geometry setup, parameters or post-processing changes after an initial full compilation), make no-cuda-recompile recompiles the core application without re-instantiating the GPU kernels.

For convenience, various tested compilation environments are reproducibly declared using Nix Flakes. For example, entering a multi-GPU compilation environment is as easy as removing the default config.mk and calling nix develop .#env-gcc-openmpi-cuda in the OpenLB root directory.

A guide for setting up (Multi-)GPU support for OpenLB on Windows WSL is also available (PDF).

Citation

If you want to cite OpenLB 1.6, you can use:

A. Kummerländer, S. Avis, H. Kusumaatmaja, F. Bukreev, M. Crocoll, D. Dapelo, N. Hafen, S. Ito, J. Jeßberger, J.E. Marquardt, J. Mödl, T. Pertzel, F. Prinz, F. Raichle, M. Schecher, S. Simonis, D. Teutscher, and M.J. Krause. OpenLB Release 1.6: Open Source Lattice Boltzmann Code. Version 1.6. Apr. 2023. DOI: 10.5281/zenodo.7773497

General metadata is also available as a CITATION.cff file following the standard Citation File Format (CFF).

Supported Systems

OpenLB is able to utilize vectorization (AVX2/AVX-512) on x86 CPUs [1] and NVIDIA GPUs for block-local processing. CPU targets may additionally utilize OpenMP for shared memory parallelization while any communication between individual processes is performed using MPI.
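
As an illustration, a CPU-only build using all three layers might be configured as follows; this is a hedged sketch assembled from the options in the config excerpt above, and flags may need adjusting for a given machine:

# MPI between processes plus OpenMP threads within each process
PARALLEL_MODE   := HYBRID
# SIMD-vectorized execution (AVX2/AVX-512 depending on the -march target)
PLATFORMS       := CPU_SISD CPU_SIMD
CXX             := mpic++
CXXFLAGS        := -O3 -Wall -march=native -mtune=native -std=c++17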

It has been successfully employed for simulations on computers ranging from low-end smartphones up to supercomputers.

The present release has been explicitly tested in the following environments:

  • NixOS 22.11 and unstable (Nix Flake provided)
  • Ubuntu 20.04, 22.04
  • Red Hat Enterprise Linux 8.x (HoreKa, BwUniCluster2)
  • Windows 10, 11 (WSL)
  • macOS 13

as well as with the following compilers and MPI libraries:

  • GCC 9 and later
  • Clang 13 and later
  • Intel C++ 2021.4 and later
  • NVIDIA CUDA 11.4 and later
  • NVIDIA HPC SDK 21.3 and later
  • MPI libraries OpenMPI 3.1, 4.1 (CUDA-awareness required for Multi-GPU); Intel MPI 2021.3.0 and later

[1]: Other CPU targets are also supported, e.g. common smartphone ARM CPUs and Apple M1/M2.