Reply To: CUDA MPI usage in two GeForce RTX 2080 Ti GPUs

#6939
Adrian
Keymaster

Your OpenMPI build likely wasn’t compiled with CUDA support. CUDA-aware MPI is required for multi-GPU simulations in release 1.5. You can check whether it is available using e.g. ompi_info --parsable --all | grep mpi_built_with_cuda_support:value, which should return:

mca:mpi:base:param:mpi_built_with_cuda_support:value:true
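
If you prefer to check from inside a program (which helps when several MPI installations are present on the machine), OpenMPI also exposes its CUDA awareness through the mpi-ext.h extension header. A minimal sketch along the lines of the OpenMPI documentation (the file name is arbitrary; the MPIX_* symbols are OpenMPI-specific, so other MPI libraries simply take the fallback branch):

// check_cuda_aware.c - compile with e.g.: mpicc check_cuda_aware.c -o check_cuda_aware
#include <mpi.h>
#include <stdio.h>
#if defined(OPEN_MPI) && OPEN_MPI
#include <mpi-ext.h> /* provides MPIX_CUDA_AWARE_SUPPORT and MPIX_Query_cuda_support() */
#endif

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
#if defined(MPIX_CUDA_AWARE_SUPPORT)
  printf("compile-time CUDA support: %s\n", MPIX_CUDA_AWARE_SUPPORT ? "yes" : "no");
  printf("run-time CUDA support:     %s\n", MPIX_Query_cuda_support() ? "yes" : "no");
#else
  printf("this MPI library does not expose the OpenMPI CUDA-awareness extension\n");
#endif
  MPI_Finalize();
  return 0;
}

Both lines should report yes before you attempt a multi-GPU run.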

If you run this on a cluster, there is likely a module already available; otherwise you’ll have to check how a CUDA-aware MPI can be installed on your particular distribution (I’ll still be happy to help further). If no package or build option is available on your system (as there is, e.g., for the declarative Nix shell environment included in the release), you’ll have to compile OpenMPI or some other CUDA-aware MPI library manually. One additional option is Nvidia’s HPC SDK, which includes a CUDA-aware build of OpenMPI (this is the environment I commonly use on our cluster).
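
Once a CUDA-aware build is in place, a quick way to confirm that device pointers can really be handed to MPI (which is what the multi-GPU mode relies on) is a small two-rank exchange of a cudaMalloc’d buffer; with a non-CUDA-aware MPI this typically crashes. A rough sketch, assuming one GPU per rank as in your two-2080-Ti setup (file name, buffer size and compile line are only examples):

// device_exchange.cu - e.g.: nvcc device_exchange.cu -o device_exchange -I$MPI_HOME/include -L$MPI_HOME/lib -lmpi
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0, size = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  if (size != 2) {
    if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  const int n = 1 << 20;
  double* d_buf = NULL;
  cudaSetDevice(rank);                      /* assumption: one GPU per rank */
  cudaMalloc(&d_buf, n * sizeof(double));
  cudaMemset(d_buf, 0, n * sizeof(double));

  /* the device pointer is passed to MPI directly - only valid with CUDA-aware MPI */
  if (rank == 0) {
    MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
  } else {
    MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank 1 received the device buffer without host staging\n");
  }

  cudaFree(d_buf);
  MPI_Finalize();
  return 0;
}

Running mpirun -np 2 ./device_exchange should print the message from rank 1; if it crashes inside MPI_Send / MPI_Recv, the MPI library being picked up is still not CUDA-aware.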

Sorry for the unhelpful error message; this will be improved in 1.6. The latest release was only the first step of GPU support in OpenLB.