
Reply To: Multi GPUs Calculation

#7623
Adrian
Keymaster

No worries 🙂 So the multi-GPU execution now works with the additional flag?

The gpu::cuda::device::synchronize calls are only active when GPU support is enabled at compile time. In general this is an artifact of the work-in-progress nature of heterogeneous computation support in OpenLB. Both these calls and the SuperLattice::setProcessingContext calls will be transparently hidden in the future.
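
For illustration, here is a rough sketch of how these calls typically appear in the time loop of a GPU-enabled example. The helper name timeLoop and the output step are placeholders, and the header / class names follow recent releases, so the details may differ on your version:

```cpp
#include "olb3D.h"
#include "olb3D.hh"   // header pair used by the 3D examples; adjust to your release

using namespace olb;

using T = float;
using DESCRIPTOR = descriptors::D3Q19<>;

// Hypothetical helper illustrating the pattern used in the GPU-enabled examples:
// after the (possibly device-side) update, data is copied into the CPU-side
// evaluation context before functors / VTK writers read it. The synchronize
// call is only compiled when the CUDA platform is enabled.
void timeLoop(SuperLattice<T,DESCRIPTOR>& sLattice,
              std::size_t maxIter, std::size_t outIter)
{
  for (std::size_t iT = 0; iT < maxIter; ++iT) {
    sLattice.collideAndStream();

    if (iT % outIter == 0) {
      // make device-side data visible to CPU-side output code
      sLattice.setProcessingContext(ProcessingContext::Evaluation);
#ifdef PLATFORM_GPU_CUDA
      gpu::cuda::device::synchronize();
#endif
      // ... evaluate functors, write VTK output, print statistics ...
    }
  }
}
```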

What do you mean exactly by your last question? Ignoring the mpirun / hardware setup issues, multi-GPU support in OpenLB is transparent in the sense that if A) CUDA-aware MPI support is enabled during compilation and B) the application works on a single GPU, then it will also work in multi-GPU mode.

OpenMP is only used for CPU-side parallelization on shared memory systems. Most commonly we use it in HYBRID mode for CPU-only simulations (i.e. each CPU socket of a cluster node is assigned a single MPI process that uses OpenMP parallelization internally).

The performance-critical parts of MPI usage are contained in the SuperCommunicator (and its supporting infrastructure). This is the class responsible for handling all overlap communication between the individual blocks of the decomposition.
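
To give an idea of what "overlap communication" means, here is a schematic halo exchange between two neighbouring blocks using plain non-blocking MPI. This is not the actual SuperCommunicator code (which handles arbitrary block neighbourhoods and, with CUDA-aware MPI, device-side buffers), just the underlying idea:

```cpp
#include <mpi.h>
#include <vector>

// Schematic only: each rank owns one block plus a halo layer mirroring the
// border cells of its two neighbours. After every local update, border cells
// are sent out and the halo is refreshed with the neighbours' borders.
void exchangeOverlap(std::vector<double>& leftHalo,
                     std::vector<double>& rightHalo,
                     const std::vector<double>& leftBorder,
                     const std::vector<double>& rightBorder,
                     int leftRank, int rightRank, MPI_Comm comm)
{
  MPI_Request requests[4];
  // receive the neighbours' border cells into the local halo layers
  MPI_Irecv(leftHalo.data(),  static_cast<int>(leftHalo.size()),  MPI_DOUBLE,
            leftRank,  0, comm, &requests[0]);
  MPI_Irecv(rightHalo.data(), static_cast<int>(rightHalo.size()), MPI_DOUBLE,
            rightRank, 1, comm, &requests[1]);
  // send the local border cells to both neighbours (tags chosen to match the
  // receives posted on the other side)
  MPI_Isend(leftBorder.data(),  static_cast<int>(leftBorder.size()),  MPI_DOUBLE,
            leftRank,  1, comm, &requests[2]);
  MPI_Isend(rightBorder.data(), static_cast<int>(rightBorder.size()), MPI_DOUBLE,
            rightRank, 0, comm, &requests[3]);
  MPI_Waitall(4, requests, MPI_STATUSES_IGNORE);
}
```

Application code never calls anything like this directly; the SuperCommunicator sets up the block neighbourhood and performs the exchange for you, which is also where the CUDA-aware MPI requirement for multi-GPU runs comes from.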