Reply To: Multi GPUs Calculation
In general you do not need an NVLink interconnect to use multiple GPUs in OpenLB: MPI will transparently fall back to communication over PCIe (device to host to device). NVLink is still recommended for optimal performance due to its higher inter-GPU bandwidth.
I assume that OpenLB did not issue a warning about missing CUDA-awareness of MPI (e.g. "The used MPI Library is not CUDA-aware. Multi-GPU execution will fail.") and that you compiled / installed an MPI library with CUDA-awareness?
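If you are using OpenMPI, a quick sanity check for CUDA support is the following (this invocation assumes OpenMPI; other MPI implementations report this differently):

ompi_info --parsable --all | grep mpi_built_with_cuda_support:value

This should print a line ending in :value:true if the library was built with CUDA awareness.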
Can you provide me with more details on your system and software environment? (CUDA versions, modified config.mk and so on)
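For reference, a GPU+MPI build configuration based on the example configs shipped with recent OpenLB releases looks roughly like the sketch below (variable values such as the compiler and CUDA_ARCH depend on your system and may differ in your OpenLB version):

CXX := mpic++
CC := gcc
CXXFLAGS := -O3 -std=c++17
PARALLEL_MODE := MPI
PLATFORMS := CPU_SISD GPU_CUDA
CUDA_ARCH := 75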
If you use

mpirun -np 2 ./cavity3d

only the first of all visible GPUs will be used (as per the warning message). This is why the example configs contain a call of the form

mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./program'

which assigns each rank its own GPU.
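Applied to your case, running cavity3d on two GPUs would look like this (OMPI_COMM_WORLD_LOCAL_RANK is OpenMPI-specific; other launchers expose the local rank under a different variable, e.g. SLURM_LOCALID under srun):

mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./cavity3d'

Each rank then only sees one device and uses it exclusively.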