Reply To: Multi-GPU with grid refinement in OpenLB 1.8-1
I’m running multi-GPU tests in OpenLB 1.8-1 and wanted to confirm whether grid refinement is officially supported on multiple GPUs, and to ask for help with a crash I’m seeing.
Yes, you can use multi-GPU for grid refinement (there was a bug in the setter in the release that has since been fixed). I just re-confirmed that the sphere case works on multi-GPU in the current head of the public repository.
Both fail on two H200 GPUs at the coupler construction:
Just a side note: using two H200s is overkill for these cases unless you scale up to many hundreds of millions of cells (although using two is of course useful for testing).
If yes, are there known limitations or configuration requirements for refinement::lagrava::makeCoarseToFineCoupler on multi-GPU?
Any guidance on fixes or patches would be much appreciated.
I am using the Rohde scheme for large applications on multiple GPUs internally without problems in the coupling itself. The main challenge right now is that the block decomposition and the mesh in general need to be set up manually to fit the refinement, i.e. you cannot easily scale this up or adapt the mesh without manual intervention. The API for this will be improved in future releases.
[GPU_CUDA:0] Found 2 CUDA devices but only one can be used per MPI process.
It seems like the mpirun command is not quite right (CUDA_VISIBLE_DEVICES is not adjusted, so each process sees both GPUs and selects the first one). The example configs include a SLURM script with an example command for multi-GPU use; please follow that.
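For reference, a minimal sketch of per-rank GPU binding via a wrapper script, assuming OpenMPI (which sets OMPI_COMM_WORLD_LOCAL_RANK) and a hypothetical case binary ./yourCase; the SLURM script shipped with the example configs is the authoritative reference:

#!/bin/bash
# gpu_bind.sh -- illustrative wrapper, not the official OpenLB script:
# restrict each MPI process to the GPU matching its local rank on the node,
# so each rank sees exactly one device.
# Under srun, SLURM_LOCALID can be used instead of OMPI_COMM_WORLD_LOCAL_RANK.
export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}
exec "$@"

Launched with one process per GPU, e.g.: mpirun -np 2 ./gpu_bind.sh ./yourCase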
