Reply To: Running examples on multiple GPUs
June 20, 2024 at 8:09 pm
#8842
Danial.Khazaeipoul
Participant
Currently, I am requesting an interactive allocation on the cluster via the VNC protocol, which means the job is not submitted through a SLURM script. Instead, I run the “mpirun” command directly, as if I were on a local PC with two NVIDIA cards, as shown in the “nvidia-smi” output.
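For reference, the invocation is roughly of the following form (a sketch assuming OpenMPI and a GPU-enabled example binary, here called ./example as a placeholder), with each MPI rank mapped to its own GPU through CUDA_VISIBLE_DEVICES so that the two processes do not end up sharing a single device:

mpirun -np 2 bash -c 'export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}; ./example'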
The cluster runs the Rocky Linux operating system.