
SLURM script OpenLB

  • #6260
    achodankar
    Participant

    Hello Developers,
    There are three options for running the code in parallel in the config file: MPI, OMP, and hybrid. Which would be the best option when running the code on a cluster? Also, what would the corresponding options in the SLURM script look like? Would it be possible to share a sample SLURM script for running OpenLB code on a cluster?
    I would really appreciate any help in this matter.

    Thank you.

    Yours sincerely,

    Abhijeet C.

    #6263
    Adrian
    Keymaster

    For the current release we recommend using the plain MPI-only mode. Hybrid mode will yield an advantage starting with the upcoming release (due to SIMD support and other architectural improvements).
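
    For orientation, the parallel mode is selected in the build configuration before compiling. A minimal sketch, assuming the PARALLEL_MODE switch used in the Makefile configuration of recent OpenLB releases (the exact file and variable names may differ in your version):

    # build configuration excerpt -- select plain MPI parallelization
    PARALLEL_MODE := MPI    # available modes: OFF, MPI, OMP, HYBRID

    After changing the mode, rebuild your case (e.g. make clean && make).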

    The specifics of a SLURM script are quite cluster-dependent. For example, here is a script for MPI-only execution on the HoreKa supercomputer at KIT (MPI-only on 10 nodes, compiled using the Intel C++ compiler and MPI libraries):

    #!/bin/bash
    #SBATCH --partition="cpuonly"
    #SBATCH --nodes=10
    #SBATCH --ntasks=760
    #SBATCH --time=00:30:00
    
    # Intel MPI environment on HoreKa
    module load mpi/impi/2021.4.0
    
    # MPI-only run: keep OpenMP threading disabled
    export OMP_NUM_THREADS=1
    
    mpiexec.hydra ./yourProgram
    

    The documentation of your specific cluster likely includes at least an example script for a plain MPI program; this should also work for OpenLB's MPI-only mode.
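
    For comparison, such a generic plain-MPI example often looks roughly like the following sketch; the partition name, module name, node/task counts and the launcher (srun here) are placeholders that you need to adapt to your cluster:

    #!/bin/bash
    # placeholder partition and sizing -- adapt to your cluster
    #SBATCH --partition=standard
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=64
    #SBATCH --time=01:00:00
    
    # placeholder MPI module name
    module load openmpi
    
    # MPI-only: keep OpenMP threading disabled
    export OMP_NUM_THREADS=1
    
    # srun takes the process count from the SLURM allocation
    srun ./yourProgram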

    #6271
    achodankar
    Participant

    Hello Adrian,
    Thank you very much for your prompt response. This is really helpful.

    Thank you.

    Yours sincerely,

    Abhijeet C.

    #6294
    Anand
    Participant

    Hi,
    what is the meaning of “mpiexec.hydra”?
    Thank you.
    Regards,
    Anand

    #6297
    Adrian
    Keymaster

    This is one of the main areas where SLURM scripts differ between clusters. You have to replace it with the specific MPI launch command recommended by your cluster’s documentation.
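
    For example, depending on the MPI installation and the cluster’s setup, the launch line is commonly one of the following (check your cluster’s documentation for the recommended variant):

    srun ./yourProgram              # SLURM's native launcher
    mpirun ./yourProgram            # typical for Open MPI
    mpiexec.hydra ./yourProgram     # Intel MPI / MPICH (Hydra process manager)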

    #6298
    Anand
    Participant

    Hi Adrian,
    I use the following line and it is working (Pipe.cpp is the code):

    “mpiexec -np 72 ./Pipe”

    #6300
    Adrian
    Keymaster

    Glad to hear it! You can probably drop the “-np 72” argument; most setups I have encountered automatically set the correct number of processes from the SLURM variables.
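
    For illustration, inside a SLURM job the task count usually comes from the #SBATCH directives, so the launch line can stay generic. A sketch of the relevant lines, to be adapted to your cluster:

    #SBATCH --ntasks=72
    
    # no -np needed; the launcher picks up the process count from the allocation
    mpiexec ./Pipe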
