# Slurm Migration Guide
The Slurm cluster is currently in a closed beta until summer 2024. If you would like to join the beta program, submit a request.
If you are an existing user of PBS and would like to migrate your workflow to Slurm, you are in the right place. The two systems have much in common, since both are designed for the same task: scheduling jobs on a shared cluster.
## PBS to Slurm Command Map
The table below shows common PBS commands and their Slurm counterparts.
| PBS Command | Slurm Command | Description |
|---|---|---|
| `qsub` | `sbatch`, `srun`, or `salloc` | Batch job (more information) |
| `qsub -I` | `salloc` | Interactive job |
| `qstat` | `squeue` or `sstat` | Job statistics or information |
| `qpeek` | N/A: Slurm writes output/error in real time | View job logs while running |
| `qdel` | `scancel` | Cancel job |
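For example, a typical submit/monitor/cancel cycle with these commands looks like the following (the script name and job ID are placeholders):

```bash
sbatch my-job.sh     # submit a batch job; Slurm replies "Submitted batch job <jobid>"
squeue -u $USER      # show your pending and running jobs
scancel 123456       # cancel a job by its ID (placeholder ID)
```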
## PBS to Slurm Environment Variables Map
The table below shows common PBS environment variables and their Slurm counterparts.
| PBS Variable | Slurm Variable | Description |
|---|---|---|
| `$PBS_JOBID` | `$SLURM_JOB_ID` | Job ID |
| `$PBS_JOBNAME` | `$SLURM_JOB_NAME` | Job name |
| `$PBS_O_WORKDIR` | `$SLURM_SUBMIT_DIR` | Directory the job was submitted from |
| `cat $PBS_NODEFILE` | `$SLURM_JOB_NODELIST` or `srun hostname` | Nodes allocated to the job |
| N/A | `$SLURM_NTASKS` | Total number of tasks or MPI processes (NOTE: not the total number of cores unless `--cpus-per-task` is 1) |
| N/A | `$SLURM_CPUS_PER_TASK` | Number of CPU cores for each task or MPI process |
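For instance, a job script might use these variables as follows (a minimal sketch; the log filename is arbitrary):

```bash
#!/bin/bash
#SBATCH --job-name var-demo

# Run from the directory the job was submitted from
# (the Slurm equivalent of the common PBS idiom `cd $PBS_O_WORKDIR`):
cd "$SLURM_SUBMIT_DIR"

# Record the job's identity and its allocated nodes:
echo "Job $SLURM_JOB_ID ($SLURM_JOB_NAME) ran on $SLURM_JOB_NODELIST" > "job-$SLURM_JOB_ID.log"
```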
## Converting PBS Batch Scripts for Slurm
- In PBS, users place `#PBS` directives in their batch scripts to tell the scheduler which options to run the job with. In Slurm, users must use `#SBATCH` directives instead.
- In PBS, users run `qsub job-script` to submit a job to the scheduler; in Slurm, users run `sbatch job-script` instead.
- NOTE: Modules loaded before the job is submitted are carried into the batch job environment. It is therefore highly recommended to put `module purge` at the beginning of the job script (see the sketch after this list).
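A minimal preamble following this recommendation might look like this (the module name is a placeholder; load whatever your workflow actually needs):

```bash
#!/bin/bash
#SBATCH --job-name my-job-name

module purge             # drop modules inherited from the submission shell
module load anaconda3    # placeholder: load the modules your job needs
```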
As a fuller example, in PBS a user might have a job script like:

```bash
#!/bin/bash
#PBS -N my-job-name
#PBS -l select=2:ncpus=8:mpiprocs=2:mem=2gb:ngpus=1:gpu_model=a100:interconnect=hdr,walltime=02:00:00

export OMP_NUM_THREADS=8
python3 run-my-science-workflow.py
```
The same script, written for Slurm, would look like this (we recommend using the long option names, for example `--nodes` instead of `-N`):
```bash
#!/bin/bash
#SBATCH --job-name my-job-name
#SBATCH --nodes 2
#SBATCH --tasks-per-node 2
#SBATCH --cpus-per-task 8
#SBATCH --gpus-per-node a100:1
#SBATCH --mem 2gb
#SBATCH --time 02:00:00
#SBATCH --constraint interconnect_hdr

export OMP_NUM_THREADS=8
python3 run-my-science-workflow.py
```
Below are brief explanations of the parameters used here:

- `--nodes` selects the number of nodes for the job. It is equivalent to `select` combined with `place=scatter` in PBS, meaning distinct physical nodes rather than chunks.
- `--tasks-per-node` is the number of tasks on each node, equivalent to `mpiprocs` in PBS.
- `--cpus-per-task` controls the number of cores in each task above. The default is 1; for multi-threaded jobs, `--cpus-per-task` is usually set to the value of `OMP_NUM_THREADS`.
- The total number of cores is not specified explicitly. It is the value of `--nodes` multiplied by `--tasks-per-node` multiplied by `--cpus-per-task` (see the worked example after this list).
- `--mem` is the memory per node.
- `--gpus-per-node` specifies the GPU model and the number of GPUs per node, in the format `--gpus-per-node <gpu_model>:<gpu_number>`.
- `--time` is the walltime of the job; the maximum is 72 hours for c2 nodes.
- The `interconnect` option has not been fully implemented on Slurm yet, so the `--constraint` line may not take effect for now.
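As a quick check of the arithmetic, using the directives from the script above:

```bash
# 2 nodes x 2 tasks/node = 4 tasks (MPI processes)
# 4 tasks x 8 cores/task = 32 cores in total
# Inside the job, the same numbers are available from Slurm's variables:
echo "tasks: $SLURM_NTASKS, cores per task: $SLURM_CPUS_PER_TASK"
echo "total cores: $(( SLURM_NTASKS * SLURM_CPUS_PER_TASK ))"
```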
## Converting PBS Interactive Job Workflows for Slurm
- In PBS, users use `qsub -I` to request an interactive job; in Slurm, users use `salloc` instead. (Note that the command for interactive jobs, `salloc`, is different from the one for batch jobs, `sbatch`.)
- NOTE: Modules loaded before the job is submitted are carried into the interactive job environment. It is therefore highly recommended to run `module purge` once the interactive job is allocated (see the session sketch below).
For example, users can use the following command to request a PBS interactive job:

```bash
qsub -I -l select=2:ncpus=4:mem=2gb:ngpus=1:gpu_model=a100:interconnect=hdr,walltime=02:00:00
```
The corresponding command in Slurm would look like:

```bash
salloc --nodes 2 --tasks-per-node 4 --cpus-per-task 1 --mem 2gb --time 02:00:00 --gpus-per-node a100:1 --constraint interconnect_hdr
```
The explanations of the parameters can be found in the above section.
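Putting the notes above together, a complete interactive session might look like this (a sketch; the module name is a placeholder):

```bash
# Request the allocation (parameters as in the example above):
salloc --nodes 2 --tasks-per-node 4 --cpus-per-task 1 --mem 2gb --time 02:00:00 --gpus-per-node a100:1

# Once the allocation is granted, clean the inherited environment:
module purge
module load anaconda3    # placeholder: load the modules you need

# Confirm which nodes were allocated:
srun hostname
```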
## PBS Select Quantity vs Slurm Task Quantity
Although Slurm's syntax and usage are similar to PBS's, there are some important differences. The most important is that `--nodes` is not required; its value will be determined by the tasks requested:
- If `--tasks-per-node` is specified without `--nodes`, all the CPU cores will be allocated on the same node, i.e., the number of nodes is 1 in this case. NOTE: the number of tasks/CPU cores must not exceed the number of CPU cores on a single node.
- If you need more tasks/CPU cores than a single node provides, but you don't care which nodes those cores land on, you can specify `--ntasks` instead (see the sketch after this list). In this case, your job may spend less time in the queue, since its tasks can land on different nodes. A potential drawback is that the performance of CPU cores may differ across nodes, given the heterogeneous nature of the Palmetto cluster.
- As mentioned above, `--mem` is the memory per node. Besides `--mem`, there are other options, such as memory per CPU (`--mem-per-cpu`) and memory per GPU (`--mem-per-gpu`).
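To illustrate the difference, here are two hypothetical submissions of the same script (`my-job.sh` is a placeholder):

```bash
# 16 tasks packed onto a single node (only schedulable on a node with >= 16 cores):
sbatch --tasks-per-node 16 my-job.sh

# 16 tasks that Slurm may spread across nodes, which can shorten queue time:
sbatch --ntasks 16 my-job.sh
```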
## PBS Queues vs Slurm Partitions
At the moment, there are no priority queues, so the only partition available is `work1`.
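You can list the available partitions with `sinfo`, and target one explicitly with `--partition` (the script name below is a placeholder):

```bash
sinfo                                 # list partitions and node states
sbatch --partition work1 my-job.sh    # explicitly target the work1 partition
```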