Slurm Examples
Overview
Here are a few Slurm examples to help you get started with different scenarios.
Single node, single core
This example requests resources for a job that cannot use more than one processor (CPU, core). On their own, R and Python, for example, cannot use more than one core, though both have installable libraries that can.
Single node, single core example job script
#!/bin/bash
#--------- Slurm preamble, defines the job with #SBATCH statements
#SBATCH --job-name=single_core
# Which partition to use
#SBATCH --partition=short
# Run on a single node with only one cpu/core and 1 GB of memory
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1gb
# Time limit is expressed as days-hrs:min:sec; this is for 15 min.
#SBATCH --time=15:00
#--------- End Slurm preamble, job commands now follow
# Remove all software modules and load all and only those needed
module purge
# Replace with the modules (if any) needed for your job
module load R
# Run a command to print information about the job to the output file
# This will include a list of the loaded modules
my_job_header
#---- Run your programs here
# Replace this example with your own software command(s)
Rscript /gpfs1/sw/examples/R/iris.R
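Once the script is saved, you submit it with sbatch and follow it with the usual Slurm tools. The commands below are a minimal sketch; the filename single_core.sh is an assumption, so use whatever you actually named the file.
# Submit the job script (single_core.sh is an illustrative name)
sbatch single_core.sh
# List your own pending and running jobs
squeue -u $USER
# After the job finishes, review its accounting record (replace <jobid>)
sacct -j <jobid>
# By default, output is written to slurm-<jobid>.out in the submission directory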
Single node, multiple cores
This example is similar in most respects to the single node, single core example above. The key change is the cpus-per-task option, which requests more than one core. Please note that you DO NOT increase the number of tasks!
This scenario most commonly applies when running 'multiprocessor' (multithreaded) software. That could be a compiled program (as our example is), but it could also be a Python program, an R program, or even running make -j N to run multiple compilations at once. A sketch of how a program can pick up the number of cores Slurm allocated follows the example script.
Single node, multiple cores example job script
#!/bin/bash
#--------- Slurm preamble, defines the job with #SBATCH statements
#SBATCH --job-name=multi_core
# Which partition to use
#SBATCH --partition=short
# Run on a single node with four cpus/cores and 8 GB memory
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
# Time limit is expressed as days-hrs:min:sec; this is for 15 min.
#SBATCH --time=15:00
#--------- End Slurm preamble, job commands now follow
# Remove all software modules and load all and only those needed
# No modules needed for this job
module purge
# Run a command to print information about the job to the output file
# This will include a list of the loaded modules
my_job_header
#---- Run your programs below this line
# Replace this example with your own software
/gpfs1/sw/examples/slurm/omp_multiply/bin/omp_multiply
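Many multithreaded programs must be told how many cores to use. The lines below are a minimal sketch, assuming your program reads a thread count from an environment variable or a command-line flag; Slurm sets SLURM_CPUS_PER_TASK inside the job, so you can pass that along instead of hard-coding a number.
# OpenMP programs read their thread count from OMP_NUM_THREADS
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
# make can run that many compilations at once
make -j "${SLURM_CPUS_PER_TASK}"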
Multiple nodes, multiple cores
The traditional use of many clusters is to run software that uses many nodes, with many cores on each node. The software most commonly used to manage what runs on which nodes is MPI (Message Passing Interface). In general, software documentation will tell you whether you can (or must) use MPI.
NOTE: Most software cannot use more than one node, so look for MPI in the documentation. There are two main, mutually incompatible implementations of MPI: OpenMPI, which we support, and MPICH (including Intel MPI), which we do not.
When MPI software runs, one copy of it will run for each 'rank', and the number of ranks is most often the total number of cores for the job. In our example below, we ask for two nodes, each running two tasks (each task becomes a rank) with one CPU per task, for a total of four ranks. Each node will have 2 GB of memory available to its tasks.
Our example software is a program that calculates a numerical approximation of the area under the normal distribution curve.
Multiple nodes, multiple cores example job script
#!/bin/bash
#--------- Slurm preamble, defines the job with #SBATCH statements
#SBATCH --job-name=multi_node
# Which partition to use
#SBATCH --partition=short
# Run on two nodes, with two cpus/cores and 2 GB of memory per node
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=1
#SBATCH --mem=2gb
# Time limit is expressed as days-hrs:min:sec; this is for 15 min.
#SBATCH --time=15:00
#--------- End Slurm preamble, job commands now follow
# Remove all software modules and load all and only those needed
# This job requires the right compiler and MPI.
module purge
module load gcc/13.3.0-xp3epyt openmpi/5.0.7
# Run a command to print information about the job to the output file
# This will include a list of the loaded modules
my_job_header
#---- Run your programs below this line
# Replace this example with your own software command(s)
mpirun /gpfs1/sw/examples/mpi/integration
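To run the example, submit the script with sbatch; the filename multi_node.sh below is an assumption. Under Slurm, mpirun starts one copy of the program per task, so this request produces four ranks, two on each node.
# Submit the job script (multi_node.sh is an illustrative name)
sbatch multi_node.sh
# When the job ends, the results appear in slurm-<jobid>.out by default.
# As an optional sanity check of task placement, you could add the line
#     srun hostname
# to the job script just before mpirun; it prints one hostname per task
# (four lines here, two per node).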