Slurm Script Examples
This page provides several scripts that can be used as starting templates for building your own Slurm submission scripts.
-
Basic, Single Threaded Job (simple Python)
Title: run_python.slurm
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1 # Run on a single CPU
#SBATCH --time=00:05:00 # Time limit hrs:min:sec
#SBATCH --account="accountID"JOBID=$( echo ${PBS_JOBID} | cut -f1 -d. )
module load Miniforge3
eval "$(conda shell.bash hook)"# Change Directory to the working directory
cd ${PBS_O_WORKDIR}# Run your python code
python path/to/your/python_file.pyUsage
$ sbatch run_python.slurm
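After submitting, you can check on the job with standard Slurm commands (a minimal sketch; since this script does not set --output, Slurm writes output to slurm-<jobid>.out in the directory you submitted from):
$ squeue -u $USER
$ cat slurm-<jobid>.out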
-
GPU Job (Python w/Conda env)
Title: run_python_legate.slurm
#!/bin/bash
#SBATCH --job-name=serial_job_test # Job name
#SBATCH --mail-type=END,FAIL # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=NetID@kennesaw.edu # Where to send mail
#SBATCH --cpus-per-task=4 # Request 4 cores
#SBATCH --mem=503gb # Job memory request
#SBATCH --gres=gpu:4
#SBATCH --time=00:05:00 # Time limit hrs:min:sec
#SBATCH --output=serial_test_%j.log # Standard output and error log
pwd; hostname; date
JOBID=$( echo ${SLURM_JOBID} | cut -f1 -d. )
module load Miniforge3
eval "$(conda shell.bash hook)"
conda activate legate
# Change directory to the working directory
cd ${SLURM_SUBMIT_DIR}
# Run your code
python path/to/your/python_file.py
date
Usage
$ sbatch run_python_legate.slurm
Note: If you only request 1 node, 4 is the maximum number of GPUs you can request. You will need at least 4 CPU cores to make use of 4 GPUs.
Note: 503 GB is the maximum memory that you can request.
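If your job needs fewer GPUs, the request can be scaled down while keeping at least one CPU core per GPU; a minimal sketch of the relevant directives (the memory value is illustrative, up to the 503 GB maximum noted above):
#SBATCH --cpus-per-task=2   # at least one core per GPU
#SBATCH --gres=gpu:2        # request 2 GPUs instead of 4
#SBATCH --mem=250gb         # scale memory to what the job needs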
-
MATLAB Job
Title: run_matlab.slurm
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=12
#SBATCH --partition="defq"
#SBATCH --time=10:00:00
#SBATCH --mail-type="BEGIN,END,FAIL"
#SBATCH --mail-user="netid@kennesaw.edu"
#SBATCH --account="account_name"pwd; hostname; date
JOBID=$( echo ${SLURM_JOBID} | cut -f1 -d. )
module load MATLAB
# Create a temp workdir under work
mkdir -p ${HOME}/work/matlab
export SLURM_MATLAB_WORKDIR=$( mktemp -d -p ${HOME}/work/matlab workdir_XXXXXXXXXX )
matlab -nodisplay -nosplash -logfile ${FILE}.log -r "run ${FILE}"
# Delete temp workdir
rm -rf ${SLURM_MATLAB_WORKDIR}
date
Usage
$ sbatch --export=ALL,FILE=${PWD}/matlab_script.m run_matlab.slurm
Note: This command passes the filename into the ${FILE} variable inside the Slurm script, so the same script can be reused for different MATLAB files.
Note: some MATLAB code can make use of any available CPU resources, so I have requested 12 cores in this example.
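Because the script name is supplied at submission time, the same Slurm file can drive many runs; for example, a sketch that submits every .m file in the current directory (assuming each is a standalone MATLAB script):
$ for f in ${PWD}/*.m; do sbatch --export=ALL,FILE=${f} run_matlab.slurm; done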
-
Gaussian Job
Title: run_gaussian.slurm
#!/usr/bin/env bash
#SBATCH --partition="defq"
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --time=01:00:00
#SBATCH --export="ALL"
#SBATCH --mail-type="BEGIN,END,FAIL,TIME_LIMIT_90"
#SBATCH --mail-user="netid@kennesaw.edu"
#SBATCH --account="account_name"date;hostname;pwd
# Load the Gaussian Modules
module load Gaussian
export WORK_DIR=$SLURM_SUBMIT_DIR
cd $WORK_DIR
datestamp=$(date +%Y-%m-%d.%H%M%S)
g16 -p=${SLURM_CPUS_PER_TASK} ${FILE} > ${FILE%.*}.${datestamp}.out
date
Usage
$ sbatch --export=ALL,FILE=${PWD}/my_script.com run_gaussian.slurm
Note: the Slurm file includes a mail-type directive to send an email when 90% of the walltime has elapsed. Exceeding the walltime will kill the job.
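If a run is likely to need more time, sbatch command-line options override the #SBATCH directives in the script, so the walltime can be raised at submission without editing the file (a sketch; the 4-hour value is only an example and must be permitted on the partition):
$ sbatch --time=04:00:00 --export=ALL,FILE=${PWD}/my_script.com run_gaussian.slurm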
-
MPI Job using R
Title: run_R.slurm
#!/bin/bash
#SBATCH --job-name=R_mpi_job
#SBATCH --mail-type=ALL
#SBATCH --mail-user=NetID@kennesaw.edu
#SBATCH --nodes=2
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:30:00
pwd; hostname; date
JOBID=$( echo ${SLURM_JOBID} | cut -f1 -d. )
module purge
module load R
cd ${SLURM_SUBMIT_DIR}
mpiexec --quiet R --slave --file=${FILE} > ${FILE}.${JOBID}
date
Usage
$ sbatch --export=ALL,FILE=${PWD}/mycode.R run_R.slurm
Note: --mail-type=ALL sends mail when the job begins, ends, fails, or is requeued. To also be notified at 50%, 80%, or 90% of the time limit, add TIME_LIMIT_50, TIME_LIMIT_80, or TIME_LIMIT_90 to the --mail-type list.
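For example, the mail-type line could be extended as follows (a sketch; include whichever events you actually want):
#SBATCH --mail-type=BEGIN,END,FAIL,TIME_LIMIT_50,TIME_LIMIT_80,TIME_LIMIT_90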
-
Vasp_gpu job
Title: run_vasp_gpu.slurm
#!/bin/bash
#SBATCH --job-name=vasptest
#SBATCH --mail-type=ALL
#SBATCH --mail-user=NetID@kennesaw.edu
#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=8
#SBATCH --mem-per-cpu=10GB
#SBATCH --gres=gpu:4
#SBATCH --time=00:30:00
module purge
module load cuda/10.0.130 intel/2018 openmpi/4.0.0 vasp/5.4.4
srun --mpi=pmix_v3 vasp_gpu
Usage
$ sbatch run_vasp_gpu.slurm
Note: untested on vhpc
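Since this script is untested on vhpc, it may be worth confirming that the modules load and the binary starts in an interactive session first (a sketch, assuming interactive GPU jobs are permitted):
$ srun --nodes=1 --ntasks=1 --gres=gpu:1 --time=00:15:00 --pty bash
$ module purge && module load cuda/10.0.130 intel/2018 openmpi/4.0.0 vasp/5.4.4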