1. Log into the login node and obtain a compute node as described in the previous section.
2. Load the module for the desired application. For example, to load R:
module load R/4.5.1
3. Launch the application by typing its executable name. For example:
R
4. End the session when finished to release the allocated node for other users:
exit
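As a quick reference, an end-to-end interactive session might look like the sketch below. The srun options and partition name are illustrative assumptions; use the exact command given in the previous section for this cluster.

srun --partition=compute1 --ntasks=1 --cpus-per-task=1 --time=01:00:00 --pty bash   # assumed options: request an interactive shell on a compute node
module load R/4.5.1   # load the application module on the compute node
R                     # start the interactive R session; quit R with q()
exit                  # release the compute node when finished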
Submitting work as a batch job through the Slurm scheduler offers several advantages:
Prevents CPUs and GPUs from sitting idle, maximizing cluster utilization and throughput.
Allows large workloads to run uninterrupted, avoiding failures caused by common issues such as network interruptions or session timeouts.
Prevents a single user from monopolizing nodes.
Enables automated reruns and scaling for multiple simulations.
Users can track progress, output, and resource usage.
The system can balance workloads across multiple nodes.
A Slurm job script does two things:
Tells Slurm what resources your job needs (e.g., CPUs, GPUs, time, partition).
Tells the system what commands to execute once those resources are allocated.
1. Resource Requests (lines beginning with #SBATCH)
These are instructions to the scheduler. For example:
#!/bin/bash
#SBATCH --job-name=my_analysis # Job name
#SBATCH --partition=compute1 # Partition
#SBATCH --ntasks=1 # Number of tasks (processes, always 1 for non-MPI jobs)
#SBATCH --nodes=1 # Number of nodes (always 1 for non-MPI jobs)
#SBATCH --cpus-per-task=4 # Cores per task
#SBATCH --time=02:00:00 # Time limit (hh:mm:ss)
#SBATCH --output=output.log # Standard output file; if omitted, Slurm writes to a default file (slurm-<jobid>.out)
#SBATCH --mail-type=ALL
#SBATCH --mail-user=username@utsa.edu # Job status notifications (start, end, failure) are sent to this email address
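Jobs that need a GPU or a specific amount of memory can request them with additional #SBATCH directives. The sketch below uses standard Slurm options, but the GPU partition name (gpu1) is an assumption; check the cluster documentation or sinfo for the actual partition names.

#SBATCH --partition=gpu1 # GPU partition (assumed name; verify with sinfo)
#SBATCH --gres=gpu:1 # Request one GPU on the node
#SBATCH --mem=16G # Request 16 GB of memory for the job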
2. Job Commands
These are the actual commands that run your program:
module load R/4.5.1
Rscript my_script.R
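Putting the two parts together, a complete job script built from the examples above might look like this:

#!/bin/bash
#SBATCH --job-name=my_analysis
#SBATCH --partition=compute1
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --cpus-per-task=4
#SBATCH --time=02:00:00
#SBATCH --output=output.log

module load R/4.5.1
Rscript my_script.R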
Once the script is saved to a file (e.g., my_job.sh), submit it on a login node with:
sbatch my_job.sh
Slurm will queue the job and run it when resources are available.
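After submitting, you can check on the job with standard Slurm commands; the job ID below is hypothetical.

squeue -u $USER # List your pending and running jobs
scontrol show job 123456 # Show detailed information for a job (replace 123456 with your job ID)
scancel 123456 # Cancel the job if needed
cat output.log # Inspect the output file once the job starts writing to it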