Running Applications Interactively

Running applications interactively on Arc allows you to test, debug, or run graphical or command-line programs directly on a compute node instead of submitting a batch job.

Here are the steps to launch an application interactively (a complete example follows the list):
  1. Log into the login node and obtain a compute node as described in the previous section.

  2. Load the module for the desired application. For example, to load R:

     module load R/4.5.1
  3. Launch the application by typing its executable name. For example:

     R
  4. End the session when finished to release the allocated node for other users:

     exit
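
Putting these steps together, a complete interactive R session might look like the following sketch. The srun options shown are only an illustration (the partition name compute1 is borrowed from the batch example below); use the allocation command described in the previous section.

srun --partition=compute1 --nodes=1 --ntasks=1 --cpus-per-task=4 --time=01:00:00 --pty bash   # request an interactive shell on a compute node
module load R/4.5.1   # load the R module
R                     # start R; quit with q() when done
exit                  # end the session and release the node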

Please Note: If your application has a graphical interface (e.g., RStudio or MATLAB GUI), you should log into Arc through the OnDemand Portal at https://arc-ondemand.utsa.edu.

Submitting Batch Jobs

Why Batch Jobs

Batch jobs are preferred on an HPC cluster because they allow efficient, fair, and scalable use of shared computing resources:

1. Efficient Resource Management
  • Batch systems (like Slurm on Arc) schedule jobs to run automatically when resources become available.
  • Keeps CPUs and GPUs from sitting idle, maximizing cluster utilization and throughput.

  • Allows large workloads to run uninterrupted, avoiding failures caused by common issues such as network interruptions or session timeouts.

2. Fairness and Queue Control
  • The scheduler manages priorities, job limits, and fair-share policies.
  • Prevents a single user from monopolizing nodes.

3. Reproducibility and Automation
  • Batch jobs are defined in scripts that specify software modules, input files, and resource requirements, ensuring that results are reproducible, consistent, and easily shareable.
  • Enables automated reruns and scaling for multiple simulations.

4. Scalability and Monitoring
  • Batch systems handle thousands of jobs simultaneously and provide detailed logs.
  • Users can track progress, output, and resource usage.

  • The system can balance workloads across multiple nodes.

Preparing a Job Script

A Slurm job script is a text file containing the commands and resource specifications used to submit and run a job on Arc, which is managed by the Slurm Workload Manager. It serves two main purposes:
  1. Tells Slurm what resources your job needs (e.g., CPUs, GPUs, time, partition).

  2. Tells the system what commands to execute once those resources are allocated.

A typical Slurm job script has two sections:

1. Slurm Directives (beginning with #SBATCH)

These are instructions to the scheduler. For example:
#!/bin/bash
#SBATCH --job-name=my_analysis        # Job name
#SBATCH --partition=compute1          # Partition
#SBATCH --ntasks=1                    # Number of tasks (processes, always 1 for non-MPI jobs)
#SBATCH --nodes=1                     # Number of nodes (always 1 for non-MPI jobs)
#SBATCH --cpus-per-task=4             # Cores per task
#SBATCH --time=02:00:00               # Time limit (hh:mm:ss)
#SBATCH --output=output.log           # Standard output file; if omitted, Slurm writes to a default file (slurm-<jobid>.out)
#SBATCH --mail-type=ALL               # Email notifications for job status changes (start, end, failure)
#SBATCH --mail-user=username@utsa.edu # Email address that receives the notifications

2. Job Commands

These are the actual commands that run your program:
module load R/4.5.1
Rscript my_script.R
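
Putting the two sections together, a complete job script (saved as my_job.sh, the name used in the next section) might look like the sketch below. The job name, module version, and file names are illustrative:

#!/bin/bash
#SBATCH --job-name=my_analysis
#SBATCH --partition=compute1
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=02:00:00
#SBATCH --output=output.log
#SBATCH --mail-type=ALL
#SBATCH --mail-user=username@utsa.edu

# Commands executed on the allocated compute node
module load R/4.5.1
Rscript my_script.R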

Submitting a Job

After creating your script (e.g., my_job.sh), submit it on a login node with:
sbatch my_job.sh

Slurm will queue the job and run it when resources are available.
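
When sbatch accepts the job, it prints the assigned job ID (for example, "Submitted batch job 123456"). You can then check on the job with standard Slurm commands; the job ID below is a placeholder:

squeue -u $USER    # list all of your pending and running jobs
squeue -j 123456   # check the state of a specific job
scancel 123456     # cancel a job if needed
sacct -j 123456    # view accounting information (resource usage, exit status) after the job finishes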