About Arc Cluster Arc is UTSA's primary high-performance computing (HPC) resource, used by researchers, students, faculty, and staff from a broad range of disciplines. C...
Checkpointing is the process of saving the execution state of an application such that this saved state can be used to continue the execution at a later time. Typ...
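The idea above can be sketched in a few lines of Python. This is a minimal illustration, not Arc-specific code: the file name `checkpoint.pkl` and the dictionary layout are hypothetical choices for the example. The state is written to a temporary file and renamed, so a job killed mid-write cannot leave a corrupt checkpoint behind.

```python
import os
import pickle

STATE_FILE = "checkpoint.pkl"  # hypothetical checkpoint file name

def load_state(default):
    """Return the saved state if a checkpoint exists, else the default."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            return pickle.load(f)
    return default

def save_state(state):
    """Write the state atomically: dump to a temp file, then rename."""
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, STATE_FILE)

def run(total_steps):
    """Do `total_steps` units of work, checkpointing after each one."""
    state = load_state({"step": 0, "partial_sum": 0})
    for step in range(state["step"], total_steps):
        state["partial_sum"] += step   # stand-in for real computation
        state["step"] = step + 1
        save_state(state)
    return state["partial_sum"]
```

If the job is interrupted and resubmitted, `run()` reloads the last saved step and continues from there instead of starting over.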
Arc Fair Use Policy Active Jobs Compute nodes are not shared among multiple users. Instead, when a user grabs a compute node, they will be the only user allowed ...
Checkpoint and restart with DMTCP or similar tools can create large disk images, and the checkpoint details are controlled by the third-party system. To avoid the problem...
If your R script is expected to run beyond the 72-hour limit on Arc, we suggest implementing a checkpoint-and-restart mechanism in your script. This will help ...
Frequently Asked Questions Who can use Arc? The Arc research cluster is available at no charge to all University of Texas at San Antonio students, faculty and st...
Table of Contents LOGGING INTO ARC; ACCESSING COMPUTE NODES ON ARC; RUNNING JOBS INTERACTIVELY WITH THE SRUN COMMAND; SUBMITTING BATCH JOBS WITH THE SBATCH...
How to request an account on Arc for Faculty Faculty members who wish to have an account on Arc must submit their request via the form at https://utsa.az1.qualtri...
Sample GPU submit script using the Caffe training network Using the Tesla K80 cards: #SBATCH --partition=gpu #SBATCH --nodes=1 #SBATCH --gres=gpu:k80:1 . /etc/pr...
Getting Help Submit a Service Request Follow this link to access your ServiceNow portal. At the homepage, select the Research tile and select the category for ...
Java Environment Java is a high-level, object-oriented programming language designed to minimize implementation dependencies. It is a class-based, general-p...
Managing your Files Users on Arc are automatically granted several locations to store their files. Most users will be storing their files in one of two locations,...
Migrating Data From Shamu to Arc This guide provides several tools for migrating data from the legacy Shamu HPC environment to the new Arc HPC environment. In som...
Module Environments A module system helps in managing access to the software packages and versions that you need. If there are multiple versions of a software ava...
Monitoring your jobs Using the squeue Command Check the status of all jobs on Arc using the squeue command (this is just an example; compute node names may be differe...
If you have a Python script that is expected to run more than 72 hours on Arc, we suggest you break it into a few smaller tasks, so that each of the tasks runs le...
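One way to split a long run into smaller tasks is to give each task a wall-time budget: the script processes items until the budget is nearly spent, records how far it got, and exits cleanly so the next job in the chain can resume. This is a hedged sketch, not Arc-provided code; the file name `progress.json` and the per-item work are placeholders, and chaining the jobs (for example with Slurm job dependencies) is left to the submission script.

```python
import json
import os
import time

PROGRESS_FILE = "progress.json"  # hypothetical progress file name

def run_chunk(total_items, budget_seconds):
    """Process items until the time budget is nearly spent, then stop.

    Returns the index of the next unprocessed item, so a follow-up job
    can pick up exactly where this one left off.
    """
    start = time.monotonic()
    next_item = 0
    if os.path.exists(PROGRESS_FILE):
        with open(PROGRESS_FILE) as f:
            next_item = json.load(f)["next_item"]
    while next_item < total_items:
        if time.monotonic() - start > budget_seconds:
            break                  # exit cleanly before the scheduler kills us
        time.sleep(0.01)           # stand-in for one unit of real work
        next_item += 1
        with open(PROGRESS_FILE, "w") as f:
            json.dump({"next_item": next_item}, f)
    return next_item
```

On a real cluster each chunk would run as its own batch job with a budget safely below the 72-hour limit, and the progress file carries the state between jobs.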
If you need to install your own programs/modules in Python, you can do so by creating virtual environments. Some programs may need specific versions of modules to work correctly a...
If you use Bioconductor for biology projects in R and your R library becomes corrupted, you can run this command at the R prompt: BiocManager::valid() At t...
Here's a simple example of a checkpointing program run with a Slurm job script that will automatically generate a restart script for when you need to restar...
Running jobs on Arc * Submitting your Batch Job Submit your code to run on the compute nodes. Create a submission script and specify options for better j...
It's handy to make a package list from your conda virtual environment in case you install something that breaks it or you want to replicate the packages exactly o...
Python virtual environments (VEs) are saved in your home directory by default. However, if you are running out of space in your home directory, you can save t...
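The usual way to place a virtual environment elsewhere is simply `python3 -m venv /path/with/space/myenv`; the same thing can be done from Python with the standard-library `venv` module, as in this minimal sketch. The helper name and the idea of pointing it at a larger filesystem are illustrative assumptions, not an Arc-specific tool.

```python
import os
import venv

def create_env(path):
    """Create a virtual environment at an arbitrary path.

    Equivalent to `python3 -m venv <path>`; point <path> at a filesystem
    with more free space than your home directory. with_pip=False skips
    bootstrapping pip, which keeps creation fast for this demonstration.
    """
    venv.create(path, with_pip=False)
    # pyvenv.cfg marks the directory as a virtual environment
    return os.path.join(path, "pyvenv.cfg")
```

After creation, the environment is activated the usual way with `source <path>/bin/activate`.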
The base R package is provided on Arc via modules on the compute nodes. First, start an interactive session on a compute node by typing: srun -p compute1 -n 1 -t ...
This is a simple example of a program that checkpoints using python and the pickle class. It will run for 15 minutes. The script checks for a file called "countin...
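A program matching that description might look like the following sketch. The details are assumptions where the snippet is cut off: the checkpoint file is taken to be named "counting", and the run length is a parameter rather than a hard-coded 15 minutes, so the same logic works for any duration.

```python
import os
import pickle
import time

CHECKPOINT = "counting"  # assumed file name from the description above

def count(limit, run_seconds):
    """Count up to `limit`, pickling the current value after each step.

    If the checkpoint file exists, resume from the stored value; otherwise
    start at zero. Stop early when `run_seconds` elapse, so a resubmitted
    job picks up where this one left off.
    """
    n = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            n = pickle.load(f)
    deadline = time.monotonic() + run_seconds
    while n < limit and time.monotonic() < deadline:
        n += 1
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(n, f)
    return n
```

Run with a 15-minute `run_seconds`, the script behaves as described: each invocation either finishes the count or saves its position for the next run.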
Pasted below is the list of software packages that are available on Arc (as of August 23, 2021) through system-wide modules. This list will likely change over tim...
Technical Support * For technical support, you can submit a support request for Arc at the following link: https://support.utsa.edu/myportal. Instructions for ...
Slurm (Simple Linux Utility for Resource Management) is a highly configurable workload manager and job scheduler for an HPC cluster. It is open source software ba...
ABAQUS on Arc The ABAQUS software suite from Dassault Systèmes is used for finite element analysis and computer-aided engineering. The ABAQUS software is used in t...
Singularity is available on Arc for running containerized applications that have images built using Docker or Singularity. Users can access the software via the m...
NAMD is a software application for molecular dynamics simulation. We have MPI and multicore versions installed on Arc. The software is accessible via...
OVERVIEW PyTorch is a Python-based scientific computing package. It is an automatic differentiation library that is useful for implementing neural networks. Just like...
Here is an article about how to train a PyTorch-based Deep Learning model using multiple GPU devices across multiple nodes on an HPC cluster: https://tuni-itc.git...
PyTorch is a Python-based machine learning framework that can run on both CPUs and GPUs. It supports options for taking advantage of GPUs on single nodes or multi...
https://portal.tacc.utexas.edu/user-guides To use TACC's Corral system, you must request access via TACC's User Portal: https://portal.tacc.utexas.edu/user-g...
With GPU support, you can significantly improve the performance of a Deep Learning model built with Tensorflow. Running a model with a single GPU is very simple. ...
The GPU resources on Arc have been configured as a "consumable resource". This means you can request individual GPU cards, whole nodes, etc. We have two types of GPU ...
***The OnDemand Portal is currently not in production on Arc. We will notify users when it becomes production-ready.*** We recently introduced the OnDemand Por...
Using NEC SX-Aurora Vector Engines on Arc This guide provides an overview of the NEC SX-Aurora TSUBASA Vector Engine (VE) nodes available on the Arc HPC Cluster....
To access the virtual desktop on Arc via the web, go to https://portal.arc.utsa.edu/ and log in with your UTSA ID, passphrase, and Duo app. Once logged i...
ParaView (ParaView Website) is an open-source application for visualizing two- and three-dimensional data sets. It is a multi-platform parallel data analysis and ...
Arc User Guide Arc is the primary High Performance Computing (HPC) system at The University of Texas at San Antonio (UTSA) that can be used for running data-in...