About Arc Cluster

Arc is UTSA's primary high-performance computing (HPC) resource, used by researchers, faculty, staff, and students from a broad range of disciplines.

Arc currently comprises 156 nodes totaling 14,864 cores and features:
  • 30 GPU nodes, each with two 20-core CPUs (40 cores total), 384GB RAM, and one NVIDIA V100 GPU accelerator
  • 5 GPU nodes, each with two 20-core CPUs (40 cores total), 384GB RAM, and two NVIDIA V100 GPU accelerators
  • Coming soon: 2 GPU nodes, each with four NVIDIA V100 GPUs and two 20-core Intel Xeon Gold 6248 CPUs
  • 2 large-memory nodes, each with four 20-core CPUs (80 cores total) and 1.5TB of RAM
  • 387 TFLOPS total across CPUs and GPUs
  • 6,032 total CPU cores across compute and GPU nodes
  • 100Gb/s InfiniBand storage network connectivity
  • DDN fault-tolerant storage array with 1PB of shared storage using the Lustre file system
  • 250TB of local scratch space in total, spread evenly among the physical compute nodes
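Once logged in, the advertised core, memory, and GPU counts can be verified on any node with standard Linux tools (a minimal sketch; output varies by node type):

    # Physical cores, threads, and sockets on the current node
    lscpu | grep -E '^CPU\(s\)|Socket|Core|Thread'
    # Installed memory
    free -h
    # Attached NVIDIA GPUs (GPU nodes only)
    nvidia-smi -L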

Storage

Location   Available Space   Notes
/home      110TB             Backed up nightly; 25GB quota per account
/work      1.1PB             Not backed up; temporary work space available to all nodes; no quota
/local     1.5TB             Node-local scratch space, subject to deletion at any time; no quota
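A typical workflow keeps source code and small files under the 25GB /home quota and stages large data sets to /work. The sketch below assumes standard Linux quota tools; the /work/$USER/myproject layout is illustrative, not a prescribed path:

    # Check current usage against the 25GB /home quota
    quota -s
    # Stage a large input data set to the shared Lustre work space
    mkdir -p /work/$USER/myproject
    cp -r ~/inputs /work/$USER/myproject/
    # /local is visible only on the node it belongs to and may be
    # purged at any time, so copy results back before a job ends:
    cp -r /local/$USER/results /work/$USER/myproject/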

Queues

Queue Name   Node Type                      Max Nodes   Max Duration   Max Jobs    Max Cores Per Node                Local Scratch Disk Space
                                            Per Job     Per Job        in Queue
bigmem       Intel Cascade Lake             2           72 hours       10          80 physical (160 hyper-threads)   1.5TB
compute1     Intel Cascade Lake             20          72 hours       10          40 physical (80 hyper-threads)    1.5TB
compute2     Intel Cascade Lake             20          72 hours       10          40 physical (80 hyper-threads)    None
computedev   Intel Cascade Lake             2           2 hours        10          40 physical (80 hyper-threads)    None
gpu1v100     Cascade Lake + one V100 GPU    1           72 hours       10          40 physical (80 hyper-threads)    1.5TB
gpudev       Cascade Lake + one V100 GPU    1           2 hours        10          40 physical (80 hyper-threads)    1.5TB
gpu2v100     Cascade Lake + two V100 GPUs   1           72 hours       10          40 physical (80 hyper-threads)    1.5TB
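The queue names above map directly to scheduler partitions. A minimal batch script for the gpu1v100 queue might look like the following sketch, assuming Arc uses the Slurm scheduler; the job name, GRES string, and executable are placeholders, not confirmed site settings:

    #!/bin/bash
    #SBATCH --job-name=v100-test
    #SBATCH --partition=gpu1v100    # queue name from the table above
    #SBATCH --nodes=1               # gpu1v100 allows at most 1 node per job
    #SBATCH --ntasks-per-node=40    # up to 40 physical cores per node
    #SBATCH --time=72:00:00         # table maximum of 72 hours
    #SBATCH --gres=gpu:1            # one V100 (GRES name is an assumption)

    cd /work/$USER/myproject        # run from the shared work space
    nvidia-smi -L                   # confirm the allocated GPU
    srun ./my_gpu_program           # placeholder executable

Submit with sbatch job.sh and monitor with squeue -u $USER.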
