Getting Started with Shamu

Shamu is the Research Support Group's premier cluster, consisting of many compute cores and GPUs.
  • Connect to Shamu using SSH
  • Common Commands
  • Navigating the filesystem
  • X Forwarding
  • Remote Desktop

Connect to Shamu using SSH

You should connect to the Shamu cluster using your campus abc123 ID and campus passphrase via one of the two login nodes. This example uses "login" as the login node.

ssh -p 1209 abc123@login
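If you connect often, you can avoid typing the port and username each time by adding a host entry to the SSH configuration file on your local machine. This is an optional convenience; the alias name "shamu" below is illustrative, and you should substitute the full login-node hostname provided by the Research Support Group:

```shell
# ~/.ssh/config (on your local workstation, not on Shamu)
Host shamu
    HostName login        # replace with the full login-node hostname
    Port 1209
    User abc123           # replace with your campus abc123 ID
```

With this entry in place, `ssh shamu` is equivalent to the full command above.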

If you have successfully logged in, you should see something like:

Last login: Fri Apr 29 08:41:39 2016 from

Welcome to the Research Computing cluster Shamu.

Rules for using Shamu:

* Do *NOT* execute directly on the Shamu head node. Grab a compute node with
qlogin or script your jobs with qsub.

* Do *NOT* ssh directly to a compute node to run your code. Offending users will
be locked out temporarily.
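Following the rules above, work is either run interactively on a compute node obtained with qlogin, or submitted in batch with qsub. A minimal batch script might look like the sketch below; the job name, queue, and core count are illustrative (check available queues with `qstat -g c`, and note that the parallel environment name on Shamu may differ):

```shell
#!/bin/bash
#$ -N myjob          # job name
#$ -q all.q          # queue to submit to
#$ -cwd              # run the job from the current working directory
#$ -pe smp 4         # request 4 cores (parallel environment name is an assumption)

module load gcc/6.1.0
./my_program input.txt > output.txt
```

Save this as, say, `myjob.sh` and submit it with `qsub myjob.sh`; for interactive work, run `qlogin` instead to get a shell on a compute node.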

We are now gathering statistics to help promote Shamu in the UTSA research community.

We are looking for the following information:
  * Number of presentations and papers published
  * Number of grants, awards or funding
  * Number of patents, copyrights, etc...

If you have used the computational resources of Shamu for any of the above, please
email your information to the Research Support Group. Thanks!

Common Commands

Below are examples of the most commonly used commands on Shamu, ranging from loading a module to checking the status of your jobs.
  • Seeing which modules are available
[gqd693@login-0-0 ~]$ module avail
-------------------------------------------------------- /cm/local/modulefiles --------------------------------------------------------
cluster-tools/7.3 dot gcc/6.1.0 module-git null shared
cmd freeipmi/1.5.2 ipmitool/1.8.17 module-info openldap
------------------------------------------------------- /cm/shared/modulefiles --------------------------------------------------------
abaqus/6.12 cuda80/nsight/8.0.44 mathematica/11.0.1
acml/gcc/64/5.3.1 cuda80/profiler/8.0.44 matlab/R2013a
acml/gcc/fma4/5.3.1 cuda80/toolkit/8.0.44 matlab/R2015a
acml/gcc/mp/64/5.3.1 cudnn/5.0 matlab/R2016a
acml/gcc/mp/fma4/5.3.1 cufflinks/2.2.1 matlab/R2016b
acml/gcc-int64/64/5.3.1 default-environment matlab/R2017a
acml/gcc-int64/fma4/5.3.1 fftw2/openmpi/gcc/64/double/2.1.5 mkl/
acml/gcc-int64/mp/64/5.3.1 fftw2/openmpi/gcc/64/float/2.1.5 mkl/2013.0.079
acml/gcc-int64/mp/fma4/5.3.1 fftw2/openmpi/open64/64/double/2.1.5 mkl/2016.3.210
acml/open64/64/5.3.1 fftw2/openmpi/open64/64/float/2.1.5 mpich/ge/gcc/64/3.2rc2
acml/open64/fma4/5.3.1 fftw3/openmpi/gcc/64/3.3.4 mpich/ge/open64/64/3.2rc2
acml/open64/mp/64/5.3.1 fftw3/openmpi/open64/64/3.3.4 mpiexec/0.84_432
acml/open64/mp/fma4/5.3.1 fiji/fiji mugsy/2.3
acml/open64-int64/64/5.3.1 gamess/current mvapich2/gcc/64/2.2rc1
acml/open64-int64/fma4/5.3.1 gdb/7.11 mvapich2/open64/64/2.2rc1
acml/open64-int64/mp/64/5.3.1 globalarrays/openmpi/gcc/64/5.4 namd2/linux-x86_64-ibverbs
acml/open64-int64/mp/fma4/5.3.1 globalarrays/openmpi/open64/64/5.4 namd2/linux-x86_64-ibverbs-smp-cuda
adina/9.2 gmsh/2.9.3 namd2/linux-x86_64-multicore
anaconda/4.2.0 gnuplot/5.0.1 namd2/linux-x86_64-multicore-CUDA
apbs/ grace/5.1.25 namd2/linux-x86_64-openmpi
bcftools/1.3.1 gsl/2.1 netcdf/gcc/64/4.4.0
bcl2fastq2/2.17 hdf5/1.6.10 netcdf/open64/64/4.4.0
bedtools/2.25.0 hdf5_18/1.8.17 netperf/2.7.0
blacs/openmpi/gcc/64/1.1patch03 hdfview/2.13.0 nwchem/6.6
blacs/openmpi/open64/64/1.1patch03 homer/4.8 octave/4.2.1
blas/gcc/64/3.6.0 hpl/2.2 open64/
blas/open64/64/3.6.0 hwloc/1.11.3 openblas/dynamic/0.2.18
blast/2.2.26 intel/13/64bit openfoam/3.0.1
blast/2.6.0 intel/15/ifort openlava/3.3.3
bonnie++/1.97.1 intel/16/64bit openmpi/gcc/64/1.10.1
bowtie/1.2.0 intel/compiler/64/16.0.4/2016.4.258 openmpi/open64/64/1.10.1
bwa/0.7.15 intel/mpi/64/5.1.3/2016.4.258 paraview/4.4.0
canopy/1.7.4 intel/mpi/mic/5.1.3/2016.4.258 paraview/5.0.0
cellranger/1.2.1 intel-tbb-oss/ia32/44_20160526oss paup/4a150
cellranger/1.3.0 intel-tbb-oss/intel64/44_20160526oss pbspro/
clBLAS/2.12 iozone/3_434 petsc/openmpi/gcc/3.7.2
cmgui/7.3 lammps/20160725 samtools/1.3.1
cplex/12.7.1 lapack/gcc/64/3.6.0 scalapack/openmpi/gcc/64/2.0.2
cuda75/blas/7.5.18 lapack/open64/64/3.6.0 sge/2011.11p1
cuda75/fft/7.5.18 lis/1.5.66 simvascular/2.0.20624
cuda75/gdk/352.79 lsdyna/r910 slurm/16.05.2
cuda75/nsight/7.5.18 lsdyna7/r610 star/2.5.3a
cuda75/profiler/7.5.18 lumerical/device systemc/2.3.1a
cuda75/toolkit/7.5.18 lumerical/fdtd tophat/2.1.1
cuda80/blas/8.0.44 mace/1.1 turbovnc/20160823
cuda80/fft/8.0.44 mace/2.1 vmd/1.9.2
cuda80/gdk/352.79 macs/1.4.2
  • How to load a module
[gqd693@login-0-0 ~]$ module load gcc/6.1.0
[gqd693@login-0-0 ~]$ module list
Currently Loaded Modulefiles:
 1) openmpi/gcc/64/1.10.1 2) gcc/6.1.0
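Modules can also be removed when they are no longer needed. A typical sequence, continuing the gcc example above, might look like this:

```shell
# Unload a single module
module unload gcc/6.1.0

# Or clear every currently loaded module at once
module purge

# Confirm what remains loaded
module list
```

`module purge` is useful at the top of job scripts to start from a clean, reproducible environment before loading exactly the modules a job needs.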
  • Check the status of the cluster queues showing how many jobs are running and how many resources are available.
[gqd693@login-0-0 etc]$ qstat -g c
CLUSTER QUEUE   CQLOAD  USED  RES  AVAIL  TOTAL aoACDS cdsuE
all.q             0.00     0    0    544   2592      0  2048
bigmem.q          0.00     0    0    144    144      0     0
gpu.q             0.00     0    0     32     32      0     0
ids.q             0.00     0    0     40     40      0     0

Navigating the filesystem

Shamu has a high-speed InfiniBand filesystem consisting of two (2) mounts, /work and /home-new.

/home-new is where your home directories are located

/work is where you should place the input and output files for your jobs. If you do not yet have a /work/abc123 directory, please contact the Research Support Group and request that one be created.
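In practice this means staging data into your work directory before submitting a job and collecting results from there afterwards. The file names below are hypothetical placeholders; replace abc123 with your own ID:

```shell
# Stage input data from your home directory into your work directory
cp ~/input_data.txt /work/abc123/

# Check overall space on the work filesystem and your own usage
df -h /work
du -sh /work/abc123
```

Keeping job I/O on /work rather than /home-new keeps heavy traffic on the fast InfiniBand-attached storage.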

X Forwarding

X Forwarding is used to forward any X-enabled (GUI) program you wish to run on Shamu back to your local display. There are two (2) SSH terminals for Windows that automatically provide the necessary X libraries: SmarTTY and MobaXterm. If you are using a Linux or Mac desktop, you will need to connect to the login node with a special argument to enable X forwarding:
ssh -Y -p 1209 abc123@login
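Once connected with -Y, you can verify that forwarding works by launching a simple X client (assuming one such as xclock is installed on the login node) before starting a heavier GUI application:

```shell
# A small clock window should appear on your local desktop
xclock &

# GUI applications from the module list (e.g. MATLAB) are then started the
# same way, ideally from a compute node obtained with qlogin
module load matlab/R2017a
matlab &
```

If no window appears, check that your local X server is running and that you connected with -Y (or -X) rather than a plain ssh session.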

-- AdminUser - 17 May 2016
Topic revision: r8 - 18 May 2017, AdminUser