Singularity is available on Arc for running containerized applications that have images built using Docker or Singularity. Users can access the software via the module system, as shown below:

$ module load singularity/3.4.1
Pull a container image from the Singularity library

Singularity containers can be pulled from the Singularity library. The following command pulls the popular lolcow container from the library:

$ singularity pull library://godlovedc/funny/lolcow
$ ls
lolcow_latest.sif
Run a non-MPI Container on Arc

There are multiple ways to run a container using Singularity.

Run a container like a native command:

$ ./lolcow_latest.sif 
 ____________________________________
/ He is now rising from affluence to \
| poverty.                           |
|                                    |
\ -- Mark Twain                      /
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Use the “run” sub-command:

The "run" sub-command execute the run script of the container as below:

$ singularity run lolcow_latest.sif 
 ____________________________________
/ Talkers are no good doers.         \
|                                    |
\ -- William Shakespeare, "Henry VI" /
 ------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Use the “exec” sub-command:

If you would like to execute an arbitrary command within your container instead of the runscript, you can use the exec sub-command as follows:

$ singularity exec lolcow_latest.sif cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

Use the “shell” sub-command

The Singularity “shell” sub-command invokes an interactive shell within a container, and then you can run any commands in the shell:

[abc123@shamu ~]$ singularity shell lolcow_latest.sif
Singularity> cowsay Hi
 ____
< Hi >
 ----
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

You can type "exit" to leave the container shell and return to the shell of the host.
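
For example:
Singularity> exit
[abc123@shamu ~]$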

Run directly from the Singularity library without pulling the container

The "run" sub-command also accepts a library URI directly, so you can run a container from the Singularity library without pulling it first:

[abc123@shamu ~]$ singularity run library://godlovedc/funny/lolcow
 ________________________________________
/ Always the dullness of the fool is the \
| whetstone of the wits.                 |
|                                        |
| -- William Shakespeare, "As You Like   |
\ It"                                    /
 ----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Submit a batch job to run a Singularity container

Here is a job script example:

#!/bin/bash
#
#SBATCH --job-name=test_singularity
#SBATCH --output=out.txt
#SBATCH --partition=compute1
#SBATCH --mail-type=ALL
#SBATCH --mail-user=abc123@utsa.edu
#SBATCH --ntasks=1
#SBATCH --nodes=1
. /etc/profile.d/modules.sh
module load singularity/3.5.3
singularity run lolcow_latest.sif

Use the following command to submit the job on a login node, and use the cat command to view the output file defined in the job script:

$ sbatch singu.job 
$ cat out.txt
 _________________________________________
< Do something unusual today. Pay a bill. >
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||



Pull a container image from Docker Hub

Docker is a popular container system. For security reasons, users cannot run Docker images directly with the "docker run" command on Arc. However, a Docker container image can be pulled with Singularity and saved as a Singularity container image, which can then be executed using the methods described above.

The command below will pull the lolcow container image from Docker Hub and save the image as a Singularity container image:

$ singularity pull docker://godlovedc/lolcow
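The pulled image is saved locally as a SIF file (with the default naming convention this would be lolcow_latest.sif) and can then be run with any of the methods shown above, for example:
$ singularity run lolcow_latest.sif
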
Run an MPI Container on Arc

MPI containers come with a preinstalled MPI runtime environment. It is essential to check the MPI version inside the container, because it must match the MPI runtime version on the host. You can pull the sample MPI container we built from the Singularity library:
$ singularity pull library://zhiweiw88/utsa/utsa-centos-openmpi-404:latest

After the container image is downloaded, use the following command to check the version of the MPI runtime environment:
$ singularity exec utsa-centos-openmpi-404_latest.sif mpirun --version
mpirun (Open MPI) 4.0.4

Report bugs to http://www.open-mpi.org/community/help/

The container's MPI version is Open MPI 4.0.4, so you need to load the matching MPI module on Arc to run the container:
$ module load openmpi/4.0.4
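
To double-check that the host and container MPI versions match, you can also print the version of the mpirun provided by the loaded module (the output below is illustrative):
$ mpirun --version
mpirun (Open MPI) 4.0.4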

If a suitable version of MPI is not available on Arc, please submit a service request by going to the following link: https://support.utsa.edu/myportal .

The container utsa-centos-openmpi-404_latest.sif comes with a compiled MPI program named mpi-sample. Use the following command to run the mpi-sample program on Arc interactively:
[abc123@shamu ~]$ mpirun -n 5 singularity exec utsa-centos-openmpi-404_latest.sif mpi-sample
0: We have 5 processors
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,
4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,
parent send 2 to process 1 for processing, parent send 2 to process 2 for processing, parent send 2 to process 3 for processing, parent send 2 to process 4 for processing, result 2 received from process 1, result 4 received from process 2, result 6 received from process 3, result 8 received from process 4,

In most cases, users need to run an MPI container program as a batch job to utilize computing resources across multiple nodes. Here is an example Slurm job script that runs 40 processes across two nodes:
#!/bin/bash
#
#SBATCH --job-name=test_mpi
#SBATCH --output=out.txt
#SBATCH --partition=defq
#SBATCH --mail-type=ALL
#SBATCH --mail-user=abc123@utsa.edu
#SBATCH --ntasks=40
#SBATCH --nodes=2
. /etc/profile.d/modules.sh
module load singularity/3.5.3
module load openmpi/4.0.4

mpirun -n $SLURM_NTASKS singularity exec utsa-centos-openmpi-404_latest.sif mpi-sample
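
Submit the script from a login node with sbatch, just as in the non-MPI example (the script name mpi-singu.job below is only an example), and view the output file with cat:
$ sbatch mpi-singu.job
$ cat out.txt
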
Build Singularity Containers

Building a Singularity container image requires root privileges on the computer you are working on, which is not available on Arc. You need to build containers on a computer (a desktop or laptop) on which you have root access.

Build a Singularity container from scratch

In most cases, people run a container based on an existing image (previously created or downloaded from the library). In some cases, users may want to build one directly from a Linux operating system base. For example, to build a container from a CentOS base, the container definition file looks like this:
Bootstrap: library
From: centos:latest

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    yum -y install vim-minimal

In the %post section, you can install additional software by adding more "yum install" commands. You can then build a sandbox container and test whether it meets your requirements:
$ sudo singularity build --sandbox centos-base centos-base.def 

Once the build process is complete, you should see a directory named "centos-base". You can enter the sandbox container with the command below to test it out:
$ sudo singularity shell --writable centos-base

Once you are happy with the container, use the following command to make the final build:
$ sudo singularity build centos-base.sif centos-base.def

Build a Singularity container from an existing one

The definition file looks very similar to the one above, except for the first two lines. Assume you already have a container named "centos_8.sif" in the current working directory. The container definition file looks like this:

Bootstrap: localimage
From: centos_8.sif

%runscript
    echo "This is what happens when you run the container..."

%post
    echo "Hello from inside the container"
    yum -y install vim-minimal

The building and testing process is the same as the process for building from a base image.
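
For example, assuming the definition file above is saved as centos-custom.def (the file names here are only examples):
$ sudo singularity build --sandbox centos-custom centos-custom.def
$ sudo singularity build centos-custom.sif centos-custom.def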