RUNNING C, C++, FORTRAN, PYTHON, AND R CODE - BOTH SERIAL AND PARALLEL MODES ARE COVERED WHERE APPLICABLE
This section covers the steps to run sample serial and parallel code in C, C++, Fortran, Python, and R using both interactive and batch job submission modes. Where relevant, separate steps for using Intel and GNU compilers are covered.
Please note that all code should be run only on the compute nodes either interactively or in batch mode. Please DO NOT run any code on the login nodes. Acceptable use of login nodes is as follows: installing code, file transfer, file editing, and job submission and monitoring.
1. CLONE THE GITHUB REPOSITORY
If you want to get a copy of all the programs and scripts used in the examples shown in this document then you can clone the GitHub repository for the examples using the following command:
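With <repository-URL> standing in for your site's actual repository address (a placeholder, not a real URL), the command has the form:

[username@login001]$ git clone <repository-URL>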
If you cloned the GitHub repository, you can switch to the documentation directory with the following command to find the scripts and sample programs referred to throughout the document:
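Assuming the repository was cloned into your current working directory (adjust the path if you cloned it elsewhere):

[username@login001]$ cd documentation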
The documentation directory contains one subdirectory per example type. Each subdirectory holds a sample program (hello_world.c, hello_world.cpp, hello_world.f90, or hello_world.py) together with the matching Slurm job script, with separate GNU and Intel versions where applicable (the Intel job scripts carry a trailing "i" in the name, for example helloworldc.slurm versus helloworldci.slurm). In outline:

documentation/
├── serial_jobs/   serialc, serialcpp, and serialf (each with GNU and Intel versions), plus serialpy
├── mpi_jobs/      mpic, mpicpp, and mpif
└── openmp_jobs/   openmpc, openmpcpp, and openmpf (each with GNU and Intel versions)

CUDA versions of the job scripts (helloworldccuda.slurm, helloworldcppcuda.slurm, and helloworldfcuda.slurm) are also included for the GPU examples.
When you switch to the subdirectories within the documentation folder, the Slurm job script files are available with the *.slurm extension. While the program files within the subdirectories have names beginning with hello_world.*, the corresponding files in the code listings shown in the rest of the document have names beginning with program_name.* .
If you do not want to clone the aforementioned GitHub repository, you should be able to copy the code shown in the listings into the respective files (for example: program_name.c) according to the instructions given.
2. COMPILING AND RUNNING A SAMPLE SERIAL C PROGRAM
A sample C program is shown in Listing 1. All this code does is print “Hello World!!” to standard output.
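The program is the canonical C hello-world; a minimal version matching this description (the repository copy may differ in minor details) is:

#include <stdio.h>

int main(void) {
    // Print the greeting to standard output
    printf("Hello World!!\n");
    return 0;
}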
Listing 1: Sample C program - (../documentation/serial_jobs/serialc/GNU/hello_world.c)
If you would like to compile the C example using the GNU C compiler, you can run the following commands:
[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gcc -o helloCOut hello_world.c
If you would like to compile the C example using the Intel OneAPI, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icc -o helloCOut hello_world.c
The executable helloCOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ ./helloCOut
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCOut corresponding to the serial program_name.c in batch mode is shown in Listing 2. This script should be run from a login node.
#SBATCH -o helloWorldC.txt
Listing 2: Batch Job Script for C code (../documentation/serial_jobs/serialc/GNU/helloworldc.slurm)
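For reference, a complete script of this shape might look as follows; the job name, partition, task count, and time limit are assumptions patterned on the interactive example above, and the serial job scripts later in this document follow the same pattern:

#!/bin/bash
#SBATCH -J helloWorldC        # job name (assumed)
#SBATCH -o helloWorldC.txt    # output file named in Listing 2
#SBATCH -p compute1           # partition used in the interactive example
#SBATCH -n 1                  # a serial job needs a single task
#SBATCH -t 00:05:00           # time limit (assumed)

./helloCOut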
The job-script shown in Listing 2 can be submitted as follows:
[username@login001]$ sbatch helloworldc.slurm
The output from the Slurm batch-job shown in Listing 2 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldC.txt
3. COMPILING AND RUNNING A SAMPLE SERIAL C++ PROGRAM
A sample C++ program is shown in Listing 3. This program will print “Hello World” to standard output.
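A minimal C++ version matching this description (the repository copy may differ in minor details) is:

#include <iostream>
using namespace std;

int main() {
    // Print the greeting to standard output
    cout << "Hello World" << endl;
    return 0;
}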
Listing 3: Sample C++ program (../documentation/serial_jobs/serialcpp/GNU/hello_world.cpp)
If you would like to compile the C++ example using the GNU CPP compiler, you can run the following commands:
[username@login001]$ ml gnu8/8.3.0
[username@login001]$ g++ -o helloCPPOut hello_world.cpp
If you would like to compile the C++ example using the Intel OneAPI, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icpc -o helloCPPiOut hello_world.cpp
The executable helloCPPOut or helloCPPiOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed directly on the terminal:
[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ ./helloCPPOut
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCPPOut corresponding to the serial hello_world.cpp in batch mode is shown in Listing 4. This script should be run from a login node.
#SBATCH -o helloWorldCPP.txt
Listing 4: Batch Job Script for C++ code (../documentation/serial_jobs/serialcpp/GNU/helloworldcpp.slurm)
The job-script shown in Listing 4 can be submitted as follows:
[username@login001]$ sbatch helloworldcpp.slurm
The output from the Slurm batch-job shown in Listing 4 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldCPP.txt
4. COMPILING AND RUNNING A SAMPLE SERIAL FORTRAN PROGRAM
A sample Fortran program is shown in Listing 5. This program will print “Hello World!!” to the standard output.
program hello_world
print *, 'Hello, World!!'
end program hello_world
Listing 5: Sample Fortran Program (../documentation/serial_jobs/serialf/GNU/hello_world.f90)
If you would like to compile the Fortran example using the GNU Fortran compiler, you can run the following commands:
[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gfortran -o helloF90Out hello_world.f90
If you would like to compile the Fortran example using the Intel OneAPI, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ifort -o helloF90iOut hello_world.f90
The executables helloF90Out or helloF90iOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ ./helloF90Out
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloF90Out corresponding to the serial program_name.f90 in batch mode is shown in Listing 6. This script should be run from a login node.
#SBATCH -o helloWorldF90.txt
Listing 6: Batch Job Script for Fortran code (../documentation/serial_jobs/serialf/GNU/helloworldf.slurm)
The job-script shown in Listing 6 can be submitted as follows:
[username@login001]$ sbatch helloworldf.slurm
The output from the Slurm batch-job shown in Listing 6 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldF90.txt
5. RUNNING A SAMPLE PYTHON PROGRAM USING PYTHON 3
A sample Python program is shown in Listing 7. This program will also print “Hello World!!” to standard output.
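A minimal version matching this description (the repository copy may differ) is:

print("Hello World!!")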
Listing 7: Sample Python program (../documentation/serial_jobs/serialpy/hello_world.py)
The Python program can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the code in Interactive-Mode: The program can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ python3 hello_world.py
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the code in Batch-Mode: A sample Slurm batch job-script to run the serial program_name.py in batch mode is shown in Listing 8. This script should be run from a login node.
#SBATCH -o helloWorldpy.txt
Listing 8: Batch Job Script for Python code (../documentation/serial_jobs/serialpy/helloworldpy.slurm)
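A complete script of this shape might look as follows; the job name, partition, task count, and time limit are assumptions patterned on the interactive example above:

#!/bin/bash
#SBATCH -J helloWorldPy       # job name (assumed)
#SBATCH -o helloWorldpy.txt   # output file named in Listing 8
#SBATCH -p compute1           # partition used in the interactive example
#SBATCH -n 1                  # a serial job needs a single task
#SBATCH -t 00:05:00           # time limit (assumed)

python3 hello_world.py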
The job-script shown in Listing 8 can be submitted as follows:
[username@login001]$ sbatch helloworldpy.slurm
The output from the Slurm batch-job shown in Listing 8 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldpy.txt
6. COMPILING AND RUNNING A SAMPLE C+MPI CODE IN PARALLEL MODE
A sample C + MPI program is shown in Listing 9. This program will print “Hello world from processor #, rank # out of # processors” to standard output. The “#” signs in the aforementioned quoted text will be replaced with the processor name, rank, and total number of MPI processes participating in the computation.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment.
    MPI_Finalize();
    return 0;
}
Listing 9: Sample C+MPI code (../documentation/mpi_jobs/mpic/hello_world.c)
If you would like to compile the C + MPI example using the Intel OneAPI compiler and MVAPICH2 MPI library, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ml mvapich2
[username@login001]$ mpicc -o helloCMPIOut hello_world.c
The executable helloCMPIOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 12 -N 2 -t 00:05:00 --pty bash
[username@c001]$ mpirun -np 12 ./helloCMPIOut
Hello world from processor c002, rank 10 out of 12 processors
Hello world from processor c002, rank 8 out of 12 processors
Hello world from processor c002, rank 6 out of 12 processors
Hello world from processor c002, rank 9 out of 12 processors
Hello world from processor c002, rank 11 out of 12 processors
Hello world from processor c002, rank 7 out of 12 processors
Hello world from processor c001, rank 4 out of 12 processors
Hello world from processor c001, rank 1 out of 12 processors
Hello world from processor c001, rank 3 out of 12 processors
Hello world from processor c001, rank 5 out of 12 processors
Hello world from processor c001, rank 2 out of 12 processors
Hello world from processor c001, rank 0 out of 12 processors
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCMPIOut corresponding to the parallel program_name.c in batch mode is shown in Listing 10. This script should be run from a login node.
#SBATCH -o helloCMPIOut.txt
mpirun -np 12 ./helloCMPIOut
Listing 10: Batch Job Script for C+MPI code (../documentation/mpi_jobs/mpic/helloworldcmpi.slurm)
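A complete script of this shape might look as follows; the job name, partition, node and task counts, time limit, and module loads are assumptions patterned on the interactive example and compile steps above, and the MPI job scripts in Listings 12 and 14 follow the same pattern:

#!/bin/bash
#SBATCH -J helloWorldCMPI     # job name (assumed)
#SBATCH -o helloCMPIOut.txt   # output file named in Listing 10
#SBATCH -p compute1           # partition used in the interactive example
#SBATCH -n 12                 # 12 MPI tasks, matching mpirun -np 12
#SBATCH -N 2                  # two nodes, as in the interactive example
#SBATCH -t 00:05:00           # time limit (assumed)

ml intel/oneapi/2021.2.0      # same modules used to compile the code
ml mvapich2

mpirun -np 12 ./helloCMPIOut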
The job-script shown in Listing 10 can be submitted as follows:
[username@login001]$ sbatch helloworldcmpi.slurm
The output from the Slurm batch-job shown in Listing 10 can be checked by opening the output file as follows:
[username@login001]$ cat helloCMPIOut.txt
Hello world from processor c002, rank 10 out of 12 processors
Hello world from processor c002, rank 8 out of 12 processors
Hello world from processor c002, rank 6 out of 12 processors
Hello world from processor c002, rank 9 out of 12 processors
Hello world from processor c002, rank 11 out of 12 processors
Hello world from processor c002, rank 7 out of 12 processors
Hello world from processor c001, rank 4 out of 12 processors
Hello world from processor c001, rank 1 out of 12 processors
Hello world from processor c001, rank 3 out of 12 processors
Hello world from processor c001, rank 5 out of 12 processors
Hello world from processor c001, rank 2 out of 12 processors
Hello world from processor c001, rank 0 out of 12 processors
7. COMPILING AND RUNNING A SAMPLE MPI PROGRAM IN C++
A sample C++ MPI program is shown in Listing 11. This program will print “Hello world from processor #, rank # out of # processors” to standard output. The “#” signs in the aforementioned quoted text will be replaced with the processor name, rank, and total number of MPI processes participating in the computation.
If you would like to compile the C++ example using the Intel OneAPI and MVAPICH2 MPI library, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ml mvapich2
[username@login001]$ mpicxx -o helloCPPMPIOut hello_world.cpp
The executable helloCPPMPIOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    cout<<"Hello World from processor "<<processor_name<<", rank "<<world_rank<<"out of "<<world_size<<"processors\n";

    // Finalize the MPI environment.
    MPI_Finalize();
    return 0;
}
Listing 11: Sample MPI program with C++ (../documentation/mpi_jobs/mpicpp/hello_world.cpp)
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 12 -N 2 -t 00:05:00 --pty bash
[username@c001]$ mpirun -np 12 ./helloCPPMPIOut
Hello World from processor c001, rank 0out of 12processors
Hello World from processor c001, rank 1out of 12processors
Hello World from processor c001, rank 2out of 12processors
Hello World from processor c001, rank 5out of 12processors
Hello World from processor c001, rank 3out of 12processors
Hello World from processor c001, rank 4out of 12processors
Hello World from processor c002, rank 6out of 12processors
Hello World from processor c002, rank Hello World from processor c002, rank 8out of 12processors
Hello World from processor c002, rank 11out of 12processors
Hello World from processor c002, rank 10out of 12processors
Hello World from processor c002, rank 9out of 12processors
Note: It is common to see the output printed in a non-deterministic manner - in the example above the process rank 7 and rank 8 overlap each other in the writing step.
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCPPMPIOut corresponding to the parallel program_name.cpp in batch mode is shown in Listing 12. This script should be run from a login node.
#SBATCH -J helloWorldCPPMPI
#SBATCH -o helloWorldCPPMPI.txt
mpirun -np 12 ./helloCPPMPIOut
Listing 12: Batch Job Script for MPI with C++ code (../documentation/mpi_jobs/mpicpp/helloworldcppmpi.slurm)
The job-script shown in Listing 12 can be submitted as follows:
[username@login001]$ sbatch helloworldcppmpi.slurm
The output from the Slurm batch-job shown in Listing 12 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldCPPMPI.txt
Hello World from processor c001, rank 0out of 12processors
Hello World from processor c001, rank 1out of 12processors
Hello World from processor c001, rank 2out of 12processors
Hello World from processor c001, rank 5out of 12processors
Hello World from processor c001, rank 3out of 12processors
Hello World from processor c001, rank 4out of 12processors
Hello World from processor c002, rank 6out of 12processors
Hello World from processor c002, rank 7out of 12processors
Hello World from processor c002, rank 8out of 12processors
Hello World from processor c002, rank 11out of 12processors
Hello World from processor c002, rank 10out of 12processors
Hello World from processor c002, rank 9out of 12processors
Note: As in the interactive run, the ordering of the output lines is non-deterministic and may differ from run to run.
8. COMPILING AND RUNNING A SAMPLE FORTRAN+MPI PROGRAM
A sample Fortran+MPI program is shown in Listing 13. This program will print “Hello world” to the output file as many times as there are MPI processes.
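A minimal sketch consistent with that behavior (the repository copy may differ in details such as error handling) is:

program hello_world
    use mpi
    implicit none
    integer :: ierror

    ! Start up the MPI environment
    call MPI_INIT(ierror)

    ! Each MPI process prints one line
    print *, 'Hello world'

    ! Shut down the MPI environment
    call MPI_FINALIZE(ierror)
end program hello_world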
Listing 13: Sample Fortran+MPI code (../documentation/mpi_jobs/mpif/hello_world.f90)
If you would like to compile the Fortran example using the Intel OneAPI and MVAPICH2 MPI library, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ml mvapich2
[username@login001]$ mpiifort -o helloF90MPIOut hello_world.f90
The executable helloF90MPIOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 12 -N 2 -t 00:05:00 --pty bash
[username@c001]$ mpirun -np 12 ./helloF90MPIOut
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloF90MPIOut corresponding to the parallel program_name.f90 in batch mode is shown in Listing 14. This script should be run from a login node.
#SBATCH -J helloWorldF90MPI
#SBATCH -o helloWorldF90MPI.txt
mpirun -np 12 ./helloF90MPIOut
Listing 14: Batch Job Script for Fortran+MPI (../documentation/mpi_jobs/mpif/helloworldfmpi.slurm)
The job-script shown in Listing 14 can be submitted as follows:
[username@login001]$ sbatch helloworldfmpi.slurm
The output from the Slurm batch-job shown in Listing 14 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldF90MPI.txt
9. COMPILING AND RUNNING A SAMPLE C+OPENMP PROGRAM
A sample C+OpenMP program is shown in Listing 15. All this code does is print “Hello World... from thread = #”, where # is the thread number, to standard output.
#include <stdio.h>
#include <omp.h>

int main(int argc, char* argv[]){
    // Beginning of parallel region
    #pragma omp parallel
    {
        printf("Hello World... from thread = %d\n",
               omp_get_thread_num());
    }
    // Ending of parallel region
    return 0;
}
Listing 15: Sample C+OpenMP program (../documentation/openmp_jobs/openmpc/GNU/hello_world.c)
If you would like to compile the C example using the GNU C compiler, you can run the following commands:
[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gcc -fopenmp -o helloCOpenMPOut hello_world.c
If you would like to compile the C example using the Intel OneAPI, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icc -qopenmp -o helloCOpenMPiOut hello_world.c
Note: Some compiler options are the same for both Intel and GNU (e.g. "-o"), while others are different (e.g. "-qopenmp" vs "-fopenmp")
The executable helloCOpenMPOut or helloCOpenMPiOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 4 -t 00:05:00 --pty bash
[username@c001]$ export OMP_NUM_THREADS=4
[username@c001]$ ./helloCOpenMPOut
Hello World... from thread = 2
Hello World... from thread = 0
Hello World... from thread = 1
Hello World... from thread = 3
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCOpenMPOut corresponding to the parallel program_name.c in batch mode is shown in Listing 16. This script should be run from a login node.
#SBATCH -J helloWorldCOpenMP
#SBATCH -o helloWorldCOpenMP.txt
./helloCOpenMPOut
Listing 16: Batch Job Script for C+OpenMP (../documentation/openmp_jobs/openmpc/GNU/helloworldcopenmp.slurm)
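A complete script of this shape might look as follows; the partition, task count, and time limit are assumptions patterned on the interactive example above, and the OpenMP job script in Listing 18 follows the same pattern:

#!/bin/bash
#SBATCH -J helloWorldCOpenMP    # job name from Listing 16
#SBATCH -o helloWorldCOpenMP.txt
#SBATCH -p compute1             # partition used in the interactive example
#SBATCH -n 4                    # four cores for four threads (assumed)
#SBATCH -t 00:05:00             # time limit (assumed)

export OMP_NUM_THREADS=4        # match the thread count to the allocated cores
./helloCOpenMPOut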
The job-script shown in Listing 16 can be submitted as follows:
[username@login001]$ sbatch helloworldcopenmp.slurm
The output from the Slurm batch-job shown in Listing 16 can be checked by opening the output file as follows:
[username@login001]$ cat helloWorldCOpenMP.txt
Hello World... from thread = 2
Hello World... from thread = 0
Hello World... from thread = 1
Hello World... from thread = 3
10. COMPILING AND RUNNING A SAMPLE OPENMP PROGRAM IN C++
A sample C++ program with OpenMP is shown in Listing 17. All this code does is print “Hello World... from thread = #”, where # is the thread number, to standard output.
#include <iostream>
#include <omp.h>
using namespace std;

int main(int argc, char* argv[]){
    // Beginning of parallel region
    #pragma omp parallel
    {
        cout<<"Hello World... from thread ="<< omp_get_thread_num();
    }
    // Ending of parallel region
    return 0;
}
Listing 17: Sample OpenMP program in C++ (../documentation/openmp_jobs/openmpcpp/GNU/hello_world.cpp)
If you would like to compile the C++ example using the GNU CPP compiler, you can run the following commands:
[username@login001]$ ml gnu8/8.3.0
[username@login001]$ g++ -fopenmp -o helloOpenMPCPPOut hello_world.cpp
If you would like to compile the C++ example using the Intel OneAPI, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icpc -qopenmp -o helloOpenMPCPPiOut hello_world.cpp
Note: Some compiler options are the same for both Intel and GNU (e.g. "-o"), while others are different (e.g. "-qopenmp" vs "-fopenmp")
The executables helloOpenMPCPPOut and helloOpenMPCPPiOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.
Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:
[username@login001]$ srun -p compute1 -n 4 -t 00:05:00 --pty bash
[username@c001]$ export OMP_NUM_THREADS=4
[username@c001]$ ./helloOpenMPCPPiOut
Hello World... from thread =Hello World... from thread =Hello World... from thread =Hello World... from thread =103
If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:
[username@c001]$ exit
Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloOpenMPCPPiOut corresponding to the parallel program_name.cpp in batch mode is shown in Listing 18. This script should be run from a login node.
#SBATCH -J helloOpenMPCPPOut
#SBATCH -o helloOpenMPCPPOut.txt
Listing 18: Batch Job Script for OpenMP in C++ (../documentation/openmp_jobs/openmpcpp/GNU/helloworldopenmpcpp.slurm)
The job-script shown in Listing 18 can be submitted as follows:
[username@login001]$ sbatch helloworldopenmpcpp.slurm
The output from the Slurm batch-job shown in Listing 18 can be checked by opening the output file as follows:
[username@login001]$ cat helloOpenMPCPPOut.txt
Hello World... from thread =Hello World... from thread =Hello World... from thread =Hello World... from thread =103
11. COMPILING AND RUNNING A SAMPLE FORTRAN + OPENMP PROGRAM
A sample Fortran program is shown in Listing 19. All this code does is print to standard output “Hello from process: #”, where # represents different thread numbers.
PROGRAM Parallel_Hello_World
USE OMP_LIB
!$OMP PARALLEL
print *,"Hello from process: ", OMP_GET_THREAD_NUM()
!$OMP END PARALLEL
END
Listing 19: Sample Fortran+OpenMP program (../documentation/openmp_jobs/openmpf/GNU/hello_world.f90)
If you would like to compile the Fortran example using the GNU Fortran compiler, you can run the following commands:
[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gfortran -fopenmp -o helloF90OpenMPOut hello_world.f90
If you would like to compile the Fortran example using the Intel OneAPI, you can run the following commands:
[username@login001]$ ml intel/oneapi/2021.2.0