RUNNING C, C++, FORTRAN, PYTHON, AND R CODE - BOTH SERIAL AND PARALLEL MODES ARE COVERED WHERE APPLICABLE

This section covers the steps to run sample serial and parallel code in C, C++, Fortran, Python, and R using both interactive and batch job submission modes. Where relevant, separate steps for using Intel and GNU compilers are covered.

Please note that all code should be run only on the compute nodes, either interactively or in batch mode. Please DO NOT run any code on the login nodes. Acceptable use of the login nodes includes installing code, transferring files, editing files, and submitting and monitoring jobs.

1. CLONE THE GITHUB REPOSITORY

If you want a copy of all the programs and scripts used in the examples shown in this document, you can clone the GitHub repository for the examples. A minimal sketch of the command is shown below; the actual repository URL is site-specific, so substitute it for the placeholder:
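[username@login001]$ git clone <repository-URL>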

If you cloned the GitHub repository, you can switch to the documentation directory with the following command to find the scripts and sample programs referred to throughout the document:

cd documentation

The documentation directory structure is shown below:

.
└── documentation
    ├── cuda_jobs
    │   ├── cudac
    │   │   ├── helloworldccuda.slurm
    │   │   └── hello_world.cu
    │   ├── cudacpp
    │   │   ├── helloworldcppcuda.slurm
    │   │   └── hello_world.cu
    │   └── cudaf
    │       ├── hello_world.cuf
    │       └── helloworldfcuda.slurm
    ├── GNUparallel
    │   ├── hello_world.c
    │   ├── helloworldcgnup.slurm
    │   └── myout.txt
    ├── mpi_jobs
    │   ├── mpic
    │   │   ├── hello_world.c
    │   │   └── helloworldcmpi.slurm
    │   ├── mpicpp
    │   │   ├── hello_world.cpp
    │   │   └── helloworldcppmpi.slurm
    │   └── mpif
    │       ├── hello_world.f90
    │       └── helloworldfmpi.slurm
    ├── multiExes
    │   ├── mpi_multiexes
    │   │   ├── bye_world.c
    │   │   ├── hello_world.c
    │   │   └── helloworldcmpim.slurm
    │   └── serial
    │       ├── hello_world.c
    │       └── helloworldcm.slurm
    ├── openmp_jobs
    │   ├── openmpc
    │   │   ├── GNU
    │   │   │   ├── hello_world.c
    │   │   │   └── helloworldcopenmp.slurm
    │   │   └── Intel
    │   │       ├── hello_world.c
    │   │       └── helloworldcopenmpi.slurm
    │   ├── openmpcpp
    │   │   ├── GNU
    │   │   │   ├── hello_world.cpp
    │   │   │   └── helloworldopenmpcpp.slurm
    │   │   └── Intel
    │   │       ├── hello_world.cpp
    │   │       └── helloworldopenmpcppi.slurm
    │   └── openmpf
    │       ├── GNU
    │       │   ├── hello_world.f90
    │       │   └── helloworldfopenmp.slurm
    │       └── Intel
    │           ├── hello_world.f90
    │           └── helloworldfopenmpi.slurm
    └── serial_jobs
        ├── serialc
        │   ├── GNU
        │   │   ├── hello_world.c
        │   │   └── helloworldc.slurm
        │   └── Intel
        │       ├── hello_world.c
        │       └── helloworldci.slurm
        ├── serialcpp
        │   ├── GNU
        │   │   ├── hello_world.cpp
        │   │   └── helloworldcpp.slurm
        │   └── Intel
        │       ├── hello_world.cpp
        │       └── helloworldcppi.slurm
        ├── serialf
        │   ├── GNU
        │   │   ├── hello_world.f90
        │   │   └── helloworldf.slurm
        │   └── Intel
        │       ├── hello_world.f90
        │       └── helloworldfi.slurm
        ├── serialpy
        │   ├── hello_world.py
        │   └── helloworldpy.slurm
        └── serialR
            ├── hello_world.r
            └── helloworldr.slurm

When you switch to the subdirectories within the documentation folder, the Slurm job-script files are available with the *.slurm extension. The program files within the subdirectories have names beginning with hello_world.*, matching the code listings shown in the rest of this document.

If you do not want to clone the aforementioned GitHub repository, you can copy the code shown in the listings into the corresponding files (for example, hello_world.c) according to the instructions given.
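For example, one way (a sketch; any text editor works equally well) to create hello_world.c with the contents of Listing 1 directly from the shell is a heredoc:

[username@login001]$ cat > hello_world.c <<'EOF'
#include <stdio.h>

int main(){
    printf("Hello World!!");
    return 0;
}
EOF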

2. COMPILING AND RUNNING A SAMPLE SERIAL C PROGRAM

A sample C program is shown in Listing 1. All this code does is print “Hello World!!” to standard output.

#include <stdio.h>

int main(){
    printf("Hello World!!");
    return 0;
}

Listing 1: Sample C program - (../documentation/serial_jobs/serialc/GNU/hello_world.c)

If you would like to compile the C example using the GNU C compiler, you can run the following commands:

[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gcc -o helloCOut hello_world.c

If you would like to compile the C example using the Intel OneAPI, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icc -o helloCOut hello_world.c

The executable helloCOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ ./helloCOut
Hello World!!

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCOut corresponding to the serial hello_world.c in batch mode is shown in Listing 2. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldC
#SBATCH -o helloWorldC.txt
#SBATCH -p compute1
#SBATCH -t 00:02:00
#SBATCH -N 1
#SBATCH -n 1

./helloCOut

Listing 2: Batch Job Script for C code (../documentation/serial_jobs/serialc/GNU/helloworldc.slurm)

The job-script shown in Listing 2 can be submitted as follows:

[username@login001]$ sbatch helloworldc.slurm
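Once the job has been submitted, you can monitor its state from the login node with the standard Slurm squeue command (the job ID and reported state will vary):

[username@login001]$ squeue -u $USER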

The output from the Slurm batch-job shown in Listing 2 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldC.txt

Hello World!!

3. COMPILING AND RUNNING A SAMPLE SERIAL C++ PROGRAM

A sample C++ program is shown in Listing 3. This program will print “Hello World!!” to standard output.

#include <iostream>
using namespace std;

int main(){
    cout << "Hello World!!";
    return 0;
}

Listing 3: Sample C++ program (../documentation/serial_jobs/serialcpp/GNU/hello_world.cpp)

If you would like to compile the C++ example using the GNU CPP compiler, you can run the following commands:

[username@login001]$ ml gnu8/8.3.0
[username@login001]$ g++ -o helloCPPOut hello_world.cpp

If you would like to compile the C++ example using the Intel OneAPI, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icpc -o helloCPPiOut hello_world.cpp

The executable helloCPPOut or helloCPPiOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed directly on the terminal:

[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ ./helloCPPOut
Hello World!!

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCPPOut corresponding to the serial hello_world.cpp in batch mode is shown in Listing 4. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldCPP
#SBATCH -o helloWorldCPP.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 1
#SBATCH -n 1

./helloCPPOut

Listing 4: Batch Job Script for C++ code (../documentation/serial_jobs/serialcpp/GNU/helloworldcpp.slurm)

The job-script shown in Listing 4 can be submitted as follows:

[username@login001]$ sbatch helloworldcpp.slurm

The output from the Slurm batch-job shown in Listing 4 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldCPP.txt

Hello World!!

4. COMPILING AND RUNNING A SAMPLE SERIAL FORTRAN PROGRAM

A sample Fortran program is shown in Listing 5. This program will print “Hello, World!!” to standard output.

program hello
    print *, 'Hello, World!!'
end program hello

Listing 5: Sample Fortran Program (../documentation/serial_jobs/serialf/GNU/hello_world.f90)

If you would like to compile the Fortran example using the GNU Fortran compiler, you can run the following commands:

[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gfortran -o helloF90Out hello_world.f90

If you would like to compile the Fortran example using the Intel OneAPI, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ifort -o helloF90iOut hello_world.f90

The executables helloF90Out or helloF90iOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ ./helloF90Out
Hello, World!!

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloF90Out corresponding to the serial hello_world.f90 in batch mode is shown in Listing 6. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldF90
#SBATCH -o helloWorldF90.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 1
#SBATCH -n 1

./helloF90Out

Listing 6: Batch Job Script for Fortran code (../documentation/serial_jobs/serialf/GNU/helloworldf.slurm)

The job-script shown in Listing 6 can be submitted as follows:

[username@login001]$ sbatch helloworldf.slurm

The output from the Slurm batch-job shown in Listing 6 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldF90.txt

Hello, World!!

5. RUNNING A SAMPLE PYTHON PROGRAM USING PYTHON 3

A sample Python program is shown in Listing 7. This program will also print “Hello World!!” to standard output.

print('Hello World!!')

Listing 7: Sample Python program (../documentation/serial_jobs/serialpy/hello_world.py)

The Python program can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the code in Interactive-Mode: The program can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 --pty bash
[username@c001]$ python3 hello_world.py
Hello World!!
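Alternatively, for a short serial script you can skip the interactive shell and pass the command directly to srun, which runs it on a compute node and prints the output to your terminal (a sketch using the same partition and time limit as above):

[username@login001]$ srun -p compute1 -n 1 -t 00:05:00 python3 hello_world.py
Hello World!!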

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the code in Batch-Mode: A sample Slurm batch job-script to run the serial hello_world.py in batch mode is shown in Listing 8. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldpy
#SBATCH -o helloWorldpy.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 1
#SBATCH -n 1

python3 hello_world.py

Listing 8: Batch Job Script for Python code (../documentation/serial_jobs/serialpy/helloworldpy.slurm)

The job-script shown in Listing 8 can be submitted as follows:

[username@login001]$ sbatch helloworldpy.slurm

The output from the Slurm batch-job shown in Listing 8 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldpy.txt

Hello World!!

6. COMPILING AND RUNNING A SAMPLE C+MPI CODE IN PARALLEL MODE

A sample C + MPI program is shown in Listing 9. This program will print “Hello world from processor #, rank # out of # processors” to standard output. The “#” signs in the aforementioned quoted text will be replaced with the processor name, rank, and total number of MPI processes participating in the computation.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Finalize the MPI environment.
    MPI_Finalize();
    return 0;
}

Listing 9: Sample C+MPI code (../documentation/mpi_jobs/mpic/hello_world.c)

If you would like to compile the C + MPI example using the Intel OneAPI compiler and MVAPICH2 MPI library, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ml mvapich2
[username@login001]$ mpicc -o helloCMPIOut hello_world.c

The executable helloCMPIOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 12 -N 2 -t 00:05:00 --pty bash
[username@c001]$ mpirun -np 12 ./helloCMPIOut
Hello world from processor c002, rank 10 out of 12 processors
Hello world from processor c002, rank 8 out of 12 processors
Hello world from processor c002, rank 6 out of 12 processors
Hello world from processor c002, rank 9 out of 12 processors
Hello world from processor c002, rank 11 out of 12 processors
Hello world from processor c002, rank 7 out of 12 processors
Hello world from processor c001, rank 4 out of 12 processors
Hello world from processor c001, rank 1 out of 12 processors
Hello world from processor c001, rank 3 out of 12 processors
Hello world from processor c001, rank 5 out of 12 processors
Hello world from processor c001, rank 2 out of 12 processors
Hello world from processor c001, rank 0 out of 12 processors

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCMPIOut corresponding to the parallel hello_world.c in batch mode is shown in Listing 10. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloCMPIOut
#SBATCH -o helloCMPIOut.txt
#SBATCH -p compute1
#SBATCH -t 00:10:00
#SBATCH -N 2
#SBATCH -n 12

mpirun -np 12 ./helloCMPIOut

Listing 10: Batch Job Script for C+MPI code (../documentation/mpi_jobs/mpic/helloworldcmpi.slurm)

The job-script shown in Listing 10 can be submitted as follows:

[username@login001]$ sbatch helloworldcmpi.slurm

The output from the Slurm batch-job shown in Listing 10 can be checked by opening the output file as follows:

[username@login001]$ cat helloCMPIOut.txt
Hello world from processor c002, rank 10 out of 12 processors
Hello world from processor c002, rank 8 out of 12 processors
Hello world from processor c002, rank 6 out of 12 processors
Hello world from processor c002, rank 9 out of 12 processors
Hello world from processor c002, rank 11 out of 12 processors
Hello world from processor c002, rank 7 out of 12 processors
Hello world from processor c001, rank 4 out of 12 processors
Hello world from processor c001, rank 1 out of 12 processors
Hello world from processor c001, rank 3 out of 12 processors
Hello world from processor c001, rank 5 out of 12 processors
Hello world from processor c001, rank 2 out of 12 processors
Hello world from processor c001, rank 0 out of 12 processors

7. COMPILING AND RUNNING A SAMPLE MPI PROGRAM IN C++

A sample C++ MPI program is shown in Listing 11. This program will print “Hello world from processor #, rank # out of # processors” to standard output. The “#” signs in the aforementioned quoted text will be replaced with the processor name, rank, and total number of MPI processes participating in the computation.

If you would like to compile the C++ example using the Intel OneAPI and MVAPICH2 MPI library, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ml mvapich2
[username@login001]$ mpicxx -o helloCPPMPIOut hello_world.cpp

The executable helloCPPMPIOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

#include <mpi.h>
#include <iostream>
using namespace std;

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    // Get the number of processes
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Get the rank of the process
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the processor
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Print off a hello world message
    cout << "Hello World from processor " << processor_name << ", rank "
         << world_rank << "out of " << world_size << "processors\n";

    // Finalize the MPI environment.
    MPI_Finalize();
    return 0;
}

Listing 11: Sample MPI program with C++ (../documentation/mpi_jobs/mpicpp/hello_world.cpp)

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 12 -N 2 -t 00:05:00 --pty bash
[username@c001]$ mpirun -np 12 ./helloCPPMPIOut
Hello World from processor c001, rank 0out of 12processors
Hello World from processor c001, rank 1out of 12processors
Hello World from processor c001, rank 2out of 12processors
Hello World from processor c001, rank 5out of 12processors
Hello World from processor c001, rank 3out of 12processors
Hello World from processor c001, rank 4out of 12processors
Hello World from processor c002, rank 6out of 12processors
Hello World from processor c002, rank Hello World from processor c002, rank 8out of 12processors
7out of 12processors
Hello World from processor c002, rank 11out of 12processors
Hello World from processor c002, rank 10out of 12processors
Hello World from processor c002, rank 9out of 12processors

Note: It is common to see the output printed in a non-deterministic order - in the example above, the messages from ranks 7 and 8 overlap each other because both processes write to the terminal at the same time. (The missing spaces before "out" and "processors" come from the cout statement in Listing 11, which omits them.)
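If you need the ranks to print in order, one common workaround (a sketch, not part of the repository examples) is to let each rank take its turn between barriers. Even this usually, but not strictly, serializes the output, because mpirun still forwards each rank's standard output asynchronously:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size, world_rank;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    // Each rank prints only on its own turn; the barrier keeps the others waiting.
    for (int turn = 0; turn < world_size; turn++) {
        if (world_rank == turn) {
            printf("Hello world from processor %s, rank %d out of %d processors\n",
                   processor_name, world_rank, world_size);
            fflush(stdout);
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}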

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCPPMPIOut corresponding to the parallel hello_world.cpp in batch mode is shown in Listing 12. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldCPPMPI
#SBATCH -o helloWorldCPPMPI.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 2
#SBATCH -n 12

mpirun -np 12 ./helloCPPMPIOut

Listing 12: Batch Job Script for MPI with C++ code (../documentation/mpi_jobs/mpicpp/helloworldcppmpi.slurm)

The job-script shown in Listing 12 can be submitted as follows:

[username@login001]$ sbatch helloworldcppmpi.slurm

The output from the Slurm batch-job shown in Listing 12 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldCPPMPI.txt
Hello World from processor c001, rank 0out of 12processors
Hello World from processor c001, rank 1out of 12processors
Hello World from processor c001, rank 2out of 12processors
Hello World from processor c001, rank 5out of 12processors
Hello World from processor c001, rank 3out of 12processors
Hello World from processor c001, rank 4out of 12processors
Hello World from processor c002, rank 6out of 12processors
Hello World from processor c002, rank 7out of 12processors
Hello World from processor c002, rank 8out of 12processors
Hello World from processor c002, rank 11out of 12processors
Hello World from processor c002, rank 10out of 12processors
Hello World from processor c002, rank 9out of 12processors

Note: The order of the output lines is non-deterministic and may differ from run to run, since the MPI processes write to the output file independently.

8. COMPILING AND RUNNING A SAMPLE FORTRAN+MPI PROGRAM

A sample Fortran+MPI program is shown in Listing 13. This program will print “Hello world” to the output file as many times as there are MPI processes.

program hello
    include 'mpif.h'

    call MPI_INIT(ierr)
    print *, "Hello world"
    call MPI_FINALIZE(ierr)
end program hello

Listing 13: Sample Fortran+MPI code (../documentation/mpi_jobs/mpif/hello_world.f90)

If you would like to compile the Fortran example using the Intel OneAPI and MVAPICH2 MPI library, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ ml mvapich2
[username@login001]$ mpiifort -o helloF90MPIOut hello_world.f90

The executable helloF90MPIOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 12 -N 2 -t 00:05:00 --pty bash
[username@c001]$ mpirun -np 12 ./helloF90MPIOut
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloF90MPIOut corresponding to the parallel hello_world.f90 in batch mode is shown in Listing 14. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldF90MPI
#SBATCH -o helloWorldF90MPI.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 2
#SBATCH -n 12

mpirun -np 12 ./helloF90MPIOut

Listing 14: Batch Job Script for Fortran+ MPI (../documentation/mpi_jobs/mpif/helloworldfmpi.slurm)

The job-script shown in Listing 14 can be submitted as follows:

[username@login001]$ sbatch helloworldfmpi.slurm

The output from the Slurm batch-job shown in Listing 14 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldF90MPI.txt
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world
Hello world

9. COMPILING AND RUNNING A SAMPLE C+OPENMP PROGRAM

A sample C+OpenMP program is shown in Listing 15. All this code does is print “Hello World... from thread = #”, where # is the thread number, to standard output.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]){
    // Beginning of parallel region
    #pragma omp parallel
    {
        printf("Hello World... from thread = %d\n",
               omp_get_thread_num());
    }
    // Ending of parallel region
    return 0;
}

Listing 15: Sample C+OpenMP program (../documentation/openmp_jobs/openmpc/GNU/hello_world.c)

If you would like to compile the C example using the GNU C compiler, you can run the following commands:

[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gcc -fopenmp -o helloCOpenMPOut hello_world.c

If you would like to compile the C example using the Intel OneAPI, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icc -qopenmp -o helloCOpenMPiOut hello_world.c

Note: Some compiler options are the same for both Intel and GNU (e.g. "-o"), while others are different (e.g. "-qopenmp" vs "-fopenmp").

The executable helloCOpenMPOut or helloCOpenMPiOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 4 -t 00:05:00 --pty bash
[username@c001]$ export OMP_NUM_THREADS=4
[username@c001]$ ./helloCOpenMPOut
Hello World... from thread = 2
Hello World... from thread = 0
Hello World... from thread = 1
Hello World... from thread = 3
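The number of greeting lines matches OMP_NUM_THREADS. For example, rerunning with two threads (a sketch; the order of the two lines may vary) produces:

[username@c001]$ export OMP_NUM_THREADS=2
[username@c001]$ ./helloCOpenMPOut
Hello World... from thread = 0
Hello World... from thread = 1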

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloCOpenMPOut corresponding to the parallel hello_world.c in batch mode is shown in Listing 16. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloWorldCOpenMP
#SBATCH -o helloWorldCOpenMP.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 1
#SBATCH -n 4

export OMP_NUM_THREADS=4
./helloCOpenMPOut

Listing 16: Batch Job Script for C+OpenMP (../documentation/openmp_jobs/openmpc/GNU/helloworldcopenmp.slurm)

The job-script shown in Listing 16 can be submitted as follows:

[username@login001]$ sbatch helloworldcopenmp.slurm

The output from the Slurm batch-job shown in Listing 16 can be checked by opening the output file as follows:

[username@login001]$ cat helloWorldCOpenMP.txt
Hello World... from thread = 2
Hello World... from thread = 0
Hello World... from thread = 1
Hello World... from thread = 3

10. COMPILING AND RUNNING A SAMPLE OPENMP PROGRAM IN C++

A sample C++ program with OpenMP is shown in Listing 17. All this code does is print “Hello World... from thread =#”, where # is the thread number, to standard output.

#include <omp.h>
#include <iostream>
#include <stdlib.h>
using namespace std;

int main(int argc, char* argv[]){
    // Beginning of parallel region
    #pragma omp parallel
    {
        cout << "Hello World... from thread =" << omp_get_thread_num();
        cout << "\n";
    }
    // Ending of parallel region
    return 0;
}

Listing 17: Sample OpenMP program in C++ (../documentation/openmp_jobs/openmpcpp/GNU/hello_world.cpp)

If you would like to compile the C++ example using the GNU CPP compiler, you can run the following commands:

[username@login001]$ ml gnu8/8.3.0
[username@login001]$ g++ -fopenmp -o helloOpenMPCPPOut hello_world.cpp

If you would like to compile the C++ example using the Intel OneAPI, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0
[username@login001]$ icpc -qopenmp -o helloOpenMPCPPiOut hello_world.cpp

Note: Some compiler options are the same for both Intel and GNU (e.g. "-o"), while others are different (e.g. "-qopenmp" vs "-fopenmp").

The executables helloOpenMPCPPOut and helloOpenMPCPPiOut can be run either in batch mode using a Slurm batch job-script or interactively on a compute node.

Running the Executable in Interactive-Mode: The executable can be run interactively on a compute node using the following set of commands and the output will be displayed on the terminal:

[username@login001]$ srun -p compute1 -n 4 -t 00:05:00 --pty bash
[username@c001]$ export OMP_NUM_THREADS=4
[username@c001]$ ./helloOpenMPCPPiOut
Hello World... from thread =Hello World... from thread =Hello World... from thread =Hello World... from thread =103
2

Note: As with the MPI examples, the output can appear garbled because all four threads write to standard output concurrently - here the thread numbers (1, 0, 3, and 2) appear detached from their messages.

If you are currently on a compute node and would like to switch back to the login node then please enter the exit command as follows:

[username@c001]$ exit

Running the Executable in Batch-Mode: A sample Slurm batch job-script to run the executable named helloOpenMPCPPOut corresponding to the parallel hello_world.cpp in batch mode is shown in Listing 18. This script should be run from a login node.

#!/bin/bash
#SBATCH -J helloOpenMPCPPOut
#SBATCH -o helloOpenMPCPPOut.txt
#SBATCH -p compute1
#SBATCH -t 00:05:00
#SBATCH -N 1
#SBATCH -n 4

export OMP_NUM_THREADS=4
./helloOpenMPCPPOut

Listing 18: Batch Job Script for OpenMP in C++ (../documentation/openmp_jobs/openmpcpp/GNU/helloworldopenmpcpp.slurm)

The job-script shown in Listing 18 can be submitted as follows:

[username@login001]$ sbatch helloworldopenmpcpp.slurm

The output from the Slurm batch-job shown in Listing 18 can be checked by opening the output file as follows:

[username@login001]$ cat helloOpenMPCPPOut.txt
Hello World... from thread =Hello World... from thread =Hello World... from thread =Hello World... from thread =103
2

11. COMPILING AND RUNNING A SAMPLE FORTRAN + OPENMP PROGRAM

A sample Fortran program is shown in Listing 19. All this code does is print to standard output “Hello from process: #”, where # represents different thread numbers.

PROGRAM Parallel_Hello_World
USE OMP_LIB

!$OMP PARALLEL
print *, "Hello from process: ", OMP_GET_THREAD_NUM()
!$OMP END PARALLEL

END

Listing 19: Sample Fortran+OpenMP program (../documentation/openmp_jobs/openmpf/GNU/hello_world.f90)

If you would like to compile the Fortran example using the GNU Fortran compiler, you can run the following commands:

[username@login001]$ ml gnu8/8.3.0
[username@login001]$ gfortran -fopenmp -o helloF90OpenMPOut hello_world.f90

If you would like to compile the Fortran example using the Intel OneAPI, you can run the following commands:

[username@login001]$ ml intel/oneapi/2021.2.0

[username@login001]$