NAMD is a software application for molecular dynamics simulation. We have MPI, multiple core, and GPU versions installed on Arc. The software is accessible via the module system.
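
To see which NAMD builds are currently installed, you can query the module system first (a quick check using the standard module command; the output should list the module names used in the sections below):

$ module avail namd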

NAMD Multiple Core Version

The multiple core version can be used in both interactive and batch modes. For the interactive mode, use the following commands to log onto a compute node and load the NAMD module:

$ srun -p compute1 -n 1 -N 1 -t 24:00:00 --pty bash
$ module load namd/2.14-multicore

Change to your working directory, and use the following command to run the program:

$ namd2 +p5 +idlepoll apoa1.namd

"+p5" in the above command indicates that the program runs on 5 cores parallelly. You can use up to 80 cores on an Arc compute node.

You can use the "time" command to see how parallelization improves performance. For example, the program takes about four minutes and fifty-eight seconds when using one core, as shown below:

$ time namd2 +p1 +idlepoll apoa1.namd
.......
real 4m58.831s
user 4m57.673s
sys 0m0.379s

With 40 cores, it takes only about 14 seconds:

$ time namd2 +p40 +idlepoll apoa1.namd
.......
real 0m14.399s
user 9m17.049s
sys 0m2.300s

With 80 cores, it takes about 13 seconds:

$ time namd2 +p80 +idlepoll apoa1.namd
.......
real 0m13.313s
user 16m53.106s
sys 0m5.349s

The speedup from 40 to 80 cores is small because each node has only 40 physical cores; the 80 cores reported are logical cores resulting from hyper-threading.
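
You can verify the physical versus logical core layout of a node with lscpu (another generic Linux check; the relevant lines report threads per core, cores per socket, and sockets):

$ lscpu | grep -E 'Thread|Core|Socket'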

The multiple core version can also be used in batch mode. Here is a sample job script:

#!/bin/bash
#SBATCH --job-name=namd # Change to your job name
#SBATCH --partition=compute1
#SBATCH --ntasks=1 # Single task; the multicore version parallelizes with threads via +p
#SBATCH --nodes=1
#SBATCH --time=00:10:00

module load namd/2.14-multicore
namd2 +p80 +idlepoll apoa1.namd

Assuming the job script is saved in file test.ps, you can use the following command to submit the batch job on a login node:

$ sbatch test.ps

As no output file is specified in the job script, the output will be saved in slurm-JOB-ID.out, where JOB-ID is the job ID assigned by Slurm.
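
If you prefer a custom output file name, you can add a standard Slurm output directive to the job script (a sketch; the file name here is just an example, and %j expands to the job ID):

#SBATCH --output=namd-%j.out

You can also check the status of the submitted job from the login node:

$ squeue -u $USER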

NAMD MPI Version

The multiple core version can run on only one node. The MPI version comes in handy if you would like to run the program across multiple nodes to scale beyond a single node.

Here is a sample script for the MPI version:

#!/bin/bash
#SBATCH --job-name=namd # Change to your job name
#SBATCH --partition=compute1
#SBATCH --ntasks=80 # Total number of MPI tasks
#SBATCH --nodes=2 # 40 tasks per node, matching the 40 physical cores
#SBATCH --time=01:10:00

module load namd/2.14-mpi
mpirun -n $SLURM_NTASKS namd2 apoa1.namd

Assuming the job script is saved in file test-mpi.ps, you can use the following command to submit the batch job on a login node:

$ sbatch test-mpi.ps

As with the batch job for the multicore version, the result can be found in the file slurm-JOB-ID.out.
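
To compare the timings of different runs, one option is to pull NAMD's end-of-run timing summary out of the Slurm output file (a sketch; replace JOB-ID with the actual job ID):

$ grep WallClock slurm-JOB-ID.out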

NAMD Multiple GPU Version

The CUDA (NVIDIA's GPU computing platform) version of NAMD is available on Arc. You can use it interactively on a GPU node or submit a batch job to a GPU partition. Here is a sample job script for submitting a NAMD GPU job to the gpu1v100 partition:

#!/bin/bash
#SBATCH --job-name=namd # Change to your job name
#SBATCH --partition=gpu1v100
#SBATCH --ntasks=1
#SBATCH --nodes=1
#SBATCH --time=00:10:00

module load namd/2.14-gpu
namd2 +p8 +setcpuaffinity apoa1.namd

With +setcpuaffinity, the NAMD threads are pinned to specific CPU cores, so the CPU cores requested with +p are used consistently alongside the GPU device(s).
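
If you would like to test the GPU build interactively before submitting a batch job, the same pattern as the multicore version can be used on a GPU node (a sketch assuming the gpu1v100 partition and module shown above; the time limit is just an example, and nvidia-smi is only used here to confirm that a GPU is visible):

$ srun -p gpu1v100 -n 1 -N 1 -t 01:00:00 --pty bash
$ module load namd/2.14-gpu
$ nvidia-smi
$ namd2 +p8 +setcpuaffinity apoa1.namd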