Here's a simple example of a checkpointing program, together with a Slurm job script that automatically generates a restart script, so the job can resume from its checkpoint after its wall time expires.

The program is a simple Python program that increments a counter and stores a checkpoint of the current value in a JSON file.

import time
import json
import argparse

checkpoint_file = "checkpoint.json"

def counting_sheep(c):

    while True:
        print("%i sheep" % c['sheep'])
        c['sheep'] += 1

        # Save a checkpoint after every increment
        with open(checkpoint_file, 'w') as f:
            json.dump(c, f)

        if c['sheep'] == 500:
            break

        time.sleep(1)  # simulate work between increments

    print("Wake up!")

#----- execution code goes after here -----------
parser = argparse.ArgumentParser()

parser.add_argument('-c', '--checkpoint', type=str,
                    help="Path to my checkpoint file to restart")

args = parser.parse_args()

my_checkpoint_file = args.checkpoint

if my_checkpoint_file is not None:
    # Resume from the saved checkpoint
    with open(my_checkpoint_file, 'r') as f:
        count = json.load(f)
else:
    # Fresh start
    count = {'sheep': 0}

counting_sheep(count)
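One hazard worth noting: if the job is killed while the checkpoint is being written, the file can be left truncated. A common defence (not part of the example above) is to write to a temporary file and then atomically rename it over the old checkpoint. A minimal sketch, assuming a hypothetical helper named save_checkpoint:

```python
import json
import os

def save_checkpoint(state, path):
    # Write to a temporary file first, then atomically replace the old
    # checkpoint, so a kill mid-write never leaves a corrupt file.
    tmp_path = path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(state, f)
    os.replace(tmp_path, path)  # atomic rename on POSIX filesystems

save_checkpoint({"sheep": 7}, "checkpoint.json")
```

The counting loop would call save_checkpoint instead of opening checkpoint_file directly; either way the last complete checkpoint always survives.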


The program is run from the command line like this (the script is assumed here to be saved as counting_sheep.py):

python counting_sheep.py

If the program is terminated, it can be restarted by giving it the checkpoint JSON file as an argument so it continues where it left off:

python counting_sheep.py -c checkpoint.json
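To see why this works, here is a minimal sketch of the round trip: a checkpoint written by an interrupted run is reloaded on restart, and counting resumes from the saved value rather than from zero.

```python
import json

# Simulate an interrupted run: the program got to 42 sheep before the job ended.
with open("checkpoint.json", "w") as f:
    json.dump({"sheep": 42}, f)

# On restart, the program reloads the file and picks up where it left off.
with open("checkpoint.json") as f:
    count = json.load(f)

print("Resuming at %i sheep" % count["sheep"])
```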

The Slurm job script that generates the restart script looks like this:

#!/bin/bash
#SBATCH --job-name=counting_sheep
#SBATCH --error=logs/slurm-%j.err # Error File
#SBATCH --output=logs/slurm-%j.out # Output File
#SBATCH --requeue
#SBATCH --open-mode=append
#SBATCH --partition=testing
#SBATCH --time=0-00:05:00 ### Wall clock time limit in Days-HH:MM:SS

# Name of the restart script this job will generate
RESUBMIT_SCRIPT=resubmit_${SLURM_JOBID}.sh

echo "This job id ${SLURM_JOBID}"
echo "Creating resubmit script"
echo "#!/bin/bash" > $RESUBMIT_SCRIPT
echo "#SBATCH --job-name=counting_sheep" >> $RESUBMIT_SCRIPT
echo "#SBATCH --error=logs/slurm-%j.err # Error File" >> $RESUBMIT_SCRIPT
echo "#SBATCH --output=logs/slurm-%j.out # Output File" >> $RESUBMIT_SCRIPT
echo "#SBATCH --requeue" >> $RESUBMIT_SCRIPT
echo "#SBATCH --open-mode=append" >> $RESUBMIT_SCRIPT
echo "#SBATCH --partition=testing" >> $RESUBMIT_SCRIPT
echo "#SBATCH --dependency=afterany:${SLURM_JOBID}" >> $RESUBMIT_SCRIPT
echo "#SBATCH --time=0-00:05:00 ### Wall clock time limit in Days-HH:MM:SS" >> $RESUBMIT_SCRIPT
# The restart job passes the checkpoint file to the program
# (the script name counting_sheep.py is assumed here)
echo "python counting_sheep.py -c checkpoint.json" >> $RESUBMIT_SCRIPT

echo "submitting restart script: $RESUBMIT_SCRIPT"

# Restart submitted here
sbatch $RESUBMIT_SCRIPT

# Run my initial script here.
python counting_sheep.py

As you can see, the generated restart script copies most of the Slurm directives into a new script, with the addition of a '--dependency' flag. This flag records the current job ID so that the restart job waits until the current job has stopped before starting. For ease of testing, the wall time (--time flag) on both this job and the restart job has been set to 5 minutes. The restart command also passes the checkpoint file to the program.
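Slurm also sends SIGTERM to a job shortly before killing it at the time limit, so a checkpointing program can catch that signal and flush one final checkpoint on the way out. A minimal sketch, not part of the original example (install_checkpoint_handler is a hypothetical helper):

```python
import json
import signal

def install_checkpoint_handler(state, path):
    # When SIGTERM arrives (e.g. from Slurm at the wall-time limit),
    # write one last checkpoint and exit cleanly.
    def handler(signum, frame):
        with open(path, "w") as f:
            json.dump(state, f)
        raise SystemExit(0)
    signal.signal(signal.SIGTERM, handler)
```

The handler would be installed once at start-up, before the counting loop begins, so the restart job always finds an up-to-date checkpoint.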

Submitting the job script (assumed here to be saved as counting_sheep.sh) automatically submits the restart script and instructs it to wait until the first job has finished. In the squeue output you can see the second job pending on the first job finishing, with "(Dependency)" as the reason:
[abc123@login01 checkpoint]$ sbatch counting_sheep.sh
Submitted batch job 12514
[abc123@login01 checkpoint]$ squeue -u abc123
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
             12515   testing counting   abc123 PD       0:00      1 (Dependency)
             12514   testing counting   abc123  R       0:07      1 compute032

-- Mando - 14 Jul 2020