Sample Batch Scripts for Running Jobs on the Peregrine System

Here are some sample batch scripts for running jobs on Peregrine.

Useful Environment Variables for Use Inside a Job

PBS_O_WORKDIR is set to the directory the job was submitted from. Jobs should be run in the user's /scratch directory, not in their /home directory.

PBS_NODEFILE points to a file containing a list of nodes allocated to the job.
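For example, $PBS_NODEFILE can be used inside a job script to derive the MPI process count instead of hard-coding it. A minimal sketch (the executable name is a placeholder):

```shell
#!/bin/bash
# Inside a running job, $PBS_NODEFILE lists one line per allocated
# core slot, so its line count is the total number of process slots.
NPROCS=$(wc -l < "$PBS_NODEFILE")
echo "allocated $NPROCS process slots on these nodes:"
sort -u "$PBS_NODEFILE"        # unique node names
# mpirun -np "$NPROCS" ./a.out # placeholder executable
```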

This script runs a single serial process in the debug queue with a 500-second walltime limit:

#!/bin/bash
#PBS -l nodes=1:ppn=1,walltime=500

#PBS -N test1
#PBS -j oe
#PBS -A CSC001
#PBS -q debug

cd /scratch/<userid>/my_directory
./a.out

This script runs a serial job from the directory the job was submitted from, with a 4-hour walltime limit:

#!/bin/bash
#PBS -l walltime=4:00:00           # WALLTIME limit
#PBS -l nodes=1                    # one node
#PBS -N test1                      # Name of job
#PBS -A CSC001                     # project handle

cd $PBS_O_WORKDIR
./a.out

This job requests 4 nodes with 24 processes per node using nodes in the short queue.

#!/bin/bash
#PBS -l walltime=24:00:00 # WALLTIME limit
#PBS -q short                        # short queue
#PBS -l nodes=4:ppn=24               # Number of nodes, put 24 processes on each
#PBS -N test                         # Name of job
#PBS -A CSC001                       # Project handle

cd $PBS_O_WORKDIR
mpirun -np 96 /path/to/executable

This job requests high-priority scheduling (qos=high) on 2 nodes with 2 processes per node:

#!/bin/bash
#PBS -j oe
#PBS -l nodes=2:ppn=2
#PBS -q short
#PBS -l qos=high # Ask for high priority
#PBS -A CSC001
#PBS -l walltime=0:10:00

cd $PBS_O_WORKDIR
mpirun -np 4 ./hello_mpi

Undersubscribing Nodes

Because jobs are given exclusive access to all of the resources (cores, memory, etc.) on their assigned nodes, a job may use all of a node's memory even if it leaves some cores unused.

For jobs that need more memory per MPI task than is available when using all cores on the node, one may request that the job be run with a smaller number of MPI tasks or job processes per node.

Below is a sample script for running with 4 MPI tasks on each of 4 nodes, for a total of 16 MPI tasks.

#!/bin/bash
#PBS -l walltime=24:00:00             # WALLTIME
#PBS -l nodes=4:ppn=4                 # Number of nodes and processes per node
#PBS -l feature=16core                # Request 16-core nodes
#PBS -N test
#PBS -j oe
#PBS -A CSC001
cd $PBS_O_WORKDIR
mpirun -np 16 /path/to/executable

For jobs that need even more memory, one may request nodes with the 256GB feature, for example:

#!/bin/bash
#PBS -l walltime=24:00:00 
#PBS -l nodes=1:ppn=4       # Number of nodes and processes per node
#PBS -l feature=256GB
#PBS -q bigmem              # the bigmem queue has 52 nodes with 256 GB of memory
#PBS -N test
#PBS -j oe
#PBS -A CSC001
cd $PBS_O_WORKDIR
./a.out

To submit jobs on Peregrine, the Torque qsub command should be used:

% qsub -A <project-handle> <batch_file>