NCSA Delta

The NCSA Delta documentation can be found here.

Account Info

  • Run quota on a login node to see which projects you have access to and their corresponding directories
  • Run accounts on a login node to list the account names you have access to and the hours remaining on each

Setup

Bashrc

Add the following to your ~/.bashrc:
export WORK=/work/hdd/beiu/$USER
export PROJECT=/projects/beiu/$USER
export SHARED=/projects/beiu/shared
export SOFTWARE=$SHARED/software
module use --append $SHARED/modulefiles
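As a quick sanity check, the variables above expand as shown below (the username jdoe is illustrative):

```shell
# Illustrative expansion of the ~/.bashrc variables above (USER=jdoe is a placeholder)
USER=jdoe
export WORK=/work/hdd/beiu/$USER
export PROJECT=/projects/beiu/$USER
export SHARED=/projects/beiu/shared
export SOFTWARE=$SHARED/software
echo $WORK $SOFTWARE   # /work/hdd/beiu/jdoe /projects/beiu/shared/software
```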

Python

You must initialize your Anaconda environment before the first use:
module load anaconda3_gpu/23.9.0
conda init

File Management

Shared software can be found at /projects/beiu/shared/software.
The Delta documentation does a thorough job describing the different file systems.
  • Run calculations out of /work/hdd/beiu/$USER.
  • You have a /projects/beiu/$USER directory for sharing data with others.
  • You have a home directory (/u/$USER) with 90 GB of storage that is meant for storing software and scripts; do not run calculations there.
Note: The work and projects directories are not backed up, so you must have a backup plan in place. Syncing data to one of our cloud services with Syncovery on your local machine would be wise.
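As one possible backup sketch (not the only option), a work directory can be mirrored to a local machine with rsync. The username, login hostname, and paths below are illustrative assumptions; substitute your own:

```shell
# Sketch: mirror a Delta work directory to a local backup folder.
# 'jdoe' and the login hostname are placeholders - adjust before use.
SRC='jdoe@login.delta.ncsa.illinois.edu:/work/hdd/beiu/jdoe/'
DEST="$HOME/delta-backup/"
mkdir -p "$DEST"
echo rsync -avz "$SRC" "$DEST"   # drop 'echo' to actually run the transfer
```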

Job Submission

The NCSA documentation has many great examples of submission scripts. Note that there are many different types of hardware on Delta, as shown here. You must select the desired hardware at job submission time with --partition and make sure that --ntasks-per-node and --gpus-per-node are compatible with that hardware. The A100s are the fastest.
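The compatibility constraint can be sketched numerically. Assuming a gpuA100x4 node has 64 CPU cores and 4 GPUs (verify against the Delta hardware table), an even split of cores across one MPI task per GPU gives the --cpus-per-task value:

```shell
# Sketch: pick --cpus-per-task so tasks evenly share a node's cores.
# 64 cores and 4 GPUs per gpuA100x4 node are assumptions - check the docs.
CORES_PER_NODE=64
GPUS_PER_NODE=4                              # one MPI task per GPU
echo $(( CORES_PER_NODE / GPUS_PER_NODE ))   # -> 16, i.e. --cpus-per-task=16
```

This matches the four-A100 script below, where --ntasks-per-node=4 and --cpus-per-task=16 together use the whole node.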

GPU VASP

Delta offers several types of GPU hardware; the examples below use the A40 and A100 partitions.

One A40 GPU on One Node

The following is good for debugging.
#!/bin/bash
#SBATCH --job-name="vasp"
#SBATCH --partition=gpuA40x4
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:1
#SBATCH --gpu-bind=closest
#SBATCH --account=beiu-delta-gpu
#SBATCH --mem=50G
#SBATCH -t 00:10:00

source ~/.bashrc
module purge
module load vasp/6.5.1

srun vasp_std > vasp.out # use vasp_gam instead for Gamma-only (1x1x1) k-point meshes

Four A100 GPUs on One Node

#!/bin/bash
#SBATCH --job-name="vasp"
#SBATCH --partition=gpuA100x4
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=16
#SBATCH --gres=gpu:4
#SBATCH --gpu-bind=closest
#SBATCH --account=beiu-delta-gpu
#SBATCH --mem=0 # request all memory on the node
#SBATCH -t 00:10:00

source ~/.bashrc
module purge
module load vasp/6.5.1

srun vasp_std > vasp.out # use vasp_gam instead for Gamma-only (1x1x1) k-point meshes