SDSC Expanse

The SDSC User Guide can be found here. You should read it in full.

Account Info

Run the following on the login node to see which projects you have access to. The project ID is the same as the account ID to use in your Slurm script.
expanse-client user -r expanse

Setup

Bashrc

Add the following to your ~/.bashrc:
export SCRATCH=/expanse/lustre/scratch/$USER/temp_project
export PROJECT=/expanse/lustre/projects/nwu178/$USER
export SHARED=/expanse/lustre/projects/nwu178/arosen/shared
export SOFTWARE=$SHARED/software
module use --append $SHARED/modulefiles
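
After editing your ~/.bashrc, reload it and confirm the variables resolve and the group modulefiles are visible (a quick sanity check using only the lines above):
source ~/.bashrc
echo $PROJECT $SCRATCH $SHARED $SOFTWARE
module avail   # the modules under $SHARED/modulefiles should now appear in the list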

Python

You will need to install Miniconda. There are no system-wide Anaconda modules to use.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh
Make sure you answer yes when the installer asks whether to initialize Miniconda (i.e., run conda init), which adds the conda setup to your ~/.bashrc.
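
Once Miniconda is installed, keep your work in a dedicated environment rather than the base one. A minimal sketch (the environment name and Python version here are arbitrary examples, not project requirements):
conda create -n myproject python=3.11   # pick any name and Python version you like
conda activate myproject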

File Management

Expanse has the following file systems:
  • A project directory at /expanse/lustre/projects/nwu178/$USER that is purged 90 days after the allocation period ends. Your jobs should run here, but note that file space is limited (see the NSF ACCESS allocations page).
  • A scratch directory at /expanse/lustre/scratch/$USER/temp_project where files are purged 90 days after they are created. Run your jobs in the project directory rather than here, but scratch has virtually unlimited space, so it is a good place to store large data temporarily as needed.
  • A shared project folder at /expanse/lustre/projects/nwu178/arosen/shared that contains group software. Do not run calculations here. It is meant for shared data and executables.
  • Your home directory found at /home/$USER. You have 100 GB of space here, but calculations should not be run here. It is meant for personal software (e.g. Anaconda).
Note: The Lustre filesystem is not backed up. This means you must have a backup plan in place. Syncing data to one of our cloud services using Syncovery on your local machine would be wise.
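
If you prefer a command-line alternative to Syncovery, a periodic rsync pull from your local machine is one option. A minimal sketch, assuming the login.expanse.sdsc.edu login host and that you want a local copy of your project directory (adjust the paths to whatever you actually need to back up):
# run this on your local machine; replace $USER with your Expanse username if it differs from your local one
rsync -avz --progress $USER@login.expanse.sdsc.edu:/expanse/lustre/projects/nwu178/$USER/ ~/expanse_backup/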

Job Submission

Job submission works much the same as for the CPU nodes on Tiger (see 🔔Submitting Jobs).
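
The basic workflow is the same: write a Slurm script, submit it with sbatch, and monitor it with squeue. For example (submit.sh is just a placeholder name for a script like the VASP one below):
sbatch submit.sh    # submits the job and prints a job ID
squeue -u $USER     # check the status of your queued and running jobs
scancel <jobid>     # cancel a job if needed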

VASP

If you don't include the --mpi=pmi2 flag in your srun call, the job will not parallelize. In the script below, --mem=0 requests all of the memory on the node, and --constraint="lustre" ensures the job lands on a node with the Lustre filesystems mounted.
#!/bin/bash
#SBATCH --job-name="vasp"
#SBATCH --partition=debug
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --mem=0
#SBATCH --account=nwu178
#SBATCH --export=ALL
#SBATCH -t 00:30:00
#SBATCH --constraint="lustre"

source ~/.bashrc
module load vasp/6.5.1

srun --mpi=pmi2 vasp_std > vasp.out # or vasp_gam for 1x1x1 k-points