Perlmutter

The NERSC user guide can be found  here .

Setup

General Tips

  • Fill out the  VASP License Confirmation Request  form to get access to system-level VASP modules.
  • It is recommended that you install  sshproxy  to make multi-factor authentication less annoying. Refer to  🔑SSH Proxy at NERSC  for additional details.
  • If you would like to use MongoDB on NERSC,  fill out a ticket  and the NERSC staff will host one for you that you can access from the machine. Make sure to select MongoDB and Production-quality. For the description you can put: "Workflows carried out using Jobflow and Jobflow-Remote to orchestrate DFT calculations with VASP. The database needs to be accessible from the login and compute nodes." You can use the following information:
      • Database name: <Username>.nersc.gov
      • Database Type: MongoDB
      • Approximate size: 10 GB
      • Service level: Production
      • Description: I am using Jobflow and Jobflow-Remote (successor to FireWorks) to orchestrate calculations. MongoDB is used to store metadata about the job states as well as to store the calculation outputs. The database needs to be accessible from the login nodes, compute nodes, and ideally from my laptop (e.g. for viewing with Studio3T).
      • NERSC project: See  💰Grant Details  for the corresponding project.
      • Duration: <Insert roughly how many years you have left in the group>
  • For instructions on connecting Studio3T to your MongoDB at NERSC, check out  🎋Studio3T at NERSC .
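Once NERSC provisions the database, you can check connectivity from a login node with mongosh. This is only a sketch: the hostname, username, and database name below are placeholders, so substitute the values from the ticket response.

```shell
# Placeholders: replace the host, username, and database name with the
# values NERSC sends you after the ticket is processed.
mongosh "mongodb://mongodb05.nersc.gov:27017/alice" --username alice --password
```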

Bashrc

Add the following to your ~/.bashrc. This assumes you are operating out of the m5034 project directory.
export CFS=/global/cfs/cdirs/m5034/$USER
export COMMON=/global/common/software/m5034/common
export SOFTWARE=$COMMON/software
module use --append $COMMON/modulefiles
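For reference, these variables expand to paths like the following (a sketch using "alice" as a placeholder username; on Perlmutter, $USER is already set for you):

```shell
# Illustrative expansion of the ~/.bashrc variables above ("alice" is a placeholder).
USER=alice
CFS=/global/cfs/cdirs/m5034/$USER
COMMON=/global/common/software/m5034/common
SOFTWARE=$COMMON/software
echo "$CFS"       # /global/cfs/cdirs/m5034/alice
echo "$SOFTWARE"  # /global/common/software/m5034/common/software
```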

Filesystems

The NERSC documentation does a thorough job describing the different  file systems .
  • Run calculations out of $SCRATCH, which is available without defining anything in your ~/.bashrc. Data may be purged after 8 weeks, so this is for temporary storage only.
  • If using quacc, you can set the SCRATCH_DIR setting to $SCRATCH to ensure calculations are always running in $SCRATCH. The RESULTS_DIR will default to your current working directory or can be set manually (e.g. to a folder in $CFS).
  • Store data medium-term in $CFS. It is not backed up but is also not purged.
  • Compiled codes can be found in $SOFTWARE.
  • You have a home directory that is meant for storing small software installs and scripts; calculations should not be run there.
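If you use quacc, the settings mentioned above can also be applied via environment variables in your ~/.bashrc. A sketch, assuming quacc's usual QUACC_-prefixed environment-variable overrides (the results folder name is a placeholder):

```shell
# Assumption: quacc reads QUACC_-prefixed environment variables as setting overrides.
export QUACC_SCRATCH_DIR=$SCRATCH        # always run calculations out of scratch
export QUACC_RESULTS_DIR=$CFS/results    # hypothetical destination for final results
```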

Job Submission

VASP

The following parallelization flags have been reasonably well-optimized for use on Perlmutter. For the account ID, check the  Iris platform  or  💰Grant Details .
#!/bin/bash
#SBATCH -N 1
#SBATCH -C gpu
#SBATCH -G 4
#SBATCH -q debug
#SBATCH -J test
#SBATCH -t 00:30:00
#SBATCH -A AccountID_g

module load python
module load vasp/6.5.1_gpu

export OMP_NUM_THREADS=8
export OMP_PLACES=threads
export OMP_PROC_BIND=spread

srun -N 1 -n 4 -c 32 --cpu_bind=cores -G 4 --gpu-bind=none stdbuf --output=L vasp_std # or vasp_gam
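Assuming the script above is saved as submit.sh, a typical submit-and-monitor cycle from a login node looks like the following sketch:

```shell
sbatch submit.sh   # submit; prints "Submitted batch job <jobid>"
squeue --me        # show the state of your pending and running jobs
scancel <jobid>    # cancel a job if needed
```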

Jobflow-Remote

Refer to  👾JFR Setup on Perlmutter  for how to set up and use Jobflow-Remote on Perlmutter.