Write the Slurm script as submit.sh and submit it with sbatch submit.sh
Tiger3
Copy and paste the script below into the same folder as the job you are trying to run (e.g. /scratch/gpfs/ROSENGROUP/<your_big_folder>/<folder_youre_running_in>/submit.sh).
Python Script
submit.sh
#!/bin/bash
#SBATCH --job-name=python # create a short name for your job
#SBATCH --nodes=1 # node count
#SBATCH --ntasks-per-node=1 # total number of tasks per node
#SBATCH --cpus-per-task=1 # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem=4G # memory (up to 1 TB per node)
#SBATCH --time=00:10:00 # total run time limit (HH:MM:SS)
#SBATCH --account=rosengroup
source ~/.bashrc
module purge
conda activate cms
python job.py > job_output_file
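The script above runs a Python file named job.py from the same folder. If you just want to confirm that the queue and your conda environment are working, job.py can be a minimal placeholder like the hypothetical example below:
# job.py -- minimal placeholder to confirm the job ran (hypothetical test script)
import sys

print("Python executable:", sys.executable)
print("Hello from a Tiger3 compute node!")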
Once your job.py and submit.sh files are saved, submit the job by running this command from the folder that submit.sh and job.py are in:
sbatch submit.sh
Now your job has entered the queue! You can check out your queued jobs using the command:
squeue -u <YourNetID>
VASP
To run VASP, we modify the Slurm submission script so that we load the necessary modules and run the VASP executable via the srun command.
See the Research Computing documentation on performing a Scaling Analysis to determine suitable values for nodes, ntasks-per-node, and cpus-per-task. Generally, you do not need more than a couple of nodes for routine calculations. The product of ntasks-per-node and cpus-per-task must equal the number of CPU cores per node (i.e. 112 on Tiger3); for example, 112 tasks with 1 CPU each, or 28 tasks with 4 CPUs each.
#!/bin/bash
#SBATCH --job-name=vasp # create a short name for your job
#SBATCH --nodes=1 # node count
#SBATCH --ntasks-per-node=112 # total number of tasks per node
#SBATCH --cpus-per-task=1 # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem=64G # memory (up to 1 TB per node)
#SBATCH --time=00:10:00 # total run time limit (HH:MM:SS)
#SBATCH --account=rosengroup
source ~/.bashrc
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
module purge
module load intel-oneapi/2024.2
module load intel-mpi/oneapi/2021.13
module load intel-mkl/2024.2
module load hdf5/oneapi-2024.2/1.14.4
srun vasp_std > vasp.out
Note that the script above assumes that you have the vasp_std executable in your PATH, as described in 🙏Getting Started.
ASE w/ VASP
Running a VASP calculation via ASE works essentially the same way as running VASP directly, except that you now call a Python script and define the VASP parallelization flags by setting the ASE_VASP_COMMAND environment variable, as described in the ASE documentation. A minimal example of such a Python script is sketched after the submission script below.
#!/bin/bash
#SBATCH --job-name=vasp # create a short name for your job
#SBATCH --nodes=1 # node count
#SBATCH --ntasks-per-node=112 # total number of tasks per node
#SBATCH --cpus-per-task=1 # cpu-cores per task (>1 if multi-threaded tasks)
#SBATCH --mem=64G # memory (up to 1 TB per node)
#SBATCH --time=00:10:00 # total run time limit (HH:MM:SS)
#SBATCH --account=rosengroup
source ~/.bashrc
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export SRUN_CPUS_PER_TASK=$SLURM_CPUS_PER_TASK
export ASE_VASP_COMMAND="srun vasp_std" # or "srun vasp_gam" for 1x1x1 kpts
module purge
# load the same VASP modules as in the VASP section above
module load intel-oneapi/2024.2
module load intel-mpi/oneapi/2021.13
module load intel-mkl/2024.2
module load hdf5/oneapi-2024.2/1.14.4
conda activate cms # environment that provides ASE, as in the Python section
python job.py > job_output_file
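For reference, here is a minimal sketch of what the ASE-driven job.py might contain, assuming VASP pseudopotentials are configured for ASE (via VASP_PP_PATH) as described in the ASE documentation; the structure and calculator settings below are placeholders, not recommendations:
# job.py -- minimal ASE + VASP sketch (illustrative settings only)
from ase.build import bulk
from ase.calculators.vasp import Vasp

# Placeholder test structure
atoms = bulk("Cu", "fcc", a=3.6)

# The Vasp calculator launches whatever ASE_VASP_COMMAND points to (here, srun vasp_std)
atoms.calc = Vasp(
    xc="PBE",        # exchange-correlation functional (placeholder)
    encut=400,       # plane-wave cutoff in eV (placeholder)
    kpts=(4, 4, 4),  # k-point mesh (use vasp_gam only if this is 1x1x1)
    directory=".",   # run VASP in the current folder
)

# Triggers the VASP run via srun and parses the output
energy = atoms.get_potential_energy()
print("Total energy (eV):", energy)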