UVA CS Project Directories
https://www.cs.virginia.edu/computing/doku.php?id=project_directories
Project directory /p/nmg5g
UVA CS Slurm cluster
https://www.cs.virginia.edu/computing/doku.php?id=compute_slurm#resource_limits
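The steps below can be run on a portal login node or inside a short interactive allocation; a minimal salloc request might look like the following sketch (the cpu partition matches the batch script further down, the limits are only illustrative, so check the resource-limits page above):

salloc -p cpu --cpus-per-task=2 --mem=4G -t 00:30:00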
Create a project-scoped env
Pick a path under your project dir (example: /p/nmg5g/condaenvs/py312):
# on a login node (portal) or short salloc
module load miniforge
mkdir -p /p/nmg5g/condaenvs
# create the env *by path* so it lives in /p
conda create -y --prefix /p/nmg5g/condaenvs/py312 python=3.12

To use it interactively:
module load miniforge
# make 'conda activate' available when 'conda init' has not been run in this shell
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate /p/nmg5g/condaenvs/py312
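# optional sanity check: the interpreter should now come from the /p prefix
python -c "import sys; print(sys.prefix)"   # expect /p/nmg5g/condaenvs/py312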
python -V

Keep caches out of /u (optional but recommended)
By default, Conda’s package cache and named envs live under your home directory (/u). Redirect them to /p with a ~/.condarc:
rhe9cf@portal03:~$ cd
rhe9cf@portal03:~$ pwd
/u/rhe9cf

# create/edit ~/.condarc
cat >> ~/.condarc <<'YAML'
envs_dirs:
  - /p/nmg5g/conda/envs
pkgs_dirs:
  - /p/nmg5g/conda/pkgs
YAML
mkdir -p /p/nmg5g/conda/envs /p/nmg5g/conda/pkgs

Conda reads ~/.condarc to set envs_dirs and pkgs_dirs. You can also use conda config --add envs_dirs … / conda config --add pkgs_dirs …
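If you prefer conda config to editing the file by hand, the equivalent commands (a sketch using the same project paths as above) are:

module load miniforge
conda config --add envs_dirs /p/nmg5g/conda/envs
conda config --add pkgs_dirs /p/nmg5g/conda/pkgs
# both write to ~/.condarc; 'conda config --show envs_dirs pkgs_dirs' confirms the result

With envs_dirs pointing at /p, later named envs (conda create -n …) should also land under /p/nmg5g/conda/envs instead of your home directory.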
Use the env in Slurm jobs
create hello.py
import sys, multiprocessing as mp
print("Python:", sys.version)
print("CPUs visible:", mp.cpu_count())create conda_example.slurm
#!/bin/bash
#SBATCH -p cpu
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH -t 01:00:00
#SBATCH -J conda-demo
#SBATCH --output=slurm-%A.out
set -euo pipefail
module --ignore_cache purge
module load miniforge
# make `conda activate` work in batch shells
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate /p/nmg5g/condaenvs/py312
echo "Python: $(python -V) @ $(which python)"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run the script with the *activated* interpreter
srun python hello.py

run:
sbatch conda_example.slurm
squeue -u rhe9cf
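When the job finishes, the output file named by --output=slurm-%A.out appears in the submission directory. A quick check, assuming <jobid> is whatever sbatch printed and that job accounting (sacct) is enabled on the cluster:

sacct -j <jobid> --format=JobID,State,Elapsed
cat slurm-<jobid>.out   # should show the Python version and CPU count printed by hello.py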