UVA CS Project Directories
https://www.cs.virginia.edu/computing/doku.php?id=project_directories
Project directory /p/nmg5g
Attach CS Project Directory/CIFS to Arch Linux
Install SMB/CIFS tools
sudo pacman -Syu --needed cifs-utils gvfs-smb smbclient keyutils
Create a mount point
sudo mkdir -p /mnt/nmg5g
Quick test mount
Use the same server and share as on Proxmox, and the same vers= if you set one there:
sudo mount -t cifs //samba.cs.virginia.edu/p/nmg5g /mnt/nmg5g -o "username=rhe9cf,domain=CSDOM,vers=3.0,uid=$(id -u),gid=$(id -g),file_mode=0644,dir_mode=0755"
- Enter the password when prompted and check the contents: ls /mnt/nmg5g.
- If it fails, try another vers= (e.g. 2.1, 3.1.1); a quick connectivity check with smbclient is sketched below.
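If no vers= value works, it can help to rule out basic connectivity/authentication problems by listing shares with smbclient (installed above). This is a minimal sketch: the username and domain are the same ones used in the mount command, and whether the server exposes the /p shares in this listing is an assumption.
smbclient -L //samba.cs.virginia.edu -U rhe9cf -W CSDOM            # prompts for password, lists shares
smbclient -L //samba.cs.virginia.edu -U rhe9cf -W CSDOM -m SMB3    # optionally cap the protocol version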
Unmount:
sudo umount /mnt/nmg5g
Make it persistent: credentials file and fstab entry
Create credentials file
sudo mkdir -p /etc/samba
sudo vim /etc/samba/creds.nmg5g
Contents:
username=rhe9cf
password=YOUR_PASSWORD_HERE
domain=CSDOM
Then:
sudo chmod 600 /etc/samba/creds.nmg5g
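Before touching fstab, it can be worth sanity-checking the credentials file with a one-off mount. This is the same command as the quick test above, with the inline username/domain replaced by credentials=:
sudo mount -t cifs //samba.cs.virginia.edu/p/nmg5g /mnt/nmg5g -o "credentials=/etc/samba/creds.nmg5g,vers=3.0,uid=$(id -u),gid=$(id -g),file_mode=0644,dir_mode=0755"
ls /mnt/nmg5g
sudo umount /mnt/nmg5g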
Add fstab entry
Edit /etc/fstab:
sudo vim /etc/fstab
Add this line (adjust uid/gid if needed):
//samba.cs.virginia.edu/p/nmg5g /mnt/nmg5g cifs _netdev,nofail,credentials=/etc/samba/creds.nmg5g,vers=3.0,uid=1000,gid=1000,file_mode=0644,dir_mode=0755 0 0
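Optional variant (assuming systemd, which Arch uses): mount on first access instead of at boot by adding automount options to the same line; the 60-second idle timeout is an arbitrary example value.
//samba.cs.virginia.edu/p/nmg5g /mnt/nmg5g cifs _netdev,nofail,noauto,x-systemd.automount,x-systemd.idle-timeout=60,credentials=/etc/samba/creds.nmg5g,vers=3.0,uid=1000,gid=1000,file_mode=0644,dir_mode=0755 0 0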
Test
sudo umount /mnt/nmg5g # ignore error if not mounted
sudo mount -a
ls /mnt/nmg5g
Proxmox’s CIFS storage
UVA CS Slurm cluster
https://www.cs.virginia.edu/computing/doku.php?id=compute_slurm#resource_limits
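To see what partitions and limits are actually configured on the cluster, the standard Slurm query commands work from a login node; the cpu partition name here matches the batch script further down.
sinfo                          # partitions, node states, time limits
scontrol show partition cpu    # details/limits for the cpu partition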
Create a project-scoped env
Pick a path under your project dir (example: /p/nmg5g/condaenvs/py312):
# on a login node (portal) or short salloc
module load miniforge
mkdir -p /p/nmg5g/condaenvs
# create the env *by path* so it lives in /p
conda create -y --prefix /p/nmg5g/condaenvs/py312 python=3.12
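The comment above mentions a short salloc; if you prefer to do this on a compute node rather than a login node, an interactive allocation might look like the sketch below. The partition name cpu is taken from the batch script further down; the CPU count, memory, and time limit are just example values.
salloc -p cpu -c 2 --mem=4G -t 00:30:00
# ...then run the module load / conda create commands above, and exit when done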
To use it interactively:
module load miniforge
# make 'conda activate' available in non-interactive shells
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate /p/nmg5g/condaenvs/py312
python -V
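To add packages, install into the activated env as usual; numpy and requests below are just placeholder examples.
conda install -y numpy
python -m pip install requests   # assumes pip is present in the env; installs under /p/nmg5g/condaenvs/py312, not /u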
Keep caches out of /u (optional but recommended)
By default, Conda’s package cache and named envs live in your home. Redirect them to /p with a .condarc:
rhe9cf@portal03:~$ cd
rhe9cf@portal03:~$ pwd
/u/rhe9cf
# create/edit ~/.condarc
cat >> ~/.condarc <<'YAML'
envs_dirs:
- /p/nmg5g/conda/envs
pkgs_dirs:
- /p/nmg5g/conda/pkgs
YAML
mkdir -p /p/nmg5g/conda/envs /p/nmg5g/conda/pkgs
Conda reads .condarc to set envs_dirs and pkgs_dirs. You can also use conda config --add envs_dirs … / conda config --add pkgs_dirs …, as shown below.
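For reference, the conda config equivalents for the directories created above would be:
conda config --add envs_dirs /p/nmg5g/conda/envs
conda config --add pkgs_dirs /p/nmg5g/conda/pkgs
conda config --show envs_dirs pkgs_dirs   # verify what conda will use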
Use the env in Slurm jobs
Create hello.py
import sys, multiprocessing as mp
print("Python:", sys.version)
print("CPUs visible:", mp.cpu_count())create conda_example.slurm
#!/bin/bash
#SBATCH -p cpu
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH -t 01:00:00
#SBATCH -J conda-demo
#SBATCH --output=slurm-%A.out
set -euo pipefail
module --ignore_cache purge
module load miniforge
# make `conda activate` work in batch shells
source "$(conda info --base)/etc/profile.d/conda.sh"
conda activate /p/nmg5g/condaenvs/py312
echo "Python: $(python -V) @ $(which python)"
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# run the script with the *activated* interpreter
srun python hello.py
Run:
sbatch conda_example.slurm
squeue -u rhe9cf
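Output lands in slurm-<jobid>.out (from the --output=slurm-%A.out line). A few follow-up commands, where <jobid> is whatever sbatch printed:
cat slurm-<jobid>.out                                   # hello.py's Python version and visible CPUs
sacct -j <jobid> --format=JobID,State,Elapsed,MaxRSS    # accounting info once the job finishes
scancel <jobid>                                         # cancel it if needed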