# Software
Available software and how to use it
The cluster provides software through environment modules (Lmod). Modules let you load specific software versions into your session without conflicts.
## Module basics
```shell
# List all available modules
module avail

# Load a module
module load R/4.5.2

# See what you have loaded
module list

# Unload a module
module unload R

# Unload everything
module purge
```

Modules set up PATH, library paths, and other environment variables so the software “just works” when loaded.
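To see exactly which environment variables a module would change before you load it, Lmod provides `module show`. A short sketch (the R version is illustrative):

```shell
# Inspect a module without loading it: prints the PATH,
# library-path, and other changes the modulefile applies
module show R/4.5.2

# After loading, confirm the binary resolves from the module's path
module load R/4.5.2
which R
```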
Add module load commands to your ~/.bashrc to load software automatically on login:
```shell
echo 'module load R/4.5.2' >> ~/.bashrc
```

## Available software
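A slightly more defensive variant for `~/.bashrc`, a sketch assuming a standard Lmod setup: guard the load so shells where the `module` command is unavailable (some non-interactive sessions, for example) don't print errors at login:

```shell
# Only load modules when the `module` command actually exists,
# so non-interactive shells (scp, sftp, cron) don't fail on login scripts
if command -v module >/dev/null 2>&1; then
    module load R/4.5.2
fi
```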
### R
R is managed via rig (R Installation Manager) and available as a module:
```shell
module avail R        # See available versions
module load R/4.5.2   # Load R
```

See R on the Cluster for details on package installation, parallelism, and workflows.
### Python
We recommend using uv for Python environment management rather than a system-wide Python installation. See Python on the Cluster.
### PLINK (genomics)
Both PLINK versions are available:
```shell
module load plink/1.9             # PLINK 1.9
module load spack/1.1.1           # Load spack modules first...
module load plink2/2.0.0-a.6.9    # ...then PLINK 2.0
```

The binaries are named `plink` and `plink2` respectively, so both can be loaded at the same time if needed.

PLINK 1.9 is available directly via `module load plink/1.9`. PLINK 2.0 is installed via Spack, so you need to load the spack module first to make it visible.
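As a quick sanity check that both generations resolve independently after loading (both tools accept the standard `--version` flag):

```shell
# Load both PLINK generations side by side
module load plink/1.9
module load spack/1.1.1
module load plink2/2.0.0-a.6.9

# Each binary has its own name, so neither shadows the other
which plink plink2

# Print each tool's version banner
plink --version
plink2 --version
```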
### CUDA toolkit
The CUDA toolkit (nvcc, libraries, headers) is installed via Spack for GPU development. To see available versions:
```shell
spack find cuda
```

To load a specific version:

```shell
# Option 1: spack load (works without loading the spack module first)
spack load cuda@12.4.0

# Option 2: via environment modules
module load spack/1.1.1
module avail cuda          # See available versions
module load cuda/12.4.0
```

This gives you nvcc, CUDA headers, and libraries. The NVIDIA driver is already installed on the GPU node — you only need the toolkit for compiling CUDA code.
You don’t need to load CUDA manually for Python (PyTorch) or R (torch) GPU work — those frameworks bundle their own CUDA runtime. The CUDA module is for compiling custom CUDA code (.cu files) or building software that depends on the CUDA toolkit.
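A minimal end-to-end sketch of using the toolkit to compile a `.cu` file (the kernel and file names are illustrative):

```shell
# Load the toolkit, then compile a trivial kernel with nvcc
module load spack/1.1.1
module load cuda/12.4.0

cat > hello.cu << 'EOF'
#include <cstdio>

__global__ void hello() { printf("hello from the GPU\n"); }

int main() {
    hello<<<1, 1>>>();          // launch one thread on the device
    cudaDeviceSynchronize();    // wait for the kernel's printf to flush
    return 0;
}
EOF

nvcc -o hello hello.cu   # compiling needs only the toolkit
./hello                  # running it needs a GPU node with the driver
```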
## Spack (HPC package manager)
Spack is an HPC package manager used to install optimized scientific software. The cluster has a system-wide Spack installation with packages built for our AMD Zen3 hardware using the AMD AOCC compiler.
### Loading Spack and its packages
```shell
module load spack/1.1.1
```

This gives you:

- The `spack` command (for querying and loading packages)
- Access to all Spack-installed modules via `module avail`

```shell
# List all Spack-installed packages
spack find

# Load a package
spack load <package>

# Or use the module system
module avail                      # Shows spack-installed modules too
module load plink2/2.0.0-a.6.9
```

### Installing your own packages
If you need software that isn’t installed system-wide, you can install packages in your home directory that build on top of the system installation. This requires a one-time setup.
#### One-time setup
Run these commands once to configure Spack for user-local installs:
```shell
mkdir -p ~/.spack

cat > ~/.spack/upstreams.yaml << 'EOF'
upstreams:
  system:
    install_tree: /srv/software/spack/opt
EOF

cat > ~/.spack/config.yaml << 'EOF'
config:
  install_tree:
    root: ~/spack/opt
  source_cache: ~/.spack/cache/source
  misc_cache: ~/.spack/cache/misc
  build_stage:
    - $tempdir/$user/spack-stage
EOF

cat > ~/.spack/modules.yaml << 'EOF'
modules:
  default:
    enable:
      - lmod
    roots:
      lmod: ~/spack/modules
    lmod:
      core_compilers:
        - aocc@5.1.0
        - gcc@11.5.0
      hierarchy: []
      hash_length: 0
      hide_implicits: true
      all:
        autoload: direct
      projections:
        all: '{name}/{version}'
EOF
```

This configures:

- `upstreams.yaml`: Tells Spack to reuse system-installed packages as dependencies instead of rebuilding everything
- `config.yaml`: Redirects the install tree, source cache, and build staging to your home directory (otherwise Spack would try to write to root-owned system paths)
- `modules.yaml`: Redirects module file generation to your home directory
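To sanity-check the setup, `spack config get` prints the merged configuration for a section, so you can confirm your user scope took effect. A sketch:

```shell
module load spack/1.1.1

# Print the merged config; the install_tree root should now point
# at ~/spack/opt rather than the root-owned system path
spack config get config

# Likewise, confirm the system install tree is registered as an upstream
spack config get upstreams
```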
### Installing packages
```shell
module load spack/1.1.1
spack install <package>
```

Spack will reuse system-installed dependencies and only build what’s missing.

The cluster CPUs are AMD Zen3. For computationally intensive software, build with the AMD AOCC compiler:

```shell
spack install <package> %aocc@5.1.0
```

If a package fails to build with AOCC, fall back to GCC:

```shell
spack install <package> %gcc@11.5.0
```

If you don’t specify a compiler, Spack will choose one automatically.

Package installation compiles from source, which can be slow. Only use this if the software you need isn’t already installed. For common requests, ask the admin to install it system-wide – admin-installed packages are available for everyone and optimized for the cluster hardware.
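Before committing to a long build, `spack spec` previews what a given spec would concretize to; recent Spack versions annotate each dependency with its install status, so you can see how much compiling an install would actually trigger. A sketch (the package name is illustrative):

```shell
module load spack/1.1.1

# Dry-run the concretization: dependencies already installed
# (including upstream system packages) are marked as such,
# so only the unmarked ones would be built from source
spack spec htop %gcc@11.5.0
```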
## Requesting software
If you need software that isn’t available, contact the cluster admin. System-wide installations are preferred over per-user installs because they are optimized for the cluster hardware and shared across all users.