Quick Start with R

From login to running R on a compute node in 5 minutes

Modified: 2026-02-13

This guide gets you started quickly. We’ll explain the why later – for now, let’s get you running R on the cluster.

1. Connect to the cluster

SSH into the head node (see Connecting to the Cluster for detailed setup):

ssh <username>@10.10.11.165

Replace <username> with your cluster username. You’ll land on the head node – think of it as the reception desk of the cluster.

Note

If you haven’t set up SSH keys yet, you’ll need your password. See the connection guide for setting up passwordless access.
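If you connect often, an entry in `~/.ssh/config` saves typing. A minimal sketch — the alias name `cluster` is arbitrary, and `<username>` is your cluster username as above:

```
Host cluster
    HostName 10.10.11.165
    User <username>
```

With this in place, `ssh cluster` is equivalent to the full command.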

Important: Don’t compute on the head node

The head node is shared by everyone and is only for light tasks: installing packages, editing scripts, submitting jobs. Heavy computations will be terminated.

2. Load R

The cluster uses modules to manage software versions. To use R:

module load R/4.5.2

Check available R versions with module avail R.

Tip: Make it permanent

Add module load R/4.5.2 to your ~/.bashrc so R is available every time you log in. If you later need a different version, swap it out:

module unload R           # Remove current R
module load R/4.4.3       # Load a different version

Or edit your ~/.bashrc to change the version there.

Tip: Per-project environments

Instead of a global ~/.bashrc setting, you can use direnv to automatically load the right R version when you enter a project directory. See Tips & Tools – direnv for setup instructions.
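As a sketch of what that looks like — assuming direnv is installed and hooked into your shell, and that the `module` command is available in the shell direnv spawns — a project’s `.envrc` could be as small as:

```shell
# .envrc — run automatically by direnv when you cd into this project
module load R/4.4.3
```

After creating or editing an `.envrc`, run `direnv allow` once in that directory to approve it.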

3. Install packages (on the head node)

The head node has internet access; the compute nodes don’t. Install your packages here:

R
# Inside R
install.packages("tidyverse")
install.packages("data.table")
# ... any other packages you need

Packages are installed to your home directory and will be available on all nodes.
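If you want to see exactly where they go, you can ask R for its library search path; `.libPaths()` is base R, so this works the same on any node:

```r
# Inside R: show the directories where packages are installed and searched
.libPaths()
# The first entry is typically a personal library under your home
# directory (something like ~/R/<platform>-library/<R version>)
```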

4. Get a compute node

Now request an interactive session on a compute node:

salloc --cpus-per-task=4 --mem=8G --time=02:00:00

This asks Slurm (the job scheduler) for:

  • 4 CPU cores
  • 8 GB of RAM
  • 2 hours of time

Once allocated, you’re automatically connected to a compute node. Your prompt might change to show the node name (e.g., gnode01).
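You can sanity-check the allocation before starting work. A small sketch — the `SLURM_*` environment variables are set by Slurm inside a job, so the fallbacks below just make the commands safe to run anywhere:

```shell
# Inside the allocation: which node am I on, and what did Slurm grant?
hostname
echo "CPUs: ${SLURM_CPUS_PER_TASK:-not in a Slurm job}"
echo "Memory: ${SLURM_MEM_PER_NODE:-not in a Slurm job}"
```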

5. Run R on the compute node

Now you can run R with actual computing power:

R
# Your heavy computations go here
library(data.table)
dt <- fread("large_dataset.csv")
# ... do your analysis

When done, type exit to release the node and return to the head node.

Quick reference

| Task | Command |
|------|---------|
| Connect to cluster | `ssh <username>@10.10.11.165` |
| Load R | `module load R/4.5.2` |
| See available modules | `module avail` |
| Request interactive session | `salloc --cpus-per-task=4 --mem=8G --time=02:00:00` |
| Check your running jobs | `squeue --me` |
| Cancel a job | `scancel <jobid>` |
| End interactive session | `exit` |

Next steps

Now that you’ve got the basics:

  • Cluster Basics – Understand the architecture: nodes, cores, modules, and why this workflow exists
  • Software – Available software, how modules work, and how to install your own packages
  • Interactive Jobs – More options for interactive work
  • Batch Jobs – Submit jobs that run without you being connected
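As a preview of the batch workflow, a job script bundles the same resource requests you gave `salloc` with the commands to run. A minimal sketch — the job name and `analysis.R` are placeholders for your own files:

```shell
#!/bin/bash
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=02:00:00
#SBATCH --job-name=my-analysis   # placeholder name

module load R/4.5.2
Rscript analysis.R               # placeholder script
```

Save it (e.g. as `run.sh`), submit with `sbatch run.sh`, and Slurm runs it without you staying connected.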