# Interactive Jobs

Working directly on compute nodes.
Interactive jobs give you a shell on a compute node where you can run commands, test code, and do exploratory analysis – just like working on the head node, but with dedicated resources.
## Basic interactive session
```bash
salloc --cpus-per-task=8 --mem=8G --time=02:00:00
```

This:

1. Requests 8 cores and 8 GB of RAM for 2 hours
2. Waits for resources to become available
3. Connects you to a compute node
Your prompt will change to show the node name:
```bash
burk@hnode ~
❯ salloc --cpus-per-task=8 --mem=8G --time=02:00:00
salloc: Granted job allocation 40295
salloc: Nodes node01 are ready for job
[burk@node01]~% echo "Hello from $(hostname)"
Hello from node01
```
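It is easy to forget which machine you are on. A small sketch to check: compare the hostname against the head node's name (the `hnode` prefix here is an assumption based on the prompt above; substitute your cluster's actual head node name).

```bash
# Warn if you are still on the head node before running heavy work.
# "hnode" is an assumed head-node name -- adjust for your cluster.
node=$(hostname)
case "$node" in
  hnode*) echo "Still on the head node: $node" ;;
  *)      echo "On a compute node: $node" ;;
esac
```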
## Loading modules
Modules loaded on the head node are inherited by your interactive session. Load them before running salloc:
```bash
module load R/4.5.2
salloc --cpus-per-task=4 --mem=8G --time=02:00:00
R  # R is available because you loaded it before salloc
```

If you forget, you can still load modules after connecting, but it’s cleaner to set up your environment first.
## Running commands with srun
Sometimes you want to run a single command on a compute node without an interactive shell. Use srun:
```bash
# Run an R script on a compute node
module load R/4.5.2
srun --cpus-per-task=4 --mem=8G --time=01:00:00 Rscript analysis.R
```

The difference:

- `salloc` – get a shell, run multiple commands interactively
- `srun` – run one command, exit when done
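If you run one-off commands often, the resource flags get repetitive. A minimal sketch of a hypothetical wrapper function (`qrun` is not a real Slurm command, just a name chosen here) that applies sensible defaults while letting you override them via environment variables:

```bash
# Hypothetical convenience wrapper around srun with overridable defaults.
# Assumes Slurm's srun is on your PATH when you actually call it.
qrun() {
  srun --cpus-per-task="${CPUS:-4}" --mem="${MEM:-8G}" --time="${TIME:-01:00:00}" "$@"
}

# Usage:
#   qrun Rscript analysis.R          # defaults: 4 CPUs, 8G, 1 hour
#   CPUS=16 MEM=32G qrun Rscript big_analysis.R
```

Put the function in your `~/.bashrc` if you find yourself typing the same flags every day.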
## Common workflows
### R development
```bash
# On head node
module load R/4.5.2

# Get interactive session
salloc --cpus-per-task=4 --mem=16G --time=04:00:00

# Now on compute node
R
# ... develop, test, iterate ...

# When done
quit()
exit
```

### Testing before batch submission
Use interactive mode to test your script works before submitting a long batch job:
```bash
module load R/4.5.2
salloc --cpus-per-task=2 --mem=4G --time=00:30:00

# Test your script
Rscript my_analysis.R --test-mode

# If it works, exit and submit as a batch job
exit
sbatch my_analysis.slurm
```

### Debugging a failed batch job
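For reference, a minimal sketch of what a batch script like `my_analysis.slurm` might contain. The resource values and module version mirror the interactive test above, but treat the file as an assumed template to adapt, not a prescribed format for this cluster:

```bash
#!/bin/bash
#SBATCH --job-name=my_analysis
#SBATCH --cpus-per-task=2
#SBATCH --mem=4G
#SBATCH --time=04:00:00

# Same environment setup as the interactive test
module load R/4.5.2
Rscript my_analysis.R
```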
If a batch job failed, reproduce the environment interactively:
```bash
# Request similar resources to your batch job
salloc --cpus-per-task=4 --mem=8G --time=01:00:00

# Load the same modules
module load R/4.5.2

# Run the script manually to see errors
Rscript my_script.R
```

## Keeping sessions alive with tmux
If your SSH connection drops, your interactive session is lost. Use tmux to run persistent sessions that survive disconnects – start your salloc inside tmux, detach, and reattach later. See Tips & Tools – tmux for a full guide.
## Interactive session limits
Interactive sessions (`salloc`) automatically use the `interactive` QoS, which has these limits:
| Constraint | Limit |
|---|---|
| Max duration | 1 day |
| Max sessions per user | 2 |
| Max CPUs per user | 192 |
If you need longer runs or more concurrent jobs, use batch jobs with a different QoS instead.
## Tips
### Don’t forget to exit
When you’re done, type exit to release the compute node. Idle sessions waste resources and may count against your limits.
### Request appropriate resources
Don’t request 32 cores and 128 GB if you only need 4 cores and 8 GB. Over-requesting:

- Makes your job wait longer to start
- Wastes cluster resources
- May hit QoS limits faster
### Check your allocation
Once on a compute node, verify your resources:
```bash
# See your allocation
squeue --me

# Check available cores
nproc

# Check available memory (in kB)
grep MemTotal /proc/meminfo
```
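The `MemTotal` figure is in kB, which is hard to read at a glance. A small awk one-liner to convert it to GiB (assumes a Linux node with `/proc/meminfo`, which is true of typical cluster nodes):

```bash
# Print total memory in GiB (1 GiB = 1048576 kB)
awk '/^MemTotal/ { printf "%.1f GiB total\n", $2 / 1048576 }' /proc/meminfo
```

Note that this reports the node's total physical memory, not your job's allocation — on a shared node your `--mem` limit may be much smaller.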