# FAQ

Frequently asked questions
## General

### How do I get access to the cluster?
Contact Lukas (burk@leibniz-bips.de) to request an account. You’ll receive your username and initial password. See Connecting to the Cluster for setup instructions.
### Why can’t I install packages on compute nodes?
Compute nodes don’t have internet access. Install packages on the head node, which has internet. Since your home directory is shared via NFS, packages installed there are available on all nodes.
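For example, installing an R package so it becomes available cluster-wide might look like this (the module version and package name are placeholders, not site requirements):

```bash
# On the head node (has internet; your home is NFS-shared):
module load R/4.5.3                      # placeholder version
R -e 'install.packages("data.table")'    # installs into your home library

# The same library is then visible from every compute node.
```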
### My SSH connection keeps dropping. What can I do?
Use tmux to maintain persistent sessions:

```bash
tmux new -s work
# ... do your work ...
# If disconnected, reconnect with:
tmux attach -t work
```

Also check your SSH client settings for keepalive options.
### How do I transfer files to/from the cluster?
There are several options depending on your comfort level:
**Command line (rsync, recommended):**

```bash
# Upload a project folder to the cluster
rsync -avP ~/my_project/ <user>@bipscluster:/srv/home/<user>/my_project/

# Download results to your local machine
rsync -avP <user>@bipscluster:/srv/home/<user>/project/results/ ~/Downloads/results/
```

rsync only transfers files that have changed, shows progress, and can resume interrupted transfers. See Tips > File transfer for more options and flags.
**Command line (scp, simple but basic):**

```bash
# Upload a file
scp data.csv <user>@bipscluster:/srv/home/<user>/project/

# Download a file
scp <user>@bipscluster:/srv/home/<user>/project/results.csv .
```

scp works like cp but over SSH. It’s simpler than rsync but always copies everything (no incremental sync) and can’t resume.
**Graphical (SFTP clients):**
If you prefer drag-and-drop, use an SFTP client:
| Application | Platform |
|---|---|
| FileZilla | Windows, macOS, Linux |
| WinSCP | Windows |
| Cyberduck | macOS, Windows |
Connect using the same hostname, username, and port as SSH. Choose SFTP (not plain FTP) as the protocol. See Tips > GUI applications for setup details.
### Where should I store my data?

- Active project data → `/srv/home/<user>/` (your home directory, fast NVMe, shared across all nodes)
- Shared project data → `/mnt/sas/projects/<project>/` (restricted access, managed by PIs)
- Research group data → `/mnt/sas/groups/<group>/` (shared with your research group)
- Personal archive data → `/mnt/sas/users/<user>/` (bulk HDD storage)
- Do not store data in `/opt` or `/tmp` on compute nodes
### How do I get access to a project directory?
Project directories under `/mnt/sas/projects/` have restricted access – only approved project members can read or write data there. Access is granted by the responsible PI and enforced by the cluster administrator. Some projects have time-limited access that expires automatically.
If you need access to a project, ask the responsible PI to request it. If you believe your access has expired in error, contact the cluster administrator.
## Jobs and Slurm

### Why is my job stuck in “pending”?
Check the reason with:

```bash
squeue --me --long
```

Common reasons:
- Resources: Waiting for CPUs/memory to free up
- Priority: Other jobs are ahead in the queue
- QOSMaxJobsPerUserLimit: You’ve hit your job limit
### My job failed with “OUT_OF_MEMORY”. What now?
Your job exceeded its memory allocation. Request more memory:

```bash
#SBATCH --mem=16G  # or higher
```

Check how much memory your previous job actually used:

```bash
sacct -j <jobid> --format=JobID,MaxRSS
```

### My job hit the time limit. How do I fix it?
Request more time:

```bash
#SBATCH --time=12:00:00  # or longer
```

Or use a QoS that allows longer jobs:

```bash
#SBATCH --qos=long
```

### How do I cancel a job?
```bash
scancel <jobid>   # Cancel one job
scancel --me      # Cancel all your jobs
```

### Can I run jobs that continue after I log out?
Yes, use batch jobs (sbatch) instead of interactive sessions. Batch jobs run independently of your terminal session.
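A minimal batch script sketch (job name, resources, module version, and script name are placeholders; adapt them to your analysis):

```bash
#!/bin/bash
#SBATCH --job-name=my_analysis   # placeholder name
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=01:00:00

module load R/4.5.3              # placeholder version
Rscript analysis.R               # keeps running after you log out
```

Submit it with `sbatch my_job.slurm`; Slurm runs it on a compute node regardless of whether your SSH session stays open.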
## R

### R can’t find my packages in a batch job

Make sure you:

1. Load the same R version you used when installing packages (on the head node, before submitting)
2. Install packages on the head node (which has internet)
```bash
# On the head node: load R, then submit
module load R/4.5.3   # Same version as when you installed
sbatch my_job.slurm
```

Slurm inherits your environment (including `PATH`) by default, so the R version you loaded before `sbatch` is the one used inside the job.
### How do I use multiple cores in R?
Request multiple cores and use future or mirai (recommended), or base R parallel:

```bash
salloc --cpus-per-task=8 --mem=16G --time=02:00:00
```

```r
library(future)
library(future.apply)

plan(multicore, workers = availableCores())
results <- future_lapply(data, my_function)
```

See R on the Cluster — Parallel R for full details including mirai and multi-node parallelism.
### Should I use renv?
Yes, if you care about reproducibility. renv creates isolated package environments per project, ensuring your analysis can be reproduced later or by collaborators.
Run `renv::restore()` on the head node (which has internet) to download and install packages, then submit your jobs normally.
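That workflow might look like this (the module version and job script name are placeholders):

```bash
# On the head node, inside your project directory:
module load R/4.5.3        # placeholder version
R -e 'renv::restore()'     # installs everything listed in renv.lock
sbatch my_job.slurm        # compute nodes then see the restored library
```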
### What is /localdisk and when should I use it?
Each node has a fast local SSD. When you run a Slurm job, a per-job scratch directory is automatically created at `/localdisk/slurm-<jobid>` and cleaned up when the job ends. The environment variables `TMPDIR` and `LOCALDISK_DIR` point to it.

Use local scratch for temporary files that benefit from fast I/O (intermediate results, caches). In R, `tempdir()` automatically uses it. See Cluster Resources for details.
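A sketch of the pattern (the workload and result path are illustrative; outside a Slurm job the snippet falls back to `mktemp` so you can try it anywhere):

```bash
#!/bin/bash
# Work in the per-job scratch directory; fall back to a temp dir
# when not running under Slurm.
SCRATCH="${LOCALDISK_DIR:-$(mktemp -d)}"
cd "$SCRATCH"

# Heavy intermediate I/O happens on the fast local disk...
seq 1 1000 > intermediate.txt
sort -rn intermediate.txt > sorted.txt

# ...and only the final result would be copied back to shared storage,
# e.g.: cp sorted.txt /srv/home/<user>/project/
wc -l < sorted.txt
```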
### How much memory / how many CPUs should I request?

Start small and check actual usage with sacct:

```bash
sacct -j <jobid> --format=JobID,Elapsed,MaxRSS,NCPUs
```

MaxRSS shows peak memory. Request ~20% more than the peak to be safe. For CPUs, only request more cores if your code actually uses them (e.g. via `parallel::mclapply` or `--workers` flags). Requesting more than you use wastes resources for everyone.
## Python

### Should I use conda or uv?
We recommend uv for most use cases – it’s faster and simpler. Use conda/micromamba if you need non-Python dependencies (like CUDA libraries) that are easier to install via conda.
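A typical uv workflow on the head node might look like this (project, dependency, and script names are placeholders; assumes uv is already installed):

```bash
# On the head node (internet access):
uv init my_analysis          # create a new project
cd my_analysis
uv add numpy                 # record and install a dependency
uv run python script.py      # run your script inside the project environment
```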
### My Python script can’t find modules
Make sure you’re running within your environment:

```bash
uv run python script.py
# or
source .venv/bin/activate && python script.py
```

## Getting help
### Something isn’t working. Who do I contact?
Contact Lukas (burk@leibniz-bips.de) with:
- What you were trying to do
- The exact commands you ran
- Any error messages
- Your job ID (if applicable)