3 Commits

Author SHA1 Message Date
a729e570f2 Add guide on using nix in marenostrum 5 2026-03-05 16:48:51 +01:00
5f18335d14 Update SLURM instructions in fox
We no longer need to use srun to enter the allocated machine. Make sure
that the default allocation time is also specified.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2026-02-04 12:42:03 +01:00
29427b4b00 Update CUDA instructions in fox
Ask users to clone the repository with the development shells instead,
so we can keep the repository easily updated.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2026-02-04 12:41:58 +01:00
2 changed files with 178 additions and 51 deletions

content/doc/mn5.md (new file, 147 additions)

@@ -0,0 +1,147 @@
---
title: "Using nix in marenostrum"
description: "How to use nix-portable to run nix on marenostrum without privileges"
date: 2026-03-04
---
# Obtaining nix-portable
[nix-portable][1] provides a static nix with a virtualised `/nix/store` that
allows running `nix` without root.
There is a version already installed in `/gpfs/projects/bsc15/nix-portable/bin`;
you can use that and skip to [Set up](#set-up).
If you want to obtain it yourself, follow the instructions on [nix-portable][1],
summarized below:
```bash
curl -L https://github.com/DavHau/nix-portable/releases/latest/download/nix-portable-$(uname -m) > ./nix-portable
chmod +x ./nix-portable
ln -s nix-portable nix
ln -s nix-portable nix-build
ln -s nix-portable nix-channel
ln -s nix-portable nix-collect-garbage
ln -s nix-portable nix-copy-closure
ln -s nix-portable nix-daemon
ln -s nix-portable nix-env
ln -s nix-portable nix-hash
ln -s nix-portable nix-instantiate
ln -s nix-portable nix-prefetch-url
ln -s nix-portable nix-shell
ln -s nix-portable nix-store
```
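The same set of symlinks can also be created with a loop, which is easier to
keep in a setup script (equivalent to the `ln` commands above):

```shell
# Create one symlink to nix-portable per nix tool (same set as above).
# ln -sf creates the link even if nix-portable is downloaded afterwards.
for cmd in nix nix-build nix-channel nix-collect-garbage nix-copy-closure \
           nix-daemon nix-env nix-hash nix-instantiate nix-prefetch-url \
           nix-shell nix-store; do
    ln -sf nix-portable "$cmd"
done
```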
# Set up
Add `nix-portable` and the symlinks to your `$PATH`. The default virtualisation
method does not work, so you must set `NP_RUNTIME` to `bwrap` to override it;
otherwise you will get an error when setting up the namespace.
Optionally, you can set `NP_LOCATION` to change the location of your `/nix/store`,
which by default will be at `$HOME/.nix-portable`:
```bash
export PATH="$PATH:/gpfs/projects/bsc15/nix-portable/bin" # or the path of your install
export NP_RUNTIME=bwrap
export NP_LOCATION="$HOME" # defaults to $HOME if not set
```
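To avoid exporting these variables in every session, you can append them to
your shell profile (a sketch assuming bash and the shared install path shown
above; adjust both to your setup):

```shell
# Persist the nix-portable environment across login sessions.
cat >> "$HOME/.bashrc" <<'EOF'
export PATH="$PATH:/gpfs/projects/bsc15/nix-portable/bin"
export NP_RUNTIME=bwrap
export NP_LOCATION="$HOME"
EOF
```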
## Configuring nix
After its first run, `nix-portable` will download and populate a local
`/nix/store` along with `bwrap`, `busybox` and all the other tools it needs.
These files are located inside `$NP_LOCATION/.nix-portable`, with the nix store
in `$NP_LOCATION/.nix-portable/nix` and the nix configuration file (see
`man nix.conf`) in `$NP_LOCATION/.nix-portable/conf/nix.conf`.
When using jungle, we recommend adding our substituter to `nix.conf` with:
```ini
extra-substituters = https://jungle.bsc.es/cache
extra-trusted-public-keys = jungle.bsc.es:pEc7MlAT0HEwLQYPtpkPLwRsGf80ZI26aj29zMw/HH0=
```
See [hut#binary-cache][2] for more details.
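If you prefer to script this step, the two lines can be appended with a heredoc
(the path assumes the default `NP_LOCATION`, i.e. `$HOME`; adjust it if you
changed the location):

```shell
# Append the jungle substituter settings to the nix-portable config.
conf="$HOME/.nix-portable/conf/nix.conf"
mkdir -p "$(dirname "$conf")"
cat >> "$conf" <<'EOF'
extra-substituters = https://jungle.bsc.es/cache
extra-trusted-public-keys = jungle.bsc.es:pEc7MlAT0HEwLQYPtpkPLwRsGf80ZI26aj29zMw/HH0=
EOF
```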
Additionally, you can add a registry entry for jungle:
```bash
nix registry add jungle git+https://jungle.bsc.es/git/rarias/jungle
```
This should allow running builds with: `nix build jungle#<package>`.
**NOTE:** This does not pin jungle to any commit, and it may move once
the repository changes. To have proper reproducible builds, use [flakes][3].
# Building and Running
If everything has gone well, you should now be able to use nix in marenostrum,
provided your node has internet access.
```bash
nix build nixpkgs#hello
```
Keep in mind that the resulting symlink will be broken, since it requires the
`nix-portable` virtualised filesystem to run:
```console
$ file result
result: broken symbolic link to /nix/store/8qi947kixhz1nw83dkwxm6d0wndprqkj-hello-2.12.2
```
You will have to either use `nix run` to run the binary through nix, or enter
a shell with `nix shell` or `nix develop`, where `/nix/store` will be available:
```console
$ nix run nixpkgs#hello
Hello, world!
$ nix shell nixpkgs#hello
bash-5.1$ hello
Hello, world!
bash-5.1$ exit
$ nix run nixpkgs#bashInteractive
[user@glogin4 ~]$ readlink -f result
/nix/store/8qi947kixhz1nw83dkwxm6d0wndprqkj-hello-2.12.2
[user@glogin4 ~]$ ./result/bin/hello
Hello, world!
[user@glogin4 ~]$ exit
```
# Transferring derivations
You can transfer derivations between your local machine and marenostrum. To
check that communication works, use `nix store info`:
```console
$ nix store info --store ssh-ng://<user>@glogin1.bsc.es
Store URL: ssh://<user>@glogin1.bsc.es
Version: 2.20.6
Trusted: 1
```
Then, you can send derivations between mn5 and another nix machine through ssh
with:
```bash
nix copy --to ssh-ng://<user>@glogin1.bsc.es jungle#ovni
nix copy --from ssh-ng://<user>@glogin1.bsc.es /nix/store/<path>
```
Note that when copying *from* mn5, you must provide the full path in the nix
store.
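One way to obtain that full path is to print it on mn5 when building, then pull
it from the other machine (a sketch; `<user>` and the `jungle#ovni` output are
placeholders, substitute your own):

```bash
# On mn5: print the full store path of the build result.
nix build jungle#ovni --print-out-paths

# On your local machine: pull that exact path over ssh.
nix copy --from ssh-ng://<user>@glogin1.bsc.es /nix/store/<path>
```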
# Known issues
- `builtins.fetchGit` is currently broken due to permission issues with the ssh
configuration files.
[1]: https://github.com/DavHau/nix-portable
[2]: /hut/#binary-cache
[3]: /doc/quickstart/#creating-a-flakenix

@@ -22,19 +22,16 @@ the detailed specifications:
 ## Access
 
 To access the machine, request a SLURM session from [apex](/apex) using the `fox`
-partition. If you need the machine for performance measurements, use an
-exclusive reservation:
+partition and set the time for the reservation (the default is 1 hour). If you
+need the machine for performance measurements, use an exclusive reservation:
 
-    apex% salloc -p fox --exclusive
+    apex% salloc -p fox -t 02:00:00 --exclusive
     fox%
 
 Otherwise, specify the CPUs that you need so other users can also use the node
 at the same time:
 
-    apex% salloc -p fox -c 8
-
-Then use srun to execute an interactive shell:
-
-    apex% srun --pty $SHELL
+    apex% salloc -p fox -t 02:00:00 -c 8
     fox%
 
 Make sure you get all CPUs you expect:
@@ -46,55 +43,38 @@ Follow [these steps](/access) if you don't have access to apex or fox.
 ## CUDA
 
-To use CUDA, you can use the following `flake.nix` placed in a new directory to
-load all the required dependencies:
+To use CUDA you'll need to load the NVIDIA `nvcc` compiler and some additional
+libraries in the environment. Clone the
+[following example](https://jungle.bsc.es/git/rarias/devshell/src/branch/main/cuda)
+and modify the `flake.nix` if needed to add additional packages.
 
-```nix
-{
-  inputs.jungle.url = "jungle";
-
-  outputs = { jungle, ... }: {
-    devShell.x86_64-linux = let
-      pkgs = jungle.nixosConfigurations.fox.pkgs;
-    in pkgs.mkShell {
-      name = "cuda-env-shell";
-      buildInputs = with pkgs; [
-        git gitRepo gnupg autoconf curl
-        procps gnumake util-linux m4 gperf unzip
-
-        # Cuda packages (more at https://search.nixos.org/packages)
-        cudatoolkit linuxPackages.nvidia_x11
-        cudaPackages.cuda_cudart.static
-        cudaPackages.libcusparse
-
-        libGLU libGL
-        xorg.libXi xorg.libXmu freeglut
-        xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
-        ncurses5 stdenv.cc binutils
-      ];
-      shellHook = ''
-        export CUDA_PATH=${pkgs.cudatoolkit}
-        export LD_LIBRARY_PATH=/var/run/opengl-driver/lib
-        export SMS=50
-      '';
-    };
-  };
-}
-```
+    fox% git clone https://jungle.bsc.es/git/rarias/devshell
+    fox% cd devshell/cuda
 
-Then just run `nix develop` from the same directory:
+Then just run `nix develop` from the same directory to spawn a new shell with
+the CUDA environment:
 
-    % mkdir cuda
-    % cd cuda
-    % vim flake.nix
-    [...]
-    % nix develop
-    $ nvcc -V
+    fox% nix develop
+    fox$ nvcc -V
     nvcc: NVIDIA (R) Cuda compiler driver
-    Copyright (c) 2005-2024 NVIDIA Corporation
-    Built on Tue_Feb_27_16:19:38_PST_2024
-    Cuda compilation tools, release 12.4, V12.4.99
-    Build cuda_12.4.r12.4/compiler.33961263_0
+    Copyright (c) 2005-2025 NVIDIA Corporation
+    Built on Fri_Feb_21_20:23:50_PST_2025
+    Cuda compilation tools, release 12.8, V12.8.93
+    Build cuda_12.8.r12.8/compiler.35583870_0
+    fox$ make
+    nvcc -ccbin g++ -m64 -Wno-deprecated-gpu-targets -o cudainfo cudainfo.cpp
+    fox$ ./cudainfo
+    ./cudainfo Starting...
+    CUDA Device Query (Runtime API) version (CUDART static linking)
+    Detected 2 CUDA Capable device(s)
+    ...
 
 ## AMD uProf