Compare commits

2 Commits

5f18335d14 Update SLURM instructions in fox
We no longer need to use srun to enter the allocated machine. Make sure
that the default allocation time is also specified.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2026-02-04 12:42:03 +01:00
29427b4b00 Update CUDA instructions in fox
Ask users to clone the repository with the development shells instead,
so we can keep the repository easily updated.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2026-02-04 12:41:58 +01:00


@@ -22,19 +22,16 @@ the detailed specifications:
 ## Access
 To access the machine, request a SLURM session from [apex](/apex) using the `fox`
-partition. If you need the machine for performance measurements, use an
-exclusive reservation:
-apex% salloc -p fox --exclusive
+partition and set the time for the reservation (the default is 1 hour). If you
+need the machine for performance measurements, use an exclusive reservation:
+apex% salloc -p fox -t 02:00:00 --exclusive
+fox%
 Otherwise, specify the CPUs that you need so other users can also use the node
 at the same time:
-apex% salloc -p fox -c 8
-Then use srun to execute an interactive shell:
-apex% srun --pty $SHELL
+apex% salloc -p fox -t 02:00:00 -c 8
 fox%
 Make sure you get all CPUs you expect:
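
The command the page uses for that CPU check falls outside this hunk; as a rough sketch, assuming standard Linux tools on the allocated node, the allocation can be verified like this (the 8-CPU output is illustrative):

```
fox% nproc                                     # CPUs visible to this shell
8
fox% grep Cpus_allowed_list /proc/self/status  # CPU affinity set by SLURM
Cpus_allowed_list: 0-7
```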
@@ -46,55 +43,38 @@ Follow [these steps](/access) if you don't have access to apex or fox.
 ## CUDA
-To use CUDA, you can use the following `flake.nix` placed in a new directory to
-load all the required dependencies:
-```nix
-{
-  inputs.jungle.url = "jungle";
-  outputs = { jungle, ... }: {
-    devShell.x86_64-linux = let
-      pkgs = jungle.nixosConfigurations.fox.pkgs;
-    in pkgs.mkShell {
-      name = "cuda-env-shell";
-      buildInputs = with pkgs; [
-        git gitRepo gnupg autoconf curl
-        procps gnumake util-linux m4 gperf unzip
-        # Cuda packages (more at https://search.nixos.org/packages)
-        cudatoolkit linuxPackages.nvidia_x11
-        cudaPackages.cuda_cudart.static
-        cudaPackages.libcusparse
-        libGLU libGL
-        xorg.libXi xorg.libXmu freeglut
-        xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
-        ncurses5 stdenv.cc binutils
-      ];
-      shellHook = ''
-        export CUDA_PATH=${pkgs.cudatoolkit}
-        export LD_LIBRARY_PATH=/var/run/opengl-driver/lib
-        export SMS=50
-      '';
-    };
-  };
-}
-```
-Then just run `nix develop` from the same directory:
-% mkdir cuda
-% cd cuda
-% vim flake.nix
-[...]
-% nix develop
-$ nvcc -V
+To use CUDA you'll need to load the NVIDIA `nvcc` compiler and some additional
+libraries in the environment. Clone the [following
+example](https://jungle.bsc.es/git/rarias/devshell/src/branch/main/cuda) and
+modify the `flake.nix` if needed to add additional packages.
+Then just run `nix develop` from the same directory to spawn a new shell with
+the CUDA environment:
+fox% git clone https://jungle.bsc.es/git/rarias/devshell
+fox% cd devshell/cuda
+fox% nix develop
+fox$ nvcc -V
 nvcc: NVIDIA (R) Cuda compiler driver
-Copyright (c) 2005-2024 NVIDIA Corporation
-Built on Tue_Feb_27_16:19:38_PST_2024
-Cuda compilation tools, release 12.4, V12.4.99
-Build cuda_12.4.r12.4/compiler.33961263_0
+Copyright (c) 2005-2025 NVIDIA Corporation
+Built on Fri_Feb_21_20:23:50_PST_2025
+Cuda compilation tools, release 12.8, V12.8.93
+Build cuda_12.8.r12.8/compiler.35583870_0
+fox$ make
+nvcc -ccbin g++ -m64 -Wno-deprecated-gpu-targets -o cudainfo cudainfo.cpp
+fox$ ./cudainfo
+./cudainfo Starting...
+CUDA Device Query (Runtime API) version (CUDART static linking)
+Detected 2 CUDA Capable device(s)
+...
 ## AMD uProf
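
As a follow-up to the `cudainfo` run added above, a quick way to sanity-check the new CUDA shell is to compile a trivial kernel of your own. This is only a sketch: the `hello.cu` file is hypothetical and assumes nothing beyond the `nvcc` compiler shown in the diff.

```
fox$ cat > hello.cu <<'EOF'
#include <cstdio>
// Trivial kernel that prints from the device.
__global__ void hello() { printf("hello from the GPU\n"); }
int main() {
    hello<<<1, 1>>>();          // launch a single thread
    cudaDeviceSynchronize();    // flush device-side printf before exiting
    return 0;
}
EOF
fox$ nvcc -o hello hello.cu
fox$ ./hello
hello from the GPU
```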