From 29427b4b00667b8ec821217376a1093b0d9d1db4 Mon Sep 17 00:00:00 2001
From: Rodrigo Arias Mallo
Date: Wed, 4 Feb 2026 10:03:46 +0100
Subject: [PATCH 1/2] Update CUDA instructions in fox
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Ask users to clone the repository that contains the development shells
instead, so we can keep it easily updated.

Reviewed-by: Aleix Boné
---
 content/fox/_index.md | 69 ++++++++++++++++---------------------------
 1 file changed, 26 insertions(+), 43 deletions(-)

diff --git a/content/fox/_index.md b/content/fox/_index.md
index dedb9e1..c5ea9ba 100644
--- a/content/fox/_index.md
+++ b/content/fox/_index.md
@@ -46,55 +46,38 @@ Follow [these steps](/access) if you don't have access to apex or fox.
 
 ## CUDA
 
-To use CUDA, you can use the following `flake.nix` placed in a new directory to
-load all the required dependencies:
+To use CUDA, you'll need to load the NVIDIA `nvcc` compiler and some additional
+libraries in the environment. Clone the
+[following
+example](https://jungle.bsc.es/git/rarias/devshell/src/branch/main/cuda) and
+modify the `flake.nix` if you need additional packages.
 
-```nix
-{
-  inputs.jungle.url = "jungle";
+Then just run `nix develop` from the same directory to spawn a new shell with
+the CUDA environment:
 
-  outputs = { jungle, ... }: {
-    devShell.x86_64-linux = let
-      pkgs = jungle.nixosConfigurations.fox.pkgs;
-    in pkgs.mkShell {
-      name = "cuda-env-shell";
-      buildInputs = with pkgs; [
-        git gitRepo gnupg autoconf curl
-        procps gnumake util-linux m4 gperf unzip
+    fox% git clone https://jungle.bsc.es/git/rarias/devshell
 
-        # Cuda packages (more at https://search.nixos.org/packages)
-        cudatoolkit linuxPackages.nvidia_x11
-        cudaPackages.cuda_cudart.static
-        cudaPackages.libcusparse
+    fox% cd devshell/cuda
 
-        libGLU libGL
-        xorg.libXi xorg.libXmu freeglut
-        xorg.libXext xorg.libX11 xorg.libXv xorg.libXrandr zlib
-        ncurses5 stdenv.cc binutils
-      ];
-      shellHook = ''
-        export CUDA_PATH=${pkgs.cudatoolkit}
-        export LD_LIBRARY_PATH=/var/run/opengl-driver/lib
-        export SMS=50
-      '';
-    };
-  };
-}
-```
+    fox% nix develop
 
-Then just run `nix develop` from the same directory:
-
-    % mkdir cuda
-    % cd cuda
-    % vim flake.nix
-    [...]
-    % nix develop
-    $ nvcc -V
+    fox$ nvcc -V
     nvcc: NVIDIA (R) Cuda compiler driver
-    Copyright (c) 2005-2024 NVIDIA Corporation
-    Built on Tue_Feb_27_16:19:38_PST_2024
-    Cuda compilation tools, release 12.4, V12.4.99
-    Build cuda_12.4.r12.4/compiler.33961263_0
+    Copyright (c) 2005-2025 NVIDIA Corporation
+    Built on Fri_Feb_21_20:23:50_PST_2025
+    Cuda compilation tools, release 12.8, V12.8.93
+    Build cuda_12.8.r12.8/compiler.35583870_0
+
+    fox$ make
+    nvcc -ccbin g++ -m64 -Wno-deprecated-gpu-targets -o cudainfo cudainfo.cpp
+
+    fox$ ./cudainfo
+    ./cudainfo Starting...
+
+     CUDA Device Query (Runtime API) version (CUDART static linking)
+
+    Detected 2 CUDA Capable device(s)
+    ...
 
 ## AMD uProf
-- 
2.51.2


From 5f18335d14126d2fef134c0cd441771436f7dfa1 Mon Sep 17 00:00:00 2001
From: Rodrigo Arias Mallo
Date: Wed, 4 Feb 2026 10:05:05 +0100
Subject: [PATCH 2/2] Update SLURM instructions in fox
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We no longer need to use srun to enter the allocated machine. Also
document the default allocation time and show how to set it explicitly.
Reviewed-by: Aleix Boné --- content/fox/_index.md | 13 +++++-------- 1 file changed, 5 insertions(+), 8 deletions(-) diff --git a/content/fox/_index.md b/content/fox/_index.md index c5ea9ba..1af3980 100644 --- a/content/fox/_index.md +++ b/content/fox/_index.md @@ -22,19 +22,16 @@ the detailed specifications: ## Access To access the machine, request a SLURM session from [apex](/apex) using the `fox` -partition. If you need the machine for performance measurements, use an -exclusive reservation: +partition and set the time for the reservation (the default is 1 hour). If you +need the machine for performance measurements, use an exclusive reservation: - apex% salloc -p fox --exclusive + apex% salloc -p fox -t 02:00:00 --exclusive + fox% Otherwise, specify the CPUs that you need so other users can also use the node at the same time: - apex% salloc -p fox -c 8 - -Then use srun to execute an interactive shell: - - apex% srun --pty $SHELL + apex% salloc -p fox -t 02:00:00 -c 8 fox% Make sure you get all CPUs you expect: -- 2.51.2
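
A note on PATCH 1/2: the updated instructions now depend entirely on the
contents of the linked devshell repository. As a rough sketch of what its
`cuda/flake.nix` presumably provides, here is a minimal flake modeled on the
one removed by the patch; the actual file in the repository may differ or pull
in more packages:

```nix
{
  # Sketch only: modeled on the flake removed in PATCH 1/2. The real
  # devshell/cuda/flake.nix may list a different set of packages.
  inputs.jungle.url = "jungle";

  outputs = { jungle, ... }: {
    devShell.x86_64-linux = let
      # Reuse the nixpkgs instance that the fox configuration is built from,
      # so the CUDA userspace matches the NVIDIA driver on the node.
      pkgs = jungle.nixosConfigurations.fox.pkgs;
    in pkgs.mkShell {
      name = "cuda-env-shell";
      buildInputs = with pkgs; [
        gnumake stdenv.cc binutils
        # CUDA packages (more at https://search.nixos.org/packages)
        cudatoolkit linuxPackages.nvidia_x11
        cudaPackages.cuda_cudart.static
        cudaPackages.libcusparse
      ];
      shellHook = ''
        export CUDA_PATH=${pkgs.cudatoolkit}
        # Pick up the libraries provided by the host NVIDIA driver.
        export LD_LIBRARY_PATH=/var/run/opengl-driver/lib
      '';
    };
  };
}
```

Running `nix develop` in a directory holding a flake like this drops you into
a shell where `nvcc` and the CUDA runtime libraries are available, which is
what the transcript in the patch shows.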
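
Likewise for PATCH 2/2: since the session now lands directly on fox, the
allocation can be checked straight from the prompt. Assuming SLURM on fox
applies CPU binding, so that `nproc` only sees the allocated cores, a request
made with `-c 8` should report (hypothetical output):

    fox% nproc
    8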