Compare commits


381 Commits

Author SHA1 Message Date
e3b9c08748 Monitor GPFS scratch partition 2025-07-11 10:20:08 +02:00
c94e6fa497 Monitor tent node 2025-07-11 10:20:08 +02:00
ba8e6e1888 Monitor GPFS home and projects partitions 2025-07-11 10:20:08 +02:00
3d97fada6d Add pmartin1 user with access to fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-03 11:16:43 +02:00
d1a2bfc90e Add access to fox for rpenacob user
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 16:58:53 +02:00
44e76ce630 Revert "Only allow Vincent to access fox for now"
This reverts commit efac36b186efe6c3814278ae0a284ae346ff9d83.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 16:58:49 +02:00
adb7e0ef35 Add all terminfo files in environment
Fixes problems with the kitty terminal when opening vim or kakoune.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2025-07-02 16:02:45 +02:00
b0875816f2 Monitor Fox BMC with ICMP probes too
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:51:22 +02:00
592da155a9 Restrict DAC VPN to fox-ipmi machine only
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:51:19 +02:00
5376613ec4 Monitor fox via VPN
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:51:16 +02:00
74891f0784 Add OpenVPN service to connect to fox BMC
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:51:13 +02:00
d66f9f21dd Add ac.upc.edu as name search server
Allows referring to fox.ac.upc.edu directly as fox.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:51:09 +02:00
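
A minimal NixOS sketch of such a search domain, assuming it is set in the
fox host configuration:

```
{
  # Append ac.upc.edu to the DNS search list so "fox" resolves to fox.ac.upc.edu
  networking.search = [ "ac.upc.edu" ];
}
```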
cb05482b4f Update access instructions
We no longer need to file a request through BSC, as we will be in
charge of the logins. Also remove the link to the old repository and
prefer email only.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:24:51 +02:00
e660268661 Disable kptr_restrict in fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:08:42 +02:00
d45b7ea717 Disable NUMA balancing in fox
See: https://www.kernel.org/doc/html/latest/admin-guide/sysctl/kernel.html#numa-balancing

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:08:02 +02:00
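
A sketch of how NUMA balancing could be disabled from a NixOS module,
assuming it is done through sysctl:

```
{
  # Disable automatic NUMA balancing (kernel.numa_balancing sysctl)
  boot.kernel.sysctl."kernel.numa_balancing" = 0;
}
```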
c205fa4e34 Load amd_uncore module in fox
Needed for L3 events in perf.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:07:58 +02:00
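
A sketch, assuming the module is loaded at boot from the host configuration:

```
{
  # Load the AMD uncore PMU driver so perf can see L3/uncore events
  boot.kernelModules = [ "amd_uncore" ];
}
```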
5f055388a5 Enable SSH X11 forwarding
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-07-02 15:07:54 +02:00
0bc69789d9 Disable registration in Gitea
Get rid of all the spam accounts that keep being registered.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:18 +02:00
09bc9d9c25 Enable msmtp configuration in tent
Allows gitea to send notifications via email.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:15 +02:00
6b53ab4413 Add GitLab runner with debian docker for PM
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:13 +02:00
4618a149b3 Monitor nix-daemon in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:11 +02:00
448d85ef9d Move nix-daemon exporter to modules
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:09 +02:00
956b99f02a Add p service for pastes
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:07 +02:00
ec2eb8c3ed Enable public-inbox service in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:06 +02:00
09a5bdfbe4 Enable gitea in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:04 +02:00
c49dd15303 Add bsc.es to resolve domain names
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:02 +02:00
38fd0eefa3 Monitor AXLE machine too
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:36:00 +02:00
e386a320ff Use IPv4 for blackbox exporter
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:59 +02:00
5ea8d6a6dd Add public html files to tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:57 +02:00
7b108431dc Add docker GitLab runner for BSC GitLab
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:55 +02:00
e80b4d7c31 Add GitLab shell runner in tent for PM
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:54 +02:00
e4c22e91b2 Enable jungle robot emails for Grafana in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:52 +02:00
27d4f4f272 Add tent key for nix-serve
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:50 +02:00
978087e53a Remove jungle nix cache from tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:48 +02:00
ad9a5bc906 Enable nix cache
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:47 +02:00
7aeb78426e Serve Grafana from subpath
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:45 +02:00
a0d1b31bb6 Add nginx server in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:43 +02:00
a7775f9a8d Add monitoring in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-18 15:35:00 +02:00
7bb11611a8 Disable nix garbage collector in tent
Reviewed-by: Aleix Boné <abonerib@bsc.es>
Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2025-06-11 16:05:05 +02:00
cf9bcc27e0 Rekey secrets with tent keys
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:20 +02:00
81073540b0 Add tent host key and admin keys
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:16 +02:00
a43f856b53 Create directories in /vault/home for tent users
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:12 +02:00
be231b6d2d Add software RAID in tent using 3 disks
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:10 +02:00
2f2381ad0f Add access to tent to all hut users too
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:06 +02:00
19e90a1ef7 Add hut SSH configuration from outside SSF LAN
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:04 +02:00
090100f180 Don't use proxy in base preset
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:04:00 +02:00
3d48d224c9 Add tent machine from xeon04
We moved the tent machine to the server room in the BSC building, and it
is now directly connected to raccoon via NAT.

Fixes: #106
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:03:54 +02:00
0317f42613 Create specific SSF rack configuration
Allow xeon machines to optionally inherit SSF configuration such as the
NFS mount point and the network configuration.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 16:03:49 +02:00
efac36b186 Only allow Vincent to access fox for now
Needed to run benchmarks without interference.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 12:08:57 +02:00
d2385ac639 Use performance governor in fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 12:08:55 +02:00
d28ed0ab69 Add hut as nix cache in fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 12:08:51 +02:00
1ef6f9a2bb Use extra- for substituters and trusted-public-keys
From the nix manual:

> A configuration setting usually overrides any previous value. However,
> for settings that take a list of items, you can prefix the name of the
> setting by extra- to append to the previous value.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2025-06-11 11:27:37 +02:00
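
A sketch of the resulting settings, assuming they go through nix.settings;
the cache URL and public key below are hypothetical placeholders:

```
{
  nix.settings = {
    # extra- appends to the previous value instead of replacing it
    extra-substituters = [ "http://hut/cache" ];       # hypothetical cache URL
    extra-trusted-public-keys = [ "hut:example=" ];    # hypothetical key
  };
}
```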
86b7032bbb Use DHCP for Ethernet in fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 10:24:53 +02:00
8c5f4defd7 Use UPC time servers as others are blocked
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-11 10:24:47 +02:00
b802a59868 Create tracing group and add arocanon in raccoon
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 11:09:41 +02:00
7247f7e665 Extend perf support in raccoon
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 11:09:30 +02:00
1d555871a5 Enable nixdebuginfod in raccoon
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:50:01 +02:00
a2535c996d Make raccoon use performance governor
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:35 +02:00
37e60afb54 Enable binfmt emulation in raccoon
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:33 +02:00
3fe138a418 Disable nix garbage collector in raccoon
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:31 +02:00
4e7a9f7ce4 Add dbautist user to raccoon machine
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:28 +02:00
a6a1af673a Add node exporter monitoring in raccoon
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:26 +02:00
2a3a7b2fb2 Allow X11 forwarding via SSH
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:23 +02:00
b4ab1c836a Enable linger for user rarias
Allows services to run without a login session.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:45:19 +02:00
fb8b4defa7 Only proxy SSH git remotes via hut in xeon
Other machines like raccoon have direct access.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-06-03 10:44:31 +02:00
1bcfbf8cd6 Add machine map file
Documents the location, board and serial numbers so we can track the
machines if they move around. Some information is unknown.

Using the Nix language to encode the machines location and properties
allows us to later use that information in the configuration of the
machines themselves.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 14:55:58 +02:00
9f43a0e13b Remove fox monitoring via IPMI
We will need to set up a VPN to be able to access fox in its new
location, so for now we simply remove the IPMI monitoring.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:53 +02:00
3a3c3050ef Monitor fox, gateway and UPC anella via ICMP
Fox should reply once the machine is connected to the UPC network. Also
monitoring the gateway and the UPC anella lets us tell whether the whole
network is down or just fox.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:51 +02:00
4419f68948 Update configuration for UPC network
The fox machine will be placed in the UPC network, so we update the
configuration with the new IP and gateway. We won't be able to reach hut
directly so we also remove the host entry and proxy.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:48 +02:00
e51fc9ffa5 Disable home via NFS in fox
It won't be accessible anymore as we won't be in the same LAN.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:46 +02:00
2ae9e9b635 Rekey all secrets
Fox is no longer able to use munge or ceph, so we remove its key and
rekey the secrets.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:44 +02:00
be77f6a5f5 Rotate fox SSH host key
Prevent decrypting old secrets by reading the git history.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:42 +02:00
6316a12a67 Distrust fox SSH key
We will no longer share secrets with fox until we can regain our trust.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:38 +02:00
db663913d8 Remove Ceph module from fox
It will no longer be accessible from the UPC.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:36 +02:00
b4846b0f6c Remove fox from SLURM
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:20 +02:00
64a52801ed Remove pam_slurm_adopt from fox
We will no longer be able to use SLURM from jungle.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-06-02 11:26:02 +02:00
7a2f37aaa2 Add UPC temperature sensor monitoring
These sensors are part of their air quality measurements, which just
happen to be very close to our server room.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-05-29 13:01:37 +02:00
aae6585f66 Add meteocat exporter
Allows us to track ambient temperature changes and estimate the
temperature delta between the server room and exterior temperature.
We should be able to predict when we will need to stop the machines due
to excessive temperature as summer approaches.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-05-29 13:01:29 +02:00
1c15e77c83 Add custom nix-daemon exporter
Allows us to see which derivations are being built in real time. It is a
bit of a hack, but it seems to work. We simply look at the environment
of the child processes of nix-daemon (usually bash) and then look for
the $name variable which should hold the current derivation being
built. Needs root to be able to read the environ file of the different
nix-daemon processes as they are owned by the nixbld* users.

See: https://discourse.nixos.org/t/query-ongoing-builds/23486
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-05-29 12:57:07 +02:00
82fc3209de Set keep-outputs to true in all machines
From the documentation of keep-outputs, setting it to true would prevent
the GC from removing build time dependencies:

If true, the garbage collector will keep the outputs of non-garbage
derivations. If false (default), outputs will be deleted unless they are
GC roots themselves (or reachable from other roots).

In general, outputs must be registered as roots separately. However,
even if the output of a derivation is registered as a root, the
collector will still delete store paths that are used only at build time
(e.g., the C compiler, or source tarballs downloaded from the network).
To prevent it from doing so, set this option to true.

See: https://nix.dev/manual/nix/2.24/command-ref/conf-file.html#conf-keep-outputs
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2025-04-22 17:27:37 +02:00
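
In a NixOS module this amounts to a one-line setting; a minimal sketch:

```
{
  # Keep build-time dependencies of live outputs out of the garbage collector
  nix.settings.keep-outputs = true;
}
```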
abeab18270 Add raccoon node exporter monitoring
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-22 14:50:08 +02:00
1985b58619 Increase data retention to 5 years
Now that we have more space, we can extend the retention time to 5 years
to hold the monitoring metrics. For a year we have:

	# du -sh /var/lib/prometheus2
	13G     /var/lib/prometheus2

So we can expect it to increase to about 65 GiB. In the future we may
want to reduce the acquisition frequency of some metrics.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-22 14:50:03 +02:00
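
A sketch of the corresponding setting, assuming the stock NixOS Prometheus
module is used:

```
{
  # Keep monitoring metrics for 5 years
  services.prometheus.retentionTime = "5y";
}
```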
44bd061823 Don't forward any docker traffic
Access to the 23080 local port will be done by applying the INPUT rules,
which pass through nixos-fw.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-15 14:16:15 +02:00
e8c309f584 Allow traffic from docker to enter port 23080
Before:

  hut% sudo docker run -it --rm alpine /bin/ash -xc 'true | nc -w 3 -v 10.0.40.7 23080'
  + true
  + nc -w 3 -v 10.0.40.7 23080
  nc: 10.0.40.7 (10.0.40.7:23080): Operation timed out

After:

  hut% sudo docker run -it --rm alpine /bin/ash -xc 'true | nc -w 3 -v 10.0.40.7 23080'
  + true
  + nc -w 3 -v 10.0.40.7 23080
  10.0.40.7 (10.0.40.7:23080) open

Fixes: #94
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-15 14:16:10 +02:00
71ae7fb585 Add bscpm04.bsc.es SSH host and public key
Allows fetching repositories from hut and other machines in jungle
without the need to do any extra configuration.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-15 14:15:45 +02:00
8834d561d2 Add nix cache documentation section
Include usage from NixOS and non-NixOS hosts and a test with curl to
ensure it can be reached.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2025-04-15 14:08:22 +02:00
29daa3c364 Use hut nix cache in owl1, owl2 and raccoon
owl1 and owl2 connect directly to hut over the LAN via HTTP, while
raccoon goes through the proxy at jungle.bsc.es via HTTPS. There is no
risk of tampering as packages are signed.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2025-04-15 14:08:17 +02:00
9c503fbefb Clean all iptables rules on stop
Prevents the "iptables: Chain already exists." error by making sure that
we don't leave any chain on start. The ideal solution is to use
iptables-restore instead, which will do the right job. But this needs to
be changed in NixOS entirely.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-15 14:08:14 +02:00
51b6a8b612 Make nginx listen on all interfaces
Needed for local hosts to contact the nix cache via HTTP directly.
We also allow the incoming traffic on port 80.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-15 14:08:07 +02:00
52213d388d Fix nginx /cache regex
`nix-serve` does not handle duplicated slashes in the path:
```
hut$ curl http://127.0.0.1:5000/nix-cache-info
StoreDir: /nix/store
WantMassQuery: 1
Priority: 30
hut$ curl http://127.0.0.1:5000//nix-cache-info
File not found.
```

This meant that the cache was not accessible via:
`curl https://jungle.bsc.es/cache/nix-cache-info` but
`curl https://jungle.bsc.es/cachenix-cache-info` worked.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2025-04-15 14:08:04 +02:00
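
One way to express such a rule is a regex location that strips the /cache
prefix with a capture; a sketch, with the host name and nix-serve port
assumed from the surrounding commits:

```
{
  services.nginx.virtualHosts."jungle.bsc.es".locations."~ ^/cache(/.*)$" = {
    # Forward /cache/<path> as /<path>, avoiding a duplicated slash
    proxyPass = "http://127.0.0.1:5000$1";
  };
}
```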
edf744db8d Add new GitLab runner for gitlab.bsc.es
It uses Docker based on Alpine together with the host nix store, so we
can perform builds while isolating them from the system.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:41:18 +02:00
b82894eaec Remove SLURM partition all
We no longer have homogeneous nodes so it doesn't make much sense to
allocate a mix of them.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:27 +02:00
1c47199891 Add varcila user to hut and fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:25 +02:00
8738bd4eeb Adjust fox slurm config after disabling SMT
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:23 +02:00
7699783aac Add abonerib user to fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:21 +02:00
fee1d4da7e Don't move doc in web output
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:19 +02:00
b77ce7fb56 Add quickstart guide
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:17 +02:00
b4a12625c5 Reject SSH connections without SLURM allocation
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:15 +02:00
302106ea9a Add users to fox
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:13 +02:00
96877de8d9 Add dalvare1 user
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:11 +02:00
8878985be6 Add fox page in jungle website
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:08 +02:00
737578db34 Mount NVME disks in /nvme{0,1}
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:06 +02:00
88555e3f8c Exclude fox from being suspended by slurm
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:04 +02:00
feb2060be7 Use IPMI host names instead of IP addresses
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:15:01 +02:00
00999434c2 Add fox IPMI monitoring
Use agenix to store the credentials safely.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:14:59 +02:00
29d58cc62d Add new fox machine
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-04-08 17:14:42 +02:00
587caf262e Update PM GitLab tokens to new URL
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 15:43:13 +01:00
2730404ca5 Fix MPICH build by fetching upstream patches too
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 15:43:13 +01:00
84db5e6fd6 Fix papermod theme in website for new hugo
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 15:43:13 +01:00
f4f34a3159 flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/de96bd907d5fbc3b14fc33ad37d1b9a3cb15edc6' (2024-07-09)
  → 'github:ryantm/agenix/f6291c5935fdc4e0bef208cfc0dcab7e3f7a1c41' (2024-08-10)
• Updated input 'bscpkgs':
    'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=de89197a4a7b162db7df9d41c9d07759d87c5709' (2024-04-24)
  → 'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=6782fc6c5b5a29e84a7f2c2d1064f4bcb1288c0f' (2024-11-29)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/693bc46d169f5af9c992095736e82c3488bf7dbb' (2024-07-14)
  → 'github:NixOS/nixpkgs/9c6b49aeac36e2ed73a8c472f1546f6d9cf1addc' (2025-01-14)

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 15:43:13 +01:00
91b8b4a3c5 Set nixpkgs to track nixos-24.11
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 15:43:13 +01:00
6cad205269 Add script to monitor GPFS
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 15:43:07 +01:00
c57bf76969 Add BSC machines to ssh config
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:51 +01:00
ad4b615211 Collect statistics from logged users
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:48 +01:00
b4518b59cf Add custom GPFS exporter for MN5
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:46 +01:00
45dc4124a3 Remove exception to fetch task endpoint
It causes the request to go to the website rather than the Gitea
service.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:43 +01:00
bdfe9a48fd Use SSD for boot, then switch to NVME
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:40 +01:00
1b337d31f8 Use NVME as root
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:37 +01:00
717cd5a21e Keep host header for Grafana requests
Otherwise requests were breaking due to the CSRF check.

See: https://github.com/grafana/grafana/issues/45117#issuecomment-1033842787
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:32 +01:00
def5955614 Ignore logging requests from the gitea runner
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:28 +01:00
0e3c975cb5 Log the client IP not the proxy
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:22 +01:00
93189a575e Ignore misc directory
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:19 +01:00
36592c44eb Create paste directories in /ceph/p
Ensure that all hut users have a paste directory in /ceph/p owned by
themselves. We need to wait for the ceph mount point before creating
them, so we use a systemd service that waits for remote-fs.target.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:16 +01:00
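
A sketch of such a oneshot unit; the unit name and user list below are
hypothetical:

```
{
  systemd.services.create-paste-dirs = {
    description = "Create per-user paste directories in /ceph/p";
    # Wait until remote filesystems (the ceph mount) are available
    after = [ "remote-fs.target" ];
    wants = [ "remote-fs.target" ];
    wantedBy = [ "multi-user.target" ];
    serviceConfig.Type = "oneshot";
    script = ''
      for u in alice bob; do    # hypothetical users
        install -d -o "$u" -g users -m 0755 /ceph/p/"$u"
      done
    '';
  };
}
```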
a34e3752a2 Add paste documentation in jungle website
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:13 +01:00
0d2dea94fb Add p command to paste files
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:10 +01:00
7f539d7e06 Use nginx to serve website and other services
Instead of using multiple tunnels to forward all our services to the VM
that serves jungle.bsc.es, just use nginx to redirect the traffic from
hut. This allows adding custom rules for paths that are not possible
otherwise.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:23:07 +01:00
f8ec090836 Mount the NVME disk in /nvme
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2025-01-16 14:22:58 +01:00
9a9161fc55 Delay nix-gc until /home is mounted
Prevents starting the garbage collector before the remote filesystems
are mounted, in particular /home. Otherwise, all the gcroots that have
symlinks in /home would be considered stale and removed.

See: #79
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-09-20 09:45:30 +02:00
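
A sketch of the ordering dependency, assuming it is merged into the stock
nix-gc unit:

```
{
  # Don't start the GC until remote filesystems such as /home are mounted
  systemd.services.nix-gc.after = [ "remote-fs.target" ];
  systemd.services.nix-gc.requires = [ "remote-fs.target" ];
}
```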
1a0cf96fc4 Add dbautist user with access to hut
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-09-20 09:42:02 +02:00
4bd1648074 Set the serial console to ttyS1 in raccoon
Apparently the ttyS0 console doesn't exist but ttyS1 does:

  raccoon% sudo stty -F /dev/ttyS0
  stty: /dev/ttyS0: Input/output error
  raccoon% sudo stty -F /dev/ttyS1
  speed 9600 baud; line = 0;
  -brkint -imaxbel

The dmesg line agrees:

  00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A

The console configuration is then moved from base to xeon to allow
changing it for the raccoon machine.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:56 +02:00
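
A sketch of the kernel console parameter in the raccoon configuration; the
baud rate and line settings are assumed:

```
{
  # Use the second serial port for the kernel console
  boot.kernelParams = [ "console=ttyS1,115200n8" ];
}
```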
15b114ffd6 Remove setLdLibraryPath and driSupport options
They have been removed from NixOS. The "hardware.opengl" option group has
been renamed to "hardware.graphics".

See: 98cef4c273
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:53 +02:00
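
A sketch of the renamed option, assuming only basic graphics support is
needed:

```
{
  # Replaces the former hardware.opengl.* options; driSupport and
  # setLdLibraryPath no longer exist
  hardware.graphics.enable = true;
}
```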
dd6d8c9735 Add documentation section about GRUB chain loading
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:47 +02:00
e15a3867d4 Add 10 min shutdown jitter to avoid spikes
The shutdown timer will fire at slightly different times for the
different nodes, so we slowly decrease the power consumption.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:44 +02:00
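
A sketch of the jitter using systemd's randomized delay; the timer name is
hypothetical and this fragment is assumed to merge into the existing
shutdown timer:

```
{
  systemd.timers.scheduled-shutdown.timerConfig = {
    # Spread the trigger over 10 minutes so nodes don't power off at once
    RandomizedDelaySec = "10min";
  };
}
```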
5cad208de6 Don't mount the nix store in owl nodes
Initially we planned to run jobs in those nodes by sharing the same nix
store from hut. However, these nodes are now used to build packages
which are not available in hut. Users also SSH directly into the nodes,
where the hut store is not mounted, so it doesn't make much sense to keep
mounting it.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:42 +02:00
c8687f7e45 Emulate other architectures in owl nodes too
Allows cross-compiling packages for RISC-V whose builds are known to try
to run RISC-V programs on the host.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:39 +02:00
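
A sketch of the emulation setting, assuming qemu-based binfmt for RISC-V:

```
{
  # Run riscv64 binaries transparently through qemu-user
  boot.binfmt.emulatedSystems = [ "riscv64-linux" ];
}
```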
d988ef2eff Program shutdown for August 2nd for all machines
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:36 +02:00
b07929eab3 Enable debuginfod daemon in owl nodes
WARNING: This will introduce noise, as the daemon wakes up from time to
time to check for new packages.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:30 +02:00
b3e397eb4c Set gitea and grafana log level to warn
Prevents filling the journal logs with information messages.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:27 +02:00
5ad2c683ed Set default SLURM job time limit to one hour
Prevents endless jobs from being left running forever, while allowing
users to request a larger time limit.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:24 +02:00
1f06f0fa0c Allow other jobs to run in unused cores
The current select mechanism also used memory as a consumable resource,
which by default only provides 1 MiB per node. As each job already
requests 1 MiB, this prevented other jobs from running.

As we are not really concerned with memory usage, we only use the unused
cores as the select criteria.
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:22 +02:00
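
A sketch of the select configuration this describes, expressed through the
NixOS slurm module; the exact plugin parameters are assumed:

```
{
  services.slurm.extraConfig = ''
    # Treat only cores as consumable, so unused cores can serve other jobs
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core
  '';
}
```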
8ca1d84844 Use authentication tokens for PM GitLab runner
Starting with GitLab 16, there is a new mechanism to authenticate the
runners via authentication tokens, so use it instead.  Older tokens and
runners are also removed, as they are no longer used.

With the new way of managing tokens, both the tags and the locked state
are managed from the GitLab web page.

See: https://docs.gitlab.com/ee/ci/runners/new_creation_workflow.html
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:16 +02:00
998f599be3 flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/1381a759b205dff7a6818733118d02253340fd5e' (2024-04-02)
  → 'github:ryantm/agenix/de96bd907d5fbc3b14fc33ad37d1b9a3cb15edc6' (2024-07-09)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/6143fc5eeb9c4f00163267708e26191d1e918932' (2024-04-21)
  → 'github:NixOS/nixpkgs/693bc46d169f5af9c992095736e82c3488bf7dbb' (2024-07-14)

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:13 +02:00
fcfc6ac149 Allow ptrace to any process of the same user
Allows users to attach GDB to their own processes, without having to run
the program under GDB from the start. It is only enabled on the compute
nodes; the storage nodes keep the restricted setting.

Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:09 +02:00
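
A sketch of the relaxed Yama setting for the compute nodes:

```
{
  # 0 = a user may ptrace any of their own processes (e.g. attach GDB)
  boot.kernel.sysctl."kernel.yama.ptrace_scope" = 0;
}
```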
6e87130166 Add abonerib user to hut, raccoon, owl1 and owl2
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:07 +02:00
06f9e6ac6b Grant rpenacob access to owl1 and owl2 nodes
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:05 +02:00
da07aedce2 Access private repositories via hut SSH proxy
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:36:03 +02:00
61427a8bf9 Set the default proxy to point to hut
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:35:56 +02:00
958ad1f025 Allow incoming traffic to hut proxy
Reviewed-by: Aleix Boné <abonerib@bsc.es>
2024-09-12 08:35:23 +02:00
1c5f3a856f eudy: koro: fcs: Fix fcs unprotected cpuid all
smp_processor_id() was called in a preemptible context, which could
invalidate the returned value. However, this was not harmful, because
fcs threads in nosv are pinned.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2024-07-17 11:40:20 +02:00
4e2b80defd Add support for armv7 emulation in hut
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-17 11:12:48 +02:00
1c8efd0877 Monitor raccoon machine via IPMI
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-17 11:12:32 +02:00
4c5e85031b Move vlopez user to jungleUsers for koro host
Access to other machines can be easily added into the "hosts" attribute
without the need to replicate the configuration.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-16 12:35:39 +02:00
5688823fcc Add raccoon motd file
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-16 12:35:38 +02:00
72faf8365b Split xeon specific configuration from base
To accommodate the raccoon knights workstation, some of the configuration
pulled by m/common/main.nix has to be removed. To solve this, the xeon
specific parts are placed into m/common/xeon.nix and only the common
configuration stays in m/common/base.nix.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-16 12:35:37 +02:00
0e22d6def8 Control user access to each machine
The users.jungleUsers configuration option behaves like the users.users
option, but adds the list attribute `hosts` for each user, which
restricts the user to only those hosts.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-16 12:35:34 +02:00
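
A sketch of how a user entry could look under this option; the user name,
host list and key are hypothetical, and the option itself is defined by
this repository rather than upstream NixOS:

```
{
  users.jungleUsers.jdoe = {
    isNormalUser = true;
    # Only these machines will create the account
    hosts = [ "hut" "owl1" "owl2" ];
    openssh.authorizedKeys.keys = [ "ssh-ed25519 AAAA... jdoe" ];
  };
}
```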
22cc1d33f7 Add PostgreSQL DB for performance test results
The database will hold the performance results of the execution of the
benchmarks. We follow the same setup on knights3 for now.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-07-16 12:35:24 +02:00
15085c8a05 Enable Grafana email alerts
Allows sending Grafana alerts via email too, so we have a redundant
mechanism in case Slack fails to deliver them.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-31 15:57:38 +02:00
06748dac1d Enable mail notification in Gitea
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-31 10:56:49 +02:00
63851306ac Add msmtp to send notifications via email
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-31 10:56:20 +02:00
2bdc793c8c Allow Ceph traffic to lake2 2024-05-02 17:43:48 +02:00
85d1c5e34c Fix meta in posts entries
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-02 17:32:37 +02:00
e6b7af5272 Fix bogus separator
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-02 17:32:34 +02:00
c0ae8770bc Manually add links to the menu
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-02 17:32:32 +02:00
5b51e8947f Add link to Gitea in the website
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-02 17:32:28 +02:00
db2c6f7e45 Collect Gitea metrics in Prometheus
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-02 17:32:25 +02:00
8e8f9e7adb Add Gitea service
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-05-02 17:31:51 +02:00
d2adc3a6d3 Add firewall rules for Ceph and monitoring
The firewall was blocking the monitoring traffic from hut and the Ceph
traffic among OSDs. The rules only allow connections from the specific
hosts they are supposed to come from.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-04-25 13:25:11 +02:00
76cd9ea47f Add workaround for MPICH 4.2.0
See: https://github.com/pmodels/mpich/issues/6946

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-04-25 13:25:08 +02:00
2f851bc216 Fix SLURM bug in rank integer sign expansion
See: https://bugs.schedmd.com/show_bug.cgi?id=19324

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-04-25 13:25:05 +02:00
834d3187e5 Merge pmix outputs for MPICH
MPICH expects headers and libraries to be present in the same directory.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-04-25 13:25:03 +02:00
49be0f208c Remove nixseparatedebuginfod input
It has been integrated in nixpkgs, so is no longer required.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-04-25 13:24:58 +02:00
fb23b41dae flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/daf42cb35b2dc614d1551e37f96406e4c4a2d3e4' (2023-10-08)
  → 'github:ryantm/agenix/1381a759b205dff7a6818733118d02253340fd5e' (2024-04-02)
• Updated input 'agenix/darwin':
    'github:lnl7/nix-darwin/87b9d090ad39b25b2400029c64825fc2a8868943' (2023-01-09)
  → 'github:lnl7/nix-darwin/4b9b83d5a92e8c1fbfd8eb27eda375908c11ec4d' (2023-11-24)
• Updated input 'agenix/home-manager':
    'github:nix-community/home-manager/32d3e39c491e2f91152c84f8ad8b003420eab0a1' (2023-04-22)
  → 'github:nix-community/home-manager/3bfaacf46133c037bb356193bd2f1765d9dc82c1' (2023-12-20)
• Added input 'agenix/systems':
    'github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e' (2023-04-09)
• Updated input 'bscpkgs':
    'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=e148de50d68b3eeafc3389b331cf042075971c4b' (2023-11-22)
  → 'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=de89197a4a7b162db7df9d41c9d07759d87c5709' (2024-04-24)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/e4ad989506ec7d71f7302cc3067abd82730a4beb' (2023-11-19)
  → 'github:NixOS/nixpkgs/6143fc5eeb9c4f00163267708e26191d1e918932' (2024-04-21)
• Updated input 'nixseparatedebuginfod':
    'github:symphorien/nixseparatedebuginfod/232591f5274501b76dbcd83076a57760237fcd64' (2023-11-05)
  → 'github:symphorien/nixseparatedebuginfod/98d79461660f595637fa710d59a654f242b4c3f7' (2024-03-07)
• Removed input 'nixseparatedebuginfod'
• Removed input 'nixseparatedebuginfod/flake-utils'
• Removed input 'nixseparatedebuginfod/flake-utils/systems'
• Removed input 'nixseparatedebuginfod/nixpkgs'

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-04-25 13:24:29 +02:00
005a67deaf Use google.com probe instead of bsc.es
The main website of the BSC is failing every day around 3:00 AM for
almost one hour, so it is not a very good target. Instead, google.com is
used, which should be more reliable. The same robots.txt path is fetched,
as it is smaller than the main page.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-03-05 16:52:21 +01:00
f8097cb5cb Add another HTTPS probe for bsc.es
As all other HTTPS probes pass through the opsproxy01.bsc.es proxy, we
cannot tell whether a problem lies in our proxy or in the BSC one. Adding
another target like bsc.es that doesn't use the ops proxy allows us to
discern where the problem lies.

Instead of monitoring https://www.bsc.es/ directly, which would hit the
whole Drupal server and take about a second, we just fetch robots.txt so
the overhead on the server is minimal (it returns in less than 10 ms).

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2024-02-13 12:26:56 +01:00
ff792f5f48 Move slurm client in a separate module
Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2024-02-13 11:11:17 +01:00
5c48b43ae0 Enable public-inbox at jungle.bsc.es/lists
The public-inbox service fetches emails from the sourcehut mailing lists
and displays them on the web. The idea is to reduce the dependency on
external services and add a secondary storage for the mailing lists in
case sourcehut goes down or changes the current free plans.

The service is available at https://jungle.bsc.es/lists/ and is open to
the public. It currently mirrors the bscpkgs and jungle mailing lists.

We also edited the CSS to improve the readability and have larger fonts
by default.

The public-inbox service produced by NixOS is not well configured to
fetch emails from an IMAP mail server, so we also manually edit the
service file to enable network access.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-12-15 11:18:08 +01:00
b299ead00b Monitor https://pm.bsc.es/gitlab/ too
The GitLab instance is in the /gitlab endpoint and may fail
independently of https://pm.bsc.es/.

Cc: Víctor López <victor.lopez@bsc.es>
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-12-05 09:56:28 +01:00
a92432cf5a Enable nixseparatedebuginfod module
The module is only enabled on Hut and Eudy because we noticed activity
on the debuginfod service even if no debug session was active.

Reviewed-by: Rodrigo Arias Mallo <rodrigo.arias@bsc.es>
2023-12-04 11:04:52 +01:00
82f5d828c2 Use tmpfs in /tmp
The /tmp directory was using the SSD disk which is not erased across
boots. Nix will use /tmp to perform the builds, so we want it to be as
fast as possible. In general, all the machines have enough space to
handle large builds like LLVM.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-28 12:25:50 +01:00
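
A sketch of the tmpfs setting (option name as in recent NixOS releases):

```
{
  # Build in a RAM-backed /tmp, cleared on every boot
  boot.tmp.useTmpfs = true;
}
```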
35a94a9b02 Enable runners for pm.bsc.es/gitlab too
The old runners for the PM gitlab were disabled in the configuration
during the last outage, but they kept working until we rebooted the node.
With this change we enable the runners for both PM and gitlab.bsc.es.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-24 14:45:23 +01:00
b6bd31e159 Remove complete ceph package from hut
Only the ceph-client is needed.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-24 12:58:54 +01:00
1d4badda5b Fix warning in slurm exporter using vendorHash
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-24 12:58:50 +01:00
bd5214a3b9 Remove old Ceph package overlay
The Ceph package is now integrated in upstream nixpkgs.

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-24 12:58:47 +01:00
c32f6dea97 flake.lock: Update
Flake lock file updates:

• Updated input 'agenix':
    'github:ryantm/agenix/d8c973fd228949736dedf61b7f8cc1ece3236792' (2023-07-24)
  → 'github:ryantm/agenix/daf42cb35b2dc614d1551e37f96406e4c4a2d3e4' (2023-10-08)
• Updated input 'bscpkgs':
    'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=f605f8e5e4a1f392589f1ea2b9ffe2074f72a538' (2023-10-31)
  → 'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=e148de50d68b3eeafc3389b331cf042075971c4b' (2023-11-22)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/e56990880811a451abd32515698c712788be5720' (2023-09-02)
  → 'github:NixOS/nixpkgs/e4ad989506ec7d71f7302cc3067abd82730a4beb' (2023-11-19)

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-24 12:57:44 +01:00
dd341902fc BSC packages are no longer in bsc attribute
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-09 13:40:48 +01:00
190e273112 flake.lock: Update
Flake lock file updates:

• Updated input 'bscpkgs':
    'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=3a4062ac04be6263c64a481420d8e768c2521b80' (2023-09-14)
  → 'git+https://git.sr.ht/~rodarima/bscpkgs?ref=refs/heads/master&rev=f605f8e5e4a1f392589f1ea2b9ffe2074f72a538' (2023-10-31)

Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-09 13:40:48 +01:00
268807d1d0 Switch bscpkgs URL to sourcehut
Reviewed-by: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-11-09 13:40:48 +01:00
2953080fb8 Monitor anella instead of gw.bsc.es
The target gw.bsc.es doesn't reply to our ICMP probes from hut. However,
the anella hop in the tracepath is a good candidate to identify cuts
between the login node and the provider, and between the provider and
external hosts like the Google or Cloudflare DNS.

Reviewed-By: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-10-27 12:46:08 +02:00
9871517be2 Add ICMP probes
These probes check if we can reach several targets via ICMP, which is
not proxied, so they can be used to see if ICMP forwarding is working in
the login node.

In particular, we test if we can reach the Google (8.8.8.8) and
Cloudflare (1.1.1.1) DNS servers, the BSC gateway (which responds to ping
only from the intranet), and the login node (ssfhead).

Reviewed-By: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-10-25 17:13:03 +02:00
736eacaac5 Enable proxy for Grafana too
The alerts need to contact the Slack endpoint, so we add the proxy
environment variables to the Grafana systemd service.

Reviewed-By: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-10-25 16:55:56 +02:00
0e66aad099 Make blackbox exporter use the proxy
By default it was trying to reach the targets using the default gateway,
but since the electrical cut of 2023-10-20, the login node has not
re-enabled forwarding. So better if we don't rely on it.

Reviewed-By: Aleix Roca Nonell <aleix.rocanonell@bsc.es>
2023-10-25 16:55:24 +02:00
67a4905a0a Don't log SLURM connection attempts from ssfhead 2023-10-06 15:22:04 +02:00
d52d22e0db Add docker runner too 2023-10-06 15:17:07 +02:00
42920c2521 Monitor gitlab.bsc.es too 2023-10-06 15:17:07 +02:00
4acd35e036 Monitor PM webpage via blackbox 2023-10-06 15:17:07 +02:00
621d20db3a Temporarily disable pm runners 2023-10-06 15:17:07 +02:00
0926f6ec1f Add runner for gitlab.bsc.es 2023-10-06 15:17:07 +02:00
61646cb3bd Allow anonymous access to grafana 2023-09-22 10:51:30 +02:00
c0066c4744 Remove user/group when using DynamicUsers 2023-09-22 10:13:06 +02:00
ffd0593f51 Set the SLURM_CONF variable 2023-09-21 22:22:00 +02:00
f49ae0773e Enable slurm-exporter service 2023-09-21 21:40:02 +02:00
8fa3fccecb Add prometheus-slurm-exporter package 2023-09-21 21:34:18 +02:00
9ee7111453 Document the hut shared nix store for SLURM 2023-09-21 13:51:42 +02:00
8de3d2b149 Mount the hut nix store for SLURM jobs 2023-09-20 19:38:43 +02:00
bc62e28ca3 Enable direnv integration 2023-09-20 09:32:58 +02:00
d612a5453c Add System Integration Service Guide document 2023-09-19 15:12:59 +02:00
653d411b9e Remove bscpkgs from the registry and nixPath
This is done to prevent accidental evaluations where the nixpkgs input
of bscpkgs is still pointing to a different version than the one
specified in the jungle flake. Instead, use jungle#bscpkgs.X to get a
package from bscpkgs.
2023-09-15 12:00:33 +02:00
51c57dbc41 Add bscpkgs and nixpkgs top level attributes
Allows the evaluation of packages of the intermediate overlays.
2023-09-15 12:00:33 +02:00
33cd40160e Use hut packages as the default package set
Allows the user to directly access nixpkgs and bscpkgs from the top
level as `nix build jungle#htop` and `nix build jungle#bsc.ovni`.
2023-09-15 12:00:28 +02:00
a1e8cfea47 Don't fetch registry flakes from the net 2023-09-15 12:00:28 +02:00
5d72ee3da3 flake.lock: Update
Flake lock file updates:

• Updated input 'bscpkgs':
    'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=6122fef92701701e1a0622550ac0fc5c2beb5906' (2023-09-07)
  → 'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=3a4062ac04be6263c64a481420d8e768c2521b80' (2023-09-14)
2023-09-15 11:50:47 +02:00
fdc6445d47 Revert "Update slurm to 23.02.05.1"
This reverts commit aaefddc44a9073166ac52b8bd56ac96258d3b053.
2023-09-14 15:46:18 +02:00
e88805947e Open ports in firewall of compute nodes 2023-09-14 15:45:43 +02:00
aaefddc44a Update slurm to 23.02.05.1 2023-09-13 17:44:24 +02:00
d9d249411d Monitor storage nodes via IPMI too 2023-09-13 15:57:13 +02:00
c07f75c6bb Specify the space available in /ceph 2023-09-13 14:19:59 +02:00
8d449ba20c Add update post to website 2023-09-12 18:13:38 +02:00
10ca572aec Enable fstrim service 2023-09-12 16:39:45 +02:00
75b0f48715 Serve the nix store from hut 2023-09-12 12:19:43 +02:00
19a451db77 Add encrypted munge key with agenix 2023-09-08 19:05:45 +02:00
ec9be9bb62 Remove unused large port hole in firewall 2023-09-08 18:22:48 +02:00
7ddd1977f3 Make exporters listen in localhost only 2023-09-08 18:13:04 +02:00
7050c505b5 Allow only some ports for srun 2023-09-08 17:51:37 +02:00
033a1fe97b Block ssfhead from reaching our slurm daemon 2023-09-08 17:36:28 +02:00
77cb3c494e Poweroff idle slurm nodes after 1 hour 2023-09-08 16:49:53 +02:00
6db5772ac4 Add IB and IPMI node host names 2023-09-08 13:21:37 +02:00
3e347e673c flake.lock: Update
Flake lock file updates:

• Updated input 'bscpkgs':
    'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=ee24b910a1cb95bd222e253da43238e843816f2f' (2023-09-01)
  → 'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=6122fef92701701e1a0622550ac0fc5c2beb5906' (2023-09-07)
2023-09-07 11:13:45 +02:00
dca274d020 Unlock ovni gitlab runners 2023-09-05 16:59:45 +02:00
c33909f32f Update email contact to jungle mail list 2023-09-05 16:10:58 +02:00
64e856e8b9 flake.lock: Update
Flake lock file updates:

• Updated input 'bscpkgs':
    'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=18d64c352c10f9ce74aabddeba5a5db02b74ec27' (2023-08-31)
  → 'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs/heads/master&rev=ee24b910a1cb95bd222e253da43238e843816f2f' (2023-09-01)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/d680ded26da5cf104dd2735a51e88d2d8f487b4d' (2023-08-19)
  → 'github:NixOS/nixpkgs/e56990880811a451abd32515698c712788be5720' (2023-09-02)
2023-09-05 15:03:26 +02:00
02f40a8217 Add agenix to all nodes 2023-09-04 22:10:43 +02:00
77d43b6da9 Add agenix module to ceph 2023-09-04 22:07:07 +02:00
ab55aac5ff Remove old secrets 2023-09-04 22:04:32 +02:00
9b5bfbb7a3 Mount /ceph in owl1 and owl2 2023-09-04 22:00:36 +02:00
a69a71d1b0 Warn about the owl2 omnipath device 2023-09-04 22:00:17 +02:00
98374bd303 Clean owl2 configuration 2023-09-04 21:59:56 +02:00
3b6be8a2fc Move the ceph client config to an external module 2023-09-04 21:59:04 +02:00
2bb366b9ac Reorganize secrets and ssh keys
The agenix tool needs to read the secrets from a standalone file, but
we also need the same information for the SSH keys.
2023-09-04 21:36:31 +02:00
2d16709648 Add anavarro user 2023-09-04 16:00:01 +02:00
9344daa31c Set zsh inc_append_history option 2023-09-03 16:57:53 +02:00
80c98041b5 Set zsh shell for rarias 2023-09-03 16:46:27 +02:00
3418e57907 Enable zsh and fix key bindings 2023-09-03 16:42:04 +02:00
6848b58e39 Keep a log over time with the config commits 2023-09-03 00:02:14 +02:00
13a70411aa Configure bscpkgs.nixpkgs to follow nixpkgs 2023-09-02 23:37:59 +02:00
f9c77b433a Store nixos config in /etc/nixos/config.rev 2023-09-02 23:37:11 +02:00
9d487845f6 Enable binary emulation for other architectures 2023-08-31 17:27:08 +02:00
3c99c2a662 Enable watchdog 2023-08-30 16:32:17 +02:00
7d09108c9f Enable all osd on boot in lake2 2023-08-30 16:32:17 +02:00
0f0a861896 Scrape lake2 too 2023-08-29 12:33:26 +02:00
beb0d5940e Also enable monitoring in lake2 2023-08-29 12:29:41 +02:00
70321ce237 Scrape metrics from bay 2023-08-29 11:58:00 +02:00
5bd1d67333 Add monitoring in the bay node 2023-08-29 11:53:32 +02:00
fad9df61e1 Add fio tool 2023-08-29 11:27:50 +02:00
d2a80c8c18 Add ceph tools in hut too 2023-08-28 17:58:21 +02:00
599613d139 Switch ceph logs to journal 2023-08-28 17:58:08 +02:00
ac4fa9abd4 Update ceph to 18.2.0 in overlay 2023-08-25 18:20:21 +02:00
cb3a7b19f7 Move pkgs overlay to overlay.nix 2023-08-25 18:12:00 +02:00
f5d6bf627b Enable ceph osd daemons in lake2 2023-08-25 14:54:51 +02:00
f1ce815edd Add the lake2 hostname to the hosts 2023-08-25 14:44:35 +02:00
a2075cfd65 Use the sda for lake2 2023-08-25 13:40:10 +02:00
8f1f6f92a8 Remove netboot module 2023-08-25 13:39:01 +02:00
3416416864 Disable pixiecore in hut for now 2023-08-25 13:21:00 +02:00
815888fb07 Add PXE helper 2023-08-25 12:05:33 +02:00
029d9cb1db Enable netboot again for PXE 2023-08-24 19:08:23 +02:00
95fa67ede1 Specify the disk by path 2023-08-24 15:27:37 +02:00
a19347161f Prepare lake2 config after bootstrap
The disk ID is different under NixOS.
2023-08-24 13:54:53 +02:00
58c1cc1f7c Add lake2 bootstrap config 2023-08-24 12:30:46 +02:00
b06399dc70 Add section to enable serial console 2023-08-24 12:29:44 +02:00
077eece6b9 Add agenix to PATH in hut 2023-08-23 17:42:50 +02:00
b3ef53de51 Store ceph secret key in age
This allows a node to mount the ceph FS without any extra ceph
configuration in /etc/ceph.
2023-08-23 17:26:44 +02:00
e0852ee89b Add rarias key for secrets 2023-08-23 17:15:26 +02:00
dfffc0bdce Add ceph metrics to prometheus 2023-08-22 16:33:55 +02:00
8257c245b1 Mount the ceph filesystem in hut 2023-08-22 16:15:46 +02:00
cd5853cf53 Add ceph config in bay 2023-08-22 15:58:48 +02:00
b677b827d4 Add the bay host name 2023-08-22 15:56:09 +02:00
b1d5185cca Remove netboot and fixes 2023-08-22 12:12:15 +02:00
a7e66e2246 Add bay node 2023-08-22 12:12:15 +02:00
480c97e952 Update flake 2023-08-22 11:28:54 +02:00
f8fb5fa4ff Monitor power from other nodes via LAN 2023-08-22 11:28:54 +02:00
acf9b71f04 Increase prometheus retention time to one year 2023-08-22 11:28:54 +02:00
bf692e6e4e Don't set all_proxy 2023-08-22 11:28:54 +02:00
c242b65e47 Update nixpkgs to fix docker problem 2023-07-28 14:24:51 +02:00
55d6c17776 Allow access to devices for node_exporter 2023-07-28 13:55:35 +02:00
14b173f67e GRUB version no longer needed 2023-07-27 17:22:20 +02:00
b9001cdf7d Upgrade flake: nixpkgs, bscpkgs and agenix 2023-07-27 17:19:17 +02:00
f892d43b47 Kill slurmd remaining processes on upgrade 2023-07-27 14:49:20 +02:00
d9e9ee6e3a Add details to request access in the web 2023-07-25 16:07:22 +02:00
79adbe76a8 koro: Add vlopez user 2023-07-21 13:00:43 +02:00
66fb848ba8 Add koro node 2023-07-21 13:00:08 +02:00
40b1a8f0df eudy: Add fcsv3 and intermediate versions for testing 2023-07-21 11:27:51 +02:00
a0b9d10b14 eudy: Enable memory overcommit 2023-07-21 11:27:51 +02:00
4c309dea2f eudy: disable all cpu mitigations 2023-07-21 11:27:51 +02:00
b3a397eee4 Add jungle.bsc.es hugo website 2023-07-21 10:52:23 +02:00
7c1fe1455b Enable NTP using the BSC time server 2023-06-30 14:02:15 +02:00
2d4b178895 Add the ssfhead node as gateway 2023-06-30 14:01:35 +02:00
4dd25f2f89 Use our host names first by default 2023-06-23 16:22:18 +02:00
6dcd9d8144 Add DNS tools to resolve hosts 2023-06-23 16:15:45 +02:00
31be81d2b1 Lower perf_event_paranoid to -1 2023-06-23 16:01:27 +02:00
826cfdf43f Set perf paranoid to 0 by default 2023-06-21 16:24:19 +02:00
a1f258c5ce Add perf to packages 2023-06-21 15:41:06 +02:00
1c1d3f3231 Allow srun to specify the cpu binding
The task/affinity plugin needs to be selected.
2023-06-21 13:16:23 +02:00
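
A sketch of the plugin selection, assuming it goes through the slurm
module's extra configuration:

```
{
  services.slurm.extraConfig = ''
    # Required for srun --cpu-bind to take effect
    TaskPlugin=task/affinity
  '';
}
```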
623d46c03f Move authorized keys to users.nix 2023-06-20 14:08:34 +02:00
518a4d6af3 Add rpenacob user 2023-06-20 12:54:26 +02:00
60077948d6 Add osumb to the system packages 2023-06-16 19:22:41 +02:00
c76bfa7f86 flake.lock: Update
Flake lock file updates:

• Updated input 'bscpkgs':
    'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs%2fheads%2fmaster&rev=c775ee4d6f76aded05b08ae13924c302f18f9b2c' (2023-04-26)
  → 'git+https://pm.bsc.es/gitlab/rarias/bscpkgs.git?ref=refs%2fheads%2fmaster&rev=cbe9af5d042e9d5585fe2acef65a1347c68b2fbd' (2023-06-16)
2023-06-16 18:33:54 +02:00
6c10933e80 Set mpi to mpich by default in bscpkgs 2023-06-16 18:26:51 +02:00
6402605b1f Add missing parameter to extend 2023-06-16 18:26:51 +02:00
1724535495 Use explicit order in overlays 2023-06-16 18:26:51 +02:00
5b41670f36 Replace mpi inside bsc attribute 2023-06-16 18:26:51 +02:00
ab04855382 Add mpich overlay 2023-06-16 18:26:51 +02:00
684d5e41c5 Add comments in slurm config 2023-06-16 18:26:50 +02:00
316ea18e24 Add eudy host key to known hosts 2023-06-16 17:29:48 +02:00
c916157fcc Rename xeon08 to eudy
From Eudyptula, a little penguin.
2023-06-16 17:16:05 +02:00
4e9409db10 Update rebuild script for all nodes 2023-06-16 12:13:07 +02:00
94320d9256 Add ssh host keys 2023-06-16 12:01:12 +02:00
9f5941c2be Set the name of the slurm cluster to jungle 2023-06-16 12:00:54 +02:00
fba0f7b739 Change owl hostnames 2023-06-16 11:42:39 +02:00
2e95281af5 Add owl and all partition 2023-06-16 11:34:00 +02:00
f4ac9f3186 Simplify flake and expose host pkgs
The configuration of the machines is now moved to m/
2023-06-16 11:31:31 +02:00
f787343f29 Rename xeon07 to hut 2023-06-14 17:28:40 +02:00
70304d26ff Remove profiles older than 30 days with gc 2023-06-14 17:28:39 +02:00
76c10ec22e Add ncdu to system packages 2023-06-14 17:28:39 +02:00
011e8c2bf8 Move arocanon user from xeon08 to common 2023-06-14 16:22:43 +02:00
c1f138a9c1 xeon08: Add config for kernel non-voluntary preemption 2023-06-14 16:17:33 +02:00
1552eeca12 xeon08: Add perf 2023-06-14 15:42:20 +02:00
8769f3d418 xeon08: Enable lttng lockdep tracepoints 2023-06-14 15:42:20 +02:00
a4c254fcd6 xeon08: Add lttng module and tools 2023-06-14 15:42:20 +02:00
24fb1846d2 Serve grafana in https://jungle.bsc.es/grafana 2023-05-31 18:12:14 +02:00
5e77d0b86c Add tree command 2023-05-31 18:11:34 +02:00
494fda126c Add file to system packages 2023-05-31 18:11:34 +02:00
5cfa2f9611 Add gnumake to system packages 2023-05-31 18:11:34 +02:00
9539a24bdb Add cmake to system packages 2023-05-31 18:11:34 +02:00
98c4d924dd Add ix to common packages 2023-05-31 18:11:34 +02:00
7aae967c65 Improve documentation 2023-05-26 11:38:27 +02:00
49f7edddac Add gitignore 2023-05-26 11:38:27 +02:00
2f055d9fc5 Set intel_pstate=passive and disable frequency boost 2023-05-26 11:38:26 +02:00
108abffd2a Add xeon08 basic config 2023-05-26 11:38:26 +02:00
4c19ad66e3 Add nixos-config.nix to easily enable nix repl 2023-05-26 11:29:59 +02:00
19c01aeb1d Automatically resume restarted nodes in SLURM 2023-05-18 12:48:04 +02:00
fc90b40310 Allow public dashboards in grafana 2023-05-09 18:53:31 +02:00
81de0effb1 Add hal ssh key 2023-05-09 18:37:38 +02:00
5ce93ff85a Increase the number of CPUs to 56 for nOS-V docker 2023-05-02 17:47:57 +02:00
c020b9f5d6 Allow 5 concurrent builds in the gitlab-runner 2023-05-02 17:38:10 +02:00
f47734b524 Simplify bash prompt 2023-04-28 18:15:04 +02:00
ca3a7d98f5 Rollback to bash as default shell
Zsh doesn't behave properly; it needs further configuration.
2023-04-28 17:59:19 +02:00
0d5609ecc2 Use pmix by default in slurm 2023-04-28 17:07:48 +02:00
818edccb34 Increase locked memory to 1 GiB 2023-04-28 12:34:51 +02:00
2815f5bcfd Use the latest kernel 2023-04-28 11:51:38 +02:00
c1bbbd7793 Disable osnoise and hwlat tracer for now
Reuse nix cache to avoid rebuilding the kernel.
2023-04-28 11:19:47 +02:00
aa1dd14b62 Update nixpkgs to nixos-unstable 2023-04-28 11:18:37 +02:00
399103a9b4 Update nixpkgs 2023-04-28 11:13:46 +02:00
74639d3ece Update ib interface name in xeon02
It seems to be plugged into another PCI port.
2023-04-27 18:29:32 +02:00
613a76ac29 Add steps in install documentation 2023-04-27 17:30:53 +02:00
c3ea8864bb Add minimal netboot module to build kexec image 2023-04-27 16:36:15 +02:00
919f211536 Add xeon02 configuration 2023-04-27 16:28:12 +02:00
141d77e2b6 Refactor slurm configuration into compute/control 2023-04-27 16:27:04 +02:00
44fcb97ec7 Lock flakes and add inputs 2023-04-27 13:52:59 +02:00
543983e9f3 Test flakes 2023-04-26 14:27:02 +02:00
95bbeeb646 Enable slurm in xeon01 2023-04-26 14:10:36 +02:00
de2af79810 Use xeon07 as control machine 2023-04-26 14:10:36 +02:00
b9aff1dba5 Remove xeon07 overlay to load upstream slurm 2023-04-26 14:10:36 +02:00
7da979bed2 Add script to rebuild configuration 2023-04-26 14:09:23 +02:00
cfe37640ea Add configuration for xeon01 2023-04-26 11:44:00 +00:00
096e407571 Load overlays from /config 2023-04-26 11:44:00 +00:00
ae31b546e7 Move net.nix to common 2023-04-26 11:44:00 +00:00
c3a2766bb7 Remove host specific network options from net.nix 2023-04-26 11:44:00 +00:00
b568bb36d4 Move ssh.nix to common 2023-04-26 11:44:00 +00:00
55f784e6b7 Move overlays.nix to common 2023-04-26 11:44:00 +00:00
dfab84b0ba Move users.nix to common 2023-04-26 11:44:00 +00:00
8f66ba824a Move common options from configuration.nix 2023-04-26 11:44:00 +00:00
79bd4398f3 Move the remaining hw config to common 2023-04-26 11:44:00 +00:00
b44afdaaa1 Move boot config to common/boot.nix 2023-04-26 11:44:00 +00:00
9528fab3ef Move filesystems config to common/fs.nix 2023-04-26 11:44:00 +00:00
7e82885d84 Use partition labels for / and swap 2023-04-26 11:44:00 +00:00
57ed0cf319 Move fs.nix to common 2023-04-26 11:44:00 +00:00
b043ee3b1d Move boot.nix to common 2023-04-26 11:44:00 +00:00
9e3bdaabb6 Move disk selection to configuration.nix 2023-04-26 11:44:00 +00:00
77f72ac939 Add common directory 2023-04-26 11:44:00 +00:00
fa25a68571 Add server board documentation 2023-04-24 10:10:08 +02:00
Rodrigo Arias
ea0f406849 Add BSC SSF slides 2023-04-24 09:47:11 +02:00
Rodrigo Arias
9df6be1b6b Add SEL troubleshooting guide 2023-04-21 13:31:11 +02:00
351 changed files with 27205 additions and 52182 deletions

View File

@ -1,20 +0,0 @@
name: CI
on:
push:
branches:
- master
pull_request:
branches:
- master
jobs:
build:all:
runs-on: native
steps:
- uses: https://gitea.com/ScMi1/checkout@v1.4
- run: nix build -L --no-link --print-out-paths .#bsc.ci.all
build:cross:
runs-on: native
steps:
- uses: https://gitea.com/ScMi1/checkout@v1.4
- run: nix build -L --no-link --print-out-paths .#bsc.ci.cross

2
.gitignore vendored
View File

@ -1,3 +1,3 @@
**.swp
*.swp
/result
/misc

View File

@ -1,6 +0,0 @@
build:bsc-ci.all:
stage: build
tags:
- nix
script:
- nix build -L --no-link --print-out-paths .#bsc-ci.all

21
COPYING
View File

@ -1,21 +0,0 @@
Copyright (c) 2020-2025 Barcelona Supercomputing Center
Copyright (c) 2003-2020 Eelco Dolstra and the Nixpkgs/NixOS contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

View File

@ -1,9 +0,0 @@
# Jungle
This repository provides two components that can be used independently:
- A Nix overlay with packages used at BSC (formerly known as bscpkgs). Access
them directly with `nix shell .#<pkgname>`.
- NixOS configurations for jungle machines. Use `nixos-rebuild switch --flake .`
to upgrade the current machine.

View File

@ -1,19 +0,0 @@
let
bscOverlay = import ./overlay.nix;
# read flake.lock and determine revision from there
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
inherit (lock.nodes.nixpkgs.locked) rev narHash;
fetchedNixpkgs = builtins.fetchTarball {
url = "https://github.com/NixOS/nixpkgs/archive/${rev}.tar.gz";
sha256 = narHash;
};
in
{ overlays ? [ ]
, nixpkgs ? fetchedNixpkgs
, ...
}@attrs:
import nixpkgs (
(builtins.removeAttrs attrs [ "overlays" "nixpkgs" ]) //
{ overlays = [ bscOverlay ] ++ overlays; }
)

Binary file not shown.

Binary file not shown.

BIN
doc/bsc-ssf.pdf Normal file

Binary file not shown.

View File

@ -1,30 +0,0 @@
# Maintainers
## Role of a maintainer
The responsibilities of maintainers are quite lax, and similar in spirit to
[nixpkgs' maintainers][1]:
The main responsibility of a maintainer is to keep the packages they
maintain in a functioning state, and keep up with updates. In order to do
that, they are empowered to make decisions over the packages they maintain.
That being said, the maintainer is not alone in proposing changes to the
packages. Anybody (both bots and humans) can send PRs to bump or tweak the
package.
In practice, this means that when updating or proposing changes to a package,
we will notify maintainers by mentioning them in Gitea so they can test changes
and give feedback.
Since we follow twice-yearly release cycles, maintainers are not expected
to update packages at each upstream release. Nevertheless, on each release cycle
we may request help from maintainers when updating or testing their packages.
## Becoming a maintainer
You'll have to add yourself to the `maintainers.nix` list; your username should
match your `bsc.es` email. Then you can add yourself to the `meta.maintainers`
of any package you are interested in maintaining, as sketched below.
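For illustration only, a minimal sketch of both steps; the `jdoe` entry and the
`example` package are hypothetical, and it assumes the flake's extended `lib`
(which builds `lib.maintainers.bsc` from `maintainers.nix`) is in scope:
# maintainers.nix -- one attribute per maintainer, named after the bsc.es user
{
  jdoe = {
    name = "Jane Doe";
    email = "jdoe@bsc.es";
  };
}
# pkgs/example/default.nix -- a hypothetical package declaring its maintainer
{ stdenv, lib }:
stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  src = ./.;
  # lib.maintainers.bsc is the attribute the flake derives from maintainers.nix
  meta.maintainers = with lib.maintainers.bsc; [ jdoe ];
}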
[1]: https://github.com/NixOS/nixpkgs/tree/nixos-25.05/maintainers

View File

@ -1,46 +0,0 @@
#!/bin/sh
# Trims the jungle repository by moving the website to its own repository and
# removing it from jungle. It also removes big pdf files and kernel
# configurations so the jungle repository is small.
set -e
if [ -e oldjungle -o -e newjungle -o -e website ]; then
echo "remove oldjungle/, newjungle/ and website/ first"
exit 1
fi
# Clone the old jungle repo
git clone gitea@tent:rarias/jungle.git oldjungle
# First split the website into a new repository
mkdir website && git -C website init -b master
git-filter-repo \
--path web \
--subdirectory-filter web \
--source oldjungle \
--target website
# Then remove the website, pdf files and big kernel configs
mkdir newjungle && git -C newjungle init -b master
git-filter-repo \
--invert-paths \
--path web \
--path-glob 'doc*.pdf' \
--path-glob '**/kernel/configs/lockdep' \
--path-glob '**/kernel/configs/defconfig' \
--source oldjungle \
--target newjungle
set -x
du -sh oldjungle newjungle website
# 57M oldjungle
# 2,3M newjungle
# 6,4M website
du -sh --exclude=.git oldjungle newjungle website
# 30M oldjungle
# 700K newjungle
# 3,5M website

111
flake.lock generated
View File

@ -1,25 +1,128 @@
{
"nodes": {
"agenix": {
"inputs": {
"darwin": "darwin",
"home-manager": "home-manager",
"nixpkgs": [
"nixpkgs"
],
"systems": "systems"
},
"locked": {
"lastModified": 1723293904,
"narHash": "sha256-b+uqzj+Wa6xgMS9aNbX4I+sXeb5biPDi39VgvSFqFvU=",
"owner": "ryantm",
"repo": "agenix",
"rev": "f6291c5935fdc4e0bef208cfc0dcab7e3f7a1c41",
"type": "github"
},
"original": {
"owner": "ryantm",
"repo": "agenix",
"type": "github"
}
},
"bscpkgs": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1732868163,
"narHash": "sha256-qck4h298AgcNI6BnGhEwl26MTLXjumuJVr+9kak7uPo=",
"ref": "refs/heads/master",
"rev": "6782fc6c5b5a29e84a7f2c2d1064f4bcb1288c0f",
"revCount": 952,
"type": "git",
"url": "https://git.sr.ht/~rodarima/bscpkgs"
},
"original": {
"type": "git",
"url": "https://git.sr.ht/~rodarima/bscpkgs"
}
},
"darwin": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1700795494,
"narHash": "sha256-gzGLZSiOhf155FW7262kdHo2YDeugp3VuIFb4/GGng0=",
"owner": "lnl7",
"repo": "nix-darwin",
"rev": "4b9b83d5a92e8c1fbfd8eb27eda375908c11ec4d",
"type": "github"
},
"original": {
"owner": "lnl7",
"ref": "master",
"repo": "nix-darwin",
"type": "github"
}
},
"home-manager": {
"inputs": {
"nixpkgs": [
"agenix",
"nixpkgs"
]
},
"locked": {
"lastModified": 1703113217,
"narHash": "sha256-7ulcXOk63TIT2lVDSExj7XzFx09LpdSAPtvgtM7yQPE=",
"owner": "nix-community",
"repo": "home-manager",
"rev": "3bfaacf46133c037bb356193bd2f1765d9dc82c1",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1752436162,
"narHash": "sha256-Kt1UIPi7kZqkSc5HVj6UY5YLHHEzPBkgpNUByuyxtlw=",
"lastModified": 1736867362,
"narHash": "sha256-i/UJ5I7HoqmFMwZEH6vAvBxOrjjOJNU739lnZnhUln8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "dfcd5b901dbab46c9c6e80b265648481aafb01f8",
"rev": "9c6b49aeac36e2ed73a8c472f1546f6d9cf1addc",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-25.05",
"ref": "nixos-24.11",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"agenix": "agenix",
"bscpkgs": "bscpkgs",
"nixpkgs": "nixpkgs"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

View File

@ -1,22 +1,19 @@
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";
agenix.url = "github:ryantm/agenix";
agenix.inputs.nixpkgs.follows = "nixpkgs";
bscpkgs.url = "git+https://git.sr.ht/~rodarima/bscpkgs";
bscpkgs.inputs.nixpkgs.follows = "nixpkgs";
};
outputs = { self, nixpkgs, ... }:
outputs = { self, nixpkgs, agenix, bscpkgs, ... }:
let
mkConf = name: nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
specialArgs = { inherit nixpkgs; theFlake = self; };
specialArgs = { inherit nixpkgs bscpkgs agenix; theFlake = self; };
modules = [ "${self.outPath}/m/${name}/configuration.nix" ];
};
# For now we only support x86
system = "x86_64-linux";
pkgs = import nixpkgs {
inherit system;
overlays = [ self.overlays.default ];
config.allowUnfree = true;
};
in
{
nixosConfigurations = {
@ -30,23 +27,11 @@ in
lake2 = mkConf "lake2";
raccoon = mkConf "raccoon";
fox = mkConf "fox";
apex = mkConf "apex";
weasel = mkConf "weasel";
};
bscOverlay = import ./overlay.nix;
overlays.default = self.bscOverlay;
# full nixpkgs with our overlay applied
legacyPackages.${system} = pkgs;
hydraJobs = self.legacyPackages.${system}.bsc.hydraJobs;
# propagate nixpkgs lib, so we can do bscpkgs.lib
lib = nixpkgs.lib // {
maintainers = nixpkgs.lib.maintainers // {
bsc = import ./pkgs/maintainers.nix;
};
packages.x86_64-linux = self.nixosConfigurations.hut.pkgs // {
bscpkgs = bscpkgs.packages.x86_64-linux;
nixpkgs = nixpkgs.legacyPackages.x86_64-linux;
};
};
}

View File

@ -2,28 +2,25 @@
# here all the public keys
rec {
hosts = {
hut = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICO7jIp6JRnRWTMDsTB/aiaICJCl4x8qmKMPSs4lCqP1 hut";
owl1 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMqMEXO0ApVsBA6yjmb0xP2kWyoPDIWxBB0Q3+QbHVhv owl1";
owl2 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHurEYpQzNHqWYF6B9Pd7W8UPgF3BxEg0BvSbsA7BAdK owl2";
eudy = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL+WYPRRvZupqLAG0USKmd/juEPmisyyJaP8hAgYwXsG eudy";
koro = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIImiTFDbxyUYPumvm8C4mEnHfuvtBY1H8undtd6oDd67 koro";
bay = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICvGBzpRQKuQYHdlUQeAk6jmdbkrhmdLwTBqf3el7IgU bay";
lake2 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINo66//S1yatpQHE/BuYD/Gfq64TY7ZN5XOGXmNchiO0 lake2";
fox = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDwItIk5uOJcQEVPoy/CVGRzfmE1ojrdDcI06FrU4NFT fox";
tent = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFAtTpHtdYoelbknD/IcfBlThwLKJv/dSmylOgpg3FRM tent";
apex = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvUFjSfoxXnKwXhEFXx5ckRKJ0oewJ82mRitSMNMKjh apex";
weasel = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFLJrQ8BF6KcweQV8pLkSbFT+tbDxSG9qxrdQE65zJZp weasel";
raccoon = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGNQttFvL0dNEyy7klIhLoK4xXOeM2/K9R7lPMTG3qvK raccoon";
hut = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICO7jIp6JRnRWTMDsTB/aiaICJCl4x8qmKMPSs4lCqP1 hut";
owl1 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMqMEXO0ApVsBA6yjmb0xP2kWyoPDIWxBB0Q3+QbHVhv owl1";
owl2 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHurEYpQzNHqWYF6B9Pd7W8UPgF3BxEg0BvSbsA7BAdK owl2";
eudy = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL+WYPRRvZupqLAG0USKmd/juEPmisyyJaP8hAgYwXsG eudy";
koro = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIImiTFDbxyUYPumvm8C4mEnHfuvtBY1H8undtd6oDd67 koro";
bay = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICvGBzpRQKuQYHdlUQeAk6jmdbkrhmdLwTBqf3el7IgU bay";
lake2 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINo66//S1yatpQHE/BuYD/Gfq64TY7ZN5XOGXmNchiO0 lake2";
fox = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDwItIk5uOJcQEVPoy/CVGRzfmE1ojrdDcI06FrU4NFT fox";
tent = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFAtTpHtdYoelbknD/IcfBlThwLKJv/dSmylOgpg3FRM tent";
};
hostGroup = with hosts; rec {
compute = [ owl1 owl2 fox raccoon ];
playground = [ eudy koro weasel ];
untrusted = [ fox ];
compute = [ owl1 owl2 ];
playground = [ eudy koro ];
storage = [ bay lake2 ];
monitor = [ hut ];
login = [ apex ];
system = storage ++ monitor ++ login;
system = storage ++ monitor;
safe = system ++ compute;
all = safe ++ playground;
};
@ -31,7 +28,6 @@ rec {
admins = {
"rarias@hut" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE1oZTPtlEXdGt0Ak+upeCIiBdaDQtcmuWoTUCVuSVIR rarias@hut";
"rarias@tent" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIwlWSBTZi74WTz5xn6gBvTmCoVltmtIAeM3RMmkh4QZ rarias@tent";
"rarias@fox" = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDSbw3REAKECV7E2c/e2XJITudJQWq2qDSe2N1JHqHZd rarias@fox";
root = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIII/1TNArcwA6D47mgW4TArwlxQRpwmIGiZDysah40Gb root@hut";
};
}

View File

@ -1,69 +0,0 @@
{ lib, config, pkgs, ... }:
{
imports = [
../common/xeon.nix
../common/ssf/hosts.nix
../module/ceph.nix
../module/hut-substituter.nix
../module/slurm-server.nix
./nfs.nix
./wireguard.nix
];
# Don't install grub MBR for now
boot.loader.grub.device = "nodev";
boot.initrd.kernelModules = [
"megaraid_sas" # For HW RAID
];
environment.systemPackages = with pkgs; [
storcli # To manage HW RAID
];
fileSystems."/home" = {
device = "/dev/disk/by-label/home";
fsType = "ext4";
};
# No swap, there is plenty of RAM
swapDevices = lib.mkForce [];
networking = {
hostName = "apex";
defaultGateway = "84.88.53.233";
nameservers = [ "8.8.8.8" ];
# Public facing interface
interfaces.eno1.ipv4.addresses = [ {
address = "84.88.53.236";
prefixLength = 29;
} ];
# Internal LAN to our Ethernet switch
interfaces.eno2.ipv4.addresses = [ {
address = "10.0.40.30";
prefixLength = 24;
} ];
# Infiniband over Omnipath switch (disconnected for now)
# interfaces.ibp5s0 = {};
nat = {
enable = true;
internalInterfaces = [ "eno2" ];
externalInterface = "eno1";
};
};
networking.firewall = {
extraCommands = ''
# Blackhole BSC vulnerability scanner (OpenVAS) as it is spamming our
# logs. Insert as first position so we also protect SSH.
iptables -I nixos-fw 1 -p tcp -s 192.168.8.16 -j nixos-fw-refuse
# Same with opsmonweb01.bsc.es which seems to be trying to access via SSH
iptables -I nixos-fw 2 -p tcp -s 84.88.52.176 -j nixos-fw-refuse
'';
};
}

View File

@ -1,48 +0,0 @@
{ ... }:
{
services.nfs.server = {
enable = true;
lockdPort = 4001;
mountdPort = 4002;
statdPort = 4000;
exports = ''
/home 10.0.40.0/24(rw,async,no_subtree_check,no_root_squash)
/home 10.106.0.0/24(rw,async,no_subtree_check,no_root_squash)
'';
};
networking.firewall = {
# Check with `rpcinfo -p`
extraCommands = ''
# Accept NFS traffic from compute nodes but not from the outside
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 111 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 2049 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 4000 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 4001 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 4002 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 20048 -j nixos-fw-accept
# Same but UDP
iptables -A nixos-fw -p udp -s 10.0.40.0/24 --dport 111 -j nixos-fw-accept
iptables -A nixos-fw -p udp -s 10.0.40.0/24 --dport 2049 -j nixos-fw-accept
iptables -A nixos-fw -p udp -s 10.0.40.0/24 --dport 4000 -j nixos-fw-accept
iptables -A nixos-fw -p udp -s 10.0.40.0/24 --dport 4001 -j nixos-fw-accept
iptables -A nixos-fw -p udp -s 10.0.40.0/24 --dport 4002 -j nixos-fw-accept
iptables -A nixos-fw -p udp -s 10.0.40.0/24 --dport 20048 -j nixos-fw-accept
# Accept NFS traffic from wg0
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.0/24 --dport 111 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.0/24 --dport 2049 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.0/24 --dport 4000 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.0/24 --dport 4001 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.0/24 --dport 4002 -j nixos-fw-accept
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.0/24 --dport 20048 -j nixos-fw-accept
# Same but UDP
iptables -A nixos-fw -p udp -i wg0 -s 10.106.0.0/24 --dport 111 -j nixos-fw-accept
iptables -A nixos-fw -p udp -i wg0 -s 10.106.0.0/24 --dport 2049 -j nixos-fw-accept
iptables -A nixos-fw -p udp -i wg0 -s 10.106.0.0/24 --dport 4000 -j nixos-fw-accept
iptables -A nixos-fw -p udp -i wg0 -s 10.106.0.0/24 --dport 4001 -j nixos-fw-accept
iptables -A nixos-fw -p udp -i wg0 -s 10.106.0.0/24 --dport 4002 -j nixos-fw-accept
iptables -A nixos-fw -p udp -i wg0 -s 10.106.0.0/24 --dport 20048 -j nixos-fw-accept
'';
};
}

View File

@ -1,42 +0,0 @@
{ config, ... }:
{
networking.firewall = {
allowedUDPPorts = [ 666 ];
};
age.secrets.wgApex.file = ../../secrets/wg-apex.age;
# Enable WireGuard
networking.wireguard.enable = true;
networking.wireguard.interfaces = {
# "wg0" is the network interface name. You can name the interface arbitrarily.
wg0 = {
ips = [ "10.106.0.30/24" ];
listenPort = 666;
privateKeyFile = config.age.secrets.wgApex.path;
# Public key: VwhcN8vSOzdJEotQTpmPHBC52x3Hbv1lkFIyKubrnUA=
peers = [
{
name = "fox";
publicKey = "VfMPBQLQTKeyXJSwv8wBhc6OV0j2qAxUpX3kLHunK2Y=";
allowedIPs = [ "10.106.0.1/32" ];
endpoint = "fox.ac.upc.edu:666";
# Send keepalives every 25 seconds. Important to keep NAT tables alive.
persistentKeepalive = 25;
}
{
name = "raccoon";
publicKey = "QUfnGXSMEgu2bviglsaSdCjidB51oEDBFpnSFcKGfDI=";
allowedIPs = [ "10.106.0.236/32" "192.168.0.0/16" "10.0.44.0/24" ];
}
];
};
};
networking.hosts = {
"10.106.0.1" = [ "fox" ];
"10.106.0.236" = [ "raccoon" ];
"10.0.44.4" = [ "tent" ];
};
}

View File

@ -3,7 +3,6 @@
{
imports = [
../common/ssf.nix
../module/hut-substituter.nix
../module/monitoring.nix
];

View File

@ -3,7 +3,6 @@
# Includes the basic configuration for an Intel server.
imports = [
./base/agenix.nix
./base/always-power-on.nix
./base/august-shutdown.nix
./base/boot.nix
./base/env.nix
@ -11,7 +10,6 @@
./base/hw.nix
./base/net.nix
./base/nix.nix
./base/sys-devices.nix
./base/ntp.nix
./base/rev.nix
./base/ssh.nix

View File

@ -1,8 +1,9 @@
{ pkgs, ... }:
{ agenix, ... }:
{
imports = [ ../../module/agenix.nix ];
imports = [ agenix.nixosModules.default ];
# Add agenix to system packages
environment.systemPackages = [ pkgs.agenix ];
environment.systemPackages = [
agenix.packages.x86_64-linux.default
];
}

View File

@ -1,8 +0,0 @@
{
imports = [
../../module/power-policy.nix
];
# Turn on as soon as we have power
power.policy = "always-on";
}

View File

@ -1,12 +1,12 @@
{
# Shutdown all machines on August 3rd at 22:00, so we can protect the
# Shutdown all machines on August 2nd at 11:00 AM, so we can protect the
# hardware from spurious electrical peaks on the yearly electrical cut for
# maintenance that starts on August 4th.
systemd.timers.august-shutdown = {
description = "Shutdown on August 3rd for maintenance";
description = "Shutdown on August 2nd for maintenance";
wantedBy = [ "timers.target" ];
timerConfig = {
OnCalendar = "*-08-03 22:00:00";
OnCalendar = "*-08-02 11:00:00";
RandomizedDelaySec = "10min";
Unit = "systemd-poweroff.service";
};

View File

@ -3,8 +3,8 @@
{
environment.systemPackages = with pkgs; [
vim wget git htop tmux pciutils tcpdump ripgrep nix-index nixos-option
nix-diff ipmitool freeipmi ethtool lm_sensors cmake gnumake file tree
ncdu config.boot.kernelPackages.perf ldns pv
nix-diff ipmitool freeipmi ethtool lm_sensors ix cmake gnumake file tree
ncdu config.boot.kernelPackages.perf ldns
# From bscpkgs overlay
osumb
];

View File

@ -1,4 +1,4 @@
{ pkgs, lib, ... }:
{ pkgs, ... }:
{
networking = {
@ -10,14 +10,10 @@
allowedTCPPorts = [ 22 ];
};
# Make sure we use iptables
nftables.enable = lib.mkForce false;
hosts = {
"84.88.53.236" = [ "ssfhead.bsc.es" "ssfhead" ];
"84.88.51.152" = [ "raccoon" ];
"84.88.51.142" = [ "raccoon-ipmi" ];
"192.168.11.12" = [ "bscpm04.bsc.es" ];
"192.168.11.15" = [ "gitlab-internal.bsc.es" ];
};
};
}

View File

@ -1,12 +1,11 @@
{ pkgs, nixpkgs, theFlake, ... }:
{ pkgs, nixpkgs, bscpkgs, theFlake, ... }:
{
nixpkgs.overlays = [
(import ../../../overlay.nix)
bscpkgs.bscOverlay
(import ../../../pkgs/overlay.nix)
];
nixpkgs.config.allowUnfree = true;
nix = {
nixPath = [
"nixpkgs=${nixpkgs}"

View File

@ -1,9 +0,0 @@
{
nix.settings.system-features = [ "sys-devices" ];
programs.nix-required-mounts.enable = true;
programs.nix-required-mounts.allowedPatterns.sys-devices.paths = [
"/sys/devices/system/cpu"
"/sys/devices/system/node"
];
}

View File

@ -56,7 +56,7 @@
home = "/home/Computational/rpenacob";
description = "Raúl Peñacoba";
group = "Computational";
hosts = [ "apex" "owl1" "owl2" "hut" "tent" "fox" ];
hosts = [ "owl1" "owl2" "hut" "tent" "fox" ];
hashedPassword = "$6$TZm3bDIFyPrMhj1E$uEDXoYYd1z2Wd5mMPfh3DZAjP7ztVjJ4ezIcn82C0ImqafPA.AnTmcVftHEzLB3tbe2O4SxDyPSDEQgJ4GOtj/";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFYfXg37mauGeurqsLpedgA2XQ9d4Nm0ZGo/hI1f7wwH rpenacob@bsc"
@ -69,10 +69,10 @@
home = "/home/Computational/anavarro";
description = "Antoni Navarro";
group = "Computational";
hosts = [ "apex" "hut" "tent" "raccoon" "fox" "weasel" ];
hashedPassword = "$6$EgturvVYXlKgP43g$gTN78LLHIhaF8hsrCXD.O6mKnZSASWSJmCyndTX8QBWT6wTlUhcWVAKz65lFJPXjlJA4u7G1ydYQ0GG6Wk07b1";
hosts = [ "hut" "tent" "raccoon" "fox" ];
hashedPassword = "$6$QdNDsuLehoZTYZlb$CDhCouYDPrhoiB7/seu7RF.Gqg4zMQz0n5sA4U1KDgHaZOxy2as9pbIGeF8tOHJKRoZajk5GiaZv0rZMn7Oq31";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMsbM21uepnJwPrRe6jYFz8zrZ6AYMtSEvvt4c9spmFP toni@delltoni"
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILWjRSlKgzBPZQhIeEtk6Lvws2XNcYwHcwPv4osSgst5 anavarro@ssfhead"
];
};
@ -82,7 +82,7 @@
home = "/home/Computational/abonerib";
description = "Aleix Boné";
group = "Computational";
hosts = [ "apex" "owl1" "owl2" "hut" "tent" "raccoon" "fox" "weasel" ];
hosts = [ "owl1" "owl2" "hut" "tent" "raccoon" "fox" ];
hashedPassword = "$6$V1EQWJr474whv7XJ$OfJ0wueM2l.dgiJiiah0Tip9ITcJ7S7qDvtSycsiQ43QBFyP4lU0e0HaXWps85nqB4TypttYR4hNLoz3bz662/";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIIFiqXqt88VuUfyANkZyLJNiuroIITaGlOOTMhVDKjf abonerib@bsc"
@ -95,7 +95,7 @@
home = "/home/Computational/vlopez";
description = "Victor López";
group = "Computational";
hosts = [ "apex" "koro" ];
hosts = [ "koro" ];
hashedPassword = "$6$0ZBkgIYE/renVqtt$1uWlJsb0FEezRVNoETTzZMx4X2SvWiOsKvi0ppWCRqI66S6TqMBXBdP4fcQyvRRBt0e4Z7opZIvvITBsEtO0f0";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGMwlUZRf9jfG666Qa5Sb+KtEhXqkiMlBV2su3x/dXHq victor@arch"
@ -108,7 +108,7 @@
home = "/home/Computational/dbautist";
description = "Dylan Bautista Cases";
group = "Computational";
hosts = [ "apex" "hut" "tent" "raccoon" ];
hosts = [ "hut" "tent" "raccoon" ];
hashedPassword = "$6$a2lpzMRVkG9nSgIm$12G6.ka0sFX1YimqJkBAjbvhRKZ.Hl090B27pdbnQOW0wzyxVWySWhyDDCILjQELky.HKYl9gqOeVXW49nW7q/";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAb+EQBoS98zrCwnGKkHKwMLdYABMTqv7q9E0+T0QmkS dbautist@bsc-848818791"
@ -121,7 +121,7 @@
home = "/home/Computational/dalvare1";
description = "David Álvarez";
group = "Computational";
hosts = [ "apex" "hut" "tent" "fox" ];
hosts = [ "hut" "tent" "fox" ];
hashedPassword = "$6$mpyIsV3mdq.rK8$FvfZdRH5OcEkUt5PnIUijWyUYZvB1SgeqxpJ2p91TTe.3eQIDTcLEQ5rxeg.e5IEXAZHHQ/aMsR5kPEujEghx0";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGEfy6F4rF80r4Cpo2H5xaWqhuUZzUsVsILSKGJzt5jF dalvare1@ssfhead"
@ -134,7 +134,7 @@
home = "/home/Computational/varcila";
description = "Vincent Arcila";
group = "Computational";
hosts = [ "apex" "hut" "tent" "fox" ];
hosts = [ "hut" "tent" "fox" ];
hashedPassword = "$6$oB0Tcn99DcM4Ch$Vn1A0ulLTn/8B2oFPi9wWl/NOsJzaFAWjqekwcuC9sMC7cgxEVb.Nk5XSzQ2xzYcNe5MLtmzkVYnRS1CqP39Y0";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKGt0ESYxekBiHJQowmKpfdouw0hVm3N7tUMtAaeLejK vincent@varch"
@ -154,32 +154,6 @@
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIV5LEAII5rfe1hYqDYIIrhb1gOw7RcS1p2mhOTqG+zc pedro@pedro-ThinkPad-P14s-Gen-2a"
];
};
csiringo = {
uid = 9653;
isNormalUser = true;
home = "/home/Computational/csiringo";
description = "Cesare Siringo";
group = "Computational";
hosts = [ ];
hashedPassword = "$6$0IsZlju8jFukLlAw$VKm0FUXbS.mVmPm3rcJeizTNU4IM5Nmmy21BvzFL.cQwvlGwFI1YWRQm6gsbd4nbg47mPDvYkr/ar0SlgF6GO1";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHA65zvvG50iuFEMf+guRwZB65jlGXfGLF4HO+THFaed csiringo@bsc.es"
];
};
acinca = {
uid = 9654;
isNormalUser = true;
home = "/home/Computational/acinca";
description = "Arnau Cinca";
group = "Computational";
hosts = [ "apex" "hut" "fox" "owl1" "owl2" ];
hashedPassword = "$6$S6PUeRpdzYlidxzI$szyvWejQ4hEN76yBYhp1diVO5ew1FFg.cz4lKiXt2Idy4XdpifwrFTCIzLTs5dvYlR62m7ekA5MrhcVxR5F/q/";
openssh.authorizedKeys.keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFmMqKqPg4uocNOr3O41kLbZMOMJn3m2ZdN1JvTR96z3 bsccns@arnau-bsc"
];
};
};
groups = {

View File

@ -3,8 +3,7 @@
imports = [
./xeon.nix
./ssf/fs.nix
./ssf/hosts.nix
./ssf/hosts-remote.nix
./ssf/net.nix
./ssf/ssh.nix
];
}

View File

@ -1,9 +0,0 @@
{ pkgs, ... }:
{
networking.hosts = {
# Remote hosts visible from compute nodes
"10.106.0.236" = [ "raccoon" ];
"10.0.44.4" = [ "tent" ];
};
}

View File

@ -1,23 +0,0 @@
{ pkgs, ... }:
{
networking.hosts = {
# Login
"10.0.40.30" = [ "apex" ];
# Storage
"10.0.40.40" = [ "bay" ]; "10.0.42.40" = [ "bay-ib" ]; "10.0.40.141" = [ "bay-ipmi" ];
"10.0.40.41" = [ "oss01" ]; "10.0.42.41" = [ "oss01-ib0" ]; "10.0.40.142" = [ "oss01-ipmi" ];
"10.0.40.42" = [ "lake2" ]; "10.0.42.42" = [ "lake2-ib" ]; "10.0.40.143" = [ "lake2-ipmi" ];
# Xeon compute
"10.0.40.1" = [ "owl1" ]; "10.0.42.1" = [ "owl1-ib" ]; "10.0.40.101" = [ "owl1-ipmi" ];
"10.0.40.2" = [ "owl2" ]; "10.0.42.2" = [ "owl2-ib" ]; "10.0.40.102" = [ "owl2-ipmi" ];
"10.0.40.3" = [ "xeon03" ]; "10.0.42.3" = [ "xeon03-ib" ]; "10.0.40.103" = [ "xeon03-ipmi" ];
#"10.0.40.4" = [ "tent" ]; "10.0.42.4" = [ "tent-ib" ]; "10.0.40.104" = [ "tent-ipmi" ];
"10.0.40.5" = [ "koro" ]; "10.0.42.5" = [ "koro-ib" ]; "10.0.40.105" = [ "koro-ipmi" ];
"10.0.40.6" = [ "weasel" ]; "10.0.42.6" = [ "weasel-ib" ]; "10.0.40.106" = [ "weasel-ipmi" ];
"10.0.40.7" = [ "hut" ]; "10.0.42.7" = [ "hut-ib" ]; "10.0.40.107" = [ "hut-ipmi" ];
"10.0.40.8" = [ "eudy" ]; "10.0.42.8" = [ "eudy-ib" ]; "10.0.40.108" = [ "eudy-ipmi" ];
};
}

View File

@ -9,6 +9,14 @@
defaultGateway = "10.0.40.30";
nameservers = ["8.8.8.8"];
proxy = {
default = "http://hut:23080/";
noProxy = "127.0.0.1,localhost,internal.domain,10.0.40.40,hut";
# Don't set all_proxy as go complains and breaks the gitlab runner, see:
# https://github.com/golang/go/issues/16715
allProxy = null;
};
firewall = {
extraCommands = ''
# Prevent ssfhead from contacting our slurmd daemon
@ -19,5 +27,64 @@
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 60000:61000 -j nixos-fw-accept
'';
};
extraHosts = ''
10.0.40.30 ssfhead
# Node Entry for node: mds01 (ID=72)
10.0.40.40 bay mds01 mds01-eth0
10.0.42.40 bay-ib mds01-ib0
10.0.40.141 bay-ipmi mds01-ipmi0 mds01-ipmi
# Node Entry for node: oss01 (ID=73)
10.0.40.41 oss01 oss01-eth0
10.0.42.41 oss01-ib0
10.0.40.142 oss01-ipmi0 oss01-ipmi
# Node Entry for node: oss02 (ID=74)
10.0.40.42 lake2 oss02 oss02-eth0
10.0.42.42 lake2-ib oss02-ib0
10.0.40.143 lake2-ipmi oss02-ipmi0 oss02-ipmi
# Node Entry for node: xeon01 (ID=15)
10.0.40.1 owl1 xeon01 xeon01-eth0
10.0.42.1 owl1-ib xeon01-ib0
10.0.40.101 owl1-ipmi xeon01-ipmi0 xeon01-ipmi
# Node Entry for node: xeon02 (ID=16)
10.0.40.2 owl2 xeon02 xeon02-eth0
10.0.42.2 owl2-ib xeon02-ib0
10.0.40.102 owl2-ipmi xeon02-ipmi0 xeon02-ipmi
# Node Entry for node: xeon03 (ID=17)
10.0.40.3 xeon03 xeon03-eth0
10.0.42.3 xeon03-ib0
10.0.40.103 xeon03-ipmi0 xeon03-ipmi
# Node Entry for node: xeon04 (ID=18)
10.0.40.4 xeon04 xeon04-eth0
10.0.42.4 xeon04-ib0
10.0.40.104 xeon04-ipmi0 xeon04-ipmi
# Node Entry for node: xeon05 (ID=19)
10.0.40.5 koro xeon05 xeon05-eth0
10.0.42.5 koro-ib xeon05-ib0
10.0.40.105 koro-ipmi xeon05-ipmi0
# Node Entry for node: xeon06 (ID=20)
10.0.40.6 xeon06 xeon06-eth0
10.0.42.6 xeon06-ib0
10.0.40.106 xeon06-ipmi0 xeon06-ipmi
# Node Entry for node: xeon07 (ID=21)
10.0.40.7 hut xeon07 xeon07-eth0
10.0.42.7 hut-ib xeon07-ib0
10.0.40.107 hut-ipmi xeon07-ipmi0 xeon07-ipmi
# Node Entry for node: xeon08 (ID=22)
10.0.40.8 eudy xeon08 xeon08-eth0
10.0.42.8 eudy-ib xeon08-ib0
10.0.40.108 eudy-ipmi xeon08-ipmi0 xeon08-ipmi
'';
};
}

8
m/common/ssf/ssh.nix Normal file
View File

@ -0,0 +1,8 @@
{
# Connect to intranet git hosts via proxy
programs.ssh.extraConfig = ''
# Connect to BSC machines via hut proxy too
Host amdlogin1.bsc.es armlogin1.bsc.es hualogin1.bsc.es glogin1.bsc.es glogin2.bsc.es fpgalogin1.bsc.es
ProxyCommand nc -X connect -x hut:23080 %h %p
'';
}

View File

@ -9,7 +9,6 @@
./cpufreq.nix
./fs.nix
./users.nix
../module/hut-substituter.nix
../module/debuginfod.nix
];

File diff suppressed because it is too large

10333
m/eudy/kernel/configs/lockdep Normal file

File diff suppressed because it is too large

View File

@ -4,18 +4,9 @@
imports = [
../common/base.nix
../common/xeon/console.nix
../module/amd-uprof.nix
../module/emulation.nix
../module/nvidia.nix
../module/slurm-client.nix
../module/hut-substituter.nix
./wireguard.nix
];
# Don't turn off in August as UPC has different dates.
# Fox works fine on power cuts.
systemd.timers.august-shutdown.enable = false;
# Select the disk using its ID to avoid mismatches
boot.loader.grub.device = "/dev/disk/by-id/wwn-0x500a07514b0c1103";
@ -23,7 +14,7 @@
swapDevices = lib.mkForce [];
boot.initrd.availableKernelModules = [ "xhci_pci" "ahci" "nvme" "usbhid" "usb_storage" "sd_mod" ];
boot.kernelModules = [ "kvm-amd" "amd_uncore" "amd_hsmp" ];
boot.kernelModules = [ "kvm-amd" "amd_uncore" ];
hardware.cpu.amd.updateMicrocode = lib.mkDefault config.hardware.enableRedistributableFirmware;
hardware.cpu.intel.updateMicrocode = lib.mkForce false;
@ -31,21 +22,14 @@
# Use performance for benchmarks
powerManagement.cpuFreqGovernor = "performance";
services.amd-uprof.enable = true;
# Disable NUMA balancing
boot.kernel.sysctl."kernel.numa_balancing" = 0;
# Expose kernel addresses
boot.kernel.sysctl."kernel.kptr_restrict" = 0;
# Disable NMI watchdog to save one hw counter (for AMD uProf)
boot.kernel.sysctl."kernel.nmi_watchdog" = 0;
services.openssh.settings.X11Forwarding = true;
services.fail2ban.enable = true;
networking = {
timeServers = [ "ntp1.upc.edu" "ntp2.upc.edu" ];
hostName = "fox";
@ -63,20 +47,23 @@
interfaces.enp1s0f0np0.useDHCP = true;
};
# Recommended for new graphics cards
hardware.nvidia.open = true;
# Use hut for cache
nix.settings = {
extra-substituters = [ "https://jungle.bsc.es/cache" ];
extra-trusted-public-keys = [ "jungle.bsc.es:pEc7MlAT0HEwLQYPtpkPLwRsGf80ZI26aj29zMw/HH0=" ];
};
# Configure Nvidia driver to use with CUDA
hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.production;
hardware.graphics.enable = true;
nixpkgs.config.allowUnfree = true;
nixpkgs.config.nvidia.acceptLicense = true;
services.xserver.videoDrivers = [ "nvidia" ];
# Mount NVME disks
fileSystems."/nvme0" = { device = "/dev/disk/by-label/nvme0"; fsType = "ext4"; };
fileSystems."/nvme1" = { device = "/dev/disk/by-label/nvme1"; fsType = "ext4"; };
# Mount the NFS home
fileSystems."/nfs/home" = {
device = "10.106.0.30:/home";
fsType = "nfs";
options = [ "nfsvers=3" "rsize=1024" "wsize=1024" "cto" "nofail" ];
};
# Make a /nvme{0,1}/$USER directory for each user.
systemd.services.create-nvme-dirs = let
# Take only normal users in fox
@ -93,20 +80,4 @@
wantedBy = [ "multi-user.target" ];
serviceConfig.ExecStart = script;
};
# Only allow SSH connections from users who have a SLURM allocation
# See: https://slurm.schedmd.com/pam_slurm_adopt.html
security.pam.services.sshd.rules.account.slurm = {
control = "required";
enable = true;
modulePath = "${pkgs.slurm}/lib/security/pam_slurm_adopt.so";
args = [ "log_level=debug5" ];
order = 999999; # Make it last one
};
# Disable systemd session (pam_systemd.so) as it will conflict with the
# pam_slurm_adopt.so module. What happens is that the shell is first adopted
# into the slurmstepd task and then into the systemd session, which is not
# what we want, otherwise it will linger even if all jobs are gone.
security.pam.services.sshd.startSession = lib.mkForce false;
}

View File

@ -1,54 +0,0 @@
{ config, ... }:
{
networking.firewall = {
allowedUDPPorts = [ 666 ];
};
age.secrets.wgFox.file = ../../secrets/wg-fox.age;
networking.wireguard.enable = true;
networking.wireguard.interfaces = {
# "wg0" is the network interface name. You can name the interface arbitrarily.
wg0 = {
# Determines the IP address and subnet of the server's end of the tunnel interface.
ips = [ "10.106.0.1/24" ];
# The port that WireGuard listens to. Must be accessible by the client.
listenPort = 666;
# Path to the private key file.
privateKeyFile = config.age.secrets.wgFox.path;
# Public key: VfMPBQLQTKeyXJSwv8wBhc6OV0j2qAxUpX3kLHunK2Y=
peers = [
# List of allowed peers.
{
name = "apex";
publicKey = "VwhcN8vSOzdJEotQTpmPHBC52x3Hbv1lkFIyKubrnUA=";
# List of IPs assigned to this peer within the tunnel subnet. Used to configure routing.
allowedIPs = [ "10.106.0.30/32" "10.0.40.7/32" ];
}
{
name = "raccoon";
publicKey = "QUfnGXSMEgu2bviglsaSdCjidB51oEDBFpnSFcKGfDI=";
allowedIPs = [ "10.106.0.236/32" "192.168.0.0/16" "10.0.44.0/24" ];
}
];
};
};
networking.hosts = {
"10.106.0.30" = [ "apex" ];
"10.0.40.7" = [ "hut" ];
"10.106.0.236" = [ "raccoon" ];
"10.0.44.4" = [ "tent" ];
};
networking.firewall = {
extraCommands = ''
# Accept slurm connections to slurmd from apex (via wireguard)
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.30/32 -d 10.106.0.1/32 --dport 6818 -j nixos-fw-accept
'';
};
}

View File

@ -3,12 +3,160 @@ modules:
prober: http
timeout: 5s
http:
proxy_url: "http://127.0.0.1:23080"
skip_resolve_phase_with_proxy: true
follow_redirects: true
preferred_ip_protocol: "ip4"
valid_status_codes: [] # Defaults to 2xx
method: GET
http_with_proxy:
prober: http
http:
proxy_url: "http://127.0.0.1:3128"
skip_resolve_phase_with_proxy: true
http_with_proxy_and_headers:
prober: http
http:
proxy_url: "http://127.0.0.1:3128"
proxy_connect_header:
Proxy-Authorization:
- Bearer token
http_post_2xx:
prober: http
timeout: 5s
http:
method: POST
headers:
Content-Type: application/json
body: '{}'
http_post_body_file:
prober: http
timeout: 5s
http:
method: POST
body_file: "/files/body.txt"
http_basic_auth_example:
prober: http
timeout: 5s
http:
method: POST
headers:
Host: "login.example.com"
basic_auth:
username: "username"
password: "mysecret"
http_2xx_oauth_client_credentials:
prober: http
timeout: 5s
http:
valid_http_versions: ["HTTP/1.1", "HTTP/2"]
follow_redirects: true
preferred_ip_protocol: "ip4"
valid_status_codes:
- 200
- 201
oauth2:
client_id: "client_id"
client_secret: "client_secret"
token_url: "https://api.example.com/token"
endpoint_params:
grant_type: "client_credentials"
http_custom_ca_example:
prober: http
http:
method: GET
tls_config:
ca_file: "/certs/my_cert.crt"
http_gzip:
prober: http
http:
method: GET
compression: gzip
http_gzip_with_accept_encoding:
prober: http
http:
method: GET
compression: gzip
headers:
Accept-Encoding: gzip
tls_connect:
prober: tcp
timeout: 5s
tcp:
tls: true
tcp_connect_example:
prober: tcp
timeout: 5s
imap_starttls:
prober: tcp
timeout: 5s
tcp:
query_response:
- expect: "OK.*STARTTLS"
- send: ". STARTTLS"
- expect: "OK"
- starttls: true
- send: ". capability"
- expect: "CAPABILITY IMAP4rev1"
smtp_starttls:
prober: tcp
timeout: 5s
tcp:
query_response:
- expect: "^220 ([^ ]+) ESMTP (.+)$"
- send: "EHLO prober\r"
- expect: "^250-STARTTLS"
- send: "STARTTLS\r"
- expect: "^220"
- starttls: true
- send: "EHLO prober\r"
- expect: "^250-AUTH"
- send: "QUIT\r"
irc_banner_example:
prober: tcp
timeout: 5s
tcp:
query_response:
- send: "NICK prober"
- send: "USER prober prober prober :prober"
- expect: "PING :([^ ]+)"
send: "PONG ${1}"
- expect: "^:[^ ]+ 001"
icmp:
prober: icmp
timeout: 5s
icmp:
preferred_ip_protocol: "ip4"
dns_udp_example:
prober: dns
timeout: 5s
dns:
query_name: "www.prometheus.io"
query_type: "A"
valid_rcodes:
- NOERROR
validate_answer_rrs:
fail_if_matches_regexp:
- ".*127.0.0.1"
fail_if_all_match_regexp:
- ".*127.0.0.1"
fail_if_not_matches_regexp:
- "www.prometheus.io.\t300\tIN\tA\t127.0.0.1"
fail_if_none_matches_regexp:
- "127.0.0.1"
validate_authority_rrs:
fail_if_matches_regexp:
- ".*127.0.0.1"
validate_additional_rrs:
fail_if_matches_regexp:
- ".*127.0.0.1"
dns_soa:
prober: dns
dns:
query_name: "prometheus.io"
query_type: "SOA"
dns_tcp_example:
prober: dns
dns:
transport_protocol: "tcp" # defaults to "udp"
preferred_ip_protocol: "ip4" # defaults to "ip6"
query_name: "www.prometheus.io"

View File

@ -7,9 +7,11 @@
../module/ceph.nix
../module/debuginfod.nix
../module/emulation.nix
../module/slurm-client.nix
./gitlab-runner.nix
./monitoring.nix
./nfs.nix
./slurm-server.nix
./nix-serve.nix
./public-inbox.nix
./gitea.nix

View File

@ -2,10 +2,20 @@
N=500
t=$(timeout 5 ssh bsc015557@glogin2.bsc.es "timeout 3 command time -f %e touch /gpfs/projects/bsc15/bsc015557/gpfs.{1..$N} 2>&1; rm -f /gpfs/projects/bsc15/bsc015557/gpfs.{1..$N}")
t_proj=$(timeout 5 ssh bsc015557@glogin2.bsc.es "timeout 3 command time -f %e touch /gpfs/projects/bsc15/bsc015557/gpfs.{1..$N} 2>&1; rm -f /gpfs/projects/bsc15/bsc015557/gpfs.{1..$N}")
t_scratch=$(timeout 5 ssh bsc015557@glogin2.bsc.es "timeout 3 command time -f %e touch /gpfs/scratch/bsc15/rodrigo/probe/gpfs.{1..$N} 2>&1; rm -f /gpfs/scratch/bsc15/rodrigo/probe/gpfs.{1..$N}")
t_home=$(timeout 5 ssh bsc015557@glogin2.bsc.es "timeout 3 command time -f %e touch /home/bsc/bsc015557/.gpfs/{1..$N} 2>&1; rm -f /home/bsc/bsc015557/.gpfs/{1..$N}")
if [ -z "$t" ]; then
t="5.00"
if [ -z "$t_proj" ]; then
t_proj="5.00"
fi
if [ -z "$t_scratch" ]; then
t_scratch="5.00"
fi
if [ -z "$t_home" ]; then
t_home="5.00"
fi
cat <<EOF
@ -14,5 +24,7 @@ Content-Type: text/plain; version=0.0.4; charset=utf-8; escaping=values
# HELP gpfs_touch_latency Time to create $N files.
# TYPE gpfs_touch_latency gauge
gpfs_touch_latency $t
gpfs_touch_latency{partition="projects"} $t_proj
gpfs_touch_latency{partition="home"} $t_home
gpfs_touch_latency{partition="scratch"} $t_scratch
EOF

View File

@ -267,6 +267,14 @@
}
];
}
{
job_name = "tent";
static_configs = [
{
targets = [ "127.0.0.1:29002" ]; # Node exporter
}
];
}
];
};
}

View File

@ -2,13 +2,10 @@
let
website = pkgs.stdenv.mkDerivation {
name = "jungle-web";
src = pkgs.fetchgit {
url = "https://jungle.bsc.es/git/rarias/jungle-website.git";
rev = "739bf0175a7f05380fe7ad7023ff1d60db1710e1";
hash = "sha256-ea5DzhYTzZ9TmqD+x95rdNdLbxPnBluqlYH2NmBYmc4=";
};
src = theFlake;
buildInputs = [ pkgs.hugo ];
buildPhase = ''
cd web
rm -rf public/
hugo
'';

7
m/hut/slurm-server.nix Normal file
View File

@ -0,0 +1,7 @@
{ ... }:
{
services.slurm = {
server.enable = true;
};
}

View File

@ -4,7 +4,7 @@
- xeon03-ipmi
- xeon04-ipmi
- koro-ipmi
- weasel-ipmi
- xeon06-ipmi
- hut-ipmi
- eudy-ipmi
# Storage

View File

@ -4,7 +4,6 @@
imports = [
../common/ssf.nix
../module/monitoring.nix
../module/hut-substituter.nix
];
boot.loader.grub.device = "/dev/disk/by-id/wwn-0x55cd2e414d53563a";

View File

@ -6,7 +6,7 @@
switch-opa = { pos=41; size=1; };
# SSF login
apex = { pos=39; size=2; label="SSFHEAD"; board="R2208WTTYSR"; contact="rodrigo.arias@bsc.es"; };
ssfhead = { pos=39; size=2; label="SSFHEAD"; board="R2208WTTYSR"; contact="operations@bsc.es"; };
# Storage
bay = { pos=38; size=1; label="MDS01"; board="S2600WT2R"; sn="BQWL64850303"; contact="rodrigo.arias@bsc.es"; };
@ -19,7 +19,7 @@
xeon03 = { pos=33; size=1; label="SSF-XEON03"; board="S2600WTTR"; sn="BQWL64750826"; contact="rodrigo.arias@bsc.es"; };
# Slot 34 empty
koro = { pos=31; size=1; label="SSF-XEON05"; board="S2600WTTR"; sn="BQWL64954293"; contact="rodrigo.arias@bsc.es"; };
weasel = { pos=30; size=1; label="SSF-XEON06"; board="S2600WTTR"; sn="BQWL64750846"; contact="antoni.navarro@bsc.es"; };
xeon06 = { pos=30; size=1; label="SSF-XEON06"; board="S2600WTTR"; sn="BQWL64750846"; contact="antoni.navarro@bsc.es"; };
hut = { pos=29; size=1; label="SSF-XEON07"; board="S2600WTTR"; sn="BQWL64751184"; contact="rodrigo.arias@bsc.es"; };
eudy = { pos=28; size=1; label="SSF-XEON08"; board="S2600WTTR"; sn="BQWL64756586"; contact="aleix.rocanonell@bsc.es"; };

View File

@ -1,357 +0,0 @@
{
config,
options,
lib,
pkgs,
...
}:
with lib;
let
cfg = config.age;
isDarwin = lib.attrsets.hasAttrByPath [ "environment" "darwinConfig" ] options;
ageBin = config.age.ageBin;
users = config.users.users;
sysusersEnabled =
if isDarwin then
false
else
options.systemd ? sysusers && (config.systemd.sysusers.enable || config.services.userborn.enable);
mountCommand =
if isDarwin then
''
if ! diskutil info "${cfg.secretsMountPoint}" &> /dev/null; then
num_sectors=1048576
dev=$(hdiutil attach -nomount ram://"$num_sectors" | sed 's/[[:space:]]*$//')
newfs_hfs -v agenix "$dev"
mount -t hfs -o nobrowse,nodev,nosuid,-m=0751 "$dev" "${cfg.secretsMountPoint}"
fi
''
else
''
grep -q "${cfg.secretsMountPoint} ramfs" /proc/mounts ||
mount -t ramfs none "${cfg.secretsMountPoint}" -o nodev,nosuid,mode=0751
'';
newGeneration = ''
_agenix_generation="$(basename "$(readlink ${cfg.secretsDir})" || echo 0)"
(( ++_agenix_generation ))
echo "[agenix] creating new generation in ${cfg.secretsMountPoint}/$_agenix_generation"
mkdir -p "${cfg.secretsMountPoint}"
chmod 0751 "${cfg.secretsMountPoint}"
${mountCommand}
mkdir -p "${cfg.secretsMountPoint}/$_agenix_generation"
chmod 0751 "${cfg.secretsMountPoint}/$_agenix_generation"
'';
chownGroup = if isDarwin then "admin" else "keys";
# chown the secrets mountpoint and the current generation to the keys group
# instead of leaving it root:root.
chownMountPoint = ''
chown :${chownGroup} "${cfg.secretsMountPoint}" "${cfg.secretsMountPoint}/$_agenix_generation"
'';
setTruePath = secretType: ''
${
if secretType.symlink then
''
_truePath="${cfg.secretsMountPoint}/$_agenix_generation/${secretType.name}"
''
else
''
_truePath="${secretType.path}"
''
}
'';
installSecret = secretType: ''
${setTruePath secretType}
echo "decrypting '${secretType.file}' to '$_truePath'..."
TMP_FILE="$_truePath.tmp"
IDENTITIES=()
for identity in ${toString cfg.identityPaths}; do
test -r "$identity" || continue
test -s "$identity" || continue
IDENTITIES+=(-i)
IDENTITIES+=("$identity")
done
test "''${#IDENTITIES[@]}" -eq 0 && echo "[agenix] WARNING: no readable identities found!"
mkdir -p "$(dirname "$_truePath")"
[ "${secretType.path}" != "${cfg.secretsDir}/${secretType.name}" ] && mkdir -p "$(dirname "${secretType.path}")"
(
umask u=r,g=,o=
test -f "${secretType.file}" || echo '[agenix] WARNING: encrypted file ${secretType.file} does not exist!'
test -d "$(dirname "$TMP_FILE")" || echo "[agenix] WARNING: $(dirname "$TMP_FILE") does not exist!"
LANG=${
config.i18n.defaultLocale or "C"
} ${ageBin} --decrypt "''${IDENTITIES[@]}" -o "$TMP_FILE" "${secretType.file}"
)
chmod ${secretType.mode} "$TMP_FILE"
mv -f "$TMP_FILE" "$_truePath"
${optionalString secretType.symlink ''
[ "${secretType.path}" != "${cfg.secretsDir}/${secretType.name}" ] && ln -sfT "${cfg.secretsDir}/${secretType.name}" "${secretType.path}"
''}
'';
testIdentities = map (path: ''
test -f ${path} || echo '[agenix] WARNING: config.age.identityPaths entry ${path} not present!'
'') cfg.identityPaths;
cleanupAndLink = ''
_agenix_generation="$(basename "$(readlink ${cfg.secretsDir})" || echo 0)"
(( ++_agenix_generation ))
echo "[agenix] symlinking new secrets to ${cfg.secretsDir} (generation $_agenix_generation)..."
ln -sfT "${cfg.secretsMountPoint}/$_agenix_generation" ${cfg.secretsDir}
(( _agenix_generation > 1 )) && {
echo "[agenix] removing old secrets (generation $(( _agenix_generation - 1 )))..."
rm -rf "${cfg.secretsMountPoint}/$(( _agenix_generation - 1 ))"
}
'';
installSecrets = builtins.concatStringsSep "\n" (
[ "echo '[agenix] decrypting secrets...'" ]
++ testIdentities
++ (map installSecret (builtins.attrValues cfg.secrets))
++ [ cleanupAndLink ]
);
chownSecret = secretType: ''
${setTruePath secretType}
chown ${secretType.owner}:${secretType.group} "$_truePath"
'';
chownSecrets = builtins.concatStringsSep "\n" (
[ "echo '[agenix] chowning...'" ]
++ [ chownMountPoint ]
++ (map chownSecret (builtins.attrValues cfg.secrets))
);
secretType = types.submodule (
{ config, ... }:
{
options = {
name = mkOption {
type = types.str;
default = config._module.args.name;
defaultText = literalExpression "config._module.args.name";
description = ''
Name of the file used in {option}`age.secretsDir`
'';
};
file = mkOption {
type = types.path;
description = ''
Age file the secret is loaded from.
'';
};
path = mkOption {
type = types.str;
default = "${cfg.secretsDir}/${config.name}";
defaultText = literalExpression ''
"''${cfg.secretsDir}/''${config.name}"
'';
description = ''
Path where the decrypted secret is installed.
'';
};
mode = mkOption {
type = types.str;
default = "0400";
description = ''
Permissions mode of the decrypted secret in a format understood by chmod.
'';
};
owner = mkOption {
type = types.str;
default = "0";
description = ''
User of the decrypted secret.
'';
};
group = mkOption {
type = types.str;
default = users.${config.owner}.group or "0";
defaultText = literalExpression ''
users.''${config.owner}.group or "0"
'';
description = ''
Group of the decrypted secret.
'';
};
symlink = mkEnableOption "symlinking secrets to their destination" // {
default = true;
};
};
}
);
in
{
imports = [
(mkRenamedOptionModule [ "age" "sshKeyPaths" ] [ "age" "identityPaths" ])
];
options.age = {
ageBin = mkOption {
type = types.str;
default = "${pkgs.age}/bin/age";
defaultText = literalExpression ''
"''${pkgs.age}/bin/age"
'';
description = ''
The age executable to use.
'';
};
secrets = mkOption {
type = types.attrsOf secretType;
default = { };
description = ''
Attrset of secrets.
'';
};
secretsDir = mkOption {
type = types.path;
default = "/run/agenix";
description = ''
Folder where secrets are symlinked to
'';
};
secretsMountPoint = mkOption {
type =
types.addCheck types.str (
s:
(builtins.match "[ \t\n]*" s) == null # non-empty
&& (builtins.match ".+/" s) == null
) # without trailing slash
// {
description = "${types.str.description} (with check: non-empty without trailing slash)";
};
default = "/run/agenix.d";
description = ''
Where secrets are created before they are symlinked to {option}`age.secretsDir`
'';
};
identityPaths = mkOption {
type = types.listOf types.path;
default =
if isDarwin then
[
"/etc/ssh/ssh_host_ed25519_key"
"/etc/ssh/ssh_host_rsa_key"
]
else if (config.services.openssh.enable or false) then
map (e: e.path) (
lib.filter (e: e.type == "rsa" || e.type == "ed25519") config.services.openssh.hostKeys
)
else
[ ];
defaultText = literalExpression ''
if isDarwin
then [
"/etc/ssh/ssh_host_ed25519_key"
"/etc/ssh/ssh_host_rsa_key"
]
else if (config.services.openssh.enable or false)
then map (e: e.path) (lib.filter (e: e.type == "rsa" || e.type == "ed25519") config.services.openssh.hostKeys)
else [];
'';
description = ''
Path to SSH keys to be used as identities in age decryption.
'';
};
};
config = mkIf (cfg.secrets != { }) (mkMerge [
{
assertions = [
{
assertion = cfg.identityPaths != [ ];
message = "age.identityPaths must be set, for example by enabling openssh.";
}
];
}
(optionalAttrs (!isDarwin) {
# When using sysusers we can no longer be started as an activation script
# because those are started in initrd while sysusers is started later.
systemd.services.agenix-install-secrets = mkIf sysusersEnabled {
wantedBy = [ "sysinit.target" ];
after = [ "systemd-sysusers.service" ];
unitConfig.DefaultDependencies = "no";
path = [ pkgs.mount ];
serviceConfig = {
Type = "oneshot";
ExecStart = pkgs.writeShellScript "agenix-install" (concatLines [
newGeneration
installSecrets
chownSecrets
]);
RemainAfterExit = true;
};
};
# Create a new directory full of secrets for symlinking (this helps
# ensure removed secrets are actually removed, or at least become
# invalid symlinks).
system.activationScripts = mkIf (!sysusersEnabled) {
agenixNewGeneration = {
text = newGeneration;
deps = [
"specialfs"
];
};
agenixInstall = {
text = installSecrets;
deps = [
"agenixNewGeneration"
"specialfs"
];
};
# So user passwords can be encrypted.
users.deps = [ "agenixInstall" ];
# Change ownership and group after users and groups are made.
agenixChown = {
text = chownSecrets;
deps = [
"users"
"groups"
];
};
# So other activation scripts can depend on agenix being done.
agenix = {
text = "";
deps = [ "agenixChown" ];
};
};
})
(optionalAttrs isDarwin {
launchd.daemons.activate-agenix = {
script = ''
set -e
set -o pipefail
export PATH="${pkgs.gnugrep}/bin:${pkgs.coreutils}/bin:@out@/sw/bin:/usr/bin:/bin:/usr/sbin:/sbin"
${newGeneration}
${installSecrets}
${chownSecrets}
exit 0
'';
serviceConfig = {
RunAtLoad = true;
KeepAlive.SuccessfulExit = false;
};
};
})
]);
}

View File

@ -1,49 +0,0 @@
{ config, lib, pkgs, ... }:
{
options = {
services.amd-uprof = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Whether to enable AMD uProf.";
};
};
};
# Only setup amd-uprof if enabled
config = lib.mkIf config.services.amd-uprof.enable {
# First make sure that we add the module to the list of available modules
# in the kernel matching the same kernel version of this configuration.
boot.extraModulePackages = with config.boot.kernelPackages; [ amd-uprof-driver ];
boot.kernelModules = [ "AMDPowerProfiler" ];
# Make the userspace tools available in $PATH.
environment.systemPackages = with pkgs; [ amd-uprof ];
# The AMDPowerProfiler module doesn't create the /dev device nor does it emit
# any uevents, so we cannot use udev rules to automatically create the
# device. Instead, we run a systemd unit that does it after loading the
# modules.
systemd.services.amd-uprof-device = {
description = "Create /dev/AMDPowerProfiler device";
after = [ "systemd-modules-load.service" ];
wantedBy = [ "multi-user.target" ];
unitConfig.ConditionPathExists = [
"/proc/AMDPowerProfiler/device"
"!/dev/AMDPowerProfiler"
];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = pkgs.writeShellScript "add-amd-uprof-dev.sh" ''
mknod /dev/AMDPowerProfiler -m 666 c $(< /proc/AMDPowerProfiler/device) 0
'';
ExecStop = pkgs.writeShellScript "remove-amd-uprof-dev.sh" ''
rm -f /dev/AMDPowerProfiler
'';
};
};
};
}

View File

@ -6,8 +6,5 @@
{
extra-substituters = [ "http://hut/cache" ];
extra-trusted-public-keys = [ "jungle.bsc.es:pEc7MlAT0HEwLQYPtpkPLwRsGf80ZI26aj29zMw/HH0=" ];
# Set a low timeout in case hut is down
connect-timeout = 3; # seconds
};
}

View File

@ -1,20 +0,0 @@
{ lib, config, pkgs, ... }:
{
# Configure Nvidia driver to use with CUDA
hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.production;
hardware.nvidia.open = lib.mkDefault (builtins.abort "hardware.nvidia.open not set");
hardware.graphics.enable = true;
nixpkgs.config.nvidia.acceptLicense = true;
services.xserver.videoDrivers = [ "nvidia" ];
# enable support for derivations which require nvidia-gpu to be available
# > requiredSystemFeatures = [ "cuda" ];
programs.nix-required-mounts.enable = true;
programs.nix-required-mounts.presets.nvidia-gpu.enable = true;
# They forgot to add the symlink
programs.nix-required-mounts.allowedPatterns.nvidia-gpu.paths = [
config.systemd.tmpfiles.settings.graphics-driver."/run/opengl-driver"."L+".argument
];
environment.systemPackages = [ pkgs.cudainfo ];
}

View File

@ -1,33 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.power.policy;
in
{
options = {
power.policy = mkOption {
type = types.nullOr (types.enum [ "always-on" "previous" "always-off" ]);
default = null;
description = "Set power policy to use via IPMI.";
};
};
config = mkIf (cfg != null) {
systemd.services."power-policy" = {
description = "Set power policy to use via IPMI";
wantedBy = [ "multi-user.target" ];
unitConfig = {
StartLimitBurst = "10";
StartLimitIntervalSec = "10m";
};
serviceConfig = {
ExecStart = "${pkgs.ipmitool}/bin/ipmitool chassis policy ${cfg}";
Type = "oneshot";
Restart = "on-failure";
RestartSec = "5s";
};
};
};
}

View File

@ -1,10 +1,33 @@
{ lib, ... }:
{ config, pkgs, lib, ... }:
{
imports = [
./slurm-common.nix
];
let
suspendProgram = pkgs.writeScript "suspend.sh" ''
#!/usr/bin/env bash
exec 1>>/var/log/power_save.log 2>>/var/log/power_save.log
set -x
export "PATH=/run/current-system/sw/bin:$PATH"
echo "$(date) Suspend invoked $0 $*" >> /var/log/power_save.log
hosts=$(scontrol show hostnames $1)
for host in $hosts; do
echo Shutting down host: $host
ipmitool -I lanplus -H ''${host}-ipmi -P "" -U "" chassis power off
done
'';
resumeProgram = pkgs.writeScript "resume.sh" ''
#!/usr/bin/env bash
exec 1>>/var/log/power_save.log 2>>/var/log/power_save.log
set -x
export "PATH=/run/current-system/sw/bin:$PATH"
echo "$(date) Suspend invoked $0 $*" >> /var/log/power_save.log
hosts=$(scontrol show hostnames $1)
for host in $hosts; do
echo Starting host: $host
ipmitool -I lanplus -H ''${host}-ipmi -P "" -U "" chassis power on
done
'';
in {
systemd.services.slurmd.serviceConfig = {
# Kill all processes in the control group on stop/restart. This will kill
# all the jobs running, so ensure that we only upgrade when the nodes are
@ -12,13 +35,92 @@
# https://github.com/NixOS/nixpkgs/commit/ae93ed0f0d4e7be0a286d1fca86446318c0c6ffb
# https://bugs.schedmd.com/show_bug.cgi?id=2095#c24
KillMode = lib.mkForce "control-group";
# If slurmd fails to contact the control server it will fail, causing the
# node to remain out of service until manually restarted. Always try to
# restart it.
Restart = "always";
RestartSec = "30s";
};
services.slurm.client.enable = true;
services.slurm = {
client.enable = true;
controlMachine = "hut";
clusterName = "jungle";
nodeName = [
"owl[1,2] Sockets=2 CoresPerSocket=14 ThreadsPerCore=2 Feature=owl"
"hut Sockets=2 CoresPerSocket=14 ThreadsPerCore=2"
];
partitionName = [
"owl Nodes=owl[1-2] Default=YES DefaultTime=01:00:00 MaxTime=INFINITE State=UP"
];
# See slurm.conf(5) for more details about these options.
extraConfig = ''
# Use PMIx for MPI by default. It works okay with MPICH and OpenMPI, but
# not with Intel MPI. For that use the compatibility shim libpmi.so
# setting I_MPI_PMI_LIBRARY=$pmix/lib/libpmi.so while maintaining the PMIx
# library in SLURM (--mpi=pmix). See more details here:
# https://pm.bsc.es/gitlab/rarias/jungle/-/issues/16
MpiDefault=pmix
# When a node reboots return that node to the slurm queue as soon as it
# becomes operative again.
ReturnToService=2
# Track all processes by using a cgroup
ProctrackType=proctrack/cgroup
# Enable task/affinity to allow the jobs to run in a specified subset of
# the resources. Use the task/cgroup plugin to enable process containment.
TaskPlugin=task/affinity,task/cgroup
# Power off unused nodes until they are requested
SuspendProgram=${suspendProgram}
SuspendTimeout=60
ResumeProgram=${resumeProgram}
ResumeTimeout=300
SuspendExcNodes=hut
# Turn the nodes off after 1 hour of inactivity
SuspendTime=3600
# Reduce port range so we can allow only this range in the firewall
SrunPortRange=60000-61000
# Use cores as consumable resources. In SLURM terms, a core may have
# multiple hardware threads (or CPUs).
SelectType=select/cons_tres
# Ignore memory constraints and only use unused cores to share a node with
# other jobs.
SelectTypeParameters=CR_Core
# Required for pam_slurm_adopt, see https://slurm.schedmd.com/pam_slurm_adopt.html
# This sets up the "extern" step into which ssh-launched processes will be
# adopted. Alloc runs the prolog at job allocation (salloc) rather than
# when a task runs (srun) so we can ssh early.
PrologFlags=Alloc,Contain,X11
# LaunchParameters=ulimit_pam_adopt will set RLIMIT_RSS in processes
# adopted by the external step, similar to tasks running in regular steps
# LaunchParameters=ulimit_pam_adopt
SlurmdDebug=debug5
#DebugFlags=Protocol,Cgroup
'';
extraCgroupConfig = ''
CgroupPlugin=cgroup/v2
#ConstrainCores=yes
'';
};
# Place the slurm config in /etc as this will be required by PAM
environment.etc.slurm.source = config.services.slurm.etcSlurm;
age.secrets.mungeKey = {
file = ../../secrets/munge-key.age;
owner = "munge";
group = "munge";
};
services.munge = {
enable = true;
password = config.age.secrets.mungeKey.path;
};
}

View File

@ -1,115 +0,0 @@
{ config, pkgs, ... }:
let
suspendProgram = pkgs.writeShellScript "suspend.sh" ''
exec 1>>/var/log/power_save.log 2>>/var/log/power_save.log
set -x
export "PATH=/run/current-system/sw/bin:$PATH"
echo "$(date) Suspend invoked $0 $*" >> /var/log/power_save.log
hosts=$(scontrol show hostnames $1)
for host in $hosts; do
echo Shutting down host: $host
ipmitool -I lanplus -H ''${host}-ipmi -P "" -U "" chassis power off
done
'';
resumeProgram = pkgs.writeShellScript "resume.sh" ''
exec 1>>/var/log/power_save.log 2>>/var/log/power_save.log
set -x
export "PATH=/run/current-system/sw/bin:$PATH"
echo "$(date) Suspend invoked $0 $*" >> /var/log/power_save.log
hosts=$(scontrol show hostnames $1)
for host in $hosts; do
echo Starting host: $host
ipmitool -I lanplus -H ''${host}-ipmi -P "" -U "" chassis power on
done
'';
in {
services.slurm = {
controlMachine = "apex";
clusterName = "jungle";
nodeName = [
"owl[1,2] Sockets=2 CoresPerSocket=14 ThreadsPerCore=2 Feature=owl"
"fox Sockets=8 CoresPerSocket=24 ThreadsPerCore=1"
];
partitionName = [
"owl Nodes=owl[1-2] Default=YES DefaultTime=01:00:00 MaxTime=INFINITE State=UP"
"fox Nodes=fox Default=NO DefaultTime=01:00:00 MaxTime=INFINITE State=UP"
];
# See slurm.conf(5) for more details about these options.
extraConfig = ''
# Use PMIx for MPI by default. It works okay with MPICH and OpenMPI, but
# not with Intel MPI. For that use the compatibility shim libpmi.so
# setting I_MPI_PMI_LIBRARY=$pmix/lib/libpmi.so while maintaining the PMIx
# library in SLURM (--mpi=pmix). See more details here:
# https://pm.bsc.es/gitlab/rarias/jungle/-/issues/16
MpiDefault=pmix
# When a node reboots return that node to the slurm queue as soon as it
# becomes operative again.
ReturnToService=2
# Track all processes by using a cgroup
ProctrackType=proctrack/cgroup
# Enable task/affinity to allow the jobs to run in a specified subset of
# the resources. Use the task/cgroup plugin to enable process containment.
TaskPlugin=task/affinity,task/cgroup
# Power off unused nodes until they are requested
SuspendProgram=${suspendProgram}
SuspendTimeout=60
ResumeProgram=${resumeProgram}
ResumeTimeout=300
SuspendExcNodes=fox
# Turn the nodes off after 1 hour of inactivity
SuspendTime=3600
# Reduce port range so we can allow only this range in the firewall
SrunPortRange=60000-61000
# Use cores as consumable resources. In SLURM terms, a core may have
# multiple hardware threads (or CPUs).
SelectType=select/cons_tres
# Ignore memory constraints and only use unused cores to share a node with
# other jobs.
SelectTypeParameters=CR_Core
# Required for pam_slurm_adopt, see https://slurm.schedmd.com/pam_slurm_adopt.html
# This sets up the "extern" step into which ssh-launched processes will be
# adopted. Alloc runs the prolog at job allocation (salloc) rather than
# when a task runs (srun) so we can ssh early.
PrologFlags=Alloc,Contain,X11
# LaunchParameters=ulimit_pam_adopt will set RLIMIT_RSS in processes
# adopted by the external step, similar to tasks running in regular steps
# LaunchParameters=ulimit_pam_adopt
SlurmdDebug=debug5
#DebugFlags=Protocol,Cgroup
'';
extraCgroupConfig = ''
CgroupPlugin=cgroup/v2
#ConstrainCores=yes
'';
};
# Place the slurm config in /etc as this will be required by PAM
environment.etc.slurm.source = config.services.slurm.etcSlurm;
age.secrets.mungeKey = {
file = ../../secrets/munge-key.age;
owner = "munge";
group = "munge";
};
services.munge = {
enable = true;
password = config.age.secrets.mungeKey.path;
};
}

View File

@@ -1,23 +0,0 @@
{ ... }:
{
imports = [
./slurm-common.nix
];
services.slurm.server.enable = true;
networking.firewall = {
extraCommands = ''
# Accept slurm connections to controller from compute nodes
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 6817 -j nixos-fw-accept
# Accept slurm connections from compute nodes for srun
iptables -A nixos-fw -p tcp -s 10.0.40.0/24 --dport 60000:61000 -j nixos-fw-accept
# Accept slurm connections to controller from fox (via wireguard)
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.1/32 --dport 6817 -j nixos-fw-accept
# Accept slurm connections from fox for srun (via wireguard)
iptables -A nixos-fw -p tcp -i wg0 -s 10.106.0.1/32 --dport 60000:61000 -j nixos-fw-accept
'';
};
}

View File

@@ -0,0 +1,9 @@
{
programs.ssh.extraConfig = ''
Host ssfhead
HostName ssflogin.bsc.es
Host hut
ProxyJump ssfhead
HostName xeon07
'';
}
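This client-side snippet makes "ssh hut" resolve through the ssfhead login node via ProxyJump. A rough one-off equivalent without the config, shown only to illustrate what the two Host blocks do (hostnames taken from the config above):

# Jump through the ssfhead login node to reach the hut machine
ssh -J ssflogin.bsc.es xeon07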

View File

@@ -3,13 +3,10 @@
{
imports = [
../common/base.nix
../common/ssf/hosts.nix
../module/emulation.nix
../module/debuginfod.nix
../module/nvidia.nix
../module/ssh-hut-extern.nix
../eudy/kernel/perf.nix
./wireguard.nix
../module/hut-substituter.nix
];
# Don't install Grub on the disk yet
@@ -41,21 +38,26 @@
};
hosts = {
"10.0.44.4" = [ "tent" ];
"84.88.53.236" = [ "apex" ];
};
};
# Mount the NFS home
fileSystems."/nfs/home" = {
device = "10.106.0.30:/home";
fsType = "nfs";
options = [ "nfsvers=3" "rsize=1024" "wsize=1024" "cto" "nofail" ];
nix.settings = {
extra-substituters = [ "https://jungle.bsc.es/cache" ];
extra-trusted-public-keys = [ "jungle.bsc.es:pEc7MlAT0HEwLQYPtpkPLwRsGf80ZI26aj29zMw/HH0=" ];
};
# Enable performance governor
powerManagement.cpuFreqGovernor = "performance";
hardware.nvidia.open = false; # Maxwell is older than Turing architecture
# Configure Nvidia driver to use with CUDA
hardware.nvidia.package = config.boot.kernelPackages.nvidiaPackages.production;
hardware.graphics.enable = true;
nixpkgs.config.allowUnfree = true;
nixpkgs.config.nvidia.acceptLicense = true;
services.xserver.videoDrivers = [ "nvidia" ];
# Disable garbage collection for now
nix.gc.automatic = lib.mkForce false;
services.openssh.settings.X11Forwarding = true;

View File

@@ -1,48 +0,0 @@
{ config, pkgs, ... }:
{
networking.nat = {
enable = true;
enableIPv6 = false;
externalInterface = "eno0";
internalInterfaces = [ "wg0" ];
};
networking.firewall = {
allowedUDPPorts = [ 666 ];
};
age.secrets.wgRaccoon.file = ../../secrets/wg-raccoon.age;
# Enable WireGuard
networking.wireguard.enable = true;
networking.wireguard.interfaces = {
wg0 = {
ips = [ "10.106.0.236/24" ];
listenPort = 666;
privateKeyFile = config.age.secrets.wgRaccoon.path;
# Public key: QUfnGXSMEgu2bviglsaSdCjidB51oEDBFpnSFcKGfDI=
peers = [
{
name = "fox";
publicKey = "VfMPBQLQTKeyXJSwv8wBhc6OV0j2qAxUpX3kLHunK2Y=";
allowedIPs = [ "10.106.0.1/32" ];
endpoint = "fox.ac.upc.edu:666";
persistentKeepalive = 25;
}
{
name = "apex";
publicKey = "VwhcN8vSOzdJEotQTpmPHBC52x3Hbv1lkFIyKubrnUA=";
allowedIPs = [ "10.106.0.30/32" "10.0.40.0/24" ];
endpoint = "ssfhead.bsc.es:666";
persistentKeepalive = 25;
}
];
};
};
networking.hosts = {
"10.106.0.1" = [ "fox.wg" ];
"10.106.0.30" = [ "apex.wg" ];
};
}

View File

@@ -3,9 +3,9 @@
{
imports = [
../common/xeon.nix
../common/ssf/hosts.nix
../module/emulation.nix
../module/debuginfod.nix
../module/ssh-hut-extern.nix
./monitoring.nix
./nginx.nix
./nix-serve.nix
@ -15,7 +15,6 @@
../hut/msmtp.nix
../module/p.nix
../module/vpn-dac.nix
../module/hut-substituter.nix
];
# Select this using the ID to avoid mismatches
@@ -34,10 +33,6 @@
nameservers = [ "84.88.52.35" "84.88.52.36" ];
search = [ "bsc.es" "ac.upc.edu" ];
defaultGateway = "10.0.44.1";
hosts = {
"84.88.53.236" = [ "apex" ];
"10.0.44.1" = [ "raccoon" ];
};
};
services.p.enable = true;

View File

@@ -2,13 +2,10 @@
let
website = pkgs.stdenv.mkDerivation {
name = "jungle-web";
src = pkgs.fetchgit {
url = "https://jungle.bsc.es/git/rarias/jungle-website.git";
rev = "739bf0175a7f05380fe7ad7023ff1d60db1710e1";
hash = "sha256-ea5DzhYTzZ9TmqD+x95rdNdLbxPnBluqlYH2NmBYmc4=";
};
src = theFlake;
buildInputs = [ pkgs.hugo ];
buildPhase = ''
cd web
rm -rf public/
hugo
'';
@@ -70,9 +67,6 @@ in
location /p/ {
alias /var/lib/p/;
}
location /pub/ {
alias /vault/pub/;
}
'';
};
};

View File

@@ -1,33 +0,0 @@
{ lib, ... }:
{
imports = [
../common/ssf.nix
../module/hut-substituter.nix
];
# Select this using the ID to avoid mismatches
boot.loader.grub.device = "/dev/disk/by-id/wwn-0x55cd2e414d5356ca";
# No swap, there is plenty of RAM
swapDevices = lib.mkForce [];
# Users with sudo access
users.groups.wheel.members = [ "abonerib" "anavarro" ];
# Run julia installed with juliaup using julia's own libraries:
# NIX_LD_LIBRARY_PATH=~/.julia/juliaup/${VERS}/lib/julia ~/.juliaup/bin/julia
programs.nix-ld.enable = true;
networking = {
hostName = "weasel";
interfaces.eno1.ipv4.addresses = [ {
address = "10.0.40.6";
prefixLength = 24;
} ];
interfaces.ibp5s0.ipv4.addresses = [ {
address = "10.0.42.6";
prefixLength = 24;
} ];
};
}

View File

@@ -1,157 +0,0 @@
final: /* Future last stage */
prev: /* Previous stage */
with final.lib;
let
callPackage = final.callPackage;
bscPkgs = {
agenix = prev.callPackage ./pkgs/agenix/default.nix { };
amd-uprof = prev.callPackage ./pkgs/amd-uprof/default.nix { };
bench6 = callPackage ./pkgs/bench6/default.nix { };
bigotes = callPackage ./pkgs/bigotes/default.nix { };
clangOmpss2 = callPackage ./pkgs/llvm-ompss2/default.nix { };
clangOmpss2Nanos6 = callPackage ./pkgs/llvm-ompss2/default.nix { ompss2rt = final.nanos6; };
clangOmpss2Nodes = callPackage ./pkgs/llvm-ompss2/default.nix { ompss2rt = final.nodes; openmp = final.openmp; };
clangOmpss2NodesOmpv = callPackage ./pkgs/llvm-ompss2/default.nix { ompss2rt = final.nodes; openmp = final.openmpv; };
clangOmpss2Unwrapped = callPackage ./pkgs/llvm-ompss2/clang.nix { };
cudainfo = prev.callPackage ./pkgs/cudainfo/default.nix { };
#extrae = callPackage ./pkgs/extrae/default.nix { }; # Broken and outdated
gpi-2 = callPackage ./pkgs/gpi-2/default.nix { };
intelPackages_2023 = callPackage ./pkgs/intel-oneapi/2023.nix { };
jemallocNanos6 = callPackage ./pkgs/nanos6/jemalloc.nix { };
# FIXME: Extend this to all linuxPackages variants. Open problem, see:
# https://discourse.nixos.org/t/whats-the-right-way-to-make-a-custom-kernel-module-available/4636
linuxPackages = prev.linuxPackages.extend (_final: _prev: {
amd-uprof-driver = _prev.callPackage ./pkgs/amd-uprof/driver.nix { };
});
linuxPackages_latest = prev.linuxPackages_latest.extend(_final: _prev: {
amd-uprof-driver = _prev.callPackage ./pkgs/amd-uprof/driver.nix { };
});
lmbench = callPackage ./pkgs/lmbench/default.nix { };
mcxx = callPackage ./pkgs/mcxx/default.nix { };
meteocat-exporter = prev.callPackage ./pkgs/meteocat-exporter/default.nix { };
mpi = final.mpich; # Set MPICH as default
mpich = callPackage ./pkgs/mpich/default.nix { mpich = prev.mpich; };
nanos6 = callPackage ./pkgs/nanos6/default.nix { };
nanos6Debug = final.nanos6.override { enableDebug = true; };
nixtools = callPackage ./pkgs/nixtools/default.nix { };
# Broken because of pkgsStatic.libcap
# See: https://github.com/NixOS/nixpkgs/pull/268791
#nix-wrap = callPackage ./pkgs/nix-wrap/default.nix { };
nodes = callPackage ./pkgs/nodes/default.nix { };
nosv = callPackage ./pkgs/nosv/default.nix { };
openmp = callPackage ./pkgs/llvm-ompss2/openmp.nix { monorepoSrc = final.clangOmpss2Unwrapped.src; version = final.clangOmpss2Unwrapped.version; };
openmpv = final.openmp.override { enableNosv = true; enableOvni = true; };
osumb = callPackage ./pkgs/osu/default.nix { };
ovni = callPackage ./pkgs/ovni/default.nix { };
ovniGit = final.ovni.override { useGit = true; };
paraverKernel = callPackage ./pkgs/paraver/kernel.nix { };
prometheus-slurm-exporter = prev.callPackage ./pkgs/slurm-exporter/default.nix { };
#pscom = callPackage ./pkgs/parastation/pscom.nix { }; # Unmaintaned
#psmpi = callPackage ./pkgs/parastation/psmpi.nix { }; # Unmaintaned
sonar = callPackage ./pkgs/sonar/default.nix { };
stdenvClangOmpss2 = final.stdenv.override { cc = final.clangOmpss2; allowedRequisites = null; };
stdenvClangOmpss2Nanos6 = final.stdenv.override { cc = final.clangOmpss2Nanos6; allowedRequisites = null; };
stdenvClangOmpss2Nodes = final.stdenv.override { cc = final.clangOmpss2Nodes; allowedRequisites = null; };
stdenvClangOmpss2NodesOmpv = final.stdenv.override { cc = final.clangOmpss2NodesOmpv; allowedRequisites = null; };
tagaspi = callPackage ./pkgs/tagaspi/default.nix { };
tampi = callPackage ./pkgs/tampi/default.nix { };
upc-qaire-exporter = prev.callPackage ./pkgs/upc-qaire-exporter/default.nix { };
wxparaver = callPackage ./pkgs/paraver/default.nix { };
};
tests = rec {
hwloc = callPackage ./test/bugs/hwloc.nix { };
#sigsegv = callPackage ./test/reproducers/sigsegv.nix { };
hello-c = callPackage ./test/compilers/hello-c.nix { };
hello-cpp = callPackage ./test/compilers/hello-cpp.nix { };
lto = callPackage ./test/compilers/lto.nix { };
asan = callPackage ./test/compilers/asan.nix { };
intel2023-icx-c = hello-c.override { stdenv = final.intelPackages_2023.stdenv; };
intel2023-icc-c = hello-c.override { stdenv = final.intelPackages_2023.stdenv-icc; };
intel2023-icx-cpp = hello-cpp.override { stdenv = final.intelPackages_2023.stdenv; };
intel2023-icc-cpp = hello-cpp.override { stdenv = final.intelPackages_2023.stdenv-icc; };
intel2023-ifort = callPackage ./test/compilers/hello-f.nix {
stdenv = final.intelPackages_2023.stdenv-ifort;
};
clangOmpss2-lto = lto.override { stdenv = final.stdenvClangOmpss2Nanos6; };
clangOmpss2-asan = asan.override { stdenv = final.stdenvClangOmpss2Nanos6; };
clangOmpss2-task = callPackage ./test/compilers/ompss2.nix {
stdenv = final.stdenvClangOmpss2Nanos6;
};
clangNodes-task = callPackage ./test/compilers/ompss2.nix {
stdenv = final.stdenvClangOmpss2Nodes;
};
clangNosvOpenmp-task = callPackage ./test/compilers/clang-openmp.nix {
stdenv = final.stdenvClangOmpss2Nodes;
};
clangNosvOmpv-nosv = callPackage ./test/compilers/clang-openmp-nosv.nix {
stdenv = final.stdenvClangOmpss2NodesOmpv;
};
clangNosvOmpv-ld = callPackage ./test/compilers/clang-openmp-ld.nix {
stdenv = final.stdenvClangOmpss2NodesOmpv;
};
};
# For now, only build toplevel packages in CI/Hydra
pkgsTopLevel = filterAttrs (_: isDerivation) bscPkgs;
# Native build on that platform doesn't imply cross build works
canCrossCompile = platform: pkg:
(isDerivation pkg) &&
# Must be defined explicitly
(pkg.meta.cross or false) &&
(meta.availableOn platform pkg);
# For now only RISC-V
crossSet = { riscv64 = final.pkgsCross.riscv64.bsc.pkgsTopLevel; };
buildList = name: paths:
final.runCommandLocal name { } ''
printf '%s\n' ${toString paths} | tee $out
'';
buildList' = name: paths:
final.runCommandLocal name { } ''
deps="${toString paths}"
cat $deps
printf '%s\n' $deps >$out
'';
pkgsList = buildList "ci-pkgs" (builtins.attrValues pkgsTopLevel);
testsList = buildList "ci-tests" (collect isDerivation tests);
allList = buildList' "ci-all" [ pkgsList testsList ];
# For now only RISC-V
crossList = buildList "ci-cross"
(filter
(canCrossCompile final.pkgsCross.riscv64.stdenv.hostPlatform)
(builtins.attrValues crossSet.riscv64));
in bscPkgs // {
lib = prev.lib // {
maintainers = prev.lib.maintainers // {
bsc = import ./pkgs/maintainers.nix;
};
};
# Prevent accidental usage of bsc-ci attribute
bsc-ci = throw "the bsc-ci attribute is deprecated, use bsc.ci";
# Internal for our CI tests
bsc = {
# CI targets for nix build
ci = { pkgs = pkgsList; tests = testsList; all = allList; cross = crossList; };
# Direct access to package sets
tests = tests;
pkgs = bscPkgs;
pkgsTopLevel = pkgsTopLevel;
cross = crossSet;
# Hydra uses attribute sets of pkgs
hydraJobs = { tests = tests; pkgs = pkgsTopLevel; cross = crossSet; };
};
}
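The bsc.ci attribute above collects the aggregate build targets meant for nix build in CI. Assuming the overlay is exposed through the repository flake (the exact attribute path below is an assumption, not taken from the source), the targets could be built along these lines:

# Build all top-level packages and tests tracked by CI (attribute path is hypothetical)
nix build .#bsc.ci.all
# Or only the cross-compiled RISC-V subset
nix build .#bsc.ci.cross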

View File

@@ -1,212 +0,0 @@
#!/usr/bin/env bash
set -Eeuo pipefail
PACKAGE="agenix"
function show_help () {
echo "$PACKAGE - edit and rekey age secret files"
echo " "
echo "$PACKAGE -e FILE [-i PRIVATE_KEY]"
echo "$PACKAGE -r [-i PRIVATE_KEY]"
echo ' '
echo 'options:'
echo '-h, --help show help'
# shellcheck disable=SC2016
echo '-e, --edit FILE edits FILE using $EDITOR'
echo '-r, --rekey re-encrypts all secrets with specified recipients'
echo '-d, --decrypt FILE decrypts FILE to STDOUT'
echo '-i, --identity identity to use when decrypting'
echo '-v, --verbose verbose output'
echo ' '
echo 'FILE an age-encrypted file'
echo ' '
echo 'PRIVATE_KEY a path to a private SSH key used to decrypt file'
echo ' '
echo 'EDITOR environment variable of editor to use when editing FILE'
echo ' '
echo 'If STDIN is not interactive, EDITOR will be set to "cp /dev/stdin"'
echo ' '
echo 'RULES environment variable with path to Nix file specifying recipient public keys.'
echo "Defaults to './secrets.nix'"
echo ' '
echo "agenix version: @version@"
echo "age binary path: @ageBin@"
echo "age version: $(@ageBin@ --version)"
}
function warn() {
printf '%s\n' "$*" >&2
}
function err() {
warn "$*"
exit 1
}
test $# -eq 0 && (show_help && exit 1)
REKEY=0
DECRYPT_ONLY=0
DEFAULT_DECRYPT=(--decrypt)
while test $# -gt 0; do
case "$1" in
-h|--help)
show_help
exit 0
;;
-e|--edit)
shift
if test $# -gt 0; then
export FILE=$1
else
echo "no FILE specified"
exit 1
fi
shift
;;
-i|--identity)
shift
if test $# -gt 0; then
DEFAULT_DECRYPT+=(--identity "$1")
else
echo "no PRIVATE_KEY specified"
exit 1
fi
shift
;;
-r|--rekey)
shift
REKEY=1
;;
-d|--decrypt)
shift
DECRYPT_ONLY=1
if test $# -gt 0; then
export FILE=$1
else
echo "no FILE specified"
exit 1
fi
shift
;;
-v|--verbose)
shift
set -x
;;
*)
show_help
exit 1
;;
esac
done
RULES=${RULES:-./secrets.nix}
function cleanup {
if [ -n "${CLEARTEXT_DIR+x}" ]
then
rm -rf -- "$CLEARTEXT_DIR"
fi
if [ -n "${REENCRYPTED_DIR+x}" ]
then
rm -rf -- "$REENCRYPTED_DIR"
fi
}
trap "cleanup" 0 2 3 15
function keys {
(@nixInstantiate@ --json --eval --strict -E "(let rules = import $RULES; in rules.\"$1\".publicKeys)" | @jqBin@ -r .[]) || exit 1
}
function armor {
(@nixInstantiate@ --json --eval --strict -E "(let rules = import $RULES; in (builtins.hasAttr \"armor\" rules.\"$1\" && rules.\"$1\".armor))") || exit 1
}
function decrypt {
FILE=$1
KEYS=$2
if [ -z "$KEYS" ]
then
err "There is no rule for $FILE in $RULES."
fi
if [ -f "$FILE" ]
then
DECRYPT=("${DEFAULT_DECRYPT[@]}")
if [[ "${DECRYPT[*]}" != *"--identity"* ]]; then
if [ -f "$HOME/.ssh/id_rsa" ]; then
DECRYPT+=(--identity "$HOME/.ssh/id_rsa")
fi
if [ -f "$HOME/.ssh/id_ed25519" ]; then
DECRYPT+=(--identity "$HOME/.ssh/id_ed25519")
fi
fi
if [[ "${DECRYPT[*]}" != *"--identity"* ]]; then
err "No identity found to decrypt $FILE. Try adding an SSH key at $HOME/.ssh/id_rsa or $HOME/.ssh/id_ed25519 or using the --identity flag to specify a file."
fi
@ageBin@ "${DECRYPT[@]}" -- "$FILE" || exit 1
fi
}
function edit {
FILE=$1
KEYS=$(keys "$FILE") || exit 1
ARMOR=$(armor "$FILE") || exit 1
CLEARTEXT_DIR=$(@mktempBin@ -d)
CLEARTEXT_FILE="$CLEARTEXT_DIR/$(basename -- "$FILE")"
DEFAULT_DECRYPT+=(-o "$CLEARTEXT_FILE")
decrypt "$FILE" "$KEYS" || exit 1
[ ! -f "$CLEARTEXT_FILE" ] || cp -- "$CLEARTEXT_FILE" "$CLEARTEXT_FILE.before"
[ -t 0 ] || EDITOR='cp -- /dev/stdin'
$EDITOR "$CLEARTEXT_FILE"
if [ ! -f "$CLEARTEXT_FILE" ]
then
warn "$FILE wasn't created."
return
fi
[ -f "$FILE" ] && [ "$EDITOR" != ":" ] && @diffBin@ -q -- "$CLEARTEXT_FILE.before" "$CLEARTEXT_FILE" && warn "$FILE wasn't changed, skipping re-encryption." && return
ENCRYPT=()
if [[ "$ARMOR" == "true" ]]; then
ENCRYPT+=(--armor)
fi
while IFS= read -r key
do
if [ -n "$key" ]; then
ENCRYPT+=(--recipient "$key")
fi
done <<< "$KEYS"
REENCRYPTED_DIR=$(@mktempBin@ -d)
REENCRYPTED_FILE="$REENCRYPTED_DIR/$(basename -- "$FILE")"
ENCRYPT+=(-o "$REENCRYPTED_FILE")
@ageBin@ "${ENCRYPT[@]}" <"$CLEARTEXT_FILE" || exit 1
mkdir -p -- "$(dirname -- "$FILE")"
mv -f -- "$REENCRYPTED_FILE" "$FILE"
}
function rekey {
FILES=$( (@nixInstantiate@ --json --eval -E "(let rules = import $RULES; in builtins.attrNames rules)" | @jqBin@ -r .[]) || exit 1)
for FILE in $FILES
do
warn "rekeying $FILE..."
EDITOR=: edit "$FILE"
cleanup
done
}
[ $REKEY -eq 1 ] && rekey && exit 0
[ $DECRYPT_ONLY -eq 1 ] && DEFAULT_DECRYPT+=("-o" "-") && decrypt "${FILE}" "$(keys "$FILE")" && exit 0
edit "$FILE" && cleanup && exit 0

View File

@@ -1,66 +0,0 @@
{
lib,
stdenv,
age,
jq,
nix,
mktemp,
diffutils,
replaceVars,
ageBin ? "${age}/bin/age",
shellcheck,
}:
let
bin = "${placeholder "out"}/bin/agenix";
in
stdenv.mkDerivation rec {
pname = "agenix";
version = "0.15.0";
src = replaceVars ./agenix.sh {
inherit ageBin version;
jqBin = "${jq}/bin/jq";
nixInstantiate = "${nix}/bin/nix-instantiate";
mktempBin = "${mktemp}/bin/mktemp";
diffBin = "${diffutils}/bin/diff";
};
dontUnpack = true;
doInstallCheck = true;
installCheckInputs = [ shellcheck ];
postInstallCheck = ''
shellcheck ${bin}
${bin} -h | grep ${version}
test_tmp=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir')
export HOME="$test_tmp/home"
export NIX_STORE_DIR="$test_tmp/nix/store"
export NIX_STATE_DIR="$test_tmp/nix/var"
mkdir -p "$HOME" "$NIX_STORE_DIR" "$NIX_STATE_DIR"
function cleanup {
rm -rf "$test_tmp"
}
trap "cleanup" 0 2 3 15
mkdir -p $HOME/.ssh
cp -r "${./example}" $HOME/secrets
chmod -R u+rw $HOME/secrets
(
umask u=rw,g=r,o=r
cp ${./example_keys/user1.pub} $HOME/.ssh/id_ed25519.pub
chown $UID $HOME/.ssh/id_ed25519.pub
)
(
umask u=rw,g=,o=
cp ${./example_keys/user1} $HOME/.ssh/id_ed25519
chown $UID $HOME/.ssh/id_ed25519
)
cd $HOME/secrets
test $(${bin} -d secret1.age) = "hello"
'';
installPhase = ''
install -D $src ${bin}
'';
meta.description = "age-encrypted secrets for NixOS";
}

View File

@@ -1,7 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 V3XmEA zirqdzZZ1E+sedBn7fbEHq4ntLEkokZ4GctarBBOHXY
Rvs5YHaAUeCZyNwPedubPcHClWYIuXXWA5zadXPWY6w
-> ssh-ed25519 KLPP8w BVp4rDkOYSQyn8oVeHFeinSqW+pdVtxBF9+5VM1yORY
bMwppAi8Nhz0328taU4AzUkTVyWtSLvFZG6c5W/Fs78
--- xCbqLhXAcOziO2wmbjTiSQfZvt5Rlsc4SCvF+iEzpQA
ôKB£î/²ZÅÈrÙ%¾à4¡´—Mq5×Ô_ÌÂ݆ã„Ò11 ܨqM;& ¢‡LríÂÒføû”]>N

View File

@@ -1,7 +0,0 @@
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IHNzaC1lZDI1NTE5IFYzWG1FQSBpZkZW
aFpLNnJxc0VUMHRmZ2dZS0pjMGVENnR3OHd5K0RiT1RjRUhibFZBCnN5UG5vUjA3
SXpsNGtiVUw4T0tIVFo5Wkk5QS9NQlBndzVvektiQ0ozc0kKLS0tIGxyY1Q4dEZ1
VGZEanJyTFNta2JNRmpZb2FnK2JyS1hSVml1UGdMNWZKQXMKYla+wTXcRedyZoEb
LVWaSx49WoUTU0KBPJg9RArxaeC23GoCDzR/aM/1DvYU
-----END AGE ENCRYPTED FILE-----

View File

@@ -1,9 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 KLPP8w s1DYZRlZuSsyhmZCF1lFB+E9vB8bZ/+ZhBRlx8nprwE
nmYVCsVBrX2CFXXPU+D+bbkkIe/foofp+xoUrg9DHZw
-> ssh-ed25519 V3XmEA Pwv3oCwcY0DX8rY48UNfsj9RumWsn4dbgorYHCwObgI
FKxRYkL3JHtJxUwymWDF0rAtJ33BivDI6IfPsfumM90
-> V'v(/u$-grease em/Vgf 2qDuk
7I3iiQLPGi1COML9u/JeYkr7EqbSLoU
--- 57WJRigUGtmcObrssS3s4PvmR8wgh1AOC/ijJn1s3xI
<EFBFBD>'K©Æ·Y&7GÆOÝòFj±kÆXç«BnuJöê:9Ê(ÙÏX¬#¼AíÄÞÃÚ§j,ê_ÈþÝ?ÝZ“¥vœ¹V96]oks~%£c Îe^CÅ%JQ5€<H¢z}îCý,°pŒ¿*!W§§ÈA±º­Ò…dC¼K)¿¢-žy

Binary file not shown.

View File

@@ -1,5 +0,0 @@
age-encryption.org/v1
-> ssh-ed25519 V3XmEA OB4+1FbPhQ3r6iGksM7peWX5it8NClpXIq/o5nnP7GA
FmHVUj+A5i5+bDFgySQskmlvynnosJiWUTJmBRiNA9I
--- tP+3mFVtd7ogVu1Lkboh55zoi5a77Ht08Uc/QuIviv4
¤¬Xæ{”ïOŠ£èätMXxÔvÓª(¬IÁmyPÇï¸è+3²S3i

View File

@@ -1,23 +0,0 @@
let
user1 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0idNvgGiucWgup/mP78zyC23uFjYq0evcWdjGQUaBH";
system1 = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPJDyIr/FSz1cJdcoW69R+NrWzwGK/+3gJpqD1t8L2zE";
in
{
"secret1.age".publicKeys = [
user1
system1
];
"secret2.age".publicKeys = [ user1 ];
"passwordfile-user1.age".publicKeys = [
user1
system1
];
"-leading-hyphen-filename.age".publicKeys = [
user1
system1
];
"armored-secret.age" = {
publicKeys = [ user1 ];
armor = true;
};
}

View File

@@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACDyQ8iK/xUs9XCXXKFuvUfja1s8Biv/t4Caag9bfC9sxAAAAJA3yvCWN8rw
lgAAAAtzc2gtZWQyNTUxOQAAACDyQ8iK/xUs9XCXXKFuvUfja1s8Biv/t4Caag9bfC9sxA
AAAEA+J2V6AG1NriAIvnNKRauIEh1JE9HSdhvKJ68a5Fm0w/JDyIr/FSz1cJdcoW69R+Nr
WzwGK/+3gJpqD1t8L2zEAAAADHJ5YW50bUBob21lMQE=
-----END OPENSSH PRIVATE KEY-----

View File

@@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPJDyIr/FSz1cJdcoW69R+NrWzwGK/+3gJpqD1t8L2zE

View File

@@ -1,7 +0,0 @@
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACC9InTb4BornFoLqf5j+/M8gtt7hY2KtHr3FnYxkFGgRwAAAJC2JJ8htiSf
IQAAAAtzc2gtZWQyNTUxOQAAACC9InTb4BornFoLqf5j+/M8gtt7hY2KtHr3FnYxkFGgRw
AAAEDxt5gC/s53IxiKAjfZJVCCcFIsdeERdIgbYhLO719+Kb0idNvgGiucWgup/mP78zyC
23uFjYq0evcWdjGQUaBHAAAADHJ5YW50bUBob21lMQE=
-----END OPENSSH PRIVATE KEY-----

View File

@@ -1 +0,0 @@
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0idNvgGiucWgup/mP78zyC23uFjYq0evcWdjGQUaBH

View File

@@ -1,23 +0,0 @@
#!/bin/sh
set -e
# All operations are done relative to root
GITROOT=$(git rev-parse --show-toplevel)
cd "$GITROOT"
REVISION=${1:-main}
TMPCLONE=$(mktemp -d)
trap "rm -rf ${TMPCLONE}" EXIT
git clone https://github.com/ryantm/agenix.git --revision="$REVISION" "$TMPCLONE" --depth=1
cp "${TMPCLONE}/pkgs/agenix.sh" pkgs/agenix/agenix.sh
cp "${TMPCLONE}/pkgs/agenix.nix" pkgs/agenix/default.nix
sed -i 's#../example#./example#' pkgs/agenix/default.nix
cp "${TMPCLONE}/example/"* pkgs/agenix/example/
cp "${TMPCLONE}/example_keys/"* pkgs/agenix/example_keys/
cp "${TMPCLONE}/modules/age.nix" m/module/agenix.nix

View File

@@ -1,98 +0,0 @@
{ stdenv
, lib
, curl
, cacert
, runCommandLocal
, autoPatchelfHook
, elfutils
, glib
, libGL
, ncurses5
, xorg
, zlib
, libxkbcommon
, freetype
, fontconfig
, libGLU
, dbus
, rocmPackages
, libxcrypt-legacy
, numactl
, radare2
}:
let
version = "5.1.701";
tarball = "AMDuProf_Linux_x64_${version}.tar.bz2";
# NOTE: Remember to update the radare2 patch below if AMDuProfPcm changes.
uprofSrc = runCommandLocal tarball {
nativeBuildInputs = [ curl ];
outputHash = "sha256-j9gxcBcIg6Zhc5FglUXf/VV9bKSo+PAKeootbN7ggYk=";
SSL_CERT_FILE="${cacert}/etc/ssl/certs/ca-bundle.crt";
} ''
curl \
-o $out \
'https://download.amd.com/developer/eula/uprof/uprof-5-1/${tarball}' \
-H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:139.0) Gecko/20100101 Firefox/139.0' \
-H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' \
-H 'Accept-Language: en-US,en;q=0.5' \
-H 'Accept-Encoding: gzip, deflate, br, zstd' \
-H 'Referer: https://www.amd.com/' 2>&1 | tr '\r' '\n'
'';
in
stdenv.mkDerivation {
pname = "AMD-uProf";
inherit version;
src = uprofSrc;
dontStrip = true;
phases = [ "installPhase" "fixupPhase" ];
nativeBuildInputs = [ autoPatchelfHook radare2 ];
buildInputs = [
stdenv.cc.cc.lib
ncurses5
elfutils
glib
libGL
libGLU
libxcrypt-legacy
xorg.libX11
xorg.libXext
xorg.libXi
xorg.libXmu
xorg.libxcb
xorg.xcbutilwm
xorg.xcbutilrenderutil
xorg.xcbutilkeysyms
xorg.xcbutilimage
fontconfig.lib
libxkbcommon
zlib
freetype
dbus
rocmPackages.rocprofiler
numactl
];
installPhase = ''
set -x
mkdir -p $out
tar -x -v -C $out --strip-components=1 -f $src
rm $out/bin/AMDPowerProfilerDriverSource.tar.gz
patchelf --replace-needed libroctracer64.so.1 libroctracer64.so $out/bin/ProfileAgents/x64/libAMDGpuAgent.so
patchelf --add-needed libcrypt.so.1 --add-needed libstdc++.so.6 $out/bin/AMDuProfSys
echo "16334a51fcc48668307ad94e20482ca4 $out/bin/AMDuProfPcm" | md5sum -c -
radare2 -w -q -i ${./libnuma.r2} $out/bin/AMDuProfPcm
patchelf --add-needed libnuma.so $out/bin/AMDuProfPcm
set +x
'';
meta = {
description = "Performance analysis tool-suite for x86 based applications";
homepage = "https://www.amd.com/es/developer/uprof.html";
platforms = lib.platforms.linux;
license = lib.licenses.unfree;
maintainers = with lib.maintainers.bsc; [ rarias varcila ];
};
}

View File

@@ -1,35 +0,0 @@
{ stdenv
, lib
, amd-uprof
, kernel
, runCommandLocal
}:
let
version = amd-uprof.version;
tarball = amd-uprof.src;
in stdenv.mkDerivation {
pname = "AMDPowerProfilerDriver";
inherit version;
src = runCommandLocal "AMDPowerProfilerDriverSource.tar.gz" { } ''
set -x
tar -x -f ${tarball} AMDuProf_Linux_x64_${version}/bin/AMDPowerProfilerDriverSource.tar.gz
mv AMDuProf_Linux_x64_${version}/bin/AMDPowerProfilerDriverSource.tar.gz $out
set +x
'';
hardeningDisable = [ "pic" "format" ];
nativeBuildInputs = kernel.moduleBuildDependencies;
patches = [ ./makefile.patch ./hrtimer.patch ];
makeFlags = [
"KERNEL_VERSION=${kernel.modDirVersion}"
"KERNEL_DIR=${kernel.dev}/lib/modules/${kernel.modDirVersion}/build"
"INSTALL_MOD_PATH=$(out)"
];
meta = {
description = "AMD Power Profiler Driver";
homepage = "https://www.amd.com/es/developer/uprof.html";
platforms = lib.platforms.linux;
license = lib.licenses.unfree;
maintainers = with lib.maintainers.bsc; [ rarias varcila ];
};
}

View File

@@ -1,31 +0,0 @@
--- a/src/PmcTimerConfig.c 2025-09-04 12:17:16.771707049 +0200
+++ b/src/PmcTimerConfig.c 2025-09-04 12:17:04.878515468 +0200
@@ -99,7 +99,7 @@ static void PmcInitTimer(void* pInfo)
DRVPRINT("pTimerConfig(%p)", pTimerConfig);
- hrtimer_init(&pTimerConfig->m_hrTimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
+ hrtimer_setup(&pTimerConfig->m_hrTimer, PmcTimerCallback, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
}
int PmcSetupTimer(ClientContext* pClientCtx)
@@ -157,7 +157,6 @@ int PmcSetupTimer(ClientContext* pClient
{
/* Interval in ms */
pTimerConfig->m_time = ktime_set(interval / 1000, interval * 1000000);
- pTimerConfig->m_hrTimer.function = PmcTimerCallback;
DRVPRINT("retVal(%d) m_time(%lld)", retVal, (long long int) pTimerConfig->m_time);
}
--- a/src/PwrProfTimer.c 2025-09-04 12:18:08.750544327 +0200
+++ b/src/PwrProfTimer.c 2025-09-04 12:18:28.557863382 +0200
@@ -573,8 +573,7 @@ void InitHrTimer(uint32 cpu)
pCoreClientData = &per_cpu(g_coreClientData, cpu);
// initialize HR timer
- hrtimer_init(&pCoreClientData->m_hrTimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
- pCoreClientData->m_hrTimer.function = &HrTimerCallback;
+ hrtimer_setup(&pCoreClientData->m_hrTimer, &HrTimerCallback, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
return;
} // InitHrTimer

View File

@@ -1,10 +0,0 @@
# Patch arguments to call sym std::string::find(char const*, unsigned long, unsigned long)
# so it matches NixOS:
#
# Change OS name to NixOS
wz NixOS @ 0x00550a43
# And set the length to 5 characters
wa mov ecx, 5 @0x00517930
#
# Then change the argument to dlopen() so it only uses libnuma.so
wz libnuma.so @ 0x00562940

View File

@@ -1,66 +0,0 @@
--- a/Makefile 2025-06-19 20:36:49.346693267 +0200
+++ b/Makefile 2025-06-19 20:42:29.778088660 +0200
@@ -27,7 +27,7 @@ MODULE_VERSION=$(shell cat AMDPowerProfi
MODULE_NAME_KO=$(MODULE_NAME).ko
# check is module inserted
-MODPROBE_OUTPUT=$(shell lsmod | grep $(MODULE_NAME))
+#MODPROBE_OUTPUT=$(shell lsmod | grep $(MODULE_NAME))
# check pcore dkms status
PCORE_DKMS_STATUS=$(shell dkms status | grep $(MODULE_NAME) | grep $(MODULE_VERSION))
@@ -50,7 +50,7 @@ endif
# “-Wno-missing-attributes” is added for GCC version >= 9.0 and kernel version <= 5.00
G_VERSION=9
K_VERSION=5
-KERNEL_MAJOR_VERSION=$(shell uname -r | cut -f1 -d.)
+KERNEL_MAJOR_VERSION=$(shell echo "$(KERNEL_VERSION)" | cut -f1 -d.)
GCCVERSION = $(shell gcc -dumpversion | cut -f1 -d.)
ifeq ($(G_VERSION),$(firstword $(sort $(GCCVERSION) $(G_VERSION))))
ifeq ($(K_VERSION),$(lastword $(sort $(KERNEL_MAJOR_VERSION) $(K_VERSION))))
@@ -66,17 +66,7 @@ ${MODULE_NAME}-objs := src/PmcDataBuffe
# make
all:
- @chmod a+x ./AMDPPcert.sh
- @./AMDPPcert.sh 0 1; echo $$? > $(PWD)/sign_status;
- @SIGSTATUS1=`cat $(PWD)/sign_status | tr -d '\n'`; \
- if [ $$SIGSTATUS1 -eq 1 ]; then \
- exit 1; \
- fi
- @make -C /lib/modules/$(KERNEL_VERSION)/build M=$(PWD) $(MAKE_OPTS) EXTRA_CFLAGS="$(EXTRA_CFLAGS)" modules
- @SIGSTATUS3=`cat $(PWD)/sign_status | tr -d '\n'`; \
- if [ $$SIGSTATUS3 -eq 0 ]; then \
- ./AMDPPcert.sh 1 $(MODULE_NAME_KO); \
- fi
+ make -C $(KERNEL_DIR) M=$(PWD) $(MAKE_OPTS) CFLAGS_MODULE="$(EXTRA_CFLAGS)" modules
# make clean
clean:
@@ -84,23 +74,9 @@ clean:
# make install
install:
- @mkdir -p /lib/modules/`uname -r`/kernel/drivers/extra
- @rm -f /lib/modules/`uname -r`/kernel/drivers/extra/$(MODULE_NAME_KO)
- @cp $(MODULE_NAME_KO) /lib/modules/`uname -r`/kernel/drivers/extra/
- @depmod -a
- @if [ ! -z "$(MODPROBE_OUTPUT)" ]; then \
- echo "Uninstalling AMDPowerProfiler Linux kernel module.";\
- rmmod $(MODULE_NAME);\
- fi
- @modprobe $(MODULE_NAME) 2> $(PWD)/sign_status1; \
- cat $(PWD)/sign_status1 | grep "Key was rejected by service"; \
- echo $$? > $(PWD)/sign_status; SIGSTATUS1=`cat $(PWD)/sign_status | tr -d '\n'`; \
- if [ $$SIGSTATUS1 -eq 0 ]; then \
- echo "ERROR: Secure Boot enabled, correct key is not yet enrolled in BIOS key table"; \
- exit 1; \
- else \
- cat $(PWD)/sign_status1; \
- fi
+ mkdir -p $(INSTALL_MOD_PATH)/lib/modules/$(KERNEL_VERSION)/kernel/drivers/extra/
+ cp -a $(MODULE_NAME_KO) $(INSTALL_MOD_PATH)/lib/modules/$(KERNEL_VERSION)/kernel/drivers/extra/
+
# make dkms
dkms:
@chmod a+x ./AMDPPcert.sh

View File

@@ -1,25 +0,0 @@
{ stdenv, lib, fetchurl, pkg-config, glib, libuuid, popt, elfutils, swig4, python3 }:
stdenv.mkDerivation rec {
name = "babeltrace-1.5.8";
src = fetchurl {
url = "https://www.efficios.com/files/babeltrace/${name}.tar.bz2";
sha256 = "1hkg3phnamxfrhwzmiiirbhdgckzfkqwhajl0lmr1wfps7j47wcz";
};
nativeBuildInputs = [ pkg-config ];
buildInputs = [ glib libuuid popt elfutils swig4 python3 ];
meta = with lib; {
description = "Command-line tool and library to read and convert LTTng tracefiles";
homepage = "https://www.efficios.com/babeltrace";
license = licenses.mit;
platforms = platforms.linux;
maintainers = [ maintainers.bjornfor ];
};
configureFlags = [
"--enable-python-bindings"
];
}

View File

@@ -1,34 +0,0 @@
{
stdenv
, fetchurl
, pkg-config
, glib
, libuuid
, popt
, elfutils
, python3
, swig4
, ncurses
, breakpointHook
}:
stdenv.mkDerivation rec {
pname = "babeltrace2";
version = "2.0.3";
src = fetchurl {
url = "https://www.efficios.com/files/babeltrace/${pname}-${version}.tar.bz2";
sha256 = "1804pyq7fz6rkcz4r1abkkn0pfnss13m6fd8if32s42l4lajadm5";
};
enableParallelBuilding = true;
nativeBuildInputs = [ pkg-config ];
buildInputs = [ glib libuuid popt elfutils python3 swig4 ncurses breakpointHook ];
hardeningDisable = [ "all" ];
configureFlags = [
"--enable-python-plugins"
"--enable-python-bindings"
];
}

View File

@@ -1,70 +0,0 @@
{
stdenv
, lib
, bigotes
, cmake
, clangOmpss2
, openmp
, openmpv
, nanos6
, nodes
, nosv
, mpi
, tampi
, openblas
, ovni
, gitBranch ? "master"
, gitURL ? "ssh://git@bscpm04.bsc.es/rarias/bench6.git"
, gitCommit ? "bf29a53113737c3aa74d2fe3d55f59868faea7b4"
}:
stdenv.mkDerivation rec {
pname = "bench6";
version = "${src.shortRev}";
src = builtins.fetchGit {
url = gitURL;
ref = gitBranch;
rev = gitCommit;
};
nativeBuildInputs = [
cmake
clangOmpss2
];
buildInputs = [
bigotes
openmp
openmpv
nanos6
nodes
nosv
mpi
tampi
openblas
openblas.dev
ovni
];
env = {
NANOS6_HOME = nanos6;
NODES_HOME = nodes;
NOSV_HOME = nosv;
};
cmakeFlags = [
"-DCMAKE_C_COMPILER=clang"
"-DCMAKE_CXX_COMPILER=clang++"
];
hardeningDisable = [ "all" ];
dontStrip = true;
meta = {
homepage = "https://gitlab.pm.bsc.es/rarias/bench6";
description = "Set of micro-benchmarks for OmpSs-2 and several mini-apps";
maintainers = with lib.maintainers.bsc; [ rarias ];
platforms = lib.platforms.linux;
license = lib.licenses.gpl3Plus;
};
}

View File

@@ -1,26 +0,0 @@
{
stdenv
, lib
, fetchFromGitHub
, cmake
}:
stdenv.mkDerivation {
pname = "bigotes";
version = "9dce13";
src = fetchFromGitHub {
owner = "rodarima";
repo = "bigotes";
rev = "9dce13446a8da30bea552d569d260d54e0188518";
sha256 = "sha256-ktxM3pXiL8YXSK+/IKWYadijhYXqGoLY6adLk36iigE=";
};
nativeBuildInputs = [ cmake ];
meta = {
homepage = "https://github.com/rodarima/bigotes";
description = "Versatile benchmark tool";
maintainers = with lib.maintainers.bsc; [ rarias ];
platforms = lib.platforms.linux;
license = lib.licenses.gpl3Plus;
};
}

View File

@@ -1,54 +0,0 @@
{ stdenv
, lib
, fetchFromGitHub
, libcap
, libcgroup
, libmhash
, doxygen
, graphviz
, autoreconfHook
, pkg-config
, glib
}:
let
version = "0.4.4";
in stdenv.mkDerivation {
pname = "clsync";
inherit version;
src = fetchFromGitHub {
repo = "clsync";
owner = "clsync";
rev = "v${version}";
sha256 = "0sdiyfwp0iqr6l1sirm51pirzmhi4jzgky5pzfj24nn71q3fwqgz";
};
outputs = [ "out" "dev" ];
buildInputs = [
autoreconfHook
libcap
libcgroup
libmhash
doxygen
graphviz
pkg-config
glib
];
preConfigure = ''
./configure --help
'';
enableParallelBuilding = true;
meta = with lib; {
description = "File live sync daemon based on inotify/kqueue/bsm (Linux, FreeBSD), written in GNU C";
homepage = "https://github.com/clsync/clsync";
license = licenses.gpl3Plus;
maintainers = [ ];
platforms = platforms.linux;
};
}

View File

@@ -1,51 +0,0 @@
{
stdenv
, lib
, babeltrace2
, pkg-config
, uthash
, enableTest ? false
, mpi ? null
, clangOmpss2 ? null
, tampi ? null
}:
with lib;
assert (enableTest -> (mpi != null));
assert (enableTest -> (clangOmpss2 != null));
assert (enableTest -> (tampi != null));
stdenv.mkDerivation rec {
pname = "cn6";
version = "${src.shortRev}";
buildInputs = [
babeltrace2
pkg-config
uthash
mpi
] ++ optionals (enableTest) [ mpi clangOmpss2 tampi ];
src = builtins.fetchGit {
url = "ssh://git@bscpm04.bsc.es/rarias/cn6.git";
ref = "master";
rev = "c72c3b66b720c2a33950f536fc819051c8f20a69";
};
makeFlags = [ "PREFIX=$(out)" ];
postBuild = optionalString (enableTest) ''
(
cd test
make timediff timediff_mpi
)
'';
postInstall = optionalString (enableTest) ''
(
cd test
cp timediff timediff_mpi sync-err.sh $out/bin/
)
'';
}

View File

@@ -1,21 +0,0 @@
{
stdenv
, perl # For the pod2man command
}:
stdenv.mkDerivation rec {
version = "20201006";
pname = "cpuid";
buildInputs = [ perl ];
# Replace /usr install directory for $out
postPatch = ''
sed -i "s@/usr@$out@g" Makefile
'';
src = builtins.fetchTarball {
url = "http://www.etallen.com/cpuid/${pname}-${version}.src.tar.gz";
sha256 = "04qhs938gs1kjxpsrnfy6lbsircsprfyh4db62s5cf83a1nrwn9w";
};
}

View File

@@ -1,12 +0,0 @@
HOSTCXX ?= g++
NVCC := nvcc -ccbin $(HOSTCXX)
CXXFLAGS := -m64
# Target rules
all: cudainfo
cudainfo: cudainfo.cpp
$(NVCC) $(CXXFLAGS) -o $@ $<
clean:
rm -f cudainfo cudainfo.o

View File

@@ -1,600 +0,0 @@
/*
* Copyright 1993-2015 NVIDIA Corporation. All rights reserved.
*
* Please refer to the NVIDIA end user license agreement (EULA) associated
* with this source code for terms and conditions that govern your use of
* this software. Any use, reproduction, disclosure, or distribution of
* this software and related documentation outside the terms of the EULA
* is strictly prohibited.
*
*/
/* This sample queries the properties of the CUDA devices present in the system via CUDA Runtime API. */
// Shared Utilities (QA Testing)
// std::system includes
#include <memory>
#include <iostream>
#include <cuda_runtime.h>
// This will output the proper CUDA error strings in the event that a CUDA host call returns an error
#define checkCudaErrors(val) check ( (val), #val, __FILE__, __LINE__ )
// CUDA Runtime error messages
#ifdef __DRIVER_TYPES_H__
static const char *_cudaGetErrorEnum(cudaError_t error)
{
switch (error)
{
case cudaSuccess:
return "cudaSuccess";
case cudaErrorMissingConfiguration:
return "cudaErrorMissingConfiguration";
case cudaErrorMemoryAllocation:
return "cudaErrorMemoryAllocation";
case cudaErrorInitializationError:
return "cudaErrorInitializationError";
case cudaErrorLaunchFailure:
return "cudaErrorLaunchFailure";
case cudaErrorPriorLaunchFailure:
return "cudaErrorPriorLaunchFailure";
case cudaErrorLaunchTimeout:
return "cudaErrorLaunchTimeout";
case cudaErrorLaunchOutOfResources:
return "cudaErrorLaunchOutOfResources";
case cudaErrorInvalidDeviceFunction:
return "cudaErrorInvalidDeviceFunction";
case cudaErrorInvalidConfiguration:
return "cudaErrorInvalidConfiguration";
case cudaErrorInvalidDevice:
return "cudaErrorInvalidDevice";
case cudaErrorInvalidValue:
return "cudaErrorInvalidValue";
case cudaErrorInvalidPitchValue:
return "cudaErrorInvalidPitchValue";
case cudaErrorInvalidSymbol:
return "cudaErrorInvalidSymbol";
case cudaErrorMapBufferObjectFailed:
return "cudaErrorMapBufferObjectFailed";
case cudaErrorUnmapBufferObjectFailed:
return "cudaErrorUnmapBufferObjectFailed";
case cudaErrorInvalidHostPointer:
return "cudaErrorInvalidHostPointer";
case cudaErrorInvalidDevicePointer:
return "cudaErrorInvalidDevicePointer";
case cudaErrorInvalidTexture:
return "cudaErrorInvalidTexture";
case cudaErrorInvalidTextureBinding:
return "cudaErrorInvalidTextureBinding";
case cudaErrorInvalidChannelDescriptor:
return "cudaErrorInvalidChannelDescriptor";
case cudaErrorInvalidMemcpyDirection:
return "cudaErrorInvalidMemcpyDirection";
case cudaErrorAddressOfConstant:
return "cudaErrorAddressOfConstant";
case cudaErrorTextureFetchFailed:
return "cudaErrorTextureFetchFailed";
case cudaErrorTextureNotBound:
return "cudaErrorTextureNotBound";
case cudaErrorSynchronizationError:
return "cudaErrorSynchronizationError";
case cudaErrorInvalidFilterSetting:
return "cudaErrorInvalidFilterSetting";
case cudaErrorInvalidNormSetting:
return "cudaErrorInvalidNormSetting";
case cudaErrorMixedDeviceExecution:
return "cudaErrorMixedDeviceExecution";
case cudaErrorCudartUnloading:
return "cudaErrorCudartUnloading";
case cudaErrorUnknown:
return "cudaErrorUnknown";
case cudaErrorNotYetImplemented:
return "cudaErrorNotYetImplemented";
case cudaErrorMemoryValueTooLarge:
return "cudaErrorMemoryValueTooLarge";
case cudaErrorInvalidResourceHandle:
return "cudaErrorInvalidResourceHandle";
case cudaErrorNotReady:
return "cudaErrorNotReady";
case cudaErrorInsufficientDriver:
return "cudaErrorInsufficientDriver";
case cudaErrorSetOnActiveProcess:
return "cudaErrorSetOnActiveProcess";
case cudaErrorInvalidSurface:
return "cudaErrorInvalidSurface";
case cudaErrorNoDevice:
return "cudaErrorNoDevice";
case cudaErrorECCUncorrectable:
return "cudaErrorECCUncorrectable";
case cudaErrorSharedObjectSymbolNotFound:
return "cudaErrorSharedObjectSymbolNotFound";
case cudaErrorSharedObjectInitFailed:
return "cudaErrorSharedObjectInitFailed";
case cudaErrorUnsupportedLimit:
return "cudaErrorUnsupportedLimit";
case cudaErrorDuplicateVariableName:
return "cudaErrorDuplicateVariableName";
case cudaErrorDuplicateTextureName:
return "cudaErrorDuplicateTextureName";
case cudaErrorDuplicateSurfaceName:
return "cudaErrorDuplicateSurfaceName";
case cudaErrorDevicesUnavailable:
return "cudaErrorDevicesUnavailable";
case cudaErrorInvalidKernelImage:
return "cudaErrorInvalidKernelImage";
case cudaErrorNoKernelImageForDevice:
return "cudaErrorNoKernelImageForDevice";
case cudaErrorIncompatibleDriverContext:
return "cudaErrorIncompatibleDriverContext";
case cudaErrorPeerAccessAlreadyEnabled:
return "cudaErrorPeerAccessAlreadyEnabled";
case cudaErrorPeerAccessNotEnabled:
return "cudaErrorPeerAccessNotEnabled";
case cudaErrorDeviceAlreadyInUse:
return "cudaErrorDeviceAlreadyInUse";
case cudaErrorProfilerDisabled:
return "cudaErrorProfilerDisabled";
case cudaErrorProfilerNotInitialized:
return "cudaErrorProfilerNotInitialized";
case cudaErrorProfilerAlreadyStarted:
return "cudaErrorProfilerAlreadyStarted";
case cudaErrorProfilerAlreadyStopped:
return "cudaErrorProfilerAlreadyStopped";
/* Since CUDA 4.0*/
case cudaErrorAssert:
return "cudaErrorAssert";
case cudaErrorTooManyPeers:
return "cudaErrorTooManyPeers";
case cudaErrorHostMemoryAlreadyRegistered:
return "cudaErrorHostMemoryAlreadyRegistered";
case cudaErrorHostMemoryNotRegistered:
return "cudaErrorHostMemoryNotRegistered";
/* Since CUDA 5.0 */
case cudaErrorOperatingSystem:
return "cudaErrorOperatingSystem";
case cudaErrorPeerAccessUnsupported:
return "cudaErrorPeerAccessUnsupported";
case cudaErrorLaunchMaxDepthExceeded:
return "cudaErrorLaunchMaxDepthExceeded";
case cudaErrorLaunchFileScopedTex:
return "cudaErrorLaunchFileScopedTex";
case cudaErrorLaunchFileScopedSurf:
return "cudaErrorLaunchFileScopedSurf";
case cudaErrorSyncDepthExceeded:
return "cudaErrorSyncDepthExceeded";
case cudaErrorLaunchPendingCountExceeded:
return "cudaErrorLaunchPendingCountExceeded";
case cudaErrorNotPermitted:
return "cudaErrorNotPermitted";
case cudaErrorNotSupported:
return "cudaErrorNotSupported";
/* Since CUDA 6.0 */
case cudaErrorHardwareStackError:
return "cudaErrorHardwareStackError";
case cudaErrorIllegalInstruction:
return "cudaErrorIllegalInstruction";
case cudaErrorMisalignedAddress:
return "cudaErrorMisalignedAddress";
case cudaErrorInvalidAddressSpace:
return "cudaErrorInvalidAddressSpace";
case cudaErrorInvalidPc:
return "cudaErrorInvalidPc";
case cudaErrorIllegalAddress:
return "cudaErrorIllegalAddress";
/* Since CUDA 6.5*/
case cudaErrorInvalidPtx:
return "cudaErrorInvalidPtx";
case cudaErrorInvalidGraphicsContext:
return "cudaErrorInvalidGraphicsContext";
case cudaErrorStartupFailure:
return "cudaErrorStartupFailure";
case cudaErrorApiFailureBase:
return "cudaErrorApiFailureBase";
}
return "<unknown>";
}
#endif
template< typename T >
void check(T result, char const *const func, const char *const file, int const line)
{
if (result)
{
fprintf(stderr, "CUDA error at %s:%d code=%d(%s) \"%s\" \n",
file, line, static_cast<unsigned int>(result), _cudaGetErrorEnum(result), func);
cudaDeviceReset();
// Make sure we call CUDA Device Reset before exiting
exit(EXIT_FAILURE);
}
}
int *pArgc = NULL;
char **pArgv = NULL;
#if CUDART_VERSION < 5000
// CUDA-C includes
#include <cuda.h>
// This function wraps the CUDA Driver API into a template function
template <class T>
inline void getCudaAttribute(T *attribute, CUdevice_attribute device_attribute, int device)
{
CUresult error = cuDeviceGetAttribute(attribute, device_attribute, device);
if (CUDA_SUCCESS != error) {
fprintf(stderr, "cuSafeCallNoSync() Driver API error = %04d from file <%s>, line %i.\n",
error, __FILE__, __LINE__);
// cudaDeviceReset causes the driver to clean up all state. While
// not mandatory in normal operation, it is good practice. It is also
// needed to ensure correct operation when the application is being
// profiled. Calling cudaDeviceReset causes all profile data to be
// flushed before the application exits
cudaDeviceReset();
exit(EXIT_FAILURE);
}
}
#endif /* CUDART_VERSION < 5000 */
// Beginning of GPU Architecture definitions
inline int ConvertSMVer2Cores(int major, int minor)
{
// Defines for GPU Architecture types (using the SM version to determine the # of cores per SM
typedef struct {
int SM; // 0xMm (hexadecimal notation), M = SM Major version, and m = SM minor version
int Cores;
} sSMtoCores;
sSMtoCores nGpuArchCoresPerSM[] = {
{ 0x20, 32 }, // Fermi Generation (SM 2.0) GF100 class
{ 0x21, 48 }, // Fermi Generation (SM 2.1) GF10x class
{ 0x30, 192}, // Kepler Generation (SM 3.0) GK10x class
{ 0x32, 192}, // Kepler Generation (SM 3.2) GK10x class
{ 0x35, 192}, // Kepler Generation (SM 3.5) GK11x class
{ 0x37, 192}, // Kepler Generation (SM 3.7) GK21x class
{ 0x50, 128}, // Maxwell Generation (SM 5.0) GM10x class
{ 0x52, 128}, // Maxwell Generation (SM 5.2) GM20x class
{ -1, -1 }
};
int index = 0;
while (nGpuArchCoresPerSM[index].SM != -1) {
if (nGpuArchCoresPerSM[index].SM == ((major << 4) + minor)) {
return nGpuArchCoresPerSM[index].Cores;
}
index++;
}
// If we don't find the values, we default use the previous one to run properly
printf("MapSMtoCores for SM %d.%d is undefined. Default to use %d Cores/SM\n", major, minor, nGpuArchCoresPerSM[index-1].Cores);
return nGpuArchCoresPerSM[index-1].Cores;
}
////////////////////////////////////////////////////////////////////////////////
// Program main
////////////////////////////////////////////////////////////////////////////////
int
main(int argc, char **argv)
{
pArgc = &argc;
pArgv = argv;
printf("%s Starting...\n\n", argv[0]);
printf(" CUDA Device Query (Runtime API) version (CUDART static linking)\n\n");
int deviceCount = 0;
cudaError_t error_id = cudaGetDeviceCount(&deviceCount);
if (error_id != cudaSuccess) {
printf("cudaGetDeviceCount failed: %s (%d)\n",
cudaGetErrorString(error_id), (int) error_id);
printf("Result = FAIL\n");
exit(EXIT_FAILURE);
}
// This function call returns 0 if there are no CUDA capable devices.
if (deviceCount == 0)
printf("There are no available device(s) that support CUDA\n");
else
printf("Detected %d CUDA Capable device(s)\n", deviceCount);
int dev, driverVersion = 0, runtimeVersion = 0;
for (dev = 0; dev < deviceCount; ++dev) {
cudaSetDevice(dev);
cudaDeviceProp deviceProp;
cudaGetDeviceProperties(&deviceProp, dev);
printf("\nDevice %d: \"%s\"\n", dev, deviceProp.name);
// Console log
cudaDriverGetVersion(&driverVersion);
cudaRuntimeGetVersion(&runtimeVersion);
printf(" CUDA Driver Version / Runtime Version %d.%d / %d.%d\n", driverVersion/1000, (driverVersion%100)/10, runtimeVersion/1000, (runtimeVersion%100)/10);
printf(" CUDA Capability Major/Minor version number: %d.%d\n", deviceProp.major, deviceProp.minor);
printf(" Total amount of global memory: %.0f MBytes (%llu bytes)\n",
(float)deviceProp.totalGlobalMem/1048576.0f, (unsigned long long) deviceProp.totalGlobalMem);
printf(" (%2d) Multiprocessors, (%3d) CUDA Cores/MP: %d CUDA Cores\n",
deviceProp.multiProcessorCount,
ConvertSMVer2Cores(deviceProp.major, deviceProp.minor),
ConvertSMVer2Cores(deviceProp.major, deviceProp.minor) * deviceProp.multiProcessorCount);
printf(" GPU Max Clock rate: %.0f MHz (%0.2f GHz)\n", deviceProp.clockRate * 1e-3f, deviceProp.clockRate * 1e-6f);
#if CUDART_VERSION >= 5000
// This is supported in CUDA 5.0 (runtime API device properties)
printf(" Memory Clock rate: %.0f Mhz\n", deviceProp.memoryClockRate * 1e-3f);
printf(" Memory Bus Width: %d-bit\n", deviceProp.memoryBusWidth);
if (deviceProp.l2CacheSize) {
printf(" L2 Cache Size: %d bytes\n", deviceProp.l2CacheSize);
}
#else
// This only available in CUDA 4.0-4.2 (but these were only exposed in the CUDA Driver API)
int memoryClock;
getCudaAttribute<int>(&memoryClock, CU_DEVICE_ATTRIBUTE_MEMORY_CLOCK_RATE, dev);
printf(" Memory Clock rate: %.0f Mhz\n", memoryClock * 1e-3f);
int memBusWidth;
getCudaAttribute<int>(&memBusWidth, CU_DEVICE_ATTRIBUTE_GLOBAL_MEMORY_BUS_WIDTH, dev);
printf(" Memory Bus Width: %d-bit\n", memBusWidth);
int L2CacheSize;
getCudaAttribute<int>(&L2CacheSize, CU_DEVICE_ATTRIBUTE_L2_CACHE_SIZE, dev);
if (L2CacheSize) {
printf(" L2 Cache Size: %d bytes\n", L2CacheSize);
}
#endif
printf(" Maximum Texture Dimension Size (x,y,z) 1D=(%d), 2D=(%d, %d), 3D=(%d, %d, %d)\n",
deviceProp.maxTexture1D , deviceProp.maxTexture2D[0], deviceProp.maxTexture2D[1],
deviceProp.maxTexture3D[0], deviceProp.maxTexture3D[1], deviceProp.maxTexture3D[2]);
printf(" Maximum Layered 1D Texture Size, (num) layers 1D=(%d), %d layers\n",
deviceProp.maxTexture1DLayered[0], deviceProp.maxTexture1DLayered[1]);
printf(" Maximum Layered 2D Texture Size, (num) layers 2D=(%d, %d), %d layers\n",
deviceProp.maxTexture2DLayered[0], deviceProp.maxTexture2DLayered[1], deviceProp.maxTexture2DLayered[2]);
printf(" Total amount of constant memory: %lu bytes\n", deviceProp.totalConstMem);
printf(" Total amount of shared memory per block: %lu bytes\n", deviceProp.sharedMemPerBlock);
printf(" Total number of registers available per block: %d\n", deviceProp.regsPerBlock);
printf(" Warp size: %d\n", deviceProp.warpSize);
printf(" Maximum number of threads per multiprocessor: %d\n", deviceProp.maxThreadsPerMultiProcessor);
printf(" Maximum number of threads per block: %d\n", deviceProp.maxThreadsPerBlock);
printf(" Max dimension size of a thread block (x,y,z): (%d, %d, %d)\n",
deviceProp.maxThreadsDim[0],
deviceProp.maxThreadsDim[1],
deviceProp.maxThreadsDim[2]);
printf(" Max dimension size of a grid size (x,y,z): (%d, %d, %d)\n",
deviceProp.maxGridSize[0],
deviceProp.maxGridSize[1],
deviceProp.maxGridSize[2]);
printf(" Maximum memory pitch: %lu bytes\n", deviceProp.memPitch);
printf(" Texture alignment: %lu bytes\n", deviceProp.textureAlignment);
printf(" Concurrent copy and kernel execution: %s with %d copy engine(s)\n", (deviceProp.deviceOverlap ? "Yes" : "No"), deviceProp.asyncEngineCount);
printf(" Run time limit on kernels: %s\n", deviceProp.kernelExecTimeoutEnabled ? "Yes" : "No");
printf(" Integrated GPU sharing Host Memory: %s\n", deviceProp.integrated ? "Yes" : "No");
printf(" Support host page-locked memory mapping: %s\n", deviceProp.canMapHostMemory ? "Yes" : "No");
printf(" Alignment requirement for Surfaces: %s\n", deviceProp.surfaceAlignment ? "Yes" : "No");
printf(" Device has ECC support: %s\n", deviceProp.ECCEnabled ? "Enabled" : "Disabled");
#if defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)
printf(" CUDA Device Driver Mode (TCC or WDDM): %s\n", deviceProp.tccDriver ? "TCC (Tesla Compute Cluster Driver)" : "WDDM (Windows Display Driver Model)");
#endif
printf(" Device supports Unified Addressing (UVA): %s\n", deviceProp.unifiedAddressing ? "Yes" : "No");
printf(" Device PCI Domain ID / Bus ID / location ID: %d / %d / %d\n", deviceProp.pciDomainID, deviceProp.pciBusID, deviceProp.pciDeviceID);
const char *sComputeMode[] = {
"Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)",
"Exclusive (only one host thread in one process is able to use ::cudaSetDevice() with this device)",
"Prohibited (no host thread can use ::cudaSetDevice() with this device)",
"Exclusive Process (many threads in one process is able to use ::cudaSetDevice() with this device)",
"Unknown",
NULL
};
printf(" Compute Mode:\n");
printf(" < %s >\n", sComputeMode[deviceProp.computeMode]);
}
// If there are 2 or more GPUs, query to determine whether RDMA is supported
if (deviceCount >= 2)
{
cudaDeviceProp prop[64];
int gpuid[64]; // we want to find the first two GPU's that can support P2P
int gpu_p2p_count = 0;
for (int i=0; i < deviceCount; i++)
{
checkCudaErrors(cudaGetDeviceProperties(&prop[i], i));
// Only boards based on Fermi or later can support P2P
if ((prop[i].major >= 2)
#if defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)
// on Windows (64-bit), the Tesla Compute Cluster driver for Windows must be enabled to support this
&& prop[i].tccDriver
#endif
)
{
// This is an array of P2P capable GPUs
gpuid[gpu_p2p_count++] = i;
}
}
// Show all the combinations of support P2P GPUs
int can_access_peer_0_1, can_access_peer_1_0;
if (gpu_p2p_count >= 2)
{
for (int i = 0; i < gpu_p2p_count-1; i++)
{
for (int j = 1; j < gpu_p2p_count; j++)
{
checkCudaErrors(cudaDeviceCanAccessPeer(&can_access_peer_0_1, gpuid[i], gpuid[j]));
printf("> Peer access from %s (GPU%d) -> %s (GPU%d) : %s\n", prop[gpuid[i]].name, gpuid[i],
prop[gpuid[j]].name, gpuid[j] ,
can_access_peer_0_1 ? "Yes" : "No");
}
}
for (int j = 1; j < gpu_p2p_count; j++)
{
for (int i = 0; i < gpu_p2p_count-1; i++)
{
checkCudaErrors(cudaDeviceCanAccessPeer(&can_access_peer_1_0, gpuid[j], gpuid[i]));
printf("> Peer access from %s (GPU%d) -> %s (GPU%d) : %s\n", prop[gpuid[j]].name, gpuid[j],
prop[gpuid[i]].name, gpuid[i] ,
can_access_peer_1_0 ? "Yes" : "No");
}
}
}
}
// csv masterlog info
// *****************************
// exe and CUDA driver name
printf("\n");
std::string sProfileString = "deviceQuery, CUDA Driver = CUDART";
char cTemp[128];
// driver version
sProfileString += ", CUDA Driver Version = ";
#if defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)
sprintf_s(cTemp, 10, "%d.%d", driverVersion/1000, (driverVersion%100)/10);
#else
sprintf(cTemp, "%d.%d", driverVersion/1000, (driverVersion%100)/10);
#endif
sProfileString += cTemp;
// Runtime version
sProfileString += ", CUDA Runtime Version = ";
#if defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)
sprintf_s(cTemp, 10, "%d.%d", runtimeVersion/1000, (runtimeVersion%100)/10);
#else
sprintf(cTemp, "%d.%d", runtimeVersion/1000, (runtimeVersion%100)/10);
#endif
sProfileString += cTemp;
// Device count
sProfileString += ", NumDevs = ";
#if defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)
sprintf_s(cTemp, 10, "%d", deviceCount);
#else
sprintf(cTemp, "%d", deviceCount);
#endif
sProfileString += cTemp;
// Print Out all device Names
for (dev = 0; dev < deviceCount; ++dev)
{
#if defined(WIN32) || defined(_WIN32) || defined(WIN64) || defined(_WIN64)
sprintf_s(cTemp, 13, ", Device%d = ", dev);
#else
sprintf(cTemp, ", Device%d = ", dev);
#endif
cudaDeviceProp deviceProp;
cudaGetDeviceProperties(&deviceProp, dev);
sProfileString += cTemp;
sProfileString += deviceProp.name;
}
sProfileString += "\n";
printf("%s", sProfileString.c_str());
printf("Result = PASS\n");
// finish
// cudaDeviceReset causes the driver to clean up all state. While
// not mandatory in normal operation, it is good practice. It is also
// needed to ensure correct operation when the application is being
// profiled. Calling cudaDeviceReset causes all profile data to be
// flushed before the application exits
cudaDeviceReset();
return 0;
}

View File

@@ -1,43 +0,0 @@
{
stdenv
, cudatoolkit
, cudaPackages
, autoAddDriverRunpath
, strace
}:
stdenv.mkDerivation (finalAttrs: {
name = "cudainfo";
src = ./.;
buildInputs = [
cudatoolkit # Required for nvcc
cudaPackages.cuda_cudart.static # Required for -lcudart_static
autoAddDriverRunpath
];
installPhase = ''
mkdir -p $out/bin
cp -a cudainfo $out/bin
'';
passthru.gpuCheck = stdenv.mkDerivation {
name = "cudainfo-test";
requiredSystemFeatures = [ "cuda" ];
dontBuild = true;
nativeCheckInputs = [
finalAttrs.finalPackage # The cudainfo package from above
strace # When it fails, it will show the trace
];
dontUnpack = true;
doCheck = true;
checkPhase = ''
if ! cudainfo; then
set -x
cudainfo=$(command -v cudainfo)
ldd $cudainfo
readelf -d $cudainfo
strace -f $cudainfo
set +x
fi
'';
installPhase = "touch $out";
};
})

View File

@@ -1,25 +0,0 @@
{
stdenv
}:
stdenv.mkDerivation rec {
name = "dummy";
src = null;
dontUnpack = true;
dontBuild = true;
programPath = "/bin/dummy";
installPhase = ''
mkdir -p $out/bin
cat > $out/bin/dummy <<EOF
#!/bin/sh
echo Hello world!
EOF
chmod +x $out/bin/dummy
'';
}

View File

@@ -1,13 +0,0 @@
diff --git a/src/merger/common/bfd_manager.c b/src/merger/common/bfd_manager.c
index 5f9dacf9..5231e3eb 100644
--- a/src/merger/common/bfd_manager.c
+++ b/src/merger/common/bfd_manager.c
@@ -225,7 +225,7 @@ asymbol **BFDmanager_getDefaultSymbols (void)
*
* @return No return value.
*/
-static void BFDmanager_findAddressInSection (bfd * abfd, asection * section, PTR data)
+static void BFDmanager_findAddressInSection (bfd * abfd, asection * section, void * data)
{
#if HAVE_BFD_GET_SECTION_SIZE || HAVE_BFD_SECTION_SIZE || HAVE_BFD_GET_SECTION_SIZE_BEFORE_RELOC
bfd_size_type size;


@@ -1,123 +0,0 @@
{ stdenv
, lib
, fetchFromGitHub
, boost
, libdwarf
, libelf
, libxml2
, libunwind
, papi
, binutils-unwrapped
, libiberty
, gfortran
, xml2
, which
, libbfd
, mpi ? null
, cuda ? null
, llvmPackages
, autoreconfHook
#, python3Packages
, installShellFiles
, symlinkJoin
, enablePapi ? stdenv.hostPlatform == stdenv.buildPlatform # Disabled when cross-compiling
}:
let
libdwarfBundle = symlinkJoin {
name = "libdwarfBundle";
paths = [ libdwarf.dev libdwarf.lib libdwarf.out ];
};
in
stdenv.mkDerivation rec {
pname = "extrae";
version = "4.0.1";
src = fetchFromGitHub {
owner = "bsc-performance-tools";
repo = "extrae";
rev = "${version}";
sha256 = "SlMYxNQXJ0Xg90HmpnotUR3tEPVVBXhk1NtEBJwGBR4=";
};
patches = [
# FIXME: Waiting for German to merge this patch. Still not in master; it was
# merged into the devel branch on 2023-03-01 (after 3 years), see:
# https://github.com/bsc-performance-tools/extrae/pull/45
./use-command.patch
# https://github.com/bsc-performance-tools/extrae/issues/71
./PTR.patch
];
enableParallelBuilding = true;
hardeningDisable = [ "all" ];
nativeBuildInputs = [ installShellFiles ];
buildInputs = [
autoreconfHook
gfortran
libunwind
binutils-unwrapped
boost
boost.dev
libiberty
mpi
xml2
which
libxml2.dev
libbfd
#python3Packages.sphinx
]
++ lib.optional stdenv.cc.isClang llvmPackages.openmp;
preConfigure = ''
configureFlagsArray=(
--enable-posix-clock
--with-binutils="${binutils-unwrapped} ${libiberty}"
--with-dwarf=${libdwarfBundle}
--with-elf=${libelf}
--with-boost=${boost.dev}
--enable-instrument-io
--enable-instrument-dynamic-memory
--without-memkind
--enable-merge-in-trace
--disable-online
--without-opencl
--enable-pebs-sampling
--enable-sampling
--with-unwind=${libunwind.dev}
--with-xml-prefix=${libxml2.dev}
${lib.optionalString enablePapi "--with-papi=${papi}"}
${if (mpi != null) then ''--with-mpi=${mpi}''
else ''--without-mpi''}
--without-dyninst)
'';
# Install the manuals only by hand, as we don't want to pull the complete
# LaTeX world
# FIXME: sphinx is broken
#postBuild = ''
# make -C docs man
#'';
#
#postInstall = ''
# installManPage docs/builds/man/*/*
#'';
# ++ (
# if (openmp)
# then [ "--enable-openmp" ]
# else []
# );
meta = {
homepage = "https://github.com/bsc-performance-tools/extrae";
description = "Instrumentation framework to generate execution traces of the most used parallel runtimes";
maintainers = [ ];
broken = true;
platforms = lib.platforms.linux;
license = lib.licenses.lgpl21Plus;
};
}
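Since the mpi argument above defaults to null, preConfigure picks between --with-mpi and --without-mpi at instantiation time. A minimal sketch of both variants (the ./extrae path and the choice of openmpi are illustrative assumptions):

let
  pkgs = import <nixpkgs> { };
in {
  # Serial build: preConfigure emits --without-mpi.
  extrae = pkgs.callPackage ./extrae { mpi = null; };
  # MPI build: preConfigure emits --with-mpi=<openmpi store path>.
  extrae-mpi = pkgs.callPackage ./extrae { mpi = pkgs.openmpi; };
}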


@@ -1,24 +0,0 @@
diff --git a/substitute b/substitute
index d5615606..82ca91a5 100755
--- a/substitute
+++ b/substitute
@@ -16,7 +16,7 @@ UNAME=`uname`
if [ "${UNAME}" = "Darwin" -o "${UNAME}" = "AIX" ] ; then
TMPFILE=substitute-$$
${SED} "s|${KEY}|${VALUE}|g" < ${FILE} >${TMPFILE}
- /bin/mv -f ${TMPFILE} ${FILE}
+ command mv -f ${TMPFILE} ${FILE}
else
${SED} "s|${KEY}|${VALUE}|g" -i ${FILE}
fi
diff --git a/substitute-all b/substitute-all
index 48c6b76a..eda7a0f2 100755
--- a/substitute-all
+++ b/substitute-all
@@ -23,5 +23,5 @@ fi
echo "Applying modification in ${PATHTOCHANGE} - Key = ${KEY} for value = ${VALUE}"
-/usr/bin/find ${PATHTOCHANGE} -type f -exec ${SCRIPT_LOCATION} "${SED}" "${KEY}" "${VALUE}" {} \;
+command find ${PATHTOCHANGE} -type f -exec ${SCRIPT_LOCATION} "${SED}" "${KEY}" "${VALUE}" {} \;


@@ -1,58 +0,0 @@
{ fetchurl, stdenv, lib, llvmPackages ? null, precision ? "double", perl, mpi }:
with lib;
assert stdenv.cc.isClang -> llvmPackages != null;
assert elem precision [ "single" "double" "long-double" "quad-precision" ];
let
version = "3.3.8";
withDoc = stdenv.cc.isGNU;
in
stdenv.mkDerivation {
name = "fftw-${precision}-${version}";
src = fetchurl {
urls = [
"http://fftw.org/fftw-${version}.tar.gz"
"ftp://ftp.fftw.org/pub/fftw/fftw-${version}.tar.gz"
];
sha256 = "00z3k8fq561wq2khssqg0kallk0504dzlx989x3vvicjdqpjc4v1";
};
outputs = [ "out" "dev" "man" ]
++ optional withDoc "info"; # it's dev-doc only
outputBin = "dev"; # fftw-wisdom
buildInputs = [ mpi ]
++ lib.optionals stdenv.cc.isClang [
# TODO: This may mismatch the LLVM version in the stdenv, see #79818.
llvmPackages.openmp
];
configureFlags =
[ "--enable-shared"
"--enable-threads"
"--enable-mpi"
"--disable-openmp"
]
++ optional (precision != "double") "--enable-${precision}"
# all x86_64 have sse2
# however, not all float sizes fit
++ optional (stdenv.isx86_64 && (precision == "single" || precision == "double") ) "--enable-sse2"
# doc generation causes Fortran wrapper generation which hard-codes gcc
++ optional (!withDoc) "--disable-doc";
enableParallelBuilding = true;
checkInputs = [ perl ];
meta = with lib; {
description = "Fastest Fourier Transform in the West library";
homepage = "http://www.fftw.org/";
license = licenses.gpl2Plus;
maintainers = [ maintainers.spwhitt ];
platforms = platforms.unix;
};
}
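The precision argument is asserted against the four values FFTW supports, so the same file can produce every variant. A sketch of building several precisions side by side (the ./fftw.nix path is an illustrative assumption):

let
  pkgs = import <nixpkgs> { };
  mkFftw = precision: pkgs.callPackage ./fftw.nix { inherit precision; };
in {
  fftw-single      = mkFftw "single";
  fftw-double      = mkFftw "double";
  fftw-long-double = mkFftw "long-double";
}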


@@ -1,64 +0,0 @@
{
stdenv
, lib
, fetchurl
, symlinkJoin
, slurm
, rdma-core
, autoconf
, automake
, libtool
, mpi
, rsync
, gfortran
}:
let
rdma-core-all = symlinkJoin {
name = "rdma-core-all";
paths = [ rdma-core.dev rdma-core.out ];
};
mpiAll = symlinkJoin {
name = "mpi-all";
paths = [ mpi.all ];
};
in
stdenv.mkDerivation rec {
pname = "GPI-2";
version = "tagaspi-2021.11";
src = fetchurl {
url = "https://pm.bsc.es/gitlab/interoperability/extern/GPI-2/-/archive/${version}/GPI-2-${version}.tar.gz";
hash = "sha256-eY2wpyTpnOXRoAcYoAP82Jq9Q7p5WwDpMj+f1vEX5zw=";
};
enableParallelBuilding = true;
patches = [ ./rdma-core.patch ./max-mem.patch ];
preConfigure = ''
patchShebangs autogen.sh
./autogen.sh
'';
configureFlags = [
"--with-infiniband=${rdma-core-all}"
"--with-mpi=${mpiAll}"
"--with-slurm"
"CFLAGS=-fPIC"
"CXXFLAGS=-fPIC"
];
buildInputs = [ slurm mpiAll rdma-core-all autoconf automake libtool rsync gfortran ];
hardeningDisable = [ "all" ];
meta = {
homepage = "https://pm.bsc.es/gitlab/interoperability/extern/GPI-2";
description = "GPI-2 extended for supporting Task-Aware GASPI (TAGASPI) library";
maintainers = with lib.maintainers.bsc; [ rarias ];
platforms = lib.platforms.linux;
license = lib.licenses.gpl3Plus;
};
}
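The two symlinkJoin wrappers above exist because ./configure style flags such as --with-infiniband=PREFIX expect headers and libraries under one prefix, while nixpkgs splits them across outputs. A stand-alone sketch of the same idiom (it assumes rdma-core keeps the dev/out output split referenced above):

let
  pkgs = import <nixpkgs> { };
in pkgs.symlinkJoin {
  # One tree whose include/ and lib/ come from the split rdma-core outputs.
  name = "rdma-core-all";
  paths = [ pkgs.rdma-core.dev pkgs.rdma-core.out ];
}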


@@ -1,10 +0,0 @@
--- a/tests/tests/segments/max_mem.c 2025-09-12 13:30:53.449353591 +0200
+++ b/tests/tests/segments/max_mem.c 2025-09-12 13:33:49.750352401 +0200
@@ -1,5 +1,7 @@
#include <test_utils.h>
+gaspi_size_t gaspi_get_system_mem (void);
+
/* Test allocates 45% of system memory and creates a segment that
large or if several ranks per node exist, divided among that
number */


@@ -1,12 +0,0 @@
--- a/src/devices/ib/GPI2_IB.h 2025-09-12 13:25:31.564181121 +0200
+++ b/src/devices/ib/GPI2_IB.h 2025-09-12 13:24:49.105422150 +0200
@@ -26,6 +26,9 @@ along with GPI-2. If not, see <http://ww
#include "GPI2_Dev.h"
+/* Missing prototype as driver.h is now private */
+int ibv_read_sysfs_file(const char *dir, const char *file, char *buf, size_t size);
+
#define GASPI_GID_INDEX (0)
#define PORT_LINK_UP (5)
#define MAX_INLINE_BYTES (128)


@@ -1,46 +0,0 @@
From 1454525f70b43a6957b7c9e1870e997368787da3 Mon Sep 17 00:00:00 2001
From: Samuel Dionne-Riel <samuel@dionne-riel.com>
Date: Fri, 8 Nov 2019 21:59:21 -0500
Subject: [PATCH] Fix cross-compilation by looking for `ar`.
---
Makefile.am | 2 +-
configure.ac | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/Makefile.am b/Makefile.am
index d18c49b8..b1b53338 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -494,7 +494,7 @@ CCC=@CXX@
# INSTALL_INFO
# LN_S
-AR=ar
+AR=@AR@
ETAGS=etags
ETAGSFLAGS=
# Flag that tells etags to assume C++.
diff --git a/configure.ac b/configure.ac
index 28e75f17..2449b9f5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -37,6 +37,7 @@ AC_CONFIG_AUX_DIR([build-aux])
AC_CONFIG_HEADERS([src/include/config.h:src/include/config.hin])
AC_CONFIG_SRCDIR([src/roff/groff/groff.cpp])
+AC_CONFIG_MACRO_DIR([m4])
AC_USE_SYSTEM_EXTENSIONS
@@ -72,6 +73,7 @@ GROFF_DOC_CHECK
GROFF_MAKEINFO
GROFF_TEXI2DVI
AC_PROG_RANLIB
+AC_CHECK_TOOL([AR], [ar], [ar])
GROFF_INSTALL_SH
GROFF_INSTALL_INFO
AC_PROG_INSTALL
--
2.23.0


@@ -1,127 +0,0 @@
{ stdenv, lib, fetchurl, perl
, ghostscript #for postscript and html output
, psutils, netpbm #for html output
, buildPackages
, autoreconfHook
, pkg-config
, texinfo
}:
stdenv.mkDerivation rec {
pname = "groff";
version = "1.22.4";
src = fetchurl {
url = "mirror://gnu/groff/${pname}-${version}.tar.gz";
sha256 = "14q2mldnr1vx0l9lqp9v2f6iww24gj28iyh4j2211hyynx67p3p7";
};
enableParallelBuilding = false;
patches = [
./0001-Fix-cross-compilation-by-looking-for-ar.patch
];
postPatch = lib.optionalString (psutils != null) ''
substituteInPlace src/preproc/html/pre-html.cpp \
--replace "psselect" "${psutils}/bin/psselect"
'' + lib.optionalString (netpbm != null) ''
substituteInPlace src/preproc/html/pre-html.cpp \
--replace "pnmcut" "${lib.getBin netpbm}/bin/pnmcut" \
--replace "pnmcrop" "${lib.getBin netpbm}/bin/pnmcrop" \
--replace "pnmtopng" "${lib.getBin netpbm}/bin/pnmtopng"
substituteInPlace tmac/www.tmac.in \
--replace "pnmcrop" "${lib.getBin netpbm}/bin/pnmcrop" \
--replace "pngtopnm" "${lib.getBin netpbm}/bin/pngtopnm" \
--replace "@PNMTOPS_NOSETPAGE@" "${lib.getBin netpbm}/bin/pnmtops -nosetpage"
'';
buildInputs = [ ghostscript psutils netpbm perl ];
nativeBuildInputs = [ autoreconfHook pkg-config texinfo ];
# Builds running without a chroot environment may detect the presence
# of /usr/X11 in the host system, leading to an impure build of the
# package. To avoid this issue, X11 support is explicitly disabled.
# Note: If we ever want to *enable* X11 support, then we'll probably
# have to pass "--with-appresdir", too.
configureFlags = [
"--without-x"
] ++ lib.optionals (ghostscript != null) [
"--with-gs=${ghostscript}/bin/gs"
] ++ lib.optionals (stdenv.buildPlatform != stdenv.hostPlatform) [
"ac_cv_path_PERL=${buildPackages.perl}/bin/perl"
];
makeFlags = lib.optionals (stdenv.buildPlatform != stdenv.hostPlatform) [
# Trick to get the build system find the proper 'native' groff
# http://www.mail-archive.com/bug-groff@gnu.org/msg01335.html
"GROFF_BIN_PATH=${buildPackages.groff}/bin"
"GROFFBIN=${buildPackages.groff}/bin/groff"
];
doCheck = true;
postInstall = ''
for f in 'man.local' 'mdoc.local'; do
cat '${./site.tmac}' >>"$out/share/groff/site-tmac/$f"
done
moveToOutput bin/gropdf $out
moveToOutput bin/pdfmom $out
moveToOutput bin/roff2text $out
moveToOutput bin/roff2pdf $out
moveToOutput bin/roff2ps $out
moveToOutput bin/roff2dvi $out
moveToOutput bin/roff2ps $out
moveToOutput bin/roff2html $out
moveToOutput bin/glilypond $out
moveToOutput bin/mmroff $out
moveToOutput bin/roff2x $out
moveToOutput bin/afmtodit $out
moveToOutput bin/gperl $out
moveToOutput bin/chem $out
moveToOutput share/groff/${version}/font/devpdf $out
# idk if this is needed, but Fedora does it
moveToOutput share/groff/${version}/tmac/pdf.tmac $out
moveToOutput bin/gpinyin $out
moveToOutput lib/groff/gpinyin $out
substituteInPlace $out/bin/gpinyin \
--replace $out/lib/groff/gpinyin $out/lib/groff/gpinyin
moveToOutput bin/groffer $out
moveToOutput lib/groff/groffer $out
substituteInPlace $out/bin/groffer \
--replace $out/lib/groff/groffer $out/lib/groff/groffer
moveToOutput bin/grog $out
moveToOutput lib/groff/grog $out
substituteInPlace $out/bin/grog \
--replace $out/lib/groff/grog $out/lib/groff/grog
'' + lib.optionalString (stdenv.buildPlatform != stdenv.hostPlatform) ''
find $out/ -type f -print0 | xargs --null sed -i 's|${buildPackages.perl}|${perl}|'
'';
meta = with lib; {
homepage = "https://www.gnu.org/software/groff/";
description = "GNU Troff, a typesetting package that reads plain text and produces formatted output";
license = licenses.gpl3Plus;
platforms = platforms.all;
maintainers = with maintainers; [ pSub ];
longDescription = ''
groff is the GNU implementation of troff, a document formatting
system. Included in this release are implementations of troff,
pic, eqn, tbl, grn, refer, -man, -mdoc, -mom, and -ms macros,
and drivers for PostScript, TeX dvi format, HP LaserJet 4
printers, Canon CAPSL printers, HTML and XHTML format (beta
status), and typewriter-like devices. Also included is a
modified version of the Berkeley -me macros, the enhanced
version gxditview of the X11 xditview previewer, and an
implementation of the -mm macros.
'';
};
}
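The GROFF_BIN_PATH/GROFFBIN makeFlags above only apply when buildPlatform and hostPlatform differ, steering the build to a natively built groff. A sketch of a cross build that would take that path (the aarch64 target is an arbitrary illustration, not something this repository requires):

let
  pkgs = import <nixpkgs> { };
in
  # buildPlatform != hostPlatform here, so the cross-only makeFlags and the
  # final perl path rewrite in postInstall are exercised.
  pkgs.pkgsCross.aarch64-multiplatform.callPackage ./groff { }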

Some files were not shown because too many files have changed in this diff.