Update to nixpkgs 25.11 (Xantusia) #218
Compiler changes:
Broken:
Evaluation warnings when building hut:
Force-pushed from 111fcc61d8 to 408b974433
Force-pushed from 408b974433 to 00a7122768
Changed title from "WIP: nixpkgs 25.11" to "Update to nixpkgs 25.11 (Xantusia)"
Force-pushed from 00a7122768 to 1d3bda33a0
Thanks, looks good! I would need to upgrade all machines to test it (including Fox, due to SLURM), so I would rather do it after Christmas unless we need some fixes before then. We have a custom AMD driver in Fox; could you also build the Fox configuration to check that it still compiles?
CC: @varcila you were doing some experiments in Fox and this will upgrade the kernel (but not your development shell).
Thanks for the CC. FYI, I have finished most of the batch of jobs I needed to run this year, so I will most probably not use Fox until the 2nd of January, when I come back from holidays. Just to say that I have no preference for when the upgrade is done :)
amd-uprof-driver is broken: https://jungle.bsc.es/p/abonerib/B8gcl28j.log
It is caused by the definitions of rdmsrq and wrmsrq in inc/PwrProfAsm.h, which now collide with the kernel's own: https://lkml.org/lkml/2025/4/9/1709 . Grepping the driver source, they don't seem to be used anywhere, and since amd-uprof comes from a binary blob, I think it should be safe to remove them?
@varcila I have added a patch to comment them out and now the fox config builds.
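For reference, a patch like that is usually wired into the driver build with an overlay. A minimal sketch, assuming a hypothetical attribute name `amd-uprof-driver` and patch filename (neither taken from this repo):

```nix
# Sketch of a nixpkgs overlay applying a local patch to the driver.
# The attribute path and patch filename below are illustrative assumptions.
final: prev: {
  amd-uprof-driver = prev.amd-uprof-driver.overrideAttrs (old: {
    patches = (old.patches or [ ]) ++ [
      # Comments out the rdmsrq/wrmsrq definitions in inc/PwrProfAsm.h
      # that collide with the kernel's own symbols.
      ./amd-uprof-remove-rdmsrq.patch
    ];
  });
}
```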
No rush from my side, we can merge it once we come back from vacations.
We can try, I think it makes sense to remove them.
Force-pushed from ee9af71da0 to 14fe50fc2a
Fixed the InfiniBand interface name in hut and switched to 25.11. I have also updated the nixpkgs commit so we pick up the backported fixes. Everything else seems to be working fine so far.
I will propagate the upgrade to the rest of machines in the following days.
Upgraded bay and lake2 (Ceph storage). After rebooting lake2, three (of four) NVMe disks are missing:
Let's see if rebooting it fixes it.
They are back:
Something must be going on with the BIOS/BMC boot, as the PCI address has changed for the nvme0 disk. I don't think it is related to the upgrade. Ceph is fine and recovering now:
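A PCI renumbering like this can be spotted by listing which PCI address each NVMe controller sits on. A diagnostic sketch, assuming a Linux host with the nvme-cli package installed:

```shell
# List NVMe namespaces (requires the nvme-cli package).
nvme list

# Print the PCI address each NVMe controller is attached to, to
# detect controllers that moved or disappeared after a reboot.
for c in /sys/class/nvme/nvme*; do
  echo "$c -> $(basename "$(readlink -f "$c/device")")"
done
```

Saving this output before a reboot makes it easy to diff afterwards.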
Force-pushed from 7686a75fd5 to 4a6e36c7e9
@rarias Can we delay the upgrade of fox until the 17th of January? One day after the WAMTA deadline; turns out getting results never ends.
Sure, I will leave apex, fox, owl1 and owl2 as-is until after the 17th, as they all need SLURM to be upgraded at the same time.
Raccoon and tent (including this Gitea service) have just been upgraded; I haven't seen anything broken yet.
Fox, owl1, owl2 and apex upgraded, no problems so far.
Force-pushed from fcfee6c674 to 2577f6344b
Force-pushed from 2577f6344b to dda6a66782