Merge commit 'a8ba803d23cef8d3ef59bffb97122bbba6de9818' into HEAD

This commit is contained in:
Luke Granger-Brown 2025-03-24 22:30:08 +00:00
commit 5aad30f248
5339 changed files with 98317 additions and 249725 deletions
third_party/nixpkgs
.git-blame-ignore-revs
.github
ci
doc
lib
maintainers
nixos


@ -238,3 +238,6 @@ e0fe216f4912dd88a021d12a44155fd2cfeb31c8
# nixos/movim: format with nixfmt-rfc-style
43c1654cae47cbf987cb63758c06245fa95c1e3b
# nixos/iso-image.nix: nixfmt
da9a092c34cef6947d7aee2b134f61df45171631


@ -102,6 +102,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -116,6 +116,7 @@ body:
If this issue is related to the Darwin packaging architecture as a whole, or is related to the core Darwin frameworks, consider mentioning the `@NixOS/darwin-core` team.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -106,6 +106,7 @@ body:
If in doubt, check `git blame` for whoever last touched the module, or check the associated package's maintainers. Please add the mentions above the `---` characters.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -109,6 +109,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -82,6 +82,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -62,6 +62,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -64,6 +64,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -48,6 +48,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -121,6 +121,7 @@ body:
Please mention the people who are in the **Maintainers** list of the offending package. This is done by searching for the package on the [NixOS Package Search](https://search.nixos.org/packages) and mentioning the people listed under **Maintainers** by prefixing their GitHub usernames with an '@' character. Please add the mentions above the `---` characters in the template below.
value: |
---
**Note for maintainers:** Please tag this issue in your pull request description. (i.e. `Resolves #ISSUE`.)


@ -47,7 +47,7 @@ jobs:
steps:
- uses: cachix/install-nix-action@08dcb3a5e62fa31e2da3d490afc4176ef55ecd72 # v30
- uses: cachix/cachix-action@ad2ddac53f961de1989924296a1f236fcfbaa4fc # v15
- uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci


@ -10,6 +10,9 @@ on:
# the release notes and some css and js files from there.
# See nixos/doc/manual/default.nix
- "doc/**"
# Build when something in lib changes
# Since the lib functions are used to 'massage' the options before producing the manual
- "lib/**"
permissions: {}
@ -26,7 +29,7 @@ jobs:
with:
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@ad2ddac53f961de1989924296a1f236fcfbaa4fc # v15
- uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16
if: github.repository_owner == 'NixOS'
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.


@ -24,7 +24,7 @@ jobs:
with:
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@ad2ddac53f961de1989924296a1f236fcfbaa4fc # v15
- uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16
if: github.repository_owner == 'NixOS'
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.


@ -129,6 +129,9 @@ nixos/modules/installer/tools/nix-fallback-paths.nix @NixOS/nix-team @raitobeza
# Systemd-boot
/nixos/modules/system/boot/loader/systemd-boot @JulienMalka
# Limine
/nixos/modules/system/boot/loader/limine @lzcunt @phip1611 @programmerlexi
# Images and installer media
/nixos/modules/profiles/installation-device.nix @ElvishJerricco
/nixos/modules/installer/cd-dvd/ @ElvishJerricco


@ -1,4 +1,4 @@
{
"rev": "5757bbb8bd7c0630a0cc4bb19c47e588db30b97c",
"sha256": "0px0lr7ad2zrws400507c9w5nnaffz9mp9hqssm64icdm6f6h0fz"
"rev": "573c650e8a14b2faa0041645ab18aed7e60f0c9a",
"sha256": "0qg99zj0gb0pc6sjlkmwhk1c1xz14qxmk6gamgfmcxpsfdp5vn72"
}


@ -108,6 +108,7 @@ A few markups for other kinds of literals are also available:
These literal kinds are used mostly in NixOS option documentation.
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point), though the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.
They are handled by `myst_role` defined per renderer. <!-- reverse references in code -->
#### Admonitions


@ -0,0 +1,152 @@
# COSMIC {#sec-language-cosmic}
## Packaging COSMIC applications {#ssec-cosmic-packaging}
COSMIC (Computer Operating System Main Interface Components) is a desktop environment developed by
System76, primarily for the Pop!_OS Linux distribution. Applications in the COSMIC ecosystem are
written in Rust and use libcosmic, which builds on the Iced GUI framework. This section explains
how to properly package and integrate COSMIC applications within Nix.
### libcosmicAppHook {#ssec-cosmic-libcosmic-app-hook}
The `libcosmicAppHook` is a setup hook that helps with this by automatically configuring
and wrapping applications based on libcosmic. It handles many common requirements like:
- Setting up proper linking for libraries that may be dlopen'd by libcosmic/iced apps
- Configuring XDG paths for settings schemas, icons, and other resources
- Managing Vergen environment variables for build-time information
- Setting up Rust linker flags for specific libraries
To use the hook, simply add it to your package's `nativeBuildInputs`:
```nix
{
lib,
rustPlatform,
libcosmicAppHook,
}:
rustPlatform.buildRustPackage {
# ...
nativeBuildInputs = [ libcosmicAppHook ];
# ...
}
```
### Settings fallback {#ssec-cosmic-settings-fallback}
COSMIC applications use libcosmic's UI components, which may need access to theme settings. The
`cosmic-settings` package provides default theme settings as a fallback in its `share` directory.
By default, `libcosmicAppHook` includes this fallback path in `XDG_DATA_DIRS`, ensuring that COSMIC
applications will have access to theme settings even if they aren't available elsewhere in the
system.
This fallback behavior can be disabled by setting `includeSettings = false` when including the hook:
```nix
{
lib,
rustPlatform,
libcosmicAppHook,
}:
let
# Get build-time version of libcosmicAppHook
libcosmicAppHook' = (libcosmicAppHook.__spliced.buildHost or libcosmicAppHook).override {
includeSettings = false;
};
in
rustPlatform.buildRustPackage {
# ...
nativeBuildInputs = [ libcosmicAppHook' ];
# ...
}
```
Note that `cosmic-settings` is a separate application and not a part of the libcosmic settings
system itself. It's included by default in `libcosmicAppHook` only to provide these fallback theme
settings.
### Icons {#ssec-cosmic-icons}
COSMIC applications can use icons from the COSMIC icon theme. While COSMIC applications can build
and run without these icons, they would be missing visual elements. The `libcosmicAppHook`
automatically includes `cosmic-icons` in the wrapped application's `XDG_DATA_DIRS` as a fallback,
ensuring that the application has access to its required icons even if the system doesn't have the
COSMIC icon theme installed globally.
Unlike the `cosmic-settings` fallback, the `cosmic-icons` fallback cannot be removed or disabled, as
it is essential for COSMIC applications to have access to these icons for proper visual rendering.
### Runtime Libraries {#ssec-cosmic-runtime-libraries}
COSMIC applications built on libcosmic and Iced require several runtime libraries that are dlopen'd
rather than linked directly. The `libcosmicAppHook` ensures that these libraries are correctly
linked by setting appropriate Rust linker flags. The libraries handled include:
- Graphics libraries (EGL, Vulkan)
- Input libraries (xkbcommon)
- Display server protocols (Wayland, X11)
This ensures that the applications will work correctly at runtime, even though they use dynamic
loading for these dependencies.
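If an application additionally dlopens a library that the hook does not cover, one possible approach (a sketch only, not an officially documented pattern) is to expose it through the wrapper arguments described in the next section, using `makeWrapper`'s `--prefix` flag. Here `libGL` is purely an illustrative choice:
```nix
{
  lib,
  rustPlatform,
  libcosmicAppHook,
  libGL, # illustrative extra runtime library, not something the hook requires
}:
rustPlatform.buildRustPackage {
  # ...
  nativeBuildInputs = [ libcosmicAppHook ];
  preFixup = ''
    # Make the extra library discoverable by the generated wrapper at runtime.
    libcosmicAppWrapperArgs+=(--prefix LD_LIBRARY_PATH : ${lib.makeLibraryPath [ libGL ]})
  '';
  # ...
}
```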
### Adding custom wrapper arguments {#ssec-cosmic-custom-wrapper-args}
You can pass additional arguments to the wrapper using `libcosmicAppWrapperArgs` in the `preFixup` hook:
```nix
{
lib,
rustPlatform,
libcosmicAppHook,
}:
rustPlatform.buildRustPackage {
# ...
preFixup = ''
libcosmicAppWrapperArgs+=(--set-default ENVIRONMENT_VARIABLE VALUE)
'';
# ...
}
```
## Frequently encountered issues {#ssec-cosmic-common-issues}
### Setting up Vergen environment variables {#ssec-cosmic-common-issues-vergen}
Many COSMIC applications use the Vergen Rust crate for build-time information. The `libcosmicAppHook`
automatically sets up the `VERGEN_GIT_COMMIT_DATE` environment variable based on `SOURCE_DATE_EPOCH`
to ensure reproducible builds.
However, some applications may explicitly require additional Vergen environment variables.
Without these properly set, you may encounter build failures with errors like:
```
> cargo:rerun-if-env-changed=VERGEN_GIT_COMMIT_DATE
> cargo:rerun-if-env-changed=VERGEN_GIT_SHA
>
> --- stderr
> Error: no suitable 'git' command found!
> warning: build failed, waiting for other jobs to finish...
```
While `libcosmicAppHook` handles `VERGEN_GIT_COMMIT_DATE`, you may need to explicitly set other
variables. For applications that require these variables, you should set them directly in the
package definition:
```nix
{
lib,
rustPlatform,
libcosmicAppHook,
}:
rustPlatform.buildRustPackage {
# ...
env = {
VERGEN_GIT_COMMIT_DATE = "2025-01-01";
VERGEN_GIT_SHA = "0000000000000000000000000000000000000000"; # SHA-1 hash of the commit
};
# ...
}
```
Not all COSMIC applications require these variables, but for those that do, setting them explicitly
will prevent build failures.


@ -58,6 +58,7 @@ beam.section.md
bower.section.md
chicken.section.md
coq.section.md
cosmic.section.md
crystal.section.md
cuda.section.md
cuelang.section.md


@ -10,6 +10,6 @@ The NixOS desktop or other non-headless configurations are the primary target fo
## Nix on GNU/Linux {#nix-on-gnulinux}
If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
For proprietary video drivers, you might have luck with also adding the corresponding video driver package.
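As a concrete sketch of that setup (assuming a `<nixpkgs>` channel is available; the file name and the launched program are placeholders), a small `mkShell` can prepend the Nixpkgs GL userspace libraries before starting the program:
```nix
# gl-shell.nix -- hypothetical helper for non-NixOS desktops
with import <nixpkgs> { };
mkShell {
  shellHook = ''
    # Prepend the Nixpkgs libglvnd and mesa libraries, as described above.
    export LD_LIBRARY_PATH=${lib.makeLibraryPath [ libglvnd mesa ]}:$LD_LIBRARY_PATH
  '';
}
```
It could then be used as `nix-shell gl-shell.nix --run some-opengl-program`.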


@ -62,6 +62,9 @@
"sec-build-helper-extendMkDerivation": [
"index.html#sec-build-helper-extendMkDerivation"
],
"sec-language-cosmic": [
"index.html#sec-language-cosmic"
],
"sec-modify-via-packageOverrides": [
"index.html#sec-modify-via-packageOverrides"
],
@ -317,6 +320,30 @@
"sec-tools-of-stdenv": [
"index.html#sec-tools-of-stdenv"
],
"ssec-cosmic-common-issues": [
"index.html#ssec-cosmic-common-issues"
],
"ssec-cosmic-common-issues-vergen": [
"index.html#ssec-cosmic-common-issues-vergen"
],
"ssec-cosmic-custom-wrapper-args": [
"index.html#ssec-cosmic-custom-wrapper-args"
],
"ssec-cosmic-icons": [
"index.html#ssec-cosmic-icons"
],
"ssec-cosmic-libcosmic-app-hook": [
"index.html#ssec-cosmic-libcosmic-app-hook"
],
"ssec-cosmic-packaging": [
"index.html#ssec-cosmic-packaging"
],
"ssec-cosmic-runtime-libraries": [
"index.html#ssec-cosmic-runtime-libraries"
],
"ssec-cosmic-settings-fallback": [
"index.html#ssec-cosmic-settings-fallback"
],
"ssec-stdenv-dependencies": [
"index.html#ssec-stdenv-dependencies"
],


@ -36,6 +36,8 @@
- NetBox version 4.0.X available as `netbox_4_0` was removed. Please upgrade to `4.2`.
- `i3status-rust`-package no longer enables `notmuch` by default. It can be enabled via `withNotmuch`.
- Default ICU version updated from 74 to 76
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
@ -46,7 +48,7 @@
### NexusMods.App upgraded {#sec-nixpkgs-release-25.05-incompatibilities-nexusmods-app-upgraded}
- `nexusmods-app` has been upgraded from version 0.6.3 to 0.7.3.
- `nexusmods-app` has been upgraded from version 0.6.3 to 0.8.2.
- Before upgrading, you **must reset all app state** (mods, games, settings, etc). NexusMods.App will crash if any state from a version older than 0.7.0 is still present.


@ -140,11 +140,6 @@ lib.mapAttrs mkLicense ({
fullName = "Apache License 2.0";
};
asl20-llvm = {
spdxId = "Apache-2.0 WITH LLVM-exception";
fullName = "Apache License 2.0 with LLVM Exceptions";
};
bitstreamVera = {
spdxId = "Bitstream-Vera";
fullName = "Bitstream Vera Font License";
@ -220,6 +215,11 @@ lib.mapAttrs mkLicense ({
fullName = "Lawrence Berkeley National Labs BSD variant license";
};
bsdAxisNoDisclaimerUnmodified = {
fullName = "BSD-Axis without Warranty Disclaimer with Unmodified requirement";
url = "https://scancode-licensedb.aboutcode.org/bsd-no-disclaimer-unmodified.html";
};
bsdOriginal = {
spdxId = "BSD-4-Clause";
fullName = ''BSD 4-clause "Original" or "Old" License'';
@ -854,6 +854,11 @@ lib.mapAttrs mkLicense ({
url = "https://opensource.franz.com/preamble.html";
};
llvm-exception = {
spdxId = "LLVM-exception";
fullName = "LLVM Exception"; # LLVM exceptions to the Apache 2.0 License
};
lppl1 = {
spdxId = "LPPL-1.0";
fullName = "LaTeX Project Public License v1.0";


@ -159,7 +159,13 @@ let
# but this is not fully specified, so let's tie this too much to the currently implemented concept of store paths.
# Similar reasoning applies to the validity of the name part.
# We care more about discerning store path-ness on realistic values. Making it airtight would be fragile and slow.
&& match ".{32}-.+" (elemAt components storeDirLength) != null;
&& match ".{32}-.+" (elemAt components storeDirLength) != null
# alternatively match contentaddressed derivations, which _currently_ do
# not have a store directory prefix.
# This is a workaround for https://github.com/NixOS/nix/issues/12361 which
# was needed during the experimental phase of ca-derivations and should be
# removed once the issue has been resolved.
|| match "[0-9a-z]{52}" (head components) != null;
in
# No rec! Add dependencies on this file at the top.


@ -137,6 +137,16 @@ let
expected = true;
};
# Test paths for contentaddressed derivations
testHasStorePathPrefixExample7 = {
expr = hasStorePathPrefix (/. + "/1121rp0gvr1qya7hvy925g5kjwg66acz6sn1ra1hca09f1z5dsab");
expected = true;
};
testHasStorePathPrefixExample8 = {
expr = hasStorePathPrefix (/. + "/1121rp0gvr1qya7hvy925g5kjwg66acz6sn1ra1hca09f1z5dsab/foo/bar");
expected = true;
};
# Test examples from the lib.path.subpath.isValid documentation
testSubpathIsValidExample1 = {
expr = subpath.isValid null;


@ -579,6 +579,17 @@
githubId = 50264672;
name = "Adam Freeth";
};
adamperkowski = {
name = "Adam Perkowski";
email = "adas1per@protonmail.com";
matrix = "@xx0a_q:matrix.org";
github = "adamperkowski";
githubId = 75480869;
keys = [
{ fingerprint = "00F6 1623 FB56 BC5B B709 4E63 4CE6 C117 2DF6 BE79"; }
{ fingerprint = "5A53 0832 DA91 20B0 CA57 DDB6 7CBD B58E CF1D 3478"; }
];
};
adamt = {
email = "mail@adamtulinius.dk";
github = "adamtulinius";
@ -1185,6 +1196,18 @@
githubId = 30437811;
name = "Alex Andrews";
};
alikindsys = {
email = "alice@blocovermelho.org";
github = "alikindsys";
githubId = 36565196;
name = "Alikind System";
keys = [
{
fingerprint = "7D31 15DC D912 C15A 2781 F7BB 511C B44B C752 2A89";
}
];
};
alirezameskin = {
email = "alireza.meskin@gmail.com";
github = "alirezameskin";
@ -1825,6 +1848,13 @@
githubId = 8436007;
name = "Aria Edmonds";
};
arbel-arad = {
email = "arbel@spacetime.technology";
github = "arbel-arad";
githubId = 65590498;
matrix = "@arbel:matrix.spacetime.technology";
name = "Arbel Arad";
};
arcadio = {
email = "arc@well.ox.ac.uk";
github = "arcadio";
@ -2228,6 +2258,13 @@
name = "tali auster";
matrix = "@atalii:matrix.org";
};
atar13 = {
name = "Anthony Tarbinian";
email = "atar137h@gmail.com";
github = "atar13";
githubId = 42757207;
matrix = "@atar13:matrix.org";
};
ataraxiasjel = {
email = "nix@ataraxiadev.com";
github = "AtaraxiaSjel";
@ -2444,6 +2481,12 @@
githubId = 206242;
name = "Andreas Wiese";
};
awwpotato = {
email = "awwpotato@voidq.com";
github = "awwpotato";
githubId = 153149335;
name = "awwpotato";
};
axertheaxe = {
email = "axertheaxe@proton.me";
github = "AxerTheAxe";
@ -2996,6 +3039,14 @@
githubId = 727571;
keys = [ { fingerprint = "AAD4 3B70 A504 9675 CFC8 B101 BAFD 205D 5FA2 B329"; } ];
};
berrij = {
email = "jonathan@berrisch.biz";
matrix = "@berrij:fairydust.space";
name = "Jonathan Berrisch";
github = "BerriJ";
githubId = 37799358;
keys = [ { fingerprint = "42 B6 CC90 6 A91 EA4F 8 A7E 315 B 30 DC 5398 152 C 5310"; } ];
};
berryp = {
email = "berryphillips@gmail.com";
github = "berryp";
@ -3181,6 +3232,12 @@
githubId = 77934086;
keys = [ { fingerprint = "4CA3 48F6 8FE1 1777 8EDA 3860 B9A2 C1B0 25EC 2C55"; } ];
};
blenderfreaky = {
name = "blenderfreaky";
email = "nix@blenderfreaky.de";
github = "blenderfreaky";
githubId = 14351657;
};
blinry = {
name = "blinry";
email = "mail@blinry.org";
@ -3570,6 +3627,12 @@
githubId = 32319131;
name = "Brett L";
};
bubblepipe = {
email = "bubblepipe42@gmail.com";
github = "bubblepipe";
githubId = 30717258;
name = "bubblepipe";
};
buckley310 = {
email = "sean.bck@gmail.com";
matrix = "@buckley310:matrix.org";
@ -4089,6 +4152,12 @@
name = "ChaosAttractor";
keys = [ { fingerprint = "A137 4415 DB7C 6439 10EA 5BF1 0FEE 4E47 5940 E125"; } ];
};
charain = {
email = "charain_li@outlook.com";
github = "chai-yuan";
githubId = 42235952;
name = "charain";
};
charB66 = {
email = "nix.disparate221@passinbox.com";
github = "charB66";
@ -4343,6 +4412,12 @@
github = "ciferkey";
githubId = 101422;
};
ciflire = {
name = "Léo Vesse";
email = "leovesse@gmail.com";
github = "Ciflire";
githubId = 39668077;
};
cig0 = {
name = "Martín Cigorraga";
email = "cig0.github@gmail.com";
@ -4448,6 +4523,12 @@
githubId = 71959829;
name = "Cleeyv";
};
clementpoiret = {
email = "poiret.clement@outlook.fr";
github = "clementpoiret";
githubId = 10899984;
name = "Clement POIRET";
};
clemjvdm = {
email = "clement.jvdm@gmail.com";
github = "clemjvdm";
@ -5185,6 +5266,12 @@
githubId = 245394;
name = "Hannu Hartikainen";
};
dandedotdev = {
email = "contact@dande.dev";
github = "dandedotdev";
githubId = 106054083;
name = "Dandelion Huang";
};
dandellion = {
email = "daniel@dodsorf.as";
matrix = "@dandellion:dodsorf.as";
@ -5278,7 +5365,7 @@
};
danth = {
name = "Daniel Thwaites";
email = "danthwaites30@btinternet.com";
email = "danth@danth.me";
matrix = "@danth:danth.me";
github = "danth";
githubId = 28959268;
@ -6466,6 +6553,13 @@
name = "Duncan Dean";
keys = [ { fingerprint = "9484 44FC E03B 05BA 5AB0 591E C37B 1C1D 44C7 86EE"; } ];
};
DutchGerman = {
name = "Stefan Visser";
email = "stefan.visser@apm-ecampus.de";
github = "DutchGerman";
githubId = 60694691;
keys = [ { fingerprint = "A7C9 3DC7 E891 046A 980F 2063 F222 A13B 2053 27A5"; } ];
};
dvaerum = {
email = "nixpkgs-maintainer@varum.dk";
github = "dvaerum";
@ -6940,6 +7034,11 @@
github = "EmanuelM153";
githubId = 134736553;
};
emaryn = {
name = "emaryn";
github = "emaryn";
githubId = 197520219;
};
emattiza = {
email = "nix@mattiza.dev";
github = "emattiza";
@ -8377,6 +8476,11 @@
githubId = 293586;
name = "Adam Gamble";
};
gamedungeon = {
github = "GameDungeon";
githubId = 60719255;
name = "gamedungeon";
};
gangaram = {
email = "Ganga.Ram@tii.ae";
github = "gangaram-tii";
@ -8474,12 +8578,6 @@
githubId = 34658064;
name = "Grace Dinh";
};
gebner = {
email = "gebner@gebner.org";
github = "gebner";
githubId = 313929;
name = "Gabriel Ebner";
};
geluk = {
email = "johan+nix@geluk.io";
github = "geluk";
@ -9028,6 +9126,12 @@
githubId = 39066502;
name = "Guekka";
};
guelakais = {
email = "koroyeldiores@gmail.com";
github = "Guelakais";
githubId = 76840985;
name = "GueLaKais";
};
guibert = {
email = "david.guibert@gmail.com";
github = "dguibert";
@ -9101,6 +9205,17 @@
github = "gytis-ivaskevicius";
githubId = 23264966;
};
GZGavinZhao = {
name = "Gavin Zhao";
github = "GZGavinZhao";
githubId = 74938940;
};
h3cth0r = {
name = "Hector Miranda";
email = "hector.miranda@tec.mx";
github = "h3cth0r";
githubId = 43997408;
};
h7x4 = {
name = "h7x4";
email = "h7x4@nani.wtf";
@ -9140,6 +9255,12 @@
githubId = 1498782;
name = "Jesse Haber-Kucharsky";
};
hakujin = {
email = "colin@hakuj.in";
github = "hakujin";
githubId = 2192042;
name = "Colin King";
};
hamburger1984 = {
email = "hamburger1984@gmail.com";
github = "hamburger1984";
@ -9228,6 +9349,12 @@
githubId = 33523827;
name = "Harrison Thorne";
};
harryposner = {
email = "nixpkgs@harryposner.com";
github = "harryposner";
githubId = 23534120;
name = "Harry Posner";
};
haruki7049 = {
email = "tontonkirikiri@gmail.com";
github = "haruki7049";
@ -9286,6 +9413,12 @@
githubId = 1379411;
name = "Georg Haas";
};
haylin = {
email = "me@haylinmoore.com";
github = "haylinmoore";
githubId = 8162992;
name = "Haylin Moore";
};
hbjydev = {
email = "hayden@kuraudo.io";
github = "hbjydev";
@ -9414,6 +9547,12 @@
githubId = 49935860;
name = "Henri Rosten";
};
henrispriet = {
email = "henri.spriet@gmail.com";
github = "henrispriet";
githubId = 36509362;
name = "Henri Spriet";
};
henrytill = {
email = "henrytill@gmail.com";
github = "henrytill";
@ -9690,6 +9829,12 @@
githubId = 39689;
name = "Hugo Tavares Reis";
};
httprafa = {
email = "rafael.kienitz@gmail.com";
github = "HttpRafa";
githubId = 60099368;
name = "Rafael Kienitz";
};
huantian = {
name = "David Li";
email = "davidtianli@gmail.com";
@ -10403,6 +10548,12 @@
githubId = 94313;
name = "Xianyi Lin";
};
izelnakri = {
email = "contact@izelnakri.com";
github = "izelnakri";
githubId = 1190931;
name = "Izel Nakri";
};
izorkin = {
email = "Izorkin@gmail.com";
github = "Izorkin";
@ -11227,6 +11378,12 @@
{ fingerprint = "816D 23F5 E672 EC58 7674 4A73 197F 9A63 2D13 9E30"; }
];
};
j-mendez = {
email = "jeff@a11ywatch.com";
github = "j-mendez";
githubId = 8095978;
name = "j-mendez";
};
jmendyk = {
email = "jakub@ndyk.me";
github = "JMendyk";
@ -11298,6 +11455,13 @@
githubId = 22916782;
name = "Joan Massachs";
};
joaomoreira = {
matrix = "@joaomoreira:matrix.org";
github = "joaoymoreira";
githubId = 151087767;
name = "João Moreira";
keys = [ { fingerprint = "F457 0A3A 5F89 22F8 F572 E075 EF8B F2C8 C5F4 097D"; } ];
};
joaquintrinanes = {
email = "hi@joaquint.io";
github = "JoaquinTrinanes";
@ -13209,6 +13373,12 @@
name = "Jakob Leifhelm";
keys = [ { fingerprint = "4A82 F68D AC07 9FFD 8BF0 89C4 6817 AA02 3810 0822"; } ];
};
leiserfg = {
email = "leiserfg@gmail.com";
github = "leiserfg";
githubId = 2947276;
name = "Leiser Fernández Gallo";
};
leixb = {
email = "abone9999+nixpkgs@gmail.com";
matrix = "@leix_b:matrix.org";
@ -13359,6 +13529,12 @@
githubId = 54590679;
name = "Liam Murphy";
};
Liamolucko = {
name = "Liam Murphy";
email = "liampm32@gmail.com";
github = "Liamolucko";
githubId = 43807659;
};
liarokapisv = {
email = "liarokapis.v@gmail.com";
github = "liarokapisv";
@ -14250,6 +14426,12 @@
}
];
};
mahyarmirrashed = {
email = "mah.mirr@gmail.com";
github = "mahyarmirrashed";
githubId = 59240843;
name = "Mahyar Mirrashed";
};
majesticmullet = {
email = "hoccthomas@gmail.com.au";
github = "MajesticMullet";
@ -14408,6 +14590,13 @@
githubId = 30194994;
name = "Felix Nilles";
};
marcin-serwin = {
name = "Marcin Serwin";
github = "marcin-serwin";
githubId = 12128106;
email = "marcin@serwin.dev";
keys = [ { fingerprint = "F311 FA15 1A66 1875 0C4D A88D 82F5 C70C DC49 FD1D"; } ];
};
marcovergueira = {
email = "vergueira.marco@gmail.com";
github = "marcovergueira";
@ -16299,6 +16488,12 @@
githubId = 6783654;
name = "Nadrieril Feneanar";
};
naelstrof = {
email = "naelstrof@gmail.com";
github = "naelstrof";
githubId = 1131571;
name = "naelstrof";
};
nagisa = {
name = "Simonas Kazlauskas";
email = "nixpkgs@kazlauskas.me";
@ -16431,6 +16626,13 @@
githubId = 56316606;
name = "Amneesh Singh";
};
naufik = {
email = "naufal@naufik.net";
github = "naufik";
githubId = 8577904;
name = "Naufal Fikri";
keys = [ { fingerprint = "1575 D651 E31EC 6117A CF0AA C1A3B 8BBC A515 8835"; } ];
};
naxdy = {
name = "Naxdy";
email = "naxdy@naxdy.org";
@ -16439,11 +16641,6 @@
githubId = 4532582;
keys = [ { fingerprint = "BDEA AB07 909D B96F 4106 85F1 CC15 0758 46BC E91B"; } ];
};
nayeko = {
name = "nayeko";
github = "nayeko";
githubId = 196556004;
};
nazarewk = {
name = "Krzysztof Nazarewski";
email = "nixpkgs@kdn.im";
@ -17633,6 +17830,12 @@
githubId = 34910574;
keys = [ { fingerprint = "D055 8A23 3947 B7A0 F966 B07F 0B41 0348 9833 7273"; } ];
};
Oops418 = {
email = "oooopsxxx@gmail.com";
github = "Oops418";
name = "Oops418";
githubId = 93655215;
};
oosquare = {
name = "Justin Chen";
email = "oosquare@outlook.com";
@ -18669,6 +18872,12 @@
github = "pladypus";
githubId = 56337621;
};
plamper = {
name = "Felix Plamper";
email = "felix.plamper@tuta.io";
github = "plamper";
githubId = 59016721;
};
plchldr = {
email = "mail@oddco.de";
github = "plchldr";
@ -18824,6 +19033,12 @@
githubId = 1829032;
name = "Paul Hendry";
};
polyfloyd = {
email = "floyd@polyfloyd.net";
github = "polyfloyd";
githubId = 4839878;
name = "polyfloyd";
};
polygon = {
email = "polygon@wh2.tu-dresden.de";
name = "Polygon";
@ -18848,6 +19063,12 @@
githubId = 4201956;
name = "pongo1231";
};
poopsicles = {
name = "Fumnanya";
email = "fmowete@outlook.com";
github = "poopsicles";
githubId = 87488715;
};
PopeRigby = {
name = "PopeRigby";
github = "poperigby";
@ -19020,6 +19241,11 @@
githubId = 74465;
name = "James Fargher";
};
programmerlexi = {
name = "programmerlexi";
github = "programmerlexi";
githubId = 60185691;
};
progrm_jarvis = {
email = "mrjarviscraft+nix@gmail.com";
github = "JarvisCraft";
@ -19913,12 +20139,6 @@
githubId = 22803888;
name = "Lu Hongxu";
};
rexim = {
email = "reximkut@gmail.com";
github = "rexim";
githubId = 165283;
name = "Alexey Kutepov";
};
rexxDigital = {
email = "joellarssonpriv@gmail.com";
github = "rexxDigital";
@ -20014,6 +20234,12 @@
githubId = 10631029;
name = "Richard Ipsum";
};
richiejp = {
email = "io@richiejp.com";
github = "richiejp";
githubId = 988098;
name = "Richard Palethorpe";
};
rick68 = {
email = "rick68@gmail.com";
github = "rick68";
@ -20399,6 +20625,12 @@
githubId = 19699320;
keys = [ { fingerprint = "FD5D F7A8 85BB 378A 0157 5356 B09C 4220 3566 9AF8"; } ];
};
RossSmyth = {
name = "Ross Smyth";
matrix = "@rosssmyth:matrix.org";
github = "RossSmyth";
githubId = 18294397;
};
rostan-t = {
name = "Rostan Tabet";
email = "rostan.tabet@gmail.com";
@ -21214,6 +21446,11 @@
githubId = 19472270;
name = "Sebastian";
};
sebaguardian = {
name = "Sebaguardian";
github = "Sebaguardian";
githubId = 68247013;
};
sebastianblunt = {
name = "Sebastian Blunt";
email = "nix@sebastianblunt.com";
@ -21303,6 +21540,12 @@
githubId = 33031;
name = "Greg Pfeil";
};
semtexerror = {
email = "github@spampert.com";
github = "SemtexError";
githubId = 8776314;
name = "Robin";
};
sengaya = {
email = "tlo@sengaya.de";
github = "sengaya";
@ -21435,6 +21678,12 @@
githubId = 1151264;
name = "Sebastian Graf";
};
sguimmara = {
email = "fair.lid2365@fastmail.com";
github = "sguimmara";
githubId = 5512096;
name = "Sébastien Guimmara";
};
shackra = {
name = "Jorge Javier Araya Navarro";
email = "jorge@esavara.cr";
@ -21789,6 +22038,11 @@
githubId = 91412114;
keys = [ { fingerprint = "C1DA A551 B422 7A6F 3FD9 6B3A 467B 7D12 9EA7 3AC9"; } ];
};
silvanshade = {
github = "silvanshade";
githubId = 11022302;
name = "silvanshade";
};
Silver-Golden = {
name = "Brendan Golden";
email = "github+nixpkgs@brendan.ie";
@ -22044,6 +22298,12 @@
githubId = 4477729;
name = "Sergey Mironov";
};
smissingham = {
email = "sean@missingham.com";
github = "smissingham";
githubId = 9065495;
name = "Sean Missingham";
};
smitop = {
name = "Smitty van Bodegom";
email = "me@smitop.com";
@ -22148,7 +22408,7 @@
name = "sodiboo";
github = "sodiboo";
githubId = 37938646;
matrix = "@sodiboo:arcticfoxes.net";
matrix = "@sodiboo:gaysex.cloud";
};
softinio = {
email = "code@softinio.com";
@ -22307,6 +22567,13 @@
githubId = 47164123;
name = "Spoonbaker";
};
sportshead = {
email = "me@sportshead.dev";
github = "sportshead";
githubId = 32637656;
name = "sportshead";
keys = [ { fingerprint = "A6B6 D031 782E BDF7 631A 8E7E A874 DB2C BFD3 CFD0"; } ];
};
sprock = {
email = "rmason@mun.ca";
github = "sprock";
@ -22880,6 +23147,12 @@
githubId = 203195;
name = "Szczyp";
};
szkiba = {
email = "iszkiba@gmail.com";
github = "szkiba";
githubId = 16244553;
name = "Iván Szkiba";
};
szlend = {
email = "pub.nix@zlender.si";
github = "szlend";
@ -23344,6 +23617,12 @@
githubId = 7060816;
name = "Thao-Tran Le-Phuong";
};
thardin = {
email = "th020394@gmail.com";
github = "Tyler-Hardin";
githubId = 5404976;
name = "Tyler Hardin";
};
thblt = {
name = "Thibault Polge";
email = "thibault@thb.lt";
@ -23618,6 +23897,12 @@
githubId = 678511;
name = "Thomas Mader";
};
thornoar = {
email = "r.a.maksimovich@gmail.com";
github = "thornoar";
githubId = 84677666;
name = "Roman Maksimovich";
};
thornycrackers = {
email = "codyfh@gmail.com";
github = "thornycrackers";
@ -23785,6 +24070,12 @@
matrix = "@titaniumtown:envs.net";
keys = [ { fingerprint = "D15E 4754 FE1A EDA1 5A6D 4702 9AB2 8AC1 0ECE 533D"; } ];
};
tjkeller = {
email = "tjk@tjkeller.xyz";
github = "tjkeller-xyz";
githubId = 36288711;
name = "Tim Keller";
};
tjni = {
email = "43ngvg@masqt.com";
matrix = "@tni:matrix.org";
@ -26301,6 +26592,11 @@
github = "zfnmxt";
githubId = 37446532;
};
zh4ngx = {
github = "zh4ngx";
githubId = 1329212;
name = "Andy Zhang";
};
zhaofengli = {
email = "hello@zhaofeng.li";
matrix = "@zhaofeng:zhaofeng.li";


@ -16,6 +16,7 @@
keep-going ? null,
commit ? null,
skip-prompt ? null,
order ? null,
}:
let
@ -217,6 +218,18 @@ let
to skip prompt:
--argstr skip-prompt true
By default, the updater will update the packages in arbitrary order. Alternatively, you can force a specific order based on the packages' dependency relations:
- Reverse topological order (e.g. {"gnome-text-editor", "gimp"}, {"gtk3", "gtk4"}, {"glib"}) is useful when you want to check out each commit one by one to build each package individually, but some of the packages to be updated would cause a mass rebuild for the others. Of course, this requires that none of the updated dependents require a new version of the dependency.
--argstr order reverse-topological
- Topological order (e.g. {"glib"}, {"gtk3", "gtk4"}, {"gnome-text-editor", "gimp"}) is useful when the updated dependents require a new version of the updated dependency.
--argstr order topological
Note that sorting requires instantiating each package and then querying the Nix store for requisites, so it will be pretty slow with a large number of packages.
'';
# Transform a matched package into an object for update.py.
@ -241,7 +254,8 @@ let
lib.optional (max-workers != null) "--max-workers=${max-workers}"
++ lib.optional (keep-going == "true") "--keep-going"
++ lib.optional (commit == "true") "--commit"
++ lib.optional (skip-prompt == "true") "--skip-prompt";
++ lib.optional (skip-prompt == "true") "--skip-prompt"
++ lib.optional (order != null) "--order=${order}";
args = [ packagesJson ] ++ optionalArgs;


@ -1,5 +1,6 @@
from __future__ import annotations
from typing import Dict, Generator, List, Optional, Tuple
from graphlib import TopologicalSorter
from pathlib import Path
from typing import Any, Generator, Literal
import argparse
import asyncio
import contextlib
@ -10,17 +11,24 @@ import subprocess
import sys
import tempfile
Order = Literal["arbitrary", "reverse-topological", "topological"]
class CalledProcessError(Exception):
process: asyncio.subprocess.Process
stderr: Optional[bytes]
stderr: bytes | None
class UpdateFailedException(Exception):
pass
def eprint(*args, **kwargs):
def eprint(*args: Any, **kwargs: Any) -> None:
print(*args, file=sys.stderr, **kwargs)
async def check_subprocess_output(*args, **kwargs):
async def check_subprocess_output(*args: str, **kwargs: Any) -> bytes:
"""
Emulate check and capture_output arguments of subprocess.run function.
"""
@ -38,26 +46,182 @@ async def check_subprocess_output(*args, **kwargs):
return stdout
async def run_update_script(nixpkgs_root: str, merge_lock: asyncio.Lock, temp_dir: Optional[Tuple[str, str]], package: Dict, keep_going: bool):
worktree: Optional[str] = None
update_script_command = package['updateScript']
async def nix_instantiate(attr_path: str) -> Path:
out = await check_subprocess_output(
"nix-instantiate",
"-A",
attr_path,
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
drv = out.decode("utf-8").strip().split("!", 1)[0]
return Path(drv)
async def nix_query_requisites(drv: Path) -> list[Path]:
requisites = await check_subprocess_output(
"nix-store",
"--query",
"--requisites",
str(drv),
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
drv_str = str(drv)
return [
Path(requisite)
for requisite in requisites.decode("utf-8").splitlines()
# Avoid self-loops.
if requisite != drv_str
]
async def attr_instantiation_worker(
semaphore: asyncio.Semaphore,
attr_path: str,
) -> tuple[Path, str]:
async with semaphore:
eprint(f"Instantiating {attr_path}")
return (await nix_instantiate(attr_path), attr_path)
async def requisites_worker(
semaphore: asyncio.Semaphore,
drv: Path,
) -> tuple[Path, list[Path]]:
async with semaphore:
eprint(f"Obtaining requisites for {drv}")
return (drv, await nix_query_requisites(drv))
def requisites_to_attrs(
drv_attr_paths: dict[Path, str],
requisites: list[Path],
) -> set[str]:
"""
Converts a set of requisite `.drv`s to a set of attribute paths.
Derivations that do not correspond to any of the packages we want to update will be discarded.
"""
return {
drv_attr_paths[requisite]
for requisite in requisites
if requisite in drv_attr_paths
}
def reverse_edges(graph: dict[str, set[str]]) -> dict[str, set[str]]:
"""
Flips the edges of a directed graph.
"""
reversed_graph: dict[str, set[str]] = {}
for dependent, dependencies in graph.items():
for dependency in dependencies:
reversed_graph.setdefault(dependency, set()).add(dependent)
return reversed_graph
def get_independent_sorter(
packages: list[dict],
) -> TopologicalSorter[str]:
"""
Returns a sorter which treats all packages as independent,
which will allow them to be updated in parallel.
"""
attr_deps: dict[str, set[str]] = {
package["attrPath"]: set() for package in packages
}
sorter = TopologicalSorter(attr_deps)
sorter.prepare()
return sorter
async def get_topological_sorter(
max_workers: int,
packages: list[dict],
reverse_order: bool,
) -> tuple[TopologicalSorter[str], list[dict]]:
"""
Returns a sorter which returns packages in topological or reverse topological order,
which will ensure a package is updated before or after its dependencies, respectively.
"""
semaphore = asyncio.Semaphore(max_workers)
drv_attr_paths = dict(
await asyncio.gather(
*(
attr_instantiation_worker(semaphore, package["attrPath"])
for package in packages
)
)
)
drv_requisites = await asyncio.gather(
*(requisites_worker(semaphore, drv) for drv in drv_attr_paths.keys())
)
attr_deps = {
drv_attr_paths[drv]: requisites_to_attrs(drv_attr_paths, requisites)
for drv, requisites in drv_requisites
}
if reverse_order:
attr_deps = reverse_edges(attr_deps)
# Adjust package order based on the topological one
ordered = list(TopologicalSorter(attr_deps).static_order())
packages = sorted(packages, key=lambda package: ordered.index(package["attrPath"]))
sorter = TopologicalSorter(attr_deps)
sorter.prepare()
return sorter, packages
async def run_update_script(
nixpkgs_root: str,
merge_lock: asyncio.Lock,
temp_dir: tuple[str, str] | None,
package: dict,
keep_going: bool,
) -> None:
worktree: str | None = None
update_script_command = package["updateScript"]
if temp_dir is not None:
worktree, _branch = temp_dir
# Ensure the worktree is clean before update.
await check_subprocess_output('git', 'reset', '--hard', '--quiet', 'HEAD', cwd=worktree)
await check_subprocess_output(
"git",
"reset",
"--hard",
"--quiet",
"HEAD",
cwd=worktree,
)
# Update scripts can use $(dirname $0) to get their location but we want to run
# their clones in the git worktree, not in the main nixpkgs repo.
update_script_command = map(lambda arg: re.sub(r'^{0}'.format(re.escape(nixpkgs_root)), worktree, arg), update_script_command)
update_script_command = map(
lambda arg: re.sub(r"^{0}".format(re.escape(nixpkgs_root)), worktree, arg),
update_script_command,
)
eprint(f" - {package['name']}: UPDATING ...")
try:
update_info = await check_subprocess_output(
'env',
"env",
f"UPDATE_NIX_NAME={package['name']}",
f"UPDATE_NIX_PNAME={package['pname']}",
f"UPDATE_NIX_OLD_VERSION={package['oldVersion']}",
@ -69,50 +233,77 @@ async def run_update_script(nixpkgs_root: str, merge_lock: asyncio.Lock, temp_di
)
await merge_changes(merge_lock, package, update_info, temp_dir)
except KeyboardInterrupt as e:
eprint('Cancelling…')
eprint("Cancelling…")
raise asyncio.exceptions.CancelledError()
except CalledProcessError as e:
eprint(f" - {package['name']}: ERROR")
eprint()
eprint(f"--- SHOWING ERROR LOG FOR {package['name']} ----------------------")
eprint()
eprint(e.stderr.decode('utf-8'))
with open(f"{package['pname']}.log", 'wb') as logfile:
logfile.write(e.stderr)
eprint()
eprint(f"--- SHOWING ERROR LOG FOR {package['name']} ----------------------")
if e.stderr is not None:
eprint()
eprint(
f"--- SHOWING ERROR LOG FOR {package['name']} ----------------------"
)
eprint()
eprint(e.stderr.decode("utf-8"))
with open(f"{package['pname']}.log", "wb") as logfile:
logfile.write(e.stderr)
eprint()
eprint(
f"--- SHOWING ERROR LOG FOR {package['name']} ----------------------"
)
if not keep_going:
raise UpdateFailedException(f"The update script for {package['name']} failed with exit code {e.process.returncode}")
raise UpdateFailedException(
f"The update script for {package['name']} failed with exit code {e.process.returncode}"
)
@contextlib.contextmanager
def make_worktree() -> Generator[Tuple[str, str], None, None]:
def make_worktree() -> Generator[tuple[str, str], None, None]:
with tempfile.TemporaryDirectory() as wt:
branch_name = f'update-{os.path.basename(wt)}'
target_directory = f'{wt}/nixpkgs'
branch_name = f"update-{os.path.basename(wt)}"
target_directory = f"{wt}/nixpkgs"
subprocess.run(['git', 'worktree', 'add', '-b', branch_name, target_directory])
subprocess.run(["git", "worktree", "add", "-b", branch_name, target_directory])
try:
yield (target_directory, branch_name)
finally:
subprocess.run(['git', 'worktree', 'remove', '--force', target_directory])
subprocess.run(['git', 'branch', '-D', branch_name])
subprocess.run(["git", "worktree", "remove", "--force", target_directory])
subprocess.run(["git", "branch", "-D", branch_name])
async def commit_changes(name: str, merge_lock: asyncio.Lock, worktree: str, branch: str, changes: List[Dict]) -> None:
async def commit_changes(
name: str,
merge_lock: asyncio.Lock,
worktree: str,
branch: str,
changes: list[dict],
) -> None:
for change in changes:
# Git can only handle a single index operation at a time
async with merge_lock:
await check_subprocess_output('git', 'add', *change['files'], cwd=worktree)
commit_message = '{attrPath}: {oldVersion} -> {newVersion}'.format(**change)
if 'commitMessage' in change:
commit_message = change['commitMessage']
elif 'commitBody' in change:
commit_message = commit_message + '\n\n' + change['commitBody']
await check_subprocess_output('git', 'commit', '--quiet', '-m', commit_message, cwd=worktree)
await check_subprocess_output('git', 'cherry-pick', branch)
await check_subprocess_output("git", "add", *change["files"], cwd=worktree)
commit_message = "{attrPath}: {oldVersion} -> {newVersion}".format(**change)
if "commitMessage" in change:
commit_message = change["commitMessage"]
elif "commitBody" in change:
commit_message = commit_message + "\n\n" + change["commitBody"]
await check_subprocess_output(
"git",
"commit",
"--quiet",
"-m",
commit_message,
cwd=worktree,
)
await check_subprocess_output("git", "cherry-pick", branch)
async def check_changes(package: Dict, worktree: str, update_info: str):
if 'commit' in package['supportedFeatures']:
async def check_changes(
package: dict,
worktree: str,
update_info: bytes,
) -> list[dict]:
if "commit" in package["supportedFeatures"]:
changes = json.loads(update_info)
else:
changes = [{}]
@ -120,133 +311,289 @@ async def check_changes(package: Dict, worktree: str, update_info: str):
# Try to fill in missing attributes when there is just a single change.
if len(changes) == 1:
# Dynamic data from updater take precedence over static data from passthru.updateScript.
if 'attrPath' not in changes[0]:
if "attrPath" not in changes[0]:
# update.nix is always passing attrPath
changes[0]['attrPath'] = package['attrPath']
changes[0]["attrPath"] = package["attrPath"]
if 'oldVersion' not in changes[0]:
if "oldVersion" not in changes[0]:
# update.nix is always passing oldVersion
changes[0]['oldVersion'] = package['oldVersion']
changes[0]["oldVersion"] = package["oldVersion"]
if 'newVersion' not in changes[0]:
attr_path = changes[0]['attrPath']
obtain_new_version_output = await check_subprocess_output('nix-instantiate', '--expr', f'with import ./. {{}}; lib.getVersion {attr_path}', '--eval', '--strict', '--json', stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE, cwd=worktree)
changes[0]['newVersion'] = json.loads(obtain_new_version_output.decode('utf-8'))
if "newVersion" not in changes[0]:
attr_path = changes[0]["attrPath"]
obtain_new_version_output = await check_subprocess_output(
"nix-instantiate",
"--expr",
f"with import ./. {{}}; lib.getVersion {attr_path}",
"--eval",
"--strict",
"--json",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
cwd=worktree,
)
changes[0]["newVersion"] = json.loads(
obtain_new_version_output.decode("utf-8")
)
if 'files' not in changes[0]:
changed_files_output = await check_subprocess_output('git', 'diff', '--name-only', 'HEAD', stdout=asyncio.subprocess.PIPE, cwd=worktree)
if "files" not in changes[0]:
changed_files_output = await check_subprocess_output(
"git",
"diff",
"--name-only",
"HEAD",
stdout=asyncio.subprocess.PIPE,
cwd=worktree,
)
changed_files = changed_files_output.splitlines()
changes[0]['files'] = changed_files
changes[0]["files"] = changed_files
if len(changed_files) == 0:
return []
return changes
async def merge_changes(merge_lock: asyncio.Lock, package: Dict, update_info: str, temp_dir: Optional[Tuple[str, str]]) -> None:
async def merge_changes(
merge_lock: asyncio.Lock,
package: dict,
update_info: bytes,
temp_dir: tuple[str, str] | None,
) -> None:
if temp_dir is not None:
worktree, branch = temp_dir
changes = await check_changes(package, worktree, update_info)
if len(changes) > 0:
await commit_changes(package['name'], merge_lock, worktree, branch, changes)
await commit_changes(package["name"], merge_lock, worktree, branch, changes)
else:
eprint(f" - {package['name']}: DONE, no changes.")
else:
eprint(f" - {package['name']}: DONE.")
async def updater(nixpkgs_root: str, temp_dir: Optional[Tuple[str, str]], merge_lock: asyncio.Lock, packages_to_update: asyncio.Queue[Optional[Dict]], keep_going: bool, commit: bool):
async def updater(
nixpkgs_root: str,
temp_dir: tuple[str, str] | None,
merge_lock: asyncio.Lock,
packages_to_update: asyncio.Queue[dict | None],
keep_going: bool,
commit: bool,
) -> None:
while True:
package = await packages_to_update.get()
if package is None:
# A sentinel received, we are done.
return
if not ('commit' in package['supportedFeatures'] or 'attrPath' in package):
if not ("commit" in package["supportedFeatures"] or "attrPath" in package):
temp_dir = None
await run_update_script(nixpkgs_root, merge_lock, temp_dir, package, keep_going)
async def start_updates(max_workers: int, keep_going: bool, commit: bool, packages: List[Dict]):
packages_to_update.task_done()
async def populate_queue(
attr_packages: dict[str, dict],
sorter: TopologicalSorter[str],
packages_to_update: asyncio.Queue[dict | None],
num_workers: int,
) -> None:
"""
Keeps populating the queue with packages that can be updated
according to ordering requirements. If topological order
is used, the packages will appear in waves, as packages with
no dependencies are processed and removed from the sorter.
With `order="arbitrary"` (the default), all packages will be enqueued simultaneously.
"""
# Fill up the update queue.
while sorter.is_active():
ready_packages = list(sorter.get_ready())
eprint(f"Enqueuing group of {len(ready_packages)} packages")
for package in ready_packages:
await packages_to_update.put(attr_packages[package])
await packages_to_update.join()
sorter.done(*ready_packages)
# Add sentinels, one for each worker.
# A worker will terminate when it gets a sentinel from the queue.
for i in range(num_workers):
await packages_to_update.put(None)
async def start_updates(
max_workers: int,
keep_going: bool,
commit: bool,
attr_packages: dict[str, dict],
sorter: TopologicalSorter[str],
) -> None:
merge_lock = asyncio.Lock()
packages_to_update: asyncio.Queue[Optional[Dict]] = asyncio.Queue()
packages_to_update: asyncio.Queue[dict | None] = asyncio.Queue()
with contextlib.ExitStack() as stack:
temp_dirs: List[Optional[Tuple[str, str]]] = []
temp_dirs: list[tuple[str, str] | None] = []
# Do not create more workers than there are packages.
num_workers = min(max_workers, len(packages))
num_workers = min(max_workers, len(attr_packages))
nixpkgs_root_output = await check_subprocess_output('git', 'rev-parse', '--show-toplevel', stdout=asyncio.subprocess.PIPE)
nixpkgs_root = nixpkgs_root_output.decode('utf-8').strip()
nixpkgs_root_output = await check_subprocess_output(
"git",
"rev-parse",
"--show-toplevel",
stdout=asyncio.subprocess.PIPE,
)
nixpkgs_root = nixpkgs_root_output.decode("utf-8").strip()
# Set up temporary directories when using auto-commit.
for i in range(num_workers):
temp_dir = stack.enter_context(make_worktree()) if commit else None
temp_dirs.append(temp_dir)
# Fill up an update queue,
for package in packages:
await packages_to_update.put(package)
# Add sentinels, one for each worker.
# A workers will terminate when it gets sentinel from the queue.
for i in range(num_workers):
await packages_to_update.put(None)
queue_task = populate_queue(
attr_packages,
sorter,
packages_to_update,
num_workers,
)
# Prepare updater workers for each temp_dir directory.
# At most `num_workers` instances of `run_update_script` will be running at one time.
updaters = asyncio.gather(*[updater(nixpkgs_root, temp_dir, merge_lock, packages_to_update, keep_going, commit) for temp_dir in temp_dirs])
updater_tasks = [
updater(
nixpkgs_root,
temp_dir,
merge_lock,
packages_to_update,
keep_going,
commit,
)
for temp_dir in temp_dirs
]
tasks = asyncio.gather(
*updater_tasks,
queue_task,
)
try:
# Start updater workers.
await updaters
await tasks
except asyncio.exceptions.CancelledError:
# When one worker is cancelled, cancel the others too.
updaters.cancel()
tasks.cancel()
except UpdateFailedException as e:
# When one worker fails, cancel the others, as this exception is only thrown when keep_going is false.
updaters.cancel()
tasks.cancel()
eprint(e)
sys.exit(1)
def main(max_workers: int, keep_going: bool, commit: bool, packages_path: str, skip_prompt: bool) -> None:
async def main(
max_workers: int,
keep_going: bool,
commit: bool,
packages_path: str,
skip_prompt: bool,
order: Order,
) -> None:
with open(packages_path) as f:
packages = json.load(f)
if order != "arbitrary":
eprint("Sorting packages…")
reverse_order = order == "reverse-topological"
sorter, packages = await get_topological_sorter(
max_workers,
packages,
reverse_order,
)
else:
sorter = get_independent_sorter(packages)
attr_packages = {package["attrPath"]: package for package in packages}
eprint()
eprint('Going to be running update for following packages:')
eprint("Going to be running update for following packages:")
for package in packages:
eprint(f" - {package['name']}")
eprint()
confirm = '' if skip_prompt else input('Press Enter key to continue...')
confirm = "" if skip_prompt else input("Press Enter key to continue...")
if confirm == '':
if confirm == "":
eprint()
eprint('Running update for:')
eprint("Running update for:")
asyncio.run(start_updates(max_workers, keep_going, commit, packages))
await start_updates(max_workers, keep_going, commit, attr_packages, sorter)
eprint()
eprint('Packages updated!')
eprint("Packages updated!")
sys.exit()
else:
eprint('Aborting!')
eprint("Aborting!")
sys.exit(130)
parser = argparse.ArgumentParser(description='Update packages')
parser.add_argument('--max-workers', '-j', dest='max_workers', type=int, help='Number of updates to run concurrently', nargs='?', default=4)
parser.add_argument('--keep-going', '-k', dest='keep_going', action='store_true', help='Do not stop after first failure')
parser.add_argument('--commit', '-c', dest='commit', action='store_true', help='Commit the changes')
parser.add_argument('packages', help='JSON file containing the list of package names and their update scripts')
parser.add_argument('--skip-prompt', '-s', dest='skip_prompt', action='store_true', help='Do not stop for prompts')
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Update packages")
parser.add_argument(
"--max-workers",
"-j",
dest="max_workers",
type=int,
help="Number of updates to run concurrently",
nargs="?",
default=4,
)
parser.add_argument(
"--keep-going",
"-k",
dest="keep_going",
action="store_true",
help="Do not stop after first failure",
)
parser.add_argument(
"--commit",
"-c",
dest="commit",
action="store_true",
help="Commit the changes",
)
parser.add_argument(
"--order",
dest="order",
default="arbitrary",
choices=["arbitrary", "reverse-topological", "topological"],
help="Sort the packages based on dependency relation",
)
parser.add_argument(
"packages",
help="JSON file containing the list of package names and their update scripts",
)
parser.add_argument(
"--skip-prompt",
"-s",
dest="skip_prompt",
action="store_true",
help="Do not stop for prompts",
)
if __name__ == "__main__":
args = parser.parse_args()
try:
main(args.max_workers, args.keep_going, args.commit, args.packages, args.skip_prompt)
asyncio.run(
main(
args.max_workers,
args.keep_going,
args.commit,
args.packages,
args.skip_prompt,
args.order,
)
)
except KeyboardInterrupt as e:
# Lets cancel outside of the main loop too.
sys.exit(130)


@ -58,6 +58,16 @@ with lib.maintainers;
enableFeatureFreezePing = true;
};
apm = {
scope = "Team for packages maintained by employees of Akademie für Pflegeberufe und Management GmbH.";
shortName = "apm employees";
# Edits to this list should only be done by an already existing member.
members = [
wolfgangwalther
DutchGerman
];
};
bazel = {
members = [
mboes
@ -434,6 +444,7 @@ with lib.maintainers;
members = [
globin
krav
leona
talyz
yayayayaka
];
@ -515,6 +526,7 @@ with lib.maintainers;
home-assistant = {
members = [
dotlambda
fab
hexa
];
@ -537,7 +549,10 @@ with lib.maintainers;
};
infisical = {
members = [ akhilmhdh ];
members = [
akhilmhdh
mahyarmirrashed
];
scope = "Maintain Infisical";
shortName = "Infisical";
};
@ -1003,8 +1018,9 @@ with lib.maintainers;
rocm = {
members = [
Madouura
Flakebi
GZGavinZhao
LunNova
mschwaig
];
githubTeams = [ "rocm-maintainers" ];


@ -64,15 +64,14 @@ enables OpenCL support:
### Intel {#sec-gpu-accel-opencl-intel}
[Intel Gen8 and later
GPUs](https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#Gen8)
are supported by the Intel NEO OpenCL runtime that is provided by the
intel-compute-runtime package. The proprietary Intel OpenCL runtime, in
the intel-ocl package, is an alternative for Gen7 GPUs.
[Intel Gen12 and later GPUs](https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#Gen12)
are supported by the Intel NEO OpenCL runtime that is provided by the `intel-compute-runtime` package.
The previous generations (8, 9 and 11) have been moved to the `intel-compute-runtime-legacy1` package.
The proprietary Intel OpenCL runtime, in the `intel-ocl` package, is an alternative for Gen7 GPUs.
The intel-compute-runtime or intel-ocl package can be added to
Both `intel-compute-runtime` packages, as well as the `intel-ocl` package, can be added to
[](#opt-hardware.graphics.extraPackages)
to enable OpenCL support. For example, for Gen8 and later GPUs, the following
to enable OpenCL support. For example, for Gen12 and later GPUs, the following
configuration can be used:
```nix


@ -114,6 +114,56 @@ using lightdm for a user `alice`:
}
```
## Running X without a display manager {#sec-x11-startx}
It is possible to avoid a display manager entirely and start the X server
manually from a virtual terminal. Add to your configuration:
```nix
{
services.xserver.displayManager.startx = {
enable = true;
generateScript = true;
};
}
```
Then you can start the X server with the `startx` command.
The second option will generate a base `xinitrc` script that will run your
window manager and set up the systemd user session.
You can extend the script using the
[extraCommands](#opt-services.xserver.displayManager.startx.extraCommands)
option, for example:
```nix
{
services.xserver.displayManager.startx = {
generateScript = true;
extraCommands = ''
xrdb -load .Xresources
xsetroot -solid '#666661'
xsetroot -cursor_name left_ptr
'';
};
}
```
or, alternatively, you can write your own from scratch in `~/.xinitrc`.
In this case, remember you're responsible for starting the window manager, for
example:
```shell
sxhkd &
bspwm &
```
and if you have enabled some systemd user services, you will probably want to
add these lines too:
```shell
# import required env variables from the current shell
systemctl --user import-environment DISPLAY XDG_SESSION_ID
# start all graphical user services
systemctl --user start nixos-fake-graphical-session.target
# start the user dbus daemon
dbus-daemon --session --address="unix:path=/run/user/$(id -u)/bus" &
```
## Intel Graphics drivers {#sec-x11--graphics-cards-intel}
The default and recommended driver for Intel Graphics in X.org is `modesetting`
@ -123,6 +173,24 @@ setting](https://en.wikipedia.org/wiki/Mode_setting) (KMS) mechanism, it
supports Glamor (2D graphics acceleration via OpenGL) and is actively
maintained; however, it may perform worse in some cases (such as on old chipsets).
There is a second driver, `intel` (provided by the xf86-video-intel package),
specific to older Intel iGPUs from generation 2 to 9. It is not recommended by
most distributions: it lacks several modern features (for example, it doesn't
support Glamor) and the package hasn't been officially updated since 2015.
Third generation and older iGPUs (15-20+ years old) are not supported by the
`modesetting` driver (X will crash upon startup). Thus, the `intel` driver is
required for these chipsets.
Otherwise, the results vary depending on the hardware, so you may have to try
both drivers. Use the option
[](#opt-services.xserver.videoDrivers)
to set one. The recommended configuration for modern systems is:
```nix
{
services.xserver.videoDrivers = [ "modesetting" ];
}
```
::: {.note}
The `modesetting` driver doesn't currently provide a `TearFree` option (this
will become available in an upcoming X.org release), so without using a
@ -130,20 +198,22 @@ compositor (for example, see [](#opt-services.picom.enable)) you will
experience screen tearing.
:::
There also used to be a second driver, `intel` (provided by the
xf86-video-intel package), specific to older Intel iGPUs from generation 2 to
9.
This driver hasn't been maintained in years and was removed in NixOS 24.11
after it stopped working. If your chipset is too old to be supported by
`modesetting` and you have no other choice, you may try an unsupported NixOS version
(reportedly working up to NixOS 24.05) and set
If you experience screen tearing no matter what, this configuration was
reported to resolve the issue:
```nix
{
services.xserver.videoDrivers = [ "intel" ];
services.xserver.deviceSection = ''
Option "DRI" "2"
Option "TearFree" "true"
'';
}
```
Note that this will likely reduce performance compared to
`modesetting` or `intel` with DRI 3 (the default).
## Proprietary NVIDIA drivers {#sec-x11-graphics-cards-nvidia}
NVIDIA provides a proprietary driver for its graphics cards that has

View file

@ -5,13 +5,12 @@ configuration of your machine. Whenever you've [changed
something](#ch-configuration) in that file, you should do
```ShellSession
$ nixos-rebuild switch --use-remote-sudo
# nixos-rebuild switch
```
to build the new configuration as your current user, and as the root user,
make it the default configuration for booting. `switch` will also try to
realise the configuration in the running system (e.g., by restarting system
services).
to build the new configuration, make it the default configuration for
booting, and try to realise the configuration in the running system
(e.g., by restarting system services).
::: {.warning}
This command doesn't start/stop [user services](#opt-systemd.user.services)
@ -20,23 +19,14 @@ user services.
:::
::: {.warning}
Applying a configuration is an action that must be done by the root user, so the
`switch`, `boot` and `test` commands should be run with the `--use-remote-sudo`
flag. Despite its odd name, this flag runs the activation script with elevated
permissions, regardless of whether or not the target system is remote, without
affecting the other stages of the `nixos-rebuild` call. This allows unprivileged
users to rebuild the system and only elevate their permissions when necessary.
Alternatively, one can run the whole command as root while preserving user
environment variables by prefixing the command with `sudo -E`. However, this
method may create root-owned files in `$HOME/.cache` if Nix decides to use the
cache during evaluation.
These commands must be executed as root, so you should either run them
from a root shell or prefix them with `sudo -i`.
:::
You can also do
```ShellSession
$ nixos-rebuild test --use-remote-sudo
# nixos-rebuild test
```
to build the configuration and switch the running system to it, but
@ -47,7 +37,7 @@ configuration.
There is also
```ShellSession
$ nixos-rebuild boot --use-remote-sudo
# nixos-rebuild boot
```
to build the configuration and make it the boot default, but not switch
@ -57,7 +47,7 @@ You can make your configuration show up in a different submenu of the
GRUB 2 boot screen by giving it a different *profile name*, e.g.
```ShellSession
$ nixos-rebuild switch -p test --use-remote-sudo
# nixos-rebuild switch -p test
```
which causes the new configuration (and previous ones created using
@ -68,7 +58,7 @@ configurations.
A repl, or read-eval-print loop, is also available. You can inspect your configuration and use the Nix language with
```ShellSession
$ nixos-rebuild repl
# nixos-rebuild repl
```
Your configuration is loaded into the `config` variable. Use tab for autocompletion, use the `:r` command to reload the configuration files. See `:?` or [`nix repl` in the Nix manual](https://nixos.org/manual/nix/stable/command-ref/new-cli/nix3-repl.html) to learn more.

View file

@ -272,6 +272,9 @@
"sec-x11-auto-login": [
"index.html#sec-x11-auto-login"
],
"sec-x11-startx": [
"index.html#sec-x11-startx"
],
"sec-x11--graphics-cards-intel": [
"index.html#sec-x11--graphics-cards-intel"
],

View file

@ -236,6 +236,9 @@
- The `intel` driver for the X server (`services.xserver.videoDrivers = [ "intel" ]`) is no longer functional due to incompatibilities with the latest Mesa version.
All users are strongly encouraged to switch to the generic `modesetting` driver (the default one) whenever possible; for more information, see the manual chapter on [Intel Graphics](#sec-x11--graphics-cards-intel) and issue [#342763](https://github.com/NixOS/nixpkgs/issues/342763).
- The `intel-compute-runtime` package dropped support for older GPUs, and only supports 12th Gen and newer from now on.
Intel GPUs from Gen 8, 9 and 11 need to use the `intel-compute-runtime-legacy1` package in `hardware.graphics.extraPackages`.
- The `(buildPythonPackage { ... }).override` and `(buildPythonPackage { ... }).overrideDerivation` attributes are now deprecated and removed in favour of `overridePythonAttrs` and `lib.overrideDerivation`.
This change does not affect the override interface of most Python packages, as [`<pkg>.override`](https://nixos.org/manual/nixpkgs/unstable/#sec-pkg-override) provided by `callPackage` shadows such a locally-defined `override` attribute.
The `<pkg>.overrideDerivation` attribute of Python packages called with `callPackage` will also remain available after this change.

View file

@ -18,6 +18,11 @@
- LLVM has been updated from LLVM 16 (on Darwin) and LLVM 18 (on other platforms) to LLVM 19.
This introduces some backwards-incompatible changes; see the [upstream release notes](https://releases.llvm.org/) for details.
- Emacs has been updated to 30.1.
This introduces some backwards-incompatible changes; see the NEWS file for details.
NEWS can be viewed from Emacs by typing `C-h n`, or by clicking `Help->Emacs News` from the menu bar.
It can also be browsed [online](https://git.savannah.gnu.org/cgit/emacs.git/tree/etc/NEWS?h=emacs-30).
- The default PHP version has been updated to 8.3.
- The default Erlang OTP version has been updated to 27.
@ -37,6 +42,8 @@
- `nixos-option` has been rewritten as a Nix expression called by a simple bash script. This lowers our maintenance burden, makes eval errors less verbose, and adds support for flake-based configurations, descending into `attrsOf` and `listOf` submodule options, and `--show-trace`.
- The `intel` video driver for X.org (from the xf86-video-intel package), which was previously removed because it was non-functional, has been fixed and re-introduced.
- The Mattermost module ({option}`services.mattermost`) and packages (`mattermost` and `mmctl`) have been substantially updated:
- {option}`services.mattermost.preferNixConfig` now defaults to true if you advance {option}`system.stateVersion` to 25.05. This means that if you have {option}`services.mattermost.mutableConfig` set, NixOS will override your settings to those that you define in the module. It is recommended to leave this at the default, even if you used a mutable config before, because it will ensure that your Mattermost data directories are correct. If you moved your data directories, you may want to review the module changes before upgrading.
- Mattermost telemetry reporting is now disabled by default, though security update notifications are enabled. Look at {option}`services.mattermost.telemetry` for options to control this behavior.
@ -101,6 +108,8 @@
- [Schroot](https://codeberg.org/shelter/reschroot), a lightweight virtualisation tool. Securely enter a chroot and run a command or login shell. Available as [programs.schroot](#opt-programs.schroot.enable).
- [Firezone](https://firezone.dev), an enterprise-ready zero-trust access platform built on WireGuard. This includes the server stack as [services.firezone.server.enable](#opt-services.firezone.server.enable), a TURN/STUN relay service as [services.firezone.relay.enable](#opt-services.firezone.relay.enable), a gateway service as [services.firezone.gateway.enable](#opt-services.firezone.gateway.enable), a headless client as [services.firezone.headless-client.enable](#opt-services.firezone.headless-client.enable) and a GUI client as [services.firezone.gui-client.enable](#opt-services.firezone.gui-client.enable).
- [crab-hole](https://github.com/LuckyTurtleDev/crab-hole), a cross platform Pi-hole clone written in Rust using hickory-dns/trust-dns. Available as [services.crab-hole](#opt-services.crab-hole.enable).
- [zwave-js-ui](https://zwave-js.github.io/zwave-js-ui/), a full featured Z-Wave Control Panel and MQTT Gateway. Available as [services.zwave-js-ui](#opt-services.zwave-js-ui.enable).
@ -135,6 +144,8 @@
- [victorialogs](https://docs.victoriametrics.com/victorialogs/), a log database from VictoriaMetrics. Available as [services.victorialogs](#opt-services.victorialogs.enable).
- [gokapi](https://github.com/Forceu/Gokapi), a lightweight self-hosted Firefox Send alternative without public upload; AWS S3 is supported. Available as [services.gokapi](options.html#opt-services.gokapi.enable).
- [nostr-rs-relay](https://git.sr.ht/~gheartsfield/nostr-rs-relay/), a nostr relay written in Rust. Available as [services.nostr-rs-relay](options.html#opt-services.nostr-rs-relay.enable).
- [Prometheus Node Cert Exporter](https://github.com/amimof/node-cert-exporter), a Prometheus exporter to check for SSL certificate expiry. Available under [services.prometheus.exporters.node-cert](#opt-services.prometheus.exporters.node-cert.enable).
@ -171,6 +182,8 @@
- [echoip](https://github.com/mpolden/echoip), a simple service for looking up your IP address. Available as [services.echoip](#opt-services.echoip.enable).
- [LiteLLM](https://github.com/BerriAI/litellm), an LLM gateway providing model access, fallbacks and spend tracking across 100+ LLMs, all in the OpenAI format. Available as [services.litellm](#opt-services.litellm.enable).
- [Buffyboard](https://gitlab.postmarketos.org/postmarketOS/buffybox/-/tree/master/buffyboard), a framebuffer on-screen keyboard. Available as [services.buffyboard](option.html#opt-services.buffyboard).
- [KanBoard](https://github.com/kanboard/kanboard), a project management tool that focuses on the Kanban methodology. Available as [services.kanboard](#opt-services.kanboard.enable).
@ -183,6 +196,12 @@
- [Rebuilderd](https://github.com/kpcyrd/rebuilderd), independent verification of binary packages (Reproducible Builds). Available as [services.rebuilderd](#opt-services.rebuilderd.enable).
- [Limine](https://github.com/limine-bootloader/limine), a modern, advanced, portable, multiprotocol bootloader and boot manager. Available as [boot.loader.limine](#opt-boot.loader.limine.enable).
- [Orthanc](https://orthanc.uclouvain.be/), a lightweight, RESTful DICOM server for healthcare and medical research. Available as [services.orthanc](#opt-services.orthanc.enable).
- [Pareto Security](https://paretosecurity.com/) is an alternative to corporate compliance solutions for companies that care about security but know it doesn't have to be invasive. Available as [services.paretosecurity](#opt-services.paretosecurity.enable).
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Backward Incompatibilities {#sec-release-25.05-incompatibilities}
@ -193,6 +212,8 @@
- The `wtf` package has been renamed to `wtfutil`.
- The udev rules of the libjaylink package require users to be in the `jlink` instead of `plugdev` group now, since the `plugdev` group is very uncommon for NixOS. Alternatively, access is granted to seat sessions.
- `python3Packages.beancount` was updated to 3.1.0. Previous major version remains available as `python3Packages.beancount_2`.
- `binwalk` was updated to 3.1.0, which has been rewritten in Rust. The Python module is no longer available.
@ -206,9 +227,6 @@
- `pkgs.nextcloud28` has been removed since it's out of support upstream.
- Emacs Lisp build helpers, such as `emacs.pkgs.melpaBuild`, now enable `__structuredAttrs` by default.
Environment variables have to be passed via the `env` attribute.
- `buildGoModule` now passes environment variables via the `env` attribute. `CGO_ENABLED` should now be specified with `env.CGO_ENABLED` when passing to buildGoModule. Direct specification of `CGO_ENABLED` is now redirected by a compatibility layer with a warning, but will become an error in future releases.
Go-related environment variables previously shadowed by `buildGoModule` now result in errors when specified directly. Such variables include `GOOS` and `GOARCH`.
@ -231,6 +249,8 @@
- `pytestFlagsArray` and `unittestFlagsArray` are kept for compatibility purposes. They continue to be Bash-expanded before being concatenated. This compatibility layer will be removed in future releases.
- The `haka` package and module have been removed because the package was broken and unmaintained for 9 years.
- `strawberry` has been updated to 1.2, which drops support for the VLC backend and Qt 5. The `strawberry-qt5` package
and `withGstreamer`/`withVlc` override options have been removed due to this.
@ -256,6 +276,10 @@
- `kmonad` is now hardened by default using common `systemd` settings.
If KMonad is used to execute shell commands, hardening may make some of them fail. In that case, you can disable hardening using the {option}`services.kmonad.keyboards.<name>.enableHardening` option.
- `isd` was updated from 0.2.0 to 0.5.1, the new version may crash with a previously generated config, try moving or deleting `~/.config/isd/schema.json`.
- `uwsgi` no longer supports Python 2 plugins.
- `asusd` has been upgraded to version 6, which supports multiple aura devices. To account for this, the single `auraConfig` configuration option has been replaced with `auraConfigs`, which is an attribute set of config options for each device. The config files may now also be specified as either source files or text strings; to account for this, you will need to specify that `text` is used for your existing configs, e.g.:
```diff
-services.asusd.asusdConfig = '''file contents'''
@ -370,6 +394,10 @@
[v1.8.0](https://github.com/jtroo/kanata/releases/tag/v1.8.0)
for more information.
- `authelia` version 4.39.0 has made changes to the default claims for ID Tokens to mirror the standard claims from the specification.
This change may affect some clients in unexpected ways, so manual intervention may be required.
Read the [release notes](https://www.authelia.com/blog/4.39-release-notes/), along with [the guide](https://www.authelia.com/integration/openid-connect/openid-connect-1.0-claims/#restore-functionality-prior-to-claims-parameter) to work around issues that may be encountered.
- `ags` was updated to v2, which is just a CLI for Astal now. Components are available as a separate package set, `astal.*`.
If you want to use v1, it is available as the `ags_1` package.
@ -435,6 +463,8 @@
- `docker_24` has been removed, as it was EOL with vulnerabilities since June 08, 2024.
- Emacs 28 and 29 have been removed.
- `containerd` has been updated to v2, which contains breaking changes. See the [containerd
2.0](https://github.com/containerd/containerd/blob/main/docs/containerd-2.0.md) documentation for more
details.
@ -473,6 +503,8 @@
- `security.apparmor.policies.<name>.enforce` and `security.apparmor.policies.<name>.enable` were removed.
Configuring the state of apparmor policies must now be done using `security.apparmor.policies.<name>.state` tristate option.
- `services.graylog.package` now defaults to `graylog-6_0` as previous default `graylog-5_1` is EOL and therefore removed.
Check the migration guides on [5.1→5.2](https://go2docs.graylog.org/5-2/upgrading_graylog/upgrading_to_graylog_5.2.x.htm) and [5.2→6.0](https://go2docs.graylog.org/6-0/upgrading_graylog/upgrading_to_graylog_6.0.x.html) for breaking changes.
- The notmuch vim plugin now lives in a separate output of the `notmuch`
package. Installing `notmuch` will no longer bring in the notmuch vim package,
@ -495,6 +527,8 @@
- `programs.clash-verge.tunMode` was deprecated and removed because now service mode is necessary to start program. Without `programs.clash-verge.enable`, clash-verge-rev will refuse to start.
- `confluent-cli` was updated from 3.60.0 to 4.16.0, which includes several breaking changes as detailed in [Confluent's release notes](https://docs.confluent.io/confluent-cli/current/release-notes.html).
- `siduck76-st` has been renamed to `st-snazzy`, like the project's [flake](https://github.com/siduck/st/blob/main/flake.nix).
- `python3Packages.jax` now directly depends on `python3Packages.jaxlib`.
@ -547,16 +581,26 @@
- `services.avahi.ipv6` now defaults to true.
- In the `services.xserver.displayManager.startx` module, two new options [generateScript](#opt-services.xserver.displayManager.startx.generateScript) and [extraCommands](#opt-services.xserver.displayManager.startx.extraCommands) have been added to declaratively configure the `.xinitrc` script.
- All services that require a root certificate bundle now use the value of a new read-only option, `security.pki.caBundle`.
- hddfancontrol has been updated to major release 2. See the [migration guide](https://github.com/desbma/hddfancontrol/tree/master?tab=readme-ov-file#migrating-from-v1x), as there are breaking changes.
- `services.cloudflared` now uses a dynamic user, and its `user` and `group` options have been removed. If the user or group is still necessary, they can be created manually.
- The Home Assistant module has new options {option}`services.home-assistant.blueprints.automation`, {option}`services.home-assistant.blueprints.script`, and {option}`services.home-assistant.blueprints.template` that allow for the declarative installation of [blueprints](https://www.home-assistant.io/docs/blueprint/) into the appropriate configuration directories.
- For the Matrix homeserver Synapse, we now follow the upstream recommendation to enable jemalloc as the memory allocator by default.
- The `dovecot` package no longer hard-codes the path to the module directory.
- `services.dovecot2.modules` has been removed; additional Dovecot modules now need to be loaded via `environment.systemPackages`.
- `services.kmonad` now creates a deterministic symlink (in `/dev/input/by-id/`) to each of KMonad's virtual devices.
- `services.searx` now supports configuration of the favicons cache and other options available in SearXNG's `favicons.toml` file.
- `services.gitea` now supports CAPTCHA usage through the `services.gitea.captcha` variable.
- `services.soft-serve` now restarts upon config change.
@ -571,6 +615,8 @@
- New options for the declarative configuration of the user space part of ALSA have been introduced under [hardware.alsa](options.html#opt-hardware.alsa.enable), including setting the default capture and playback device, defining sound card aliases and volume controls.
Note: these are intended for users who are not running a sound server like PulseAudio or PipeWire, but use ALSA as their only sound system.
- `services.k3s` now provides the `autoDeployCharts` option, which allows Helm charts to be deployed automatically via the k3s Helm controller.
- Caddy can now be built with plugins by using `caddy.withPlugins`, a `passthru` function that accepts an attribute set as a parameter. The `plugins` argument represents a list of Caddy plugins, with each Caddy plugin being a versioned module. The `hash` argument represents the `vendorHash` of the resulting Caddy source code with the plugins added.
Example:
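The example itself falls outside this diff hunk; below is a hedged sketch of what such a call could look like, based only on the description above (the plugin module path, version, and hash are placeholders, not values from the original example):

```nix
caddy.withPlugins {
  # Each plugin is a versioned Go module path (placeholder name and version).
  plugins = [ "github.com/caddy-dns/cloudflare@v0.7.0" ];
  # vendorHash of the resulting Caddy source with the plugins added (placeholder).
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```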
@ -599,6 +645,10 @@
[is removed](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c01f664e4ca210823b7594b50669bbd9b0a3c3b0)
in Linux 6.13.
- `authelia` version 4.39.0 has made some changes which deprecate older configurations.
They are still expected to work until the future 5.0.0 release, but will generate warnings in the logs.
Read the [release notes](https://www.authelia.com/blog/4.39-release-notes/) for human readable summaries of the changes.
- `programs.fzf.keybindings` now supports the fish shell.
- `gerbera` now has wavpack support.

View file

@ -45,6 +45,19 @@ let
isNixAtLeast = versionAtLeast (getVersion nixPackage);
defaultSystemFeatures = [
"nixos-test"
"benchmark"
"big-parallel"
"kvm"
] ++ optionals (pkgs.stdenv.hostPlatform ? gcc.arch) (
# a builder can run code for `gcc.arch` and inferior architectures
[ "gccarch-${pkgs.stdenv.hostPlatform.gcc.arch}" ]
++ map (x: "gccarch-${x}") (
systems.architectures.inferiors.${pkgs.stdenv.hostPlatform.gcc.arch} or [ ]
)
);
legacyConfMappings = {
useSandbox = "sandbox";
buildCores = "cores";
@ -315,20 +328,9 @@ in
system-features = mkOption {
type = types.listOf types.str;
default =
[
"nixos-test"
"benchmark"
"big-parallel"
"kvm"
]
++ optionals (pkgs.stdenv.hostPlatform ? gcc.arch) (
# a builder can run code for `gcc.arch` and inferior architectures
[ "gccarch-${pkgs.stdenv.hostPlatform.gcc.arch}" ]
++ map (x: "gccarch-${x}") (
systems.architectures.inferiors.${pkgs.stdenv.hostPlatform.gcc.arch} or [ ]
)
);
# We expose system-features here and in config below.
# This allows users to access the default value via `options.nix.settings.system-features`
default = defaultSystemFeatures;
defaultText = literalExpression ''[ "nixos-test" "benchmark" "big-parallel" "kvm" "gccarch-<arch>" ]'';
description = ''
The set of features supported by the machine. Derivations
@ -385,6 +387,7 @@ in
trusted-public-keys = [ "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=" ];
trusted-users = [ "root" ];
substituters = mkAfter [ "https://cache.nixos.org/" ];
system-features = defaultSystemFeatures;
};
};
}

View file

@ -76,12 +76,18 @@
export TERM=$TERM
'';
security.sudo.extraConfig = lib.mkIf config.security.sudo.keepTerminfo ''
# Keep terminfo database for root and %wheel.
Defaults:root,%wheel env_keep+=TERMINFO_DIRS
Defaults:root,%wheel env_keep+=TERMINFO
'';
security =
let
extraConfig = ''
# Keep terminfo database for root and %wheel.
Defaults:root,%wheel env_keep+=TERMINFO_DIRS
Defaults:root,%wheel env_keep+=TERMINFO
'';
in
lib.mkIf config.security.sudo.keepTerminfo {
sudo = { inherit extraConfig; };
sudo-rs = { inherit extraConfig; };
};
};
}

View file

@ -376,4 +376,4 @@ foreach my $u (values %usersOut) {
updateFile("/etc/subuid", join("\n", @subUids) . "\n");
updateFile("/etc/subgid", join("\n", @subGids) . "\n");
updateFile($subUidMapFile, encode_json($subUidMap) . "\n");
updateFile($subUidMapFile, to_json($subUidMap) . "\n");

View file

@ -951,6 +951,21 @@ in {
}
] ++ flatten (flip mapAttrsToList cfg.users (name: user:
[
(
let
# Things fail in various ways with especially non-ascii usernames.
# This regex mirrors the one from shadow's is_valid_name:
# https://github.com/shadow-maint/shadow/blob/bee77ffc291dfed2a133496db465eaa55e2b0fec/lib/chkname.c#L68
# though without the trailing $, because Samba 3 got its last release
# over 10 years ago and is not in Nixpkgs anymore,
# while later versions don't appear to require anything like that.
nameRegex = "[a-zA-Z0-9_.][a-zA-Z0-9_.-]*";
in
{
assertion = builtins.match nameRegex user.name != null;
message = "The username \"${user.name}\" is not valid, it does not match the regex \"${nameRegex}\".";
}
)
{
assertion = (user.hashedPassword != null)
-> (match ".*:.*" user.hashedPassword == null);

View file

@ -120,7 +120,7 @@ in
{ "r" = {}; };
};
hardware.graphics.package = lib.mkDefault pkgs.mesa.drivers;
hardware.graphics.package32 = lib.mkDefault pkgs.pkgsi686Linux.mesa.drivers;
hardware.graphics.package = lib.mkDefault pkgs.mesa;
hardware.graphics.package32 = lib.mkDefault pkgs.pkgsi686Linux.mesa;
};
}

View file

@ -0,0 +1,27 @@
{
config,
lib,
pkgs,
...
}:
let
cfg = config.hardware.libjaylink;
in
{
options.hardware.libjaylink = {
enable = lib.mkEnableOption ''
udev rules for devices supported by libjaylink.
Add users to the `jlink` group in order to grant
them access
'';
package = lib.mkPackageOption pkgs "libjaylink" { };
};
config = lib.mkIf cfg.enable {
users.groups.jlink = { };
services.udev.packages = [ cfg.package ];
};
meta.maintainers = with lib.maintainers; [ felixsinger ];
}

View file

@ -75,6 +75,7 @@ in
config = mkIf cfg.enable {
systemd.services.hddtemp = {
description = "HDD/SSD temperature";
documentation = [ "man:hddtemp(8)" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "forking";

View file

@ -47,7 +47,10 @@ let
let
module = ../. + "/installer/sd-card/sd-image-${pkgs.targetPlatform.qemuArch}.nix";
in
if builtins.pathExists module then [ module ] else throw "The module ${module} does not exist.";
if builtins.pathExists module then
[ module ]
else
throw "The module ${toString module} does not exist.";
};
kexec = ../installer/netboot/netboot-minimal.nix;
};

View file

@ -24,6 +24,7 @@
# compression tools
, zstd
, xz
, zeekstd
# arguments
, name
@ -89,11 +90,13 @@ let
compressionPkg = {
"zstd" = zstd;
"xz" = xz;
"zstd-seekable" = zeekstd;
}."${compression.algorithm}";
compressionCommand = {
"zstd" = "zstd --no-progress --threads=$NIX_BUILD_CORES -${toString compression.level}";
"xz" = "xz --keep --verbose --threads=$NIX_BUILD_CORES -${toString compression.level}";
"zstd-seekable" = "zeekstd --quiet --max-frame-size 2M --compression-level ${toString compression.level}";
}."${compression.algorithm}";
in
stdenvNoCC.mkDerivation (finalAttrs:

View file

@ -48,7 +48,7 @@ let
};
repartConfig = lib.mkOption {
type = with lib.types; attrsOf (oneOf [ str int bool ]);
type = with lib.types; attrsOf (oneOf [ str int bool (listOf str) ]);
example = {
Type = "home";
SizeMinBytes = "512M";
@ -113,7 +113,7 @@ in
enable = lib.mkEnableOption "Image compression";
algorithm = lib.mkOption {
type = lib.types.enum [ "zstd" "xz" ];
type = lib.types.enum [ "zstd" "xz" "zstd-seekable" ];
default = "zstd";
description = "Compression algorithm";
};
@ -274,6 +274,7 @@ in
{
"zstd" = ".zst";
"xz" = ".xz";
"zstd-seekable" = ".zst";
}."${cfg.compression.algorithm}";
makeClosure = paths: pkgs.closureInfo { rootPaths = paths; };
@ -298,6 +299,7 @@ in
level = lib.mkOptionDefault {
"zstd" = 3;
"xz" = 3;
"zstd-seekable" = 3;
}."${cfg.compression.algorithm}";
};
@ -311,7 +313,7 @@ in
(lib.mapAttrsToList (_n: v: v.repartConfig.Format or null) cfg.partitions);
format = pkgs.formats.ini { };
format = pkgs.formats.ini { listsAsDuplicateKeys = true; };
definitionsDirectory = utils.systemdUtils.lib.definitions
"repart.d"

View file

@ -1,11 +1,11 @@
# This module defines a NixOS installation CD that contains GNOME.
{ pkgs, ... }:
{ lib, pkgs, ... }:
{
imports = [ ./installation-cd-graphical-calamares.nix ];
isoImage.edition = "gnome";
isoImage.edition = lib.mkDefault "gnome";
services.xserver.desktopManager.gnome = {
# Add Firefox and other tools useful for installation to the launcher

View file

@ -1,12 +1,12 @@
# This module defines a NixOS installation CD that contains X11 and
# Plasma 5.
{ pkgs, ... }:
{ lib, pkgs, ... }:
{
imports = [ ./installation-cd-graphical-calamares.nix ];
isoImage.edition = "plasma5";
isoImage.edition = lib.mkDefault "plasma5";
services.xserver.desktopManager.plasma5 = {
enable = true;

View file

@ -1,11 +1,11 @@
# This module defines a NixOS installation CD that contains Plasma 6.
{ pkgs, ... }:
{ lib, pkgs, ... }:
{
imports = [ ./installation-cd-graphical-calamares.nix ];
isoImage.edition = "plasma6";
isoImage.edition = lib.mkDefault "plasma6";
services.desktopManager.plasma6.enable = true;

View file

@ -0,0 +1,52 @@
# This configuration uses a specialisation for each desired boot
# configuration, and a common parent configuration for all of them
# that's hidden. This allows users to import this module alongside
# their own and get the full array of specialisations inheriting the
# users' settings.
{ lib, ... }:
{
imports = [ ./installation-cd-base.nix ];
isoImage.edition = "graphical";
isoImage.showConfiguration = lib.mkDefault false;
specialisation = {
gnome.configuration =
{ config, ... }:
{
imports = [ ./installation-cd-graphical-calamares-gnome.nix ];
isoImage.showConfiguration = true;
isoImage.configurationName = "GNOME (Linux LTS)";
};
gnome_latest_kernel.configuration =
{ config, ... }:
{
imports = [
./installation-cd-graphical-calamares-gnome.nix
./latest-kernel.nix
];
isoImage.showConfiguration = true;
isoImage.configurationName = "GNOME (Linux ${config.boot.kernelPackages.kernel.version})";
};
plasma.configuration =
{ config, ... }:
{
imports = [ ./installation-cd-graphical-calamares-plasma6.nix ];
isoImage.showConfiguration = true;
isoImage.configurationName = "Plasma (Linux LTS)";
};
plasma_latest_kernel.configuration =
{ config, ... }:
{
imports = [
./installation-cd-graphical-calamares-plasma6.nix
./latest-kernel.nix
];
isoImage.showConfiguration = true;
isoImage.configurationName = "Plasma (Linux ${config.boot.kernelPackages.kernel.version})";
};
};
}

View file

@ -1,11 +1,11 @@
# This module defines a NixOS installation CD that contains GNOME.
{ ... }:
{ lib, ... }:
{
imports = [ ./installation-cd-graphical-base.nix ];
isoImage.edition = "gnome";
isoImage.edition = lib.mkDefault "gnome";
services.xserver.desktopManager.gnome = {
# Add Firefox and other tools useful for installation to the launcher

View file

@ -1,12 +1,12 @@
# This module defines a NixOS installation CD that contains X11 and
# Plasma 5.
{ pkgs, ... }:
{ lib, pkgs, ... }:
{
imports = [ ./installation-cd-graphical-base.nix ];
isoImage.edition = "plasma5";
isoImage.edition = lib.mkDefault "plasma5";
services.xserver.desktopManager.plasma5 = {
enable = true;

View file

@ -0,0 +1,14 @@
{ lib, ... }:
{
imports = [ ./installation-cd-minimal.nix ];
isoImage.configurationName = lib.mkDefault "(Linux LTS)";
specialisation.latest_kernel.configuration =
{ config, ... }:
{
imports = [ ./latest-kernel.nix ];
isoImage.configurationName = "(Linux ${config.boot.kernelPackages.kernel.version})";
};
}

File diff suppressed because it is too large

View file

@ -0,0 +1,9 @@
{ lib, pkgs, ... }:
{
boot.kernelPackages = pkgs.linuxPackages_latest;
boot.supportedFilesystems.zfs = false;
environment.etc."nixos-generate-config.conf".text = ''
[Defaults]
Kernel=latest
'';
}

View file

@ -13,6 +13,7 @@
.Op Fl -root Ar root
.Op Fl -dir Ar dir
.Op Fl -flake
.Op Fl -kernel Ar <lts|latest>
.
.
.
@ -66,6 +67,9 @@ instead of
.Pa /etc/nixos Ns
\&.
.
.It Fl -kernel Ar <lts|latest>
Set the kernel in the generated configuration file.
.
.It Fl -force
Overwrite
.Pa /etc/nixos/configuration.nix

View file

@ -7,6 +7,7 @@ use File::Path;
use File::Basename;
use File::Slurp;
use File::stat;
use Config::IniFiles;
umask(0022);
@ -37,6 +38,18 @@ my $force = 0;
my $noFilesystems = 0;
my $flake = 0;
my $showHardwareConfig = 0;
my $kernel = "lts";
if (-e "/etc/nixos-generate-config.conf") {
my $cfg = new Config::IniFiles -file => "/etc/nixos-generate-config.conf";
$outDir = $cfg->val("Defaults", "Directory") // $outDir;
if (defined $cfg->val("Defaults", "RootDirectory")) {
$rootDir = $cfg->val("Defaults", "RootDirectory");
$rootDir =~ s/\/*$//; # remove trailing slashes
$rootDir = File::Spec->rel2abs($rootDir); # resolve absolute path
}
$kernel = $cfg->val("Defaults", "Kernel") // $kernel;
}
for (my $n = 0; $n < scalar @ARGV; $n++) {
my $arg = $ARGV[$n];
@ -68,11 +81,17 @@ for (my $n = 0; $n < scalar @ARGV; $n++) {
elsif ($arg eq "--flake") {
$flake = 1;
}
elsif ($arg eq "--kernel") {
$n++;
$kernel = $ARGV[$n];
die "$0: --kernel requires an argument\n" unless defined $kernel;
}
else {
die "$0: unrecognized argument $arg\n";
}
}
die "$0: invalid kernel: '$kernel'" unless $kernel eq "lts" || $kernel eq "latest";
my @attrs = ();
my @kernelModules = ();
@ -709,6 +728,14 @@ EOF
EOF
}
if ($kernel eq "latest") {
$bootLoaderConfig .= <<EOF;
# Use latest kernel.
boot.kernelPackages = pkgs.linuxPackages_latest;
EOF
}
my $networkingDhcpConfig = generateNetworkingDhcpConfig();
my $xserverConfig = generateXserverConfig();

View file

@ -1,25 +1,41 @@
# This module generates nixos-install, nixos-rebuild,
# nixos-generate-config, etc.
{ config, lib, pkgs, options, ... }:
{
config,
lib,
pkgs,
options,
...
}:
let
makeProg = args: pkgs.replaceVarsWith (args // {
dir = "bin";
isExecutable = true;
nativeBuildInputs = [
pkgs.installShellFiles
];
postInstall = ''
installManPage ${args.manPage}
'';
});
makeProg =
args:
pkgs.replaceVarsWith (
args
// {
dir = "bin";
isExecutable = true;
nativeBuildInputs = [
pkgs.installShellFiles
];
postInstall = ''
installManPage ${args.manPage}
'';
}
);
nixos-generate-config = makeProg {
name = "nixos-generate-config";
src = ./nixos-generate-config.pl;
replacements = {
perl = "${pkgs.perl.withPackages (p: [ p.FileSlurp ])}/bin/perl";
perl = "${
pkgs.perl.withPackages (p: [
p.FileSlurp
p.ConfigIniFiles
])
}/bin/perl";
hostPlatformSystem = pkgs.stdenv.hostPlatform.system;
detectvirt = "${config.systemd.package}/bin/systemd-detect-virt";
btrfs = "${pkgs.btrfs-progs}/bin/btrfs";
@ -36,13 +52,17 @@ let
inherit (pkgs) runtimeShell;
inherit (config.system.nixos) version codeName revision;
inherit (config.system) configurationRevision;
json = builtins.toJSON ({
nixosVersion = config.system.nixos.version;
} // lib.optionalAttrs (config.system.nixos.revision != null) {
nixpkgsRevision = config.system.nixos.revision;
} // lib.optionalAttrs (config.system.configurationRevision != null) {
configurationRevision = config.system.configurationRevision;
});
json = builtins.toJSON (
{
nixosVersion = config.system.nixos.version;
}
// lib.optionalAttrs (config.system.nixos.revision != null) {
nixpkgsRevision = config.system.nixos.revision;
}
// lib.optionalAttrs (config.system.configurationRevision != null) {
configurationRevision = config.system.configurationRevision;
}
);
};
manPage = ./manpages/nixos-version.8;
};
@ -266,26 +286,46 @@ in
'';
};
imports = let
mkToolModule = { name, package ? pkgs.${name} }: { config, ... }: {
options.system.tools.${name}.enable = lib.mkEnableOption "${name} script" // {
default = config.nix.enable && ! config.system.disableInstallerTools;
defaultText = "config.nix.enable && !config.system.disableInstallerTools";
};
imports =
let
mkToolModule =
{
name,
package ? pkgs.${name},
}:
{ config, ... }:
{
options.system.tools.${name}.enable = lib.mkEnableOption "${name} script" // {
default = config.nix.enable && !config.system.disableInstallerTools;
defaultText = "config.nix.enable && !config.system.disableInstallerTools";
};
config = lib.mkIf config.system.tools.${name}.enable {
environment.systemPackages = [ package ];
};
};
in [
(mkToolModule { name = "nixos-build-vms"; })
(mkToolModule { name = "nixos-enter"; })
(mkToolModule { name = "nixos-generate-config"; package = config.system.build.nixos-generate-config; })
(mkToolModule { name = "nixos-install"; package = config.system.build.nixos-install; })
(mkToolModule { name = "nixos-option"; })
(mkToolModule { name = "nixos-rebuild"; package = config.system.build.nixos-rebuild; })
(mkToolModule { name = "nixos-version"; package = nixos-version; })
];
config = lib.mkIf config.system.tools.${name}.enable {
environment.systemPackages = [ package ];
};
};
in
[
(mkToolModule { name = "nixos-build-vms"; })
(mkToolModule { name = "nixos-enter"; })
(mkToolModule {
name = "nixos-generate-config";
package = config.system.build.nixos-generate-config;
})
(mkToolModule {
name = "nixos-install";
package = config.system.build.nixos-install;
})
(mkToolModule { name = "nixos-option"; })
(mkToolModule {
name = "nixos-rebuild";
package = config.system.build.nixos-rebuild;
})
(mkToolModule {
name = "nixos-version";
package = nixos-version;
})
];
config = {
documentation.man.man-db.skipPackages = [ nixos-version ];
@ -293,10 +333,7 @@ in
# These may be used in auxiliary scripts (ie not part of toplevel), so they are defined unconditionally.
system.build = {
inherit nixos-generate-config nixos-install;
nixos-rebuild =
if config.system.rebuild.enableNg
then nixos-rebuild-ng
else nixos-rebuild;
nixos-rebuild = if config.system.rebuild.enableNg then nixos-rebuild-ng else nixos-rebuild;
nixos-option = lib.warn "Accessing nixos-option through `config.system.build` is deprecated, use `pkgs.nixos-option` instead." pkgs.nixos-option;
nixos-enter = lib.warn "Accessing nixos-enter through `config.system.build` is deprecated, use `pkgs.nixos-enter` instead." pkgs.nixos-enter;
};

View file

@ -80,6 +80,7 @@
./hardware/ksm.nix
./hardware/ledger.nix
./hardware/libftdi.nix
./hardware/libjaylink.nix
./hardware/logitech.nix
./hardware/mcelog.nix
./hardware/network/ath-user-regd.nix
@ -824,6 +825,7 @@
./services/misc/languagetool.nix
./services/misc/leaps.nix
./services/misc/lifecycled.nix
./services/misc/litellm.nix
./services/misc/llama-cpp.nix
./services/misc/logkeys.nix
./services/misc/mame.nix
@ -847,6 +849,7 @@
./services/misc/ombi.nix
./services/misc/omnom.nix
./services/misc/open-webui.nix
./services/misc/orthanc.nix
./services/misc/osrm.nix
./services/misc/owncast.nix
./services/misc/packagekit.nix
@ -1098,6 +1101,11 @@
./services/networking/firewall.nix
./services/networking/firewall-iptables.nix
./services/networking/firewall-nftables.nix
./services/networking/firezone/gateway.nix
./services/networking/firezone/gui-client.nix
./services/networking/firezone/headless-client.nix
./services/networking/firezone/relay.nix
./services/networking/firezone/server.nix
./services/networking/flannel.nix
./services/networking/freenet.nix
./services/networking/freeradius.nix
@ -1115,6 +1123,7 @@
./services/networking/go-neb.nix
./services/networking/go-shadowsocks2.nix
./services/networking/gobgpd.nix
./services/networking/gokapi.nix
./services/networking/gvpe.nix
./services/networking/hans.nix
./services/networking/harmonia.nix
@ -1360,6 +1369,7 @@
./services/scheduling/atd.nix
./services/scheduling/cron.nix
./services/scheduling/fcron.nix
./services/scheduling/prefect.nix
./services/scheduling/scx.nix
./services/search/elasticsearch-curator.nix
./services/search/elasticsearch.nix
@ -1384,7 +1394,6 @@
./services/security/esdm.nix
./services/security/fail2ban.nix
./services/security/fprintd.nix
./services/security/haka.nix
./services/security/haveged.nix
./services/security/hockeypuck.nix
./services/security/hologram-agent.nix
@ -1398,6 +1407,7 @@
./services/security/oauth2-proxy.nix
./services/security/oauth2-proxy-nginx.nix
./services/security/opensnitch.nix
./services/security/paretosecurity.nix
./services/security/pass-secret-service.nix
./services/security/physlock.nix
./services/security/shibboleth-sp.nix
@ -1719,6 +1729,7 @@
./system/boot/loader/grub/memtest.nix
./system/boot/loader/external/external.nix
./system/boot/loader/init-script/init-script.nix
./system/boot/loader/limine/limine.nix
./system/boot/loader/loader.nix
./system/boot/loader/systemd-boot/systemd-boot.nix
./system/boot/luksroot.nix

View file

@ -1,7 +1,12 @@
# This module defines the software packages included in the "minimal"
# installation CD. It might be useful elsewhere.
{ config, lib, pkgs, ... }:
{
config,
lib,
pkgs,
...
}:
{
# Include some utilities that are useful for installing or repairing
@ -43,9 +48,19 @@
];
# Include support for various filesystems and tools to create / manipulate them.
boot.supportedFilesystems =
[ "btrfs" "cifs" "f2fs" "ntfs" "vfat" "xfs" ] ++
lib.optional (lib.meta.availableOn pkgs.stdenv.hostPlatform config.boot.zfs.package) "zfs";
boot.supportedFilesystems = lib.mkMerge [
[
"btrfs"
"cifs"
"f2fs"
"ntfs"
"vfat"
"xfs"
]
(lib.mkIf (lib.meta.availableOn pkgs.stdenv.hostPlatform config.boot.zfs.package) {
zfs = lib.mkDefault true;
})
];
# Configure host id for ZFS to work
networking.hostId = lib.mkDefault "8425e349";

View file

@ -127,8 +127,6 @@ in
system.disableInstallerTools = true;
nix.settings = {
auto-optimise-store = true;
min-free = cfg.min-free;
max-free = cfg.max-free;

View file

@ -8,7 +8,7 @@
services.userborn.enable = lib.mkDefault true;
# Random perl remnants
system.disableInstallerTools = lib.mkDefault true;
system.tools.nixos-generate-config.enable = lib.mkDefault false;
programs.less.lessopen = lib.mkDefault null;
programs.command-not-found.enable = lib.mkDefault false;
boot.enableContainers = lib.mkDefault false;
@ -20,9 +20,4 @@
# Check that the system does not contain a Nix store path that contains the
# string "perl".
system.forbiddenDependenciesRegexes = [ "perl" ];
# Re-add nixos-rebuild to the systemPackages that was removed by the
# `system.disableInstallerTools` option.
environment.systemPackages = [ pkgs.nixos-rebuild ];
}

View file

@ -13,11 +13,19 @@ in
programs.bash.enableLsColors = lib.mkEnableOption "extra colors in directory listings" // {
default = true;
};
programs.bash.lsColorsFile = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
example = lib.literalExpression "\${pkgs.dircolors-solarized}/ansi-dark";
description = "Alternative colorscheme for ls colors";
};
};
config = lib.mkIf enable {
programs.bash.promptPluginInit = ''
eval "$(${pkgs.coreutils}/bin/dircolors -b)"
eval "$(${pkgs.coreutils}/bin/dircolors -b ${
lib.optionalString (config.programs.bash.lsColorsFile != null) config.programs.bash.lsColorsFile
})"
'';
};
}

View file

@ -34,6 +34,9 @@ in
enableFishIntegration = enabledOption ''
Fish integration
'';
enableXonshIntegration = enabledOption ''
Xonsh integration
'';
direnvrcExtra = lib.mkOption {
type = lib.types.lines;
@ -94,6 +97,19 @@ in
${lib.getExe cfg.package} hook fish | source
end
'';
xonsh = lib.mkIf cfg.enableXonshIntegration {
extraPackages = ps: [ ps.xonsh.xontribs.xonsh-direnv ];
config = ''
if ${
if cfg.loadInNixShell then
"True"
else
"not any(map(lambda s: s.startswith('/nix/store'), __xonsh__.env.get('PATH')))"
}:
xontrib load direnv
'';
};
};
environment = {

View file

@ -49,6 +49,15 @@ in
'';
};
minBrightness = lib.mkOption {
type = lib.types.numbers.between 0 100;
default = 0.1;
description = ''
The minimum allowed brightness value, e.g. to keep the display
from going completely dark.
'';
};
};
};
@ -63,13 +72,14 @@ in
let
light = "${pkgs.light}/bin/light";
step = builtins.toString cfg.brightnessKeys.step;
minBrightness = builtins.toString cfg.brightnessKeys.minBrightness;
in
[
{
keys = [ 224 ];
events = [ "key" ];
# Use minimum brightness 0.1 so the display won't go totally black.
command = "${light} -N 0.1 && ${light} -U ${step}";
# -N is used to ensure that value >= minBrightness
command = "${light} -N ${minBrightness} && ${light} -U ${step}";
}
{
keys = [ 225 ];

View file

@ -59,7 +59,7 @@ let
shell:
if (shell != "fish") then
''
eval $(${getExe finalPackage} ${shell} --alias ${cfg.alias})
eval "$(${getExe finalPackage} ${shell} --alias ${cfg.alias})"
''
else
''

View file

@ -167,10 +167,15 @@ in
group = config.users.users.${config.services.greetd.settings.default_session.user}.group;
mode = "0755";
};
dataDir =
if lib.versionAtLeast (cfg.package.version) "0.2.0" then
{ "/var/lib/regreet".d = defaultConfig; }
else
{ "/var/cache/regreet".d = defaultConfig; };
in
{
"/var/log/regreet".d = defaultConfig;
"/var/cache/regreet".d = defaultConfig;
};
}
// dataDir;
};
}

View file

@ -6,6 +6,7 @@ let
cfg = config.programs.xonsh;
package = cfg.package.override { inherit (cfg) extraPackages; };
bashCompletionPath = "${cfg.bashCompletion.package}/share/bash-completion/bash_completion";
in
{
@ -49,6 +50,13 @@ in
Xontribs and extra Python packages to be available in xonsh.
'';
};
bashCompletion = {
enable = lib.mkEnableOption "bash completions for xonsh" // {
default = true;
};
package = lib.mkPackageOption pkgs "bash-completion" { };
};
};
};
@ -78,6 +86,8 @@ in
aliases['ls'] = _ls_alias
del _ls_alias
${lib.optionalString cfg.bashCompletion.enable "$BASH_COMPLETIONS = '${bashCompletionPath}'"}
${cfg.config}
'';

View file

@ -49,10 +49,5 @@ in
);
serviceConfig.Restart = "always";
};
warnings = lib.mkIf (config.services.xserver.displayManager.startx.enable) [
"xss-lock service only works if a displayManager is set; it doesn't work when services.xserver.displayManager.startx.enable = true"
];
};
}

View file

@ -292,6 +292,9 @@ in
See https://www.isc.org/blogs/isc-dhcp-eol/ for details.
Please switch to a different implementation like kea or dnsmasq.
'')
(mkRemovedOptionModule [ "services" "haka" ] ''
The corresponding package was broken and removed from nixpkgs.
'')
(mkRemovedOptionModule [ "services" "tedicross" ] ''
The corresponding package was broken and removed from nixpkgs.
'')

View file

@ -15,6 +15,7 @@
systemd.services.auditd = {
description = "Linux Audit daemon";
documentation = [ "man:auditd(8)" ];
wantedBy = [ "sysinit.target" ];
after = [
"local-fs.target"

View file

@ -128,6 +128,7 @@ in
systemd.services.isolate = {
description = "Isolate control group hierarchy daemon";
wantedBy = [ "multi-user.target" ];
documentation = [ "man:isolate(1)" ];
serviceConfig = {
Type = "notify";
ExecStart = "${isolate}/bin/isolate-cg-keeper";

View file

@ -61,16 +61,33 @@ rec {
description = "Which principal the rule applies to";
};
access = mkOption {
type = either (listOf (enum [
"add"
"cpw"
"delete"
"get"
"list"
"modify"
])) (enum [ "all" ]);
type = coercedTo str singleton (
listOf (enum [
"all"
"add"
"cpw"
"delete"
"get-keys"
"get"
"list"
"modify"
])
);
default = "all";
description = "The changes the principal is allowed to make.";
description = ''
The changes the principal is allowed to make.
:::{.important}
The "all" permission does not imply the "get-keys" permission. This
is consistent with the behavior of both MIT Kerberos and Heimdal.
:::
:::{.warning}
Value "all" is allowed as a list member only if it appears alone
or accompanied by "get-keys". Any other combination involving
"all" will raise an exception.
:::
'';
};
target = mkOption {
type = str;

View file

@ -36,7 +36,7 @@ in
defaultOptions = lib.mkOption {
type = with lib.types; listOf str;
default = [ ];
default = [ "SETENV" ];
description = ''
Options used for the default rules, granting `root` and the
`wheel` group permission to run any command as any user.

View file

@ -20,110 +20,355 @@ let
chartDir = "/var/lib/rancher/k3s/server/static/charts";
imageDir = "/var/lib/rancher/k3s/agent/images";
containerdConfigTemplateFile = "/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl";
yamlFormat = pkgs.formats.yaml { };
yamlDocSeparator = builtins.toFile "yaml-doc-separator" "\n---\n";
# Manifests need a valid YAML suffix to be respected by k3s
mkManifestTarget =
name: if (lib.hasSuffix ".yaml" name || lib.hasSuffix ".yml" name) then name else name + ".yaml";
# Produces a list containing all duplicate manifest names
duplicateManifests =
with builtins;
lib.intersectLists (attrNames cfg.autoDeployCharts) (attrNames cfg.manifests);
# Produces a list containing all duplicate chart names
duplicateCharts =
with builtins;
lib.intersectLists (attrNames cfg.autoDeployCharts) (attrNames cfg.charts);
manifestModule =
let
mkTarget =
name: if (lib.hasSuffix ".yaml" name || lib.hasSuffix ".yml" name) then name else name + ".yaml";
in
lib.types.submodule (
{
name,
config,
options,
...
}:
{
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether this manifest file should be generated.";
};
target = lib.mkOption {
type = lib.types.nonEmptyStr;
example = lib.literalExpression "manifest.yaml";
description = ''
Name of the symlink (relative to {file}`${manifestDir}`).
Defaults to the attribute name.
'';
};
content = lib.mkOption {
type = with lib.types; nullOr (either attrs (listOf attrs));
default = null;
description = ''
Content of the manifest file. A single attribute set will
generate a single document YAML file. A list of attribute sets
will generate multiple documents separated by `---` in a single
YAML file.
'';
};
source = lib.mkOption {
type = lib.types.path;
example = lib.literalExpression "./manifests/app.yaml";
description = ''
Path of the source `.yaml` file.
'';
};
};
config = {
target = lib.mkDefault (mkTarget name);
source = lib.mkIf (config.content != null) (
let
name' = "k3s-manifest-" + builtins.baseNameOf name;
docName = "k3s-manifest-doc-" + builtins.baseNameOf name;
yamlDocSeparator = builtins.toFile "yaml-doc-separator" "\n---\n";
mkYaml = name: x: (pkgs.formats.yaml { }).generate name x;
mkSource =
value:
if builtins.isList value then
pkgs.concatText name' (
lib.concatMap (x: [
yamlDocSeparator
(mkYaml docName x)
]) value
)
else
mkYaml name' value;
in
lib.mkDerivedConfig options.content mkSource
);
};
}
# Converts YAML -> JSON -> Nix
fromYaml =
path:
with builtins;
fromJSON (
readFile (
pkgs.runCommand "${path}-converted.json" { nativeBuildInputs = [ yq-go ]; } ''
yq --no-colors --output-format json ${path} > $out
''
)
);
enabledManifests = lib.filter (m: m.enable) (lib.attrValues cfg.manifests);
linkManifestEntry = m: "${pkgs.coreutils-full}/bin/ln -sfn ${m.source} ${manifestDir}/${m.target}";
linkImageEntry = image: "${pkgs.coreutils-full}/bin/ln -sfn ${image} ${imageDir}/${image.name}";
linkChartEntry =
let
mkTarget = name: if (lib.hasSuffix ".tgz" name) then name else name + ".tgz";
in
# Replace characters that are problematic in file names
cleanHelmChartName =
lib.replaceStrings
[
"/"
":"
]
[
"-"
"-"
];
# Fetch a Helm chart from a public registry. This only supports a basic Helm pull.
fetchHelm =
{
name,
repo,
version,
hash ? lib.fakeHash,
}:
pkgs.runCommand (cleanHelmChartName "${lib.removePrefix "https://" repo}-${name}-${version}.tgz")
{
inherit (lib.fetchers.normalizeHash { } { inherit hash; }) outputHash outputHashAlgo;
impureEnvVars = lib.fetchers.proxyImpureEnvVars;
nativeBuildInputs = with pkgs; [
kubernetes-helm
cacert
];
}
''
export HOME="$PWD"
helm repo add repository ${repo}
helm pull repository/${name} --version ${version}
mv ./*.tgz $out
'';
# Returns the path to a YAML manifest file
mkExtraDeployManifest =
x:
# x is a derivation that provides a YAML file
if lib.isDerivation x then
x.outPath
# x is an attribute set that needs to be converted to a YAML file
else if builtins.isAttrs x then
(yamlFormat.generate "extra-deploy-chart-manifest" x)
# assume x is a path to a YAML file
else
x;
# Generate a HelmChart custom resource.
mkHelmChartCR =
name: value:
"${pkgs.coreutils-full}/bin/ln -sfn ${value} ${chartDir}/${mkTarget (builtins.baseNameOf name)}";
let
chartValues = if (lib.isPath value.values) then fromYaml value.values else value.values;
# use JSON for values as it's a subset of YAML and understood by the k3s Helm controller
valuesContent = builtins.toJSON chartValues;
in
# merge with extraFieldDefinitions to allow setting advanced values and overwrite generated
# values
lib.recursiveUpdate {
apiVersion = "helm.cattle.io/v1";
kind = "HelmChart";
metadata = {
inherit name;
namespace = "kube-system";
};
spec = {
inherit valuesContent;
inherit (value) targetNamespace createNamespace;
chart = "https://%{KUBERNETES_API}%/static/charts/${name}.tgz";
};
} value.extraFieldDefinitions;
activateK3sContent = pkgs.writeShellScript "activate-k3s-content" ''
${lib.optionalString (
builtins.length enabledManifests > 0
) "${pkgs.coreutils-full}/bin/mkdir -p ${manifestDir}"}
${lib.optionalString (cfg.charts != { }) "${pkgs.coreutils-full}/bin/mkdir -p ${chartDir}"}
${lib.optionalString (
builtins.length cfg.images > 0
) "${pkgs.coreutils-full}/bin/mkdir -p ${imageDir}"}
# Generate a HelmChart custom resource together with extraDeploy manifests. This
# possibly generates a multi-document YAML file that the auto-deploy mechanism of k3s
# deploys.
mkAutoDeployChartManifest = name: value: {
# target is the final name of the link created for the manifest file
target = mkManifestTarget name;
inherit (value) enable package;
# source is a store path containing the complete manifest file
source = pkgs.concatText "auto-deploy-chart-${name}.yaml" (
[
(yamlFormat.generate "helm-chart-manifest-${name}.yaml" (mkHelmChartCR name value))
]
# alternate the YAML doc separator (---) and extraDeploy manifests to create
# multi-document YAMLs
++ (lib.concatMap (x: [
yamlDocSeparator
(mkExtraDeployManifest x)
]) value.extraDeploy)
);
};
${builtins.concatStringsSep "\n" (map linkManifestEntry enabledManifests)}
${builtins.concatStringsSep "\n" (lib.mapAttrsToList linkChartEntry cfg.charts)}
${builtins.concatStringsSep "\n" (map linkImageEntry cfg.images)}
autoDeployChartsModule = lib.types.submodule (
{ config, ... }:
{
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
example = false;
description = ''
Whether to enable the installation of this Helm chart. Note that setting
this option to `false` will not uninstall the chart from the cluster, if
it was previously installed. Please use the `--disable` flag or `.skip`
files to delete/disable Helm charts, as mentioned in the
[docs](https://docs.k3s.io/installation/packaged-components#disabling-manifests).
'';
};
${lib.optionalString (cfg.containerdConfigTemplate != null) ''
mkdir -p $(dirname ${containerdConfigTemplateFile})
${pkgs.coreutils-full}/bin/ln -sfn ${pkgs.writeText "config.toml.tmpl" cfg.containerdConfigTemplate} ${containerdConfigTemplateFile}
''}
'';
repo = lib.mkOption {
type = lib.types.nonEmptyStr;
example = "https://kubernetes.github.io/ingress-nginx";
description = ''
The repo of the Helm chart. Only has an effect if `package` is not set.
The Helm chart is fetched during build time and placed as a `.tgz` archive on the
filesystem.
'';
};
name = lib.mkOption {
type = lib.types.nonEmptyStr;
example = "ingress-nginx";
description = ''
The name of the Helm chart. Only has an effect if `package` is not set.
The Helm chart is fetched during build time and placed as a `.tgz` archive on the
filesystem.
'';
};
version = lib.mkOption {
type = lib.types.nonEmptyStr;
example = "4.7.0";
description = ''
The version of the Helm chart. Only has an effect if `package` is not set.
The Helm chart is fetched during build time and placed as a `.tgz` archive on the
filesystem.
'';
};
hash = lib.mkOption {
type = lib.types.str;
example = "sha256-ej+vpPNdiOoXsaj1jyRpWLisJgWo8EqX+Z5VbpSjsPA=";
description = ''
The hash of the packaged Helm chart. Only has an effect if `package` is not set.
The Helm chart is fetched during build time and placed as a `.tgz` archive on the
filesystem.
'';
};
package = lib.mkOption {
type = with lib.types; either path package;
example = lib.literalExpression "../my-helm-chart.tgz";
description = ''
The packaged Helm chart. Overwrites the options `repo`, `name`, `version`
and `hash` in case of conflicts.
'';
};
targetNamespace = lib.mkOption {
type = lib.types.nonEmptyStr;
default = "default";
example = "kube-system";
description = "The namespace in which the Helm chart gets installed.";
};
createNamespace = lib.mkOption {
type = lib.types.bool;
default = false;
example = true;
description = "Whether to create the target namespace if not present.";
};
values = lib.mkOption {
type = with lib.types; either path attrs;
default = { };
example = {
replicaCount = 3;
hostName = "my-host";
server = {
name = "nginx";
port = 80;
};
};
description = ''
Override default chart values via Nix expressions. This is equivalent to setting
values in a `values.yaml` file.
WARNING: The values (including secrets!) specified here are exposed unencrypted
in the world-readable nix store.
'';
};
extraDeploy = lib.mkOption {
type = with lib.types; listOf (either path attrs);
default = [ ];
example = lib.literalExpression ''
[
../manifests/my-extra-deployment.yaml
{
apiVersion = "v1";
kind = "Service";
metadata = {
name = "app-service";
};
spec = {
selector = {
"app.kubernetes.io/name" = "MyApp";
};
ports = [
{
name = "name-of-service-port";
protocol = "TCP";
port = 80;
targetPort = "http-web-svc";
}
];
};
}
];
'';
description = "List of extra Kubernetes manifests to deploy with this Helm chart.";
};
extraFieldDefinitions = lib.mkOption {
inherit (yamlFormat) type;
default = { };
example = {
spec = {
bootstrap = true;
helmVersion = "v2";
backOffLimit = 3;
jobImage = "custom-helm-controller:v0.0.1";
};
};
description = ''
Extra HelmChart field definitions that are merged with the rest of the HelmChart
custom resource. This can be used to set advanced fields or to overwrite
generated fields. See https://docs.k3s.io/helm#helmchart-field-definitions
for possible fields.
'';
};
};
config.package = lib.mkDefault (fetchHelm {
inherit (config)
repo
name
version
hash
;
});
}
);
manifestModule = lib.types.submodule (
{
name,
config,
options,
...
}:
{
options = {
enable = lib.mkOption {
type = lib.types.bool;
default = true;
description = "Whether this manifest file should be generated.";
};
target = lib.mkOption {
type = lib.types.nonEmptyStr;
example = "manifest.yaml";
description = ''
Name of the symlink (relative to {file}`${manifestDir}`).
Defaults to the attribute name.
'';
};
content = lib.mkOption {
type = with lib.types; nullOr (either attrs (listOf attrs));
default = null;
description = ''
Content of the manifest file. A single attribute set will
generate a single document YAML file. A list of attribute sets
will generate multiple documents separated by `---` in a single
YAML file.
'';
};
source = lib.mkOption {
type = lib.types.path;
example = lib.literalExpression "./manifests/app.yaml";
description = ''
Path of the source `.yaml` file.
'';
};
};
config = {
target = lib.mkDefault (mkManifestTarget name);
source = lib.mkIf (config.content != null) (
let
name' = "k3s-manifest-" + builtins.baseNameOf name;
docName = "k3s-manifest-doc-" + builtins.baseNameOf name;
mkSource =
value:
if builtins.isList value then
pkgs.concatText name' (
lib.concatMap (x: [
yamlDocSeparator
(yamlFormat.generate docName x)
]) value
)
else
yamlFormat.generate name' value;
in
lib.mkDerivedConfig options.content mkSource
);
};
}
);
in
{
imports = [ (removeOption [ "docker" ] "k3s docker option is no longer supported.") ];
@ -242,78 +487,80 @@ in
type = lib.types.attrsOf manifestModule;
default = { };
example = lib.literalExpression ''
deployment.source = ../manifests/deployment.yaml;
my-service = {
enable = false;
target = "app-service.yaml";
content = {
apiVersion = "v1";
kind = "Service";
metadata = {
name = "app-service";
};
spec = {
selector = {
"app.kubernetes.io/name" = "MyApp";
{
deployment.source = ../manifests/deployment.yaml;
my-service = {
enable = false;
target = "app-service.yaml";
content = {
apiVersion = "v1";
kind = "Service";
metadata = {
name = "app-service";
};
spec = {
selector = {
"app.kubernetes.io/name" = "MyApp";
};
ports = [
{
name = "name-of-service-port";
protocol = "TCP";
port = 80;
targetPort = "http-web-svc";
}
];
};
ports = [
{
name = "name-of-service-port";
protocol = "TCP";
port = 80;
targetPort = "http-web-svc";
}
];
};
}
};
};
nginx.content = [
{
apiVersion = "v1";
kind = "Pod";
metadata = {
name = "nginx";
labels = {
"app.kubernetes.io/name" = "MyApp";
nginx.content = [
{
apiVersion = "v1";
kind = "Pod";
metadata = {
name = "nginx";
labels = {
"app.kubernetes.io/name" = "MyApp";
};
};
};
spec = {
containers = [
{
name = "nginx";
image = "nginx:1.14.2";
ports = [
{
containerPort = 80;
name = "http-web-svc";
}
];
}
];
};
}
{
apiVersion = "v1";
kind = "Service";
metadata = {
name = "nginx-service";
};
spec = {
selector = {
"app.kubernetes.io/name" = "MyApp";
spec = {
containers = [
{
name = "nginx";
image = "nginx:1.14.2";
ports = [
{
containerPort = 80;
name = "http-web-svc";
}
];
}
];
};
ports = [
{
name = "name-of-service-port";
protocol = "TCP";
port = 80;
targetPort = "http-web-svc";
}
];
};
}
];
}
{
apiVersion = "v1";
kind = "Service";
metadata = {
name = "nginx-service";
};
spec = {
selector = {
"app.kubernetes.io/name" = "MyApp";
};
ports = [
{
name = "name-of-service-port";
protocol = "TCP";
port = 80;
targetPort = "http-web-svc";
}
];
};
}
];
};
'';
description = ''
Auto-deploying manifests that are linked to {file}`${manifestDir}` before k3s starts.
@ -337,10 +584,9 @@ in
Packaged Helm charts that are linked to {file}`${chartDir}` before k3s starts.
The attribute name will be used as the link target (relative to {file}`${chartDir}`).
The specified charts will only be placed on the file system and made available to the
Kubernetes APIServer from within the cluster, you may use the
[k3s Helm controller](https://docs.k3s.io/helm#using-the-helm-controller)
to deploy the charts. This option only makes sense on server nodes
(`role = server`).
Kubernetes APIServer from within the cluster. See the [](#opt-services.k3s.autoDeployCharts)
option and the [k3s Helm controller docs](https://docs.k3s.io/helm#using-the-helm-controller)
for how to deploy Helm charts. This option only makes sense on server nodes (`role = server`).
'';
};
@ -450,6 +696,53 @@ in
set the `clientConnection.kubeconfig` if you want to use `extraKubeProxyConfig`.
'';
};
autoDeployCharts = lib.mkOption {
type = lib.types.attrsOf autoDeployChartsModule;
apply = lib.mapAttrs mkAutoDeployChartManifest;
default = { };
example = lib.literalExpression ''
{
harbor = {
name = "harbor";
repo = "https://helm.goharbor.io";
version = "1.14.0";
hash = "sha256-fMP7q1MIbvzPGS9My91vbQ1d3OJMjwc+o8YE/BXZaYU=";
values = {
existingSecretAdminPassword = "harbor-admin";
expose = {
tls = {
enabled = true;
certSource = "secret";
secret.secretName = "my-tls-secret";
};
ingress = {
hosts.core = "example.com";
className = "nginx";
};
};
};
};
custom-chart = {
package = ../charts/my-chart.tgz;
values = ../values/my-values.yaml;
extraFieldDefinitions = {
spec.timeout = "60s";
};
};
}
'';
description = ''
Auto-deploying Helm charts that are installed by the k3s Helm controller. Avoid using
attribute names that are also used in the [](#opt-services.k3s.manifests) and
[](#opt-services.k3s.charts) options. Manifests with the same name override
auto-deploying charts of the same name. Similarly, charts with the same name
overwrite the Helm chart contained in an auto-deploying chart. This option only makes
sense on server nodes (`role = server`). See the
[k3s Helm documentation](https://docs.k3s.io/helm) for further information.
'';
};
};
# implementation
@ -462,6 +755,15 @@ in
++ (lib.optional (cfg.role != "server" && cfg.charts != { })
"k3s: Helm charts are only made available to the cluster on server nodes (role == server), they will be ignored by this node."
)
++ (lib.optional (cfg.role != "server" && cfg.autoDeployCharts != { })
"k3s: Auto deploying Helm charts are only installed on server nodes (role == server), they will be ignored by this node."
)
++ (lib.optional (duplicateManifests != [ ])
"k3s: The following auto deploying charts are overriden by manifests of the same name: ${toString duplicateManifests}."
)
++ (lib.optional (duplicateCharts != [ ])
"k3s: The following auto deploying charts are overriden by charts of the same name: ${toString duplicateCharts}."
)
++ (lib.optional (
cfg.disableAgent && cfg.images != [ ]
) "k3s: Images are only imported on nodes with an enabled agent, they will be ignored by this node")
@ -486,6 +788,50 @@ in
environment.systemPackages = [ config.services.k3s.package ];
# Use systemd-tmpfiles to activate k3s content
systemd.tmpfiles.settings."10-k3s" =
let
# Merge manifests with manifests generated from auto-deploying charts, keeping only enabled manifests
enabledManifests = lib.filterAttrs (_: v: v.enable) (cfg.autoDeployCharts // cfg.manifests);
# Merge charts with the charts contained in enabled auto-deploying charts
helmCharts =
(lib.concatMapAttrs (n: v: { ${n} = v.package; }) (
lib.filterAttrs (_: v: v.enable) cfg.autoDeployCharts
))
// cfg.charts;
# Make a systemd-tmpfiles rule for a manifest
mkManifestRule = manifest: {
name = "${manifestDir}/${manifest.target}";
value = {
"L+".argument = "${manifest.source}";
};
};
# Ensure that all chart targets have a .tgz suffix
mkChartTarget = name: if (lib.hasSuffix ".tgz" name) then name else name + ".tgz";
# Make a systemd-tmpfiles rule for a chart
mkChartRule = target: source: {
name = "${chartDir}/${mkChartTarget target}";
value = {
"L+".argument = "${source}";
};
};
# Make a systemd-tmpfiles rule for a container image
mkImageRule = image: {
name = "${imageDir}/${image.name}";
value = {
"L+".argument = "${image}";
};
};
in
(lib.mapAttrs' (_: v: mkManifestRule v) enabledManifests)
// (lib.mapAttrs' (n: v: mkChartRule n v) helmCharts)
// (builtins.listToAttrs (map mkImageRule cfg.images))
// (lib.optionalAttrs (cfg.containerdConfigTemplate != null) {
${containerdConfigTemplateFile} = {
"L+".argument = "${pkgs.writeText "config.toml.tmpl" cfg.containerdConfigTemplate}";
};
});
systemd.services.k3s =
let
kubeletParams =
@ -533,7 +879,6 @@ in
LimitCORE = "infinity";
TasksMax = "infinity";
EnvironmentFile = cfg.environmentFile;
ExecStartPre = activateK3sContent;
ExecStart = lib.concatStringsSep " \\\n " (
[ "${cfg.package}/bin/k3s ${cfg.role}" ]
++ (lib.optional cfg.clusterInit "--cluster-init")

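A minimal sketch of what the tmpfiles-based activation above produces for a single enabled manifest; the `manifestDir` path is an assumption for illustration, not taken from this diff, and the store path is a placeholder:

```nix
{
  # hypothetical result of mkManifestRule for a manifest with
  # target = "deployment.yaml", assuming
  # manifestDir = "/var/lib/rancher/k3s/server/manifests"
  systemd.tmpfiles.settings."10-k3s" = {
    "/var/lib/rancher/k3s/server/manifests/deployment.yaml" = {
      "L+".argument = "/nix/store/<hash>-k3s-manifest-deployment.yaml";
    };
  };
}
```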
View file

@ -70,6 +70,7 @@ in
};
systemd.services.torque-server = {
documentation = [ "man:pbs_server(8)" ];
path = [ torque ];
wantedBy = [ "multi-user.target" ];
@ -93,6 +94,7 @@ in
};
systemd.services.torque-scheduler = {
documentation = [ "man:pbs_sched(8)" ];
path = [ torque ];
requires = [ "torque-server-init.service" ];

View file

@ -520,7 +520,7 @@ in
elif [[ $compression == zstd ]]; then
compressionCmd=(zstd --rm)
fi
find ${baseDir}/build-logs -type f -name "*.drv" -mtime +3 -size +0c -print0 | xargs -0 -r "''${compressionCmd[@]}" --force --quiet
find ${baseDir}/build-logs -ignore_readdir_race -type f -name "*.drv" -mtime +3 -size +0c -print0 | xargs -0 -r "''${compressionCmd[@]}" --force --quiet
'';
startAt = "Sun 01:45";
serviceConfig.Slice = "system-hydra.slice";

View file

@ -93,11 +93,7 @@ let
};
} cfg.extraConfig;
configFile = pkgs.runCommandLocal "config.toml" { } ''
${pkgs.buildPackages.remarshal}/bin/remarshal -if json -of toml \
< ${pkgs.writeText "config.json" (builtins.toJSON configOptions)} \
> $out
'';
configFile = (pkgs.formats.toml {}).generate "config.toml" configOptions;
in
{

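A minimal sketch of the `pkgs.formats.toml` generator that replaces the remarshal pipeline above; the settings are placeholders, not values from this module:

```nix
let
  pkgs = import <nixpkgs> { };
  toml = pkgs.formats.toml { };
in
# generate returns a store path containing the rendered TOML file
toml.generate "config.toml" {
  log_level = "info"; # placeholder settings
  server = {
    host = "127.0.0.1";
    port = 8080;
  };
}
```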
View file

@ -433,6 +433,25 @@ in
done
''}
${lib.optionalString isMariaDB ''
# If MariaDB is used in a Galera cluster, we have to check whether the sync is done;
# otherwise initialising the database while joining fails and we end up in a broken,
# non-recoverable state. So we wait until the node reports a synced state.
if ${cfg.package}/bin/mysql -u ${superUser} -N -e "SHOW VARIABLES LIKE 'wsrep_on'" 2>/dev/null | ${lib.getExe' pkgs.gnugrep "grep"} -q 'ON'; then
echo "Galera cluster detected, waiting for node to be synced..."
while true; do
STATE=$(${cfg.package}/bin/mysql -u ${superUser} -N -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" | ${lib.getExe' pkgs.gawk "awk"} '{print $2}')
if [ "$STATE" = "Synced" ]; then
echo "Node is synced"
break
else
echo "Current state: $STATE - Waiting for 1 second..."
sleep 1
fi
done
fi
''}
if [ -f ${cfg.dataDir}/mysql_init ]
then
# While MariaDB comes with a 'mysql' super user account since 10.4.x, MySQL does not
@ -447,10 +466,10 @@ in
# Create initial databases
if ! test -e "${cfg.dataDir}/${database.name}"; then
echo "Creating initial database: ${database.name}"
( echo 'create database `${database.name}`;'
( echo 'CREATE DATABASE IF NOT EXISTS `${database.name}`;'
${lib.optionalString (database.schema != null) ''
echo 'use `${database.name}`;'
echo 'USE `${database.name}`;'
# TODO: this silently falls through if database.schema does not exist,
# we should catch this somehow and exit, but can't do it here because we're in a subshell.
@ -469,7 +488,7 @@ in
${lib.optionalString (cfg.replication.role == "master") ''
# Set up the replication master
( echo "use mysql;"
( echo "USE mysql;"
echo "CREATE USER '${cfg.replication.masterUser}'@'${cfg.replication.slaveHost}' IDENTIFIED WITH mysql_native_password;"
echo "SET PASSWORD FOR '${cfg.replication.masterUser}'@'${cfg.replication.slaveHost}' = PASSWORD('${cfg.replication.masterPassword}');"
echo "GRANT REPLICATION SLAVE ON *.* TO '${cfg.replication.masterUser}'@'${cfg.replication.slaveHost}';"
@ -479,9 +498,9 @@ in
${lib.optionalString (cfg.replication.role == "slave") ''
# Set up the replication slave
( echo "stop slave;"
echo "change master to master_host='${cfg.replication.masterHost}', master_user='${cfg.replication.masterUser}', master_password='${cfg.replication.masterPassword}';"
echo "start slave;"
( echo "STOP SLAVE;"
echo "CHANGE MASTER TO MASTER_HOST='${cfg.replication.masterHost}', MASTER_USER='${cfg.replication.masterUser}', MASTER_PASSWORD='${cfg.replication.masterPassword}';"
echo "START SLAVE;"
) | ${cfg.package}/bin/mysql -u ${superUser} -N
''}

View file

@ -14,8 +14,11 @@ let
const
elem
escapeShellArgs
filter
filterAttrs
getAttr
getName
hasPrefix
isString
literalExpression
mapAttrs
@ -31,6 +34,8 @@ let
mkRemovedOptionModule
mkRenamedOptionModule
optionalString
pipe
sortProperties
types
versionAtLeast
warn
@ -124,6 +129,100 @@ in
'';
};
systemCallFilter = mkOption {
type = types.attrsOf (
types.coercedTo types.bool (enable: { inherit enable; }) (
types.submodule (
{ name, ... }:
{
options = {
enable = mkEnableOption "${name} in postgresql's syscall filter";
priority = mkOption {
default =
if hasPrefix "@" name then
500
else if hasPrefix "~@" name then
1000
else
1500;
defaultText = literalExpression ''
if hasPrefix "@" name then 500 else if hasPrefix "~@" name then 1000 else 1500
'';
type = types.int;
description = ''
Set the priority of the system call filter setting. Later declarations
override earlier ones, e.g.
```ini
[Service]
SystemCallFilter=~read write
SystemCallFilter=write
```
results in a service where _only_ `read` is not allowed.
The ordering in the unit file is controlled by this option: the higher
the number, the later it will be added to the filterset.
By default, depending on the prefix a priority is assigned: usually, call-groups
(starting with `@`) are used to allow/deny a larger set of syscalls and later
on single syscalls are configured for exceptions. Hence, syscall groups
and negative groups are placed before individual syscalls by default.
'';
};
};
}
)
)
);
defaultText = literalExpression ''
{
"@system-service" = true;
"~@privileged" = true;
"~@resources" = true;
}
'';
description = ''
Configures the syscall filter for `postgresql.service`. The keys are
declarations for `SystemCallFilter` as described in {manpage}`systemd.exec(5)`.
The value is a boolean: `true` adds the attribute name to the syscall filter-set,
`false` doesn't. This is done to allow downstream configurations to turn off
restrictions made here. E.g. with
```nix
{
services.postgresql.systemCallFilter."~@resources" = false;
}
```
it's possible to remove the restriction on `@resources` (keep in mind that
`@system-service` implies `@resources`).
As described in the section for [](#opt-services.postgresql.systemCallFilter._name_.priority),
the ordering matters. Hence, it's also possible to specify customizations with
```nix
{
services.postgresql.systemCallFilter = {
"foobar" = { enable = true; priority = 23; };
};
}
```
[](#opt-services.postgresql.systemCallFilter._name_.enable) is the flag whether
or not it will be added to the `SystemCallFilter` of `postgresql.service`.
Settings with a higher priority are added after filter settings with a lower
priority. Hence, syscall groups with a higher priority can discard declarations
with a lower priority.
By default, syscall groups (i.e. attribute names starting with `@`) are added
_before_ negated groups (i.e. `~@` as prefix) _before_ syscall names
and negations.
'';
};
checkConfig = mkOption {
type = types.bool;
default = true;
@ -439,7 +538,7 @@ in
]);
options = {
shared_preload_libraries = mkOption {
type = nullOr (coercedTo (listOf str) (concatStringsSep ", ") str);
type = nullOr (coercedTo (listOf str) (concatStringsSep ",") commas);
default = null;
example = literalExpression ''[ "auto_explain" "anon" ]'';
description = ''
@ -583,6 +682,21 @@ in
'')
];
services.postgresql.systemCallFilter = mkMerge [
(mapAttrs (const mkDefault) {
"@system-service" = true;
"~@privileged" = true;
"~@resources" = true;
})
(mkIf (any extensionInstalled [ "plv8" ]) {
"@pkey" = true;
})
(mkIf (any extensionInstalled [ "citus" ]) {
"getpriority" = true;
"setpriority" = true;
})
];
users.users.postgres = {
name = "postgres";
uid = config.ids.uids.postgres;
@ -727,16 +841,12 @@ in
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallArchitectures = "native";
SystemCallFilter =
[
"@system-service"
"~@privileged @resources"
]
++ lib.optionals (any extensionInstalled [ "plv8" ]) [ "@pkey" ]
++ lib.optionals (any extensionInstalled [ "citus" ]) [
"getpriority"
"setpriority"
];
SystemCallFilter = pipe cfg.systemCallFilter [
(mapAttrsToList (name: v: v // { inherit name; }))
(filter (getAttr "enable"))
sortProperties
(map (getAttr "name"))
];
UMask = if groupAccessAvailable then "0027" else "0077";
}
(mkIf (cfg.dataDir != "/var/lib/postgresql/${cfg.package.psqlSchema}") {

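A minimal usage sketch of the new `systemCallFilter` option from a downstream configuration; the `mincore` entry and its priority are illustrative assumptions:

```nix
{
  services.postgresql.systemCallFilter = {
    # lift the "~@resources" restriction set by the module defaults
    "~@resources" = false;
    # append one extra syscall after all defaults; entries with a higher
    # priority are added later to SystemCallFilter
    "mincore" = {
      enable = true;
      priority = 2000;
    };
  };
}
```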
View file

@ -58,15 +58,12 @@ let
configPackages = cfg.configPackages;
extraConfigPkg =
extraConfigPkgFromFiles [ "pipewire" "client" "client-rt" "jack" "pipewire-pulse" ]
(
mapToFiles "pipewire" cfg.extraConfig.pipewire
// mapToFiles "client" cfg.extraConfig.client
// mapToFiles "client-rt" cfg.extraConfig.client-rt
// mapToFiles "jack" cfg.extraConfig.jack
// mapToFiles "pipewire-pulse" cfg.extraConfig.pipewire-pulse
);
extraConfigPkg = extraConfigPkgFromFiles [ "pipewire" "client" "jack" "pipewire-pulse" ] (
mapToFiles "pipewire" cfg.extraConfig.pipewire
// mapToFiles "client" cfg.extraConfig.client
// mapToFiles "jack" cfg.extraConfig.jack
// mapToFiles "pipewire-pulse" cfg.extraConfig.pipewire-pulse
);
configs = pkgs.buildEnv {
name = "pipewire-configs";
@ -205,27 +202,6 @@ in
[wiki]: https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Config-client
'';
};
client-rt = mkOption {
type = attrsOf json.type;
default = { };
example = {
"10-alsa-linear-volume" = {
"alsa.properties" = {
"alsa.volume-method" = "linear";
};
};
};
description = ''
Additional configuration for the PipeWire client library, used by real-time applications and legacy ALSA clients.
Every item in this attrset becomes a separate drop-in file in `/etc/pipewire/client-rt.conf.d`.
See the [PipeWire wiki][wiki] for examples of general configuration, and [PipeWire wiki - ALSA][wiki-alsa] for ALSA clients.
[wiki]: https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Config-client
[wiki-alsa]: https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/Config-ALSA
'';
};
jack = mkOption {
type = attrsOf json.type;
default = { };
@ -341,6 +317,10 @@ in
pipewire-media-session is no longer supported upstream and has been removed.
Please switch to `services.pipewire.wireplumber` instead.
'')
(mkRemovedOptionModule [ "services" "pipewire" "extraConfig" "client-rt" ] ''
`services.pipewire.extraConfig.client-rt` is no longer applicable, as `client-rt.conf` has been
removed upstream. Please move your customizations to `services.pipewire.extraConfig.client`.
'')
];
###### implementation
@ -392,10 +372,13 @@ in
) "${lv2Plugins}/lib/lv2";
# Mask pw-pulse if it's not wanted
systemd.user.services.pipewire-pulse.enable = cfg.pulse.enable;
systemd.user.sockets.pipewire-pulse.enable = cfg.pulse.enable;
systemd.services.pipewire-pulse.enable = cfg.pulse.enable && cfg.systemWide;
systemd.sockets.pipewire-pulse.enable = cfg.pulse.enable && cfg.systemWide;
systemd.user.services.pipewire-pulse.enable = cfg.pulse.enable && !cfg.systemWide;
systemd.user.sockets.pipewire-pulse.enable = cfg.pulse.enable && !cfg.systemWide;
systemd.sockets.pipewire.wantedBy = mkIf cfg.socketActivation [ "sockets.target" ];
systemd.sockets.pipewire-pulse.wantedBy = mkIf cfg.socketActivation [ "sockets.target" ];
systemd.user.sockets.pipewire.wantedBy = mkIf cfg.socketActivation [ "sockets.target" ];
systemd.user.sockets.pipewire-pulse.wantedBy = mkIf cfg.socketActivation [ "sockets.target" ];

View file
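A migration sketch for configurations that used the removed `client-rt` option; the drop-in name and values mirror the old example shown above:

```nix
{
  # formerly services.pipewire.extraConfig.client-rt."10-alsa-linear-volume"
  services.pipewire.extraConfig.client."10-alsa-linear-volume" = {
    "alsa.properties" = {
      "alsa.volume-method" = "linear";
    };
  };
}
```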

@ -140,12 +140,10 @@ let
}
);
configFile = pkgs.runCommandLocal "config.toml" { } ''
${pkgs.buildPackages.jq}/bin/jq 'del(..|nulls)' \
< ${pkgs.writeText "config.json" (builtins.toJSON athensConfig)} | \
${pkgs.buildPackages.remarshal}/bin/remarshal -if json -of toml \
> $out
'';
configFile = lib.pipe athensConfig [
(lib.filterAttrsRecursive (_k: v: v != null))
((pkgs.formats.toml {}).generate "config.toml")
];
in
{
meta = {

View file

@ -127,5 +127,7 @@
services.libeufin.nexus.settings.libeufin-nexusdb-postgres.CONFIG = lib.mkIf (
cfgMain.bank.enable && cfgMain.bank.createLocalDatabase
) "postgresql:///libeufin-bank";
systemd.services.libeufin-nexus.documentation = [ "man:libeufin-nexus(1)" ];
};
}

View file

@ -68,11 +68,19 @@ in
requires = [ "taler-${talerComponent}-dbinit.service" ];
after = [ "taler-${talerComponent}-dbinit.service" ];
wantedBy = [ "multi-user.target" ]; # TODO slice?
documentation = [
"man:taler-${talerComponent}-${name}(1)"
"info:taler-${talerComponent}"
];
}))
# Database Initialisation
{
"taler-${talerComponent}-dbinit" = {
path = [ config.services.postgresql.package ];
documentation = [
"man:taler-${talerComponent}-dbinit(1)"
"info:taler-${talerComponent}"
];
serviceConfig = {
Type = "oneshot";
DynamicUser = true;

View file

@ -148,7 +148,7 @@ in {
};
package = lib.mkPackageOption pkgs "minecraft-server" {
example = "minecraft-server_1_12_2";
example = "pkgs.minecraft-server_1_12_2";
};
jvmOpts = lib.mkOption {

View file

@ -49,6 +49,7 @@ in
systemd.services.thermald = {
description = "Thermal Daemon Service";
documentation = [ "man:thermald(8)" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
PrivateNetwork = true;

View file

@ -122,6 +122,7 @@ in
systemd.services.triggerhappy = {
wantedBy = [ "multi-user.target" ];
description = "Global hotkey daemon";
documentation = [ "man:thd(1)" ];
serviceConfig = {
ExecStart = "${pkgs.triggerhappy}/bin/thd ${
lib.optionalString (cfg.user != "root") "--user ${cfg.user}"

View file

@ -58,6 +58,15 @@ in
serviceConfig = {
ExecStart = (
lib.concatStringsSep " " [
# `python-matter-server` writes to /data even when a storage-path
# is specified. This symlinks /data to the systemd-managed
# /var/lib/matter-server, so all files get dropped into the state
# directory.
"${pkgs.bash}/bin/sh"
"-c"
"'"
"${pkgs.coreutils}/bin/ln -s %S/matter-server/ %t/matter-server/root/data"
"&&"
"${cfg.package}/bin/matter-server"
"--port"
(toString cfg.port)
@ -68,22 +77,21 @@ in
"--log-level"
"${cfg.logLevel}"
"${lib.escapeShellArgs cfg.extraArgs}"
"'"
]
);
# Start with a clean root filesystem, and allowlist what the container
# is permitted to access.
TemporaryFileSystem = "/";
# See https://discourse.nixos.org/t/hardening-systemd-services/17147/14.
RuntimeDirectory = [ "matter-server/root" ];
RootDirectory = "%t/matter-server/root";
# Allowlist /nix/store (to allow the binary to find its dependencies)
# and dbus.
ReadOnlyPaths = "/nix/store /run/dbus";
BindReadOnlyPaths = "/nix/store /run/dbus";
# Let systemd manage `/var/lib/matter-server` for us inside the
# ephemeral TemporaryFileSystem.
StateDirectory = storageDir;
# `python-matter-server` writes to /data even when a storage-path is
# specified. This bind-mount points /data at the systemd-managed
# /var/lib/matter-server, so all files get dropped into the state
# directory.
BindPaths = "${storagePath}:/data";
# Hardening bits
AmbientCapabilities = "";

View file

@ -111,6 +111,7 @@ in
SystemCallFilter = [
"@system-service @pkey"
"~@privileged @resources"
"@chown"
];
UMask = "0077";
};

View file

@ -38,17 +38,8 @@ in
enable = lib.mkEnableOption "Graylog, a log management solution";
package = lib.mkOption {
type = lib.types.package;
default =
if lib.versionOlder config.system.stateVersion "23.05" then pkgs.graylog-3_3 else pkgs.graylog-5_1;
defaultText = lib.literalExpression (
if lib.versionOlder config.system.stateVersion "23.05" then
"pkgs.graylog-3_3"
else
"pkgs.graylog-5_1"
);
description = "Graylog package to use.";
package = lib.mkPackageOption pkgs "graylog" {
example = "graylog-6_0";
};
user = lib.mkOption {
@ -139,6 +130,22 @@ in
config = lib.mkIf cfg.enable {
# Note: when changing the default, make it conditional on
# system.stateVersion to maintain compatibility with existing
# systems!
services.graylog.package =
let
mkThrow = ver: throw "graylog-${ver} was removed, please upgrade your graylog version.";
base =
if lib.versionAtLeast config.system.stateVersion "25.05" then
pkgs.graylog-6_0
else if lib.versionAtLeast config.system.stateVersion "23.05" then
mkThrow "5_1"
else
mkThrow "3_3";
in
lib.mkDefault base;
users.users = lib.mkIf (cfg.user == "graylog") {
graylog = {
isSystemUser = true;

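A sketch of pinning the Graylog package explicitly, e.g. on a host whose `system.stateVersion` would otherwise hit one of the `throw`s above; `graylog-6_0` is the attribute used in this module:

```nix
{ pkgs, ... }:
{
  # opt in to the current series regardless of system.stateVersion
  services.graylog.package = pkgs.graylog-6_0;
}
```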
View file

@ -2,9 +2,11 @@
let
cfg = config.services.promtail;
prettyJSON = conf: pkgs.runCommandLocal "promtail-config.json" {} ''
echo '${builtins.toJSON conf}' | ${pkgs.buildPackages.jq}/bin/jq 'del(._module)' > $out
'';
format = pkgs.formats.json {};
prettyJSON = conf: with lib; pipe conf [
(flip removeAttrs [ "_module" ])
(format.generate "promtail-config.json")
];
allowSystemdJournal = cfg.configuration ? scrape_configs && lib.any (v: v ? journal) cfg.configuration.scrape_configs;
@ -20,7 +22,7 @@ in {
enable = mkEnableOption "the Promtail ingresser";
configuration = mkOption {
type = (pkgs.formats.json {}).type;
type = format.type;
description = ''
Specify the configuration for Promtail in Nix.
This option will be ignored if `services.promtail.configFile` is defined.

View file

@ -111,6 +111,7 @@ let
base_dir = ${baseDir}
protocols = ${concatStringsSep " " cfg.protocols}
sendmail_path = /run/wrappers/bin/sendmail
mail_plugin_dir = /run/current-system/sw/lib/dovecot/modules
# defining mail_plugins must be done before the first protocol {} filter because of https://doc.dovecot.org/configuration_manual/config_file/config_file_syntax/#variable-expansion
mail_plugins = $mail_plugins ${concatStringsSep " " cfg.mailPlugins.globally.enable}
''
@ -207,13 +208,6 @@ let
cfg.extraConfig
];
modulesDir = pkgs.symlinkJoin {
name = "dovecot-modules";
paths = map (pkg: "${pkg}/lib/dovecot") (
[ dovecotPkg ] ++ map (module: module.override { dovecot = dovecotPkg; }) cfg.modules
);
};
mailboxConfig =
mailbox:
''
@ -280,6 +274,11 @@ in
{
imports = [
(mkRemovedOptionModule [ "services" "dovecot2" "package" ] "")
(mkRemovedOptionModule [
"services"
"dovecot2"
"modules"
] "Now need to use `environment.systemPackages` to load additional Dovecot modules")
(mkRenamedOptionModule
[ "services" "dovecot2" "sieveScripts" ]
[ "services" "dovecot2" "sieve" "scripts" ]
@ -409,17 +408,6 @@ in
default = true;
};
modules = mkOption {
type = types.listOf types.package;
default = [ ];
example = literalExpression "[ pkgs.dovecot_pigeonhole ]";
description = ''
Symlinks the contents of lib/dovecot of every given package into
/etc/dovecot/modules. This will make the given modules available
if a dovecot package with the module_dir patch applied is being used.
'';
};
sslCACert = mkOption {
type = types.nullOr types.str;
default = null;
@ -702,7 +690,6 @@ in
${cfg.mailGroup} = { };
};
environment.etc."dovecot/modules".source = modulesDir;
environment.etc."dovecot/dovecot.conf".source = cfg.configFile;
systemd.services.dovecot2 = {
@ -712,7 +699,6 @@ in
wantedBy = [ "multi-user.target" ];
restartTriggers = [
cfg.configFile
modulesDir
];
startLimitIntervalSec = 60; # 1 min

View file
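A migration sketch for the removed `services.dovecot2.modules` option, using the package from its former example; plugins are now discovered through the system profile path set in `mail_plugin_dir` above:

```nix
{ pkgs, ... }:
{
  # previously: services.dovecot2.modules = [ pkgs.dovecot_pigeonhole ];
  environment.systemPackages = [ pkgs.dovecot_pigeonhole ];
}
```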

@ -871,6 +871,7 @@ in
systemd.services.postfix = {
description = "Postfix mail server";
documentation = [ "man:postfix(1)" ];
wantedBy = [ "multi-user.target" ];
after = [
"network.target"

View file

@ -9,7 +9,8 @@ let
registrationFile = "${dataDir}/telegram-registration.yaml";
cfg = config.services.mautrix-telegram;
settingsFormat = pkgs.formats.json { };
settingsFile = settingsFormat.generate "mautrix-telegram-config.json" cfg.settings;
settingsFileUnsubstituted = settingsFormat.generate "mautrix-telegram-config.json" cfg.settings;
settingsFile = "${dataDir}/config.json";
in
{
@ -132,10 +133,37 @@ in
List of Systemd services to require and wait for when starting the application service.
'';
};
registerToSynapse = lib.mkOption {
type = lib.types.bool;
default = config.services.matrix-synapse.enable;
defaultText = lib.literalExpression "config.services.matrix-synapse.enable";
description = ''
Whether to add the bridge's app service registration file to
`services.matrix-synapse.settings.app_service_config_files`.
'';
};
};
};
config = lib.mkIf cfg.enable {
users.users.mautrix-telegram = {
isSystemUser = true;
group = "mautrix-telegram";
home = dataDir;
description = "Mautrix-Telegram bridge user";
};
users.groups.mautrix-telegram = { };
services.matrix-synapse = lib.mkIf cfg.registerToSynapse {
settings.app_service_config_files = [ registrationFile ];
};
systemd.services.matrix-synapse = lib.mkIf cfg.registerToSynapse {
serviceConfig.SupplementaryGroups = [ "mautrix-telegram" ];
};
systemd.services.mautrix-telegram = {
description = "Mautrix-Telegram, a Matrix-Telegram hybrid puppeting/relaybot bridge.";
@ -161,6 +189,16 @@ in
preStart =
''
# substitute environment variables (in this case read from EnvironmentFile)
# into the settings file
test -f '${settingsFile}' && rm -f '${settingsFile}'
old_umask=$(umask)
umask 0177
${pkgs.envsubst}/bin/envsubst \
-o '${settingsFile}' \
-i '${settingsFileUnsubstituted}'
umask $old_umask
# generate the appservice's registration file if absent
if [ ! -f '${registrationFile}' ]; then
${pkgs.mautrix-telegram}/bin/mautrix-telegram \
@ -168,6 +206,19 @@ in
--config='${settingsFile}' \
--registration='${registrationFile}'
fi
old_umask=$(umask)
umask 0177
# 1. Overwrite registration tokens in config
# is set, set it as the login shared secret value for the configured
# homeserver domain.
${pkgs.yq}/bin/yq -s '.[0].appservice.as_token = .[1].as_token
| .[0].appservice.hs_token = .[1].hs_token
| .[0]' \
'${settingsFile}' '${registrationFile}' > '${settingsFile}.tmp'
mv '${settingsFile}.tmp' '${settingsFile}'
umask $old_umask
''
+ lib.optionalString (pkgs.mautrix-telegram ? alembic) ''
# run automatic database init and migration scripts
@ -175,6 +226,8 @@ in
'';
serviceConfig = {
User = "mautrix-telegram";
Group = "mautrix-telegram";
Type = "simple";
Restart = "always";
@ -184,7 +237,6 @@ in
ProtectKernelModules = true;
ProtectControlGroups = true;
DynamicUser = true;
PrivateTmp = true;
WorkingDirectory = pkgs.mautrix-telegram; # necessary for the database migration scripts to be found
StateDirectory = baseNameOf dataDir;

View file

@ -10,7 +10,7 @@ let
settings = lib.attrsets.filterAttrs (n: v: v != null) cfg.settings;
configFile = format.generate "evremap.toml" settings;
key = lib.types.strMatching "(BTN|KEY)_[[:upper:]]+" // {
key = lib.types.strMatching "(BTN|KEY)_[[:upper:][:digit:]_]+" // {
description = "key ID prefixed with BTN_ or KEY_";
};

View file

@ -86,7 +86,7 @@ in
ProtectProc = "invisible";
ProtectSystem = "strict";
ReadWritePaths = [
"${config.users.users.${cfg.user}.home}"
cfg.dataDir
];
RemoveIPC = true;
RestrictAddressFamilies = [

View file

@ -0,0 +1,182 @@
{
config,
lib,
pkgs,
...
}:
let
inherit (lib) types;
cfg = config.services.litellm;
settingsFormat = pkgs.formats.yaml { };
in
{
options = {
services.litellm = {
enable = lib.mkEnableOption "LiteLLM server";
package = lib.mkPackageOption pkgs "litellm" { };
stateDir = lib.mkOption {
type = types.path;
default = "/var/lib/litellm";
example = "/home/foo";
description = "State directory of LiteLLM.";
};
host = lib.mkOption {
type = types.str;
default = "127.0.0.1";
example = "0.0.0.0";
description = ''
The host address which the LiteLLM server HTTP interface listens to.
'';
};
port = lib.mkOption {
type = types.port;
default = 8080;
example = 11111;
description = ''
Which port the LiteLLM server listens to.
'';
};
settings = lib.mkOption {
type = types.submodule {
freeformType = settingsFormat.type;
options = {
model_list = lib.mkOption {
type = settingsFormat.type;
description = ''
List of supported models on the server, with model-specific configs.
'';
default = [ ];
};
router_settings = lib.mkOption {
type = settingsFormat.type;
description = ''
LiteLLM Router settings
'';
default = { };
};
litellm_settings = lib.mkOption {
type = settingsFormat.type;
description = ''
LiteLLM Module settings
'';
default = { };
};
general_settings = lib.mkOption {
type = settingsFormat.type;
description = ''
LiteLLM Server settings
'';
default = { };
};
environment_variables = lib.mkOption {
type = settingsFormat.type;
description = ''
Environment variables to pass to the LiteLLM server.
'';
default = { };
};
};
};
default = { };
description = ''
Configuration for LiteLLM.
See <https://docs.litellm.ai/docs/proxy/configs> for more.
'';
};
environment = lib.mkOption {
type = types.attrsOf types.str;
default = {
SCARF_NO_ANALYTICS = "True";
DO_NOT_TRACK = "True";
ANONYMIZED_TELEMETRY = "False";
};
example = ''
{
NO_DOCS="True";
}
'';
description = ''
Extra environment variables for LiteLLM.
'';
};
environmentFile = lib.mkOption {
description = ''
Environment file to be passed to the systemd service.
Useful for passing secrets to the service to prevent them from being
world-readable in the Nix store.
'';
type = lib.types.nullOr lib.types.path;
default = null;
example = "/var/lib/secrets/liteLLMSecrets";
};
openFirewall = lib.mkOption {
type = types.bool;
default = false;
description = ''
Whether to open the firewall for LiteLLM.
This adds `services.litellm.port` to `networking.firewall.allowedTCPPorts`.
'';
};
};
};
config = lib.mkIf cfg.enable {
systemd.services.litellm = {
description = "LLM Gateway to provide model access, fallbacks and spend tracking across 100+ LLMs.";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = cfg.environment;
serviceConfig =
let
configFile = settingsFormat.generate "config.yaml" cfg.settings;
in
{
ExecStart = "${lib.getExe cfg.package} --host \"${cfg.host}\" --port ${toString cfg.port} --config ${configFile}";
EnvironmentFile = lib.optional (cfg.environmentFile != null) cfg.environmentFile;
WorkingDirectory = cfg.stateDir;
StateDirectory = "litellm";
RuntimeDirectory = "litellm";
RuntimeDirectoryMode = "0755";
PrivateTmp = true;
DynamicUser = true;
DevicePolicy = "closed";
LockPersonality = true;
PrivateUsers = true;
ProtectHome = true;
ProtectHostname = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
RestrictNamespaces = true;
RestrictRealtime = true;
SystemCallArchitectures = "native";
UMask = "0077";
RestrictAddressFamilies = [
"AF_INET"
"AF_INET6"
"AF_UNIX"
];
ProtectClock = true;
ProtectProc = "invisible";
};
};
networking.firewall = lib.mkIf cfg.openFirewall { allowedTCPPorts = [ cfg.port ]; };
};
meta.maintainers = with lib.maintainers; [ drupol ];
}
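A minimal usage sketch for the new module; the model entry follows the upstream `model_list` schema, and the model name, Ollama URL, and port are placeholders, not module defaults:

```nix
{
  services.litellm = {
    enable = true;
    port = 4000;
    settings.model_list = [
      {
        # placeholder model routed to a local Ollama instance
        model_name = "llama3";
        litellm_params = {
          model = "ollama/llama3";
          api_base = "http://127.0.0.1:11434";
        };
      }
    ];
  };
}
```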

View file

@ -120,6 +120,18 @@ in
RestrictRealtime = true;
SystemCallArchitectures = "native";
UMask = "0077";
CapabilityBoundingSet = "";
RestrictAddressFamilies = [
"AF_INET"
"AF_INET6"
"AF_UNIX"
];
ProtectClock = true;
ProtectProc = "invisible";
SystemCallFilter = [
"@system-service"
"~@privileged"
];
};
};

View file

@ -0,0 +1,134 @@
{
config,
options,
lib,
pkgs,
...
}:
let
inherit (lib) types;
cfg = config.services.orthanc;
opt = options.services.orthanc;
settingsFormat = pkgs.formats.json { };
in
{
options = {
services.orthanc = {
enable = lib.mkEnableOption "Orthanc server";
package = lib.mkPackageOption pkgs "orthanc" { };
stateDir = lib.mkOption {
type = types.path;
default = "/var/lib/orthanc";
example = "/home/foo";
description = "State directory of Orthanc.";
};
environment = lib.mkOption {
type = types.attrsOf types.str;
default = {
};
example = ''
{
ORTHANC_NAME = "Orthanc server";
}
'';
description = ''
Extra environment variables for Orthanc.
For more details, see <https://orthanc.uclouvain.be/book/users/configuration.html>.
'';
};
environmentFile = lib.mkOption {
description = ''
Environment file to be passed to the systemd service.
Useful for passing secrets to the service to prevent them from being
world-readable in the Nix store.
'';
type = lib.types.nullOr lib.types.path;
default = null;
example = "/var/lib/secrets/orthancSecrets";
};
settings = lib.mkOption {
type = lib.types.submodule {
freeformType = settingsFormat.type;
};
default = {
HttpPort = lib.mkDefault 8042;
IndexDirectory = lib.mkDefault "/var/lib/orthanc/";
StorageDirectory = lib.mkDefault "/var/lib/orthanc/";
};
example = {
Name = "My Orthanc Server";
HttpPort = 12345;
};
description = ''
Configuration written to a json file that is read by orthanc.
See <https://orthanc.uclouvain.be/book/index.html> for more.
'';
};
openFirewall = lib.mkOption {
type = types.bool;
default = false;
description = ''
Whether to open the firewall for Orthanc.
This adds `services.orthanc.settings.HttpPort` to `networking.firewall.allowedTCPPorts`.
'';
};
};
};
config = lib.mkIf cfg.enable {
services.orthanc.settings = options.services.orthanc.settings.default;
systemd.services.orthanc = {
description = "Orthanc is a lightweight, RESTful DICOM server for healthcare and medical research";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = cfg.environment;
serviceConfig =
let
config-json = settingsFormat.generate "orthanc-config.json" (cfg.settings);
in
{
ExecStart = "${lib.getExe cfg.package} ${config-json}";
EnvironmentFile = lib.optional (cfg.environmentFile != null) cfg.environmentFile;
WorkingDirectory = cfg.stateDir;
BindReadOnlyPaths = [
"-/etc/localtime"
];
StateDirectory = "orthanc";
RuntimeDirectory = "orthanc";
RuntimeDirectoryMode = "0755";
PrivateTmp = true;
DynamicUser = true;
DevicePolicy = "closed";
LockPersonality = true;
PrivateUsers = true;
ProtectHome = true;
ProtectHostname = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
RestrictNamespaces = true;
RestrictRealtime = true;
SystemCallArchitectures = "native";
UMask = "0077";
};
};
networking.firewall = lib.mkIf cfg.openFirewall { allowedTCPPorts = [ cfg.settings.HttpPort ]; };
# Orthanc requires /etc/localtime to be present
time.timeZone = lib.mkDefault "UTC";
};
meta.maintainers = with lib.maintainers; [ drupol ];
}
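A minimal usage sketch for the new module; `Name` and `HttpPort` come from the option example and defaults above:

```nix
{
  services.orthanc = {
    enable = true;
    openFirewall = true;
    settings = {
      Name = "My Orthanc Server";
      HttpPort = 8042;
    };
  };
}
```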

View file

@ -69,7 +69,7 @@ in
validateSettings = mkOption {
type = types.bool;
default = true;
description = "Weither to run renovate's config validator on the built configuration.";
description = "Whether to run renovate's config validator on the built configuration.";
};
settings = mkOption {
type = json.type;

View file

@ -30,12 +30,10 @@ in
configuration file via `environment.etc."alloy/config.alloy"`.
This allows config reload, contrary to specifying a store path.
A `reloadTrigger` for `config.alloy` is configured.
Other `*.alloy` files in the same directory (ignoring subdirs) are also
honored, but it's necessary to manually extend
`systemd.services.alloy.reloadTriggers` to enable config reload
during nixos-rebuild switch.
All `.alloy` files in the same directory (ignoring subdirs) are also
honored and are added to `systemd.services.alloy.reloadTriggers` to
enable config reload during nixos-rebuild switch.
This can also point to another directory containing `*.alloy` files, or
a single Alloy file in the Nix store (at the cost of reload).
@ -68,7 +66,9 @@ in
config = lib.mkIf cfg.enable {
systemd.services.alloy = {
wantedBy = [ "multi-user.target" ];
reloadTriggers = [ config.environment.etc."alloy/config.alloy".source or null ];
reloadTriggers = lib.mapAttrsToList (_: v: v.source or null) (
lib.filterAttrs (n: _: lib.hasPrefix "alloy/" n && lib.hasSuffix ".alloy" n) config.environment.etc
);
serviceConfig = {
Restart = "always";
DynamicUser = true;

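A sketch of adding a second drop-in file that the new `reloadTriggers` expression picks up automatically; the Alloy `logging` block is an assumption about upstream syntax, not taken from this module:

```nix
{
  # any environment.etc entry under alloy/ ending in .alloy matches the
  # filterAttrs predicate above and becomes a reload trigger
  environment.etc."alloy/logging.alloy".text = ''
    logging {
      level = "info"
    }
  '';
}
```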
View file

@ -68,6 +68,7 @@ in
systemd.services."glances" = {
description = "Glances";
documentation = [ "man:glances(1)" ];
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];

Some files were not shown because too many files have changed in this diff.