Project import generated by Copybara.

GitOrigin-RevId: 18036c0be90f4e308ae3ebcab0e14aae0336fe42
This commit is contained in:
Default email 2023-08-05 00:07:22 +02:00
parent 670ffb4186
commit 5e9e1146e1
3799 changed files with 97519 additions and 123476 deletions


@@ -196,6 +196,11 @@ pkgs/development/python-modules/buildcatrust/ @ajs124 @lukegb @mweinelt
/nixos/tests/kea.nix @mweinelt
/nixos/tests/knot.nix @mweinelt
# Web servers
/doc/builders/packages/nginx.section.md @raitobezarius
/pkgs/servers/http/nginx/ @raitobezarius
/nixos/modules/services/web-servers/nginx/ @raitobezarius
# Dhall
/pkgs/development/dhall-modules @Gabriella439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriella439 @Profpatsch @ehmry
@@ -226,11 +231,6 @@ pkgs/development/python-modules/buildcatrust/ @ajs124 @lukegb @mweinelt
# VsCode Extensions
/pkgs/applications/editors/vscode/extensions @jonringer
# Prometheus exporter modules and tests
/nixos/modules/services/monitoring/prometheus/exporters.nix @WilliButz
/nixos/modules/services/monitoring/prometheus/exporters.xml @WilliButz
/nixos/tests/prometheus-exporters.nix @WilliButz
# PHP interpreter, packages, extensions, tests and documentation
/doc/languages-frameworks/php.section.md @aanderse @drupol @etu @globin @ma27 @talyz
/nixos/tests/php @aanderse @drupol @etu @globin @ma27 @talyz
@@ -308,3 +308,6 @@ nixos/lib/make-single-disk-zfs-image.nix @raitobezarius
nixos/lib/make-multi-disk-zfs-image.nix @raitobezarius
nixos/modules/tasks/filesystems/zfs.nix @raitobezarius
nixos/tests/zfs.nix @raitobezarius
# Linux Kernel
pkgs/os-specific/linux/kernel/manual-config.nix @amjoseph-nixpkgs


@@ -1,11 +1,11 @@
-###### Description of changes
+## Description of changes
<!--
For package updates please link to a changelog or describe changes, this helps your fellow maintainers discover breaking updates.
For new packages please briefly describe the package or provide a link to its homepage.
-->
-###### Things done
+## Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->


@@ -9,6 +9,7 @@
outputs/
result-*
result
repl-result-*
!pkgs/development/python-modules/result
/doc/NEWS.html
/doc/NEWS.txt


@@ -34,5 +34,7 @@ The `ibus-engines.typing-booster` package contains a program named `emoji-picker
On NixOS, it can be installed using the following expression:
```nix
-{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }
+{ pkgs, ... }: {
+  fonts.packages = with pkgs; [ noto-fonts-emoji ];
+}
```

third_party/nixpkgs/doc/common.nix vendored Normal file

@@ -0,0 +1,4 @@
{
outputPath = "share/doc/nixpkgs";
indexPath = "manual.html";
}


@@ -456,7 +456,7 @@ In the file `pkgs/top-level/all-packages.nix` you can find fetch helpers, these
owner = "NixOS";
repo = "nix";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
-hash = "ha256-7D4m+saJjbSFP5hOwpQq2FGR2rr+psQMTcyb1ZvtXsQ=";
+hash = "sha256-7D4m+saJjbSFP5hOwpQq2FGR2rr+psQMTcyb1ZvtXsQ=";
}
```


@@ -1,28 +1,24 @@
# Contributing to this documentation {#chap-contributing}
-The sources of the Nixpkgs manual are in the [doc](https://github.com/NixOS/nixpkgs/tree/master/doc) subdirectory of the Nixpkgs repository. The manual is still partially written in DocBook but it is progressively being converted to [Markdown](#sec-contributing-markup).
+The sources of the Nixpkgs manual are in the [doc](https://github.com/NixOS/nixpkgs/tree/master/doc) subdirectory of the Nixpkgs repository.
-You can quickly check your edits with `make`:
+You can quickly check your edits with `nix-build`:
```ShellSession
-$ cd /path/to/nixpkgs/doc
-$ nix-shell
-[nix-shell]$ make
-```
-If you experience problems, run `make debug` to help understand the docbook errors.
-After making modifications to the manual, it's important to build it before committing. You can do that as follows:
-```ShellSession
-$ cd /path/to/nixpkgs/doc
-$ nix-shell
-[nix-shell]$ make clean
-[nix-shell]$ nix-build .
+$ cd /path/to/nixpkgs
+$ nix-build doc
```
If the build succeeds, the manual will be in `./result/share/doc/nixpkgs/manual.html`.
+## devmode {#sec-contributing-devmode}
+The shell in the manual source directory makes available a command, `devmode`.
+It is a daemon that:
+1. watches the manual's sources for changes and rebuilds when they occur
+2. serves the manual over HTTP, injecting a script that triggers a reload on changes
+3. opens the manual in the default browser
## Syntax {#sec-contributing-markup}
As per [RFC 0072](https://github.com/NixOS/rfcs/pull/72), all new documentation content should be written in [CommonMark](https://commonmark.org/) Markdown dialect.
@@ -114,5 +110,3 @@ Additional syntax extensions are available, all of which can be used in NixOS op
>
> watermelon
> : green fruit with red flesh
-For contributing to the legacy parts, please see [DocBook: The Definitive Guide](https://tdg.docbook.org/) or the [DocBook rocks! primer](https://web.archive.org/web/20200816233747/https://docbook.rocks/).


@@ -3,6 +3,8 @@ let
inherit (pkgs) lib;
inherit (lib) hasPrefix removePrefix;
common = import ./common.nix;
lib-docs = import ./doc-support/lib-function-docs.nix {
inherit pkgs nixpkgs;
libsets = [
@@ -132,15 +134,15 @@ in pkgs.stdenv.mkDerivation {
'';
installPhase = ''
-dest="$out/share/doc/nixpkgs"
+dest="$out/${common.outputPath}"
mkdir -p "$(dirname "$dest")"
mv out "$dest"
-mv "$dest/index.html" "$dest/manual.html"
+mv "$dest/index.html" "$dest/${common.indexPath}"
cp ${epub} "$dest/nixpkgs-manual.epub"
mkdir -p $out/nix-support/
-echo "doc manual $dest manual.html" >> $out/nix-support/hydra-build-products
+echo "doc manual $dest ${common.indexPath}" >> $out/nix-support/hydra-build-products
echo "doc manual $dest nixpkgs-manual.epub" >> $out/nix-support/hydra-build-products
'';
}


@@ -29,5 +29,6 @@ tetex-tex-live.section.md
unzip.section.md
validatePkgConfig.section.md
waf.section.md
zig.section.md
xcbuild.section.md
```


@@ -0,0 +1,59 @@
# zigHook {#zighook}
[Zig](https://ziglang.org/) is a general-purpose programming language and toolchain for maintaining robust, optimal and reusable software.
In Nixpkgs, `zigHook` overrides the default build, check and install phases.
## Example code snippet {#example-code-snippet}
```nix
{ lib
, stdenv
, zigHook
}:
stdenv.mkDerivation {
# . . .
nativeBuildInputs = [
zigHook
];
zigBuildFlags = [ "-Dman-pages=true" ];
dontUseZigCheck = true;
# . . .
}
```
## Variables controlling zigHook {#variables-controlling-zighook}
### `dontUseZigBuild` {#dontUseZigBuild}
Disables using `zigBuildPhase`.
### `zigBuildFlags` {#zigBuildFlags}
Controls the flags passed to the build phase.
### `dontUseZigCheck` {#dontUseZigCheck}
Disables using `zigCheckPhase`.
### `zigCheckFlags` {#zigCheckFlags}
Controls the flags passed to the check phase.
### `dontUseZigInstall` {#dontUseZigInstall}
Disables using `zigInstallPhase`.
### `zigInstallFlags` {#zigInstallFlags}
Controls the flags passed to the install phase.
### Variables honored by zigHook {#variablesHonoredByZigHook}
- `prefixKey`
- `dontAddPrefix`


@@ -12,8 +12,11 @@ compatible are available as well. For example, there can be a
To use one or more CUDA packages in an expression, give the expression a `cudaPackages` parameter, and in case CUDA is optional
```nix
-cudaSupport ? false
-cudaPackages ? {}
+{ config
+, cudaSupport ? config.cudaSupport
+, cudaPackages ? { }
+, ...
+}:
```
When using `callPackage`, you can choose to pass in a different variant, e.g.
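A minimal sketch of such a call-site override (the package name `mypkg` and the chosen variant are assumptions for illustration, not from this diff):

```nix
# Illustrative sketch: pass a specific CUDA package set when calling the package.
# `mypkg` is a hypothetical package; the default would otherwise come from `config.cudaSupport`.
mypkg = callPackage ./mypkg.nix {
  cudaSupport = true;
  cudaPackages = cudaPackages_11_8;
};
```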


@@ -20,7 +20,7 @@ In the following is an example expression using `buildGoModule`, the following a
To obtain the actual hash, set `vendorHash = lib.fakeSha256;` and run the build ([more details here](#sec-source-hashes)).
- `proxyVendor`: Fetches (go mod download) and proxies the vendor directory. This is useful if your code depends on c code and go mod tidy does not include the needed sources to build or if any dependency has case-insensitive conflicts which will produce platform-dependent `vendorHash` checksums.
-- `modPostBuild`: Shell commands to run after the build of the go-modules executes `go mod vendor`, and before calculating fixed output derivation's `vendorHash` (or `vendorSha256`). Note that if you change this attribute, you need to update `vendorHash` (or `vendorSha256`) attribute.
+- `modPostBuild`: Shell commands to run after the build of the goModules executes `go mod vendor`, and before calculating fixed output derivation's `vendorHash` (or `vendorSha256`). Note that if you change this attribute, you need to update `vendorHash` (or `vendorSha256`) attribute.
```nix
pet = buildGoModule rec {
@@ -115,7 +115,7 @@ done
## Attributes used by the builders {#ssec-go-common-attributes}
-Many attributes [controlling the build phase](#variables-controlling-the-build-phase) are respected by both `buildGoModule` and `buildGoPackage`. Note that `buildGoModule` reads the following attributes also when building the `vendor/` go-modules fixed output derivation as well:
+Many attributes [controlling the build phase](#variables-controlling-the-build-phase) are respected by both `buildGoModule` and `buildGoPackage`. Note that `buildGoModule` reads the following attributes also when building the `vendor/` goModules fixed output derivation as well:
- [`sourceRoot`](#var-stdenv-sourceRoot)
- [`prePatch`](#var-stdenv-prePatch)


@@ -4,6 +4,87 @@ Maven is a well-known build tool for the Java ecosystem however it has some chal
The following provides a list of common patterns with how to package a Maven project (or any JVM language that can export to Maven) as a Nix package.
## Building a package using `maven.buildMavenPackage` {#maven-buildmavenpackage}
Consider the following package:
```nix
{ lib, fetchFromGitHub, jre, makeWrapper, maven }:
maven.buildMavenPackage rec {
pname = "jd-cli";
version = "1.2.1";
src = fetchFromGitHub {
owner = "intoolswetrust";
repo = pname;
rev = "${pname}-${version}";
hash = "sha256-rRttA5H0A0c44loBzbKH7Waoted3IsOgxGCD2VM0U/Q=";
};
mvnHash = "sha256-kLpjMj05uC94/5vGMwMlFzLKNFOKeyNvq/vmB6pHTAo=";
nativeBuildInputs = [ makeWrapper ];
installPhase = ''
mkdir -p $out/bin $out/share/jd-cli
install -Dm644 jd-cli/target/jd-cli.jar $out/share/jd-cli
makeWrapper ${jre}/bin/java $out/bin/jd-cli \
--add-flags "-jar $out/share/jd-cli/jd-cli.jar"
'';
meta = with lib; {
description = "Simple command line wrapper around JD Core Java Decompiler project";
homepage = "https://github.com/intoolswetrust/jd-cli";
license = licenses.gpl3Plus;
maintainers = with maintainers; [ majiir ];
};
}
```
This package calls `maven.buildMavenPackage` to do its work. The primary difference from `stdenv.mkDerivation` is the `mvnHash` variable, which is a hash of all of the Maven dependencies.
::: {.tip}
After setting `maven.buildMavenPackage`, we then do standard Java `.jar` installation by saving the `.jar` to `$out/share/java` and then making a wrapper which allows executing that file; see [](#sec-language-java) for additional generic information about packaging Java applications.
:::
### Stable Maven plugins {#stable-maven-plugins}
Maven defines default versions for its core plugins, e.g. `maven-compiler-plugin`. If your project does not override these versions, an upgrade of Maven will change the version of the used plugins, and therefore the derivation and hash.
When `maven` is upgraded, `mvnHash` for the derivation must be updated as well: otherwise, the project will simply be built on the derivation of old plugins, and fail because the requested plugins are missing.
This clearly prevents automatic upgrades of Maven: a manual effort must be made throughout nixpkgs by any maintainer wishing to push the upgrades.
To make sure that your package does not add extra manual effort when upgrading Maven, explicitly define versions for all plugins. You can check if this is the case by adding the following plugin to your (parent) POM:
```xml
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-enforcer-plugin</artifactId>
<version>3.3.0</version>
<executions>
<execution>
<id>enforce-plugin-versions</id>
<goals>
<goal>enforce</goal>
</goals>
<configuration>
<rules>
<requirePluginVersions />
</rules>
</configuration>
</execution>
</executions>
</plugin>
```
## Manually using `mvn2nix` {#maven-mvn2nix}
::: {.warning}
This way is no longer recommended; see [](#maven-buildmavenpackage) for the simpler and preferred way.
:::
For the purposes of this example let's consider a very basic Maven project with the following `pom.xml` with a single dependency on [emoji-java](https://github.com/vdurmont/emoji-java).
```xml
@@ -41,14 +122,11 @@ public class Main {
}
```
-You find this demo project at https://github.com/fzakaria/nixos-maven-example
+You find this demo project at [https://github.com/fzakaria/nixos-maven-example](https://github.com/fzakaria/nixos-maven-example).
-## Solving for dependencies {#solving-for-dependencies}
+### Solving for dependencies {#solving-for-dependencies}
-### buildMaven with NixOS/mvn2nix-maven-plugin {#buildmaven-with-nixosmvn2nix-maven-plugin}
-> ⚠️ Although `buildMaven` is the "blessed" way within nixpkgs, as of 2020, it hasn't seen much activity in quite a while.
+#### buildMaven with NixOS/mvn2nix-maven-plugin {#buildmaven-with-nixosmvn2nix-maven-plugin}
`buildMaven` is an alternative method that tries to follow similar patterns of other programming languages by generating a lock file. It relies on the maven plugin [mvn2nix-maven-plugin](https://github.com/NixOS/mvn2nix-maven-plugin).
First you generate a `project-info.json` file using the maven plugin.
@@ -105,9 +183,10 @@ The benefit over the _double invocation_ as we will see below, is that the _/nix
│   ├── avalon-framework-4.1.3.jar -> /nix/store/iv5fp3955w3nq28ff9xfz86wvxbiw6n9-avalon-framework-4.1.3.jar
```
-### Double Invocation {#double-invocation}
+#### Double Invocation {#double-invocation}
+::: {.note}
-> ⚠️ This pattern is the simplest but may cause unnecessary rebuilds due to the output hash changing.
+This pattern is the simplest but may cause unnecessary rebuilds due to the output hash changing.
+:::
The double invocation is a _simple_ way to get around the problem that `nix-build` may be sandboxed and have no Internet connectivity.
@@ -115,7 +194,9 @@ It treats the entire Maven repository as a single source to be downloaded, relyi
The first step will be to build the Maven project as a fixed-output derivation in order to collect the Maven repository -- below is an [example](https://github.com/fzakaria/nixos-maven-example/blob/main/double-invocation-repository.nix).
-> Traditionally the Maven repository is at `~/.m2/repository`. We will override this to be the `$out` directory.
+::: {.note}
+Traditionally the Maven repository is at `~/.m2/repository`. We will override this to be the `$out` directory.
+:::
```nix
{ lib, stdenv, maven }:
@@ -147,7 +228,9 @@ stdenv.mkDerivation {
The build will fail, and tell you the expected `outputHash` to place. When you've set the hash, the build will return with a `/nix/store` entry whose contents are the full Maven repository.
-> Some additional files are deleted that would cause the output hash to change potentially on subsequent runs.
+::: {.warning}
+Some additional files are deleted that would potentially cause the output hash to change on subsequent runs.
+:::
```bash
tree $(nix-build --no-out-link double-invocation-repository.nix) | head
@@ -165,40 +248,7 @@ The build will fail, and tell you the expected `outputHash` to place. When you'v
If your package uses _SNAPSHOT_ dependencies or _version ranges_, there is a strong likelihood that over time your output hash will change since the resolved dependencies may change. Hence this method is less recommended than using `buildMaven`.
-#### Stable Maven plugins {#stable-maven-plugins}
-Maven defines default versions for its core plugins, e.g. `maven-compiler-plugin`.
-If your project does not override these versions, an upgrade of Maven will change the version of the used plugins.
-This changes the output of the first invocation and the plugins required by the second invocation.
-However, since a hash is given for the output of the first invocation, the second invocation will simply fail
-because the requested plugins are missing.
-This will prevent automatic upgrades of Maven: the manual fix for this is to change the hash of the first invocation.
-To make sure that your package does not add manual effort when upgrading Maven, explicitly define versions for all
-plugins. You can check if this is the case by adding the following plugin to your (parent) POM:
-```xml
-<plugin>
-<groupId>org.apache.maven.plugins</groupId>
-<artifactId>maven-enforcer-plugin</artifactId>
-<version>3.3.0</version>
-<executions>
-<execution>
-<id>enforce-plugin-versions</id>
-<goals>
-<goal>enforce</goal>
-</goals>
-<configuration>
-<rules>
-<requirePluginVersions />
-</rules>
-</configuration>
-</execution>
-</executions>
-</plugin>
-```
-## Building a JAR {#building-a-jar}
+### Building a JAR {#building-a-jar}
Regardless of which strategy is chosen above, the step to build the derivation is the same.
@@ -224,7 +274,9 @@ in stdenv.mkDerivation rec {
}
```
-> We place the library in `$out/share/java` since JDK package has a _stdenv setup hook_ that adds any JARs in the `share/java` directories of the build inputs to the CLASSPATH environment.
+::: {.tip}
+We place the library in `$out/share/java` since the JDK package has a _stdenv setup hook_ that adds any JARs in the `share/java` directories of the build inputs to the `CLASSPATH` environment variable.
+:::
```bash
tree $(nix-build --no-out-link build-jar.nix)
@@ -236,7 +288,7 @@ in stdenv.mkDerivation rec {
2 directories, 1 file
```
-## Runnable JAR {#runnable-jar}
+### Runnable JAR {#runnable-jar}
The previous example builds a `jar` file but that's not a file one can run.
@@ -248,9 +300,9 @@ We will use the same repository we built above (either _double invocation_ or _b
The following two methods are more suited to Nix than building an [UberJar](https://imagej.net/Uber-JAR) which may be the more traditional approach.
-### CLASSPATH {#classpath}
+#### CLASSPATH {#classpath}
-> This is ideal if you are providing a derivation for _nixpkgs_ and don't want to patch the project's `pom.xml`.
+This method is ideal if you are providing a derivation for _nixpkgs_ and don't want to patch the project's `pom.xml`.
We will read the Maven repository and flatten it to a single list. This list will then be concatenated with the _CLASSPATH_ separator to create the full classpath.
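The flattening step can be sketched roughly as follows. This is an illustrative sketch, not the exact expression from the example repository: `maven-repository` is assumed to be the fixed-output repository derivation built earlier, and the `lib` helpers (`lib.filesystem.listFilesRecursive`, `lib.hasSuffix`, `lib.concatStringsSep`) are real nixpkgs functions.

```nix
# Illustrative sketch: collect every .jar under the repository derivation
# and join the paths with ":" to form a CLASSPATH string.
{ lib, maven-repository }:  # maven-repository: the fixed-output derivation from above (assumption)
let
  jars = builtins.filter
    (path: lib.hasSuffix ".jar" (toString path))
    (lib.filesystem.listFilesRecursive maven-repository);
in
lib.concatStringsSep ":" (map toString jars)
```

Note that listing files inside a derivation's output at evaluation time forces that derivation to be built first (import-from-derivation), which is why the real example wires this up carefully.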
@@ -288,9 +340,9 @@ in stdenv.mkDerivation rec {
}
```
-### MANIFEST file via Maven Plugin {#manifest-file-via-maven-plugin}
+#### MANIFEST file via Maven Plugin {#manifest-file-via-maven-plugin}
-> This is ideal if you are the project owner and want to change your `pom.xml` to set the CLASSPATH within it.
+This method is ideal if you are the project owner and want to change your `pom.xml` to set the CLASSPATH within it.
Augment the `pom.xml` to create a JAR with the following manifest:
@@ -366,8 +418,9 @@ in stdenv.mkDerivation rec {
'';
}
```
+::: {.note}
-> Our script produces a dependency on `jre` rather than `jdk` to restrict the runtime closure necessary to run the application.
+Our script produces a dependency on `jre` rather than `jdk` to restrict the runtime closure necessary to run the application.
+:::
This will give you an executable shell-script that launches your JAR with all the dependencies available.


@@ -1514,11 +1514,11 @@ Note: There is a boolean value `lib.inNixShell` set to `true` if nix-shell is in
### Tools {#tools}
-Packages inside nixpkgs are written by hand. However many tools exist in
-community to help save time. No tool is preferred at the moment.
-- [nixpkgs-pytools](https://github.com/nix-community/nixpkgs-pytools)
-- [poetry2nix](https://github.com/nix-community/poetry2nix)
+Packages inside nixpkgs must use the `buildPythonPackage` or `buildPythonApplication` function directly,
+because we can only provide security support for non-vendored dependencies.
+We recommend [nix-init](https://github.com/nix-community/nix-init) for creating new python packages within nixpkgs,
+as it already prefetches the source, parses dependencies for common formats and prefills most things in `meta`.
### Deterministic builds {#deterministic-builds}


@@ -39,7 +39,7 @@ rustPlatform.buildRustPackage rec {
description = "A fast line-oriented regex search tool, similar to ag and ack";
homepage = "https://github.com/BurntSushi/ripgrep";
license = licenses.unlicense;
-maintainers = [ maintainers.tailhook ];
+maintainers = [];
};
}
```
@@ -558,7 +558,7 @@ buildPythonPackage rec {
hash = "sha256-miW//pnOmww2i6SOGbkrAIdc/JMDT4FJLqdMFojZeoY=";
};
-sourceRoot = "source/bindings/python";
+sourceRoot = "${src.name}/bindings/python";
nativeBuildInputs = [
cargo
@@ -926,7 +926,7 @@ rustPlatform.buildRustPackage rec {
description = "A fast line-oriented regex search tool, similar to ag and ack";
homepage = "https://github.com/BurntSushi/ripgrep";
license = with licenses; [ mit unlicense ];
-maintainers = with maintainers; [ tailhook ];
+maintainers = with maintainers; [];
};
}
```

third_party/nixpkgs/doc/shell.nix vendored Normal file

@@ -0,0 +1,20 @@
let
pkgs = import ../. {
config = {};
overlays = [];
};
common = import ./common.nix;
inherit (common) outputPath indexPath;
web-devmode = import ../pkgs/tools/nix/web-devmode.nix {
inherit pkgs;
buildArgs = "./.";
open = "/${outputPath}/${indexPath}";
};
in
pkgs.mkShell {
packages = [
web-devmode
];
}


@@ -70,7 +70,7 @@ A list of the maintainers of this Nix expression. Maintainers are defined in [`n
### `mainProgram` {#var-meta-mainProgram}
-The name of the main binary for the package. This affects the binary `nix run` executes and falls back to the name of the package. Example: `"rg"`
+The name of the main binary for the package. This affects the binary `nix run` executes. Example: `"rg"`
### `priority` {#var-meta-priority}


@@ -614,14 +614,19 @@ The list of source files or directories to be unpacked or copied. One of these m
##### `sourceRoot` {#var-stdenv-sourceRoot}
-After running `unpackPhase`, the generic builder changes the current directory to the directory created by unpacking the sources. If there are multiple source directories, you should set `sourceRoot` to the name of the intended directory. Set `sourceRoot = ".";` if you use `srcs` and control the unpack phase yourself.
+After unpacking all of `src` and `srcs`, if neither of `sourceRoot` and `setSourceRoot` are set, `unpackPhase` of the generic builder checks that the unpacking produced a single directory and moves the current working directory into it.
-By default the `sourceRoot` is set to `"source"`. If you want to point to a sub-directory inside your project, you therefore need to set `sourceRoot = "source/my-sub-directory"`.
+If `unpackPhase` produces multiple source directories, you should set `sourceRoot` to the name of the intended directory.
+You can also set `sourceRoot = ".";` if you want to control it yourself in a later phase.
+For example, if you want your build to start in a sub-directory inside your sources, and you are using a `fetchzip`-derived `src` (like `fetchFromGitHub` or similar), you need to set `sourceRoot = "${src.name}/my-sub-directory"`.
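A minimal derivation sketching that pattern (the package name, owner and sub-directory here are hypothetical; `lib.fakeHash` is the standard placeholder until the real hash is known):

```nix
# Illustrative sketch: start the build in a sub-directory of a fetchzip-derived source.
{ lib, stdenv, fetchFromGitHub }:
stdenv.mkDerivation rec {
  pname = "example";          # hypothetical package name
  version = "1.0";
  src = fetchFromGitHub {
    owner = "example-owner";  # hypothetical
    repo = pname;
    rev = "v${version}";
    hash = lib.fakeHash;      # replace after the first (failing) build reports the real hash
  };
  # fetchzip-derived sources unpack into a directory named ${src.name},
  # so prefix the sub-directory with it:
  sourceRoot = "${src.name}/my-sub-directory";
}
```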
##### `setSourceRoot` {#var-stdenv-setSourceRoot}
Alternatively to setting `sourceRoot`, you can set `setSourceRoot` to a shell command to be evaluated by the unpack phase after the sources have been unpacked. This command must set `sourceRoot`.
For example, if you are using `fetchurl` on an archive file that gets unpacked into a single directory the name of which changes between package versions, and you want your build to start in its sub-directory, you need to set `setSourceRoot = "sourceRoot=$(echo */my-sub-directory)";`, or in the case of multiple sources, you could use something more specific, like `setSourceRoot = "sourceRoot=$(echo ${pname}-*/my-sub-directory)";`.
##### `preUnpack` {#var-stdenv-preUnpack}
Hook executed at the start of the unpack phase.

third_party/nixpkgs/lib/README.md vendored Normal file

@@ -0,0 +1,73 @@
# Nixpkgs lib
This directory contains the implementation, documentation and tests for the Nixpkgs `lib` library.
## Overview
The evaluation entry point for `lib` is [`default.nix`](default.nix).
This file evaluates to an attribute set containing two separate kinds of attributes:
- Sub-libraries:
  Attribute sets grouping together similar functionality.
  Each sub-library is defined in a separate file usually matching its attribute name.
  Example: `lib.lists` is a sub-library containing list-related functionality such as `lib.lists.take` and `lib.lists.imap0`.
  These are defined in the file [`lists.nix`](lists.nix).
- Aliases:
  Attributes that point to an attribute of the same name in some sub-library.
  Example: `lib.take` is an alias for `lib.lists.take`.
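As a quick illustration of the alias mechanism (a sketch; assumes evaluation from this directory with `nix-instantiate --eval --strict`):

```nix
let
  lib = import ./default.nix;
in {
  # the sub-library attribute and its top-level alias are the same function
  viaSubLibrary = lib.lists.take 2 [ 1 2 3 ];  # [ 1 2 ]
  viaAlias      = lib.take 2 [ 1 2 3 ];        # [ 1 2 ]
}
```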
Most files in this directory are definitions of sub-libraries, but there are a few others:
- [`minver.nix`](minver.nix): A string of the minimum version of Nix that is required to evaluate Nixpkgs.
- [`tests`](tests): Tests, see [Running tests](#running-tests)
  - [`release.nix`](tests/release.nix): A derivation aggregating all tests
  - [`misc.nix`](tests/misc.nix): Evaluation unit tests for most sub-libraries
  - `*.sh`: Bash scripts that run tests for specific sub-libraries
  - All other files in this directory exist to support the tests
- [`systems`](systems): The `lib.systems` sub-library, structured into a directory instead of a file due to its complexity
- [`path`](path): The `lib.path` sub-library, which includes tests as well as a document describing the design goals of `lib.path`
- All other files in this directory are sub-libraries
### Module system
The [module system](https://nixos.org/manual/nixpkgs/#module-system) spans multiple sub-libraries:
- [`modules.nix`](modules.nix): `lib.modules` for the core functions and anything not relating to option definitions
- [`options.nix`](options.nix): `lib.options` for anything relating to option definitions
- [`types.nix`](types.nix): `lib.types` for module system types
## Reference documentation
Reference documentation for library functions is written above each function as a multi-line comment.
These comments are processed using [nixdoc](https://github.com/nix-community/nixdoc) and [rendered in the Nixpkgs manual](https://nixos.org/manual/nixpkgs/stable/#chap-functions).
The nixdoc README describes the [comment format](https://github.com/nix-community/nixdoc#comment-format).
See the [chapter on contributing to the Nixpkgs manual](https://nixos.org/manual/nixpkgs/#chap-contributing) for how to build the manual.
## Running tests
All library tests can be run by building the derivation in [`tests/release.nix`](tests/release.nix):
```bash
nix-build tests/release.nix
```
Some commands for quicker iteration over parts of the test suite are also available:
```bash
# Run all evaluation unit tests in tests/misc.nix
# if the resulting list is empty, all tests passed
nix-instantiate --eval --strict tests/misc.nix
# Run the module system tests
tests/modules.sh
# Run the lib.sources tests
tests/sources.sh
# Run the lib.filesystem tests
tests/filesystem.sh
# Run the lib.path property tests
path/tests/prop.sh
```


@ -738,6 +738,42 @@ rec {
sets: sets:
zipAttrsWith (name: values: values) sets; zipAttrsWith (name: values: values) sets;
/*
Merge a list of attribute sets together using the `//` operator.
In case of duplicate attributes, values from later list elements take precedence over earlier ones.
The result is the same as `foldl mergeAttrs { }`, but the performance is better for large inputs.
For n list elements, each with an attribute set containing m unique attributes, the complexity of this operation is O(nm log n).
Type:
mergeAttrsList :: [ Attrs ] -> Attrs
Example:
mergeAttrsList [ { a = 0; b = 1; } { c = 2; d = 3; } ]
=> { a = 0; b = 1; c = 2; d = 3; }
mergeAttrsList [ { a = 0; } { a = 1; } ]
=> { a = 1; }
*/
mergeAttrsList = list:
let
# `binaryMerge start end` merges the elements at indices `index` of `list` such that `start <= index < end`
# Type: Int -> Int -> Attrs
binaryMerge = start: end:
# assert start < end; # Invariant
if end - start >= 2 then
# If there's at least 2 elements, split the range in two, recurse on each part and merge the result
# The invariant is satisfied because each half will have at least 1 element
binaryMerge start (start + (end - start) / 2)
// binaryMerge (start + (end - start) / 2) end
else
# Otherwise there will be exactly 1 element due to the invariant, in which case we just return it directly
elemAt list start;
in
if list == [ ] then
# Calling binaryMerge as below would not satisfy its invariant
{ }
else
binaryMerge 0 (length list);
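The divide-and-conquer strategy above can be sketched in Python. This is a hedged illustration of the algorithm only, not the Nix implementation; `dict.update` plays the role of `//`, with later list elements winning on collisions:

```python
def merge_attrs_list(dicts):
    # Merge a list of dicts; later entries take precedence on key
    # collisions, mirroring how `//` behaves in mergeAttrsList.
    def binary_merge(start, end):
        # Invariant: start < end
        if end - start >= 2:
            mid = start + (end - start) // 2
            left = binary_merge(start, mid)
            left.update(binary_merge(mid, end))  # right half takes precedence
            return left
        # Exactly one element remains due to the invariant
        return dict(dicts[start])
    return {} if not dicts else binary_merge(0, len(dicts))
```

Each of the n input sets is merged along O(log n) levels of the split tree, which is where the O(nm log n) bound comes from.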
/* Does the same as the update operator '//' except that attributes are /* Does the same as the update operator '//' except that attributes are
merged until the given predicate is verified. The predicate should merged until the given predicate is verified. The predicate should


@ -81,9 +81,10 @@ rec {
*/ */
toKeyValue = { toKeyValue = {
mkKeyValue ? mkKeyValueDefault {} "=", mkKeyValue ? mkKeyValueDefault {} "=",
listsAsDuplicateKeys ? false listsAsDuplicateKeys ? false,
indent ? ""
}: }:
let mkLine = k: v: mkKeyValue k v + "\n"; let mkLine = k: v: indent + mkKeyValue k v + "\n";
mkLines = if listsAsDuplicateKeys mkLines = if listsAsDuplicateKeys
then k: v: map (mkLine k) (if lib.isList v then v else [v]) then k: v: map (mkLine k) (if lib.isList v then v else [v])
else k: v: [ (mkLine k v) ]; else k: v: [ (mkLine k v) ];


@ -657,6 +657,13 @@ in mkLicense lset) ({
redistributable = true; redistributable = true;
}; };
hl3 = {
fullName = "Hippocratic License v3.0";
url = "https://firstdonoharm.dev/version/3/0/core.txt";
free = false;
redistributable = true;
};
issl = { issl = {
fullName = "Intel Simplified Software License"; fullName = "Intel Simplified Software License";
url = "https://software.intel.com/en-us/license/intel-simplified-software-license"; url = "https://software.intel.com/en-us/license/intel-simplified-software-license";


@ -3,7 +3,7 @@
{ lib }: { lib }:
let let
inherit (lib.strings) toInt; inherit (lib.strings) toInt;
inherit (lib.trivial) compare min; inherit (lib.trivial) compare min id;
inherit (lib.attrsets) mapAttrs; inherit (lib.attrsets) mapAttrs;
in in
rec { rec {
@ -180,18 +180,18 @@ rec {
else if len != 1 then multiple else if len != 1 then multiple
else head found; else head found;
/* Find the first element in the list matching the specified /* Find the first index in the list matching the specified
predicate or return `default` if no such element exists. predicate or return `default` if no such element exists.
Type: findFirst :: (a -> bool) -> a -> [a] -> a Type: findFirstIndex :: (a -> Bool) -> b -> [a] -> (Int | b)
Example: Example:
findFirst (x: x > 3) 7 [ 1 6 4 ] findFirstIndex (x: x > 3) null [ 0 6 4 ]
=> 6 => 1
findFirst (x: x > 9) 7 [ 1 6 4 ] findFirstIndex (x: x > 9) null [ 0 6 4 ]
=> 7 => null
*/ */
findFirst = findFirstIndex =
# Predicate # Predicate
pred: pred:
# Default value to return # Default value to return
@ -229,7 +229,33 @@ rec {
if resultIndex < 0 then if resultIndex < 0 then
default default
else else
elemAt list resultIndex; resultIndex;
/* Find the first element in the list matching the specified
predicate or return `default` if no such element exists.
Type: findFirst :: (a -> bool) -> a -> [a] -> a
Example:
findFirst (x: x > 3) 7 [ 1 6 4 ]
=> 6
findFirst (x: x > 9) 7 [ 1 6 4 ]
=> 7
*/
findFirst =
# Predicate
pred:
# Default value to return
default:
# Input list
list:
let
index = findFirstIndex pred null list;
in
if index == null then
default
else
elemAt list index;
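The relationship between the two functions — `findFirst` as a thin wrapper over `findFirstIndex` — can be sketched in Python (an illustrative sketch, not the Nix code):

```python
def find_first_index(pred, default, xs):
    # Index of the first element satisfying pred, or default if none does
    return next((i for i, x in enumerate(xs) if pred(x)), default)

def find_first(pred, default, xs):
    # Look up the index first, then return the element at that index
    i = find_first_index(pred, None, xs)
    return default if i is None else xs[i]
```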
/* Return true if function `pred` returns true for at least one /* Return true if function `pred` returns true for at least one
element of `list`. element of `list`.
@ -637,6 +663,32 @@ rec {
else if start + count > len then len - start else if start + count > len then len - start
else count); else count);
/* The common prefix of two lists.
Type: commonPrefix :: [a] -> [a] -> [a]
Example:
commonPrefix [ 1 2 3 4 5 6 ] [ 1 2 4 8 ]
=> [ 1 2 ]
commonPrefix [ 1 2 3 ] [ 1 2 3 4 5 ]
=> [ 1 2 3 ]
commonPrefix [ 1 2 3 ] [ 4 5 6 ]
=> [ ]
*/
commonPrefix =
list1:
list2:
let
# Zip the lists together into a list of booleans whether each element matches
matchings = zipListsWith (fst: snd: fst != snd) list1 list2;
# Find the first index where the elements don't match,
# which will then also be the length of the common prefix.
# If all elements match, we fall back to the length of the zipped list,
# which is the same as the length of the smaller list.
commonPrefixLength = findFirstIndex id (length matchings) matchings;
in
take commonPrefixLength list1;
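The trick of reducing `commonPrefix` to a first-mismatch search can be sketched in Python (illustration only; the Nix version relies on `findFirstIndex` being non-recursive to avoid stack overflows):

```python
def common_prefix(l1, l2):
    # Pairwise comparison; zip stops at the shorter list
    mismatches = [a != b for a, b in zip(l1, l2)]
    # The first mismatching index is the prefix length; if everything
    # matched, fall back to the zipped length (the shorter list's length)
    n = next((i for i, m in enumerate(mismatches) if m), len(mismatches))
    return l1[:n]
```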
/* Return the last element of a list. /* Return the last element of a list.
This function throws an error if the list is empty. This function throws an error if the list is empty.


@ -132,10 +132,9 @@ rec {
{ shortName = licstr; } { shortName = licstr; }
); );
/* Get the path to the main program of a derivation with either /* Get the path to the main program of a package based on meta.mainProgram
meta.mainProgram or pname or name
Type: getExe :: derivation -> string Type: getExe :: package -> string
Example: Example:
getExe pkgs.hello getExe pkgs.hello
@ -144,5 +143,9 @@ rec {
=> "/nix/store/am9ml4f4ywvivxnkiaqwr0hyxka1xjsf-mustache-go-1.3.0/bin/mustache" => "/nix/store/am9ml4f4ywvivxnkiaqwr0hyxka1xjsf-mustache-go-1.3.0/bin/mustache"
*/ */
getExe = x: getExe = x:
"${lib.getBin x}/bin/${x.meta.mainProgram or (lib.getName x)}"; "${lib.getBin x}/bin/${x.meta.mainProgram or (
# This could be turned into an error when 23.05 is at end of life
lib.warn "getExe: Package ${lib.strings.escapeNixIdentifier x.meta.name or x.pname or x.name} does not have the meta.mainProgram attribute. We'll assume that the main program has the same name for now, but this behavior is deprecated, because it leads to surprising errors when the assumption does not hold. If the package has a main program, please set `meta.mainProgram` in its definition to make this warning go away. Otherwise, if the package does not have a main program, or if you don't control its definition, specify the full path to the program, such as \"\${lib.getBin foo}/bin/bar\"."
lib.getName x
)}";
} }


@ -639,7 +639,7 @@ let
unmatchedDefns = []; unmatchedDefns = [];
} }
else if optionDecls != [] then else if optionDecls != [] then
if all (x: x.options.type.name == "submodule") optionDecls if all (x: x.options.type.name or null == "submodule") optionDecls
# Raw options can only be merged into submodules. Merging into # Raw options can only be merged into submodules. Merging into
# attrsets might be nice, but ambiguous. Suppose we have # attrsets might be nice, but ambiguous. Suppose we have
# attrset as a `attrsOf submodule`. User declares option # attrset as a `attrsOf submodule`. User declares option


@ -187,6 +187,27 @@ Decision: All functions remove trailing slashes in their results.
</details> </details>
### Prefer returning subpaths over components
[subpath-preference]: #prefer-returning-subpaths-over-components
Observing: Functions could return subpaths or lists of path component strings.
Considering: Subpaths are used as inputs for some functions. Using them for outputs, too, makes the library more consistent and composable.
Decision: Subpaths should be preferred over lists of path component strings.
<details>
<summary>Arguments</summary>
- (+) It is consistent with functions accepting subpaths, making the library more composable
- (-) It is less efficient when the components are needed, because after creating the normalised subpath string, it will have to be parsed into components again
- (+) If necessary, we can still make it faster by adding builtins to Nix
- (+) Alternatively, if necessary, versions of these functions that return components could still be introduced later.
- (+) It makes the path library simpler because there are only two types (paths and subpaths). Only `lib.path.subpath.components` can be used to get a list of components.
And once we have a list of component strings, `lib.lists` and `lib.strings` can be used to operate on them.
For completeness, `lib.path.subpath.join` allows converting the list of components back to a subpath.
</details>
## Other implementations and references ## Other implementations and references
- [Rust](https://doc.rust-lang.org/std/path/struct.Path.html) - [Rust](https://doc.rust-lang.org/std/path/struct.Path.html)


@ -20,6 +20,7 @@ let
concatMap concatMap
foldl' foldl'
take take
drop
; ;
inherit (lib.strings) inherit (lib.strings)
@ -217,9 +218,110 @@ in /* No rec! Add dependencies on this file at the top. */ {
second argument: "${toString path2}" with root "${toString path2Deconstructed.root}"''; second argument: "${toString path2}" with root "${toString path2Deconstructed.root}"'';
take (length path1Deconstructed.components) path2Deconstructed.components == path1Deconstructed.components; take (length path1Deconstructed.components) path2Deconstructed.components == path1Deconstructed.components;
/*
Remove the first path as a component-wise prefix from the second path.
The result is a normalised subpath string, see `lib.path.subpath.normalise`.
Laws:
- Inverts `append` for normalised subpaths:
removePrefix p (append p s) == subpath.normalise s
Type:
removePrefix :: Path -> Path -> String
Example:
removePrefix /foo /foo/bar/baz
=> "./bar/baz"
removePrefix /foo /foo
=> "./."
removePrefix /foo/bar /foo
=> <error>
removePrefix /. /foo
=> "./foo"
*/
removePrefix =
path1:
assert assertMsg
(isPath path1)
"lib.path.removePrefix: First argument is of type ${typeOf path1}, but a path was expected.";
let
path1Deconstructed = deconstructPath path1;
path1Length = length path1Deconstructed.components;
in
path2:
assert assertMsg
(isPath path2)
"lib.path.removePrefix: Second argument is of type ${typeOf path2}, but a path was expected.";
let
path2Deconstructed = deconstructPath path2;
success = take path1Length path2Deconstructed.components == path1Deconstructed.components;
components =
if success then
drop path1Length path2Deconstructed.components
else
throw ''
lib.path.removePrefix: The first path argument "${toString path1}" is not a component-wise prefix of the second path argument "${toString path2}".'';
in
assert assertMsg
(path1Deconstructed.root == path2Deconstructed.root) ''
lib.path.removePrefix: Filesystem roots must be the same for both paths, but paths with different roots were given:
first argument: "${toString path1}" with root "${toString path1Deconstructed.root}"
second argument: "${toString path2}" with root "${toString path2Deconstructed.root}"'';
joinRelPath components;
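On plain component lists, the core of `removePrefix` amounts to the following Python sketch (hypothetical names; the real function additionally validates that both arguments are paths with the same filesystem root):

```python
def remove_prefix(prefix_components, path_components):
    # The first path must match the second component-wise at the front
    n = len(prefix_components)
    if path_components[:n] != prefix_components:
        raise ValueError(
            "first path is not a component-wise prefix of the second")
    rest = path_components[n:]
    # Render the remainder as a normalised subpath string
    return "./" + "/".join(rest) if rest else "./."
```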
/*
Split the filesystem root from a [path](https://nixos.org/manual/nix/stable/language/values.html#type-path).
The result is an attribute set with these attributes:
- `root`: The filesystem root of the path, meaning that this directory has no parent directory.
- `subpath`: The [normalised subpath string](#function-library-lib.path.subpath.normalise) that when [appended](#function-library-lib.path.append) to `root` returns the original path.
Laws:
- [Appending](#function-library-lib.path.append) the `root` and `subpath` gives the original path:
p ==
append
(splitRoot p).root
(splitRoot p).subpath
- Trying to get the parent directory of `root` using [`dirOf`](https://nixos.org/manual/nix/stable/language/builtins.html#builtins-dirOf) returns `root` itself:
dirOf (splitRoot p).root == (splitRoot p).root
Type:
splitRoot :: Path -> { root :: Path, subpath :: String }
Example:
splitRoot /foo/bar
=> { root = /.; subpath = "./foo/bar"; }
splitRoot /.
=> { root = /.; subpath = "./."; }
# Nix neutralises `..` path components for all path values automatically
splitRoot /foo/../bar
=> { root = /.; subpath = "./bar"; }
splitRoot "/foo/bar"
=> <error>
*/
splitRoot = path:
assert assertMsg
(isPath path)
"lib.path.splitRoot: Argument is of type ${typeOf path}, but a path was expected";
let
deconstructed = deconstructPath path;
in {
root = deconstructed.root;
subpath = joinRelPath deconstructed.components;
};
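For POSIX-style path strings, the split can be sketched in Python (illustrative only; Nix path values, unlike strings, neutralise `..` components before `splitRoot` ever sees them, and the real function rejects strings outright):

```python
def split_root(path):
    # Assumes an absolute POSIX path string as input
    if not path.startswith("/"):
        raise ValueError("an absolute path was expected")
    # Drop empty components (from "//" or trailing "/") and "." components
    components = [c for c in path.split("/") if c not in ("", ".")]
    subpath = "./" + "/".join(components) if components else "./."
    return {"root": "/", "subpath": subpath}
```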
/* Whether a value is a valid subpath string. /* Whether a value is a valid subpath string.
A subpath string points to a specific file or directory within an absolute base directory.
It is a stricter form of a relative path that excludes `..` components, since those could escape the base directory.
- The value is a string - The value is a string
- The string is not empty - The string is not empty
@ -336,6 +438,37 @@ in /* No rec! Add dependencies on this file at the top. */ {
${subpathInvalidReason path}'' ${subpathInvalidReason path}''
) 0 subpaths; ) 0 subpaths;
/*
Split [a subpath](#function-library-lib.path.subpath.isValid) into its path component strings.
Throw an error if the subpath isn't valid.
Note that the returned path components are also valid subpath strings, though they are intentionally not [normalised](#function-library-lib.path.subpath.normalise).
Laws:
- Splitting a subpath into components and [joining](#function-library-lib.path.subpath.join) the components gives the same subpath but [normalised](#function-library-lib.path.subpath.normalise):
subpath.join (subpath.components s) == subpath.normalise s
Type:
subpath.components :: String -> [ String ]
Example:
subpath.components "."
=> [ ]
subpath.components "./foo//bar/./baz/"
=> [ "foo" "bar" "baz" ]
subpath.components "/foo"
=> <error>
*/
subpath.components =
subpath:
assert assertMsg (isValid subpath) ''
lib.path.subpath.components: Argument is not a valid subpath string:
${subpathInvalidReason subpath}'';
splitRelPath subpath;
/* Normalise a subpath. Throw an error if the subpath isn't valid, see /* Normalise a subpath. Throw an error if the subpath isn't valid, see
`lib.path.subpath.isValid` `lib.path.subpath.isValid`


@ -1,9 +1,12 @@
#!/usr/bin/env bash
# Property tests for lib/path/default.nix
#
# It generates random path-like strings and runs the functions on
# them, checking that the expected laws of the functions hold
# Run:
# [nixpkgs]$ lib/path/tests/prop.sh
# or:
# [nixpkgs]$ nix-build lib/tests/release.nix
set -euo pipefail
shopt -s inherit_errexit


@ -3,7 +3,7 @@
{ libpath }: { libpath }:
let let
lib = import libpath; lib = import libpath;
inherit (lib.path) hasPrefix append subpath; inherit (lib.path) hasPrefix removePrefix append splitRoot subpath;
cases = lib.runTests { cases = lib.runTests {
# Test examples from the lib.path.append documentation # Test examples from the lib.path.append documentation
@ -57,6 +57,40 @@ let
expected = true; expected = true;
}; };
testRemovePrefixExample1 = {
expr = removePrefix /foo /foo/bar/baz;
expected = "./bar/baz";
};
testRemovePrefixExample2 = {
expr = removePrefix /foo /foo;
expected = "./.";
};
testRemovePrefixExample3 = {
expr = (builtins.tryEval (removePrefix /foo/bar /foo)).success;
expected = false;
};
testRemovePrefixExample4 = {
expr = removePrefix /. /foo;
expected = "./foo";
};
testSplitRootExample1 = {
expr = splitRoot /foo/bar;
expected = { root = /.; subpath = "./foo/bar"; };
};
testSplitRootExample2 = {
expr = splitRoot /.;
expected = { root = /.; subpath = "./."; };
};
testSplitRootExample3 = {
expr = splitRoot /foo/../bar;
expected = { root = /.; subpath = "./bar"; };
};
testSplitRootExample4 = {
expr = (builtins.tryEval (splitRoot "/foo/bar")).success;
expected = false;
};
# Test examples from the lib.path.subpath.isValid documentation # Test examples from the lib.path.subpath.isValid documentation
testSubpathIsValidExample1 = { testSubpathIsValidExample1 = {
expr = subpath.isValid null; expr = subpath.isValid null;
@ -204,6 +238,19 @@ let
expr = (builtins.tryEval (subpath.normalise "..")).success; expr = (builtins.tryEval (subpath.normalise "..")).success;
expected = false; expected = false;
}; };
testSubpathComponentsExample1 = {
expr = subpath.components ".";
expected = [ ];
};
testSubpathComponentsExample2 = {
expr = subpath.components "./foo//bar/./baz/";
expected = [ "foo" "bar" "baz" ];
};
testSubpathComponentsExample3 = {
expr = (builtins.tryEval (subpath.components "/foo")).success;
expected = false;
};
}; };
in in
if cases == [] then "Unit tests successful" if cases == [] then "Unit tests successful"


@ -85,17 +85,18 @@ rec {
# is why we use the more obscure "bfd" and not "binutils" for this # is why we use the more obscure "bfd" and not "binutils" for this
# choice. # choice.
else "bfd"; else "bfd";
extensions = rec { extensions = lib.optionalAttrs final.hasSharedLibraries {
sharedLibrary = assert final.hasSharedLibraries; sharedLibrary =
/**/ if final.isDarwin then ".dylib" if final.isDarwin then ".dylib"
else if final.isWindows then ".dll" else if final.isWindows then ".dll"
else ".so"; else ".so";
} // {
staticLibrary = staticLibrary =
/**/ if final.isWindows then ".lib" /**/ if final.isWindows then ".lib"
else ".a"; else ".a";
library = library =
/**/ if final.isStatic then staticLibrary /**/ if final.isStatic then final.extensions.staticLibrary
else sharedLibrary; else final.extensions.sharedLibrary;
executable = executable =
/**/ if final.isWindows then ".exe" /**/ if final.isWindows then ".exe"
else ""; else "";


@ -213,7 +213,6 @@ rec {
bluefield2 = { bluefield2 = {
gcc = { gcc = {
arch = "armv8-a+fp+simd+crc+crypto"; arch = "armv8-a+fp+simd+crc+crypto";
cpu = "cortex-a72";
}; };
}; };


@ -1,6 +1,18 @@
/*
Nix evaluation tests for various lib functions.
Since these tests are implemented with Nix evaluation, error checking is limited to what `builtins.tryEval` can detect, which is `throw`'s and `abort`'s, without error messages.
If you need to test error messages or more complex evaluations, see ./modules.sh, ./sources.sh or ./filesystem.sh as examples.
To run these tests:
[nixpkgs]$ nix-instantiate --eval --strict lib/tests/misc.nix
If the resulting list is empty, all tests passed.
Alternatively, to run all `lib` tests:
[nixpkgs]$ nix-build lib/tests/release.nix
*/
with import ../default.nix; with import ../default.nix;
let let
@ -488,6 +500,39 @@ runTests {
expected = { a = [ 2 3 ]; b = [7]; c = [8];}; expected = { a = [ 2 3 ]; b = [7]; c = [8];};
}; };
testListCommonPrefixExample1 = {
expr = lists.commonPrefix [ 1 2 3 4 5 6 ] [ 1 2 4 8 ];
expected = [ 1 2 ];
};
testListCommonPrefixExample2 = {
expr = lists.commonPrefix [ 1 2 3 ] [ 1 2 3 4 5 ];
expected = [ 1 2 3 ];
};
testListCommonPrefixExample3 = {
expr = lists.commonPrefix [ 1 2 3 ] [ 4 5 6 ];
expected = [ ];
};
testListCommonPrefixEmpty = {
expr = lists.commonPrefix [ ] [ 1 2 3 ];
expected = [ ];
};
testListCommonPrefixSame = {
expr = lists.commonPrefix [ 1 2 3 ] [ 1 2 3 ];
expected = [ 1 2 3 ];
};
testListCommonPrefixLazy = {
expr = lists.commonPrefix [ 1 ] [ 1 (abort "lib.lists.commonPrefix shouldn't evaluate this")];
expected = [ 1 ];
};
# This would stack overflow if `commonPrefix` were implemented using recursion
testListCommonPrefixLong =
let
longList = genList (n: n) 100000;
in {
expr = lists.commonPrefix longList longList;
expected = longList;
};
testSort = { testSort = {
expr = sort builtins.lessThan [ 40 2 30 42 ]; expr = sort builtins.lessThan [ 40 2 30 42 ];
expected = [2 30 40 42]; expected = [2 30 40 42];
@ -518,45 +563,55 @@ runTests {
expected = false; expected = false;
}; };
testFindFirstExample1 = { testFindFirstIndexExample1 = {
expr = findFirst (x: x > 3) 7 [ 1 6 4 ]; expr = lists.findFirstIndex (x: x > 3) (abort "index found, so a default must not be evaluated") [ 1 6 4 ];
expected = 6; expected = 1;
}; };
testFindFirstExample2 = { testFindFirstIndexExample2 = {
expr = findFirst (x: x > 9) 7 [ 1 6 4 ]; expr = lists.findFirstIndex (x: x > 9) "a very specific default" [ 1 6 4 ];
expected = 7; expected = "a very specific default";
}; };
testFindFirstEmpty = { testFindFirstIndexEmpty = {
expr = findFirst (abort "when the list is empty, the predicate is not needed") null []; expr = lists.findFirstIndex (abort "when the list is empty, the predicate is not needed") null [];
expected = null; expected = null;
}; };
testFindFirstSingleMatch = { testFindFirstIndexSingleMatch = {
expr = findFirst (x: x == 5) null [ 5 ]; expr = lists.findFirstIndex (x: x == 5) null [ 5 ];
expected = 5; expected = 0;
}; };
testFindFirstSingleDefault = { testFindFirstIndexSingleDefault = {
expr = findFirst (x: false) null [ (abort "if the predicate doesn't access the value, it must not be evaluated") ]; expr = lists.findFirstIndex (x: false) null [ (abort "if the predicate doesn't access the value, it must not be evaluated") ];
expected = null; expected = null;
}; };
testFindFirstNone = { testFindFirstIndexNone = {
expr = builtins.tryEval (findFirst (x: x == 2) null [ 1 (throw "the last element must be evaluated when there's no match") ]); expr = builtins.tryEval (lists.findFirstIndex (x: x == 2) null [ 1 (throw "the last element must be evaluated when there's no match") ]);
expected = { success = false; value = false; }; expected = { success = false; value = false; };
}; };
# Makes sure that the implementation doesn't cause a stack overflow # Makes sure that the implementation doesn't cause a stack overflow
testFindFirstBig = { testFindFirstIndexBig = {
expr = findFirst (x: x == 1000000) null (range 0 1000000); expr = lists.findFirstIndex (x: x == 1000000) null (range 0 1000000);
expected = 1000000; expected = 1000000;
}; };
testFindFirstLazy = { testFindFirstIndexLazy = {
expr = findFirst (x: x == 1) 7 [ 1 (abort "list elements after the match must not be evaluated") ]; expr = lists.findFirstIndex (x: x == 1) null [ 1 (abort "list elements after the match must not be evaluated") ];
expected = 1; expected = 0;
};
testFindFirstExample1 = {
expr = lists.findFirst (x: x > 3) 7 [ 1 6 4 ];
expected = 6;
};
testFindFirstExample2 = {
expr = lists.findFirst (x: x > 9) 7 [ 1 6 4 ];
expected = 7;
}; };
# ATTRSETS # ATTRSETS
@ -609,6 +664,31 @@ runTests {
}; };
}; };
testMergeAttrsListExample1 = {
expr = attrsets.mergeAttrsList [ { a = 0; b = 1; } { c = 2; d = 3; } ];
expected = { a = 0; b = 1; c = 2; d = 3; };
};
testMergeAttrsListExample2 = {
expr = attrsets.mergeAttrsList [ { a = 0; } { a = 1; } ];
expected = { a = 1; };
};
testMergeAttrsListExampleMany =
let
list = genList (n:
listToAttrs (genList (m:
let
# Integer divide n by two to create duplicate attributes
str = "halfn${toString (n / 2)}m${toString m}";
in
nameValuePair str str
) 100)
) 100;
in {
expr = attrsets.mergeAttrsList list;
expected = foldl' mergeAttrs { } list;
};
# code from the example # code from the example
testRecursiveUpdateUntil = { testRecursiveUpdateUntil = {
expr = recursiveUpdateUntil (path: l: r: path == ["foo"]) { expr = recursiveUpdateUntil (path: l: r: path == ["foo"]) {


@ -1,7 +1,13 @@
#!/usr/bin/env bash
#
# This script is used to test that the module system is working as expected.
# Executing it runs tests for `lib.modules`, `lib.options` and `lib.types`.
# By default it tests the version of nixpkgs defined in the NIX_PATH.
#
# Run:
# [nixpkgs]$ lib/tests/modules.sh
# or:
# [nixpkgs]$ nix-build lib/tests/release.nix
set -o errexit -o noclobber -o nounset -o pipefail
shopt -s failglob inherit_errexit
@ -63,6 +69,28 @@ checkConfigOutput '^"one two"$' config.result ./shorthand-meta.nix
checkConfigOutput '^true$' config.result ./test-mergeAttrDefinitionsWithPrio.nix checkConfigOutput '^true$' config.result ./test-mergeAttrDefinitionsWithPrio.nix
# Check that a module argument is passed, also when a default is available
# (but not needed)
#
# When the default is needed, we currently fail to do what the users expect, as
# we pass our own argument anyway, even if it *turns out* not to exist.
#
# The reason for this is that we don't know at invocation time what is in the
# _module.args option. That value is only available *after* all modules have been
# invoked.
#
# Hypothetically, Nix could help support this by giving access to the default
# values, through a new built-in function.
# However the default values are allowed to depend on other arguments, so those
# would have to be passed in somehow, making this not just a getter but
# something more complicated.
#
# At that point we have to wonder whether the extra complexity is worth the cost.
# Another - subjective - reason not to support it is that default values
# contradict the notion that an option has a single value, where _module.args
# is the option.
checkConfigOutput '^true$' config.result ./module-argument-default.nix
# types.pathInStore # types.pathInStore
checkConfigOutput '".*/store/0lz9p8xhf89kb1c1kk6jxrzskaiygnlh-bash-5.2-p15.drv"' config.pathInStore.ok1 ./types.nix checkConfigOutput '".*/store/0lz9p8xhf89kb1c1kk6jxrzskaiygnlh-bash-5.2-p15.drv"' config.pathInStore.ok1 ./types.nix
checkConfigOutput '".*/store/0fb3ykw9r5hpayd05sr0cizwadzq1d8q-bash-5.2-p15"' config.pathInStore.ok2 ./types.nix checkConfigOutput '".*/store/0fb3ykw9r5hpayd05sr0cizwadzq1d8q-bash-5.2-p15"' config.pathInStore.ok2 ./types.nix
@ -365,6 +393,9 @@ checkConfigError \
config.set \ config.set \
./declare-set.nix ./declare-enable-nested.nix ./declare-set.nix ./declare-enable-nested.nix
# Check that merging of option collisions doesn't depend on type being set
checkConfigError 'The option .group..*would be a parent of the following options, but its type .<no description>. does not support nested options.\n\s*- option.s. with prefix .group.enable..*' config.group.enable ./merge-typeless-option.nix
# Test that types.optionType merges types correctly # Test that types.optionType merges types correctly
checkConfigOutput '^10$' config.theOption.int ./optionTypeMerging.nix checkConfigOutput '^10$' config.theOption.int ./optionTypeMerging.nix
checkConfigOutput '^"hello"$' config.theOption.str ./optionTypeMerging.nix checkConfigOutput '^"hello"$' config.theOption.str ./optionTypeMerging.nix


@ -0,0 +1,25 @@
{ lib, ... }:
let
typeless =
{ lib, ... }:
{
options.group = lib.mkOption { };
};
childOfTypeless =
{ lib, ... }:
{
options.group.enable = lib.mkEnableOption "nothing";
};
in
{
imports = [
typeless
childOfTypeless
];
config.group.enable = false;
}


@ -0,0 +1,9 @@
{ a ? false, lib, ... }: {
options = {
result = lib.mkOption {};
};
config = {
_module.args.a = true;
result = a;
};
}


@ -1,4 +1,11 @@
#!/usr/bin/env bash
# Tests lib/sources.nix
# Run:
# [nixpkgs]$ lib/tests/sources.sh
# or:
# [nixpkgs]$ nix-build lib/tests/release.nix
set -euo pipefail
shopt -s inherit_errexit


@ -418,6 +418,12 @@
githubId = 1250775; githubId = 1250775;
name = "Adolfo E. García Castro"; name = "Adolfo E. García Castro";
}; };
adriandole = {
email = "adrian@dole.tech";
github = "adriandole";
githubId = 25236206;
name = "Adrian Dole";
};
AdsonCicilioti = { AdsonCicilioti = {
name = "Adson Cicilioti"; name = "Adson Cicilioti";
email = "adson.cicilioti@live.com"; email = "adson.cicilioti@live.com";
@ -581,6 +587,12 @@
githubId = 1318982; githubId = 1318982;
name = "Anders Claesson"; name = "Anders Claesson";
}; };
akechishiro = {
email = "akechishiro-aur+nixpkgs@lahfa.xyz";
github = "AkechiShiro";
githubId = 14914796;
name = "Samy Lahfa";
};
a-kenji = { a-kenji = {
email = "aks.kenji@protonmail.com"; email = "aks.kenji@protonmail.com";
github = "a-kenji"; github = "a-kenji";
@ -638,6 +650,13 @@
githubId = 82811; githubId = 82811;
name = "Aldo Borrero"; name = "Aldo Borrero";
}; };
alejandrosame = {
email = "alejandrosanchzmedina@gmail.com";
matrix = "@alejandrosame:matrix.org";
github = "alejandrosame";
githubId = 1078000;
name = "Alejandro Sánchez Medina";
};
aleksana = { aleksana = {
email = "me@aleksana.moe"; email = "me@aleksana.moe";
github = "Aleksanaa"; github = "Aleksanaa";
@ -1148,6 +1167,9 @@
githubId = 48802534; githubId = 48802534;
name = "Anselm Schüler"; name = "Anselm Schüler";
matrix = "@schuelermine:matrix.org"; matrix = "@schuelermine:matrix.org";
keys = [{
fingerprint = "CDBF ECA8 36FE E340 1CEB 58FF BA34 EE1A BA3A 0955";
}];
}; };
anthonyroussel = { anthonyroussel = {
email = "anthony@roussel.dev"; email = "anthony@roussel.dev";
@ -1237,6 +1259,12 @@
githubId = 30842467; githubId = 30842467;
name = "April John"; name = "April John";
}; };
aqrln = {
email = "nix@aqrln.net";
github = "aqrln";
githubId = 4923335;
name = "Alexey Orlenko";
};
ar1a = { ar1a = {
email = "aria@ar1as.space"; email = "aria@ar1as.space";
github = "ar1a"; github = "ar1a";
@ -1389,6 +1417,12 @@
githubId = 37193992; githubId = 37193992;
name = "Arthur Teisseire"; name = "Arthur Teisseire";
}; };
arti5an = {
email = "artis4n@outlook.com";
github = "arti5an";
githubId = 14922630;
name = "Richard Smith";
};
artturin = {
email = "artturin@artturin.com";
matrix = "@artturin:matrix.org";
@ -1499,6 +1533,13 @@
fingerprint = "DD52 6BC7 767D BA28 16C0 95E5 6840 89CE 67EB B691";
}];
};
atalii = {
email = "taliauster@gmail.com";
github = "atalii";
githubId = 120901234;
name = "tali auster";
matrix = "@atalii:matrix.org";
};
ataraxiasjel = {
email = "nix@ataraxiadev.com";
github = "AtaraxiaSjel";
@ -1691,6 +1732,13 @@
fingerprint = "2688 0377 C31D 9E81 9BDF 83A8 C8C6 BDDB 3847 F72B";
}];
};
azazak123 = {
email = "azazaka2002@gmail.com";
matrix = "@ne_dvoeshnik:matrix.org";
name = "Volodymyr Antonov";
github = "azazak123";
githubId = 50211158;
};
azd325 = {
email = "tim.kleinschmidt@gmail.com";
github = "Azd325";
@ -1730,6 +1778,12 @@
fingerprint = "6FBC A462 4EAF C69C A7C4 98C1 F044 3098 48A0 7CAC";
}];
};
babeuh = {
name = "Raphael Le Goaller";
email = "babeuh@rlglr.fr";
github = "babeuh";
githubId = 60193302;
};
bachp = {
email = "pascal.bach@nextrem.ch";
matrix = "@bachp:matrix.org";
@ -2554,7 +2608,7 @@
};
cafkafk = {
email = "christina@cafkafk.com";
matrix = "@cafkafk:m.cafkafk.com"; matrix = "@cafkafk:nixos.dev";
name = "Christina Sørensen";
github = "cafkafk";
githubId = 89321978;
@ -2563,7 +2617,7 @@
fingerprint = "7B9E E848 D074 AE03 7A0C 651A 8ED4 DEF7 375A 30C8";
}
{
fingerprint = "208A 2A66 8A2F CDE7 B5D3 8F64 CDDC 792F 6552 51ED";
}
];
};
@ -2737,6 +2791,12 @@
githubId = 3471749;
name = "Claudio Bley";
};
cbourjau = {
email = "christianb@posteo.de";
github = "cbourjau";
githubId = 3288058;
name = "Christian Bourjau";
};
cbrewster = {
email = "cbrewster@hey.com";
github = "cbrewster";
@ -2765,6 +2825,13 @@
githubId = 64804;
name = "Dennis Gosnell";
};
cdmistman = {
name = "Colton Donnelly";
email = "colton@donn.io";
matrix = "@donnellycolton:matrix.org";
github = "cdmistman";
githubId = 23486351;
};
ceedubs = {
email = "ceedubs@gmail.com";
github = "ceedubs";
@ -3864,6 +3931,12 @@
githubId = 75067;
name = "Daniel Duan";
};
de11n = {
email = "nixpkgs-commits@deshaw.com";
github = "de11n";
githubId = 130508846;
name = "Elliot Cameron";
};
dearrude = {
name = "Ebrahim Nejati";
email = "dearrude@tfwno.gf";
@ -4108,6 +4181,12 @@
fingerprint = "1C4E F4FE 7F8E D8B7 1E88 CCDF BAB1 D15F B7B4 D4CE";
}];
};
dgollings = {
email = "daniel.gollings+nixpkgs@gmail.com";
github = "dgollings";
githubId = 2032823;
name = "Daniel Gollings";
};
dgonyeo = {
email = "derek@gonyeo.com";
github = "dgonyeo";
@ -4309,6 +4388,12 @@
githubId = 10998835;
name = "Doron Behar";
};
dotemup = {
email = "dotemup.designs+nixpkgs@gmail.com";
github = "dotemup";
githubId = 11077277;
name = "Dote";
};
dotlambda = {
email = "rschuetz17@gmail.com";
matrix = "@robert:funklause.de";
@ -4400,6 +4485,13 @@
fingerprint = "7E38 89D9 B1A8 B381 C8DE A15F 95EB 6DFF 26D1 CEB0";
}];
};
DrSensor = {
name = "Fahmi Akbar Wildana";
email = "sensorfied@gmail.com";
matrix = "@drsensor:matrix.org";
github = "DrSensor";
githubId = 4953069;
};
drupol = {
name = "Pol Dellaiera";
email = "pol.dellaiera@protonmail.com";
@ -5312,6 +5404,12 @@
githubId = 4246921;
name = "Florian Beeres";
};
fd = {
email = "simon.menke@gmail.com";
github = "fd";
githubId = 591;
name = "Simon Menke";
};
fdns = {
email = "fdns02@gmail.com";
github = "fdns";
@ -5601,6 +5699,12 @@
githubId = 84968;
name = "Florian Paul Schmidt";
};
fptje = {
email = "fpeijnenburg@gmail.com";
github = "FPtje";
githubId = 1202014;
name = "Falco Peijnenburg";
};
fragamus = {
email = "innovative.engineer@gmail.com";
github = "fragamus";
@ -5756,6 +5860,11 @@
githubId = 17859309;
name = "Fuzen";
};
fwc = {
github = "fwc";
githubId = 29337229;
name = "mtths";
};
fxfactorial = {
email = "edgar.factorial@gmail.com";
github = "fxfactorial";
@ -5961,6 +6070,12 @@
fingerprint = "D0CF 440A A703 E0F9 73CB A078 82BB 70D5 41AE 2DB4";
}];
};
gerg-l = {
email = "gregleyda@proton.me";
github = "Gerg-L";
githubId = 88247690;
name = "Greg Leyda";
};
geri1701 = {
email = "geri@sdf.org";
github = "geri1701";
@ -6011,6 +6126,15 @@
githubId = 127353;
name = "Geoffrey Huntley";
};
gigglesquid = {
email = "jack.connors@protonmail.com";
github = "gigglesquid";
githubId = 3685154;
name = "Jack connors";
keys = [{
fingerprint = "21DF 8034 B212 EDFF 9F19 9C19 F65B 7583 7ABF D019";
}];
};
gila = {
email = "jeffry.molanus@gmail.com";
github = "gila";
@ -6583,6 +6707,12 @@
githubId = 41522204;
name = "hexchen";
};
hexclover = {
email = "hexclover@outlook.com";
github = "hexclover";
githubId = 47456195;
name = "hexclover";
};
heyimnova = {
email = "git@heyimnova.dev";
github = "heyimnova";
@ -7076,12 +7206,6 @@
fingerprint = "F5B2 BE1B 9AAD 98FE 2916 5597 3665 FFF7 9D38 7BAA";
}];
};
imsofi = {
email = "sofi+git@mailbox.org";
github = "imsofi";
githubId = 20756843;
name = "Sofi";
};
imuli = {
email = "i@imu.li";
github = "imuli";
@ -7147,6 +7271,12 @@
fingerprint = "5CB5 9AA0 D180 1997 2FB3 E0EC 943A 1DE9 372E BE4E";
}];
};
invokes-su = {
email = "nixpkgs-commits@deshaw.com";
github = "invokes-su";
githubId = 88038050;
name = "Souvik Sen";
};
ionutnechita = {
email = "ionut_n2001@yahoo.com";
github = "ionutnechita";
@ -7622,6 +7752,13 @@
githubId = 1608697;
name = "Jens Binkert";
};
jeremiahs = {
email = "jeremiah@secrist.xyz";
github = "JeremiahSecrist";
githubId = 26032621;
matrix = "@jeremiahs:matrix.org";
name = "Jeremiah Secrist";
};
jeremyschlatter = {
email = "github@jeremyschlatter.com";
github = "jeremyschlatter";
@ -8247,6 +8384,12 @@
name = "John Soo";
githubId = 10039785;
};
jtbx = {
email = "jtbx@duck.com";
name = "Jeremy Baxter";
github = "jtbx";
githubId = 92071952;
};
jtcoolen = {
email = "jtcoolen@pm.me";
name = "Julien Coolen";
@ -8349,6 +8492,12 @@
githubId = 662666;
name = "Justinas Stankevičius";
};
justinlime = {
email = "justinlime1999@gmail.com";
github = "justinlime";
githubId = 119710965;
name = "Justin Fields";
};
justinlovinger = {
email = "git@justinlovinger.com";
github = "JustinLovinger";
@ -8593,6 +8742,11 @@
githubId = 762421;
name = "Pierre Thierry";
};
keto = {
github = "TheRealKeto";
githubId = 24854941;
name = "Keto";
};
ketzacoatl = {
email = "ketzacoatl@protonmail.com";
github = "ketzacoatl";
@ -8631,6 +8785,12 @@
githubId = 546087;
name = "Kristoffer K. Føllesdal";
};
khaneliman = {
email = "khaneliman12@gmail.com";
github = "khaneliman";
githubId = 1778670;
name = "Austin Horstman";
};
khaser = {
email = "a-horohorin@mail.ru";
github = "khaser";
@ -8866,6 +9026,12 @@
githubId = 3287933;
name = "Josef Kemetmüller";
};
knightpp = {
email = "knightpp@proton.me";
github = "knightpp";
githubId = 28928944;
name = "Danylo Kondratiev";
};
knl = {
email = "nikola@knezevic.co";
github = "knl";
@ -8921,6 +9087,12 @@
githubId = 524268;
name = "Koral";
};
koralowiec = {
email = "qnlgzyrw@anonaddy.me";
github = "koralowiec";
githubId = 36413794;
name = "Arek Kalandyk";
};
koslambrou = {
email = "koslambrou@gmail.com";
github = "koslambrou";
@ -9017,6 +9189,12 @@
githubId = 5759930;
name = "Alexis Destrez";
};
krupkat = {
github = "krupkat";
githubId = 6817216;
name = "Tomas Krupka";
matrix = "@krupkat:matrix.org";
};
ktf = {
email = "giulio.eulisse@cern.ch";
github = "ktf";
@ -9091,6 +9269,12 @@
fingerprint = "5A9A 1C9B 2369 8049 3B48 CF5B 81A1 5409 4816 2372";
}];
};
l0b0 = {
email = "victor@engmark.name";
github = "l0b0";
githubId = 168301;
name = "Victor Engmark";
};
l3af = {
email = "L3afMeAlon3@gmail.com";
matrix = "@L3afMe:matrix.org";
@ -9547,6 +9731,12 @@
fingerprint = "1763 9903 2D7C 5B82 5D5A 0EAD A2BC 3C6F 1435 1991";
}];
};
locochoco = {
email = "contact@locochoco.dev";
github = "loco-choco";
githubId = 58634087;
name = "Ivan Pancheniak";
};
lodi = {
email = "anthony.lodi@gmail.com";
github = "lodi";
@ -9743,6 +9933,15 @@
githubId = 1168435;
name = "Ludovic Courtès";
};
ludovicopiero = {
email = "ludovicopiero@pm.me";
github = "ludovicopiero";
githubId = 44255157;
name = "Ludovico Piero";
keys = [{
fingerprint = "72CA 4F61 46C6 0DAB 6193 4D35 3911 DD27 6CFE 779C";
}];
};
lufia = {
email = "lufia@lufia.org";
github = "lufia";
@ -9770,6 +9969,15 @@
githubId = 22085373;
name = "Luis Hebendanz";
};
luisdaranda = {
email = "luisdomingoaranda@gmail.com";
github = "propet";
githubId = 8515861;
name = "Luis D. Aranda Sánchez";
keys = [{
fingerprint = "AB7C 81F4 9E07 CC64 F3E7 BC25 DCAC C6F4 AAFC C04E";
}];
};
luisnquin = {
email = "lpaandres2020@gmail.com";
matrix = "@luisnquin:matrix.org";
@ -9865,6 +10073,12 @@
githubId = 782440;
name = "Luna Nova";
};
lurkki = {
email = "jussi.kuokkanen@protonmail.com";
github = "Lurkki14";
githubId = 44469719;
name = "Jussi Kuokkanen";
};
lux = {
email = "lux@lux.name";
github = "luxzeitlos";
@ -9976,6 +10190,16 @@
githubId = 93990818;
name = "Madoura";
};
maeve = {
email = "mrey@mailbox.org";
matrix = "@maeve:catgirl.cloud";
github = "m-rey";
githubId = 42996147;
name = "Mæve";
keys = [{
fingerprint = "96C9 D086 CC9D 7BD7 EF24 80E2 9168 796A 1CC3 AEA2";
}];
};
mafo = {
email = "Marc.Fontaine@gmx.de";
github = "MarcFontaine";
@ -10132,6 +10356,13 @@
githubId = 105451387;
name = "Maria";
};
marie = {
email = "tabmeier12+nix@gmail.com";
github = "nycodeghg";
githubId = 37078297;
matrix = "@marie:marie.cologne";
name = "Marie Ramlow";
};
marijanp = {
name = "Marijan Petričević";
email = "marijan.petricevic94@gmail.com";
@ -10366,6 +10597,12 @@
fingerprint = "CAEC A12D CE23 37A6 6DFD 17B0 7AC7 631D 70D6 C898";
}];
};
max-amb = {
email = "maxpeterambaum@gmail.com";
github = "max-amb";
githubId = 137820334;
name = "Max Ambaum";
};
maxbrunet = {
email = "max@brnt.mx";
github = "maxbrunet";
@ -10509,12 +10746,6 @@
githubId = 10420834;
name = "Mihai-Drosi Caju";
};
mcbeth = {
email = "mcbeth@broggs.org";
github = "mcbeth";
githubId = 683809;
name = "Jeffrey Brent McBeth";
};
mccurdyc = {
email = "mccurdyc22@gmail.com";
github = "mccurdyc";
@ -10718,6 +10949,16 @@
fingerprint = "8CE3 2906 516F C4D8 D373 308A E189 648A 55F5 9A9F";
}];
};
mib = {
name = "mib";
email = "mib@kanp.ai";
matrix = "@mib:kanp.ai";
github = "mibmo";
githubId = 87388017;
keys = [{
fingerprint = "AB0D C647 B2F7 86EB 045C 7EFE CF6E 67DE D6DC 1E3F";
}];
};
mic92 = {
email = "joerg@thalheim.io";
matrix = "@mic92:nixos.dev";
@ -10814,6 +11055,12 @@
fingerprint = "FEF0 AE2D 5449 3482 5F06 40AA 186A 1EDA C5C6 3F83";
}];
};
mig4ng = {
email = "mig4ng@gmail.com";
github = "mig4ng";
githubId = 5817039;
name = "Miguel Carneiro";
};
mightyiam = {
email = "mightyiampresence@gmail.com";
github = "mightyiam";
@ -11300,6 +11547,12 @@
name = "Maxim Schuwalow";
email = "maxim.schuwalow@gmail.com";
};
mschwaig = {
name = "Martin Schwaighofer";
github = "mschwaig";
githubId = 3856390;
email = "mschwaig+nixpkgs@eml.cc";
};
msfjarvis = {
github = "msfjarvis";
githubId = 13348378;
@ -14179,6 +14432,12 @@
githubId = 1069318;
name = "Robin Lambertz";
};
robwalt = {
email = "robwalter96@gmail.com";
github = "robwalt";
githubId = 26892280;
name = "Robert Walter";
};
roconnor = {
email = "roconnor@theorem.ca";
github = "roconnor";
@ -14303,6 +14562,15 @@
}];
name = "Rahul Butani";
};
rs0vere = {
email = "rs0vere@outlook.com";
github = "rs0vere";
githubId = 140035635;
keys = [{
fingerprint = "C6D8 B5C2 FA79 901B DCCF 95E1 FEC4 5C5A ED00 C58D";
}];
name = "Red Star Over Earth";
};
rski = {
name = "rski";
email = "rom.skiad+nix@gmail.com";
@ -14416,6 +14684,12 @@
githubId = 889991;
name = "Ryan Artecona";
};
ryanccn = {
email = "hello@ryanccn.dev";
github = "ryanccn";
githubId = 70191398;
name = "Ryan Cao";
};
ryane = {
email = "ryanesc@gmail.com";
github = "ryane";
@ -14470,6 +14744,12 @@
githubId = 3280280;
name = "Ryne Everett";
};
ryota-ka = {
email = "ok@ryota-ka.me";
github = "ryota-ka";
githubId = 7309170;
name = "Ryota Kameoka";
};
rytone = {
email = "max@ryt.one";
github = "rastertail";
@ -14875,6 +15155,16 @@
githubId = 4805746;
name = "Sebastian Jordan";
};
septem9er = {
name = "Septem9er";
email = "develop@septem9er.de";
matrix = "@septem9er:fairydust.space";
github = "septem9er";
githubId = 33379902;
keys = [{
fingerprint = "C408 07F9 8677 3D98 EFF3 0980 355A 9AFB FD8E AD33";
}];
};
seqizz = {
email = "seqizz@gmail.com";
github = "seqizz";
@ -15279,6 +15569,12 @@
githubId = 3789764;
name = "skykanin";
};
slbtty = {
email = "shenlebantongying@gmail.com";
github = "shenlebantongying";
githubId = 20123683;
name = "Shenleban Tongying";
};
sleexyz = {
email = "freshdried@gmail.com";
github = "sleexyz";
@ -15456,6 +15752,12 @@
githubId = 6277322;
name = "Wei Tang";
};
soupglasses = {
email = "sofi+git@mailbox.org";
github = "soupglasses";
githubId = 20756843;
name = "Sofi";
};
soywod = {
name = "Clément DOUIN";
email = "clement.douin@posteo.net";
@ -15478,6 +15780,12 @@
githubId = 7669898;
name = "Katharina Fey";
};
spalf = {
email = "tom@tombarrett.xyz";
name = "tom barrett";
github = "70m6";
githubId = 105207964;
};
spease = {
email = "peasteven@gmail.com";
github = "spease";
@ -15509,6 +15817,12 @@
githubId = 6391601;
name = "Roger Mason";
};
sputn1ck = {
email = "kon@kon.ninja";
github = "sputn1ck";
githubId = 8904314;
name = "Konstantin Nick";
};
squalus = {
email = "squalus@squalus.net";
github = "squalus";
@ -15545,6 +15859,13 @@
githubId = 219362;
name = "Sarah Brofeldt";
};
srid = {
email = "srid@srid.ca";
matrix = "@srid:matrix.org";
github = "srid";
githubId = 3998;
name = "Sridhar Ratnakumar";
};
srounce = {
name = "Samuel Rounce";
email = "me@samuelrounce.co.uk";
@ -15813,6 +16134,12 @@
githubId = 16734772;
name = "Sumner Evans";
};
sund3RRR = {
email = "evenquantity@gmail.com";
github = "sund3RRR";
githubId = 73298492;
name = "Mikhail Kiselev";
};
suominen = {
email = "kimmo@suominen.com";
github = "suominen";
@ -16714,6 +17041,14 @@
githubId = 8577941;
name = "Kevin Rauscher";
};
tomasajt = {
github = "TomaSajt";
githubId = 62384384;
name = "TomaSajt";
keys = [{
fingerprint = "8CA9 8016 F44D B717 5B44 6032 F011 163C 0501 22A1";
}];
};
tomaskala = {
email = "public+nixpkgs@tomaskala.com";
github = "tomaskala";
@ -16990,6 +17325,12 @@
matrix = "@ty:tjll.net";
name = "Tyler Langlois";
};
tymscar = {
email = "oscar@tymscar.com";
github = "tymscar";
githubId = 3742502;
name = "Oscar Molnar";
};
typetetris = {
email = "ericwolf42@mail.com";
github = "typetetris";
@ -17350,6 +17691,12 @@
fingerprint = "AEF2 3487 66F3 71C6 89A7 3600 95A4 2FE8 3535 25F9";
}];
};
vinetos = {
name = "vinetos";
email = "vinetosdev@gmail.com";
github = "vinetos";
githubId = 10145351;
};
vinnymeller = {
email = "vinnymeller@proton.me";
github = "vinnymeller";
@ -17453,6 +17800,12 @@
githubId = 3413119;
name = "Vonfry";
};
votava = {
email = "votava@gmail.com";
github = "janvotava";
githubId = 367185;
name = "Jan Votava";
};
vq = {
email = "vq@erq.se";
github = "vq";
@ -18036,6 +18389,12 @@
githubId = 73759599;
name = "Yaya";
};
yboettcher = {
name = "Yannik Böttcher";
github = "yboettcher";
githubId = 39460066;
email = "yannikboettcher@outlook.de";
};
ydlr = {
name = "ydlr";
email = "ydlr@ydlr.io";
@ -18106,6 +18465,12 @@
github = "ymeister";
githubId = 47071325;
};
yoavlavi = {
email = "yoav@yoavlavi.com";
github = "yoav-lavi";
githubId = 14347895;
name = "Yoav Lavi";
};
yochai = {
email = "yochai@titat.info";
github = "yochai";
@ -18355,12 +18720,6 @@
github = "zfnmxt";
githubId = 37446532;
};
zgrannan = {
email = "zgrannan@gmail.com";
github = "zgrannan";
githubId = 1141948;
name = "Zack Grannan";
};
zhaofengli = {
email = "hello@zhaofeng.li";
matrix = "@zhaofeng:zhaofeng.li";
@ -18392,6 +18751,19 @@
githubId = 1108325;
name = "Théo Zimmermann";
};
zmitchell = {
name = "Zach Mitchell";
email = "zmitchell@fastmail.com";
matrix = "@zmitchell:matrix.org";
github = "zmitchell";
githubId = 10246891;
};
znewman01 = {
email = "znewman01@gmail.com";
github = "znewman01";
githubId = 873857;
name = "Zack Newman";
};
zoedsoupe = {
github = "zoedsoupe";
githubId = 44469426;

View file

@ -1,5 +1,5 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p nix curl jq nix-prefetch-github git gnused -I nixpkgs=. #! nix-shell -i bash -p nix curl jq git gnused -I nixpkgs=.
# See regenerate-hackage-packages.sh for details on the purpose of this script.

View file

@ -1,5 +1,5 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p nix curl jq nix-prefetch-github git gnused gnugrep -I nixpkgs=. #! nix-shell -i bash -p nix curl jq git gnused gnugrep -I nixpkgs=.
# shellcheck shell=bash
set -eu -o pipefail

View file

@ -86,6 +86,7 @@ luuid,,,,,,
luv,,,,1.44.2-1,,
lush.nvim,https://github.com/rktjmp/lush.nvim,,,,,teto
lyaml,,,,,,lblasc
magick,,,,,,donovanglover
markdown,,,,,,
mediator_lua,,,,,,
mpack,,,,,,


View file

@ -193,10 +193,11 @@ with lib.maintainers; {
deshaw = {
# Verify additions to this team with at least one already existing member of the team.
members = [
limeytexan de11n
invokes-su
];
scope = "Group registration for D. E. Shaw employees who collectively maintain packages.";
shortName = "Shaw employees"; shortName = "D. E. Shaw employees";
};
determinatesystems = {
@ -410,6 +411,14 @@
shortName = "Jitsi";
};
jupyter = {
members = [
natsukium
];
scope = "Maintain Jupyter and related packages.";
shortName = "Jupyter";
};
kubernetes = {
members = [
johanot
@ -570,6 +579,7 @@
ralith
dandellion
sumnerevans
nickcao
];
scope = "Maintain the ecosystem around Matrix, a decentralized messenger.";
shortName = "Matrix";
@ -820,9 +830,7 @@
};
sphinx = {
members = [ members = [ ];
SuperSandro2000
];
scope = "Maintain Sphinx related packages.";
shortName = "Sphinx";
};

View file

@ -0,0 +1,4 @@
{
outputPath = "share/doc/nixos";
indexPath = "index.html";
}

View file

@ -11,6 +11,8 @@ $ nix-build nixos/release.nix -A manual.x86_64-linux
If the build succeeds, the manual will be in `./result/share/doc/nixos/index.html`.
There's also [a convenient development daemon](https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-devmode).
**Contributing to the man pages**

The man pages are written in [DocBook] which is XML.

View file

@ -16,6 +16,8 @@ let
lib = pkgs.lib;
common = import ./common.nix;
manpageUrls = pkgs.path + "/doc/manpage-urls.json";
# We need to strip references to /nix/store/* from options,
@ -63,6 +65,9 @@ let
optionIdPrefix = "test-opt-";
};
testDriverMachineDocstrings = pkgs.callPackage
../../../nixos/lib/test-driver/nixos-test-driver-docstrings.nix {};
prepareManualFromMD = ''
cp -r --no-preserve=all $inputs/* .
@ -75,11 +80,13 @@ let
substituteInPlace ./nixos-options.md \
--replace \
'@NIXOS_OPTIONS_JSON@' \
${optionsDoc.optionsJSON}/share/doc/nixos/options.json ${optionsDoc.optionsJSON}/${common.outputPath}/options.json
substituteInPlace ./development/writing-nixos-tests.section.md \
--replace \
'@NIXOS_TEST_OPTIONS_JSON@' \
${testOptionsDoc.optionsJSON}/share/doc/nixos/options.json ${testOptionsDoc.optionsJSON}/${common.outputPath}/options.json
sed -e '/@PYTHON_MACHINE_METHODS@/ {' -e 'r ${testDriverMachineDocstrings}/machine-methods.md' -e 'd' -e '}' \
-i ./development/writing-nixos-tests.section.md
'';
in rec {
@ -94,7 +101,7 @@ in rec {
}
''
# Generate the HTML manual.
dst=$out/share/doc/nixos dst=$out/${common.outputPath}
mkdir -p $dst
cp ${../../../doc/style.css} $dst/style.css
@ -115,7 +122,7 @@ in rec {
--toc-depth 1 \
--chunk-toc-depth 1 \
./manual.md \
$dst/index.html $dst/${common.indexPath}
mkdir -p $out/nix-support
echo "nix-build out $out" >> $out/nix-support/hydra-build-products
@ -126,7 +133,7 @@ in rec {
manual = manualHTML;
# Index page of the NixOS manual.
manualHTMLIndex = "${manualHTML}/share/doc/nixos/index.html"; manualHTMLIndex = "${manualHTML}/${common.outputPath}/${common.indexPath}";
manualEpub = runCommand "nixos-manual-epub"
{ nativeBuildInputs = [ buildPackages.libxml2.bin buildPackages.libxslt.bin buildPackages.zip ];
@ -157,7 +164,7 @@ in rec {
}
''
# Generate the epub manual.
dst=$out/share/doc/nixos dst=$out/${common.outputPath}
xsltproc \
--param chapter.autolabel 0 \
@ -192,7 +199,7 @@ in rec {
mkdir -p $out/share/man/man5
nixos-render-docs -j $NIX_BUILD_CORES options manpage \
--revision ${lib.escapeShellArg revision} \
${optionsJSON}/share/doc/nixos/options.json \ ${optionsJSON}/${common.outputPath}/options.json \
$out/share/man/man5/configuration.nix.5
'';

View file

@ -139,210 +139,7 @@ to Python as `machine_a`.
The following methods are available on machine objects:
`start` @PYTHON_MACHINE_METHODS@
: Start the virtual machine. This method is asynchronous --- it does
not wait for the machine to finish booting.
`shutdown`
: Shut down the machine, waiting for the VM to exit.
`crash`
: Simulate a sudden power failure, by telling the VM to exit
immediately.
`block`
: Simulate unplugging the Ethernet cable that connects the machine to
the other machines.
`unblock`
: Undo the effect of `block`.
`screenshot`
: Take a picture of the display of the virtual machine, in PNG format.
The screenshot is linked from the HTML log.
`get_screen_text_variants`
: Return a list of different interpretations of what is currently
visible on the machine's screen using optical character
recognition. The number and order of the interpretations is not
specified and is subject to change, but if no exception is raised at
least one will be returned.
::: {.note}
This requires [`enableOCR`](#test-opt-enableOCR) to be set to `true`.
:::
`get_screen_text`
: Return a textual representation of what is currently visible on the
machine's screen using optical character recognition.
::: {.note}
This requires [`enableOCR`](#test-opt-enableOCR) to be set to `true`.
:::
`send_monitor_command`
: Send a command to the QEMU monitor. This is rarely used, but allows
doing stuff such as attaching virtual USB disks to a running
machine.
`send_key`
: Simulate pressing keys on the virtual keyboard, e.g.,
`send_key("ctrl-alt-delete")`.
`send_chars`
: Simulate typing a sequence of characters on the virtual keyboard,
e.g., `send_chars("foobar\n")` will type the string `foobar`
followed by the Enter key.
`send_console`
: Send keys to the kernel console. This allows interaction with the systemd
emergency mode, for example. Takes a string that is sent, e.g.,
`send_console("\n\nsystemctl default\n")`.
`execute`
: Execute a shell command, returning a list `(status, stdout)`.
Commands are run with `set -euo pipefail` set:
- If several commands are separated by `;` and one fails, the
command as a whole will fail.
- For pipelines, the last non-zero exit status will be returned
(if there is one; otherwise zero will be returned).
- Dereferencing unset variables fails the command.
- It will wait for stdout to be closed.
If the command detaches, it must close stdout, as `execute` will wait
for this to consume all output reliably. This can be achieved by
redirecting stdout to stderr `>&2`, to `/dev/console`, `/dev/null` or
a file. Examples of detaching commands are `sleep 365d &`, where the
shell forks a new process that can write to stdout and `xclip -i`, where
the `xclip` command itself forks without closing stdout.
Takes an optional parameter `check_return` that defaults to `True`.
Setting this parameter to `False` will not check for the return code
and return -1 instead. This can be used for commands that shut down
the VM and would therefore break the pipe that would be used for
retrieving the return code.
A timeout for the command can be specified (in seconds) using the optional
`timeout` parameter, e.g., `execute(cmd, timeout=10)` or
`execute(cmd, timeout=None)`. The default is 900 seconds.
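These semantics can be reproduced on any host with bash. The `run` helper below is a hypothetical stand-in for `machine.execute`, sketching only the `set -euo pipefail` wrapping, not the driver itself:

```python
import subprocess

def run(cmd: str) -> int:
    # Approximate the wrapper applied to every command: run it in bash
    # with `set -euo pipefail` and report the exit status.
    return subprocess.run(
        ["bash", "-c", f"set -euo pipefail; {cmd}"],
        capture_output=True,
    ).returncode

# `false; true` fails as a whole (`-e` aborts on the first failure),
# and `false | true` fails too (`pipefail` keeps the last non-zero status).
```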
`succeed`
: Execute a shell command, raising an exception if the exit status is
not zero, otherwise returning the standard output. Similar to `execute`,
except that the timeout is `None` by default. See `execute` for details on
command execution.
`fail`
: Like `succeed`, but raising an exception if the command returns a zero
status.
`wait_until_succeeds`
: Repeat a shell command with 1-second intervals until it succeeds.
Has a default timeout of 900 seconds which can be modified, e.g.
`wait_until_succeeds(cmd, timeout=10)`. See `execute` for details on
command execution.
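The retry loop can be sketched in plain Python; this illustrates the polling semantics, not the driver's implementation, and `run` stands in for a call to `execute`:

```python
import time

def wait_until_succeeds(run, timeout=900, interval=1.0):
    # Call `run` (which returns a `(status, stdout)` pair, like `execute`)
    # until it reports status 0, sleeping `interval` seconds between tries.
    deadline = time.monotonic() + timeout
    while True:
        status, output = run()
        if status == 0:
            return output
        if time.monotonic() >= deadline:
            raise TimeoutError(f"command still failing after {timeout}s")
        time.sleep(interval)
```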
`wait_until_fails`
: Like `wait_until_succeeds`, but repeating the command until it fails.
`wait_for_unit`
: Wait until the specified systemd unit has reached the "active"
state.
`wait_for_file`
: Wait until the specified file exists.
`wait_for_open_port`
: Wait until a process is listening on the given TCP port and IP address
(default `localhost`).
`wait_for_closed_port`
: Wait until nobody is listening on the given TCP port and IP address
(default `localhost`).
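Both port checks boil down to polling a TCP connect until it succeeds (or fails). A self-contained sketch of the `wait_for_open_port` side, using a raw socket instead of the driver's in-VM `nc -z` probe:

```python
import socket
import time

def wait_for_open_port(port, addr="localhost", timeout=900):
    # Poll until a TCP connection to (addr, port) can be established,
    # mirroring the driver's behavior of retrying until the timeout.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((addr, port), timeout=1):
                return
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"port {port} on {addr} still closed after {timeout}s")
```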
`wait_for_x`
: Wait until the X11 server is accepting connections.
`wait_for_text`
: Wait until the supplied regular expression matches the textual
contents of the screen by using optical character recognition (see
`get_screen_text` and `get_screen_text_variants`).
::: {.note}
This requires [`enableOCR`](#test-opt-enableOCR) to be set to `true`.
:::
`wait_for_console_text`
: Wait until the supplied regular expression matches a line of the
serial console output. This method is useful when OCR is not
possible or accurate enough.
`wait_for_window`
: Wait until an X11 window has appeared whose name matches the given
regular expression, e.g., `wait_for_window("Terminal")`.
`copy_from_host`
: Copy a file from the host to the machine, e.g.,
`copy_from_host("myfile", "/etc/my/important/file")`.
The first argument is the file on the host. The file needs to be
accessible while building the nix derivation. The second argument is
the location of the file on the machine.
`systemctl`
: Run `systemctl` commands, with optional support for
`systemctl --user`:
```py
machine.systemctl("list-jobs --no-pager") # runs `systemctl list-jobs --no-pager`
machine.systemctl("list-jobs --no-pager", "any-user") # spawns a shell for `any-user` and runs `systemctl --user list-jobs --no-pager`
```
`shell_interact`
: Allows you to directly interact with the guest shell. This should
only be used during test development, not in production tests.
Killing the interactive session with `Ctrl-d` or `Ctrl-c` also ends
the guest session.
`console_interact`
: Allows you to directly interact with QEMU's stdin. This should
only be used during test development, not in production tests.
Output from QEMU is only read line-wise. `Ctrl-c` kills QEMU and
`Ctrl-d` closes console and returns to the test runner.
To test user units declared by `systemd.user.services` the optional
`user` argument can be used:
@ -249,14 +249,14 @@ update /etc/fstab.
which will be used by the boot partition.
```ShellSession
# parted /dev/sda -- mkpart root ext4 512MB -8GB
```
3. Next, add a *swap* partition. The size required will vary according
to needs, here an 8GB one is created.
```ShellSession
# parted /dev/sda -- mkpart swap linux-swap -8GB 100%
```
::: {.note}
@ -550,8 +550,8 @@ corresponding configuration Nix expression.
### Example partition schemes for NixOS on `/dev/sda` (UEFI)
```ShellSession
# parted /dev/sda -- mklabel gpt
# parted /dev/sda -- mkpart root ext4 512MB -8GB
# parted /dev/sda -- mkpart swap linux-swap -8GB 100%
# parted /dev/sda -- mkpart ESP fat32 1MB 512MB
# parted /dev/sda -- set 3 esp on
```
@ -83,6 +83,8 @@ In addition to numerous new and updated packages, this release has the following
- [gitea-actions-runner](https://gitea.com/gitea/act_runner), a CI runner for Gitea/Forgejo Actions. Available as [services.gitea-actions-runner](#opt-services.gitea-actions-runner.instances).
- [evdevremapkeys](https://github.com/philipl/evdevremapkeys), a daemon to remap key events. Available as [services.evdevremapkeys](#opt-services.evdevremapkeys.enable).
- [gmediarender](https://github.com/hzeller/gmrender-resurrect), a simple, headless UPnP/DLNA renderer. Available as [services.gmediarender](options.html#opt-services.gmediarender.enable).
- [go2rtc](https://github.com/AlexxIT/go2rtc), a camera streaming application with support for RTSP, WebRTC, HomeKit, FFMPEG, RTMP and other protocols. Available as [services.go2rtc](options.html#opt-services.go2rtc.enable).
@ -16,16 +16,30 @@
- [river](https://github.com/riverwm/river), A dynamic tiling wayland compositor. Available as [programs.river](#opt-programs.river.enable).
- [wayfire](https://wayfire.org), A modular and extensible wayland compositor. Available as [programs.wayfire](#opt-programs.wayfire.enable).
- [GoToSocial](https://gotosocial.org/), an ActivityPub social network server, written in Golang. Available as [services.gotosocial](#opt-services.gotosocial.enable).
- [Typesense](https://github.com/typesense/typesense), a fast, typo-tolerant search engine for building delightful search experiences. Available as [services.typesense](#opt-services.typesense.enable).
- [NS-USBLoader](https://github.com/developersu/ns-usbloader/), an all-in-one tool for managing Nintendo Switch homebrew. Available as [programs.ns-usbloader](#opt-programs.ns-usbloader.enable).
- [Anuko Time Tracker](https://github.com/anuko/timetracker), a simple, easy to use, open source time tracking system. Available as [services.anuko-time-tracker](#opt-services.anuko-time-tracker.enable).
- [sitespeed-io](https://sitespeed.io), a tool that can generate metrics (timings, diagnostics) for websites. Available as [services.sitespeed-io](#opt-services.sitespeed-io.enable).
- [Apache Guacamole](https://guacamole.apache.org/), a cross-platform, clientless remote desktop gateway. Available as [services.guacamole-server](#opt-services.guacamole-server.enable) and [services.guacamole-client](#opt-services.guacamole-client.enable) services.
- [pgBouncer](https://www.pgbouncer.org), a PostgreSQL connection pooler. Available as [services.pgbouncer](#opt-services.pgbouncer.enable).
- [trust-dns](https://trust-dns.org/), a Rust based DNS server built to be safe and secure from the ground up. Available as [services.trust-dns](#opt-services.trust-dns.enable).
- [osquery](https://www.osquery.io/), a SQL powered framework for operating system instrumentation, monitoring, and analytics.
- [ebusd](https://ebusd.eu), a daemon for handling communication with eBUS devices connected to a 2-wire bus system (“energy bus” used by numerous heating systems). Available as [services.ebusd](#opt-services.ebusd.enable).
- [systemd-sysupdate](https://www.freedesktop.org/software/systemd/man/systemd-sysupdate.html), atomically updates the host OS, container images, portable service images or other sources. Available as [systemd.sysupdate](#opt-systemd.sysupdate).
## Backward Incompatibilities {#sec-release-23.11-incompatibilities}
- The `boot.loader.raspberryPi` options have been marked deprecated, with intent for removal for NixOS 24.11. They had a limited use-case, and do not work like people expect. They required either very old installs ([before mid-2019](https://github.com/NixOS/nixpkgs/pull/62462)) or customized builds out of scope of the standard and generic AArch64 support. That option set never supported the Raspberry Pi 4 family of devices.
@ -62,6 +76,8 @@
- PHP now defaults to PHP 8.2, updated from 8.1.
- The ISC DHCP package and corresponding module have been removed, because they are end of life upstream. See https://www.isc.org/blogs/isc-dhcp-eol/ for details and switch to a different DHCP implementation like kea or dnsmasq.
- `util-linux` is now supported on Darwin and is no longer an alias to `unixtools`. Use the `unixtools.util-linux` package for access to the Apple variants of the utilities.
- `services.keyd` changed its API. Now you can create multiple configuration files.
@ -86,12 +102,20 @@
- `services.outline.sequelizeArguments` has been removed, as `outline` no longer executes database migrations via the `sequelize` cli.
- The binary of the package `cloud-sql-proxy` has changed from `cloud_sql_proxy` to `cloud-sql-proxy`.
- The `woodpecker-*` CI packages have been updated to 1.0.0. This release is wildly incompatible with the 0.15.X versions that were previously packaged. Please read [upstream's documentation](https://woodpecker-ci.org/docs/next/migrations#100) to learn how to update your CI configurations.
- The Caddy module gained a new option named `services.caddy.enableReload` which is enabled by default. It allows reloading the service instead of restarting it, if only a config file has changed. This option must be disabled if you have turned off the [Caddy admin API](https://caddyserver.com/docs/caddyfile/options#admin). If you keep this option enabled, you should consider setting [`grace_period`](https://caddyserver.com/docs/caddyfile/options#grace-period) to a non-infinite value to prevent Caddy from delaying the reload indefinitely.
- mdraid support is now optional. This reduces initramfs size and prevents the potentially undesired automatic detection and activation of software RAID pools. It is disabled by default in new configurations (determined by `stateVersion`), but the appropriate settings will be generated by `nixos-generate-config` when installing to a software RAID device, so the standard installation procedure should be unaffected. If you have custom configs relying on mdraid, ensure that you use `stateVersion` correctly or set `boot.swraid.enable` manually.
- The `go-ethereum` package has been updated to v1.12.0. This drops support for proof-of-work. Its GraphQL API now encodes all numeric values as hex strings and the GraphQL UI is updated to version 2.0. The default database has changed from `leveldb` to `pebble` but `leveldb` can be forced with the `--db.engine=leveldb` flag. The `checkpoint-admin` command was [removed along with trusted checkpoints](https://github.com/ethereum/go-ethereum/pull/27147).
- The default `kops` version is now 1.27.0 and support for 1.24 and older has been dropped.
- `pharo` has been updated to the latest stable version (PharoVM 10.0.5), which is compatible with the latest stable and oldstable images (Pharo 10 and 11). The VM in question is the 64bit Spur. The 32bit version has been dropped due to lack of maintenance. The Cog VM has been deleted because it is severely outdated. Finally, the `pharo-launcher` package has been deleted because it was not compatible with the newer VM, and due to lack of maintenance.
## Other Notable Changes {#sec-release-23.11-notable-changes}
- The Cinnamon module now enables XDG desktop integration by default. If you are experiencing collisions related to xdg-desktop-portal-gtk you can safely remove `xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];` from your NixOS configuration.
@ -110,6 +134,10 @@
- DocBook option documentation is no longer supported, all module documentation now uses markdown.
- `buildGoModule` `go-modules` attrs have been renamed to `goModules`.
- The `fonts.fonts` and `fonts.enableDefaultFonts` options have been renamed to `fonts.packages` and `fonts.enableDefaultPackages` respectively.
- `services.fail2ban.jails` can now be configured with attribute sets defining settings and filters instead of lines. The stringed options `daemonConfig` and `extraSettings` have respectively been replaced by `daemonSettings` and `jails.DEFAULT.settings` which use attribute sets.
- The module [services.ankisyncd](#opt-services.ankisyncd.package) has been switched to [anki-sync-server-rs](https://github.com/ankicommunity/anki-sync-server-rs) from the old python version, which was difficult to update, had not been updated in a while, and did not support recent versions of anki.
@ -126,8 +154,16 @@ The module update takes care of the new config syntax and the data itself (user
- `programs.gnupg.agent.pinentryFlavor` is now set in `/etc/gnupg/gpg-agent.conf`, and will no longer take precedence over a `pinentry-program` set in `~/.gnupg/gpg-agent.conf`.
- `wrapHelm` now exposes `passthru.pluginsDir` which can be passed to `helmfile`. For convenience, a top-level package `helmfile-wrapped` has been added, which inherits `passthru.pluginsDir` from `kubernetes-helm-wrapped`. See [#217768](https://github.com/NixOS/nixpkgs/issues/217768) for details.
- `boot.initrd.network.udhcp.enable` allows control over dhcp during stage 1 regardless of what `networking.useDHCP` is set to.
- Suricata was upgraded from 6.0 to 7.0 and no longer considers HTTP/2 support as experimental, see [upstream release notes](https://forum.suricata.io/t/suricata-7-0-0-released/3715) for more details.
## Nixpkgs internals {#sec-release-23.11-nixpkgs-internals}
- The use of `sourceRoot = "source";`, `sourceRoot = "source/subdir";`, and similar lines in package derivations using the default `unpackPhase` is deprecated as it requires `unpackPhase` to always produce a directory named "source". Use `sourceRoot = src.name`, `sourceRoot = "${src.name}/subdir";`, or `setSourceRoot = "sourceRoot=$(echo */subdir)";` or similar instead.
- The `qemu-vm.nix` module by default now identifies block devices via
persistent names available in `/dev/disk/by-*`. Because the rootDevice is
identified by its filesystem label, it needs to be formatted before the VM is
@ -0,0 +1,20 @@
let
pkgs = import ../../.. {
config = {};
overlays = [];
};
common = import ./common.nix;
inherit (common) outputPath indexPath;
web-devmode = import ../../../pkgs/tools/nix/web-devmode.nix {
inherit pkgs;
buildArgs = "../../release.nix -A manualHTML.${builtins.currentSystem}";
open = "/${outputPath}/${indexPath}";
};
in
pkgs.mkShell {
packages = [
web-devmode
];
}
@ -109,8 +109,10 @@ let
nixosWithUserModules = noUserModules.extendModules { modules = allUserModules; };
withExtraArgs = nixosSystem: nixosSystem // {
inherit extraArgs;
inherit (nixosSystem._module.args) pkgs;
extendModules = args: withExtraArgs (nixosSystem.extendModules args);
};
in
withWarnings (withExtraArgs nixosWithUserModules)
@ -572,7 +572,7 @@ let format' = format; in let
${lib.optionalString installBootLoader ''
# In this throwaway resource, we only have /dev/vda, but the actual VM may refer to another disk for bootloader, e.g. /dev/vdb
# Use this option to create a symlink from vda to any arbitrary device you want.
${optionalString (config.boot.loader.grub.enable && config.boot.loader.grub.device != "/dev/vda") ''
mkdir -p $(dirname ${config.boot.loader.grub.device})
ln -s /dev/vda ${config.boot.loader.grub.device}
''}
@ -63,7 +63,12 @@ in rec {
assertMacAddress = name: group: attr:
optional (attr ? ${name} && ! isMacAddress attr.${name})
"Systemd ${group} field `${name}' must be a valid MAC address.";
assertNetdevMacAddress = name: group: attr:
optional (attr ? ${name} && (! isMacAddress attr.${name} && attr.${name} != "none"))
"Systemd ${group} field `${name}` must be a valid MAC address or the special value `none`.";
isPort = i: i >= 0 && i <= 65535;
@ -438,4 +443,21 @@ in rec {
${attrsToSection def.sliceConfig}
'';
};
# Create a directory that contains systemd definition files from an attrset
# that contains the file names as keys and the content as values. The values
# in that attrset are determined by the supplied format.
definitions = directoryName: format: definitionAttrs:
let
listOfDefinitions = lib.mapAttrsToList
(name: format.generate "${name}.conf")
definitionAttrs;
in
pkgs.runCommand directoryName { } ''
mkdir -p $out
${(lib.concatStringsSep "\n"
(map (pkg: "cp ${pkg} $out/${pkg.name}") listOfDefinitions)
)}
'';
}
@ -0,0 +1,66 @@
import ast
import sys
"""
This program takes all the Machine class methods and prints them in
markdown style, assuming the docstrings themselves are also in markdown.
These are included in the test driver documentation in the NixOS manual.
See https://nixos.org/manual/nixos/stable/#ssec-machine-objects
The python input looks like this:
```py
...
class Machine(...):
...
def some_function(self, param1, param2):
""
documentation string of some_function.
foo bar baz.
""
...
```
Output will be:
```markdown
...
some_function(param1, param2)
: documentation string of some_function.
foo bar baz.
...
```
"""
assert len(sys.argv) == 2
with open(sys.argv[1], "r") as f:
module = ast.parse(f.read())
class_definitions = (node for node in module.body if isinstance(node, ast.ClassDef))
machine_class = next(filter(lambda x: x.name == "Machine", class_definitions))
assert machine_class is not None
function_definitions = [
node for node in machine_class.body if isinstance(node, ast.FunctionDef)
]
function_definitions.sort(key=lambda x: x.name)
for f in function_definitions:
docstr = ast.get_docstring(f)
if docstr is not None:
args = ", ".join((a.arg for a in f.args.args[1:]))
args = f"({args})"
docstr = "\n".join((f" {l}" for l in docstr.strip().splitlines()))
print(f"{f.name}{args}\n\n:{docstr[1:]}\n")
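The ast-based extraction the script performs can be exercised on a toy module (`ping` here is a made-up method used only for illustration):

```python
import ast

source = '''
class Machine:
    def ping(self, host, count):
        """Send ICMP echoes to `host`."""
'''

module = ast.parse(source)
cls = next(n for n in module.body if isinstance(n, ast.ClassDef) and n.name == "Machine")
fn = next(n for n in cls.body if isinstance(n, ast.FunctionDef))
# Arguments after `self`, joined the same way the script does:
args = ", ".join(a.arg for a in fn.args.args[1:])
print(f"{fn.name}({args})")   # ping(host, count)
print(f": {ast.get_docstring(fn)}")
```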
@ -0,0 +1,13 @@
{ runCommand
, python3
}:
let
env = { nativeBuildInputs = [ python3 ]; };
in
runCommand "nixos-test-driver-docstrings" env ''
mkdir $out
python3 ${./extract-docstrings.py} ${./test_driver/machine.py} \
> $out/machine-methods.md
''
@ -416,6 +416,10 @@ class Machine:
return answer
def send_monitor_command(self, command: str) -> str:
"""
Send a command to the QEMU monitor. This allows attaching
virtual USB disks to a running machine, among other things.
"""
self.run_callbacks()
message = f"{command}\n".encode()
assert self.monitor is not None
@ -425,9 +429,10 @@ class Machine:
def wait_for_unit(
self, unit: str, user: Optional[str] = None, timeout: int = 900
) -> None:
"""
Wait for a systemd unit to get into "active" state.
Throws exceptions on "failed" and "inactive" states as well as after
timing out.
"""
def check_active(_: Any) -> bool:
@ -476,6 +481,19 @@ class Machine:
)
def systemctl(self, q: str, user: Optional[str] = None) -> Tuple[int, str]:
"""
Runs `systemctl` commands with optional support for
`systemctl --user`
```py
# run `systemctl list-jobs --no-pager`
machine.systemctl("list-jobs --no-pager")
# spawn a shell for `any-user` and run
# `systemctl --user list-jobs --no-pager`
machine.systemctl("list-jobs --no-pager", "any-user")
```
"""
if user is not None:
q = q.replace("'", "\\'")
return self.execute(
@ -520,6 +538,38 @@ class Machine:
check_output: bool = True,
timeout: Optional[int] = 900,
) -> Tuple[int, str]:
"""
Execute a shell command, returning a list `(status, stdout)`.
Commands are run with `set -euo pipefail` set:
- If several commands are separated by `;` and one fails, the
command as a whole will fail.
- For pipelines, the last non-zero exit status will be returned
(if there is one; otherwise zero will be returned).
- Dereferencing unset variables fails the command.
- It will wait for stdout to be closed.
If the command detaches, it must close stdout, as `execute` will wait
for this to consume all output reliably. This can be achieved by
redirecting stdout to stderr `>&2`, to `/dev/console`, `/dev/null` or
a file. Examples of detaching commands are `sleep 365d &`, where the
shell forks a new process that can write to stdout and `xclip -i`, where
the `xclip` command itself forks without closing stdout.
Takes an optional parameter `check_return` that defaults to `True`.
Setting this parameter to `False` will not check for the return code
and return -1 instead. This can be used for commands that shut down
the VM and would therefore break the pipe that would be used for
retrieving the return code.
A timeout for the command can be specified (in seconds) using the optional
`timeout` parameter, e.g., `execute(cmd, timeout=10)` or
`execute(cmd, timeout=None)`. The default is 900 seconds.
"""
self.run_callbacks()
self.connect()
@ -533,7 +583,7 @@ class Machine:
# While sh is bash on NixOS, this is not the case for every distro.
# We explicitly call bash here to allow for the driver to boot other distros as well.
out_command = (
f"{timeout_str} bash -c {shlex.quote(command)} | (base64 -w 0; echo)\n"
)
assert self.shell
@ -555,10 +605,11 @@ class Machine:
return (rc, output.decode(errors="replace"))
def shell_interact(self, address: Optional[str] = None) -> None:
"""
Allows you to directly interact with the guest shell. This should
only be used during test development, not in production tests.
Killing the interactive session with `Ctrl-d` or `Ctrl-c` also ends
the guest session.
"""
self.connect()
@ -577,12 +628,14 @@ class Machine:
pass
def console_interact(self) -> None:
"""
Allows you to directly interact with QEMU's stdin, by forwarding
terminal input to the QEMU process.
This is for use with the interactive test driver, not for production
tests, which run unattended.
Output from QEMU is only read line-wise. `Ctrl-c` kills QEMU and
`Ctrl-d` closes console and returns to the test runner.
"""
self.log("Terminal is ready (there is no prompt):")
assert self.process
@ -599,7 +652,12 @@ class Machine:
self.send_console(char.decode())
def succeed(self, *commands: str, timeout: Optional[int] = None) -> str:
"""
Execute a shell command, raising an exception if the exit status is
not zero, otherwise returning the standard output. Similar to `execute`,
except that the timeout is `None` by default. See `execute` for details on
command execution.
"""
output = ""
for command in commands:
with self.nested(f"must succeed: {command}"):
@ -611,7 +669,10 @@ class Machine:
return output
def fail(self, *commands: str, timeout: Optional[int] = None) -> str:
"""
Like `succeed`, but raising an exception if the command returns a zero
status.
"""
output = ""
for command in commands:
with self.nested(f"must fail: {command}"):
@ -622,7 +683,11 @@ class Machine:
return output
def wait_until_succeeds(self, command: str, timeout: int = 900) -> str:
"""
Repeat a shell command with 1-second intervals until it succeeds.
Has a default timeout of 900 seconds which can be modified, e.g.
`wait_until_succeeds(cmd, timeout=10)`. See `execute` for details on
command execution.
Throws an exception on timeout. Throws an exception on timeout.
""" """
output = "" output = ""
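The retry semantics documented for `wait_until_succeeds` (fixed-interval polling bounded by a timeout, with an exception on expiry) can be sketched as a standalone helper. The names below are illustrative, not part of the test driver:

```python
import time
from typing import Callable


def wait_until_succeeds(check: Callable[[], bool], timeout: float = 900.0,
                        interval: float = 1.0) -> None:
    """Re-run `check` at fixed intervals until it returns True.

    Raises TimeoutError when `timeout` seconds elapse without success,
    mirroring the retry loop described in the docstring above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")


# A probe that only "succeeds" on the third attempt.
attempts = []
def flaky() -> bool:
    attempts.append(1)
    return len(attempts) >= 3

wait_until_succeeds(flaky, timeout=5.0, interval=0.01)
print(len(attempts))  # 3
```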
@@ -637,8 +702,8 @@ class Machine:
        return output

    def wait_until_fails(self, command: str, timeout: int = 900) -> str:
        """
        Like `wait_until_succeeds`, but repeating the command until it fails.
        """
        output = ""
@@ -690,12 +755,19 @@ class Machine:
        retry(tty_matches)

    def send_chars(self, chars: str, delay: Optional[float] = 0.01) -> None:
        """
        Simulate typing a sequence of characters on the virtual keyboard,
        e.g., `send_chars("foobar\n")` will type the string `foobar`
        followed by the Enter key.
        """
        with self.nested(f"sending keys {repr(chars)}"):
            for char in chars:
                self.send_key(char, delay, log=False)

    def wait_for_file(self, filename: str) -> None:
        """
        Waits until the file exists in the machine's file system.
        """

        def check_file(_: Any) -> bool:
            status, _ = self.execute(f"test -e {filename}")
@@ -705,6 +777,11 @@ class Machine:
        retry(check_file)

    def wait_for_open_port(self, port: int, addr: str = "localhost") -> None:
        """
        Wait until a process is listening on the given TCP port and IP address
        (default `localhost`).
        """

        def port_is_open(_: Any) -> bool:
            status, _ = self.execute(f"nc -z {addr} {port}")
            return status == 0
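The `nc -z` probe used here amounts to a bare TCP connect attempt. A self-contained equivalent in plain Python (illustrative only, not driver code):

```python
import socket


def port_is_open(addr: str, port: int, timeout: float = 1.0) -> bool:
    """Equivalent of the `nc -z` probe: try a TCP connect, report success."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False


# Demonstrate against a listener we control.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # bind to any free port
server.listen(1)
port = server.getsockname()[1]
print(port_is_open("127.0.0.1", port))  # True
server.close()
```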
@@ -713,6 +790,11 @@ class Machine:
        retry(port_is_open)

    def wait_for_closed_port(self, port: int, addr: str = "localhost") -> None:
        """
        Wait until nobody is listening on the given TCP port and IP address
        (default `localhost`).
        """

        def port_is_closed(_: Any) -> bool:
            status, _ = self.execute(f"nc -z {addr} {port}")
            return status != 0
@@ -751,6 +833,9 @@ class Machine:
            # TODO: do we want to bail after a set number of attempts?
            while not shell_ready(timeout_secs=30):
                self.log("Guest root shell did not produce any data yet...")
                self.log(
                    "  To debug, enter the VM and run 'systemctl status backdoor.service'."
                )

            while True:
                chunk = self.shell.recv(1024)
@@ -766,6 +851,10 @@ class Machine:
        self.connected = True

    def screenshot(self, filename: str) -> None:
        """
        Take a picture of the display of the virtual machine, in PNG format.
        The screenshot will be available in the derivation output.
        """
        if "." not in filename:
            filename += ".png"
        if "/" not in filename:
@@ -795,8 +884,21 @@ class Machine:
        )

    def copy_from_host(self, source: str, target: str) -> None:
        """
        Copies a file from host to machine, e.g.,
        `copy_from_host("myfile", "/etc/my/important/file")`.

        The first argument is the file on the host. Note that the "host" refers
        to the environment in which the test driver runs, which is typically the
        Nix build sandbox.

        The second argument is the location of the file on the machine that will
        be written to.

        The file is copied via the `shared_dir` directory which is shared among
        all the VMs (using a temporary directory).
        The access rights bits will mimic the ones from the host file and
        user:group will be root:root.
        """
        host_src = Path(source)
        vm_target = Path(target)
@@ -848,12 +950,41 @@ class Machine:
        return _perform_ocr_on_screenshot(screenshot_path, model_ids)

    def get_screen_text_variants(self) -> List[str]:
        """
        Return a list of different interpretations of what is currently
        visible on the machine's screen using optical character
        recognition. The number and order of the interpretations is not
        specified and is subject to change, but if no exception is raised at
        least one will be returned.

        ::: {.note}
        This requires [`enableOCR`](#test-opt-enableOCR) to be set to `true`.
        :::
        """
        return self._get_screen_text_variants([0, 1, 2])

    def get_screen_text(self) -> str:
        """
        Return a textual representation of what is currently visible on the
        machine's screen using optical character recognition.

        ::: {.note}
        This requires [`enableOCR`](#test-opt-enableOCR) to be set to `true`.
        :::
        """
        return self._get_screen_text_variants([2])[0]

    def wait_for_text(self, regex: str) -> None:
        """
        Wait until the supplied regular expression matches the textual
        contents of the screen by using optical character recognition (see
        `get_screen_text` and `get_screen_text_variants`).

        ::: {.note}
        This requires [`enableOCR`](#test-opt-enableOCR) to be set to `true`.
        :::
        """

        def screen_matches(last: bool) -> bool:
            variants = self.get_screen_text_variants()
            for text in variants:
@@ -870,12 +1001,9 @@ class Machine:

    def wait_for_console_text(self, regex: str, timeout: int | None = None) -> None:
        """
        Wait until the supplied regular expression matches a line of the
        serial console output.
        This method is useful when OCR is not possible or inaccurate.
        """
        # Buffer the console output, this is needed
        # to match multiline regexes.
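The buffering comment above matters: a pattern that spans a read boundary is only found if chunks are accumulated before searching. A minimal sketch of that idea (names are illustrative):

```python
import re


def find_in_stream(chunks: list[str], pattern: str) -> bool:
    """Accumulate console chunks into one buffer so that regexes can
    match across chunk (and line) boundaries."""
    regex = re.compile(pattern)
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        if regex.search(buffer):
            return True
    return False


# "login:" arrives split across two reads; a per-chunk search would miss it.
chunks = ["nixos log", "in: "]
print(find_in_stream(chunks, "login:"))  # True
```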
@@ -903,6 +1031,13 @@ class Machine:
    def send_key(
        self, key: str, delay: Optional[float] = 0.01, log: Optional[bool] = True
    ) -> None:
        """
        Simulate pressing keys on the virtual keyboard, e.g.,
        `send_key("ctrl-alt-delete")`.

        Please also refer to the QEMU documentation for more information on the
        input syntax: https://en.wikibooks.org/wiki/QEMU/Monitor#sendkey_keys
        """
        key = CHAR_TO_KEY.get(key, key)
        context = self.nested(f"sending key {repr(key)}") if log else nullcontext()
        with context:
@@ -911,12 +1046,21 @@ class Machine:
            time.sleep(delay)

    def send_console(self, chars: str) -> None:
        r"""
        Send keys to the kernel console. This allows interaction with the systemd
        emergency mode, for example. Takes a string that is sent, e.g.,
        `send_console("\n\nsystemctl default\n")`.
        """
        assert self.process
        assert self.process.stdin

        self.process.stdin.write(chars.encode())
        self.process.stdin.flush()

    def start(self, allow_reboot: bool = False) -> None:
        """
        Start the virtual machine. This method is asynchronous --- it does
        not wait for the machine to finish booting.
        """
        if self.booted:
            return
@@ -974,6 +1118,9 @@ class Machine:
        rootlog.log("if you want to keep the VM state, pass --keep-vm-state")

    def shutdown(self) -> None:
        """
        Shut down the machine, waiting for the VM to exit.
        """
        if not self.booted:
            return
@@ -982,6 +1129,9 @@ class Machine:
        self.wait_for_shutdown()

    def crash(self) -> None:
        """
        Simulate a sudden power failure, by telling the VM to exit immediately.
        """
        if not self.booted:
            return
@@ -999,8 +1149,8 @@ class Machine:
        self.connected = False

    def wait_for_x(self) -> None:
        """
        Wait until it is possible to connect to the X server.
        """

        def check_x(_: Any) -> bool:
@@ -1023,6 +1173,10 @@ class Machine:
        ).splitlines()

    def wait_for_window(self, regexp: str) -> None:
        """
        Wait until an X11 window has appeared whose name matches the given
        regular expression, e.g., `wait_for_window("Terminal")`.
        """
        pattern = re.compile(regexp)

        def window_is_visible(last_try: bool) -> bool:
@@ -1043,20 +1197,26 @@ class Machine:
        self.succeed(f"sleep {secs}")

    def forward_port(self, host_port: int = 8080, guest_port: int = 80) -> None:
        """
        Forward a TCP port on the host to a TCP port on the guest.
        Useful during interactive testing.
        """
        self.send_monitor_command(f"hostfwd_add tcp::{host_port}-:{guest_port}")

    def block(self) -> None:
        """
        Simulate unplugging the Ethernet cable that connects the machine to
        the other machines.
        This happens by shutting down eth1 (the multicast interface used to talk
        to the other VMs). eth0 is kept online to still enable the test driver
        to communicate with the machine.
        """
        self.send_monitor_command("set_link virtio-net-pci.1 off")

    def unblock(self) -> None:
        """
        Undo the effect of `block`.
        """
        self.send_monitor_command("set_link virtio-net-pci.1 on")

    def release(self) -> None:
def release(self) -> None: def release(self) -> None:

View file

@@ -65,7 +65,8 @@ let
          echo "${builtins.toString vlanNames}" >> testScriptWithTypes
          echo -n "$testScript" >> testScriptWithTypes

          echo "Running type check (enable/disable: config.skipTypeCheck)"
          echo "See https://nixos.org/manual/nixos/stable/#test-opt-skipTypeCheck"

          mypy --no-implicit-optional \
            --pretty \
@@ -79,6 +80,9 @@ let
        ${testDriver}/bin/generate-driver-symbols
        ${lib.optionalString (!config.skipLint) ''
          echo "Linting test script (enable/disable: config.skipLint)"
          echo "See https://nixos.org/manual/nixos/stable/#test-opt-skipLint"

          PYFLAKES_BUILTINS="$(
            echo -n ${lib.escapeShellArg (lib.concatStringsSep "," pythonizedNames)},
            < ${lib.escapeShellArg "driver-symbols"}

View file

@@ -42,7 +42,7 @@ let
  # looking things up.
  makeCacheConf = { }:
    let
      makeCache = fontconfig: pkgs.makeFontsCache { inherit fontconfig; fontDirectories = config.fonts.packages; };
      cache = makeCache pkgs.fontconfig;
      cache32 = makeCache pkgs.pkgsi686Linux.fontconfig;
    in
@@ -51,7 +51,7 @@ let
      <!DOCTYPE fontconfig SYSTEM 'urn:fontconfig:fonts.dtd'>
      <fontconfig>
        <!-- Font directories -->
        ${concatStringsSep "\n" (map (font: "<dir>${font}</dir>") config.fonts.packages)}
        ${optionalString (pkgs.stdenv.hostPlatform == pkgs.stdenv.buildPlatform) ''
        <!-- Pre-generated font caches -->
        <cachedir>${cache}</cachedir>

View file

@@ -9,7 +9,7 @@ let
  x11Fonts = pkgs.runCommand "X11-fonts" { preferLocalBuild = true; } ''
    mkdir -p "$out/share/X11/fonts"
    font_regexp='.*\.\(ttf\|ttc\|otb\|otf\|pcf\|pfa\|pfb\|bdf\)\(\.gz\)?'
    find ${toString config.fonts.packages} -regex "$font_regexp" \
      -exec ln -sf -t "$out/share/X11/fonts" '{}' \;
    cd "$out/share/X11/fonts"
    ${optionalString cfg.decompressFonts ''

View file

@@ -1,47 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.fonts;
defaultFonts =
[ pkgs.dejavu_fonts
pkgs.freefont_ttf
pkgs.gyre-fonts # TrueType substitutes for standard PostScript fonts
pkgs.liberation_ttf
pkgs.unifont
pkgs.noto-fonts-emoji
];
in
{
imports = [
(mkRemovedOptionModule [ "fonts" "enableCoreFonts" ] "Use fonts.fonts = [ pkgs.corefonts ]; instead.")
];
options = {
fonts = {
# TODO: find another name for it.
fonts = mkOption {
type = types.listOf types.path;
default = [];
example = literalExpression "[ pkgs.dejavu_fonts ]";
description = lib.mdDoc "List of primary font paths.";
};
enableDefaultFonts = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Enable a basic set of fonts providing several font styles
and families and reasonable coverage of Unicode.
'';
};
};
};
config = { fonts.fonts = mkIf cfg.enableDefaultFonts defaultFonts; };
}

View file

@@ -3,31 +3,21 @@

with lib;

{
  options = {
    fonts.enableGhostscriptFonts = mkOption {
      type = types.bool;
      default = false;
      description = lib.mdDoc ''
        Whether to add the fonts provided by Ghostscript (such as
        various URW fonts and the Base-14 Postscript fonts) to the
        list of system fonts, making them available to X11
        applications.
      '';
    };
  };

  config = mkIf config.fonts.enableGhostscriptFonts {
    fonts.packages = [ "${pkgs.ghostscript}/share/ghostscript/fonts" ];
  };
}

View file

@@ -0,0 +1,43 @@
{ config, lib, pkgs, ... }:
let
cfg = config.fonts;
in
{
imports = [
(lib.mkRemovedOptionModule [ "fonts" "enableCoreFonts" ] "Use fonts.packages = [ pkgs.corefonts ]; instead.")
(lib.mkRenamedOptionModule [ "fonts" "enableDefaultFonts" ] [ "fonts" "enableDefaultPackages" ])
(lib.mkRenamedOptionModule [ "fonts" "fonts" ] [ "fonts" "packages" ])
];
options = {
fonts = {
packages = lib.mkOption {
type = with lib.types; listOf path;
default = [];
example = lib.literalExpression "[ pkgs.dejavu_fonts ]";
description = lib.mdDoc "List of primary font packages.";
};
enableDefaultPackages = lib.mkOption {
type = lib.types.bool;
default = false;
description = lib.mdDoc ''
Enable a basic set of fonts providing several styles
and families and reasonable coverage of Unicode.
'';
};
};
};
config = {
fonts.packages = lib.mkIf cfg.enableDefaultPackages (with pkgs; [
dejavu_fonts
freefont_ttf
gyre-fonts # TrueType substitutes for standard PostScript fonts
liberation_ttf
unifont
noto-fonts-emoji
]);
};
}

View file

@@ -3,12 +3,13 @@
  configuration to work.

  See also
   - ./nix.nix
   - ./nix-flakes.nix
*/
{ config, lib, ... }:
let
  inherit (lib)
    mkDefault
    mkIf
    mkOption
    stringAfter
@@ -21,13 +22,42 @@ in
{
  options = {
    nix = {
      channel = {
        enable = mkOption {
          description = lib.mdDoc ''
            Whether the `nix-channel` command and state files are made available on the machine.

            The following files are initialized when enabled:
              - `/nix/var/nix/profiles/per-user/root/channels`
              - `/root/.nix-channels`
              - `$HOME/.nix-defexpr/channels` (on login)

            Disabling this option will not remove the state files from the system.
          '';
          type = types.bool;
          default = true;
        };
      };

      nixPath = mkOption {
        type = types.listOf types.str;
        default =
          if cfg.channel.enable
          then [
            "nixpkgs=/nix/var/nix/profiles/per-user/root/channels/nixos"
            "nixos-config=/etc/nixos/configuration.nix"
            "/nix/var/nix/profiles/per-user/root/channels"
          ]
          else [ ];
        defaultText = ''
          if nix.channel.enable
          then [
            "nixpkgs=/nix/var/nix/profiles/per-user/root/channels/nixos"
            "nixos-config=/etc/nixos/configuration.nix"
            "/nix/var/nix/profiles/per-user/root/channels"
          ]
          else [];
        '';
        description = lib.mdDoc ''
          The default Nix expression search path, used by the Nix
          evaluator to look up paths enclosed in angle brackets
@@ -49,22 +79,30 @@ in

  config = mkIf cfg.enable {
    environment.extraInit =
      mkIf cfg.channel.enable ''
        if [ -e "$HOME/.nix-defexpr/channels" ]; then
          export NIX_PATH="$HOME/.nix-defexpr/channels''${NIX_PATH:+:$NIX_PATH}"
        fi
      '';

    environment.extraSetup = mkIf (!cfg.channel.enable) ''
      rm --force $out/bin/nix-channel
    '';

    # NIX_PATH has a non-empty default according to Nix docs, so we don't unset
    # it when empty.
    environment.sessionVariables = {
      NIX_PATH = cfg.nixPath;
    };

    nix.settings.nix-path = mkIf (! cfg.channel.enable) (mkDefault "");

    system.activationScripts.nix-channel = mkIf cfg.channel.enable
      (stringAfter [ "etc" "users" ] ''
        # Subscribe the root user to the NixOS channel by default.
        if [ ! -e "/root/.nix-channels" ]; then
            echo "${config.system.defaultChannel} nixos" > "/root/.nix-channels"
        fi
      '');
  };
}

View file

@@ -19,7 +19,7 @@ let
      pkgs.qgnomeplatform-qt6
      pkgs.adwaita-qt6
    ]
    else if isQtStyle then [ pkgs.libsForQt5.qtstyleplugins pkgs.qt6Packages.qt6gtk2 ]
    else if isQt5ct then [ pkgs.libsForQt5.qt5ct pkgs.qt6Packages.qt6ct ]
    else if isLxqt then [ pkgs.lxqt.lxqt-qtplugin pkgs.lxqt.lxqt-config ]
    else if isKde then [ pkgs.libsForQt5.plasma-integration pkgs.libsForQt5.systemsettings ]
@@ -86,6 +86,7 @@ in
        "adwaita-qt"
        "adwaita-qt6"
        ["libsForQt5" "qtstyleplugins"]
        ["qt6Packages" "qt6gtk2"]
      ];
      description = lib.mdDoc ''
        Selects the style to use for Qt applications.

View file

@@ -7,12 +7,15 @@ with lib;

  options = {
    hardware.usb-modeswitch = {
      enable = mkOption {
        type = types.bool;
        default = false;
        description = lib.mdDoc ''
          Enable this option to support certain USB WLAN and WWAN adapters.
          These network adapters initially present themselves as flash drives
          containing their drivers. This option enables automatic switching to
          the networking mode.
        '';
      };
    };

@@ -20,7 +23,11 @@ with lib;

  ###### implementation

  imports = [
    (mkRenamedOptionModule ["hardware" "usbWwan" ] ["hardware" "usb-modeswitch" ])
  ];

  config = mkIf config.hardware.usb-modeswitch.enable {
    # Attaches device specific handlers.
    services.udev.packages = with pkgs; [ usb-modeswitch-data ];
View file

@@ -2,8 +2,8 @@

with lib;

{
  options.hardware.wooting.enable = mkEnableOption (lib.mdDoc ''support for Wooting keyboards.
      Note that users must be in the "input" group for udev rules to apply'');

  config = mkIf config.hardware.wooting.enable {
    environment.systemPackages = [ pkgs.wootility ];

View file

@@ -12,12 +12,34 @@ in
    i18n.inputMethod.fcitx5 = {
      addons = mkOption {
        type = with types; listOf package;
        default = [ ];
        example = literalExpression "with pkgs; [ fcitx5-rime ]";
        description = lib.mdDoc ''
          Enabled Fcitx5 addons.
        '';
      };
      quickPhrase = mkOption {
        type = with types; attrsOf string;
        default = { };
        example = literalExpression ''
          {
            smile = "";
            angry = "()";
          }
        '';
        description = lib.mdDoc "Quick phrases.";
      };
      quickPhraseFiles = mkOption {
        type = with types; attrsOf path;
        default = { };
        example = literalExpression ''
          {
            words = ./words.mb;
            numbers = ./numbers.mb;
          }
        '';
        description = lib.mdDoc "Quick phrase files.";
      };
    };
  };

@@ -30,6 +52,16 @@ in
  config = mkIf (im.enabled == "fcitx5") {
    i18n.inputMethod.package = fcitx5Package;

    i18n.inputMethod.fcitx5.addons = lib.optionals (cfg.quickPhrase != { }) [
      (pkgs.writeTextDir "share/fcitx5/data/QuickPhrase.mb"
        (lib.concatStringsSep "\n"
          (lib.mapAttrsToList (name: value: "${name} ${value}") cfg.quickPhrase)))
    ] ++ lib.optionals (cfg.quickPhraseFiles != { }) [
      (pkgs.linkFarm "quickPhraseFiles" (lib.mapAttrs'
        (name: value: lib.nameValuePair ("share/fcitx5/data/quickphrase.d/${name}.mb") value)
        cfg.quickPhraseFiles))
    ];

    environment.variables = {
      GTK_IM_MODULE = "fcitx";
      QT_IM_MODULE = "fcitx";
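The generated `QuickPhrase.mb` file is just one `trigger phrase` pair per line; the `mapAttrsToList`/`concatStringsSep` pipeline above corresponds to this sketch (hypothetical helper, shown for illustration):

```python
def quick_phrase_file(phrases: dict[str, str]) -> str:
    """Mirror of the Nix expression above: one 'trigger phrase' pair
    per line, joined with newlines."""
    return "\n".join(f"{name} {value}" for name, value in phrases.items())


print(quick_phrase_file({"brb": "be right back", "omw": "on my way"}))
# brb be right back
# omw on my way
```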

View file

@@ -0,0 +1,112 @@
#!/usr/bin/env python
"""Amend systemd-repart definition files.
In order to avoid Import-From-Derivation (IFD) when building images with
systemd-repart, the definition files created by Nix need to be amended with the
store paths from the closure.
This is achieved by adding CopyFiles= instructions to the definition files.
The arbitrary files configured via `contents` are also added to the definition
files using the same mechanism.
"""
import json
import sys
import shutil
from pathlib import Path
def add_contents_to_definition(
definition: Path, contents: dict[str, dict[str, str]] | None
) -> None:
"""Add CopyFiles= instructions to a definition for all files in contents."""
if not contents:
return
copy_files_lines: list[str] = []
for target, options in contents.items():
source = options["source"]
copy_files_lines.append(f"CopyFiles={source}:{target}\n")
with open(definition, "a") as f:
f.writelines(copy_files_lines)
def add_closure_to_definition(
definition: Path, closure: Path | None, strip_nix_store_prefix: bool | None
) -> None:
"""Add CopyFiles= instructions to a definition for all paths in the closure.
If strip_nix_store_prefix is True, `/nix/store` is stripped from the target path.
"""
if not closure:
return
copy_files_lines: list[str] = []
with open(closure, "r") as f:
for line in f:
if not isinstance(line, str):
continue
source = Path(line.strip())
target = str(source.relative_to("/nix/store/"))
target = f":{target}" if strip_nix_store_prefix else ""
copy_files_lines.append(f"CopyFiles={source}{target}\n")
with open(definition, "a") as f:
f.writelines(copy_files_lines)
def main() -> None:
"""Amend the provided repart definitions by adding CopyFiles= instructions.
For each file specified in the `contents` field of a partition in the
partition config file, a `CopyFiles=` instruction is added to the
corresponding definition file.
The same is done for every store path of the `closure` field.
Print the path to a directory that contains the amended repart
definitions to stdout.
"""
partition_config_file = sys.argv[1]
if not partition_config_file:
print("No partition config file was supplied.")
sys.exit(1)
repart_definitions = sys.argv[2]
if not repart_definitions:
print("No repart definitions were supplied.")
sys.exit(1)
with open(partition_config_file, "rb") as f:
partition_config = json.load(f)
if not partition_config:
print("Partition config is empty.")
sys.exit(1)
target_dir = Path("amended-repart.d")
target_dir.mkdir()
shutil.copytree(repart_definitions, target_dir, dirs_exist_ok=True)
for name, config in partition_config.items():
definition = target_dir.joinpath(f"{name}.conf")
definition.chmod(0o644)
contents = config.get("contents")
add_contents_to_definition(definition, contents)
closure = config.get("closure")
strip_nix_store_prefix = config.get("stripStorePaths")
add_closure_to_definition(definition, closure, strip_nix_store_prefix)
print(target_dir.absolute())
if __name__ == "__main__":
main()
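The `CopyFiles=` line generation in `add_contents_to_definition` can be exercised in isolation. This sketch reproduces only the line formatting, with a made-up store path:

```python
def copy_files_lines(contents: dict[str, dict[str, str]]) -> list[str]:
    """Build CopyFiles= lines the same way add_contents_to_definition does:
    one CopyFiles=SOURCE:TARGET instruction per contents entry."""
    return [
        f"CopyFiles={options['source']}:{target}\n"
        for target, options in contents.items()
    ]


contents = {"/EFI/BOOT/BOOTX64.EFI": {"source": "/nix/store/abc-systemd/bootx64.efi"}}
print(copy_files_lines(contents)[0].strip())
# CopyFiles=/nix/store/abc-systemd/bootx64.efi:/EFI/BOOT/BOOTX64.EFI
```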

View file

@@ -0,0 +1,137 @@
# Building Images via `systemd-repart` {#sec-image-repart}
You can build disk images in NixOS with the `image.repart` option provided by
the module [image/repart.nix][]. This module uses `systemd-repart` to build the
images and exposes its entire interface via the `repartConfig` option.
[image/repart.nix]: https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/image/repart.nix
An example of how to build an image:
```nix
{ config, modulesPath, ... }: {
imports = [ "${modulesPath}/image/repart.nix" ];
image.repart = {
name = "image";
partitions = {
"esp" = {
contents = {
...
};
repartConfig = {
Type = "esp";
...
};
};
"root" = {
storePaths = [ config.system.build.toplevel ];
repartConfig = {
Type = "root";
Label = "nixos";
...
};
};
};
};
}
```
## Nix Store Partition {#sec-image-repart-store-partition}
You can define a partition that only contains the Nix store and then mount it
under `/nix/store`. Because the `/nix/store` part of the paths is already
determined by the mount point, you have to set `stripNixStorePrefix = true;` so
that the prefix is stripped from the paths before copying them into the image.
```nix
fileSystems."/nix/store".device = "/dev/disk/by-partlabel/nix-store";
image.repart.partitions = {
"store" = {
storePaths = [ config.system.build.toplevel ];
stripNixStorePrefix = true;
repartConfig = {
Type = "linux-generic";
Label = "nix-store";
...
};
};
};
```
## Appliance Image {#sec-image-repart-appliance}
The `image/repart.nix` module can also be used to build self-contained [software
appliances][].
[software appliances]: https://en.wikipedia.org/wiki/Software_appliance
The generation based update mechanism of NixOS is not suited for appliances.
Updates of appliances are usually either performed by replacing the entire
image with a new one or by updating partitions via an A/B scheme. See the
[Chrome OS update process][chrome-os-update] for an example of how to achieve
this. The appliance image built in the following example does not contain a
`configuration.nix` and thus you will not be able to call `nixos-rebuild` from
this system.
[chrome-os-update]: https://chromium.googlesource.com/aosp/platform/system/update_engine/+/HEAD/README.md
```nix
let
pkgs = import <nixpkgs> { };
efiArch = pkgs.stdenv.hostPlatform.efiArch;
in
(pkgs.nixos [
({ config, lib, pkgs, modulesPath, ... }: {
imports = [ "${modulesPath}/image/repart.nix" ];
boot.loader.grub.enable = false;
fileSystems."/".device = "/dev/disk/by-label/nixos";
image.repart = {
name = "image";
partitions = {
"esp" = {
contents = {
"/EFI/BOOT/BOOT${lib.toUpper efiArch}.EFI".source =
"${pkgs.systemd}/lib/systemd/boot/efi/systemd-boot${efiArch}.efi";
"/loader/entries/nixos.conf".source = pkgs.writeText "nixos.conf" ''
title NixOS
linux /EFI/nixos/kernel.efi
initrd /EFI/nixos/initrd.efi
options init=${config.system.build.toplevel}/init ${toString config.boot.kernelParams}
'';
"/EFI/nixos/kernel.efi".source =
"${config.boot.kernelPackages.kernel}/${config.system.boot.loader.kernelFile}";
"/EFI/nixos/initrd.efi".source =
"${config.system.build.initialRamdisk}/${config.system.boot.loader.initrdFile}";
};
repartConfig = {
Type = "esp";
Format = "vfat";
SizeMinBytes = "96M";
};
};
"root" = {
storePaths = [ config.system.build.toplevel ];
repartConfig = {
Type = "root";
Format = "ext4";
Label = "nixos";
Minimize = "guess";
};
};
};
};
})
]).image
```

View file

@@ -0,0 +1,209 @@
# This module exposes options to build a disk image with a GUID Partition Table
# (GPT). It uses systemd-repart to build the image.
{ config, pkgs, lib, utils, ... }:
let
cfg = config.image.repart;
partitionOptions = {
options = {
storePaths = lib.mkOption {
type = with lib.types; listOf path;
default = [ ];
description = lib.mdDoc "The store paths to include in the partition.";
};
stripNixStorePrefix = lib.mkOption {
type = lib.types.bool;
default = false;
description = lib.mdDoc ''
Whether to strip `/nix/store/` from the store paths. This is useful
when you want to build a partition that only contains store paths and
is mounted under `/nix/store`.
'';
};
contents = lib.mkOption {
type = with lib.types; attrsOf (submodule {
options = {
source = lib.mkOption {
type = types.path;
description = lib.mdDoc "Path of the source file.";
};
};
});
default = { };
example = lib.literalExpression '' {
"/EFI/BOOT/BOOTX64.EFI".source =
"''${pkgs.systemd}/lib/systemd/boot/efi/systemd-bootx64.efi";
"/loader/entries/nixos.conf".source = systemdBootEntry;
}
'';
description = lib.mdDoc "The contents to end up in the filesystem image.";
};
repartConfig = lib.mkOption {
type = with lib.types; attrsOf (oneOf [ str int bool ]);
example = {
Type = "home";
SizeMinBytes = "512M";
SizeMaxBytes = "2G";
};
description = lib.mdDoc ''
          Specify the repart options for a partition as a structured setting.
See <https://www.freedesktop.org/software/systemd/man/repart.d.html>
for all available options.
'';
};
};
};
in
{
options.image.repart = {
name = lib.mkOption {
type = lib.types.str;
description = lib.mdDoc "The name of the image.";
};
seed = lib.mkOption {
type = with lib.types; nullOr str;
# Generated with `uuidgen`. Random but fixed to improve reproducibility.
default = "0867da16-f251-457d-a9e8-c31f9a3c220b";
description = lib.mdDoc ''
A UUID to use as a seed. You can set this to `null` to explicitly
randomize the partition UUIDs.
'';
};
split = lib.mkOption {
type = lib.types.bool;
default = false;
description = lib.mdDoc ''
Enables generation of split artifacts from partitions. If enabled, for
each partition with SplitName= set, a separate output file containing
just the contents of that partition is generated.
'';
};
partitions = lib.mkOption {
type = with lib.types; attrsOf (submodule partitionOptions);
default = { };
example = lib.literalExpression '' {
"10-esp" = {
contents = {
"/EFI/BOOT/BOOTX64.EFI".source =
"''${pkgs.systemd}/lib/systemd/boot/efi/systemd-bootx64.efi";
}
repartConfig = {
Type = "esp";
Format = "fat";
};
};
"20-root" = {
storePaths = [ config.system.build.toplevel ];
repartConfig = {
Type = "root";
Format = "ext4";
Minimize = "guess";
};
};
};
'';
description = lib.mdDoc ''
      Specify partitions as an attribute set, mapping each partition name to
      its configuration.
'';
};
};
config = {
system.build.image =
let
fileSystemToolMapping = with pkgs; {
"vfat" = [ dosfstools mtools ];
"ext4" = [ e2fsprogs.bin ];
"squashfs" = [ squashfsTools ];
"erofs" = [ erofs-utils ];
"btrfs" = [ btrfs-progs ];
"xfs" = [ xfsprogs ];
};
fileSystems = lib.filter
(f: f != null)
(lib.mapAttrsToList (_n: v: v.repartConfig.Format or null) cfg.partitions);
fileSystemTools = builtins.concatMap (f: fileSystemToolMapping."${f}") fileSystems;
makeClosure = paths: pkgs.closureInfo { rootPaths = paths; };
# Add the closure of the provided Nix store paths to cfg.partitions so
# that amend-repart-definitions.py can read it.
addClosure = _name: partitionConfig: partitionConfig // (
lib.optionalAttrs
(partitionConfig.storePaths or [ ] != [ ])
{ closure = "${makeClosure partitionConfig.storePaths}/store-paths"; }
);
finalPartitions = lib.mapAttrs addClosure cfg.partitions;
amendRepartDefinitions = pkgs.runCommand "amend-repart-definitions.py"
{
nativeBuildInputs = with pkgs; [ black ruff mypy ];
buildInputs = [ pkgs.python3 ];
} ''
install ${./amend-repart-definitions.py} $out
patchShebangs --host $out
black --check --diff $out
ruff --line-length 88 $out
mypy --strict $out
'';
format = pkgs.formats.ini { };
definitionsDirectory = utils.systemdUtils.lib.definitions
"repart.d"
format
(lib.mapAttrs (_n: v: { Partition = v.repartConfig; }) finalPartitions);
partitions = pkgs.writeText "partitions.json" (builtins.toJSON finalPartitions);
in
pkgs.runCommand cfg.name
{
nativeBuildInputs = with pkgs; [
fakeroot
systemd
] ++ fileSystemTools;
} ''
amendedRepartDefinitions=$(${amendRepartDefinitions} ${partitions} ${definitionsDirectory})
mkdir -p $out
cd $out
fakeroot systemd-repart \
--dry-run=no \
--empty=create \
--size=auto \
--seed="${cfg.seed}" \
--definitions="$amendedRepartDefinitions" \
--split="${lib.boolToString cfg.split}" \
--json=pretty \
image.raw \
| tee repart-output.json
'';
meta = {
maintainers = with lib.maintainers; [ nikstur ];
doc = ./repart.md;
};
};
}
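For context, a minimal consumer of the options defined above might look like the following NixOS module sketch (untested; it assumes this file is already imported and that `config.system.build.toplevel` is available from the surrounding system configuration):

```nix
{ config, ... }:
{
  image.repart = {
    name = "minimal-image";
    partitions."10-root" = {
      # Include the system closure in the root partition.
      storePaths = [ config.system.build.toplevel ];
      repartConfig = {
        Type = "root";
        Format = "ext4";
        Minimize = "guess";
      };
    };
  };
}
```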

View file

@ -206,7 +206,7 @@ if [[ -z $noBootLoader ]]; then
 mount --rbind --mkdir / "$mountPoint"
 mount --make-rslave "$mountPoint"
 /run/current-system/bin/switch-to-configuration boot
-umount -R "$mountPoint" && rmdir "$mountPoint"
+umount -R "$mountPoint" && (rmdir "$mountPoint" 2>/dev/null || true)
 EOF
 )"
 fi

View file

@ -126,7 +126,7 @@ in
 # your system. Help is available in the configuration.nix(5) man page
 # and in the NixOS manual (accessible by running `nixos-help`).
-{ config, pkgs, ... }:
+{ config, lib, pkgs, ... }:
 {
 imports =
View file

@ -4,8 +4,8 @@
 ./config/debug-info.nix
 ./config/fonts/fontconfig.nix
 ./config/fonts/fontdir.nix
-./config/fonts/fonts.nix
 ./config/fonts/ghostscript.nix
+./config/fonts/packages.nix
 ./config/gnu.nix
 ./config/gtk/gtk-icon-cache.nix
 ./config/i18n.nix
@ -93,8 +93,8 @@
 ./hardware/tuxedo-keyboard.nix
 ./hardware/ubertooth.nix
 ./hardware/uinput.nix
+./hardware/usb-modeswitch.nix
 ./hardware/usb-storage.nix
-./hardware/usb-wwan.nix
 ./hardware/video/amdgpu-pro.nix
 ./hardware/video/bumblebee.nix
 ./hardware/video/capture/mwprocapture.nix
@ -221,7 +221,9 @@
 ./programs/nncp.nix
 ./programs/noisetorch.nix
 ./programs/npm.nix
+./programs/ns-usbloader.nix
 ./programs/oblogout.nix
+./programs/oddjobd.nix
 ./programs/openvpn3.nix
 ./programs/pantheon-tweaks.nix
 ./programs/partition-manager.nix
@ -263,6 +265,7 @@
 ./programs/wayland/river.nix
 ./programs/wayland/sway.nix
 ./programs/wayland/waybar.nix
+./programs/wayland/wayfire.nix
 ./programs/weylus.nix
 ./programs/wireshark.nix
 ./programs/xastir.nix
@ -417,6 +420,7 @@
 ./services/databases/neo4j.nix
 ./services/databases/openldap.nix
 ./services/databases/opentsdb.nix
+./services/databases/pgbouncer.nix
 ./services/databases/pgmanage.nix
 ./services/databases/postgresql.nix
 ./services/databases/redis.nix
@ -535,6 +539,7 @@
 ./services/hardware/usbrelayd.nix
 ./services/hardware/vdr.nix
 ./services/hardware/keyd.nix
+./services/home-automation/ebusd.nix
 ./services/home-automation/esphome.nix
 ./services/home-automation/evcc.nix
 ./services/home-automation/home-assistant.nix
@ -598,6 +603,7 @@
 ./services/matrix/mjolnir.nix
 ./services/matrix/mx-puppet-discord.nix
 ./services/matrix/pantalaimon.nix
+./services/matrix/matrix-sliding-sync.nix
 ./services/matrix/synapse.nix
 ./services/misc/airsonic.nix
 ./services/misc/ananicy.nix
@ -608,6 +614,7 @@
 ./services/misc/autorandr.nix
 ./services/misc/autosuspend.nix
 ./services/misc/bazarr.nix
+./services/misc/bcg.nix
 ./services/misc/beanstalkd.nix
 ./services/misc/bees.nix
 ./services/misc/bepasty.nix
@ -631,6 +638,7 @@
 ./services/misc/etcd.nix
 ./services/misc/etebase-server.nix
 ./services/misc/etesync-dav.nix
+./services/misc/evdevremapkeys.nix
 ./services/misc/felix.nix
 ./services/misc/freeswitch.nix
 ./services/misc/fstrim.nix
@ -665,6 +673,7 @@
 ./services/misc/mediatomb.nix
 ./services/misc/metabase.nix
 ./services/misc/moonraker.nix
+./services/misc/mqtt2influxdb.nix
 ./services/misc/n8n.nix
 ./services/misc/nitter.nix
 ./services/misc/nix-gc.nix
@ -761,6 +770,7 @@
 ./services/monitoring/nagios.nix
 ./services/monitoring/netdata.nix
 ./services/monitoring/opentelemetry-collector.nix
+./services/monitoring/osquery.nix
 ./services/monitoring/parsedmarc.nix
 ./services/monitoring/prometheus/alertmanager-irc-relay.nix
 ./services/monitoring/prometheus/alertmanager.nix
@ -855,7 +865,6 @@
 ./services/networking/croc.nix
 ./services/networking/dante.nix
 ./services/networking/dhcpcd.nix
-./services/networking/dhcpd.nix
 ./services/networking/dnscache.nix
 ./services/networking/dnscrypt-proxy2.nix
 ./services/networking/dnscrypt-wrapper.nix
@ -1103,6 +1112,7 @@
 ./services/search/meilisearch.nix
 ./services/search/opensearch.nix
 ./services/search/qdrant.nix
+./services/search/typesense.nix
 ./services/security/aesmd.nix
 ./services/security/authelia.nix
 ./services/security/certmgr.nix
@ -1142,6 +1152,7 @@
 ./services/security/vaultwarden/default.nix
 ./services/security/yubikey-agent.nix
 ./services/system/automatic-timezoned.nix
+./services/system/bpftune.nix
 ./services/system/cachix-agent/default.nix
 ./services/system/cachix-watch-store.nix
 ./services/system/cloud-init.nix
@ -1255,6 +1266,7 @@
 ./services/web-apps/rss-bridge.nix
 ./services/web-apps/selfoss.nix
 ./services/web-apps/shiori.nix
+./services/web-apps/slskd.nix
 ./services/web-apps/snipe-it.nix
 ./services/web-apps/sogo.nix
 ./services/web-apps/trilium.nix
@ -1387,6 +1399,7 @@
 ./system/boot/systemd/oomd.nix
 ./system/boot/systemd/repart.nix
 ./system/boot/systemd/shutdown.nix
+./system/boot/systemd/sysupdate.nix
 ./system/boot/systemd/tmpfiles.nix
 ./system/boot/systemd/user.nix
 ./system/boot/systemd/userdbd.nix

View file

@ -21,7 +21,8 @@ in
 ../virtualisation/nixos-containers.nix
 ../services/x11/desktop-managers/xterm.nix
 ];
-config = { };
+# swraid's default depends on stateVersion
+config.boot.swraid.enable = false;
 options.boot.isContainer = lib.mkOption { default = false; internal = true; };
 }
 ];

View file

@ -123,8 +123,8 @@ in
 boot.extraModulePackages = [ (lib.mkIf cfg.netatop.enable cfg.netatop.package) ];
 systemd =
 let
-mkSystemd = type: cond: name: restartTriggers: {
-${name} = lib.mkIf cond {
+mkSystemd = type: name: restartTriggers: {
+${name} = {
 inherit restartTriggers;
 wantedBy = [ (if type == "services" then "multi-user.target" else if type == "timers" then "timers.target" else null) ];
 };
@ -134,42 +134,44 @@ in
 in
 {
 packages = [ atop (lib.mkIf cfg.netatop.enable cfg.netatop.package) ];
-services =
-mkService cfg.atopService.enable "atop" [ atop ]
-// lib.mkIf cfg.atopService.enable {
-# always convert logs to newer version first
-# XXX might trigger TimeoutStart but restarting atop.service will
-# convert remainings logs and start eventually
-atop.serviceConfig.ExecStartPre = pkgs.writeShellScript "atop-update-log-format" ''
-set -e -u
-shopt -s nullglob
-for logfile in "$LOGPATH"/atop_*
-do
-${atop}/bin/atopconvert "$logfile" "$logfile".new
-# only replace old file if version was upgraded to avoid
-# false positives for atop-rotate.service
-if ! ${pkgs.diffutils}/bin/cmp -s "$logfile" "$logfile".new
-then
-${pkgs.coreutils}/bin/mv -v -f "$logfile".new "$logfile"
-else
-${pkgs.coreutils}/bin/rm -f "$logfile".new
-fi
-done
-'';
-}
-// mkService cfg.atopacctService.enable "atopacct" [ atop ]
-// mkService cfg.netatop.enable "netatop" [ cfg.netatop.package ]
-// mkService cfg.atopgpu.enable "atopgpu" [ atop ];
-timers = mkTimer cfg.atopRotateTimer.enable "atop-rotate" [ atop ];
+services = lib.mkMerge [
+(lib.mkIf cfg.atopService.enable (lib.recursiveUpdate
+(mkService "atop" [ atop ])
+{
+# always convert logs to newer version first
+# XXX might trigger TimeoutStart but restarting atop.service will
+# convert remainings logs and start eventually
+atop.preStart = ''
+set -e -u
+shopt -s nullglob
+for logfile in "$LOGPATH"/atop_*
+do
+${atop}/bin/atopconvert "$logfile" "$logfile".new
+# only replace old file if version was upgraded to avoid
+# false positives for atop-rotate.service
+if ! ${pkgs.diffutils}/bin/cmp -s "$logfile" "$logfile".new
+then
+${pkgs.coreutils}/bin/mv -v -f "$logfile".new "$logfile"
+else
+${pkgs.coreutils}/bin/rm -f "$logfile".new
+fi
+done
+'';
+}))
+(lib.mkIf cfg.atopacctService.enable (mkService "atopacct" [ atop ]))
+(lib.mkIf cfg.netatop.enable (mkService "netatop" [ cfg.netatop.package ]))
+(lib.mkIf cfg.atopgpu.enable (mkService "atopgpu" [ atop ]))
+];
+timers = lib.mkIf cfg.atopRotateTimer.enable (mkTimer "atop-rotate" [ atop ]);
 };
 security.wrappers = lib.mkIf cfg.setuidWrapper.enable {
-atop =
-{ setuid = true;
+atop = {
+setuid = true;
 owner = "root";
 group = "root";
 source = "${atop}/bin/atop";
 };
 };
 }
 );

View file

@ -233,7 +233,6 @@ in
 nixpkgs.config.firefox = {
 enableBrowserpass = nmh.browserpass;
 enableBukubrow = nmh.bukubrow;
-enableEUWebID = nmh.euwebid;
 enableTridactylNative = nmh.tridactyl;
 enableUgetIntegrator = nmh.ugetIntegrator;
 enableFXCastBridge = nmh.fxCast;

View file

@ -37,7 +37,7 @@ let
 babelfishTranslate = path: name:
 pkgs.runCommandLocal "${name}.fish" {
 nativeBuildInputs = [ pkgs.babelfish ];
-} "${pkgs.babelfish}/bin/babelfish < ${path} > $out;";
+} "babelfish < ${path} > $out;";
 in

View file

@ -6,10 +6,10 @@
 with lib; let
 cfg = config.programs.hyprland;
-defaultHyprlandPackage = pkgs.hyprland.override {
-enableXWayland = cfg.xwayland.enable;
-hidpiXWayland = cfg.xwayland.hidpi;
-nvidiaPatches = cfg.nvidiaPatches;
+finalPortalPackage = cfg.portalPackage.override {
+hyprland-share-picker = pkgs.hyprland-share-picker.override {
+hyprland = cfg.finalPackage;
+};
 };
 in
 {
@ -25,24 +25,25 @@ in
 '';
 };
-package = mkOption {
-type = types.path;
-default = defaultHyprlandPackage;
-defaultText = literalExpression ''
-pkgs.hyprland.override {
-enableXWayland = config.programs.hyprland.xwayland.enable;
-hidpiXWayland = config.programs.hyprland.xwayland.hidpi;
-nvidiaPatches = config.programs.hyprland.nvidiaPatches;
-}
-'';
-example = literalExpression "<Hyprland flake>.packages.<system>.default";
+package = mkPackageOptionMD pkgs "hyprland" { };
+
+finalPackage = mkOption {
+type = types.package;
+readOnly = true;
+default = cfg.package.override {
+enableXWayland = cfg.xwayland.enable;
+hidpiXWayland = cfg.xwayland.hidpi;
+nvidiaPatches = cfg.nvidiaPatches;
+};
+defaultText = literalExpression
+"`wayland.windowManager.hyprland.package` with applied configuration";
 description = mdDoc ''
-The Hyprland package to use.
-Setting this option will make {option}`programs.hyprland.xwayland` and
-{option}`programs.hyprland.nvidiaPatches` not work.
+The Hyprland package after applying configuration.
 '';
 };
+
+portalPackage = mkPackageOptionMD pkgs "xdg-desktop-portal-hyprland" { };
 xwayland = {
 enable = mkEnableOption (mdDoc "XWayland") // { default = true; };
 hidpi = mkEnableOption null // {
@ -57,9 +58,9 @@
 };
 config = mkIf cfg.enable {
-environment.systemPackages = [ cfg.package ];
-fonts.enableDefaultFonts = mkDefault true;
+environment.systemPackages = [ cfg.finalPackage ];
+fonts.enableDefaultPackages = mkDefault true;
 hardware.opengl.enable = mkDefault true;
 programs = {
@ -69,13 +70,11 @@ in
 security.polkit.enable = true;
-services.xserver.displayManager.sessionPackages = [ cfg.package ];
+services.xserver.displayManager.sessionPackages = [ cfg.finalPackage ];
 xdg.portal = {
 enable = mkDefault true;
-extraPortals = [
-pkgs.xdg-desktop-portal-hyprland
-];
+extraPortals = [ finalPortalPackage ];
 };
 };
 }

View file

@ -66,7 +66,7 @@ in {
 };
 hardware.opengl.enable = lib.mkDefault true;
-fonts.enableDefaultFonts = lib.mkDefault true;
+fonts.enableDefaultPackages = lib.mkDefault true;
 programs.dconf.enable = lib.mkDefault true;
 programs.xwayland.enable = lib.mkDefault true;

View file

@ -0,0 +1,18 @@
{ config, lib, pkgs, ... }:
let
cfg = config.programs.ns-usbloader;
in
{
options = {
programs.ns-usbloader = {
enable = lib.mkEnableOption (lib.mdDoc "ns-usbloader application with udev rules applied");
};
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ pkgs.ns-usbloader ];
services.udev.packages = [ pkgs.ns-usbloader ];
};
meta.maintainers = pkgs.ns-usbloader.meta.maintainers;
}

View file

@ -0,0 +1,33 @@
{ config, pkgs, lib, ... }:
let
cfg = config.programs.oddjobd;
in
{
options.programs.oddjobd = {
enable = lib.mkEnableOption "oddjob";
package = lib.mkPackageOption pkgs "oddjob" {};
};
config = lib.mkIf cfg.enable {
assertions = [
{ assertion = false;
message = "The oddjob service was found to be broken without NixOS test or maintainer. Please take ownership of this service.";
}
];
systemd.packages = [ cfg.package ];
systemd.services.oddjobd = {
wantedBy = [ "multi-user.target"];
after = [ "network.target"];
description = "DBUS Odd-job Daemon";
enable = true;
documentation = [ "man:oddjobd(8)" "man:oddjobd.conf(5)" ];
serviceConfig = {
Type = "dbus";
BusName = "org.freedesktop.oddjob";
ExecStart = "${lib.getBin cfg.package}/bin/oddjobd";
};
};
};
}

View file

@ -0,0 +1,48 @@
{ config, lib, pkgs, ...}:
let
cfg = config.programs.wayfire;
in
{
meta.maintainers = with lib.maintainers; [ rewine ];
options.programs.wayfire = {
enable = lib.mkEnableOption (lib.mdDoc "Wayfire, a wayland compositor based on wlroots.");
package = lib.mkPackageOptionMD pkgs "wayfire" { };
plugins = lib.mkOption {
type = lib.types.listOf lib.types.package;
default = with pkgs.wayfirePlugins; [ wcm wf-shell ];
defaultText = lib.literalExpression "with pkgs.wayfirePlugins; [ wcm wf-shell ]";
example = lib.literalExpression ''
with pkgs.wayfirePlugins; [
wcm
wf-shell
wayfire-plugins-extra
];
'';
description = lib.mdDoc ''
Additional plugins to use with the wayfire window manager.
'';
};
};
config = let
finalPackage = pkgs.wayfire-with-plugins.override {
wayfire = cfg.package;
plugins = cfg.plugins;
};
in
lib.mkIf cfg.enable {
environment.systemPackages = [
finalPackage
];
services.xserver.displayManager.sessionPackages = [ finalPackage ];
xdg.portal = {
enable = lib.mkDefault true;
wlr.enable = lib.mkDefault true;
};
};
}
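A hypothetical configuration using the module above (the option names come from the module; the plugin list is purely illustrative):

```nix
{ pkgs, ... }:
{
  programs.wayfire = {
    enable = true;
    # wayfire-plugins-extra comes on top of the module's default plugins.
    plugins = with pkgs.wayfirePlugins; [ wcm wf-shell wayfire-plugins-extra ];
  };
}
```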

View file

@ -5,7 +5,7 @@
 };
 hardware.opengl.enable = mkDefault true;
-fonts.enableDefaultFonts = mkDefault true;
+fonts.enableDefaultPackages = mkDefault true;
 programs = {
 dconf.enable = mkDefault true;

View file

@ -72,7 +72,6 @@ in
 (mkRemovedOptionModule [ "services" "mesos" ] "The corresponding package was removed from nixpkgs.")
 (mkRemovedOptionModule [ "services" "moinmoin" ] "The corresponding package was removed from nixpkgs.")
 (mkRemovedOptionModule [ "services" "mwlib" ] "The corresponding package was removed from nixpkgs.")
-(mkRemovedOptionModule [ "services" "osquery" ] "The osquery module has been removed")
 (mkRemovedOptionModule [ "services" "pantheon" "files" ] ''
 This module was removed, please add pkgs.pantheon.elementary-files to environment.systemPackages directly.
 '')
@ -115,6 +114,16 @@ in
 (mkRemovedOptionModule [ "services" "rtsp-simple-server" ] "Package has been completely rebranded by upstream as mediamtx, and thus the service and the package were renamed in NixOS as well.")
 (mkRemovedOptionModule [ "i18n" "inputMethod" "fcitx" ] "The fcitx module has been removed. Please use fcitx5 instead")
+(mkRemovedOptionModule [ "services" "dhcpd4" ] ''
+The dhcpd4 module has been removed because ISC DHCP reached its end of life.
+See https://www.isc.org/blogs/isc-dhcp-eol/ for details.
+Please switch to a different implementation like kea or dnsmasq.
+'')
+(mkRemovedOptionModule [ "services" "dhcpd6" ] ''
+The dhcpd6 module has been removed because ISC DHCP reached its end of life.
+See https://www.isc.org/blogs/isc-dhcp-eol/ for details.
+Please switch to a different implementation like kea or dnsmasq.
+'')
 # Do NOT add any option renames here, see top of the file
 ];

View file

@ -62,7 +62,7 @@ config.security.apparmor.includes = {
 include "${pkgs.apparmor-profiles}/etc/apparmor.d/abstractions/base"
 r ${pkgs.stdenv.cc.libc}/share/locale/**,
 r ${pkgs.stdenv.cc.libc}/share/locale.alias,
-${lib.optionalString (pkgs.glibcLocales != null) "r ${pkgs.glibcLocales}/lib/locale/locale-archive,"}
+r ${config.i18n.glibcLocales}/lib/locale/locale-archive,
 ${etcRule "localtime"}
 r ${pkgs.tzdata}/share/zoneinfo/**,
 r ${pkgs.stdenv.cc.libc}/share/i18n/**,
@ -72,7 +72,7 @@ config.security.apparmor.includes = {
 # bash inspects filesystems at startup
 # and /etc/mtab is linked to /proc/mounts
-@{PROC}/mounts
+r @{PROC}/mounts,
 # system-wide bash configuration
 '' + lib.concatMapStringsSep "\n" etcRule [
@ -211,6 +211,9 @@ config.security.apparmor.includes = {
 "abstractions/nis" = ''
 include "${pkgs.apparmor-profiles}/etc/apparmor.d/abstractions/nis"
 '';
+"abstractions/nss-systemd" = ''
+include "${pkgs.apparmor-profiles}/etc/apparmor.d/abstractions/nss-systemd"
+'';
 "abstractions/nvidia" = ''
 include "${pkgs.apparmor-profiles}/etc/apparmor.d/abstractions/nvidia"
 ${etcRule "vdpau_wrapper.cfg"}
@ -279,6 +282,8 @@ config.security.apparmor.includes = {
 r /var/lib/acme/*/chain.pem,
 r /var/lib/acme/*/fullchain.pem,
+
+r /etc/pki/tls/certs/,
 '' + lib.concatMapStringsSep "\n" etcRule [
 "ssl/certs/ca-certificates.crt"
 "ssl/certs/ca-bundle.crt"

View file

@ -44,5 +44,5 @@ in
 };
 };
-meta.maintainers = with maintainers; [ SuperSandro2000 ];
+meta.maintainers = with maintainers; [ ];
 }

View file

@ -33,7 +33,7 @@ let
 }
 trap on_exit EXIT
-archiveName="${if cfg.archiveBaseName == null then "" else cfg.archiveBaseName + "-"}$(date ${cfg.dateFormat})"
+archiveName="${optionalString (cfg.archiveBaseName != null) (cfg.archiveBaseName + "-")}$(date ${cfg.dateFormat})"
 archiveSuffix="${optionalString cfg.appendFailedSuffix ".failed"}"
 ${cfg.preHook}
 '' + optionalString cfg.doInit ''
'' + optionalString cfg.doInit '' '' + optionalString cfg.doInit ''

View file

@ -32,6 +32,8 @@ in
 services.tarsnap = {
 enable = mkEnableOption (lib.mdDoc "periodic tarsnap backups");
+package = mkPackageOption pkgs "tarsnap" { };
+
 keyfile = mkOption {
 type = types.str;
 default = "/root/tarsnap.key";
@ -307,7 +309,7 @@ in
 requires = [ "network-online.target" ];
 after = [ "network-online.target" ];
-path = with pkgs; [ iputils tarsnap util-linux ];
+path = with pkgs; [ iputils gcfg.package util-linux ];
 # In order for the persistent tarsnap timer to work reliably, we have to
 # make sure that the tarsnap server is reachable after systemd starts up
@ -318,7 +320,7 @@ in
 '';
 script = let
-tarsnap = ''tarsnap --configfile "/etc/tarsnap/${name}.conf"'';
+tarsnap = ''${lib.getExe gcfg.package} --configfile "/etc/tarsnap/${name}.conf"'';
 run = ''${tarsnap} -c -f "${name}-$(date +"%Y%m%d%H%M%S")" \
 ${optionalString cfg.verbose "-v"} \
 ${optionalString cfg.explicitSymlinks "-H"} \
@ -355,10 +357,10 @@ in
 description = "Tarsnap restore '${name}'";
 requires = [ "network-online.target" ];
-path = with pkgs; [ iputils tarsnap util-linux ];
+path = with pkgs; [ iputils gcfg.package util-linux ];
 script = let
-tarsnap = ''tarsnap --configfile "/etc/tarsnap/${name}.conf"'';
+tarsnap = ''${lib.getExe gcfg.package} --configfile "/etc/tarsnap/${name}.conf"'';
 lastArchive = "$(${tarsnap} --list-archives | sort | tail -1)";
 run = ''${tarsnap} -x -f "${lastArchive}" ${optionalString cfg.verbose "-v"}'';
 cachedir = escapeShellArg cfg.cachedir;
@ -402,6 +404,6 @@ in
 { text = configFile name cfg;
 }) gcfg.archives;
-environment.systemPackages = [ pkgs.tarsnap ];
+environment.systemPackages = [ gcfg.package ];
 };
 }

View file

@ -6,9 +6,6 @@ let
 defaultGroup = "patroni";
 format = pkgs.formats.yaml { };
-#boto doesn't support python 3.10 yet
-patroni = pkgs.patroni.override { pythonPackages = pkgs.python39Packages; };
-
 configFileName = "patroni-${cfg.scope}-${cfg.name}.yaml";
 configFile = format.generate configFileName cfg.settings;
 in
@ -224,7 +221,7 @@ in
 script = ''
 ${concatStringsSep "\n" (attrValues (mapAttrs (name: path: ''export ${name}="$(< ${escapeShellArg path})"'') cfg.environmentFiles))}
-exec ${patroni}/bin/patroni ${configFile}
+exec ${pkgs.patroni}/bin/patroni ${configFile}
 '';
 serviceConfig = mkMerge [
@ -252,7 +249,7 @@ in
 '';
 environment.systemPackages = [
-patroni
+pkgs.patroni
 cfg.postgresqlPackage
 (mkIf cfg.raft pkgs.python310Packages.pysyncobj)
 ];

View file

@ -31,6 +31,7 @@ in
 type = types.package;
 default = pkgs.boinc;
 defaultText = literalExpression "pkgs.boinc";
+example = literalExpression "pkgs.boinc-headless";
 description = lib.mdDoc ''
 Which BOINC package to use.
 '';

View file

@@ -1,64 +1,49 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.services.buildkite-agents;

  hooksDir = hooks:
    let
      mkHookEntry = name: text: ''
        ln --symbolic ${pkgs.writeShellApplication { inherit name text; }}/bin/${name} $out/${name}
      '';
    in
    pkgs.runCommandLocal "buildkite-agent-hooks" { } ''
      mkdir $out
      ${lib.concatStringsSep "\n" (lib.mapAttrsToList mkHookEntry hooks)}
    '';

  buildkiteOptions = { name ? "", config, ... }: {
    options = {
      enable = lib.mkOption {
        default = true;
        type = lib.types.bool;
        description = lib.mdDoc "Whether to enable this buildkite agent";
      };

      package = lib.mkOption {
        default = pkgs.buildkite-agent;
        defaultText = lib.literalExpression "pkgs.buildkite-agent";
        description = lib.mdDoc "Which buildkite-agent derivation to use";
        type = lib.types.package;
      };

      dataDir = lib.mkOption {
        default = "/var/lib/buildkite-agent-${name}";
        description = lib.mdDoc "The workdir for the agent";
        type = lib.types.str;
      };

      runtimePackages = lib.mkOption {
        default = [ pkgs.bash pkgs.gnutar pkgs.gzip pkgs.git pkgs.nix ];
        defaultText = lib.literalExpression "[ pkgs.bash pkgs.gnutar pkgs.gzip pkgs.git pkgs.nix ]";
        description = lib.mdDoc "Add programs to the buildkite-agent environment";
        type = lib.types.listOf lib.types.package;
      };

      tokenPath = lib.mkOption {
        type = lib.types.path;
        description = lib.mdDoc ''
          The token from your Buildkite "Agents" page.
@@ -67,25 +52,25 @@ let
        '';
      };

      name = lib.mkOption {
        type = lib.types.str;
        default = "%hostname-${name}-%n";
        description = lib.mdDoc ''
          The name of the agent as seen in the buildkite dashboard.
        '';
      };

      tags = lib.mkOption {
        type = lib.types.attrsOf (lib.types.either lib.types.str (lib.types.listOf lib.types.str));
        default = { };
        example = { queue = "default"; docker = "true"; ruby2 = "true"; };
        description = lib.mdDoc ''
          Tags for the agent.
        '';
      };

      extraConfig = lib.mkOption {
        type = lib.types.lines;
        default = "";
        example = "debug=true";
        description = lib.mdDoc ''
@@ -93,8 +78,8 @@ let
        '';
      };

      privateSshKeyPath = lib.mkOption {
        type = lib.types.nullOr lib.types.path;
        default = null;
        ## maximum care is taken so that secrets (ssh keys and the CI token)
        ## don't end up in the Nix store.
@@ -108,67 +93,25 @@ let
        '';
      };

      hooks = lib.mkOption {
        type = lib.types.attrsOf lib.types.lines;
        default = { };
        example = lib.literalExpression ''
          {
            environment = '''
              export SECRET_VAR=`head -1 /run/keys/secret`
            ''';
          }'';
        description = lib.mdDoc ''
          "Agent" hooks to install.
          See <https://buildkite.com/docs/agent/v3/hooks> for possible options.
        '';
      };

      hooksPath = lib.mkOption {
        type = lib.types.path;
        default = hooksDir config.hooks;
        defaultText = lib.literalMD "generated from {option}`services.buildkite-agents.<name>.hooks`";
        description = lib.mdDoc ''
          Path to the directory storing the hooks.
          Consider using {option}`services.buildkite-agents.<name>.hooks.<name>`
@@ -176,10 +119,10 @@ let
        '';
      };

      shell = lib.mkOption {
        type = lib.types.str;
        default = "${pkgs.bash}/bin/bash -e -c";
        defaultText = lib.literalExpression ''"''${pkgs.bash}/bin/bash -e -c"'';
        description = lib.mdDoc ''
          Command that buildkite-agent 3 will execute when it spawns a shell.
        '';
@@ -190,9 +133,9 @@ let
  mapAgents = function: lib.mkMerge (lib.mapAttrsToList function enabledAgents);
in
{
  options.services.buildkite-agents = lib.mkOption {
    type = lib.types.attrsOf (lib.types.submodule buildkiteOptions);
    default = { };
    description = lib.mdDoc ''
      Attribute set of buildkite agents.
      The attribute key is combined with the hostname and a unique integer to
@@ -213,23 +156,24 @@ in
    };
  });

  config.users.groups = mapAgents (name: cfg: {
    "buildkite-agent-${name}" = { };
  });

  config.systemd.services = mapAgents (name: cfg: {
    "buildkite-agent-${name}" = {
      description = "Buildkite Agent";
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" ];
      path = cfg.runtimePackages ++ [ cfg.package pkgs.coreutils ];
      environment = config.networking.proxy.envVars // {
        HOME = cfg.dataDir;
        NIX_REMOTE = "daemon";
      };

      ## NB: maximum care is taken so that secrets (ssh keys and the CI token)
      ## don't end up in the Nix store.
      preStart =
        let
          sshDir = "${cfg.dataDir}/.ssh";
          tagStr = name: value:
            if lib.isList value
@@ -237,44 +181,39 @@ in
            else "${name}=${value}";
          tagsStr = lib.concatStringsSep "," (lib.mapAttrsToList tagStr cfg.tags);
        in
        lib.optionalString (cfg.privateSshKeyPath != null) ''
          mkdir -m 0700 -p "${sshDir}"
          install -m600 "${toString cfg.privateSshKeyPath}" "${sshDir}/id_rsa"
        '' + ''
          cat > "${cfg.dataDir}/buildkite-agent.cfg" <<EOF
          token="$(cat ${toString cfg.tokenPath})"
          name="${cfg.name}"
          shell="${cfg.shell}"
          tags="${tagsStr}"
          build-path="${cfg.dataDir}/builds"
          hooks-path="${cfg.hooksPath}"
          ${cfg.extraConfig}
          EOF
        '';

      serviceConfig = {
        ExecStart = "${cfg.package}/bin/buildkite-agent start --config ${cfg.dataDir}/buildkite-agent.cfg";
        User = "buildkite-agent-${name}";
        RestartSec = 5;
        Restart = "on-failure";
        TimeoutSec = 10;
        # set a long timeout to give buildkite-agent a chance to finish current builds
        TimeoutStopSec = "2 min";
        KillMode = "mixed";
      };
    };
  });

  config.assertions = mapAgents (name: cfg: [{
    assertion = cfg.hooksPath != hooksDir cfg.hooks -> cfg.hooks == { };
    message = ''
      Options `services.buildkite-agents.${name}.hooksPath' and
      `services.buildkite-agents.${name}.hooks.<name>' are mutually exclusive.
    '';
  }]);
}
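
With hooks now a plain attribute set of scripts, an agent can be declared directly in the new attribute-set interface. A minimal sketch (the agent name, token path, and tag values are assumptions, not part of the module):

```nix
services.buildkite-agents.builder = {
  tokenPath = "/run/keys/buildkite-token"; # hypothetical secret path, kept out of the Nix store
  tags = { queue = "default"; nix = "true"; };
  # installed via the generated hooksPath (writeShellApplication + symlink)
  hooks.environment = ''
    export SECRET_VAR="$(head -1 /run/keys/secret)"
  '';
};
```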


@@ -210,9 +210,7 @@ in {
      preStart =
        let replacePlugins =
          optionalString (cfg.plugins != null) (
            let pluginCmds = lib.attrsets.mapAttrsToList
              (n: v: "cp ${v} ${cfg.home}/plugins/${n}.jpi")
              cfg.plugins;
@@ -220,7 +218,7 @@ in {
              rm -r ${cfg.home}/plugins || true
              mkdir -p ${cfg.home}/plugins
              ${lib.strings.concatStringsSep "\n" pluginCmds}
            '');
        in ''
          rm -rf ${cfg.home}/war
          ${replacePlugins}


@ -0,0 +1,632 @@
{ lib, pkgs, config, ... }:
with lib;
let
cfg = config.services.pgbouncer;
confFile = pkgs.writeTextFile {
name = "pgbouncer.ini";
text = ''
[databases]
${concatStringsSep "\n"
(mapAttrsToList (dbname : settings : "${dbname} = ${settings}") cfg.databases)}
[users]
${concatStringsSep "\n"
(mapAttrsToList (username : settings : "${username} = ${settings}") cfg.users)}
[peers]
${concatStringsSep "\n"
(mapAttrsToList (peerid : settings : "${peerid} = ${settings}") cfg.peers)}
[pgbouncer]
# general
${optionalString (cfg.ignoreStartupParameters != null) "ignore_startup_parameters = ${cfg.ignoreStartupParameters}"}
listen_port = ${toString cfg.listenPort}
${optionalString (cfg.listenAddress != null) "listen_addr = ${cfg.listenAddress}"}
pool_mode = ${cfg.poolMode}
max_client_conn = ${toString cfg.maxClientConn}
default_pool_size = ${toString cfg.defaultPoolSize}
max_user_connections = ${toString cfg.maxUserConnections}
max_db_connections = ${toString cfg.maxDbConnections}
#auth
auth_type = ${cfg.authType}
${optionalString (cfg.authHbaFile != null) "auth_hba_file = ${cfg.authHbaFile}"}
${optionalString (cfg.authFile != null) "auth_file = ${cfg.authFile}"}
${optionalString (cfg.authUser != null) "auth_user = ${cfg.authUser}"}
${optionalString (cfg.authQuery != null) "auth_query = ${cfg.authQuery}"}
${optionalString (cfg.authDbname != null) "auth_dbname = ${cfg.authDbname}"}
# TLS
${optionalString (cfg.tls.client != null) ''
client_tls_sslmode = ${cfg.tls.client.sslmode}
client_tls_key_file = ${cfg.tls.client.keyFile}
client_tls_cert_file = ${cfg.tls.client.certFile}
client_tls_ca_file = ${cfg.tls.client.caFile}
''}
${optionalString (cfg.tls.server != null) ''
server_tls_sslmode = ${cfg.tls.server.sslmode}
server_tls_key_file = ${cfg.tls.server.keyFile}
server_tls_cert_file = ${cfg.tls.server.certFile}
server_tls_ca_file = ${cfg.tls.server.caFile}
''}
# log
${optionalString (cfg.logFile != null) "logfile = ${cfg.homeDir}/${cfg.logFile}"}
${optionalString (cfg.syslog != null) ''
syslog = ${if cfg.syslog.enable then "1" else "0"}
syslog_ident = ${cfg.syslog.syslogIdent}
syslog_facility = ${cfg.syslog.syslogFacility}
''}
${optionalString (cfg.verbose != null) "verbose = ${toString cfg.verbose}"}
# console access
${optionalString (cfg.adminUsers != null) "admin_users = ${cfg.adminUsers}"}
${optionalString (cfg.statsUsers != null) "stats_users = ${cfg.statsUsers}"}
# linux
pidfile = /run/pgbouncer/pgbouncer.pid
# extra
${cfg.extraConfig}
'';
};
in {
options.services.pgbouncer = {
# NixOS settings
enable = mkEnableOption (lib.mdDoc "PostgreSQL connection pooler");
package = mkOption {
type = types.package;
default = pkgs.pgbouncer;
defaultText = literalExpression "pkgs.pgbouncer";
description = lib.mdDoc ''
The pgbouncer package to use.
'';
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Whether to automatically open the specified TCP port in the firewall.
'';
};
# Generic settings
logFile = mkOption {
type = types.nullOr types.str;
default = "pgbouncer.log";
description = lib.mdDoc ''
Specifies the log file.
Either this or syslog has to be specified.
'';
};
listenAddress = mkOption {
type = types.nullOr types.commas;
example = "*";
default = null;
description = lib.mdDoc ''
Specifies a list (comma-separated) of addresses where to listen for TCP connections.
You may also use * meaning listen on all addresses.
When not set, only Unix socket connections are accepted.
Addresses can be specified numerically (IPv4/IPv6) or by name.
'';
};
listenPort = mkOption {
type = types.port;
default = 6432;
description = lib.mdDoc ''
Which port to listen on. Applies to both TCP and Unix sockets.
'';
};
poolMode = mkOption {
type = types.enum [ "session" "transaction" "statement" ];
default = "session";
description = lib.mdDoc ''
Specifies when a server connection can be reused by other clients.
session
Server is released back to pool after client disconnects. Default.
transaction
Server is released back to pool after transaction finishes.
statement
Server is released back to pool after query finishes.
Transactions spanning multiple statements are disallowed in this mode.
'';
};
maxClientConn = mkOption {
type = types.int;
default = 100;
description = lib.mdDoc ''
Maximum number of client connections allowed.
When this setting is increased, then the file descriptor limits in the operating system
might also have to be increased. Note that the number of file descriptors potentially
used is more than maxClientConn. If each user connects under its own user name to the server,
the theoretical maximum used is:
maxClientConn + (max pool_size * total databases * total users)
If a database user is specified in the connection string (all users connect under the same user name),
the theoretical maximum is:
maxClientConn + (max pool_size * total databases)
The theoretical maximum should never be reached, unless somebody deliberately crafts a special load for it.
Still, it means you should set the number of file descriptors to a safely high number.
'';
};
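
As a worked sizing example, the per-user worst case quoted above can be evaluated directly; the database and user counts here are assumptions for illustration:

```nix
# hypothetical sizing sketch, e.g. via `nix eval --expr`
let
  maxClientConn = 100;  # module default
  defaultPoolSize = 20; # module default
  databases = 3;        # assumed
  users = 5;            # assumed
in
maxClientConn + defaultPoolSize * databases * users # = 400 file descriptors
```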
defaultPoolSize = mkOption {
type = types.int;
default = 20;
description = lib.mdDoc ''
How many server connections to allow per user/database pair.
Can be overridden in the per-database configuration.
'';
};
maxDbConnections = mkOption {
type = types.int;
default = 0;
description = lib.mdDoc ''
Do not allow more than this many server connections per database (regardless of user).
This considers the PgBouncer database that the client has connected to,
not the PostgreSQL database of the outgoing connection.
This can also be set per database in the [databases] section.
Note that when you hit the limit, closing a client connection to one pool will
not immediately allow a server connection to be established for another pool,
because the server connection for the first pool is still open.
Once the server connection closes (due to idle timeout),
a new server connection will immediately be opened for the waiting pool.
0 = unlimited
'';
};
maxUserConnections = mkOption {
type = types.int;
default = 0;
description = lib.mdDoc ''
Do not allow more than this many server connections per user (regardless of database).
This considers the PgBouncer user that is associated with a pool,
which is either the user specified for the server connection
or in absence of that the user the client has connected as.
This can also be set per user in the [users] section.
Note that when you hit the limit, closing a client connection to one pool
will not immediately allow a server connection to be established for another pool,
because the server connection for the first pool is still open.
Once the server connection closes (due to idle timeout), a new server connection
will immediately be opened for the waiting pool.
0 = unlimited
'';
};
ignoreStartupParameters = mkOption {
type = types.nullOr types.commas;
example = "extra_float_digits";
default = null;
description = lib.mdDoc ''
By default, PgBouncer allows only parameters it can keep track of in startup packets:
client_encoding, datestyle, timezone and standard_conforming_strings.
All other parameters will raise an error.
To allow other parameters, they can be specified here, so that PgBouncer knows that
they are handled by the admin and it can ignore them.
If you need to specify multiple values, use a comma-separated list.
IMPORTANT: When using prometheus-pgbouncer-exporter, you need:
extra_float_digits
<https://github.com/prometheus-community/pgbouncer_exporter#pgbouncer-configuration>
'';
};
# Section [databases]
databases = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
exampledb = "host=/run/postgresql/ port=5432 auth_user=exampleuser dbname=exampledb sslmode=require";
bardb = "host=localhost dbname=bazdb";
foodb = "host=host1.example.com port=5432";
};
description = lib.mdDoc ''
Detailed information about PostgreSQL database definitions:
<https://www.pgbouncer.org/config.html#section-databases>
'';
};
# Section [users]
users = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
user1 = "pool_mode=session";
};
description = lib.mdDoc ''
Optional.
Detailed information about PostgreSQL user definitions:
<https://www.pgbouncer.org/config.html#section-users>
'';
};
# Section [peers]
peers = mkOption {
type = types.attrsOf types.str;
default = {};
example = {
"1" = "host=host1.example.com";
"2" = "host=/tmp/pgbouncer-2 port=5555";
};
description = lib.mdDoc ''
Optional.
Detailed information about PostgreSQL database definitions:
<https://www.pgbouncer.org/config.html#section-peers>
'';
};
# Authentication settings
authType = mkOption {
type = types.enum [ "cert" "md5" "scram-sha-256" "plain" "trust" "any" "hba" "pam" ];
default = "md5";
description = lib.mdDoc ''
How to authenticate users.
cert
Client must connect over TLS connection with a valid client certificate.
The user name is then taken from the CommonName field from the certificate.
md5
Use MD5-based password check. This is the default authentication method.
authFile may contain both MD5-encrypted and plain-text passwords.
If md5 is configured and a user has a SCRAM secret, then SCRAM authentication is used automatically instead.
scram-sha-256
Use password check with SCRAM-SHA-256. authFile has to contain SCRAM secrets or plain-text passwords.
plain
The clear-text password is sent over the wire. Deprecated.
trust
No authentication is done. The user name must still exist in authFile.
any
Like the trust method, but the user name given is ignored.
Requires that all databases are configured to log in as a specific user.
Additionally, the console database allows any user to log in as admin.
hba
The actual authentication type is loaded from authHbaFile.
This allows different authentication methods for different access paths,
for example: connections over Unix socket use the peer auth method, connections over TCP must use TLS.
pam
PAM is used to authenticate users, authFile is ignored.
This method is not compatible with databases using the authUser option.
The service name reported to PAM is pgbouncer. pam is not supported in the HBA configuration file.
'';
};
authHbaFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/secrets/pgbouncer_hba";
description = lib.mdDoc ''
HBA configuration file to use when authType is hba.
See HBA file format details:
<https://www.pgbouncer.org/config.html#hba-file-format>
'';
};
authFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/secrets/pgbouncer_authfile";
description = lib.mdDoc ''
The name of the file to load user names and passwords from.
See section Authentication file format details:
<https://www.pgbouncer.org/config.html#authentication-file-format>
Most authentication types require that either authFile or authUser be set;
otherwise there would be no users defined.
'';
};
authUser = mkOption {
type = types.nullOr types.str;
default = null;
example = "pgbouncer";
description = lib.mdDoc ''
If authUser is set, then any user not specified in authFile will be queried
through the authQuery query from pg_shadow in the database, using authUser.
The password of authUser will be taken from authFile.
(If the authUser does not require a password then it does not need to be defined in authFile.)
Direct access to pg_shadow requires admin rights.
It's preferable to use a non-superuser that calls a SECURITY DEFINER function instead.
'';
};
authQuery = mkOption {
type = types.nullOr types.str;
default = null;
example = "SELECT usename, passwd FROM pg_shadow WHERE usename=$1";
description = lib.mdDoc ''
Query to load user's password from database.
Direct access to pg_shadow requires admin rights.
It's preferable to use a non-superuser that calls a SECURITY DEFINER function instead.
Note that the query is run inside the target database.
So if a function is used, it needs to be installed into each database.
'';
};
authDbname = mkOption {
type = types.nullOr types.str;
default = null;
example = "authdb";
description = lib.mdDoc ''
Database name in the [databases] section to be used for authentication purposes.
This option can be set globally or overridden in the connection string if this parameter is specified.
'';
};
# TLS settings
tls.client = mkOption {
type = types.nullOr (types.submodule {
options = {
sslmode = mkOption {
type = types.enum [ "disable" "allow" "prefer" "require" "verify-ca" "verify-full" ];
default = "disable";
description = lib.mdDoc ''
TLS mode to use for connections from clients.
TLS connections are disabled by default.
When enabled, tls.client.keyFile and tls.client.certFile
must be also configured to set up the key and certificate
PgBouncer uses to accept client connections.
disable
Plain TCP. If client requests TLS, it's ignored. Default.
allow
If client requests TLS, it is used. If not, plain TCP is used.
If the client presents a client certificate, it is not validated.
prefer
Same as allow.
require
Client must use TLS. If not, the client connection is rejected.
If the client presents a client certificate, it is not validated.
verify-ca
Client must use TLS with valid client certificate.
verify-full
Same as verify-ca
'';
};
certFile = mkOption {
type = types.path;
example = "/secrets/pgbouncer.crt";
description = lib.mdDoc "Path to certificate for private key. Clients can validate it";
};
keyFile = mkOption {
type = types.path;
example = "/secrets/pgbouncer.key";
description = lib.mdDoc "Path to private key for PgBouncer to accept client connections";
};
caFile = mkOption {
type = types.path;
example = "/secrets/pgbouncer_ca.crt";
description = lib.mdDoc "Path to root certificate file to validate client certificates";
};
};
});
default = null;
description = lib.mdDoc ''
<https://www.pgbouncer.org/config.html#tls-settings>
'';
};
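
Requiring TLS from clients then means setting the mode together with the three file options; the paths below are placeholders:

```nix
services.pgbouncer.tls.client = {
  sslmode = "require";
  certFile = "/secrets/pgbouncer.crt"; # hypothetical paths
  keyFile = "/secrets/pgbouncer.key";
  caFile = "/secrets/root.crt";
};
```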
tls.server = mkOption {
type = types.nullOr (types.submodule {
options = {
sslmode = mkOption {
type = types.enum [ "disable" "allow" "prefer" "require" "verify-ca" "verify-full" ];
default = "disable";
description = lib.mdDoc ''
TLS mode to use for connections to PostgreSQL servers.
TLS connections are disabled by default.
disable
Plain TCP. TLS is not even requested from the server. Default.
allow
FIXME: if server rejects plain, try TLS?
prefer
TLS connection is always requested first from PostgreSQL.
If refused, the connection will be established over plain TCP.
Server certificate is not validated.
require
Connection must go over TLS. If server rejects it, plain TCP is not attempted.
Server certificate is not validated.
verify-ca
Connection must go over TLS and server certificate must be valid according to tls.server.caFile.
Server host name is not checked against certificate.
verify-full
Connection must go over TLS and server certificate must be valid according to tls.server.caFile.
Server host name must match certificate information.
'';
};
certFile = mkOption {
type = types.path;
example = "/secrets/pgbouncer_server.crt";
description = lib.mdDoc "Certificate for private key. PostgreSQL server can validate it.";
};
keyFile = mkOption {
type = types.path;
example = "/secrets/pgbouncer_server.key";
description = lib.mdDoc "Private key for PgBouncer to authenticate against PostgreSQL server.";
};
caFile = mkOption {
type = types.path;
example = "/secrets/pgbouncer_server_ca.crt";
description = lib.mdDoc "Root certificate file to validate PostgreSQL server certificates.";
};
};
});
default = null;
description = lib.mdDoc ''
<https://www.pgbouncer.org/config.html#tls-settings>
'';
};
# Log settings
syslog = mkOption {
type = types.nullOr (types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Toggles syslog on/off.
'';
};
syslogIdent = mkOption {
type = types.str;
default = "pgbouncer";
description = lib.mdDoc ''
Under what name to send logs to syslog.
'';
};
syslogFacility = mkOption {
type = types.enum [ "auth" "authpriv" "daemon" "user" "local0" "local1" "local2" "local3" "local4" "local5" "local6" "local7" ];
default = "daemon";
description = lib.mdDoc ''
Under what facility to send logs to syslog.
'';
};
};
});
default = null;
description = lib.mdDoc ''
<https://www.pgbouncer.org/config.html#log-settings>
'';
};
verbose = lib.mkOption {
type = lib.types.int;
default = 0;
description = lib.mdDoc ''
Increase verbosity. Mirrors the -v switch on the command line.
'';
};
# Console access control
adminUsers = mkOption {
type = types.nullOr types.commas;
default = null;
description = lib.mdDoc ''
Comma-separated list of database users that are allowed to connect and run all commands on the console.
Ignored when authType is any, in which case any user name is allowed in as admin.
'';
};
statsUsers = mkOption {
type = types.nullOr types.commas;
default = null;
description = lib.mdDoc ''
Comma-separated list of database users that are allowed to connect and run read-only queries on the console.
That means all SHOW commands except SHOW FDS.
'';
};
# Linux settings
openFilesLimit = lib.mkOption {
type = lib.types.int;
default = 65536;
description = lib.mdDoc ''
Maximum number of open files.
'';
};
user = mkOption {
type = types.str;
default = "pgbouncer";
description = lib.mdDoc ''
The user pgbouncer is run as.
'';
};
group = mkOption {
type = types.str;
default = "pgbouncer";
description = lib.mdDoc ''
The group pgbouncer is run as.
'';
};
homeDir = mkOption {
type = types.path;
default = "/var/lib/pgbouncer";
description = lib.mdDoc ''
Specifies the home directory.
'';
};
# Extra settings
extraConfig = mkOption {
type = types.lines;
description = lib.mdDoc ''
Any additional text to be appended to pgbouncer.ini
<https://www.pgbouncer.org/config.html>.
'';
default = "";
};
};
config = mkIf cfg.enable {
users.groups.${cfg.group} = { };
users.users.${cfg.user} = {
description = "PgBouncer service user";
group = cfg.group;
home = cfg.homeDir;
createHome = true;
isSystemUser = true;
};
systemd.services.pgbouncer = {
description = "PgBouncer - PostgreSQL connection pooler";
wants = [ "postgresql.service" ];
after = [ "postgresql.service" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "forking";
User = cfg.user;
Group = cfg.group;
ExecStart = "${cfg.package}/bin/pgbouncer -d ${confFile}";
ExecReload = "${pkgs.coreutils}/bin/kill -SIGHUP $MAINPID";
RuntimeDirectory = "pgbouncer";
PIDFile = "/run/pgbouncer/pgbouncer.pid";
LimitNOFILE = cfg.openFilesLimit;
};
};
networking.firewall.allowedTCPPorts = optional cfg.openFirewall cfg.listenPort;
};
meta.maintainers = [ maintainers._1000101 ];
}
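
Putting the module together, a minimal sketch that pools a local PostgreSQL database over the Unix socket (the database name and auth file path are assumptions):

```nix
services.pgbouncer = {
  enable = true;
  listenAddress = "127.0.0.1";
  poolMode = "transaction";
  authType = "md5";
  authFile = "/secrets/pgbouncer_authfile"; # hypothetical secret path
  databases.exampledb = "host=/run/postgresql/ port=5432 dbname=exampledb";
};
```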


@@ -404,8 +404,8 @@ in
        {
          log_connections = true;
          log_statement = "all";
          logging_collector = true;
          log_disconnections = true;
          log_destination = lib.mkForce "syslog";
        }
      '';


@@ -96,7 +96,7 @@ in
    environment.systemPackages = [ cfg.package editorScript desktopApplicationFile ];
    environment.variables.EDITOR = mkIf cfg.defaultEditor (mkOverride 900 "emacseditor");
  };

  meta.doc = ./emacs.md;


@@ -71,12 +71,16 @@ in
    environment.systemPackages = [ pkgs.udisks2 ];

    environment.etc = (mapAttrs' (name: value: nameValuePair "udisks2/${name}" { source = value; }) configFiles) // (
      let
        libblockdev = pkgs.udisks2.libblockdev;
        majorVer = versions.major libblockdev.version;
      in {
        # We need to make sure /etc/libblockdev/@major_ver@/conf.d is populated to avoid
        # warnings
        "libblockdev/${majorVer}/conf.d/00-default.cfg".source = "${libblockdev}/etc/libblockdev/${majorVer}/conf.d/00-default.cfg";
        "libblockdev/${majorVer}/conf.d/10-lvm-dbus.cfg".source = "${libblockdev}/etc/libblockdev/${majorVer}/conf.d/10-lvm-dbus.cfg";
      });
security.polkit.enable = true; security.polkit.enable = true;
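
The udisks2 hunk above derives the `/etc/libblockdev/<major>/conf.d` paths from the libblockdev bundled with udisks2; `lib.versions.major` simply extracts the first dot-separated component of a version string. A minimal sketch of the helper being relied on (version strings are illustrative):

```nix
# Evaluated with `nix repl -f '<nixpkgs>'`:
lib.versions.major "3.0.4"        # "3"
lib.versions.majorMinor "3.0.4"   # "3.0"
```

So for a libblockdev 3.x the default config files land under `/etc/libblockdev/3/conf.d/`, matching where the library now looks for them.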


@@ -0,0 +1,270 @@
{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.ebusd;

  package = pkgs.ebusd;

  arguments = [
    "${package}/bin/ebusd"
    "--foreground"
    "--updatecheck=off"
    "--device=${cfg.device}"
    "--port=${toString cfg.port}"
    "--configpath=${cfg.configpath}"
    "--scanconfig=${cfg.scanconfig}"
    "--log=main:${cfg.logs.main}"
    "--log=network:${cfg.logs.network}"
    "--log=bus:${cfg.logs.bus}"
    "--log=update:${cfg.logs.update}"
    "--log=other:${cfg.logs.other}"
    "--log=all:${cfg.logs.all}"
  ] ++ lib.optionals cfg.readonly [
    "--readonly"
  ] ++ lib.optionals cfg.mqtt.enable [
    "--mqtthost=${cfg.mqtt.host}"
    "--mqttport=${toString cfg.mqtt.port}"
    "--mqttuser=${cfg.mqtt.user}"
    "--mqttpass=${cfg.mqtt.password}"
  ] ++ lib.optionals cfg.mqtt.home-assistant [
    "--mqttint=${package}/etc/ebusd/mqtt-hassio.cfg"
    "--mqttjson"
  ] ++ lib.optionals cfg.mqtt.retain [
    "--mqttretain"
  ] ++ cfg.extraArguments;

  usesDev = hasPrefix "/" cfg.device;

  command = concatStringsSep " " arguments;

  logLevelOption = mkOption {
    type = types.enum [ "error" "notice" "info" "debug" ];
    default = "info";
    description = lib.mdDoc ''
      Only write log for matching AREAs (main|network|bus|update|other|all) below or equal to LEVEL (error|notice|info|debug) [all:notice].
    '';
  };
in
{
  meta.maintainers = with maintainers; [ nathan-gs ];

  options.services.ebusd = {
    enable = mkEnableOption (lib.mdDoc "ebusd service");

    device = mkOption {
      type = types.str;
      default = "";
      example = "IP:PORT";
      description = lib.mdDoc ''
        Use DEV as the eBUS device [/dev/ttyUSB0]. This can be either:
        - enh:DEVICE or enh:IP:PORT for an enhanced device (adapter v3 and newer only),
        - ens:DEVICE for an enhanced high-speed serial device (adapter v3 and newer with firmware since 20220731 only),
        - DEVICE for a serial device (normal speed, for all other serial adapters such as adapter v2, as well as adapter v3 in non-enhanced mode), or
        - [udp:]IP:PORT for a network device.

        See <https://github.com/john30/ebusd/wiki/2.-Run#device-options> for details.
      '';
    };

    port = mkOption {
      type = types.port;
      default = 8888;
      description = lib.mdDoc ''
        The port on which to listen.
      '';
    };

    readonly = mkOption {
      type = types.bool;
      default = false;
      description = lib.mdDoc ''
        Only read from the device, never write to it.
      '';
    };

    configpath = mkOption {
      type = types.str;
      default = "https://cfg.ebusd.eu/";
      description = lib.mdDoc ''
        Read CSV config files from PATH (a local folder or an HTTPS URL) [https://cfg.ebusd.eu/].
      '';
    };

    scanconfig = mkOption {
      type = types.str;
      default = "full";
      description = lib.mdDoc ''
        Pick CSV config files matching the initial scan ("none" or empty for no initial scan message, "full" for a full scan, or a single hex address to scan; the default is to send a broadcast ident message).
        If combined with --checkconfig, scan message data can be added as arguments to check a particular scan configuration, e.g. "FF08070400/0AB5454850303003277201".
        For further details on this option, see [Automatic configuration](https://github.com/john30/ebusd/wiki/4.7.-Automatic-configuration).
      '';
    };

    logs = {
      main = logLevelOption;
      network = logLevelOption;
      bus = logLevelOption;
      update = logLevelOption;
      other = logLevelOption;
      all = logLevelOption;
    };

    mqtt = {
      enable = mkOption {
        type = types.bool;
        default = false;
        description = lib.mdDoc ''
          Whether to enable MQTT support.
        '';
      };

      host = mkOption {
        type = types.str;
        default = "localhost";
        description = lib.mdDoc ''
          Connect to the MQTT broker on HOST.
        '';
      };

      port = mkOption {
        type = types.port;
        default = 1883;
        description = lib.mdDoc ''
          The port on which to connect to the MQTT broker.
        '';
      };

      home-assistant = mkOption {
        type = types.bool;
        default = false;
        description = lib.mdDoc ''
          Adds the Home Assistant topics to MQTT; read more at [MQTT Integration](https://github.com/john30/ebusd/wiki/MQTT-integration).
        '';
      };

      retain = mkOption {
        type = types.bool;
        default = false;
        description = lib.mdDoc ''
          Set the retain flag on all topics instead of only selected global ones.
        '';
      };

      user = mkOption {
        type = types.str;
        description = lib.mdDoc ''
          The MQTT user to use.
        '';
      };

      password = mkOption {
        type = types.str;
        description = lib.mdDoc ''
          The MQTT password.
        '';
      };
    };

    extraArguments = mkOption {
      type = types.listOf types.str;
      default = [ ];
      description = lib.mdDoc ''
        Extra arguments to pass to the ebus daemon.
      '';
    };
  };

  config = mkIf cfg.enable {
    systemd.services.ebusd = {
      description = "EBUSd Service";
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" ];
      serviceConfig = {
        ExecStart = command;
        DynamicUser = true;
        Restart = "on-failure";

        # Hardening
        CapabilityBoundingSet = "";
        DeviceAllow = lib.optionals usesDev [
          cfg.device
        ];
        DevicePolicy = "closed";
        LockPersonality = true;
        MemoryDenyWriteExecute = false;
        NoNewPrivileges = true;
        PrivateDevices = usesDev;
        PrivateUsers = true;
        PrivateTmp = true;
        ProtectClock = true;
        ProtectControlGroups = true;
        ProtectHome = true;
        ProtectHostname = true;
        ProtectKernelLogs = true;
        ProtectKernelModules = true;
        ProtectKernelTunables = true;
        ProtectProc = "invisible";
        ProcSubset = "pid";
        ProtectSystem = "strict";
        RemoveIPC = true;
        RestrictAddressFamilies = [
          "AF_INET"
          "AF_INET6"
        ];
        RestrictNamespaces = true;
        RestrictRealtime = true;
        RestrictSUIDSGID = true;
        SupplementaryGroups = [
          "dialout"
        ];
        SystemCallArchitectures = "native";
        SystemCallFilter = [
          "@system-service @pkey"
          "~@privileged @resources"
        ];
        UMask = "0077";
      };
    };
  };
}
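
The new ebusd module can then be enabled from a NixOS configuration; a minimal sketch (the device address and MQTT credentials are hypothetical placeholders):

```nix
{
  services.ebusd = {
    enable = true;
    # Hypothetical enhanced adapter reachable over the network:
    device = "enh:192.168.1.10:9999";
    mqtt = {
      enable = true;
      host = "localhost";
      user = "ebusd";
      password = "changeme"; # placeholder; a real secret should not live in the world-readable store
      home-assistant = true;
    };
  };
}
```

Note that `mqtt.password` ends up on the daemon's command line via the `arguments` list, so deployments handling real credentials may prefer passing it through `extraArguments` from a protected source.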


@@ -451,6 +451,7 @@ in {
       "eufylife_ble"
       "esphome"
       "fjaraskupan"
+      "gardena_bluetooth"
       "govee_ble"
       "homekit_controller"
       "inkbird"


@@ -12,16 +12,14 @@ let
   configFile = pkgs.runCommand "matrix-appservice-irc.yml" {
     # Because this program will be run at build time, we need `nativeBuildInputs`
-    nativeBuildInputs = [ (pkgs.python3.withPackages (ps: [ ps.pyyaml ps.jsonschema ])) ];
+    nativeBuildInputs = [ (pkgs.python3.withPackages (ps: [ ps.jsonschema ])) pkgs.remarshal ];
     preferLocalBuild = true;

     config = builtins.toJSON cfg.settings;
     passAsFile = [ "config" ];
   } ''
     # The schema is given as yaml, we need to convert it to json
-    python -c 'import json; import yaml; import sys; json.dump(yaml.safe_load(sys.stdin), sys.stdout)' \
-      < ${pkg}/lib/node_modules/matrix-appservice-irc/config.schema.yml \
-      > config.schema.json
+    remarshal --if yaml --of json -i ${pkg}/config.schema.yml -o config.schema.json
     python -m jsonschema config.schema.json -i $configPath
     cp "$configPath" "$out"
   '';
@@ -215,7 +213,10 @@ in {
         LockPersonality = true;
         RestrictRealtime = true;
         PrivateMounts = true;
-        SystemCallFilter = "~@aio @clock @cpu-emulation @debug @keyring @memlock @module @mount @obsolete @raw-io @setuid @swap";
+        SystemCallFilter = [
+          "@system-service @pkey"
+          "~@privileged @resources"
+        ];
         SystemCallArchitectures = "native";
         # AF_UNIX is required to connect to a postgres socket.
         RestrictAddressFamilies = "AF_UNIX AF_INET AF_INET6";


@@ -138,10 +138,12 @@ in
         "~@privileged"
       ];
       StateDirectory = "matrix-conduit";
+      StateDirectoryMode = "0700";
       ExecStart = "${cfg.package}/bin/conduit";
       Restart = "on-failure";
       RestartSec = 10;
       StartLimitBurst = 5;
+      UMask = "077";
     };
   };
 };


@@ -0,0 +1,96 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.services.matrix-synapse.sliding-sync;
in
{
  options.services.matrix-synapse.sliding-sync = {
    enable = lib.mkEnableOption (lib.mdDoc "sliding sync");

    package = lib.mkPackageOption pkgs "matrix-sliding-sync" { };

    settings = lib.mkOption {
      type = lib.types.submodule {
        freeformType = with lib.types; attrsOf str;
        options = {
          SYNCV3_SERVER = lib.mkOption {
            type = lib.types.str;
            description = lib.mdDoc ''
              The destination homeserver to talk to, not including `/_matrix/`, e.g. `https://matrix.example.org`.
            '';
          };

          SYNCV3_DB = lib.mkOption {
            type = lib.types.str;
            default = "postgresql:///matrix-sliding-sync?host=/run/postgresql";
            description = lib.mdDoc ''
              The postgres connection string.
              Refer to <https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING>.
            '';
          };

          SYNCV3_BINDADDR = lib.mkOption {
            type = lib.types.str;
            default = "127.0.0.1:8009";
            example = "[::]:8008";
            description = lib.mdDoc "The interface and port to listen on.";
          };

          SYNCV3_LOG_LEVEL = lib.mkOption {
            type = lib.types.enum [ "trace" "debug" "info" "warn" "error" "fatal" ];
            default = "info";
            description = lib.mdDoc "The level of verbosity for messages logged.";
          };
        };
      };
      default = { };
      description = lib.mdDoc ''
        Freeform environment variables passed to the sliding sync proxy.
        Refer to <https://github.com/matrix-org/sliding-sync#setup> for all supported values.
      '';
    };

    createDatabase = lib.mkOption {
      type = lib.types.bool;
      default = true;
      description = lib.mdDoc ''
        Whether to enable and configure `services.postgresql` to ensure that the database user
        `matrix-sliding-sync` and the database `matrix-sliding-sync` exist.
      '';
    };

    environmentFile = lib.mkOption {
      type = lib.types.str;
      description = lib.mdDoc ''
        Environment file as defined in {manpage}`systemd.exec(5)`.
        This must contain the {env}`SYNCV3_SECRET` variable, which should
        be generated with {command}`openssl rand -hex 32`.
      '';
    };
  };

  config = lib.mkIf cfg.enable {
    services.postgresql = lib.optionalAttrs cfg.createDatabase {
      enable = true;
      ensureDatabases = [ "matrix-sliding-sync" ];
      ensureUsers = [ rec {
        name = "matrix-sliding-sync";
        ensurePermissions."DATABASE \"${name}\"" = "ALL PRIVILEGES";
      } ];
    };

    systemd.services.matrix-sliding-sync = {
      after = lib.optional cfg.createDatabase "postgresql.service";
      wantedBy = [ "multi-user.target" ];
      environment = cfg.settings;
      serviceConfig = {
        DynamicUser = true;
        EnvironmentFile = cfg.environmentFile;
        ExecStart = lib.getExe cfg.package;
        StateDirectory = "matrix-sliding-sync";
        WorkingDirectory = "%S/matrix-sliding-sync";
      };
    };
  };
}
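
The sliding-sync module above can be wired up as follows; a minimal sketch (the server URL and secret path are hypothetical):

```nix
{
  services.matrix-synapse.sliding-sync = {
    enable = true;
    settings.SYNCV3_SERVER = "https://matrix.example.org";
    # The file must define SYNCV3_SECRET, e.g. generated with `openssl rand -hex 32`:
    environmentFile = "/run/secrets/matrix-sliding-sync.env";
  };
}
```

With `createDatabase = true` (the default), the proxy talks to the local PostgreSQL instance over its Unix socket, matching the default `SYNCV3_DB` connection string.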

Some files were not shown because too many files have changed in this diff.