Project import generated by Copybara.

GitOrigin-RevId: 29b0d4d0b600f8f5dd0b86e3362a33d4181938f9
Default email 2021-03-09 11:18:52 +08:00
parent f9be99903a
commit 75ca762b89
2152 changed files with 41155 additions and 20654 deletions


@@ -1,4 +1,4 @@
-MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$')))
+MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$' -not -name README.md)))
.PHONY: all
all: validate format out/html/index.html out/epub/manual.epub

third_party/nixpkgs/doc/README.md

@@ -0,0 +1,12 @@
# Nixpkgs/doc
This directory houses the source files for the Nixpkgs manual.
You can find the [rendered documentation for Nixpkgs `unstable` on nixos.org](https://nixos.org/manual/nixpkgs/unstable/).
[Docs for Nixpkgs stable](https://nixos.org/manual/nixpkgs/stable/) are also available.
If you want to contribute to the documentation, [here's how to do it](https://nixos.org/manual/nixpkgs/unstable/#chap-contributing).
If you're only getting started with Nix, go to [nixos.org/learn](https://nixos.org/learn).


@@ -6,7 +6,7 @@
This chapter describes tools for creating various types of images.
</para>
<xi:include href="images/appimagetools.xml" />
-<xi:include href="images/dockertools.xml" />
+<xi:include href="images/dockertools.section.xml" />
<xi:include href="images/ocitools.xml" />
<xi:include href="images/snaptools.xml" />
</chapter>


@@ -0,0 +1,298 @@
# pkgs.dockerTools {#sec-pkgs-dockerTools}
`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
## buildImage {#ssec-pkgs-dockerTools-buildImage}
This function is analogous to the `docker build` command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with `docker load`.
The parameters of `buildImage`, with example values, are described below:
[]{#ex-dockerTools-buildImage}
[]{#ex-dockerTools-buildImage-runAsRoot}
```nix
buildImage {
name = "redis";
tag = "latest";
fromImage = someBaseImage;
fromImageName = null;
fromImageTag = "latest";
contents = pkgs.redis;
runAsRoot = ''
#!${pkgs.runtimeShell}
mkdir -p /data
'';
config = {
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = { "/data" = { }; };
};
}
```
The above example will build a Docker image `redis/latest` from the given base image. Loading and running this image in Docker results in `redis-server` being started automatically.
- `name` specifies the name of the resulting image. This is the only required argument for `buildImage`.
- `tag` specifies the tag of the resulting image. By default it's `null`, which indicates that the Nix output hash will be used as the tag.
- `fromImage` is the repository tarball containing the base image. It must be a valid Docker image, such as one exported by `docker save`. By default it's `null`, which can be seen as equivalent to `FROM scratch` in a `Dockerfile`.
- `fromImageName` can be used to further specify the base image within the repository, in case it contains multiple images. By default it's `null`, in which case `buildImage` will pick the first image available in the repository.
- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will pick the first tag available for the base image.
- `contents` is a derivation that will be copied into the new layer of the resulting image. This can be seen as similar to `ADD contents/ /` in a `Dockerfile`. By default it's `null`.
- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This can be seen as similar to `RUN ...` in a `Dockerfile`.
> **_NOTE:_** Using this parameter requires the `kvm` device to be available.
- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied into the layer itself. Only new dependencies that are not already in the existing layers will be copied.
At the end of the process, only a single new layer will be produced and added to the resulting image.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
> **_NOTE:_** If you see errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)` you may need to add `pkgs.iana-etc` to `contents`.
> **_NOTE:_** If you see errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)` you may need to add `pkgs.cacert` to `contents`.
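For instance, a minimal sketch that bundles both of these fixes alongside the main package, assuming `contents` accepts a list here as it does for `buildLayeredImage` (if your version only accepts a single derivation, the paths can be merged with `pkgs.buildEnv`):
```nix
buildImage {
  name = "redis";
  tag = "latest";
  # iana-etc provides /etc/protocols and /etc/services; cacert provides the
  # CA certificate bundle. Both are only needed if the program uses them.
  contents = [ pkgs.redis pkgs.iana-etc pkgs.cacert ];
  config.Cmd = [ "/bin/redis-server" ];
}
```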
By default `buildImage` will use a static date of one second past the UNIX Epoch. This allows `buildImage` to produce binary reproducible images. When listing images with `docker images`, the newly created images will be listed like this:
```ShellSession
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB
```
You can break binary reproducibility but have a sorted, meaningful `CREATED` column by setting `created` to `now`.
```nix
pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
contents = pkgs.hello;
config.Cmd = [ "/bin/hello" ];
}
```
and now the Docker CLI will display a reasonable date and sort the images as expected:
```ShellSession
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
```
however, the produced images will not be binary reproducible.
## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use `streamLayeredImage` instead, which this function uses internally.
`name`
: The name of the resulting image.
`tag` _optional_
: Tag of the generated image.
*Default:* the output path's hash
`contents` _optional_
: Top level paths in the container. Either a single derivation, or a list of derivations.
*Default:* `[]`
`config` _optional_
: Run-time configuration of the container. A full list of the available options can be found in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
*Default:* `{}`
`created` _optional_
: Date and time the layers were created. Follows the same `now` exception supported by `buildImage`.
*Default:* `1970-01-01T00:00:01Z`
`maxLayers` _optional_
: Maximum number of layers to create.
*Default:* `100`
*Maximum:* `125`
`extraCommands` _optional_
: Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so the commands can create additional directories and files.
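Putting these parameters together, a minimal sketch of a call (the name, command, and layer count are illustrative):
```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
  # Leave room for one layer of "unpopular" paths plus the image configuration.
  maxLayers = 120;
}
```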
### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}
Each path directly listed in `contents` will have a symlink in the root of the image.
For example:
```nix
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
```
will create symlinks for all the paths in the `hello` package:
```ShellSession
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
```
### Automatic inclusion of `config` references {#dockerTools-buildLayeredImage-arg-config}
The closure of `config` is automatically included in the closure of the final image.
This allows you to make very simple Docker images with very little code. This container will start up and run `hello`:
```nix
pkgs.dockerTools.buildLayeredImage {
name = "hello";
config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```
### Adjusting `maxLayers` {#dockerTools-buildLayeredImage-arg-maxLayers}
Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
Modern Docker installations support up to 128 layers; older versions, however, support as few as 42.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will then be impossible to extend the image further.
The first `maxLayers - 2` most "popular" paths will have their own individual layers, then layer #`maxLayers - 1` will contain all the remaining "unpopular" paths, and finally layer #`maxLayers` will contain the image configuration.
Docker's layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an image.
## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage}
Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for `buildLayeredImage`. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.
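For example, a sketch that mirrors the `buildLayeredImage` example above (the name and command are illustrative):
```nix
pkgs.dockerTools.streamLayeredImage {
  name = "hello";
  tag = "latest";
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
}
```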
The image produced by running the output script can be piped directly into `docker load`, to load it into the local docker daemon:
```ShellSession
$(nix-build) | docker load
```
Alternatively, the image can be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
```
## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}
This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.
Its parameters are described in the example below:
```nix
pullImage {
imageName = "nixos/nix";
imageDigest =
"sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
finalImageName = "nix";
finalImageTag = "1.11";
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
os = "linux";
arch = "x86_64";
}
```
- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.
- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.
- `finalImageName`, if specified, is the name of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.
- `finalImageTag`, if specified, is the tag of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's `latest`.
- `sha256` is the checksum of the whole fetched image. This argument is required.
- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.
- `arch`, if specified, is the CPU architecture of the fetched image. By default it's `x86_64`.
The `nix-prefetch-docker` command can be used to get the required image parameters:
```ShellSession
$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
```
Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```
The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:
```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
```
## exportImage {#ssec-pkgs-dockerTools-exportImage}
This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. The result is the merge of all the layers of the image and, as such, is suitable for being imported into Docker with `docker import`.
> **_NOTE:_** Using this function requires the `kvm` device to be available.
The parameters of `exportImage` are the following:
```nix
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
```
The parameters relating to the base image have the same meaning as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.
The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.
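As a usage sketch, assuming the image above was flattened with `nix-build` into `./result`, it can be imported and tagged like this (the image name is illustrative):
```ShellSession
$ docker import ./result flattened/some-image:latest
```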
## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}
This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for use in a [`buildImage` `runAsRoot`](#ex-dockerTools-buildImage-runAsRoot) script, as in the example below:
```nix
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${pkgs.runtimeShell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
```
Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.


@@ -1,499 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-dockerTools">
<title>pkgs.dockerTools</title>
<para>
<varname>pkgs.dockerTools</varname> is a set of functions for creating and manipulating Docker images according to the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120"> Docker Image Specification v1.2.0 </link>. Docker itself is not used to perform any of the operations done by these functions.
</para>
<section xml:id="ssec-pkgs-dockerTools-buildImage">
<title>buildImage</title>
<para>
This function is analogous to the <command>docker build</command> command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with <command>docker load</command>.
</para>
<para>
The parameters of <varname>buildImage</varname> with relative example values are described below:
</para>
<example xml:id='ex-dockerTools-buildImage'>
<title>Docker build</title>
<programlisting>
buildImage {
name = "redis"; <co xml:id='ex-dockerTools-buildImage-1' />
tag = "latest"; <co xml:id='ex-dockerTools-buildImage-2' />
fromImage = someBaseImage; <co xml:id='ex-dockerTools-buildImage-3' />
fromImageName = null; <co xml:id='ex-dockerTools-buildImage-4' />
fromImageTag = "latest"; <co xml:id='ex-dockerTools-buildImage-5' />
contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' />
runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' />
#!${pkgs.runtimeShell}
mkdir -p /data
'';
config = { <co xml:id='ex-dockerTools-buildImage-8' />
Cmd = [ "/bin/redis-server" ];
WorkingDir = "/data";
Volumes = {
"/data" = {};
};
};
}
</programlisting>
</example>
<para>
The above example will build a Docker image <literal>redis/latest</literal> from the given base image. Loading and running this image in Docker results in <literal>redis-server</literal> being started automatically.
</para>
<calloutlist>
<callout arearefs='ex-dockerTools-buildImage-1'>
<para>
<varname>name</varname> specifies the name of the resulting image. This is the only required argument for <varname>buildImage</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-2'>
<para>
<varname>tag</varname> specifies the tag of the resulting image. By default it's <literal>null</literal>, which indicates that the nix output hash will be used as tag.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-3'>
<para>
<varname>fromImage</varname> is the repository tarball containing the base image. It must be a valid Docker image, such as exported by <command>docker save</command>. By default it's <literal>null</literal>, which can be seen as equivalent to <literal>FROM scratch</literal> of a <filename>Dockerfile</filename>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-4'>
<para>
<varname>fromImageName</varname> can be used to further specify the base image within the repository, in case it contains multiple images. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will peek the first image available in the repository.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-5'>
<para>
<varname>fromImageTag</varname> can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will peek the first tag available for the base image.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-6'>
<para>
<varname>contents</varname> is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as <command>ADD contents/ /</command> in a <filename>Dockerfile</filename>. By default it's <literal>null</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-runAsRoot'>
<para>
<varname>runAsRoot</varname> is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied <varname>contents</varname> derivation. This can be similarly seen as <command>RUN ...</command> in a <filename>Dockerfile</filename>.
<note>
<para>
Using this parameter requires the <literal>kvm</literal> device to be available.
</para>
</note>
</para>
</callout>
<callout arearefs='ex-dockerTools-buildImage-8'>
<para>
<varname>config</varname> is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>.
</para>
</callout>
</calloutlist>
<para>
After the new layer has been created, its closure (to which <varname>contents</varname>, <varname>config</varname> and <varname>runAsRoot</varname> contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
</para>
<para>
At the end of the process, only one new single layer will be produced and added to the resulting image.
</para>
<para>
The resulting repository will only list the single image <varname>image/tag</varname>. In the case of <xref linkend='ex-dockerTools-buildImage'/> it would be <varname>redis/latest</varname>.
</para>
<para>
It is possible to inspect the arguments with which an image was built using its <varname>buildArgs</varname> attribute.
</para>
<note>
<para>
If you see errors similar to <literal>getProtocolByName: does not exist (no such protocol name: tcp)</literal> you may need to add <literal>pkgs.iana-etc</literal> to <varname>contents</varname>.
</para>
</note>
<note>
<para>
If you see errors similar to <literal>Error_Protocol ("certificate has unknown CA",True,UnknownCa)</literal> you may need to add <literal>pkgs.cacert</literal> to <varname>contents</varname>.
</para>
</note>
<example xml:id="example-pkgs-dockerTools-buildImage-creation-date">
<title>Impurely Defining a Docker Layer's Creation Date</title>
<para>
By default <function>buildImage</function> will use a static date of one second past the UNIX Epoch. This allows <function>buildImage</function> to produce binary reproducible images. When listing images with <command>docker images</command>, the newly created images will be listed like this:
</para>
<screen>
<prompt>$ </prompt>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest 08c791c7846e 48 years ago 25.2MB
</screen>
<para>
You can break binary reproducibility but have a sorted, meaningful <literal>CREATED</literal> column by setting <literal>created</literal> to <literal>now</literal>.
</para>
<programlisting><![CDATA[
pkgs.dockerTools.buildImage {
name = "hello";
tag = "latest";
created = "now";
contents = pkgs.hello;
config.Cmd = [ "/bin/hello" ];
}
]]></programlisting>
<para>
and now the Docker CLI will display a reasonable date and sort the images as expected:
<screen>
<prompt>$ </prompt>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
</screen>
however, the produced images will not be binary reproducible.
</para>
</example>
</section>
<section xml:id="ssec-pkgs-dockerTools-buildLayeredImage">
<title>buildLayeredImage</title>
<para>
Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use <function>streamLayeredImage</function> instead, which this function uses internally.
</para>
<variablelist>
<varlistentry>
<term>
<varname>name</varname>
</term>
<listitem>
<para>
The name of the resulting image.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>tag</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Tag of the generated image.
</para>
<para>
<emphasis>Default:</emphasis> the output path's hash
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>contents</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Top level paths in the container. Either a single derivation, or a list of derivations.
</para>
<para>
<emphasis>Default:</emphasis> <literal>[]</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>config</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Run-time configuration of the container. A full list of the options are available at in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>{}</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>created</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Date and time the layers were created. Follows the same <literal>now</literal> exception supported by <literal>buildImage</literal>.
</para>
<para>
<emphasis>Default:</emphasis> <literal>1970-01-01T00:00:01Z</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>maxLayers</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Maximum number of layers to create.
</para>
<para>
<emphasis>Default:</emphasis> <literal>100</literal>
</para>
<para>
<emphasis>Maximum:</emphasis> <literal>125</literal>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>
<varname>extraCommands</varname> <emphasis>optional</emphasis>
</term>
<listitem>
<para>
Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so can create additional directories and files.
</para>
</listitem>
</varlistentry>
</variablelist>
<section xml:id="dockerTools-buildLayeredImage-arg-contents">
<title>Behavior of <varname>contents</varname> in the final image</title>
<para>
Each path directly listed in <varname>contents</varname> will have a symlink in the root of the image.
</para>
<para>
For example:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
]]></programlisting>
will create symlinks for all the paths in the <literal>hello</literal> package:
<screen><![CDATA[
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
]]></screen>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-config">
<title>Automatic inclusion of <varname>config</varname> references</title>
<para>
The closure of <varname>config</varname> is automatically included in the closure of the final image.
</para>
<para>
This allows you to make very simple Docker images with very little code. This container will start up and run <command>hello</command>:
<programlisting><![CDATA[
pkgs.dockerTools.buildLayeredImage {
name = "hello";
config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
]]></programlisting>
</para>
</section>
<section xml:id="dockerTools-buildLayeredImage-arg-maxLayers">
<title>Adjusting <varname>maxLayers</varname></title>
<para>
Increasing the <varname>maxLayers</varname> increases the number of layers which have a chance to be shared between different images.
</para>
<para>
Modern Docker installations support up to 128 layers, however older versions support as few as 42.
</para>
<para>
If the produced image will not be extended by other Docker builds, it is safe to set <varname>maxLayers</varname> to <literal>128</literal>. However it will be impossible to extend the image further.
</para>
<para>
The first (<literal>maxLayers-2</literal>) most "popular" paths will have their own individual layers, then layer #<literal>maxLayers-1</literal> will contain all the remaining "unpopular" paths, and finally layer #<literal>maxLayers</literal> will contain the Image configuration.
</para>
<para>
Docker's Layers are not inherently ordered, they are content-addressable and are not explicitly layered until they are composed in to an Image.
</para>
</section>
</section>
<section xml:id="ssec-pkgs-dockerTools-streamLayeredImage">
<title>streamLayeredImage</title>
<para>
Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for <function>buildLayeredImage</function>. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.
</para>
<para>
The image produced by running the output script can be piped directly into <command>docker load</command>, to load it into the local docker daemon:
<screen><![CDATA[
$(nix-build) | docker load
]]></screen>
</para>
<para>
Alternatively, the image be piped via <command>gzip</command> into <command>skopeo</command>, e.g. to copy it into a registry:
<screen><![CDATA[
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
]]></screen>
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-fetchFromRegistry">
<title>pullImage</title>
<para>
This function is analogous to the <command>docker pull</command> command, in that it can be used to pull a Docker image from a Docker registry. By default <link xlink:href="https://hub.docker.com/">Docker Hub</link> is used to pull images.
</para>
<para>
Its parameters are described in the example below:
</para>
<example xml:id='ex-dockerTools-pullImage'>
<title>Docker pull</title>
<programlisting>
pullImage {
imageName = "nixos/nix"; <co xml:id='ex-dockerTools-pullImage-1' />
imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b"; <co xml:id='ex-dockerTools-pullImage-2' />
finalImageName = "nix"; <co xml:id='ex-dockerTools-pullImage-3' />
finalImageTag = "1.11"; <co xml:id='ex-dockerTools-pullImage-4' />
sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8"; <co xml:id='ex-dockerTools-pullImage-5' />
os = "linux"; <co xml:id='ex-dockerTools-pullImage-6' />
arch = "x86_64"; <co xml:id='ex-dockerTools-pullImage-7' />
}
</programlisting>
</example>
<calloutlist>
<callout arearefs='ex-dockerTools-pullImage-1'>
<para>
<varname>imageName</varname> specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. <literal>nixos</literal>). This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-2'>
<para>
<varname>imageDigest</varname> specifies the digest of the image to be downloaded. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-3'>
<para>
<varname>finalImageName</varname>, if specified, this is the name of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's equal to <varname>imageName</varname>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-4'>
<para>
<varname>finalImageTag</varname>, if specified, this is the tag of the image to be created. Note it is never used to fetch the image since we prefer to rely on the immutable digest ID. By default it's <literal>latest</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-5'>
<para>
<varname>sha256</varname> is the checksum of the whole fetched image. This argument is required.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-6'>
<para>
<varname>os</varname>, if specified, is the operating system of the fetched image. By default it's <literal>linux</literal>.
</para>
</callout>
<callout arearefs='ex-dockerTools-pullImage-7'>
<para>
<varname>arch</varname>, if specified, is the cpu architecture of the fetched image. By default it's <literal>x86_64</literal>.
</para>
</callout>
</calloutlist>
<para>
<literal>nix-prefetch-docker</literal> command can be used to get required image parameters:
<screen>
<prompt>$ </prompt>nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
</screen>
Since a given <varname>imageName</varname> may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the <option>--os</option> and <option>--arch</option> arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
</screen>
Desired image name and tag can be set using <option>--final-image-name</option> and <option>--final-image-tag</option> arguments:
<screen>
<prompt>$ </prompt>nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
</screen>
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-exportImage">
<title>exportImage</title>
<para>
This function is analogous to the <command>docker export</command> command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with <command>docker import</command>.
</para>
<note>
<para>
Using this function requires the <literal>kvm</literal> device to be available.
</para>
</note>
<para>
The parameters of <varname>exportImage</varname> are the following:
</para>
<example xml:id='ex-dockerTools-exportImage'>
<title>Docker export</title>
<programlisting>
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
</programlisting>
</example>
<para>
The parameters relative to the base image have the same synopsis as described in <xref linkend='ssec-pkgs-dockerTools-buildImage'/>, except that <varname>fromImage</varname> is the only required argument in this case.
</para>
<para>
The <varname>name</varname> argument is the name of the derivation output, which defaults to <varname>fromImage.name</varname>.
</para>
</section>
<section xml:id="ssec-pkgs-dockerTools-shadowSetup">
<title>shadowSetup</title>
<para>
This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for being used in a <varname>runAsRoot</varname> <xref linkend='ex-dockerTools-buildImage-runAsRoot'/> script for cases like in the example below:
</para>
<example xml:id='ex-dockerTools-shadowSetup'>
<title>Shadow base files</title>
<programlisting>
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${pkgs.runtimeShell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
</programlisting>
</example>
<para>
Creating base files like <literal>/etc/passwd</literal> or <literal>/etc/login.defs</literal> is necessary for shadow-utils to manipulate users and groups.
</para>
</section>
</section>


@@ -17,9 +17,11 @@
<section xml:id="sec-citrix-selfservice">
<title>Citrix Selfservice</title>
<para>
The <link xlink:href="https://support.citrix.com/article/CTX200337">selfservice</link> is an application managing Citrix desktops and applications. Please note that this feature only works with at least <package>citrix_workspace_20_06_0</package> and later versions.
</para>
<para>
In order to set this up, you first have to <link xlink:href="https://its.uiowa.edu/support/article/102186">download the <literal>.cr</literal> file from the Netscaler Gateway</link>. After that you can configure the <command>selfservice</command> like this:
<screen>


@@ -36,7 +36,7 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t
;; load some packages
(use-package company
-:bind ("&lt;C-tab&gt;" . company-complete)
+:bind ("<C-tab>" . company-complete)
:diminish company-mode
:commands (company-mode global-company-mode)
:defer 1


@@ -180,17 +180,12 @@ args.stdenv.mkDerivation (args // {
</listitem>
<listitem>
<para>
-Arguments should be listed in the order they are used, with the
-exception of <varname>lib</varname>, which always goes first.
+Arguments should be listed in the order they are used, with the exception of <varname>lib</varname>, which always goes first.
</para>
</listitem>
<listitem>
<para>
-Prefer using the top-level <varname>lib</varname> over its alias
-<literal>stdenv.lib</literal>. <varname>lib</varname> is unrelated to
-<varname>stdenv</varname>, and so <literal>stdenv.lib</literal> should only
-be used as a convenience alias when developing to avoid having to modify
-the function inputs just to test something out.
+Prefer using the top-level <varname>lib</varname> over its alias <literal>stdenv.lib</literal>. <varname>lib</varname> is unrelated to <varname>stdenv</varname>, and so <literal>stdenv.lib</literal> should only be used as a convenience alias when developing to avoid having to modify the function inputs just to test something out.
</para>
</listitem>
</itemizedlist>
@@ -689,8 +684,7 @@ args.stdenv.mkDerivation (args // {
</varlistentry>
<varlistentry>
<term>
-If its a <emphasis>theme</emphasis> for a <emphasis>desktop environment</emphasis>,
-a <emphasis>window manager</emphasis> or a <emphasis>display manager</emphasis>:
+If its a <emphasis>theme</emphasis> for a <emphasis>desktop environment</emphasis>, a <emphasis>window manager</emphasis> or a <emphasis>display manager</emphasis>:
</term>
<listitem>
<para>


@@ -1677,8 +1677,7 @@ recursiveUpdate
<xi:include href="./locations.xml" xpointer="lib.attrsets.recurseIntoAttrs" />
<para>
-Make various Nix tools consider the contents of the resulting
-attribute set when looking for what to build, find, etc.
+Make various Nix tools consider the contents of the resulting attribute set when looking for what to build, find, etc.
</para>
<para>
@@ -1749,5 +1748,4 @@ cartesianProductOfSets { a = [ 1 2 ]; b = [ 10 20 ]; }
]]></programlisting>
</example>
</section>
</section>


@@ -611,7 +611,7 @@ Using the example above, the analagous pytestCheckHook usage would be:
"update"
];
-disabledTestFiles = [
+disabledTestPaths = [
"tests/test_failing.py"
];
```
@@ -1188,7 +1188,8 @@ community to help save time. No tool is preferred at the moment.
expressions for your Python project. Note that [sharing derivations from
pypi2nix with nixpkgs is possible but not
encouraged](https://github.com/nix-community/pypi2nix/issues/222#issuecomment-443497376).
-- [python2nix](https://github.com/proger/python2nix) by Vladimir Kirillov.
+- [nixpkgs-pytools](https://github.com/nix-community/nixpkgs-pytools)
+- [poetry2nix](https://github.com/nix-community/poetry2nix)

### Deterministic builds
@@ -1554,9 +1555,9 @@ Following rules are desired to be respected:
* Python libraries are called from `python-packages.nix` and packaged with
  `buildPythonPackage`. The expression of a library should be in
-  `pkgs/development/python-modules/<name>/default.nix`. Libraries in
-  `pkgs/top-level/python-packages.nix` are sorted quasi-alphabetically to avoid
-  merge conflicts.
+  `pkgs/development/python-modules/<name>/default.nix`.
+* Libraries in `pkgs/top-level/python-packages.nix` are sorted
+  alphanumerically to avoid merge conflicts and ease locating attributes.
* Python applications live outside of `python-packages.nix` and are packaged
  with `buildPythonApplication`.
* Make sure libraries build for all Python interpreters.
@@ -1570,3 +1571,4 @@ Following rules are desired to be respected:
  [PEP 0503](https://www.python.org/dev/peps/pep-0503/#normalized-names). This
  means that characters should be converted to lowercase and `.` and `_` should
  be replaced by a single `-` (foo-bar-baz instead of Foo__Bar.baz )
+* Attribute names in `python-packages.nix` should be sorted alphanumerically.


@@ -121,7 +121,7 @@ Use the `meta.broken` attribute to disable the package for unsupported Qt versio
stdenv.mkDerivation {
# ...
-# Disable this library with Qt &lt; 5.9.0
+# Disable this library with Qt < 5.9.0
meta.broken = lib.versionOlder qtbase.version "5.9.0";
}
```


@@ -223,7 +223,7 @@ sometimes it may be necessary to disable this so the tests run consecutively.
```nix
rustPlatform.buildRustPackage {
/* ... */
-cargoParallelTestThreads = false;
+dontUseCargoParallelTests = true;
}
```
@@ -264,6 +264,198 @@ rustPlatform.buildRustPackage rec {
}
```
## Compiling non-Rust packages that include Rust code
Several non-Rust packages incorporate Rust code for performance- or
security-sensitive parts. `rustPlatform` exposes several functions and
hooks that can be used to integrate Cargo into non-Rust packages.
### Vendoring of dependencies
Since network access is not allowed in sandboxed builds, Rust crate
dependencies need to be retrieved using a fetcher. `rustPlatform`
provides the `fetchCargoTarball` fetcher, which vendors all
dependencies of a crate. For example, given a source path `src`
containing `Cargo.toml` and `Cargo.lock`, `fetchCargoTarball`
can be used as follows:
```nix
cargoDeps = rustPlatform.fetchCargoTarball {
inherit src;
hash = "sha256-BoHIN/519Top1NUBjpB/oEMqi86Omt3zTQcXFWqrek0=";
};
```
The `src` attribute is required, as well as a hash specified through
one of the `sha256` or `hash` attributes. The following optional
attributes can also be used:
* `name`: the name that is used for the dependencies tarball. If
`name` is not specified, then the name `cargo-deps` will be used.
* `sourceRoot`: when the `Cargo.lock`/`Cargo.toml` are in a
subdirectory, `sourceRoot` specifies the relative path to these
files.
* `patches`: patches to apply before vendoring. This is useful when
the `Cargo.lock`/`Cargo.toml` files need to be patched before
vendoring.
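A sketch combining these optional attributes, for a hypothetical project whose `Cargo.toml` and `Cargo.lock` live in a `rust/` subdirectory and whose lockfile first needs a patch (`lib` is assumed to be in scope):
```nix
cargoDeps = rustPlatform.fetchCargoTarball {
  inherit src;
  # All names and paths below are illustrative.
  name = "myproject-1.0.0-vendor";
  # Cargo.toml/Cargo.lock live in rust/ inside the unpacked source tree.
  sourceRoot = "myproject-1.0.0/rust";
  # Applied before vendoring, e.g. to fix up an out-of-date Cargo.lock.
  patches = [ ./update-cargo-lock.patch ];
  sha256 = lib.fakeSha256; # replace with the real hash after the first build
};
```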
### Hooks
`rustPlatform` provides the following hooks to automate Cargo builds:
* `cargoSetupHook`: configure Cargo to use dependencies vendored
through `fetchCargoTarball`. This hook uses the `cargoDeps`
environment variable to find the vendored dependencies. If a project
already vendors its dependencies, the variable `cargoVendorDir` can
be used instead. When the `Cargo.toml`/`Cargo.lock` files are not in
`sourceRoot`, then the optional `cargoRoot` is used to specify the
Cargo root directory relative to `sourceRoot`.
* `cargoBuildHook`: use Cargo to build a crate. If the crate to be
built is a crate in e.g. a Cargo workspace, the relative path to the
crate to build can be set through the optional `buildAndTestSubdir`
environment variable. Additional Cargo build flags can be passed
through `cargoBuildFlags`.
* `maturinBuildHook`: use [Maturin](https://github.com/PyO3/maturin)
to build a Python wheel. Similar to `cargoBuildHook`, the optional
variable `buildAndTestSubdir` can be used to build a crate in a
Cargo workspace. Additional maturin flags can be passed through
`maturinBuildFlags`.
* `cargoCheckHook`: run tests using Cargo. Additional flags can be
passed to Cargo using `checkFlags` and `checkFlagsArray`. By
default, tests are run in parallel. This can be disabled by setting
`dontUseCargoParallelTests`.
* `cargoInstallHook`: install binaries and static/shared libraries
that were built using `cargoBuildHook`.
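As a sketch of how these hooks compose outside of Python packaging, a non-Rust derivation with an embedded crate might wire them up as follows (the package name is hypothetical, and `lib` is assumed to be in scope):
```nix
stdenv.mkDerivation rec {
  pname = "myproject"; # hypothetical
  version = "1.0.0";

  src = ./.;

  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    name = "${pname}-${version}";
    sha256 = lib.fakeSha256; # replace with the real hash
  };

  # cargoSetupHook points Cargo at the vendored dependencies in cargoDeps;
  # the build, check and install hooks then provide the corresponding phases.
  nativeBuildInputs = with rustPlatform; [
    cargoSetupHook
    cargoBuildHook
    cargoCheckHook
    cargoInstallHook
    rust.cargo
    rust.rustc
  ];
}
```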
### Examples
#### Python package using `setuptools-rust`
For Python packages using `setuptools-rust`, you can use
`fetchCargoTarball` and `cargoSetupHook` to retrieve and set up Cargo
dependencies. The build itself is then performed by
`buildPythonPackage`.
The following example outlines how the `tokenizers` Python package is
built. Since the Python package is in the `source/bindings/python`
directory of the *tokenizers* project's source archive, we use
`sourceRoot` to point the tooling to this directory:
```nix
{ fetchFromGitHub
, buildPythonPackage
, rustPlatform
, setuptools-rust
}:
buildPythonPackage rec {
pname = "tokenizers";
version = "0.10.0";
src = fetchFromGitHub {
owner = "huggingface";
repo = pname;
rev = "python-v${version}";
hash = "sha256-rQ2hRV52naEf6PvRsWVCTN7B1oXAQGmnpJw4iIdhamw=";
};
cargoDeps = rustPlatform.fetchCargoTarball {
inherit src sourceRoot;
name = "${pname}-${version}";
hash = "sha256-BoHIN/519Top1NUBjpB/oEMqi86Omt3zTQcXFWqrek0=";
};
sourceRoot = "source/bindings/python";
nativeBuildInputs = [ setuptools-rust ] ++ (with rustPlatform; [
cargoSetupHook
rust.cargo
rust.rustc
]);
# ...
}
```
In some projects, the Rust crate is not in the main Python source
directory. In such cases, the `cargoRoot` attribute can be used to
specify the crate's directory relative to `sourceRoot`. In the
following example, the crate is in `src/rust`, as specified in the
`cargoRoot` attribute. Note that we also need to specify the correct
path for `fetchCargoTarball`.
```nix
{ buildPythonPackage
, fetchPypi
, rustPlatform
, setuptools-rust
, openssl
}:
buildPythonPackage rec {
pname = "cryptography";
version = "3.4.2"; # Also update the hash in vectors.nix
src = fetchPypi {
inherit pname version;
sha256 = "1i1mx5y9hkyfi9jrrkcw804hmkcglxi6rmf7vin7jfnbr2bf4q64";
};
cargoDeps = rustPlatform.fetchCargoTarball {
inherit src;
sourceRoot = "${pname}-${version}/${cargoRoot}";
name = "${pname}-${version}";
hash = "sha256-PS562W4L1NimqDV2H0jl5vYhL08H9est/pbIxSdYVfo=";
};
cargoRoot = "src/rust";
# ...
}
```
#### Python package using `maturin`
Python packages that use [Maturin](https://github.com/PyO3/maturin)
can be built with `fetchCargoTarball`, `cargoSetupHook`, and
`maturinBuildHook`. For example, the following (partial) derivation
builds the `retworkx` Python package. `fetchCargoTarball` and
`cargoSetupHook` are used to fetch and set up the crate dependencies.
`maturinBuildHook` is used to perform the build.
```nix
{ lib
, buildPythonPackage
, rustPlatform
, fetchFromGitHub
}:
buildPythonPackage rec {
pname = "retworkx";
version = "0.6.0";
src = fetchFromGitHub {
owner = "Qiskit";
repo = "retworkx";
rev = version;
sha256 = "11n30ldg3y3y6qxg3hbj837pnbwjkqw3nxq6frds647mmmprrd20";
};
cargoDeps = rustPlatform.fetchCargoTarball {
inherit src;
name = "${pname}-${version}";
hash = "sha256-heOBK8qi2nuc/Ib+I/vLzZ1fUUD/G/KTw9d7M4Hz5O0=";
};
format = "pyproject";
nativeBuildInputs = with rustPlatform; [ cargoSetupHook maturinBuildHook ];
# ...
}
```
## Compiling Rust crates using Nix instead of Cargo
### Simple operation


@@ -26,7 +26,6 @@
<para>
A number of attributes can be used to work with a derivation with multiple outputs. The attribute <varname>outputs</varname> is a list of strings, which are the names of the outputs. For each of these names, an identically named attribute is created, corresponding to that output. The attribute <varname>meta.outputsToInstall</varname> is used to determine the default set of outputs to install when using the derivation name unqualified.
</para>
</section>
<section xml:id="sec-multiple-outputs-installing">
<title>Installing a split package</title>
@@ -154,7 +153,7 @@
</term>
<listitem>
<para>
-is for development-only files. These include C(++) headers, pkg-config, cmake and aclocal files. They go to <varname>dev</varname> or <varname>out</varname> by default.
+is for development-only files. These include C(++) headers (<filename>include/</filename>), pkg-config (<filename>lib/pkgconfig/</filename>), cmake (<filename>lib/cmake/</filename>) and aclocal files (<varname>share/aclocal/</varname>). They go to <varname>dev</varname> or <varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
@@ -164,7 +163,7 @@
</term>
<listitem>
<para>
-is meant for user-facing binaries, typically residing in bin/. They go to <varname>bin</varname> or <varname>out</varname> by default.
+is meant for user-facing binaries, typically residing in <filename>bin/</filename>. They go to <varname>bin</varname> or <varname>out</varname> by default.
</para>
</listitem>
</varlistentry>
@@ -194,7 +193,7 @@
</term>
<listitem>
<para>
-is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and devhelp books in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
+is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and devhelp books, typically residing in <filename>share/gtk-doc/</filename> and <filename>share/devhelp/</filename>, in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
</para>
</listitem>
</varlistentry>
@@ -204,7 +203,7 @@
</term>
<listitem>
<para>
-is for man pages (except for section 3). They go to <varname>man</varname> or <varname>$outputBin</varname> by default.
+is for man pages (except for section 3), typically residing in <filename>share/man/man[0-9]/</filename>. They go to <varname>man</varname> or <varname>$outputBin</varname> by default.
</para>
</listitem>
</varlistentry>
@@ -214,7+213,7 @@
</term>
<listitem>
<para>
-is for section 3 man pages. They go to <varname>devman</varname> or <varname>$outputMan</varname> by default.
+is for section 3 man pages, typically residing in <filename>share/man/man3/</filename>. They go to <varname>devman</varname> or <varname>$outputMan</varname> by default.
</para>
</listitem>
</varlistentry>
@@ -224,7 +223,7 @@
</term>
<listitem>
<para>
-is for info pages. They go to <varname>info</varname> or <varname>$outputBin</varname> by default.
+is for info pages, typically residing in <filename>share/info/</filename>. They go to <varname>info</varname> or <varname>$outputBin</varname> by default.
</para>
</listitem>
</varlistentry>


@@ -1839,10 +1839,7 @@ addEnvHooks "$hostOffset" myBashFunction
</term>
<listitem>
<para>
-This setup hook moves any systemd user units installed in the lib
-subdirectory into share. In addition, a link is provided from share to
-lib for compatibility. This is needed for systemd to find user services
-when installed into the user profile.
+This setup hook moves any systemd user units installed in the lib subdirectory into share. In addition, a link is provided from share to lib for compatibility. This is needed for systemd to find user services when installed into the user profile.
</para>
</listitem>
</varlistentry>
@@ -2022,8 +2019,7 @@ addEnvHooks "$hostOffset" myBashFunction
This is a special setup hook which helps in packaging proprietary software in that it automatically tries to find missing shared library dependencies of ELF files based on the given <varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>.
</para>
<para>
-You can also specify a <varname>runtimeDependencies</varname> variable which lists dependencies to be unconditionally added to <glossterm>rpath</glossterm> of all executables.
-This is useful for programs that use <citerefentry>
+You can also specify a <varname>runtimeDependencies</varname> variable which lists dependencies to be unconditionally added to <glossterm>rpath</glossterm> of all executables. This is useful for programs that use <citerefentry>
<refentrytitle>dlopen</refentrytitle>
<manvolnum>3</manvolnum> </citerefentry> to load libraries at runtime.
</para>


@ -28,8 +28,7 @@
</para> </para>
<para> <para>
NOTE: DO NOT USE THIS in nixpkgs. NOTE: DO NOT USE THIS in nixpkgs. Further overlays can be added by calling the <literal>pkgs.extend</literal> or <literal>pkgs.appendOverlays</literal>, although it is often preferable to avoid these functions, because they recompute the Nixpkgs fixpoint, which is somewhat expensive to do.
Further overlays can be added by calling the <literal>pkgs.extend</literal> or <literal>pkgs.appendOverlays</literal>, although it is often preferable to avoid these functions, because they recompute the Nixpkgs fixpoint, which is somewhat expensive to do.
</para> </para>
</section> </section>
@ -140,36 +139,31 @@ self: super:
</section> </section>
<section xml:id="sec-overlays-alternatives"> <section xml:id="sec-overlays-alternatives">
<title>Using overlays to configure alternatives</title> <title>Using overlays to configure alternatives</title>
<para> <para>
Certain software packages have different implementations of the Certain software packages have different implementations of the same interface. Other distributions have functionality to switch between these. For example, Debian provides <link
same interface. Other distributions have functionality to switch xlink:href="https://wiki.debian.org/DebianAlternatives">DebianAlternatives</link>. Nixpkgs has what we call <literal>alternatives</literal>, which are configured through overlays.
between these. For example, Debian provides <link
xlink:href="https://wiki.debian.org/DebianAlternatives">DebianAlternatives</link>.
Nixpkgs has what we call <literal>alternatives</literal>, which
are configured through overlays.
</para> </para>
<section xml:id="sec-overlays-alternatives-blas-lapack"> <section xml:id="sec-overlays-alternatives-blas-lapack">
<title>BLAS/LAPACK</title> <title>BLAS/LAPACK</title>
<para> <para>
In Nixpkgs, we have multiple implementations of the BLAS/LAPACK In Nixpkgs, we have multiple implementations of the BLAS/LAPACK numerical linear algebra interfaces. They are:
numerical linear algebra interfaces. They are:
</para> </para>
<itemizedlist> <itemizedlist>
<listitem> <listitem>
<para> <para>
<link xlink:href="https://www.openblas.net/">OpenBLAS</link> <link xlink:href="https://www.openblas.net/">OpenBLAS</link>
</para> </para>
<para> <para>
The Nixpkgs attribute is <literal>openblas</literal> for The Nixpkgs attribute is <literal>openblas</literal> for ILP64 (integer width = 64 bits) and <literal>openblasCompat</literal> for LP64 (integer width = 32 bits). <literal>openblasCompat</literal> is the default.
ILP64 (integer width = 64 bits) and
<literal>openblasCompat</literal> for LP64 (integer width =
32 bits). <literal>openblasCompat</literal> is the default.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
<link xlink:href="http://www.netlib.org/lapack/">LAPACK <link xlink:href="http://www.netlib.org/lapack/">LAPACK reference</link> (also provides BLAS)
reference</link> (also provides BLAS)
</para> </para>
<para> <para>
The Nixpkgs attribute is <literal>lapack-reference</literal>. The Nixpkgs attribute is <literal>lapack-reference</literal>.
@ -178,8 +172,7 @@ self: super:
<listitem>
<para>
<link xlink:href="https://software.intel.com/en-us/mkl">Intel MKL</link> (only works on the x86_64 architecture, unfree)
</para>
<para>
The Nixpkgs attribute is <literal>mkl</literal>.
@ -191,45 +184,25 @@ self: super:
xlink:href="https://github.com/flame/blis">BLIS</link> xlink:href="https://github.com/flame/blis">BLIS</link>
</para> </para>
<para> <para>
BLIS, available through the attribute BLIS, available through the attribute <literal>blis</literal>, is a framework for linear algebra kernels. In addition, it implements the BLAS interface.
<literal>blis</literal>, is a framework for linear algebra kernels. In
addition, it implements the BLAS interface.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<link
xlink:href="https://developer.amd.com/amd-aocl/blas-library/">AMD BLIS/LIBFLAME</link> (optimized for modern AMD x86_64 CPUs)
</para>
<para>
The AMD fork of the BLIS library, with attribute <literal>amd-blis</literal>, extends BLIS with optimizations for modern AMD CPUs. The changes are usually submitted to the upstream BLIS project after some time. However, AMD BLIS typically provides some performance improvements on AMD Zen CPUs. The complementary AMD LIBFLAME library, with attribute <literal>amd-libflame</literal>, provides a LAPACK implementation.
</para>
</listitem>
</itemizedlist>
<para>
Introduced in <link
xlink:href="https://github.com/NixOS/nixpkgs/pull/83888">PR #83888</link>, we are able to override the <literal>blas</literal> and <literal>lapack</literal> packages to use different implementations, through the <literal>blasProvider</literal> and <literal>lapackProvider</literal> arguments. This can be used to select a different provider. BLAS providers will have symlinks in <literal>$out/lib/libblas.so.3</literal> and <literal>$out/lib/libcblas.so.3</literal> to their respective BLAS libraries. Likewise, LAPACK providers will have symlinks in <literal>$out/lib/liblapack.so.3</literal> and <literal>$out/lib/liblapacke.so.3</literal> to their respective LAPACK libraries. For example, Intel MKL is both a BLAS and LAPACK provider. An overlay can be created to use Intel MKL that looks like:
</para>
<programlisting>
self: super:
@ -243,45 +216,23 @@ self: super:
};
}
</programlisting>
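<para>
The body of this listing is elided by the diff above. As a purely illustrative sketch (not part of this change), an overlay selecting Intel MKL through the <literal>blasProvider</literal> and <literal>lapackProvider</literal> arguments described in the preceding paragraph could look like:
</para>
<programlisting>
self: super:
{
  # Illustrative: point both generic packages at MKL.
  blas = super.blas.override {
    blasProvider = self.mkl;
  };
  lapack = super.lapack.override {
    lapackProvider = self.mkl;
  };
}
</programlisting>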
<para>
This overlay uses Intel's MKL library for both BLAS and LAPACK interfaces. Note that the same can be accomplished at runtime using <literal>LD_LIBRARY_PATH</literal> of <literal>libblas.so.3</literal> and <literal>liblapack.so.3</literal>. For instance:
</para>
<screen>
<prompt>$ </prompt>LD_LIBRARY_PATH=$(nix-build -A mkl)/lib:$LD_LIBRARY_PATH nix-shell -p octave --run octave
</screen>
<para>
Intel MKL requires an <literal>openmp</literal> implementation when running with multiple processors. By default, <literal>mkl</literal> will use Intel's <literal>iomp</literal> implementation if no other is specified, but this is a runtime-only dependency and binary compatible with the LLVM implementation. To use that one instead, Intel recommends users set it with <literal>LD_PRELOAD</literal>. Note that <literal>mkl</literal> is only available on <literal>x86_64-linux</literal> and <literal>x86_64-darwin</literal>. Moreover, Hydra is not building and distributing pre-compiled binaries using it.
</para>
<para>
For BLAS/LAPACK switching to work correctly, all packages must depend on <literal>blas</literal> or <literal>lapack</literal>. This ensures that only one BLAS/LAPACK library is used at one time. There are two versions of BLAS/LAPACK currently in the wild, <literal>LP64</literal> (integer size = 32 bits) and <literal>ILP64</literal> (integer size = 64 bits). Some software needs special flags or patches to work with <literal>ILP64</literal>. You can check if <literal>ILP64</literal> is used in Nixpkgs with <varname>blas.isILP64</varname> and <varname>lapack.isILP64</varname>. Some software does NOT work with <literal>ILP64</literal>, and derivations need to specify an assertion to prevent this. You can prevent <literal>ILP64</literal> from being used with the following:
</para>
<programlisting>
{ stdenv, blas, lapack, ... }:
@ -292,33 +243,30 @@ stdenv.mkDerivation {
}
</programlisting>
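<para>
The assertion itself is likewise elided by the diff. A minimal sketch (illustrative only) using the <varname>isILP64</varname> flags described above could be:
</para>
<programlisting>
{ stdenv, blas, lapack, ... }:

# Illustrative: refuse to build against ILP64 BLAS/LAPACK providers.
assert !blas.isILP64;
assert !lapack.isILP64;

stdenv.mkDerivation {
  # ...
}
</programlisting>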
</section>
<section xml:id="sec-overlays-alternatives-mpi">
<title>Switching the MPI implementation</title>
<para>
All programs that are built with <link xlink:href="https://en.wikipedia.org/wiki/Message_Passing_Interface">MPI</link> support use the generic attribute <varname>mpi</varname> as an input. At the moment Nixpkgs natively provides two different MPI implementations:
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://www.open-mpi.org/">Open MPI</link> (default), attribute name <varname>openmpi</varname>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://www.mpich.org/">MPICH</link>, attribute name <varname>mpich</varname>
</para>
</listitem>
</itemizedlist>
</para>
<para>
To provide MPI-enabled applications that use <literal>MPICH</literal>, instead of the default <literal>Open MPI</literal>, simply use the following overlay:
</para>
<programlisting>
self: super:
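# The remainder of this listing is cut off in the diff. A minimal
# sketch (illustrative, not part of this change) would simply repoint
# the generic mpi attribute at MPICH:
{
  mpi = self.mpich;
}
</programlisting>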

View file

@ -7,7 +7,7 @@ let
in
lib.mapAttrs (n: v: v // { shortName = n; }) ({
/* License identifiers from spdx.org where possible.
* If you cannot find your license here, then look for a similar license or
* add it to this list. The URL mentioned above is a good source for inspiration.
@ -877,4 +877,4 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
fullName = "GNU Lesser General Public License v3.0"; fullName = "GNU Lesser General Public License v3.0";
deprecated = true; deprecated = true;
}; };
} })

View file

@ -194,6 +194,12 @@
githubId = 124545;
name = "Anthony Cowley";
};
adamlwgriffiths = {
email = "adam.lw.griffiths@gmail.com";
github = "adamlwgriffiths";
githubId = 1239156;
name = "Adam Griffiths";
};
adamt = {
email = "mail@adamtulinius.dk";
github = "adamtulinius";
@ -273,7 +279,7 @@
name = "James Alexander Feldman-Crough"; name = "James Alexander Feldman-Crough";
}; };
aforemny = { aforemny = {
email = "alexanderforemny@googlemail.com"; email = "aforemny@posteo.de";
github = "aforemny"; github = "aforemny";
githubId = 610962; githubId = 610962;
name = "Alexander Foremny"; name = "Alexander Foremny";
@ -1096,6 +1102,12 @@
githubId = 1432730;
name = "Benjamin Staffin";
};
benneti = {
name = "Benedikt Tissot";
email = "benedikt.tissot@googlemail.com";
github = "benneti";
githubId = 11725645;
};
bennofs = {
email = "benno.fuenfstueck@gmail.com";
github = "bennofs";
@ -1711,6 +1723,12 @@
githubId = 2245737;
name = "Christopher Mark Poole";
};
chuahou = {
email = "human+github@chuahou.dev";
github = "chuahou";
githubId = 12386805;
name = "Chua Hou";
};
chvp = {
email = "nixpkgs@cvpetegem.be";
github = "chvp";
@ -2417,6 +2435,16 @@
githubId = 6806011;
name = "Robert Schütz";
};
dottedmag = {
email = "dottedmag@dottedmag.net";
github = "dottedmag";
githubId = 16120;
name = "Misha Gusarov";
keys = [{
longkeyid = "rsa4096/0x9D20F6503E338888";
fingerprint = "A8DF 1326 9E5D 9A38 E57C FAC2 9D20 F650 3E33 8888";
}];
};
doublec = {
email = "chris.double@double.co.nz";
github = "doublec";
@ -3061,6 +3089,12 @@
githubId = 1276854;
name = "Florian Peter";
};
fbrs = {
email = "yuuki@protonmail.com";
github = "cideM";
githubId = 4246921;
name = "Florian Beeres";
};
fdns = {
email = "fdns02@gmail.com";
github = "fdns";
@ -3073,6 +3107,12 @@
githubId = 9959940;
name = "Andreas Fehn";
};
felixscheinost = {
name = "Felix Scheinost";
email = "felix.scheinost@posteo.de";
github = "felixscheinost";
githubId = 31761492;
};
felixsinger = {
email = "felixsinger@posteo.net";
github = "felixsinger";
@ -4051,6 +4091,16 @@
fingerprint = "7311 2700 AB4F 4CDF C68C F6A5 79C3 C47D C652 EA54"; fingerprint = "7311 2700 AB4F 4CDF C68C F6A5 79C3 C47D C652 EA54";
}]; }];
}; };
ivankovnatsky = {
email = "ikovnatsky@protonmail.ch";
github = "ivankovnatsky";
githubId = 75213;
name = "Ivan Kovnatsky";
keys = [{
longkeyid = "rsa4096/0x3A33FA4C82ED674F";
fingerprint = "6BD3 7248 30BD 941E 9180 C1A3 3A33 FA4C 82ED 674F";
}];
};
ivar = { ivar = {
email = "ivar.scholten@protonmail.com"; email = "ivar.scholten@protonmail.com";
github = "IvarWithoutBones"; github = "IvarWithoutBones";
@ -6055,7 +6105,7 @@
name = "Celine Mercier"; name = "Celine Mercier";
}; };
metadark = { metadark = {
email = "kira.bruneau@gmail.com"; email = "kira.bruneau@pm.me";
name = "Kira Bruneau"; name = "Kira Bruneau";
github = "metadark"; github = "metadark";
githubId = 382041; githubId = 382041;
@ -7203,6 +7253,12 @@
githubId = 157610;
name = "Piotr Bogdan";
};
pborzenkov = {
email = "pavel@borzenkov.net";
github = "pborzenkov";
githubId = 434254;
name = "Pavel Borzenkov";
};
pblkt = {
email = "pebblekite@gmail.com";
github = "pblkt";
@ -7227,6 +7283,12 @@
githubId = 13225611;
name = "Nicolas Martin";
};
p3psi = {
name = "Elliot Boo";
email = "p3psi.boo@gmail.com";
github = "p3psi-boo";
githubId = 43925055;
};
periklis = {
email = "theopompos@gmail.com";
github = "periklis";
@ -7419,6 +7481,16 @@
githubId = 103822;
name = "Patrick Mahoney";
};
pmenke = {
email = "nixos@pmenke.de";
github = "pmenke-de";
githubId = 898922;
name = "Philipp Menke";
keys = [{
longkeyid = "rsa4096/0xEB7F2D4CCBE23B69";
fingerprint = "ED54 5EFD 64B6 B5AA EC61 8C16 EB7F 2D4C CBE2 3B69";
}];
};
pmeunier = {
email = "pierre-etienne.meunier@inria.fr";
github = "P-E-Meunier";
@ -7449,6 +7521,16 @@
githubId = 11365056;
name = "Kevin Liu";
};
pnotequalnp = {
email = "kevin@pnotequalnp.com";
github = "pnotequalnp";
githubId = 46154511;
name = "Kevin Mullins";
keys = [{
longkeyid = "rsa4096/361820A45DB41E9A";
fingerprint = "2CD2 B030 BD22 32EF DF5A 008A 3618 20A4 5DB4 1E9A";
}];
};
polyrod = {
email = "dc1mdp@gmail.com";
github = "polyrod";
@ -8033,6 +8115,12 @@
githubId = 3708689;
name = "Roberto Di Remigio";
};
robertoszek = {
email = "robertoszek@robertoszek.xyz";
github = "robertoszek";
githubId = 1080963;
name = "Roberto";
};
robgssp = {
email = "robgssp@gmail.com";
github = "robgssp";
@ -8075,6 +8163,16 @@
githubId = 1312525;
name = "Rongcui Dong";
};
ronthecookie = {
name = "Ron B";
email = "me@ronthecookie.me";
github = "ronthecookie";
githubId = 2526321;
keys = [{
longkeyid = "rsa2048/0x6F5B32DE5E5FA80C";
fingerprint = "4B2C DDA5 FA35 642D 956D 7294 6F5B 32DE 5E5F A80C";
}];
};
roosemberth = {
email = "roosembert.palacios+nixpkgs@gmail.com";
github = "roosemberth";
@ -9564,7 +9662,7 @@
name = "Tom Smeets"; name = "Tom Smeets";
}; };
toonn = { toonn = {
email = "nnoot@toonn.io"; email = "nixpkgs@toonn.io";
github = "toonn"; github = "toonn";
githubId = 1486805; githubId = 1486805;
name = "Toon Nolten"; name = "Toon Nolten";
@ -10012,6 +10110,12 @@
githubId = 7677567;
name = "Victor SENE";
};
vtuan10 = {
email = "mail@tuan-vo.de";
github = "vtuan10";
githubId = 16415673;
name = "Van Tuan Vo";
};
vyorkin = {
email = "vasiliy.yorkin@gmail.com";
github = "vyorkin";
@ -10458,6 +10562,12 @@
githubId = 1141948;
name = "Zack Grannan";
};
zhaofengli = {
email = "hello@zhaofeng.li";
github = "zhaofengli";
githubId = 2189609;
name = "Zhaofeng Li";
};
zimbatm = {
email = "zimbatm@zimbatm.com";
github = "zimbatm";
@ -10750,16 +10860,20 @@
github = "pulsation"; github = "pulsation";
githubId = 1838397; githubId = 1838397;
}; };
zseri = {
name = "zseri";
email = "zseri.devel@ytrizja.de";
github = "zseri";
githubId = 1618343;
keys = [{
longkeyid = "rsa4096/0x229E63AE5644A96D";
fingerprint = "7AFB C595 0D3A 77BD B00F 947B 229E 63AE 5644 A96D";
}];
};
zupo = { zupo = {
name = "Nejc Zupan"; name = "Nejc Zupan";
email = "nejczupan+nix@gmail.com"; email = "nejczupan+nix@gmail.com";
github = "zupo"; github = "zupo";
githubId = 311580; githubId = 311580;
}; };
felixscheinost = {
name = "Felix Scheinost";
email = "felix.scheinost@posteo.de";
github = "felixscheinost";
githubId = 31761492;
};
}

View file

@ -3,7 +3,8 @@
stdenv.mkDerivation {
name = "nixpkgs-lint-1";
nativeBuildInputs = [ makeWrapper ];
buildInputs = [ perl perlPackages.XMLSimple ];
dontUnpack = true;
buildPhase = "true";

View file

@ -16,9 +16,10 @@
The first line (<literal>{ config, pkgs, ... }:</literal>) denotes that this
is actually a function that takes at least the two arguments
<varname>config</varname> and <varname>pkgs</varname>. (These are explained
later, in chapter <xref linkend="sec-writing-modules" />.) The function returns
a <emphasis>set</emphasis> of option definitions (<literal>{
<replaceable>...</replaceable> }</literal>). These definitions have the form
<literal><replaceable>name</replaceable> =
<replaceable>value</replaceable></literal>, where
<replaceable>name</replaceable> is the name of an option and
<replaceable>value</replaceable> is its value. For example,

View file

@ -74,7 +74,10 @@ linkend="sec-configuration-syntax"/>, we saw the following structure
<callout arearefs='module-syntax-1'>
<para>
This line makes the current Nix expression a function. The variable
<varname>pkgs</varname> contains Nixpkgs (by default, it takes the
<varname>nixpkgs</varname> entry of <envar>NIX_PATH</envar>, see the <link
xlink:href="https://nixos.org/manual/nix/stable/#sec-common-env">Nix
manual</link> for further details), while <varname>config</varname>
contains the full system configuration. This line can be omitted if there
is no reference to <varname>pkgs</varname> and <varname>config</varname>
inside the module.

View file

@ -523,6 +523,21 @@ self: super:
as a hardware RNG, as it will automatically run the krngd task to periodically collect random
data from the device and mix it into the kernel's RNG.
</para>
<para>
The default SMTP port for GitLab has been changed to
<literal>25</literal> from its previous default of
<literal>465</literal>. If you depended on this default, you
should now set the <xref linkend="opt-services.gitlab.smtp.port" />
option.
</para>
</listitem>
<listitem>
<para>
The default version of ImageMagick has been updated from 6 to 7.
You can use <package>imagemagick6</package>,
<package>imagemagick6_light</package>, and
<package>imagemagick6Big</package> if you need the older version.
</para>
</listitem>
</itemizedlist>
</section>
@ -558,14 +573,16 @@ self: super:
</listitem>
<listitem>
<para>
The default version of <literal>nextcloud</literal> is <package>nextcloud21</package>.
Please note that it's <emphasis>not</emphasis> possible to upgrade <literal>nextcloud</literal>
across multiple major versions! This means that it's e.g. not possible to upgrade
from <package>nextcloud18</package> to <package>nextcloud20</package> in a single deploy and
most <literal>20.09</literal> users will have to upgrade to <package>nextcloud20</package>
first.
</para>
<para>
The package can be manually upgraded by setting <xref linkend="opt-services.nextcloud.package" />
to <package>nextcloud21</package>.
</para>
</listitem>
<listitem>
@ -730,6 +747,56 @@ self: super:
terminology has been deprecated and should be replaced with Far/Near in the configuration file.
</para>
</listitem>
<listitem>
<para>
The nix-gc service now accepts randomizedDelaySec (default: 0) and persistent (default: true) parameters.
By default nix-gc will now run immediately if it would have been triggered at least
once during the time when the timer was inactive.
</para>
</listitem>
<listitem>
<para>
The <literal>rustPlatform.buildRustPackage</literal> function is split into several hooks:
<package>cargoSetupHook</package> to set up vendoring for Cargo-based projects,
<package>cargoBuildHook</package> to build a project using Cargo,
<package>cargoInstallHook</package> to install a project using Cargo, and
<package>cargoCheckHook</package> to run tests in Cargo-based projects. With this change,
mixed-language projects can use the relevant hooks within builders other than
<literal>buildRustPackage</literal>. However, these changes also required several API changes to
<literal>buildRustPackage</literal> itself:
<itemizedlist>
<listitem>
<para>
The <literal>target</literal> argument was removed. Instead, <literal>buildRustPackage</literal>
will always use the same target as the C/C++ compiler that is used.
</para>
</listitem>
<listitem>
<para>
The <literal>cargoParallelTestThreads</literal> argument was removed. Parallel tests are
now disabled through <literal>dontUseCargoParallelTests</literal>.
</para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>
The <literal>rustPlatform.maturinBuildHook</literal> hook was added. This hook can be used
with <literal>buildPythonPackage</literal> to build Python packages that are written in Rust
and use Maturin as their build tool. A sketch of such a package follows this list.
</para>
</listitem>
<listitem>
<para>
Kubernetes has <link xlink:href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/">deprecated docker</link> as container runtime.
As a consequence, the Kubernetes module now has support for configuration of custom remote container runtimes and enables containerd by default.
Note that containerd is more strict regarding container image OCI-compliance.
As an example, images with CMD or ENTRYPOINT defined as strings (not lists) will fail on containerd, while working fine on docker.
Please test your setup and container images with containerd prior to upgrading.
</para>
</listitem>
</itemizedlist>
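<para>
As a purely illustrative sketch of the hooks described above (the package name, owner, and hashes are hypothetical placeholders, not taken from this change), a Rust-backed Python package using Maturin might be declared as:
</para>
<programlisting>
{ lib, python3Packages, rustPlatform, fetchFromGitHub }:

python3Packages.buildPythonPackage rec {
  pname = "some-maturin-package"; # hypothetical placeholder
  version = "0.1.0";

  src = fetchFromGitHub {
    owner = "example"; # hypothetical placeholder
    repo = pname;
    rev = "v${version}";
    sha256 = lib.fakeSha256; # replace with the real hash
  };

  # Vendored Cargo dependencies, unpacked by cargoSetupHook.
  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    sha256 = lib.fakeSha256; # replace with the real hash
  };

  format = "pyproject";

  nativeBuildInputs = with rustPlatform; [ cargoSetupHook maturinBuildHook ];
}
</programlisting>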
</section>
</section>

View file

@ -23,6 +23,6 @@ stdenv.mkDerivation {
# Generate the squashfs image.
mksquashfs nix-path-registration $(cat $closureInfo/store-paths) $out \
  -keep-as-directory -all-root -b 1048576 -comp ${comp}
'';
}

View file

@ -18,13 +18,15 @@ rec {
];
qemuSerialDevice = if pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64 then "ttyS0"
else if (with pkgs.stdenv.hostPlatform; isAarch32 || isAarch64 || isPower) then "ttyAMA0"
else throw "Unknown QEMU serial device for system '${pkgs.stdenv.hostPlatform.system}'";
qemuBinary = qemuPkg: {
x86_64-linux = "${qemuPkg}/bin/qemu-kvm -cpu max";
armv7l-linux = "${qemuPkg}/bin/qemu-system-arm -enable-kvm -machine virt -cpu host";
aarch64-linux = "${qemuPkg}/bin/qemu-system-aarch64 -enable-kvm -machine virt,gic-version=host -cpu host";
powerpc64le-linux = "${qemuPkg}/bin/qemu-system-ppc64 -machine powernv";
powerpc64-linux = "${qemuPkg}/bin/qemu-system-ppc64 -machine powernv";
x86_64-darwin = "${qemuPkg}/bin/qemu-kvm -cpu max";
}.${pkgs.stdenv.hostPlatform.system} or "${qemuPkg}/bin/qemu-kvm";
}

View file

@ -26,12 +26,12 @@ in {
systemd.services.enable-ksm = {
description = "Enable Kernel Same-Page Merging";
wantedBy = [ "multi-user.target" ];
script =
''
echo 1 > /sys/kernel/mm/ksm/run
'' + optionalString (cfg.sleep != null)
''
echo ${toString cfg.sleep} > /sys/kernel/mm/ksm/sleep_millisecs
'';
};
};

View file

@ -257,6 +257,7 @@
./services/backup/zfs-replication.nix
./services/backup/znapzend.nix
./services/blockchain/ethereum/geth.nix
./services/backup/zrepl.nix
./services/cluster/hadoop/default.nix
./services/cluster/k3s/default.nix
./services/cluster/kubernetes/addons/dns.nix
@ -381,6 +382,7 @@
./services/hardware/sane.nix
./services/hardware/sane_extra_backends/brscan4.nix
./services/hardware/sane_extra_backends/dsseries.nix
./services/hardware/spacenavd.nix
./services/hardware/tcsd.nix
./services/hardware/tlp.nix
./services/hardware/thinkfan.nix
@ -488,6 +490,7 @@
./services/misc/logkeys.nix
./services/misc/leaps.nix
./services/misc/lidarr.nix
./services/misc/lifecycled.nix
./services/misc/mame.nix
./services/misc/matrix-appservice-discord.nix
./services/misc/matrix-synapse.nix
@ -510,6 +513,7 @@
./services/misc/paperless.nix
./services/misc/parsoid.nix
./services/misc/plex.nix
./services/misc/plikd.nix
./services/misc/tautulli.nix
./services/misc/pinnwand.nix
./services/misc/pykms.nix
@ -1049,6 +1053,7 @@
./testing/service-runner.nix
./virtualisation/anbox.nix
./virtualisation/container-config.nix
./virtualisation/containerd.nix
./virtualisation/containers.nix
./virtualisation/nixos-containers.nix
./virtualisation/oci-containers.nix

View file

@ -1,13 +1,14 @@
--- a/create_manpage_completions.py
+++ b/create_manpage_completions.py
@@ -879,10 +879,6 @@ def parse_manpage_at_path(manpage_path, output_directory):
)
return False
- # Output the magic word Autogenerated so we can tell if we can overwrite this
- built_command_output.insert(
- 0, "# " + CMDNAME + "\n# Autogenerated from man page " + manpage_path
- )
# built_command_output.insert(2, "# using " + parser.__class__.__name__) # XXX MISATTRIBUTES THE CULPABLE PARSER! Was really using Type2 but reporting TypeDeroffManParser
for line in built_command_output:

View file

@ -12,11 +12,30 @@ let
else [ package32 ] ++ extraPackages32;
};
in {
options.programs.steam = {
enable = mkEnableOption "steam";
remotePlay.openFirewall = mkOption {
type = types.bool;
default = false;
description = ''
Open ports in the firewall for Steam Remote Play.
'';
};
dedicatedServer.openFirewall = mkOption {
type = types.bool;
default = false;
description = ''
Open ports in the firewall for Source Dedicated Server.
'';
};
};
config = mkIf cfg.enable {
hardware.opengl = { # this fixes the "glXChooseVisual failed" bug, context: https://github.com/NixOS/nixpkgs/issues/47932
enable = true;
driSupport = true;
driSupport32Bit = true;
};
@ -26,6 +45,18 @@ in {
hardware.steam-hardware.enable = true;
environment.systemPackages = [ steam steam.run ];
networking.firewall = lib.mkMerge [
(mkIf cfg.remotePlay.openFirewall {
allowedTCPPorts = [ 27036 ];
allowedUDPPortRanges = [ { from = 27031; to = 27036; } ];
})
(mkIf cfg.dedicatedServer.openFirewall {
allowedTCPPorts = [ 27015 ]; # SRCDS Rcon port
allowedUDPPorts = [ 27015 ]; # Gameplay traffic
})
];
};
meta.maintainers = with maintainers; [ mkg20001 ];
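# Illustrative usage of the new options (not part of this change):
#
#   programs.steam = {
#     enable = true;
#     remotePlay.openFirewall = true;   # opens the Remote Play ports listed above
#     dedicatedServer.openFirewall = false;
#   };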

View file

@ -0,0 +1,54 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.zrepl;
format = pkgs.formats.yaml { };
configFile = format.generate "zrepl.yml" cfg.settings;
in
{
meta.maintainers = with maintainers; [ cole-h ];
options = {
services.zrepl = {
enable = mkEnableOption "zrepl";
settings = mkOption {
default = { };
description = ''
Configuration for zrepl. See <link
xlink:href="https://zrepl.github.io/configuration.html"/>
for more information.
'';
type = types.submodule {
freeformType = format.type;
};
};
};
};
### Implementation ###
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.zrepl ];
# zrepl looks for its config in this location by default. This
# allows the use of e.g. `zrepl signal wakeup <job>` without having
# to specify the storepath of the config.
environment.etc."zrepl/zrepl.yml".source = configFile;
systemd.packages = [ pkgs.zrepl ];
systemd.services.zrepl = {
requires = [ "local-fs.target" ];
wantedBy = [ "zfs.target" ];
after = [ "zfs.target" ];
path = [ config.boot.zfs.package ];
restartTriggers = [ configFile ];
serviceConfig = {
Restart = "on-failure";
};
};
};
}
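# Illustrative usage of this module (not part of this change): a minimal
# snapshot-only job. The job schema below follows the upstream zrepl
# documentation and is not validated by this freeform module.
#
#   services.zrepl = {
#     enable = true;
#     settings.jobs = [{
#       name = "snap-data";
#       type = "snap";
#       filesystems."rpool/data<" = true;
#       snapshotting = { type = "periodic"; interval = "10m"; prefix = "zrepl_"; };
#       pruning.keep = [{ type = "last_n"; count = 24; }];
#     }];
#   };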

View file

@ -3,7 +3,7 @@
with lib;
let
version = "1.7.1";
cfg = config.services.kubernetes.addons.dns;
ports = {
dns = 10053;
@ -55,9 +55,9 @@ in {
type = types.attrs;
default = {
imageName = "coredns/coredns";
imageDigest = "sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef";
finalImageTag = version;
sha256 = "02r440xcdsgi137k5lmmvp0z5w5fmk8g9mysq5pnysq1wl8sj6mw";
};
};
};
@ -156,7 +156,6 @@ in {
health :${toString ports.health}
kubernetes ${cfg.clusterDomain} in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :${toString ports.metrics}

View file

@ -238,14 +238,40 @@ in
type = int;
};
apiAudiences = mkOption {
description = ''
Kubernetes apiserver ServiceAccount token audiences, passed to the --api-audiences flag.
'';
default = "api,https://kubernetes.default.svc";
type = str;
};
serviceAccountIssuer = mkOption {
description = ''
Kubernetes apiserver ServiceAccount issuer.
'';
default = "https://kubernetes.default.svc";
type = str;
};
serviceAccountSigningKeyFile = mkOption {
description = ''
Path to the file that contains the current private key of the service
account token issuer. The issuer will sign issued ID tokens with this
private key.
'';
type = path;
};
serviceAccountKeyFile = mkOption {
description = ''
File containing PEM-encoded x509 RSA or ECDSA private or public keys,
used to verify ServiceAccount tokens. The specified file can contain
multiple keys, and the flag can be specified multiple times with
different files. If unspecified, --tls-private-key-file is used.
Must be specified when --service-account-signing-key is provided.
'';
type = path;
};
serviceClusterIpRange = mkOption {
@ -357,8 +383,10 @@ in
${optionalString (cfg.runtimeConfig != "") ${optionalString (cfg.runtimeConfig != "")
"--runtime-config=${cfg.runtimeConfig}"} \ "--runtime-config=${cfg.runtimeConfig}"} \
--secure-port=${toString cfg.securePort} \ --secure-port=${toString cfg.securePort} \
${optionalString (cfg.serviceAccountKeyFile!=null) --api-audiences=${toString cfg.apiAudiences} \
"--service-account-key-file=${cfg.serviceAccountKeyFile}"} \ --service-account-issuer=${toString cfg.serviceAccountIssuer} \
--service-account-signing-key-file=${cfg.serviceAccountSigningKeyFile} \
--service-account-key-file=${cfg.serviceAccountKeyFile} \
--service-cluster-ip-range=${cfg.serviceClusterIpRange} \ --service-cluster-ip-range=${cfg.serviceClusterIpRange} \
--storage-backend=${cfg.storageBackend} \ --storage-backend=${cfg.storageBackend} \
${optionalString (cfg.tlsCertFile != null) ${optionalString (cfg.tlsCertFile != null)

View file

@ -5,6 +5,29 @@ with lib;
let
cfg = config.services.kubernetes;
defaultContainerdConfigFile = pkgs.writeText "containerd.toml" ''
version = 2
root = "/var/lib/containerd/daemon"
state = "/var/run/containerd/daemon"
oom_score = 0
[grpc]
address = "/var/run/containerd/containerd.sock"
[plugins."io.containerd.grpc.v1.cri"]
sandbox_image = "pause:latest"
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
max_conf_num = 0
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."io.containerd.runc.v2".options]
SystemdCgroup = true
'';
mkKubeConfig = name: conf: pkgs.writeText "${name}-kubeconfig" (builtins.toJSON {
apiVersion = "v1";
kind = "Config";
@ -222,14 +245,9 @@ in {
})
(mkIf cfg.kubelet.enable {
virtualisation.containerd = {
enable = mkDefault true;
configFile = mkDefault defaultContainerdConfigFile;
# kubernetes needs access to logs
logDriver = mkDefault "json-file";
# iptables must be disabled for kubernetes
extraOptions = "--iptables=false --ip-masq=false";
};
})
@ -269,7 +287,6 @@ in {
users.users.kubernetes = {
uid = config.ids.uids.kubernetes;
description = "Kubernetes user";
extraGroups = [ "docker" ];
group = "kubernetes"; group = "kubernetes";
home = cfg.dataDir; home = cfg.dataDir;
createHome = true; createHome = true;

View file

@ -8,16 +8,6 @@ let
# we want flannel to use kubernetes itself as configuration backend, not direct etcd
storageBackend = "kubernetes";
# needed for flannel to pass options to docker
mkDockerOpts = pkgs.runCommand "mk-docker-opts" {
buildInputs = [ pkgs.makeWrapper ];
} ''
mkdir -p $out
# bashInteractive needed for `compgen`
makeWrapper ${pkgs.bashInteractive}/bin/bash $out/mk-docker-opts --add-flags "${pkgs.kubernetes}/bin/mk-docker-opts.sh"
'';
in
{
###### interface
@ -43,43 +33,17 @@ in
cniVersion = "0.3.1"; cniVersion = "0.3.1";
delegate = { delegate = {
isDefaultGateway = true; isDefaultGateway = true;
bridge = "docker0"; bridge = "mynet";
}; };
}]; }];
}; };
systemd.services.mk-docker-opts = {
description = "Pre-Docker Actions";
path = with pkgs; [ gawk gnugrep ];
script = ''
${mkDockerOpts}/mk-docker-opts -d /run/flannel/docker
systemctl restart docker
'';
serviceConfig.Type = "oneshot";
};
systemd.paths.flannel-subnet-env = {
wantedBy = [ "flannel.service" ];
pathConfig = {
PathModified = "/run/flannel/subnet.env";
Unit = "mk-docker-opts.service";
};
};
systemd.services.docker = {
environment.DOCKER_OPTS = "-b none";
serviceConfig.EnvironmentFile = "-/run/flannel/docker";
};
# read environment variables generated by mk-docker-opts
virtualisation.docker.extraOptions = "$DOCKER_OPTS";
networking = {
firewall.allowedUDPPorts = [
8285 # flannel udp
8472 # flannel vxlan
];
dhcpcd.denyInterfaces = [ "mynet*" "flannel*" ];
};
services.kubernetes.pki.certs = {

View file

@ -23,7 +23,7 @@ let
name = "pause"; name = "pause";
tag = "latest"; tag = "latest";
contents = top.package.pause; contents = top.package.pause;
config.Cmd = "/bin/pause"; config.Cmd = ["/bin/pause"];
}; };
kubeconfig = top.lib.mkKubeConfig "kubelet" cfg.kubeconfig;
@ -125,6 +125,18 @@ in
};
};
containerRuntime = mkOption {
description = "Which container runtime type to use";
type = enum ["docker" "remote"];
default = "remote";
};
containerRuntimeEndpoint = mkOption {
description = "Endpoint at which to find the container runtime api interface/socket";
type = str;
default = "unix:///var/run/containerd/containerd.sock";
};
enable = mkEnableOption "Kubernetes kubelet."; enable = mkEnableOption "Kubernetes kubelet.";
extraOpts = mkOption { extraOpts = mkOption {
@ -235,16 +247,24 @@ in
###### implementation
config = mkMerge [
(mkIf cfg.enable {
environment.etc."cni/net.d".source = cniConfig;
services.kubernetes.kubelet.seedDockerImages = [infraContainer];
boot.kernel.sysctl = {
"net.bridge.bridge-nf-call-iptables" = 1;
"net.ipv4.ip_forward" = 1;
"net.bridge.bridge-nf-call-ip6tables" = 1;
};
systemd.services.kubelet = {
description = "Kubernetes Kubelet Service";
wantedBy = [ "kubernetes.target" ];
after = [ "containerd.service" "network.target" "kube-apiserver.service" ];
path = with pkgs; [
gitMinimal
openssh
docker
util-linux
iproute
ethtool
@ -254,8 +274,12 @@ in
] ++ lib.optional config.boot.zfs.enabled config.boot.zfs.package ++ top.path;
preStart = ''
${concatMapStrings (img: ''
echo "Seeding container image: ${img}"
${if (lib.hasSuffix "gz" img) then
''${pkgs.gzip}/bin/zcat "${img}" | ${pkgs.containerd}/bin/ctr -n k8s.io image import -''
else
''${pkgs.coreutils}/bin/cat "${img}" | ${pkgs.containerd}/bin/ctr -n k8s.io image import -''
}
'') cfg.seedDockerImages}
rm /opt/cni/bin/* || true
@ -306,6 +330,9 @@ in
${optionalString (cfg.tlsKeyFile != null)
"--tls-private-key-file=${cfg.tlsKeyFile}"} \
${optionalString (cfg.verbosity != null) "--v=${toString cfg.verbosity}"} \
--container-runtime=${cfg.containerRuntime} \
--container-runtime-endpoint=${cfg.containerRuntimeEndpoint} \
--cgroup-driver=systemd \
${cfg.extraOpts}
'';
WorkingDirectory = top.dataDir;
@ -315,7 +342,7 @@ in
# Always include cni plugins
services.kubernetes.kubelet.cni.packages = [pkgs.cni-plugins];
boot.kernelModules = ["br_netfilter" "overlay"];
services.kubernetes.kubelet.hostname = with config.networking;
mkDefault (hostName + optionalString (domain != null) ".${domain}");

View file

@ -361,6 +361,7 @@ in
tlsCertFile = mkDefault cert;
tlsKeyFile = mkDefault key;
serviceAccountKeyFile = mkDefault cfg.certs.serviceAccount.cert;
serviceAccountSigningKeyFile = mkDefault cfg.certs.serviceAccount.key;
kubeletClientCaFile = mkDefault caCert;
kubeletClientCertFile = mkDefault cfg.certs.apiserverKubeletClient.cert;
kubeletClientKeyFile = mkDefault cfg.certs.apiserverKubeletClient.key;

View file

@ -89,6 +89,11 @@ in
example = "dbi:Pg:dbname=hydra;host=postgres.example.org;user=foo;"; example = "dbi:Pg:dbname=hydra;host=postgres.example.org;user=foo;";
description = '' description = ''
The DBI string for Hydra database connection. The DBI string for Hydra database connection.
NOTE: Attempts to set `application_name` will be overridden by
`hydra-TYPE` (where TYPE is e.g. `evaluator`, `queue-runner`,
etc.) in all hydra services to more easily distinguish where
queries are coming from.
''; '';
}; };
@ -284,7 +289,9 @@ in
{ wantedBy = [ "multi-user.target" ]; { wantedBy = [ "multi-user.target" ];
requires = optional haveLocalDB "postgresql.service"; requires = optional haveLocalDB "postgresql.service";
after = optional haveLocalDB "postgresql.service"; after = optional haveLocalDB "postgresql.service";
environment = env; environment = env // {
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-init";
};
preStart = '' preStart = ''
mkdir -p ${baseDir} mkdir -p ${baseDir}
chown hydra.hydra ${baseDir} chown hydra.hydra ${baseDir}
@ -339,7 +346,9 @@ in
{ wantedBy = [ "multi-user.target" ]; { wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ]; requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" ]; after = [ "hydra-init.service" ];
environment = serverEnv; environment = serverEnv // {
HYDRA_DBI = "${serverEnv.HYDRA_DBI};application_name=hydra-server";
};
restartTriggers = [ hydraConf ]; restartTriggers = [ hydraConf ];
serviceConfig = serviceConfig =
{ ExecStart = { ExecStart =
@ -361,6 +370,7 @@ in
environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
IN_SYSTEMD = "1"; # to get log severity levels
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-queue-runner";
};
serviceConfig =
{ ExecStart = "@${hydra-package}/bin/hydra-queue-runner hydra-queue-runner -v";
@ -380,7 +390,9 @@ in
after = [ "hydra-init.service" "network.target" ]; after = [ "hydra-init.service" "network.target" ];
path = with pkgs; [ hydra-package nettools jq ]; path = with pkgs; [ hydra-package nettools jq ];
restartTriggers = [ hydraConf ]; restartTriggers = [ hydraConf ];
environment = env; environment = env // {
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-evaluator";
};
serviceConfig = serviceConfig =
{ ExecStart = "@${hydra-package}/bin/hydra-evaluator hydra-evaluator"; { ExecStart = "@${hydra-package}/bin/hydra-evaluator hydra-evaluator";
User = "hydra"; User = "hydra";
@ -392,7 +404,9 @@ in
systemd.services.hydra-update-gc-roots =
{ requires = [ "hydra-init.service" ];
after = [ "hydra-init.service" ];
environment = env // {
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-update-gc-roots";
};
serviceConfig =
{ ExecStart = "@${hydra-package}/bin/hydra-update-gc-roots hydra-update-gc-roots";
User = "hydra";
@ -403,7 +417,9 @@ in
systemd.services.hydra-send-stats =
{ wantedBy = [ "multi-user.target" ];
after = [ "hydra-init.service" ];
environment = env // {
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-send-stats";
};
serviceConfig =
{ ExecStart = "@${hydra-package}/bin/hydra-send-stats hydra-send-stats";
User = "hydra";
@ -417,6 +433,7 @@ in
restartTriggers = [ hydraConf ];
environment = env // {
PGPASSFILE = "${baseDir}/pgpass-queue-runner";
HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-notify";
};
serviceConfig =
{ ExecStart = "@${hydra-package}/bin/hydra-notify hydra-notify";

View file

@ -0,0 +1,34 @@
{
"properties": {},
"rules": [
{
"matches": [
{
"device.name": "~alsa_card.*"
}
],
"actions": {
"update-props": {
"api.alsa.use-acp": true,
"api.acp.auto-profile": false,
"api.acp.auto-port": false
}
}
},
{
"matches": [
{
"node.name": "~alsa_input.*"
},
{
"node.name": "~alsa_output.*"
}
],
"actions": {
"update-props": {
"node.pause-on-idle": false
}
}
}
]
}

View file

@ -0,0 +1,30 @@
{
"properties": {},
"rules": [
{
"matches": [
{
"device.name": "~bluez_card.*"
}
],
"actions": {
"update-props": {}
}
},
{
"matches": [
{
"node.name": "~bluez_input.*"
},
{
"node.name": "~bluez_output.*"
}
],
"actions": {
"update-props": {
"node.pause-on-idle": false
}
}
}
]
}

View file

@ -0,0 +1,26 @@
{
"context.properties": {
"log.level": 0
},
"context.spa-libs": {
"audio.convert.*": "audioconvert/libspa-audioconvert",
"support.*": "support/libspa-support"
},
"context.modules": {
"libpipewire-module-rtkit": {
"args": {},
"flags": [
"ifexists",
"nofail"
]
},
"libpipewire-module-protocol-native": null,
"libpipewire-module-client-node": null,
"libpipewire-module-client-device": null,
"libpipewire-module-adapter": null,
"libpipewire-module-metadata": null,
"libpipewire-module-session-manager": null
},
"filter.properties": {},
"stream.properties": {}
}

View file

@ -0,0 +1,19 @@
{
"context.properties": {
"log.level": 0
},
"context.spa-libs": {
"audio.convert.*": "audioconvert/libspa-audioconvert",
"support.*": "support/libspa-support"
},
"context.modules": {
"libpipewire-module-protocol-native": null,
"libpipewire-module-client-node": null,
"libpipewire-module-client-device": null,
"libpipewire-module-adapter": null,
"libpipewire-module-metadata": null,
"libpipewire-module-session-manager": null
},
"filter.properties": {},
"stream.properties": {}
}

View file

@ -0,0 +1,21 @@
{
"context.properties": {
"log.level": 0
},
"context.spa-libs": {
"support.*": "support/libspa-support"
},
"context.modules": {
"libpipewire-module-rtkit": {
"args": {},
"flags": [
"ifexists",
"nofail"
]
},
"libpipewire-module-protocol-native": null,
"libpipewire-module-client-node": null,
"libpipewire-module-metadata": null
},
"jack.properties": {}
}

View file

@ -0,0 +1,53 @@
{
"context.properties": {},
"context.spa-libs": {
"api.bluez5.*": "bluez5/libspa-bluez5",
"api.alsa.*": "alsa/libspa-alsa",
"api.v4l2.*": "v4l2/libspa-v4l2",
"api.libcamera.*": "libcamera/libspa-libcamera"
},
"context.modules": {
"libpipewire-module-rtkit": {
"args": {},
"flags": [
"ifexists",
"nofail"
]
},
"libpipewire-module-protocol-native": null,
"libpipewire-module-client-node": null,
"libpipewire-module-client-device": null,
"libpipewire-module-adapter": null,
"libpipewire-module-metadata": null,
"libpipewire-module-session-manager": null
},
"session.modules": {
"default": [
"flatpak",
"portal",
"v4l2",
"suspend-node",
"policy-node"
],
"with-audio": [
"metadata",
"default-nodes",
"default-profile",
"default-routes",
"alsa-seq",
"alsa-monitor"
],
"with-alsa": [
"with-audio"
],
"with-jack": [
"with-audio"
],
"with-pulseaudio": [
"with-audio",
"bluez5",
"restore-stream",
"streams-follow-default"
]
}
}

View file

@ -9,18 +9,36 @@ let
&& pkgs.stdenv.isx86_64
&& pkgs.pkgsi686Linux.pipewire != null;
prioritizeNativeProtocol = {
"context.modules" = {
"libpipewire-module-protocol-native" = {
_priority = -100;
_content = null;
};
};
};
# Use upstream config files passed through spa-json-dump as the base
# Patched here as necessary for them to work with this module
defaults = {
alsa-monitor = (builtins.fromJSON (builtins.readFile ./alsa-monitor.conf.json));
bluez-monitor = (builtins.fromJSON (builtins.readFile ./bluez-monitor.conf.json));
media-session = recursiveUpdate (builtins.fromJSON (builtins.readFile ./media-session.conf.json)) prioritizeNativeProtocol;
v4l2-monitor = (builtins.fromJSON (builtins.readFile ./v4l2-monitor.conf.json));
};
# Helpers for generating the pipewire JSON config file
mkSPAValueString = v:
if builtins.isList v then "[${lib.concatMapStringsSep " " mkSPAValueString v}]"
else if lib.types.attrs.check v then
"{${lib.concatStringsSep " " (mkSPAKeyValue v)}}"
else if builtins.isString v then "\"${lib.generators.mkValueStringDefault { } v}\""
else lib.generators.mkValueStringDefault { } v;
mkSPAKeyValue = attrs: map (def: def.content) (
lib.sortProperties
(
lib.mapAttrsToList
(k: v: lib.mkOrder (v._priority or 1000) "${lib.escape [ "=" ":" ] k} = ${mkSPAValueString (v._content or v)}")
attrs
)
);
@ -51,272 +69,41 @@ in {
'';
};
config = {
media-session = mkOption {
type = types.attrs;
description = ''
Configuration for the media session core. For details see
https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/media-session.conf
'';
default = {};
# media-session config file
properties = {
# Properties to configure the session and some
# modules
#mem.mlock-all = false;
#context.profile.modules = "default,rtkit";
};
alsa-monitor = mkOption {
# Mapping from factory name to library.
"api.bluez5.*" = "bluez5/libspa-bluez5";
"api.alsa.*" = "alsa/libspa-alsa";
"api.v4l2.*" = "v4l2/libspa-v4l2";
"api.libcamera.*" = "libcamera/libspa-libcamera";
};
modules = {
# These are the modules that are enabled when a file with
# the key name is found in the media-session.d config directory.
# the default bundle is always enabled.
default = [
"flatpak" # manages flatpak access
"portal" # manage portal permissions
"v4l2" # video for linux udev detection
#"libcamera" # libcamera udev detection
"suspend-node" # suspend inactive nodes
"policy-node" # configure and link nodes
#"metadata" # export metadata API
#"default-nodes" # restore default nodes
#"default-profile" # restore default profiles
#"default-routes" # restore default route
#"streams-follow-default" # move streams when default changes
#"alsa-seq" # alsa seq midi support
#"alsa-monitor" # alsa udev detection
#"bluez5" # bluetooth support
#"restore-stream" # restore stream settings
];
"with-audio" = [
"metadata"
"default-nodes"
"default-profile"
"default-routes"
"alsa-seq"
"alsa-monitor"
];
"with-alsa" = [
"with-audio"
];
"with-jack" = [
"with-audio"
];
"with-pulseaudio" = [
"with-audio"
"bluez5"
"restore-stream"
"streams-follow-default"
];
};
};
};
alsaMonitorConfig = mkOption {
type = types.attrs;
description = ''
Configuration for the alsa monitor. For details see
https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/alsa-monitor.conf
'';
default = {};
# alsa-monitor config file
properties = {
#alsa.jack-device = true
        };
        rules = [
# an array of matches/actions to evaluate
{
# rules for matching a device or node. It is an array of
# properties that all need to match the regexp. If any of the
# matches work, the actions are executed for the object.
matches = [
{
# this matches all cards
device.name = "~alsa_card.*";
}
];
actions = {
# actions can update properties on the matched object.
update-props = {
api.alsa.use-acp = true;
#api.alsa.use-ucm = true;
#api.alsa.soft-mixer = false;
#api.alsa.ignore-dB = false;
#device.profile-set = "profileset-name";
#device.profile = "default profile name";
api.acp.auto-profile = false;
api.acp.auto-port = false;
#device.nick = "My Device";
};
};
}
{
matches = [
{
# matches all sinks
node.name = "~alsa_input.*";
}
{
# matches all sources
node.name = "~alsa_output.*";
}
];
actions = {
update-props = {
#node.nick = "My Node";
#node.nick = null;
#priority.driver = 100;
#priority.session = 100;
#node.pause-on-idle = false;
#resample.quality = 4;
#channelmix.normalize = false;
#channelmix.mix-lfe = false;
#audio.channels = 2;
#audio.format = "S16LE";
#audio.rate = 44100;
#audio.position = "FL,FR";
#api.alsa.period-size = 1024;
#api.alsa.headroom = 0;
#api.alsa.disable-mmap = false;
#api.alsa.disable-batch = false;
};
};
}
];
};
};
      bluez-monitor = mkOption {
        type = types.attrs;
        description = ''
          Configuration for the bluez5 monitor. For details see
          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/bluez-monitor.conf
        '';
        default = {};
      };

    bluezMonitorConfig = mkOption {
      type = types.attrs;
      description = ''
        Configuration for the bluez5 monitor.
      '';
      default = {
# bluez-monitor config file
properties = {
# msbc is not expected to work on all headset + adapter combinations.
#bluez5.msbc-support = true;
#bluez5.sbc-xq-support = true;
# Enabled headset roles (default: [ hsp_hs hfp_ag ]), this
# property only applies to native backend. Currently some headsets
# (Sony WH-1000XM3) are not working with both hsp_ag and hfp_ag
# enabled, disable either hsp_ag or hfp_ag to work around it.
#
# Supported headset roles: hsp_hs (HSP Headset),
# hsp_ag (HSP Audio Gateway),
# hfp_ag (HFP Audio Gateway)
#bluez5.headset-roles = [ "hsp_hs" "hsp_ag" "hfp_ag" ];
# Enabled A2DP codecs (default: all)
#bluez5.codecs = [ "sbc" "aac" "ldac" "aptx" "aptx_hd" ];
        };
        rules = [
# an array of matches/actions to evaluate
{
# rules for matching a device or node. It is an array of
# properties that all need to match the regexp. If any of the
# matches work, the actions are executed for the object.
matches = [
{
# this matches all cards
device.name = "~bluez_card.*";
}
];
actions = {
# actions can update properties on the matched object.
update-props = {
#device.nick = "My Device";
};
};
}
{
matches = [
{
# matches all sinks
node.name = "~bluez_input.*";
}
{
# matches all sources
node.name = "~bluez_output.*";
}
];
actions = {
update-props = {
#node.nick = "My Node"
#node.nick = null;
#priority.driver = 100;
#priority.session = 100;
#node.pause-on-idle = false;
#resample.quality = 4;
#channelmix.normalize = false;
#channelmix.mix-lfe = false;
};
};
}
];
};
};
      v4l2-monitor = mkOption {
        type = types.attrs;
        description = ''
          Configuration for the V4L2 monitor. For details see
          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/v4l2-monitor.conf
        '';
        default = {};
      };

    v4l2MonitorConfig = mkOption {
      type = types.attrs;
      description = ''
        Configuration for the V4L2 monitor.
      '';
      default = {
# v4l2-monitor config file
properties = {
};
rules = [
# an array of matches/actions to evaluate
{
# rules for matching a device or node. It is an array of
# properties that all need to match the regexp. If any of the
# matches work, the actions are executed for the object.
matches = [
{
# this matches all devices
device.name = "~v4l2_device.*";
}
];
actions = {
# actions can update properties on the matched object.
update-props = {
#device.nick = "My Device";
};
};
}
{
matches = [
{
# matches all sinks
node.name = "~v4l2_input.*";
}
{
# matches all sources
node.name = "~v4l2_output.*";
}
];
actions = {
update-props = {
#node.nick = "My Node";
#node.nick = null;
#priority.driver = 100;
#priority.session = 100;
#node.pause-on-idle = true;
};
};
}
];
        };
      };
    };
@@ -325,16 +112,17 @@ in {

  ###### implementation
  config = mkIf cfg.enable {
    environment.systemPackages = [ cfg.package ];
    services.pipewire.sessionManagerExecutable = "${cfg.package}/bin/pipewire-media-session";
    systemd.packages = [ cfg.package ];
    systemd.user.services.pipewire-media-session.wantedBy = [ "pipewire.service" ];

    environment.etc."pipewire/media-session.d/media-session.conf" = { text = toSPAJSON cfg.config; };
    environment.etc."pipewire/media-session.d/media-session.conf" = { text = toSPAJSON (recursiveUpdate defaults.media-session cfg.config.media-session); };
    environment.etc."pipewire/media-session.d/v4l2-monitor.conf" = { text = toSPAJSON cfg.v4l2MonitorConfig; };
    environment.etc."pipewire/media-session.d/v4l2-monitor.conf" = { text = toSPAJSON (recursiveUpdate defaults.v4l2-monitor cfg.config.v4l2-monitor); };
    environment.etc."pipewire/media-session.d/with-alsa" = mkIf config.services.pipewire.alsa.enable { text = ""; };
    environment.etc."pipewire/media-session.d/alsa-monitor.conf" = mkIf config.services.pipewire.alsa.enable { text = toSPAJSON cfg.alsaMonitorConfig; };
    environment.etc."pipewire/media-session.d/alsa-monitor.conf" = mkIf config.services.pipewire.alsa.enable { text = toSPAJSON (recursiveUpdate defaults.alsa-monitor cfg.config.alsa-monitor); };
    environment.etc."pipewire/media-session.d/with-pulseaudio" = mkIf config.services.pipewire.pulse.enable { text = ""; };
    environment.etc."pipewire/media-session.d/bluez-monitor.conf" = mkIf config.services.pipewire.pulse.enable { text = toSPAJSON cfg.bluezMonitorConfig; };
    environment.etc."pipewire/media-session.d/bluez-monitor.conf" = mkIf config.services.pipewire.pulse.enable { text = toSPAJSON (recursiveUpdate defaults.bluez-monitor cfg.config.bluez-monitor); };
    environment.etc."pipewire/media-session.d/with-jack" = mkIf config.services.pipewire.jack.enable { text = ""; };
  };
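As a usage sketch against the options defined above (the property value is illustrative, taken from the commented upstream defaults):

```nix
services.pipewire.media-session.config.bluez-monitor = {
  properties = {
    # Hypothetical tweak: enable mSBC support in the native backend.
    "bluez5.msbc-support" = true;
  };
};
```

Because the module merges this over `defaults.bluez-monitor` with `recursiveUpdate`, unspecified keys keep their upstream values.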
@@ -0,0 +1,28 @@
{
"context.properties": {},
"context.spa-libs": {
"audio.convert.*": "audioconvert/libspa-audioconvert",
"support.*": "support/libspa-support"
},
"context.modules": {
"libpipewire-module-rtkit": {
"args": {},
"flags": [
"ifexists",
"nofail"
]
},
"libpipewire-module-protocol-native": null,
"libpipewire-module-client-node": null,
"libpipewire-module-adapter": null,
"libpipewire-module-metadata": null,
"libpipewire-module-protocol-pulse": {
"args": {
"server.address": [
"unix:native"
]
}
}
},
"stream.properties": {}
}
@@ -0,0 +1,55 @@
{
"context.properties": {
"link.max-buffers": 16,
"core.daemon": true,
"core.name": "pipewire-0"
},
"context.spa-libs": {
"audio.convert.*": "audioconvert/libspa-audioconvert",
"api.alsa.*": "alsa/libspa-alsa",
"api.v4l2.*": "v4l2/libspa-v4l2",
"api.libcamera.*": "libcamera/libspa-libcamera",
"api.bluez5.*": "bluez5/libspa-bluez5",
"api.vulkan.*": "vulkan/libspa-vulkan",
"api.jack.*": "jack/libspa-jack",
"support.*": "support/libspa-support"
},
"context.modules": {
"libpipewire-module-rtkit": {
"args": {},
"flags": [
"ifexists",
"nofail"
]
},
"libpipewire-module-protocol-native": null,
"libpipewire-module-profiler": null,
"libpipewire-module-metadata": null,
"libpipewire-module-spa-device-factory": null,
"libpipewire-module-spa-node-factory": null,
"libpipewire-module-client-node": null,
"libpipewire-module-client-device": null,
"libpipewire-module-portal": {
"flags": [
"ifexists",
"nofail"
]
},
"libpipewire-module-access": {
"args": {}
},
"libpipewire-module-adapter": null,
"libpipewire-module-link-factory": null,
"libpipewire-module-session-manager": null
},
"context.objects": {
"spa-node-factory": {
"args": {
"factory.name": "support.node.driver",
"node.name": "Dummy-Driver",
"priority.driver": 8000
}
}
},
"context.exec": {}
}
@@ -18,11 +18,53 @@ let
      ln -s "${cfg.package.jack}/lib" "$out/lib/pipewire"
    '';
prioritizeNativeProtocol = {
"context.modules" = {
# Most other modules depend on this, so put it first
"libpipewire-module-protocol-native" = {
_priority = -100;
_content = null;
};
};
};
fixDaemonModulePriorities = {
"context.modules" = {
      # Most other modules depend on this, so put it first
"libpipewire-module-protocol-native" = {
_priority = -100;
_content = null;
};
# Needs to be before libpipewire-module-access
"libpipewire-module-portal" = {
_priority = -50;
_content = {
flags = [
"ifexists"
"nofail"
];
};
};
};
};
# Use upstream config files passed through spa-json-dump as the base
# Patched here as necessary for them to work with this module
defaults = {
client = recursiveUpdate (builtins.fromJSON (builtins.readFile ./client.conf.json)) prioritizeNativeProtocol;
client-rt = recursiveUpdate (builtins.fromJSON (builtins.readFile ./client-rt.conf.json)) prioritizeNativeProtocol;
jack = recursiveUpdate (builtins.fromJSON (builtins.readFile ./jack.conf.json)) prioritizeNativeProtocol;
  # Remove the session manager invocation from the upstream generated file; it points to the wrong path
pipewire = recursiveUpdate (builtins.fromJSON (builtins.readFile ./pipewire.conf.json)) fixDaemonModulePriorities;
pipewire-pulse = recursiveUpdate (builtins.fromJSON (builtins.readFile ./pipewire-pulse.conf.json)) prioritizeNativeProtocol;
};
  # Helpers for generating the pipewire JSON config file
  mkSPAValueString = v:
    if builtins.isList v then "[${lib.concatMapStringsSep " " mkSPAValueString v}]"
    else if lib.types.attrs.check v then
    "{${lib.concatStringsSep " " (mkSPAKeyValue v)}}"
    else if builtins.isString v then "\"${lib.generators.mkValueStringDefault { } v}\""
    else lib.generators.mkValueStringDefault { } v;

  mkSPAKeyValue = attrs: map (def: def.content) (
@@ -64,131 +106,53 @@ in {
      '';
    };
    config = {
      client = mkOption {
        type = types.attrs;
        default = {};
        description = ''
          Configuration for pipewire clients. For details see
          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/client.conf.in
        '';
      };

    config = mkOption {
      type = types.attrs;
      description = ''
        Configuration for the pipewire daemon.
      '';
default = {
properties = {
## set-prop is used to configure properties in the system
#
# "library.name.system" = "support/libspa-support";
# "context.data-loop.library.name.system" = "support/libspa-support";
"link.max-buffers" = 16; # version < 3 clients can't handle more than 16
#"mem.allow-mlock" = false;
#"mem.mlock-all" = true;
## https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/pipewire/pipewire.h#L93
#"log.level" = 2; # 5 is trace, which is verbose as hell, default is 2 which is warnings, 4 is debug output, 3 is info
## Properties for the DSP configuration
#
#"default.clock.rate" = 48000;
#"default.clock.quantum" = 1024;
#"default.clock.min-quantum" = 32;
#"default.clock.max-quantum" = 8192;
#"default.video.width" = 640;
#"default.video.height" = 480;
#"default.video.rate.num" = 25;
#"default.video.rate.denom" = 1;
};
spa-libs = {
## add-spa-lib <factory-name regex> <library-name>
#
# used to find spa factory names. It maps an spa factory name
# regular expression to a library name that should contain
# that factory.
#
"audio.convert*" = "audioconvert/libspa-audioconvert";
"api.alsa.*" = "alsa/libspa-alsa";
"api.v4l2.*" = "v4l2/libspa-v4l2";
"api.libcamera.*" = "libcamera/libspa-libcamera";
"api.bluez5.*" = "bluez5/libspa-bluez5";
"api.vulkan.*" = "vulkan/libspa-vulkan";
"api.jack.*" = "jack/libspa-jack";
"support.*" = "support/libspa-support";
# "videotestsrc" = "videotestsrc/libspa-videotestsrc";
# "audiotestsrc" = "audiotestsrc/libspa-audiotestsrc";
};
modules = {
## <module-name> = { [args = "<key>=<value> ..."]
# [flags = ifexists] }
# [flags = [ifexists]|[nofail]}
#
# Loads a module with the given parameters.
          # If ifexists is given, the module is ignored when it is not found.
# If nofail is given, module initialization failures are ignored.
#
libpipewire-module-rtkit = {
args = {
#rt.prio = 20;
#rt.time.soft = 200000;
#rt.time.hard = 200000;
#nice.level = -11;
};
flags = "ifexists|nofail";
};
libpipewire-module-protocol-native = { _priority = -100; _content = "null"; };
libpipewire-module-profiler = "null";
libpipewire-module-metadata = "null";
libpipewire-module-spa-device-factory = "null";
libpipewire-module-spa-node-factory = "null";
libpipewire-module-client-node = "null";
libpipewire-module-client-device = "null";
libpipewire-module-portal = "null";
libpipewire-module-access = {
args.access = {
allowed = ["${builtins.unsafeDiscardStringContext cfg.sessionManagerExecutable}"];
rejected = [];
restricted = [];
force = "flatpak";
};
};
libpipewire-module-adapter = "null";
libpipewire-module-link-factory = "null";
libpipewire-module-session-manager = "null";
};
objects = {
## create-object [-nofail] <factory-name> [<key>=<value> ...]
#
# Creates an object from a PipeWire factory with the given parameters.
# If -nofail is given, errors are ignored (and no object is created)
#
};
exec = {
## exec <program-name>
#
# Execute the given program. This is usually used to start the
# session manager. run the session manager with -h for options
#
"${builtins.unsafeDiscardStringContext cfg.sessionManagerExecutable}" = { args = "\"${lib.concatStringsSep " " cfg.sessionManagerArguments}\""; };
};
};
};
sessionManagerExecutable = mkOption {
type = types.str;
default = "";
example = literalExample ''${pkgs.pipewire.mediaSession}/bin/pipewire-media-session'';
description = ''
Path to the session manager executable.
      '';
    };

    sessionManagerArguments = mkOption {
      type = types.listOf types.str;
      default = [];
      example = literalExample ''["-p" "bluez5.msbc-support=true"]'';
      description = ''
        Arguments passed to the pipewire session manager.
      '';
    };

      client-rt = mkOption {
        type = types.attrs;
        default = {};
        description = ''
          Configuration for realtime pipewire clients. For details see
          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/client-rt.conf.in
        '';
      };
jack = mkOption {
type = types.attrs;
default = {};
description = ''
Configuration for the pipewire daemon's jack module. For details see
https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/jack.conf.in
'';
};
pipewire = mkOption {
type = types.attrs;
default = {};
description = ''
Configuration for the pipewire daemon. For details see
https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/pipewire.conf.in
'';
};
pipewire-pulse = mkOption {
type = types.attrs;
default = {};
description = ''
Configuration for the pipewire-pulse daemon. For details see
https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/pipewire-pulse.conf.in
'';
};
};
    alsa = {
      enable = mkEnableOption "ALSA support";
      support32Bit = mkEnableOption "32-bit ALSA support on 64-bit systems";
@@ -253,13 +217,16 @@ in {
      source = "${cfg.package}/share/alsa/alsa.conf.d/99-pipewire-default.conf";
    };
environment.etc."pipewire/client.conf" = { text = toSPAJSON (recursiveUpdate defaults.client cfg.config.client); };
environment.etc."pipewire/client-rt.conf" = { text = toSPAJSON (recursiveUpdate defaults.client-rt cfg.config.client-rt); };
environment.etc."pipewire/jack.conf" = { text = toSPAJSON (recursiveUpdate defaults.jack cfg.config.jack); };
environment.etc."pipewire/pipewire.conf" = { text = toSPAJSON (recursiveUpdate defaults.pipewire cfg.config.pipewire); };
environment.etc."pipewire/pipewire-pulse.conf" = { text = toSPAJSON (recursiveUpdate defaults.pipewire-pulse cfg.config.pipewire-pulse); };
    environment.sessionVariables.LD_LIBRARY_PATH =
      lib.optional cfg.jack.enable "/run/current-system/sw/lib/pipewire";

    # https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/464#note_723554
    systemd.user.services.pipewire.environment = {
      "PIPEWIRE_LINK_PASSIVE" = "1";
      "PIPEWIRE_CONFIG_FILE" = pkgs.writeText "pipewire.conf" (toSPAJSON cfg.config);
    };
    systemd.user.services.pipewire.environment."PIPEWIRE_LINK_PASSIVE" = "1";
  };
}
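A brief sketch of the restructured `config` options (values illustrative, not defaults):

```nix
services.pipewire = {
  enable = true;
  # Merged over the upstream-derived defaults via recursiveUpdate.
  config.pipewire = {
    "context.properties" = {
      "log.level" = 3;
    };
  };
};
```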
@@ -0,0 +1,30 @@
{
"properties": {},
"rules": [
{
"matches": [
{
"device.name": "~v4l2_device.*"
}
],
"actions": {
"update-props": {}
}
},
{
"matches": [
{
"node.name": "~v4l2_input.*"
},
{
"node.name": "~v4l2_output.*"
}
],
"actions": {
"update-props": {
"node.pause-on-idle": false
}
}
}
]
}
@@ -4,7 +4,7 @@ with lib;
let
  cfg = config.services.minetest-server;
  flag = val: name: if val != null then "--${name} ${val} " else "";
  flag = val: name: if val != null then "--${name} ${toString val} " else "";
  flags = [
    (flag cfg.gameId "gameid")
    (flag cfg.world "world")
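The added `toString` matters because some of these option values are integers, and Nix refuses to interpolate an integer into a string. A REPL-style sketch:

```nix
let
  flag = val: name: if val != null then "--${name} ${toString val} " else "";
in
  # "--${name} ${30000}" would abort with "cannot coerce an integer to
  # a string"; toString makes the interpolation well-typed.
  flag 30000 "port"  # => "--port 30000 "
```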
@@ -3,21 +3,22 @@

with lib;

let
  cfg = config.services.acpid;

  canonicalHandlers = {
    powerEvent = {
      event = "button/power.*";
      action = config.services.acpid.powerEventCommands;
      action = cfg.powerEventCommands;
    };

    lidEvent = {
      event = "button/lid.*";
      action = config.services.acpid.lidEventCommands;
      action = cfg.lidEventCommands;
    };

    acEvent = {
      event = "ac_adapter.*";
      action = config.services.acpid.acEventCommands;
      action = cfg.acEventCommands;
    };
  };
@@ -33,7 +34,7 @@ let
          echo "event=${handler.event}" > $fn
          echo "action=${pkgs.writeShellScriptBin "${name}.sh" handler.action }/bin/${name}.sh '%e'" >> $fn
        '';
      in concatStringsSep "\n" (mapAttrsToList f (canonicalHandlers // config.services.acpid.handlers))
      in concatStringsSep "\n" (mapAttrsToList f (canonicalHandlers // cfg.handlers))
    }
  '';
@@ -47,11 +48,7 @@ in

    services.acpid = {

      enable = mkOption {
        type = types.bool;
        default = false;
        description = "Whether to enable the ACPI daemon.";
      };
      enable = mkEnableOption "the ACPI daemon";

      logEvents = mkOption {
        type = types.bool;
@@ -129,26 +126,28 @@ in

  ###### implementation

  config = mkIf config.services.acpid.enable {
  config = mkIf cfg.enable {

    systemd.services.acpid = {
      description = "ACPI Daemon";
      documentation = [ "man:acpid(8)" ];

      wantedBy = [ "multi-user.target" ];
      after = [ "systemd-udev-settle.service" ];
      path = [ pkgs.acpid ];

      serviceConfig = {
        Type = "forking";
        ExecStart = escapeShellArgs
          ([ "${pkgs.acpid}/bin/acpid"
             "--foreground"
             "--netlink"
             "--confdir" "${acpiConfDir}"
           ] ++ optional cfg.logEvents "--logevents"
          );
      };
      unitConfig = {
        ConditionVirtualization = "!systemd-nspawn";
        ConditionPathExists = [ "/proc/acpi" ];
      };

      script = "acpid ${optionalString config.services.acpid.logEvents "--logevents"} --confdir ${acpiConfDir}";
    };
  };
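A usage sketch for the handler plumbing above (event pattern and action are illustrative):

```nix
services.acpid = {
  enable = true;
  handlers.headphoneJack = {
    # Event patterns are regexes matched against the ACPI event line,
    # which is passed to the generated script as '%e' (i.e. "$1").
    event = "jack/headphone.*";
    action = ''
      echo "headphone event: $1" >> /var/log/acpi-jack.log
    '';
  };
};
```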
@@ -0,0 +1,26 @@
{ config, lib, pkgs, ... }:
with lib;
let cfg = config.hardware.spacenavd;
in {
options = {
hardware.spacenavd = {
      enable = mkEnableOption "spacenavd to support 3Dconnexion devices";
};
};
config = mkIf cfg.enable {
systemd.user.services.spacenavd = {
description = "Daemon for the Spacenavigator 6DOF mice by 3Dconnexion";
after = [ "syslog.target" ];
wantedBy = [ "graphical.target" ];
serviceConfig = {
ExecStart = "${pkgs.spacenavd}/bin/spacenavd -d -l syslog";
StandardError = "syslog";
};
};
};
}
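Enabling it from a configuration is a one-liner:

```nix
hardware.spacenavd.enable = true;
```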
@@ -48,7 +48,7 @@ in {
    systemd.services.trezord = {
      description = "Trezor Bridge";
      after = [ "systemd-udev-settle.service" "network.target" ];
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      path = [];
      serviceConfig = {
@@ -1,69 +0,0 @@
worker_processes 3
listen ENV["UNICORN_PATH"] + "/tmp/sockets/gitlab.socket", :backlog => 1024
listen "/run/gitlab/gitlab.socket", :backlog => 1024
working_directory ENV["GITLAB_PATH"]
pid ENV["UNICORN_PATH"] + "/tmp/pids/unicorn.pid"
timeout 60
# combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings
# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
GC.copy_on_write_friendly = true
check_client_connection false
before_fork do |server, worker|
# the following is highly recommended for Rails + "preload_app true"
# as there's no need for the master process to hold a connection
defined?(ActiveRecord::Base) and
ActiveRecord::Base.connection.disconnect!
# The following is only recommended for memory/DB-constrained
# installations. It is not needed if your system can house
# twice as many worker_processes as you have configured.
#
# This allows a new master process to incrementally
# phase out the old master process with SIGTTOU to avoid a
# thundering herd (especially in the "preload_app false" case)
# when doing a transparent upgrade. The last worker spawned
# will then kill off the old master process with a SIGQUIT.
old_pid = "#{server.config[:pid]}.oldbin"
if old_pid != server.pid
begin
sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
Process.kill(sig, File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
end
end
# Throttle the master from forking too quickly by sleeping. Due
# to the implementation of standard Unix signal handlers, this
# helps (but does not completely) prevent identical, repeated signals
# from being lost when the receiving process is busy.
# sleep 1
end
after_fork do |server, worker|
# per-process listener ports for debugging/admin/migrations
# addr = "127.0.0.1:#{9293 + worker.nr}"
# server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true)
# the following is *required* for Rails + "preload_app true",
defined?(ActiveRecord::Base) and
ActiveRecord::Base.establish_connection
# reset prometheus client, this will cause any opened metrics files to be closed
defined?(::Prometheus::Client.reinitialize_on_pid_change) &&
Prometheus::Client.reinitialize_on_pid_change
# if preload_app is true, then you may also want to check and
# restart any other shared sockets/descriptors such as Memcached,
# and Redis. TokyoCabinet file handles are safe to reuse
# between any number of forked children (assuming your kernel
# correctly implements pread()/pwrite() system calls)
end
@@ -142,7 +142,7 @@ let

  gitlabEnv = {
    HOME = "${cfg.statePath}/home";
    UNICORN_PATH = "${cfg.statePath}/";
    PUMA_PATH = "${cfg.statePath}/";
    GITLAB_PATH = "${cfg.packages.gitlab}/share/gitlab/";
    SCHEMA = "${cfg.statePath}/db/structure.sql";
    GITLAB_UPLOADS_PATH = "${cfg.statePath}/uploads";
@@ -424,7 +424,7 @@ in {

      port = mkOption {
        type = types.int;
        default = 465;
        default = 25;
        description = "Port of the SMTP server for Gitlab.";
      };
@@ -641,6 +641,11 @@ in {

    environment.systemPackages = [ pkgs.git gitlab-rake gitlab-rails cfg.packages.gitlab-shell ];

    systemd.targets.gitlab = {
      description = "Common target for all GitLab services.";
      wantedBy = [ "multi-user.target" ];
    };

    # Redis is required for the sidekiq queue runner.
    services.redis.enable = mkDefault true;
@@ -655,36 +660,45 @@ in {
    # here.
    systemd.services.gitlab-postgresql = let pgsql = config.services.postgresql; in mkIf databaseActuallyCreateLocally {
      after = [ "postgresql.service" ];
      wantedBy = [ "multi-user.target" ];
      path = [ pgsql.package ];
      bindsTo = [ "postgresql.service" ];
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];
      path = [
        pgsql.package
        pkgs.util-linux
      ];
      script = ''
        set -eu

        PSQL="${pkgs.util-linux}/bin/runuser -u ${pgsql.superUser} -- psql --port=${toString pgsql.port}"
        PSQL() {
          psql --port=${toString pgsql.port} "$@"
        }

        $PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${cfg.databaseName}'" | grep -q 1 || $PSQL -tAc 'CREATE DATABASE "${cfg.databaseName}" OWNER "${cfg.databaseUsername}"'
        PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${cfg.databaseName}'" | grep -q 1 || PSQL -tAc 'CREATE DATABASE "${cfg.databaseName}" OWNER "${cfg.databaseUsername}"'
        current_owner=$($PSQL -tAc "SELECT pg_catalog.pg_get_userbyid(datdba) FROM pg_catalog.pg_database WHERE datname = '${cfg.databaseName}'")
        current_owner=$(PSQL -tAc "SELECT pg_catalog.pg_get_userbyid(datdba) FROM pg_catalog.pg_database WHERE datname = '${cfg.databaseName}'")
        if [[ "$current_owner" != "${cfg.databaseUsername}" ]]; then
          $PSQL -tAc 'ALTER DATABASE "${cfg.databaseName}" OWNER TO "${cfg.databaseUsername}"'
          PSQL -tAc 'ALTER DATABASE "${cfg.databaseName}" OWNER TO "${cfg.databaseUsername}"'
          if [[ -e "${config.services.postgresql.dataDir}/.reassigning_${cfg.databaseName}" ]]; then
            echo "Reassigning ownership of database ${cfg.databaseName} to user ${cfg.databaseUsername} failed on last boot. Failing..."
            exit 1
          fi
          touch "${config.services.postgresql.dataDir}/.reassigning_${cfg.databaseName}"
          $PSQL "${cfg.databaseName}" -tAc "REASSIGN OWNED BY \"$current_owner\" TO \"${cfg.databaseUsername}\""
          PSQL "${cfg.databaseName}" -tAc "REASSIGN OWNED BY \"$current_owner\" TO \"${cfg.databaseUsername}\""
          rm "${config.services.postgresql.dataDir}/.reassigning_${cfg.databaseName}"
        fi
        $PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS pg_trgm"
        PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS pg_trgm"
        $PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS btree_gist;"
        PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS btree_gist;"
      '';
      serviceConfig = {
        User = pgsql.superUser;
        Type = "oneshot";
        RemainAfterExit = true;
      };
    };

    # Use postfix to send out mails.
    services.postfix.enable = mkDefault true;
    services.postfix.enable = mkDefault (cfg.smtp.enable && cfg.smtp.address == "localhost");

    users.users.${cfg.user} =
      { group = cfg.group;
@@ -703,7 +717,6 @@ in {
      "d ${cfg.statePath} 0750 ${cfg.user} ${cfg.group} -"
      "d ${cfg.statePath}/builds 0750 ${cfg.user} ${cfg.group} -"
      "d ${cfg.statePath}/config 0750 ${cfg.user} ${cfg.group} -"
      "d ${cfg.statePath}/config/initializers 0750 ${cfg.user} ${cfg.group} -"
      "d ${cfg.statePath}/db 0750 ${cfg.user} ${cfg.group} -"
      "d ${cfg.statePath}/log 0750 ${cfg.user} ${cfg.group} -"
      "d ${cfg.statePath}/repositories 2770 ${cfg.user} ${cfg.group} -"
@@ -726,13 +739,156 @@ in {
      "L+ /run/gitlab/uploads - - - - ${cfg.statePath}/uploads"

      "L+ /run/gitlab/shell-config.yml - - - - ${pkgs.writeText "config.yml" (builtins.toJSON gitlabShellConfig)}"

      "L+ ${cfg.statePath}/config/unicorn.rb - - - - ${./defaultUnicornConfig.rb}"
    ];
systemd.services.gitlab-config = {
wantedBy = [ "gitlab.target" ];
partOf = [ "gitlab.target" ];
path = with pkgs; [
jq
openssl
replace
git
];
serviceConfig = {
Type = "oneshot";
User = cfg.user;
Group = cfg.group;
TimeoutSec = "infinity";
Restart = "on-failure";
WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
RemainAfterExit = true;
ExecStartPre = let
preStartFullPrivileges = ''
shopt -s dotglob nullglob
set -eu
chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/*
if [[ -n "$(ls -A '${cfg.statePath}'/config/)" ]]; then
chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/config/*
fi
'';
in "+${pkgs.writeShellScript "gitlab-pre-start-full-privileges" preStartFullPrivileges}";
ExecStart = pkgs.writeShellScript "gitlab-config" ''
set -eu
umask u=rwx,g=rx,o=
cp -f ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
rm -rf ${cfg.statePath}/db/*
rm -f ${cfg.statePath}/lib
find '${cfg.statePath}/config/' -maxdepth 1 -mindepth 1 -type d -execdir rm -rf {} \;
cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/db/* ${cfg.statePath}/db
ln -sf ${extraGitlabRb} ${cfg.statePath}/config/initializers/extra-gitlab.rb
${cfg.packages.gitlab-shell}/bin/install
${optionalString cfg.smtp.enable ''
install -m u=rw ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
${optionalString (cfg.smtp.passwordFile != null) ''
smtp_password=$(<'${cfg.smtp.passwordFile}')
replace-literal -e '@smtpPassword@' "$smtp_password" '${cfg.statePath}/config/initializers/smtp_settings.rb'
''}
''}
(
umask u=rwx,g=,o=
openssl rand -hex 32 > ${cfg.statePath}/gitlab_shell_secret
rm -f '${cfg.statePath}/config/database.yml'
${if cfg.databasePasswordFile != null then ''
export db_password="$(<'${cfg.databasePasswordFile}')"
if [[ -z "$db_password" ]]; then
>&2 echo "Database password was an empty string!"
exit 1
fi
jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
'.production.password = $ENV.db_password' \
>'${cfg.statePath}/config/database.yml'
''
else ''
jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
>'${cfg.statePath}/config/database.yml'
''
}
${utils.genJqSecretsReplacementSnippet
gitlabConfig
"${cfg.statePath}/config/gitlab.yml"
}
rm -f '${cfg.statePath}/config/secrets.yml'
export secret="$(<'${cfg.secrets.secretFile}')"
export db="$(<'${cfg.secrets.dbFile}')"
export otp="$(<'${cfg.secrets.otpFile}')"
export jws="$(<'${cfg.secrets.jwsFile}')"
jq -n '{production: {secret_key_base: $ENV.secret,
otp_key_base: $ENV.otp,
db_key_base: $ENV.db,
openid_connect_signing_key: $ENV.jws}}' \
> '${cfg.statePath}/config/secrets.yml'
)
# We remove potentially broken links to old gitlab-shell versions
rm -Rf ${cfg.statePath}/repositories/**/*.git/hooks
git config --global core.autocrlf "input"
'';
};
};
systemd.services.gitlab-db-config = {
after = [ "gitlab-config.service" "gitlab-postgresql.service" "postgresql.service" ];
bindsTo = [
"gitlab-config.service"
] ++ optional (cfg.databaseHost == "") "postgresql.service"
++ optional databaseActuallyCreateLocally "gitlab-postgresql.service";
wantedBy = [ "gitlab.target" ];
partOf = [ "gitlab.target" ];
serviceConfig = {
Type = "oneshot";
User = cfg.user;
Group = cfg.group;
TimeoutSec = "infinity";
Restart = "on-failure";
WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
RemainAfterExit = true;
ExecStart = pkgs.writeShellScript "gitlab-db-config" ''
set -eu
umask u=rwx,g=rx,o=
initial_root_password="$(<'${cfg.initialRootPasswordFile}')"
${gitlab-rake}/bin/gitlab-rake gitlab:db:configure GITLAB_ROOT_PASSWORD="$initial_root_password" \
GITLAB_ROOT_EMAIL='${cfg.initialRootEmail}' > /dev/null
'';
};
};
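With configuration and database setup factored into these oneshot units and grouped under `gitlab.target`, auxiliary units can join the same lifecycle. A sketch (the service name and schedule-free invocation are hypothetical):

```nix
systemd.services.gitlab-nightly-backup = {
  wantedBy = [ "gitlab.target" ];
  partOf = [ "gitlab.target" ];
  after = [ "gitlab.service" ];
  serviceConfig = {
    Type = "oneshot";
    ExecStart = "${gitlab-rake}/bin/gitlab-rake gitlab:backup:create";
  };
};
```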
    systemd.services.gitlab-sidekiq = {
      after = [ "network.target" "redis.service" "gitlab.service" ];
      wantedBy = [ "multi-user.target" ];
      after = [
        "network.target"
        "redis.service"
        "postgresql.service"
        "gitlab-config.service"
        "gitlab-db-config.service"
      ];
      bindsTo = [
        "redis.service"
        "gitlab-config.service"
        "gitlab-db-config.service"
      ] ++ optional (cfg.databaseHost == "") "postgresql.service";
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];
      environment = gitlabEnv;
      path = with pkgs; [
        postgresqlPackage
@@ -758,9 +914,10 @@ in {
    };

    systemd.services.gitaly = {
      after = [ "network.target" "gitlab.service" ];
      bindsTo = [ "gitlab.service" ];
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" "gitlab-config.service" ];
      bindsTo = [ "gitlab-config.service" ];
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];
      path = with pkgs; [
        openssh
        procps # See https://gitlab.com/gitlab-org/gitaly/issues/1562
@@ -783,8 +940,10 @@ in {

    systemd.services.gitlab-pages = mkIf (gitlabConfig.production.pages.enabled or false) {
      description = "GitLab static pages daemon";
      after = [ "network.target" "redis.service" "gitlab.service" ]; # gitlab.service creates configs
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" "gitlab-config.service" ];
      bindsTo = [ "gitlab-config.service" ];
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];

      path = [ pkgs.unzip ];
@@ -803,7 +962,8 @@ in {

    systemd.services.gitlab-workhorse = {
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];
      path = with pkgs; [
        exiftool
        git
@@ -832,8 +992,10 @@ in {

    systemd.services.gitlab-mailroom = mkIf (gitlabConfig.production.incoming_email.enabled or false) {
      description = "GitLab incoming mail daemon";
      after = [ "network.target" "redis.service" "gitlab.service" ]; # gitlab.service creates configs
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" "redis.service" "gitlab-config.service" ];
      bindsTo = [ "gitlab-config.service" ];
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];
      environment = gitlabEnv;
      serviceConfig = {
        Type = "simple";
@@ -842,15 +1004,26 @@ in {
        User = cfg.user;
        Group = cfg.group;
        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/bundle exec mail_room -c ${cfg.packages.gitlab}/share/gitlab/config.dist/mail_room.yml";
        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/bundle exec mail_room -c ${cfg.statePath}/config/mail_room.yml";
        WorkingDirectory = gitlabEnv.HOME;
      };
    };

    systemd.services.gitlab = {
      after = [ "gitlab-workhorse.service" "network.target" "gitlab-postgresql.service" "redis.service" ];
      requires = [ "gitlab-sidekiq.service" ];
      wantedBy = [ "multi-user.target" ];
      after = [
        "gitlab-workhorse.service"
        "network.target"
        "redis.service"
        "gitlab-config.service"
        "gitlab-db-config.service"
      ];
      bindsTo = [
        "redis.service"
        "gitlab-config.service"
        "gitlab-db-config.service"
      ] ++ optional (cfg.databaseHost == "") "postgresql.service";
      wantedBy = [ "gitlab.target" ];
      partOf = [ "gitlab.target" ];
      environment = gitlabEnv;
      path = with pkgs; [
        postgresqlPackage
@@ -868,96 +1041,7 @@ in {
        TimeoutSec = "infinity";
        Restart = "on-failure";
        WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
        ExecStartPre = let
        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/puma -C ${cfg.statePath}/config/puma.rb -e production";
preStartFullPrivileges = ''
shopt -s dotglob nullglob
set -eu
chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/*
chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/config/*
'';
preStart = ''
set -eu
cp -f ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
rm -rf ${cfg.statePath}/db/*
rm -rf ${cfg.statePath}/config/initializers/*
rm -f ${cfg.statePath}/lib
cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/db/* ${cfg.statePath}/db
ln -sf ${extraGitlabRb} ${cfg.statePath}/config/initializers/extra-gitlab.rb
${cfg.packages.gitlab-shell}/bin/install
${optionalString cfg.smtp.enable ''
install -m u=rw ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
${optionalString (cfg.smtp.passwordFile != null) ''
smtp_password=$(<'${cfg.smtp.passwordFile}')
${pkgs.replace}/bin/replace-literal -e '@smtpPassword@' "$smtp_password" '${cfg.statePath}/config/initializers/smtp_settings.rb'
''}
''}
(
umask u=rwx,g=,o=
${pkgs.openssl}/bin/openssl rand -hex 32 > ${cfg.statePath}/gitlab_shell_secret
if [[ -h '${cfg.statePath}/config/database.yml' ]]; then
rm '${cfg.statePath}/config/database.yml'
fi
${if cfg.databasePasswordFile != null then ''
export db_password="$(<'${cfg.databasePasswordFile}')"
if [[ -z "$db_password" ]]; then
>&2 echo "Database password was an empty string!"
exit 1
fi
${pkgs.jq}/bin/jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
'.production.password = $ENV.db_password' \
>'${cfg.statePath}/config/database.yml'
''
else ''
${pkgs.jq}/bin/jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
>'${cfg.statePath}/config/database.yml'
''
}
${utils.genJqSecretsReplacementSnippet
gitlabConfig
"${cfg.statePath}/config/gitlab.yml"
}
if [[ -h '${cfg.statePath}/config/secrets.yml' ]]; then
rm '${cfg.statePath}/config/secrets.yml'
fi
export secret="$(<'${cfg.secrets.secretFile}')"
export db="$(<'${cfg.secrets.dbFile}')"
export otp="$(<'${cfg.secrets.otpFile}')"
export jws="$(<'${cfg.secrets.jwsFile}')"
${pkgs.jq}/bin/jq -n '{production: {secret_key_base: $ENV.secret,
otp_key_base: $ENV.otp,
db_key_base: $ENV.db,
openid_connect_signing_key: $ENV.jws}}' \
> '${cfg.statePath}/config/secrets.yml'
)
initial_root_password="$(<'${cfg.initialRootPasswordFile}')"
${gitlab-rake}/bin/gitlab-rake gitlab:db:configure GITLAB_ROOT_PASSWORD="$initial_root_password" \
GITLAB_ROOT_EMAIL='${cfg.initialRootEmail}' > /dev/null
# We remove potentially broken links to old gitlab-shell versions
rm -Rf ${cfg.statePath}/repositories/**/*.git/hooks
${pkgs.git}/bin/git config --global core.autocrlf "input"
'';
in [
"+${pkgs.writeShellScript "gitlab-pre-start-full-privileges" preStartFullPrivileges}"
"${pkgs.writeShellScript "gitlab-pre-start" preStart}"
];
ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/unicorn -c ${cfg.statePath}/config/unicorn.rb -E production";
      };
    };
@@ -115,4 +115,6 @@ in
      };
    };
  };

  meta.maintainers = with lib.maintainers; [ erictapen ];
}
@@ -183,8 +183,14 @@ in {
    };

    package = mkOption {
      default = pkgs.home-assistant;
      defaultText = "pkgs.home-assistant";
      default = pkgs.home-assistant.overrideAttrs (oldAttrs: {
        doInstallCheck = false;
      });
      defaultText = literalExample ''
        pkgs.home-assistant.overrideAttrs (oldAttrs: {
          doInstallCheck = false;
        })
      '';
      type = types.package;
      example = literalExample ''
        pkgs.home-assistant.override {
@@ -192,7 +198,7 @@ in {
        }
      '';
      description = ''
        Home Assistant package to use.
        Home Assistant package to use. By default the tests are disabled, as they take a considerable amount of time to complete.
        Override <literal>extraPackages</literal> or <literal>extraComponents</literal> in order to add additional dependencies.
        If you specify <option>config</option> and do not set <option>autoExtraComponents</option>
        to <literal>false</literal>, overriding <literal>extraComponents</literal> will have no effect.
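One consequence of the new default: a plain `override` of the package re-enables the slow install check, so an override that preserves the new behaviour looks like this (a sketch; `psycopg2` is an arbitrary example dependency):

```nix
services.home-assistant.package =
  (pkgs.home-assistant.override {
    extraPackages = ps: with ps; [ psycopg2 ];
  }).overrideAttrs (oldAttrs: {
    # Keep the tests disabled, matching the module's new default.
    doInstallCheck = false;
  });
```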
@@ -0,0 +1,164 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.lifecycled;
  # TODO: Add the ability to extend this with an RFC 42-like interface.
# In the meantime, one can modify the environment (as
# long as it's not overriding anything from here) with
# systemd.services.lifecycled.serviceConfig.Environment
configFile = pkgs.writeText "lifecycled" ''
LIFECYCLED_HANDLER=${cfg.handler}
${lib.optionalString (cfg.cloudwatchGroup != null) "LIFECYCLED_CLOUDWATCH_GROUP=${cfg.cloudwatchGroup}"}
${lib.optionalString (cfg.cloudwatchStream != null) "LIFECYCLED_CLOUDWATCH_STREAM=${cfg.cloudwatchStream}"}
${lib.optionalString cfg.debug "LIFECYCLED_DEBUG=${lib.boolToString cfg.debug}"}
${lib.optionalString (cfg.instanceId != null) "LIFECYCLED_INSTANCE_ID=${cfg.instanceId}"}
${lib.optionalString cfg.json "LIFECYCLED_JSON=${lib.boolToString cfg.json}"}
${lib.optionalString cfg.noSpot "LIFECYCLED_NO_SPOT=${lib.boolToString cfg.noSpot}"}
${lib.optionalString (cfg.snsTopic != null) "LIFECYCLED_SNS_TOPIC=${cfg.snsTopic}"}
${lib.optionalString (cfg.awsRegion != null) "AWS_REGION=${cfg.awsRegion}"}
'';
in
{
meta.maintainers = with maintainers; [ cole-h grahamc ];
options = {
services.lifecycled = {
enable = mkEnableOption "lifecycled";
queueCleaner = {
enable = mkEnableOption "lifecycled-queue-cleaner";
frequency = mkOption {
type = types.str;
default = "hourly";
description = ''
How often to trigger the queue cleaner.
NOTE: This string should be a valid value for a systemd
timer's <literal>OnCalendar</literal> configuration. See
<citerefentry><refentrytitle>systemd.timer</refentrytitle><manvolnum>5</manvolnum></citerefentry>
for more information.
'';
};
parallel = mkOption {
type = types.ints.unsigned;
default = 20;
description = ''
The number of parallel deletes to run.
'';
};
};
instanceId = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The instance ID to listen for events for.
'';
};
snsTopic = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The SNS topic that receives events.
'';
};
noSpot = mkOption {
type = types.bool;
default = false;
description = ''
Disable the spot termination listener.
'';
};
handler = mkOption {
type = types.path;
description = ''
The script to invoke to handle events.
'';
};
json = mkOption {
type = types.bool;
default = false;
description = ''
Enable JSON logging.
'';
};
cloudwatchGroup = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Write logs to a specific Cloudwatch Logs group.
'';
};
cloudwatchStream = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Write logs to a specific Cloudwatch Logs stream. Defaults to the instance ID.
'';
};
debug = mkOption {
type = types.bool;
default = false;
description = ''
Enable debugging information.
'';
};
# XXX: Can be removed if / when
# https://github.com/buildkite/lifecycled/pull/91 is merged.
awsRegion = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
The region used for accessing AWS services.
'';
};
};
};
### Implementation ###
config = mkMerge [
(mkIf cfg.enable {
environment.etc."lifecycled".source = configFile;
systemd.packages = [ pkgs.lifecycled ];
systemd.services.lifecycled = {
wantedBy = [ "network-online.target" ];
restartTriggers = [ configFile ];
};
})
(mkIf cfg.queueCleaner.enable {
systemd.services.lifecycled-queue-cleaner = {
description = "Lifecycle Daemon Queue Cleaner";
environment = optionalAttrs (cfg.awsRegion != null) { AWS_REGION = cfg.awsRegion; };
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.lifecycled}/bin/lifecycled-queue-cleaner -parallel ${toString cfg.queueCleaner.parallel}";
};
};
systemd.timers.lifecycled-queue-cleaner = {
description = "Lifecycle Daemon Queue Cleaner Timer";
wantedBy = [ "timers.target" ];
after = [ "network-online.target" ];
timerConfig = {
Unit = "lifecycled-queue-cleaner.service";
OnCalendar = "${cfg.queueCleaner.frequency}";
};
};
})
];
}
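A usage sketch for this module (the handler script and SNS topic ARN are hypothetical):

```nix
services.lifecycled = {
  enable = true;
  # Hypothetical handler: log the termination notice before shutdown.
  handler = pkgs.writeShellScript "lifecycle-handler" ''
    echo "lifecycle event: $*" | systemd-cat -t lifecycled-handler
  '';
  snsTopic = "arn:aws:sns:us-east-1:000000000000:lifecycle";
  queueCleaner = {
    enable = true;
    frequency = "hourly";
  };
};
```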
@@ -21,13 +21,45 @@ in
      };

      dates = mkOption {
        default = "03:15";
        type = types.str;
        description = ''
          Specification (in the format described by
          <citerefentry><refentrytitle>systemd.time</refentrytitle>
          <manvolnum>7</manvolnum></citerefentry>) of the time at
          which the garbage collector will run.
        '';
      };
      dates = mkOption {
        type = types.str;
        default = "03:15";
        example = "weekly";
        description = ''
          How often or when garbage collection is performed. For most desktop and server systems
          a sufficient garbage collection is once a week.

          The format is described in
          <citerefentry><refentrytitle>systemd.time</refentrytitle>
          <manvolnum>7</manvolnum></citerefentry>.
        '';
      };
randomizedDelaySec = mkOption {
default = "0";
type = types.str;
example = "45min";
description = ''
Add a randomized delay before each automatic upgrade.
The delay will be chosen between zero and this value.
This value must be a time span in the format specified by
<citerefentry><refentrytitle>systemd.time</refentrytitle>
<manvolnum>7</manvolnum></citerefentry>
'';
};
persistent = mkOption {
default = true;
type = types.bool;
example = false;
description = ''
Takes a boolean argument. If true, the time when the service
unit was last triggered is stored on disk. When the timer is
activated, the service unit is triggered immediately if it
would have been triggered at least once during the time when
the timer was inactive. Such triggering is nonetheless
subject to the delay imposed by RandomizedDelaySec=. This is
useful to catch up on missed runs of the service when the
system was powered down.
        '';
      };
@@ -50,12 +82,19 @@ in

  config = {

    systemd.services.nix-gc = {
      description = "Nix Garbage Collector";
      script = "exec ${config.nix.package.out}/bin/nix-collect-garbage ${cfg.options}";
      startAt = optional cfg.automatic cfg.dates;
    };

    systemd.timers.nix-gc = lib.mkIf cfg.automatic {
      timerConfig = {
        RandomizedDelaySec = cfg.randomizedDelaySec;
        Persistent = cfg.persistent;
      };
    };

  };
}
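A configuration sketch exercising the new timer knobs (values illustrative; option path as in the stock `nix.gc` module):

```nix
nix.gc = {
  automatic = true;
  dates = "weekly";
  # Spread start times by up to 45 minutes across machines.
  randomizedDelaySec = "45min";
  # Catch up on runs missed while the machine was powered off.
  persistent = true;
};
```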
@@ -0,0 +1,82 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.plikd;
format = pkgs.formats.toml {};
plikdCfg = format.generate "plikd.cfg" cfg.settings;
in
{
options = {
services.plikd = {
enable = mkEnableOption "the plikd server";
openFirewall = mkOption {
type = types.bool;
default = false;
description = "Open ports in the firewall for the plikd.";
};
settings = mkOption {
type = format.type;
default = {};
description = ''
Configuration for plikd, see <link xlink:href="https://github.com/root-gg/plik/blob/master/server/plikd.cfg"/>
for supported values.
'';
};
};
};
config = mkIf cfg.enable {
services.plikd.settings = mapAttrs (name: mkDefault) {
ListenPort = 8080;
ListenAddress = "localhost";
DataBackend = "file";
DataBackendConfig = {
Directory = "/var/lib/plikd";
};
MetadataBackendConfig = {
Driver = "sqlite3";
ConnectionString = "/var/lib/plikd/plik.db";
};
};
systemd.services.plikd = {
description = "Plikd file sharing server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = "${pkgs.plikd}/bin/plikd --config ${plikdCfg}";
Restart = "on-failure";
StateDirectory = "plikd";
LogsDirectory = "plikd";
DynamicUser = true;
# Basic hardening
NoNewPrivileges = "yes";
PrivateTmp = "yes";
PrivateDevices = "yes";
DevicePolicy = "closed";
ProtectSystem = "strict";
ProtectHome = "read-only";
ProtectControlGroups = "yes";
ProtectKernelModules = "yes";
ProtectKernelTunables = "yes";
RestrictAddressFamilies = "AF_UNIX AF_INET AF_INET6 AF_NETLINK";
RestrictNamespaces = "yes";
RestrictRealtime = "yes";
RestrictSUIDSGID = "yes";
MemoryDenyWriteExecute = "yes";
LockPersonality = "yes";
};
};
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ cfg.settings.ListenPort ];
};
};
}
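A minimal usage sketch (address and port illustrative; unset settings keep the module's defaults):

```nix
services.plikd = {
  enable = true;
  openFirewall = true;
  settings = {
    ListenAddress = "0.0.0.0";
    ListenPort = 8080;
  };
};
```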
@@ -95,13 +95,13 @@ in
        ALERTA_SVR_CONF_FILE = alertaConf;
      };
      serviceConfig = {
        ExecStart = "${pkgs.python36Packages.alerta-server}/bin/alertad run --port ${toString cfg.port} --host ${cfg.bind}";
        ExecStart = "${pkgs.alerta-server}/bin/alertad run --port ${toString cfg.port} --host ${cfg.bind}";
        User = "alerta";
        Group = "alerta";
      };
    };

    environment.systemPackages = [ pkgs.python36Packages.alerta ];
    environment.systemPackages = [ pkgs.alerta ];

    users.users.alerta = {
      uid = config.ids.uids.alerta;
View file

@@ -65,10 +65,18 @@ let
  dashboardFile = pkgs.writeText "dashboard.yaml" (builtins.toJSON dashboardConfiguration);

  notifierConfiguration = {
    apiVersion = 1;
    notifiers = cfg.provision.notifiers;
  };

  notifierFile = pkgs.writeText "notifier.yaml" (builtins.toJSON notifierConfiguration);

  provisionConfDir = pkgs.runCommand "grafana-provisioning" { } ''
    mkdir -p $out/{datasources,dashboards}
    mkdir -p $out/{datasources,dashboards,notifiers}
    ln -sf ${datasourceFile} $out/datasources/datasource.yaml
    ln -sf ${dashboardFile} $out/dashboards/dashboard.yaml
    ln -sf ${notifierFile} $out/notifiers/notifier.yaml
  '';

  # Get a submodule without any embedded metadata:
@@ -79,80 +87,80 @@ let
    options = {
      name = mkOption {
        type = types.str;
        description = "Name of the datasource. Required.";
      };
      type = mkOption {
        type = types.enum ["graphite" "prometheus" "cloudwatch" "elasticsearch" "influxdb" "opentsdb" "mysql" "mssql" "postgres" "loki"];
        description = "Datasource type. Required.";
      };
      access = mkOption {
        type = types.enum ["proxy" "direct"];
        default = "proxy";
        description = "Access mode. proxy or direct (Server or Browser in the UI). Required.";
      };
      orgId = mkOption {
        type = types.int;
        default = 1;
        description = "Org id. will default to orgId 1 if not specified.";
      };
      url = mkOption {
        type = types.str;
        description = "Url of the datasource.";
      };
      password = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Database password, if used.";
      };
      user = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Database user, if used.";
      };
      database = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Database name, if used.";
      };
      basicAuth = mkOption {
        type = types.nullOr types.bool;
        default = null;
        description = "Enable/disable basic auth.";
      };
      basicAuthUser = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Basic auth username.";
      };
      basicAuthPassword = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Basic auth password.";
      };
      withCredentials = mkOption {
        type = types.bool;
        default = false;
        description = "Enable/disable with credentials headers.";
      };
      isDefault = mkOption {
        type = types.bool;
        default = false;
        description = "Mark as default datasource. Max one per org.";
      };
      jsonData = mkOption {
        type = types.nullOr types.attrs;
        default = null;
        description = "Datasource specific configuration.";
      };
      secureJsonData = mkOption {
        type = types.nullOr types.attrs;
        default = null;
        description = "Datasource specific secure configuration.";
      };
      version = mkOption {
        type = types.int;
        default = 1;
        description = "Version.";
      };
      editable = mkOption {
        type = types.bool;
@@ -168,41 +176,99 @@ let
      name = mkOption {
        type = types.str;
        default = "default";
        description = "Provider name.";
      };
      orgId = mkOption {
        type = types.int;
        default = 1;
        description = "Organization ID.";
      };
      folder = mkOption {
        type = types.str;
        default = "";
        description = "Add dashboards to the specified folder.";
      };
      type = mkOption {
        type = types.str;
        default = "file";
        description = "Dashboard provider type.";
      };
      disableDeletion = mkOption {
        type = types.bool;
        default = false;
        description = "Disable deletion when JSON file is removed.";
      };
      updateIntervalSeconds = mkOption {
        type = types.int;
        default = 10;
        description = "How often Grafana will scan for changed dashboards.";
      };
      options = {
        path = mkOption {
          type = types.path;
          description = "Path grafana will watch for dashboards.";
        };
      };
    };
  };
grafanaTypes.notifierConfig = types.submodule {
options = {
name = mkOption {
type = types.str;
default = "default";
description = "Notifier name.";
};
type = mkOption {
type = types.enum ["dingding" "discord" "email" "googlechat" "hipchat" "kafka" "line" "teams" "opsgenie" "pagerduty" "prometheus-alertmanager" "pushover" "sensu" "sensugo" "slack" "telegram" "threema" "victorops" "webhook"];
description = "Notifier type.";
};
uid = mkOption {
type = types.str;
description = "Unique notifier identifier.";
};
org_id = mkOption {
type = types.int;
default = 1;
description = "Organization ID.";
};
org_name = mkOption {
type = types.str;
default = "Main Org.";
description = "Organization name.";
};
is_default = mkOption {
type = types.bool;
description = "Is the default notifier.";
default = false;
};
send_reminder = mkOption {
type = types.bool;
default = true;
description = "Should the notifier be sent reminder notifications while alerts continue to fire.";
};
frequency = mkOption {
type = types.str;
default = "5m";
description = "How frequently should the notifier be sent reminders.";
};
disable_resolve_message = mkOption {
type = types.bool;
default = false;
description = "Turn off the message that sends when an alert returns to OK.";
};
settings = mkOption {
type = types.nullOr types.attrs;
default = null;
description = "Settings for the notifier type.";
};
secure_settings = mkOption {
type = types.nullOr types.attrs;
default = null;
description = "Secure settings for the notifier type.";
};
};
};
in { in {
options.services.grafana = { options.services.grafana = {
enable = mkEnableOption "grafana"; enable = mkEnableOption "grafana";
@ -337,17 +403,23 @@ in {
provision = { provision = {
enable = mkEnableOption "provision"; enable = mkEnableOption "provision";
datasources = mkOption { datasources = mkOption {
description = "Grafana datasources configuration"; description = "Grafana datasources configuration.";
default = []; default = [];
type = types.listOf grafanaTypes.datasourceConfig; type = types.listOf grafanaTypes.datasourceConfig;
apply = x: map _filter x; apply = x: map _filter x;
}; };
dashboards = mkOption { dashboards = mkOption {
description = "Grafana dashboard configuration"; description = "Grafana dashboard configuration.";
default = []; default = [];
type = types.listOf grafanaTypes.dashboardConfig; type = types.listOf grafanaTypes.dashboardConfig;
apply = x: map _filter x; apply = x: map _filter x;
}; };
notifiers = mkOption {
description = "Grafana notifier configuration.";
default = [];
type = types.listOf grafanaTypes.notifierConfig;
apply = x: map _filter x;
};
}; };
security = { security = {
@ -391,12 +463,12 @@ in {
smtp = { smtp = {
enable = mkEnableOption "smtp"; enable = mkEnableOption "smtp";
host = mkOption { host = mkOption {
description = "Host to connect to"; description = "Host to connect to.";
default = "localhost:25"; default = "localhost:25";
type = types.str; type = types.str;
}; };
user = mkOption { user = mkOption {
description = "User used for authentication"; description = "User used for authentication.";
default = ""; default = "";
type = types.str; type = types.str;
}; };
@ -417,7 +489,7 @@ in {
type = types.nullOr types.path; type = types.nullOr types.path;
}; };
fromAddress = mkOption { fromAddress = mkOption {
description = "Email address used for sending"; description = "Email address used for sending.";
default = "admin@grafana.localhost"; default = "admin@grafana.localhost";
type = types.str; type = types.str;
}; };
@ -425,7 +497,7 @@ in {
users = { users = {
allowSignUp = mkOption { allowSignUp = mkOption {
description = "Disable user signup / registration"; description = "Disable user signup / registration.";
default = false; default = false;
type = types.bool; type = types.bool;
}; };
@ -451,17 +523,17 @@ in {
auth.anonymous = { auth.anonymous = {
enable = mkOption { enable = mkOption {
description = "Whether to allow anonymous access"; description = "Whether to allow anonymous access.";
default = false; default = false;
type = types.bool; type = types.bool;
}; };
org_name = mkOption { org_name = mkOption {
description = "Which organization to allow anonymous access to"; description = "Which organization to allow anonymous access to.";
default = "Main Org."; default = "Main Org.";
type = types.str; type = types.str;
}; };
org_role = mkOption { org_role = mkOption {
description = "Which role anonymous users have in the organization"; description = "Which role anonymous users have in the organization.";
default = "Viewer"; default = "Viewer";
type = types.str; type = types.str;
}; };
@ -470,7 +542,7 @@ in {
analytics.reporting = { analytics.reporting = {
enable = mkOption { enable = mkOption {
description = "Whether to allow anonymous usage reporting to stats.grafana.net"; description = "Whether to allow anonymous usage reporting to stats.grafana.net.";
default = true; default = true;
type = types.bool; type = types.bool;
}; };
@ -496,6 +568,9 @@ in {
(optional ( (optional (
any (x: x.password != null || x.basicAuthPassword != null || x.secureJsonData != null) cfg.provision.datasources any (x: x.password != null || x.basicAuthPassword != null || x.secureJsonData != null) cfg.provision.datasources
) "Datasource passwords will be stored as plaintext in the Nix store!") ) "Datasource passwords will be stored as plaintext in the Nix store!")
(optional (
any (x: x.secure_settings != null) cfg.provision.notifiers
) "Notifier secure settings will be stored as plaintext in the Nix store!")
]; ];
environment.systemPackages = [ cfg.package ]; environment.systemPackages = [ cfg.package ];
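
For reference, a minimal sketch of how the new `services.grafana.provision.notifiers` option might be used; all concrete values (notifier name, uid, webhook URL) are hypothetical, and as the warning added above notes, anything placed in plain `settings` ends up in the Nix store:

```nix
{
  services.grafana.provision = {
    enable = true;
    notifiers = [
      {
        # All values here are hypothetical.
        name = "ops-slack";
        type = "slack";
        uid = "ops-slack";
        settings.url = "https://hooks.slack.com/services/..."; # plaintext in the Nix store!
      }
    ];
  };
}
```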

View file

@ -468,7 +468,7 @@ let
''; '';
}; };
value = mkOption { values = mkOption {
type = types.listOf types.str; type = types.listOf types.str;
default = []; default = [];
description = '' description = ''

View file

@ -316,7 +316,7 @@ in
client = { client = {
enable = mkEnableOption "Ceph client configuration"; enable = mkEnableOption "Ceph client configuration";
extraConfig = mkOption { extraConfig = mkOption {
type = with types; attrsOf str; type = with types; attrsOf (attrsOf str);
default = {}; default = {};
example = '' example = ''
{ {
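
The example in this hunk is truncated here. With the type changed to `attrsOf (attrsOf str)`, `extraConfig` now takes one attrset per configuration section rather than a flat set of strings; a hedged sketch of the new shape (section and option names are illustrative, not taken from this diff):

```nix
{
  services.ceph.client.extraConfig = {
    # One attrset per ceph.conf section; names here are illustrative.
    "client.radosgw.gateway" = {
      "rgw frontends" = "civetweb port=80";
    };
  };
}
```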

View file

@ -162,10 +162,7 @@ in {
NODE_NAME = cfg.nodeName; NODE_NAME = cfg.nodeName;
}; };
path = [ pkgs.iptables ]; path = [ pkgs.iptables ];
preStart = '' preStart = optionalString (cfg.storageBackend == "etcd") ''
mkdir -p /run/flannel
touch /run/flannel/docker
'' + optionalString (cfg.storageBackend == "etcd") ''
echo "setting network configuration" echo "setting network configuration"
until ${pkgs.etcdctl}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}' until ${pkgs.etcdctl}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
do do
@ -177,6 +174,7 @@ in {
ExecStart = "${cfg.package}/bin/flannel"; ExecStart = "${cfg.package}/bin/flannel";
Restart = "always"; Restart = "always";
RestartSec = "10s"; RestartSec = "10s";
RuntimeDirectory = "flannel";
}; };
}; };

View file

@ -8,9 +8,9 @@ let
# Convert systemd-style address specification to kresd config line(s). # Convert systemd-style address specification to kresd config line(s).
# On Nix level we don't attempt to precisely validate the address specifications. # On Nix level we don't attempt to precisely validate the address specifications.
mkListen = kind: addr: let mkListen = kind: addr: let
al_v4 = builtins.match "([0-9.]\+):([0-9]\+)" addr; al_v4 = builtins.match "([0-9.]+):([0-9]+)" addr;
al_v6 = builtins.match "\\[(.\+)]:([0-9]\+)" addr; al_v6 = builtins.match "\\[(.+)]:([0-9]+)" addr;
al_portOnly = builtins.match "()([0-9]\+)" addr; al_portOnly = builtins.match "()([0-9]+)" addr;
al = findFirst (a: a != null) al = findFirst (a: a != null)
(throw "services.kresd.*: incorrect address specification '${addr}'") (throw "services.kresd.*: incorrect address specification '${addr}'")
[ al_v4 al_v6 al_portOnly ]; [ al_v4 al_v6 al_portOnly ];
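
The fix drops the backslashes before `+`, which `builtins.match` (POSIX extended regexes) does not treat as a quantifier escape. A sketch of what the corrected patterns match, checkable in `nix repl`:

```nix
[
  (builtins.match "([0-9.]+):([0-9]+)" "127.0.0.1:53")  # => [ "127.0.0.1" "53" ]
  (builtins.match "\\[(.+)]:([0-9]+)" "[::1]:53")       # => [ "::1" "53" ]
  (builtins.match "()([0-9]+)" "53")                    # => [ "" "53" ]
]
```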

View file

@ -8,30 +8,19 @@ let
cfg = config.services.clamav; cfg = config.services.clamav;
pkg = pkgs.clamav; pkg = pkgs.clamav;
clamdConfigFile = pkgs.writeText "clamd.conf" '' toKeyValue = generators.toKeyValue {
DatabaseDirectory ${stateDir} mkKeyValue = generators.mkKeyValueDefault {} " ";
LocalSocket ${runDir}/clamd.ctl listsAsDuplicateKeys = true;
PidFile ${runDir}/clamd.pid };
TemporaryDirectory /tmp
User clamav
Foreground yes
${cfg.daemon.extraConfig} clamdConfigFile = pkgs.writeText "clamd.conf" (toKeyValue cfg.daemon.settings);
''; freshclamConfigFile = pkgs.writeText "freshclam.conf" (toKeyValue cfg.updater.settings);
freshclamConfigFile = pkgs.writeText "freshclam.conf" ''
DatabaseDirectory ${stateDir}
Foreground yes
Checks ${toString cfg.updater.frequency}
${cfg.updater.extraConfig}
DatabaseMirror database.clamav.net
'';
in in
{ {
imports = [ imports = [
(mkRenamedOptionModule [ "services" "clamav" "updater" "config" ] [ "services" "clamav" "updater" "extraConfig" ]) (mkRemovedOptionModule [ "services" "clamav" "updater" "config" ] "Use services.clamav.updater.settings instead.")
(mkRemovedOptionModule [ "services" "clamav" "updater" "extraConfig" ] "Use services.clamav.updater.settings instead.")
(mkRemovedOptionModule [ "services" "clamav" "daemon" "extraConfig" ] "Use services.clamav.daemon.settings instead.")
]; ];
options = { options = {
@ -39,12 +28,12 @@ in
daemon = { daemon = {
enable = mkEnableOption "ClamAV clamd daemon"; enable = mkEnableOption "ClamAV clamd daemon";
extraConfig = mkOption { settings = mkOption {
type = types.lines; type = with types; attrsOf (oneOf [ bool int str (listOf str) ]);
default = ""; default = {};
description = '' description = ''
Extra configuration for clamd. Contents will be added verbatim to the ClamAV configuration. Refer to <link xlink:href="https://linux.die.net/man/5/clamd.conf"/>,
configuration file. for details on supported values.
''; '';
}; };
}; };
@ -68,12 +57,12 @@ in
''; '';
}; };
extraConfig = mkOption { settings = mkOption {
type = types.lines; type = with types; attrsOf (oneOf [ bool int str (listOf str) ]);
default = ""; default = {};
description = '' description = ''
Extra configuration for freshclam. Contents will be added verbatim to the freshclam configuration. Refer to <link xlink:href="https://linux.die.net/man/5/freshclam.conf"/>,
configuration file. for details on supported values.
''; '';
}; };
}; };
@ -93,6 +82,22 @@ in
users.groups.${clamavGroup} = users.groups.${clamavGroup} =
{ gid = config.ids.gids.clamav; }; { gid = config.ids.gids.clamav; };
services.clamav.daemon.settings = {
DatabaseDirectory = stateDir;
LocalSocket = "${runDir}/clamd.ctl";
PidFile = "${runDir}/clamd.pid";
TemporaryDirectory = "/tmp";
User = "clamav";
Foreground = true;
};
services.clamav.updater.settings = {
DatabaseDirectory = stateDir;
Foreground = true;
Checks = cfg.updater.frequency;
DatabaseMirror = [ "database.clamav.net" ];
};
environment.etc."clamav/freshclam.conf".source = freshclamConfigFile; environment.etc."clamav/freshclam.conf".source = freshclamConfigFile;
environment.etc."clamav/clamd.conf".source = clamdConfigFile; environment.etc."clamav/clamd.conf".source = clamdConfigFile;

View file

@ -329,7 +329,7 @@ in
extraConfig = "internal;"; extraConfig = "internal;";
}; };
locations."~ ^/lib.*\.(js|css|gif|png|ico|jpg|jpeg)$" = { locations."~ ^/lib.*\\.(js|css|gif|png|ico|jpg|jpeg)$" = {
extraConfig = "expires 365d;"; extraConfig = "expires 365d;";
}; };
@ -349,7 +349,7 @@ in
''; '';
}; };
locations."~ \.php$" = { locations."~ \\.php$" = {
extraConfig = '' extraConfig = ''
try_files $uri $uri/ /doku.php; try_files $uri $uri/ /doku.php;
include ${pkgs.nginx}/conf/fastcgi_params; include ${pkgs.nginx}/conf/fastcgi_params;

View file

@ -28,7 +28,10 @@ let
upload_max_filesize = cfg.maxUploadSize; upload_max_filesize = cfg.maxUploadSize;
post_max_size = cfg.maxUploadSize; post_max_size = cfg.maxUploadSize;
memory_limit = cfg.maxUploadSize; memory_limit = cfg.maxUploadSize;
} // cfg.phpOptions; } // cfg.phpOptions
// optionalAttrs cfg.caching.apcu {
"apc.enable_cli" = "1";
};
occ = pkgs.writeScriptBin "nextcloud-occ" '' occ = pkgs.writeScriptBin "nextcloud-occ" ''
#! ${pkgs.runtimeShell} #! ${pkgs.runtimeShell}
@ -86,7 +89,7 @@ in {
package = mkOption { package = mkOption {
type = types.package; type = types.package;
description = "Which package to use for the Nextcloud instance."; description = "Which package to use for the Nextcloud instance.";
relatedPackages = [ "nextcloud18" "nextcloud19" "nextcloud20" ]; relatedPackages = [ "nextcloud19" "nextcloud20" "nextcloud21" ];
}; };
maxUploadSize = mkOption { maxUploadSize = mkOption {
@ -280,6 +283,24 @@ in {
may be served via HTTPS. may be served via HTTPS.
''; '';
}; };
defaultPhoneRegion = mkOption {
default = null;
type = types.nullOr types.str;
example = "DE";
description = ''
<warning>
<para>This option exists since Nextcloud 21! If older versions are used,
this will throw an eval-error!</para>
</warning>
<link xlink:href="https://www.iso.org/iso-3166-country-codes.html">ISO 3611-1</link>
country codes for automatic phone-number detection without a country code.
With e.g. <literal>DE</literal> set, the <literal>+49</literal> can be omitted for
phone-numbers.
'';
};
}; };
caching = { caching = {
@ -345,10 +366,13 @@ in {
&& !(acfg.adminpass != null && acfg.adminpassFile != null)); && !(acfg.adminpass != null && acfg.adminpassFile != null));
message = "Please specify exactly one of adminpass or adminpassFile"; message = "Please specify exactly one of adminpass or adminpassFile";
} }
{ assertion = versionOlder cfg.package.version "21" -> cfg.config.defaultPhoneRegion == null;
message = "The `defaultPhoneRegion'-setting is only supported for Nextcloud >=21!";
}
]; ];
warnings = let warnings = let
latest = 20; latest = 21;
upgradeWarning = major: nixos: upgradeWarning = major: nixos:
'' ''
A legacy Nextcloud install (from before NixOS ${nixos}) may be installed. A legacy Nextcloud install (from before NixOS ${nixos}) may be installed.
@ -366,9 +390,9 @@ in {
Using config.services.nextcloud.poolConfig is deprecated and will become unsupported in a future release. Using config.services.nextcloud.poolConfig is deprecated and will become unsupported in a future release.
Please migrate your configuration to config.services.nextcloud.poolSettings. Please migrate your configuration to config.services.nextcloud.poolSettings.
'') '')
++ (optional (versionOlder cfg.package.version "18") (upgradeWarning 17 "20.03"))
++ (optional (versionOlder cfg.package.version "19") (upgradeWarning 18 "20.09")) ++ (optional (versionOlder cfg.package.version "19") (upgradeWarning 18 "20.09"))
++ (optional (versionOlder cfg.package.version "20") (upgradeWarning 19 "21.05")); ++ (optional (versionOlder cfg.package.version "20") (upgradeWarning 19 "21.05"))
++ (optional (versionOlder cfg.package.version "21") (upgradeWarning 20 "21.05"));
services.nextcloud.package = with pkgs; services.nextcloud.package = with pkgs;
mkDefault ( mkDefault (
@ -378,14 +402,13 @@ in {
nextcloud defined in an overlay, please set `services.nextcloud.package` to nextcloud defined in an overlay, please set `services.nextcloud.package` to
`pkgs.nextcloud`. `pkgs.nextcloud`.
'' ''
else if versionOlder stateVersion "20.03" then nextcloud17
else if versionOlder stateVersion "20.09" then nextcloud18 else if versionOlder stateVersion "20.09" then nextcloud18
# 21.03 will not be an official release - it was instead 21.05. # 21.03 will not be an official release - it was instead 21.05.
# This versionOlder statement remains set to 21.03 for backwards compatibility. # This versionOlder statement remains set to 21.03 for backwards compatibility.
# See https://github.com/NixOS/nixpkgs/pull/108899 and # See https://github.com/NixOS/nixpkgs/pull/108899 and
# https://github.com/NixOS/rfcs/blob/master/rfcs/0080-nixos-release-schedule.md. # https://github.com/NixOS/rfcs/blob/master/rfcs/0080-nixos-release-schedule.md.
else if versionOlder stateVersion "21.03" then nextcloud19 else if versionOlder stateVersion "21.03" then nextcloud19
else nextcloud20 else nextcloud21
); );
} }
@ -443,6 +466,7 @@ in {
'dbtype' => '${c.dbtype}', 'dbtype' => '${c.dbtype}',
'trusted_domains' => ${writePhpArrary ([ cfg.hostName ] ++ c.extraTrustedDomains)}, 'trusted_domains' => ${writePhpArrary ([ cfg.hostName ] ++ c.extraTrustedDomains)},
'trusted_proxies' => ${writePhpArrary (c.trustedProxies)}, 'trusted_proxies' => ${writePhpArrary (c.trustedProxies)},
${optionalString (c.defaultPhoneRegion != null) "'default_phone_region' => '${c.defaultPhoneRegion}',"}
]; ];
''; '';
occInstallCmd = let occInstallCmd = let
@ -591,6 +615,14 @@ in {
access_log off; access_log off;
''; '';
}; };
"= /" = {
priority = 100;
extraConfig = ''
if ( $http_user_agent ~ ^DavClnt ) {
return 302 /remote.php/webdav/$is_args$args;
}
'';
};
"/" = { "/" = {
priority = 900; priority = 900;
extraConfig = "rewrite ^ /index.php;"; extraConfig = "rewrite ^ /index.php;";
@ -609,6 +641,9 @@ in {
location = /.well-known/caldav { location = /.well-known/caldav {
return 301 /remote.php/dav; return 301 /remote.php/dav;
} }
location ~ ^/\.well-known/(?!acme-challenge|pki-validation) {
return 301 /index.php$request_uri;
}
try_files $uri $uri/ =404; try_files $uri $uri/ =404;
''; '';
}; };
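
Putting the `defaultPhoneRegion` option added above to use; a sketch assuming Nextcloud 21 (as the new assertion requires), with a hypothetical host name and the usual admin/database settings omitted:

```nix
{ pkgs, ... }: {
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.org";   # hypothetical host name
    package = pkgs.nextcloud21;
    config = {
      defaultPhoneRegion = "DE";      # ISO 3166-1 alpha-2 code
      # ...plus the usual adminpass/database settings, omitted here.
    };
  };
}
```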

View file

@ -11,7 +11,7 @@
desktop client is packaged at <literal>pkgs.nextcloud-client</literal>. desktop client is packaged at <literal>pkgs.nextcloud-client</literal>.
</para> </para>
<para> <para>
The current default by NixOS is <package>nextcloud20</package> which is also the latest The current default by NixOS is <package>nextcloud21</package> which is also the latest
major version available. major version available.
</para> </para>
<section xml:id="module-services-nextcloud-basic-usage"> <section xml:id="module-services-nextcloud-basic-usage">

View file

@ -22,7 +22,9 @@ let
php = cfg.phpPackage.override { apacheHttpd = pkg; }; php = cfg.phpPackage.override { apacheHttpd = pkg; };
phpMajorVersion = lib.versions.major (lib.getVersion php); phpModuleName = let
majorVersion = lib.versions.major (lib.getVersion php);
in (if majorVersion == "8" then "php" else "php${majorVersion}");
mod_perl = pkgs.apacheHttpdPackages.mod_perl.override { apacheHttpd = pkg; }; mod_perl = pkgs.apacheHttpdPackages.mod_perl.override { apacheHttpd = pkg; };
@ -63,7 +65,7 @@ let
++ optional enableSSL "ssl" ++ optional enableSSL "ssl"
++ optional enableUserDir "userdir" ++ optional enableUserDir "userdir"
++ optional cfg.enableMellon { name = "auth_mellon"; path = "${pkgs.apacheHttpdPackages.mod_auth_mellon}/modules/mod_auth_mellon.so"; } ++ optional cfg.enableMellon { name = "auth_mellon"; path = "${pkgs.apacheHttpdPackages.mod_auth_mellon}/modules/mod_auth_mellon.so"; }
++ optional cfg.enablePHP { name = "php${phpMajorVersion}"; path = "${php}/modules/libphp${phpMajorVersion}.so"; } ++ optional cfg.enablePHP { name = phpModuleName; path = "${php}/modules/lib${phpModuleName}.so"; }
++ optional cfg.enablePerl { name = "perl"; path = "${mod_perl}/modules/mod_perl.so"; } ++ optional cfg.enablePerl { name = "perl"; path = "${mod_perl}/modules/mod_perl.so"; }
++ cfg.extraModules; ++ cfg.extraModules;

View file

@ -804,7 +804,7 @@ in
ProtectControlGroups = true; ProtectControlGroups = true;
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ]; RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
LockPersonality = true; LockPersonality = true;
MemoryDenyWriteExecute = !(builtins.any (mod: (mod.allowMemoryWriteExecute or false)) pkgs.nginx.modules); MemoryDenyWriteExecute = !(builtins.any (mod: (mod.allowMemoryWriteExecute or false)) cfg.package.modules);
RestrictRealtime = true; RestrictRealtime = true;
RestrictSUIDSGID = true; RestrictSUIDSGID = true;
PrivateMounts = true; PrivateMounts = true;

View file

@ -58,7 +58,7 @@ in
noDesktop = mkOption { noDesktop = mkOption {
type = types.bool; type = types.bool;
default = false; default = false;
description = "Don't install XFCE desktop components (xfdesktop, panel and notification daemon)."; description = "Don't install XFCE desktop components (xfdesktop and panel).";
}; };
enableXfwm = mkOption { enableXfwm = mkOption {
@ -98,6 +98,7 @@ in
parole parole
ristretto ristretto
xfce4-appfinder xfce4-appfinder
xfce4-notifyd
xfce4-screenshooter xfce4-screenshooter
xfce4-session xfce4-session
xfce4-settings xfce4-settings
@ -119,7 +120,6 @@ in
xfwm4 xfwm4
xfwm4-themes xfwm4-themes
] ++ optionals (!cfg.noDesktop) [ ] ++ optionals (!cfg.noDesktop) [
xfce4-notifyd
xfce4-panel xfce4-panel
xfdesktop xfdesktop
]; ];
@ -166,7 +166,8 @@ in
# Systemd services # Systemd services
systemd.packages = with pkgs.xfce; [ systemd.packages = with pkgs.xfce; [
(thunar.override { thunarPlugins = cfg.thunarPlugins; }) (thunar.override { thunarPlugins = cfg.thunarPlugins; })
] ++ optional (!cfg.noDesktop) xfce4-notifyd; xfce4-notifyd
];
}; };
} }

View file

@ -37,6 +37,11 @@ let
. /etc/profile . /etc/profile
cd "$HOME" cd "$HOME"
# Allow the user to execute commands at the beginning of the X session.
if test -f ~/.xprofile; then
source ~/.xprofile
fi
${optionalString cfg.displayManager.job.logToJournal '' ${optionalString cfg.displayManager.job.logToJournal ''
if [ -z "$_DID_SYSTEMD_CAT" ]; then if [ -z "$_DID_SYSTEMD_CAT" ]; then
export _DID_SYSTEMD_CAT=1 export _DID_SYSTEMD_CAT=1
@ -64,22 +69,23 @@ let
# Speed up application start by 50-150ms according to # Speed up application start by 50-150ms according to
# http://kdemonkey.blogspot.nl/2008/04/magic-trick.html # http://kdemonkey.blogspot.nl/2008/04/magic-trick.html
rm -rf "$HOME/.compose-cache" compose_cache="''${XCOMPOSECACHE:-$HOME/.compose-cache}"
mkdir "$HOME/.compose-cache" mkdir -p "$compose_cache"
# To avoid accidentally deleting a wrongly set up XCOMPOSECACHE directory,
# defensively try to delete cache *files* only, following the file format specified in
# https://gitlab.freedesktop.org/xorg/lib/libx11/-/blob/master/modules/im/ximcp/imLcIm.c#L353-358
# sprintf (*res, "%s/%c%d_%03x_%08x_%08x", dir, _XimGetMyEndian(), XIM_CACHE_VERSION, (unsigned int)sizeof (DefTree), hash, hash2);
${pkgs.findutils}/bin/find "$compose_cache" -maxdepth 1 -regextype posix-extended -regex '.*/[Bl][0-9]+_[0-9a-f]{3}_[0-9a-f]{8}_[0-9a-f]{8}' -delete
unset compose_cache
# Work around KDE errors when a user first logs in and # Work around KDE errors when a user first logs in and
# .local/share doesn't exist yet. # .local/share doesn't exist yet.
mkdir -p "$HOME/.local/share" mkdir -p "''${XDG_DATA_HOME:-$HOME/.local/share}"
unset _DID_SYSTEMD_CAT unset _DID_SYSTEMD_CAT
${cfg.displayManager.sessionCommands} ${cfg.displayManager.sessionCommands}
# Allow the user to execute commands at the beginning of the X session.
if test -f ~/.xprofile; then
source ~/.xprofile
fi
# Start systemd user services for graphical sessions # Start systemd user services for graphical sessions
/run/current-system/systemd/bin/systemctl --user start graphical-session.target /run/current-system/systemd/bin/systemctl --user start graphical-session.target

View file

@ -2,24 +2,6 @@
with lib; with lib;
let let
findWinner = candidates: winner:
any (x: x == winner) candidates;
# winners is an ordered list where first item wins over 2nd etc
mergeAnswer = winners: locs: defs:
let
values = map (x: x.value) defs;
inter = intersectLists values winners;
winner = head winners;
in
if defs == [] then abort "This case should never happen."
else if winner == [] then abort "Give a valid list of winner"
else if inter == [] then mergeOneOption locs defs
else if findWinner values winner then
winner
else
mergeAnswer (tail winners) locs defs;
mergeFalseByDefault = locs: defs: mergeFalseByDefault = locs: defs:
if defs == [] then abort "This case should never happen." if defs == [] then abort "This case should never happen."
else if any (x: x == false) (getValues defs) then false else if any (x: x == false) (getValues defs) then false
@ -28,9 +10,7 @@ let
kernelItem = types.submodule { kernelItem = types.submodule {
options = { options = {
tristate = mkOption { tristate = mkOption {
type = types.enum [ "y" "m" "n" null ] // { type = types.enum [ "y" "m" "n" null ];
merge = mergeAnswer [ "y" "m" "n" ];
};
default = null; default = null;
internal = true; internal = true;
visible = true; visible = true;

View file

@ -436,7 +436,8 @@ let
"IPv4ProxyARP" "IPv4ProxyARP"
"IPv6ProxyNDP" "IPv6ProxyNDP"
"IPv6ProxyNDPAddress" "IPv6ProxyNDPAddress"
"IPv6PrefixDelegation" "IPv6SendRA"
"DHCPv6PrefixDelegation"
"IPv6MTUBytes" "IPv6MTUBytes"
"Bridge" "Bridge"
"Bond" "Bond"
@ -477,7 +478,8 @@ let
(assertMinimum "IPv6HopLimit" 0) (assertMinimum "IPv6HopLimit" 0)
(assertValueOneOf "IPv4ProxyARP" boolValues) (assertValueOneOf "IPv4ProxyARP" boolValues)
(assertValueOneOf "IPv6ProxyNDP" boolValues) (assertValueOneOf "IPv6ProxyNDP" boolValues)
(assertValueOneOf "IPv6PrefixDelegation" ["static" "dhcpv6" "yes" "false"]) (assertValueOneOf "IPv6SendRA" boolValues)
(assertValueOneOf "DHCPv6PrefixDelegation" boolValues)
(assertByteFormat "IPv6MTUBytes") (assertByteFormat "IPv6MTUBytes")
(assertValueOneOf "ActiveSlave" boolValues) (assertValueOneOf "ActiveSlave" boolValues)
(assertValueOneOf "PrimarySlave" boolValues) (assertValueOneOf "PrimarySlave" boolValues)
@ -643,18 +645,63 @@ let
sectionDHCPv6 = checkUnitConfig "DHCPv6" [ sectionDHCPv6 = checkUnitConfig "DHCPv6" [
(assertOnlyFields [ (assertOnlyFields [
"UseAddress"
"UseDNS" "UseDNS"
"UseNTP" "UseNTP"
"RouteMetric"
"RapidCommit" "RapidCommit"
"MUDURL"
"RequestOptions"
"SendVendorOption"
"ForceDHCPv6PDOtherInformation" "ForceDHCPv6PDOtherInformation"
"PrefixDelegationHint" "PrefixDelegationHint"
"RouteMetric" "WithoutRA"
"SendOption"
"UserClass"
"VendorClass"
]) ])
(assertValueOneOf "UseAddress" boolValues)
(assertValueOneOf "UseDNS" boolValues) (assertValueOneOf "UseDNS" boolValues)
(assertValueOneOf "UseNTP" boolValues) (assertValueOneOf "UseNTP" boolValues)
(assertInt "RouteMetric")
(assertValueOneOf "RapidCommit" boolValues) (assertValueOneOf "RapidCommit" boolValues)
(assertValueOneOf "ForceDHCPv6PDOtherInformation" boolValues) (assertValueOneOf "ForceDHCPv6PDOtherInformation" boolValues)
(assertInt "RouteMetric") (assertValueOneOf "WithoutRA" ["solicit" "information-request"])
(assertRange "SendOption" 1 65536)
];
sectionDHCPv6PrefixDelegation = checkUnitConfig "DHCPv6PrefixDelegation" [
(assertOnlyFields [
"SubnetId"
"Announce"
"Assign"
"Token"
])
(assertValueOneOf "Announce" boolValues)
(assertValueOneOf "Assign" boolValues)
];
sectionIPv6AcceptRA = checkUnitConfig "IPv6AcceptRA" [
(assertOnlyFields [
"UseDNS"
"UseDomains"
"RouteTable"
"UseAutonomousPrefix"
"UseOnLinkPrefix"
"RouterDenyList"
"RouterAllowList"
"PrefixDenyList"
"PrefixAllowList"
"RouteDenyList"
"RouteAllowList"
"DHCPv6Client"
])
(assertValueOneOf "UseDNS" boolValues)
(assertValueOneOf "UseDomains" (boolValues ++ ["route"]))
(assertRange "RouteTable" 0 4294967295)
(assertValueOneOf "UseAutonomousPrefix" boolValues)
(assertValueOneOf "UseOnLinkPrefix" boolValues)
(assertValueOneOf "DHCPv6Client" (boolValues ++ ["always"]))
]; ];
sectionDHCPServer = checkUnitConfig "DHCPServer" [ sectionDHCPServer = checkUnitConfig "DHCPServer" [
@ -685,7 +732,7 @@ let
(assertValueOneOf "EmitTimezone" boolValues) (assertValueOneOf "EmitTimezone" boolValues)
]; ];
sectionIPv6PrefixDelegation = checkUnitConfig "IPv6PrefixDelegation" [ sectionIPv6SendRA = checkUnitConfig "IPv6SendRA" [
(assertOnlyFields [ (assertOnlyFields [
"Managed" "Managed"
"OtherInformation" "OtherInformation"
@ -1090,6 +1137,30 @@ let
''; '';
}; };
dhcpV6PrefixDelegationConfig = mkOption {
default = {};
example = { SubnetId = "auto"; Announce = true; };
type = types.addCheck (types.attrsOf unitOption) check.network.sectionDHCPv6PrefixDelegation;
description = ''
Each attribute in this set specifies an option in the
<literal>[DHCPv6PrefixDelegation]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
ipv6AcceptRAConfig = mkOption {
default = {};
example = { UseDNS = true; DHCPv6Client = "always"; };
type = types.addCheck (types.attrsOf unitOption) check.network.sectionIPv6AcceptRA;
description = ''
Each attribute in this set specifies an option in the
<literal>[IPv6AcceptRA]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details.
'';
};
dhcpServerConfig = mkOption { dhcpServerConfig = mkOption {
default = {}; default = {};
example = { PoolOffset = 50; EmitDNS = false; }; example = { PoolOffset = 50; EmitDNS = false; };
@ -1102,13 +1173,20 @@ let
''; '';
}; };
# systemd.network.networks.*.ipv6PrefixDelegationConfig has been deprecated
# in 247 in favor of systemd.network.networks.*.ipv6SendRAConfig.
ipv6PrefixDelegationConfig = mkOption { ipv6PrefixDelegationConfig = mkOption {
visible = false;
apply = _: throw "The option `systemd.network.networks.*.ipv6PrefixDelegationConfig` has been replaced by `systemd.network.networks.*.ipv6SendRAConfig`.";
};
ipv6SendRAConfig = mkOption {
default = {}; default = {};
example = { EmitDNS = true; Managed = true; OtherInformation = true; }; example = { EmitDNS = true; Managed = true; OtherInformation = true; };
type = types.addCheck (types.attrsOf unitOption) check.network.sectionIPv6PrefixDelegation; type = types.addCheck (types.attrsOf unitOption) check.network.sectionIPv6SendRA;
description = '' description = ''
Each attribute in this set specifies an option in the Each attribute in this set specifies an option in the
<literal>[IPv6PrefixDelegation]</literal> section of the unit. See <literal>[IPv6SendRA]</literal> section of the unit. See
<citerefentry><refentrytitle>systemd.network</refentrytitle> <citerefentry><refentrytitle>systemd.network</refentrytitle>
<manvolnum>5</manvolnum></citerefentry> for details. <manvolnum>5</manvolnum></citerefentry> for details.
''; '';
@ -1457,13 +1535,21 @@ let
[DHCPv6] [DHCPv6]
${attrsToSection def.dhcpV6Config} ${attrsToSection def.dhcpV6Config}
'' ''
+ optionalString (def.dhcpV6PrefixDelegationConfig != { }) ''
[DHCPv6PrefixDelegation]
${attrsToSection def.dhcpV6PrefixDelegationConfig}
''
+ optionalString (def.ipv6AcceptRAConfig != { }) ''
[IPv6AcceptRA]
${attrsToSection def.ipv6AcceptRAConfig}
''
+ optionalString (def.dhcpServerConfig != { }) '' + optionalString (def.dhcpServerConfig != { }) ''
[DHCPServer] [DHCPServer]
${attrsToSection def.dhcpServerConfig} ${attrsToSection def.dhcpServerConfig}
'' ''
+ optionalString (def.ipv6PrefixDelegationConfig != { }) '' + optionalString (def.ipv6SendRAConfig != { }) ''
[IPv6PrefixDelegation] [IPv6SendRA]
${attrsToSection def.ipv6PrefixDelegationConfig} ${attrsToSection def.ipv6SendRAConfig}
'' ''
+ flip concatMapStrings def.ipv6Prefixes (x: '' + flip concatMapStrings def.ipv6Prefixes (x: ''
[IPv6Prefix] [IPv6Prefix]
@ -1479,7 +1565,6 @@ let
in in
{ {
options = { options = {
systemd.network.enable = mkOption { systemd.network.enable = mkOption {
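
Taken together, the renamed `[IPv6SendRA]` section and the new `[DHCPv6PrefixDelegation]` and `[IPv6AcceptRA]` sections surface as the options below; a sketch showing all three side by side purely to illustrate the new option names (interface name and values are hypothetical):

```nix
{
  systemd.network.networks."40-lan" = {
    matchConfig.Name = "eth1";           # hypothetical interface
    networkConfig.IPv6SendRA = true;     # renamed from IPv6PrefixDelegation
    # Replaces the former ipv6PrefixDelegationConfig option.
    ipv6SendRAConfig = { Managed = false; OtherInformation = true; };
    dhcpV6PrefixDelegationConfig = { SubnetId = "auto"; Announce = true; };
    ipv6AcceptRAConfig = { UseDNS = true; DHCPv6Client = "always"; };
  };
}
```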

View file

@ -4,8 +4,7 @@ with lib;
let let
inherit (pkgs) plymouth; inherit (pkgs) plymouth nixos-icons;
inherit (pkgs) nixos-icons;
cfg = config.boot.plymouth; cfg = config.boot.plymouth;
@ -16,14 +15,37 @@ let
osVersion = config.system.nixos.release; osVersion = config.system.nixos.release;
}; };
plymouthLogos = pkgs.runCommand "plymouth-logos" { inherit (cfg) logo; } ''
mkdir -p $out
# For themes that are compiled with PLYMOUTH_LOGO_FILE
mkdir -p $out/etc/plymouth
ln -s $logo $out/etc/plymouth/logo.png
# Logo for bgrt theme
# Note this is technically an abuse of watermark for the bgrt theme
# See: https://gitlab.freedesktop.org/plymouth/plymouth/-/issues/95#note_813768
mkdir -p $out/share/plymouth/themes/spinner
ln -s $logo $out/share/plymouth/themes/spinner/watermark.png
# Logo for spinfinity theme
# See: https://gitlab.freedesktop.org/plymouth/plymouth/-/issues/106
mkdir -p $out/share/plymouth/themes/spinfinity
ln -s $logo $out/share/plymouth/themes/spinfinity/header-image.png
'';
themesEnv = pkgs.buildEnv { themesEnv = pkgs.buildEnv {
name = "plymouth-themes"; name = "plymouth-themes";
paths = [ plymouth ] ++ cfg.themePackages; paths = [
plymouth
plymouthLogos
] ++ cfg.themePackages;
}; };
configFile = pkgs.writeText "plymouthd.conf" '' configFile = pkgs.writeText "plymouthd.conf" ''
[Daemon] [Daemon]
ShowDelay=0 ShowDelay=0
DeviceTimeout=8
Theme=${cfg.theme} Theme=${cfg.theme}
${cfg.extraConfig} ${cfg.extraConfig}
''; '';
@ -47,7 +69,7 @@ in
}; };
themePackages = mkOption { themePackages = mkOption {
default = [ nixosBreezePlymouth ]; default = lib.optional (cfg.theme == "breeze") nixosBreezePlymouth;
type = types.listOf types.package; type = types.listOf types.package;
description = '' description = ''
Extra theme packages for plymouth. Extra theme packages for plymouth.
@ -55,7 +77,7 @@ in
}; };
theme = mkOption { theme = mkOption {
default = "breeze"; default = "bgrt";
type = types.str; type = types.str;
description = '' description = ''
Splash screen theme. Splash screen theme.
@ -64,7 +86,8 @@ in
logo = mkOption { logo = mkOption {
type = types.path; type = types.path;
default = "${nixos-icons}/share/icons/hicolor/128x128/apps/nix-snowflake.png"; # Dimensions are 48x48 to match GDM logo
default = "${nixos-icons}/share/icons/hicolor/48x48/apps/nix-snowflake-white.png";
defaultText = ''pkgs.fetchurl { defaultText = ''pkgs.fetchurl {
url = "https://nixos.org/logo/nixos-hires.png"; url = "https://nixos.org/logo/nixos-hires.png";
sha256 = "1ivzgd7iz0i06y36p8m5w48fd8pjqwxhdaavc0pxs7w1g7mcy5si"; sha256 = "1ivzgd7iz0i06y36p8m5w48fd8pjqwxhdaavc0pxs7w1g7mcy5si";
@ -114,8 +137,14 @@ in
systemd.paths.systemd-ask-password-plymouth.wantedBy = [ "multi-user.target" ]; systemd.paths.systemd-ask-password-plymouth.wantedBy = [ "multi-user.target" ];
boot.initrd.extraUtilsCommands = '' boot.initrd.extraUtilsCommands = ''
copy_bin_and_libs ${pkgs.plymouth}/bin/plymouthd copy_bin_and_libs ${plymouth}/bin/plymouth
copy_bin_and_libs ${pkgs.plymouth}/bin/plymouth copy_bin_and_libs ${plymouth}/bin/plymouthd
# Check if the actual requested theme is here
if [[ ! -d ${themesEnv}/share/plymouth/themes/${cfg.theme} ]]; then
echo "The requested theme: ${cfg.theme} is not provided by any of the packages in boot.plymouth.themePackages"
exit 1
fi
moduleName="$(sed -n 's,ModuleName *= *,,p' ${themesEnv}/share/plymouth/themes/${cfg.theme}/${cfg.theme}.plymouth)" moduleName="$(sed -n 's,ModuleName *= *,,p' ${themesEnv}/share/plymouth/themes/${cfg.theme}/${cfg.theme}.plymouth)"
@ -127,21 +156,29 @@ in
mkdir -p $out/share/plymouth/themes mkdir -p $out/share/plymouth/themes
cp ${plymouth}/share/plymouth/plymouthd.defaults $out/share/plymouth cp ${plymouth}/share/plymouth/plymouthd.defaults $out/share/plymouth
# copy themes into working directory for patching # Copy themes into working directory for patching
mkdir themes mkdir themes
# use -L to copy the directories proper, not the symlinks to them
cp -r -L ${themesEnv}/share/plymouth/themes/{text,details,${cfg.theme}} themes
# patch out any attempted references to the theme or plymouth's themes directory # Use -L to copy the directories proper, not the symlinks to them.
# Copy all themes because they're not large assets, and bgrt depends on the ImageDir of
# the spinner theme.
cp -r -L ${themesEnv}/share/plymouth/themes/* themes
# Patch out any attempted references to the theme or plymouth's themes directory
chmod -R +w themes chmod -R +w themes
find themes -type f | while read file find themes -type f | while read file
do do
sed -i "s,/nix/.*/share/plymouth/themes,$out/share/plymouth/themes,g" $file sed -i "s,/nix/.*/share/plymouth/themes,$out/share/plymouth/themes,g" $file
done done
# Install themes
cp -r themes/* $out/share/plymouth/themes cp -r themes/* $out/share/plymouth/themes
cp ${cfg.logo} $out/share/plymouth/logo.png
# Install logo
mkdir -p $out/etc/plymouth
cp -r -L ${themesEnv}/etc/plymouth $out
# Setup font
mkdir -p $out/share/fonts mkdir -p $out/share/fonts
cp ${cfg.font} $out/share/fonts cp ${cfg.font} $out/share/fonts
mkdir -p $out/etc/fonts mkdir -p $out/etc/fonts

View file

@ -614,12 +614,17 @@ echo /sbin/modprobe > /proc/sys/kernel/modprobe
# Start stage 2. `switch_root' deletes all files in the ramfs on the # Start stage 2. `switch_root' deletes all files in the ramfs on the
# current root. Note that $stage2Init might be an absolute symlink, # current root. The path has to be valid in the chroot not outside.
# in which case "-e" won't work because we're not in the chroot yet. if [ ! -e "$targetRoot/$stage2Init" ]; then
if [ ! -e "$targetRoot/$stage2Init" ] && [ ! -L "$targetRoot/$stage2Init" ] ; then stage2Check=${stage2Init}
while [ "$stage2Check" != "${stage2Check%/*}" ] && [ ! -L "$targetRoot/$stage2Check" ]; do
stage2Check=${stage2Check%/*}
done
if [ ! -L "$targetRoot/$stage2Check" ]; then
echo "stage 2 init script ($targetRoot/$stage2Init) not found" echo "stage 2 init script ($targetRoot/$stage2Init) not found"
fail fail
fi fi
fi
mkdir -m 0755 -p $targetRoot/proc $targetRoot/sys $targetRoot/dev $targetRoot/run mkdir -m 0755 -p $targetRoot/proc $targetRoot/sys $targetRoot/dev $targetRoot/run

View file

@ -93,17 +93,7 @@ in
(if i.useDHCP != null then i.useDHCP else false)); (if i.useDHCP != null then i.useDHCP else false));
address = forEach (interfaceIps i) address = forEach (interfaceIps i)
(ip: "${ip.address}/${toString ip.prefixLength}"); (ip: "${ip.address}/${toString ip.prefixLength}");
# IPv6PrivacyExtensions=kernel seems to be broken with networkd. networkConfig.IPv6PrivacyExtensions = "kernel";
# Instead of using IPv6PrivacyExtensions=kernel, configure it according to the value of
# `tempAddress`:
networkConfig.IPv6PrivacyExtensions = {
# generate temporary addresses and use them by default
"default" = true;
# generate temporary addresses but keep using the standard EUI-64 ones by default
"enabled" = "prefer-public";
# completely disable temporary addresses
"disabled" = false;
}.${i.tempAddress};
linkConfig = optionalAttrs (i.macAddress != null) { linkConfig = optionalAttrs (i.macAddress != null) {
MACAddress = i.macAddress; MACAddress = i.macAddress;
} // optionalAttrs (i.mtu != null) { } // optionalAttrs (i.mtu != null) {

View file

@ -1,6 +1,10 @@
{ config, pkgs, ... }: { config, lib, pkgs, ... }:
with lib;
let let
cfg = config.virtualisation.amazon-init;
script = '' script = ''
#!${pkgs.runtimeShell} -eu #!${pkgs.runtimeShell} -eu
@ -41,6 +45,18 @@ let
nixos-rebuild switch nixos-rebuild switch
''; '';
in { in {
options.virtualisation.amazon-init = {
enable = mkOption {
default = true;
type = types.bool;
description = ''
Enable or disable the amazon-init service.
'';
};
};
config = mkIf cfg.enable {
systemd.services.amazon-init = { systemd.services.amazon-init = {
inherit script; inherit script;
description = "Reconfigure the system from EC2 userdata on startup"; description = "Reconfigure the system from EC2 userdata on startup";
@ -57,4 +73,5 @@ in {
RemainAfterExit = true; RemainAfterExit = true;
}; };
}; };
};
} }
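
With the new option, the service can now be switched off declaratively:

```nix
{
  # Keep the AMI's EC2 userdata from reconfiguring the system on boot.
  virtualisation.amazon-init.enable = false;
}
```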

View file

@ -98,7 +98,6 @@ in
environment.XDG_RUNTIME_DIR="${anboxloc}"; environment.XDG_RUNTIME_DIR="${anboxloc}";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "systemd-udev-settle.service" ];
preStart = let preStart = let
initsh = pkgs.writeText "nixos-init" ('' initsh = pkgs.writeText "nixos-init" (''
#!/system/bin/sh #!/system/bin/sh

View file

@ -0,0 +1,60 @@
{ pkgs, lib, config, ... }:
let
cfg = config.virtualisation.containerd;
containerdConfigChecked = pkgs.runCommand "containerd-config-checked.toml" { nativeBuildInputs = [pkgs.containerd]; } ''
containerd -c ${cfg.configFile} config dump >/dev/null
ln -s ${cfg.configFile} $out
'';
in
{
options.virtualisation.containerd = with lib.types; {
enable = lib.mkEnableOption "containerd container runtime";
configFile = lib.mkOption {
default = null;
description = "path to containerd config file";
type = nullOr path;
};
args = lib.mkOption {
default = {};
description = "extra args to append to the containerd cmdline";
type = attrsOf str;
};
};
config = lib.mkIf cfg.enable {
virtualisation.containerd.args.config = lib.mkIf (cfg.configFile != null) (toString containerdConfigChecked);
environment.systemPackages = [pkgs.containerd];
systemd.services.containerd = {
description = "containerd - container runtime";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = with pkgs; [
containerd
runc
iptables
];
serviceConfig = {
ExecStart = ''${pkgs.containerd}/bin/containerd ${lib.concatStringsSep " " (lib.cli.toGNUCommandLine {} cfg.args)}'';
Delegate = "yes";
KillMode = "process";
Type = "notify";
Restart = "always";
RestartSec = "5";
StartLimitBurst = "8";
StartLimitIntervalSec = "120s";
# "limits" defined below are adopted from upstream: https://github.com/containerd/containerd/blob/master/containerd.service
LimitNPROC = "infinity";
LimitCORE = "infinity";
LimitNOFILE = "infinity";
TasksMax = "infinity";
OOMScoreAdjust = "-999";
};
};
};
}
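
A minimal sketch of using the new module; the config file contents and the extra flag are hypothetical, shown only to exercise `configFile` (validated at build time by the `containerd config dump` check above) and `args` (rendered via `lib.cli.toGNUCommandLine`):

```nix
{ pkgs, ... }: {
  virtualisation.containerd = {
    enable = true;
    # Hypothetical: a pre-written config file, checked at build time.
    configFile = pkgs.writeText "containerd.toml" ''
      version = 2
    '';
    # Becomes "--log-level debug" on the containerd command line.
    args.log-level = "debug";
  };
}
```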

View file

@ -221,7 +221,7 @@ in {
systemd.services.libvirtd = { systemd.services.libvirtd = {
requires = [ "libvirtd-config.service" ]; requires = [ "libvirtd-config.service" ];
after = [ "systemd-udev-settle.service" "libvirtd-config.service" ] after = [ "libvirtd-config.service" ]
++ optional vswitch.enable "ovs-vswitchd.service"; ++ optional vswitch.enable "ovs-vswitchd.service";
environment.LIBVIRTD_ARGS = escapeShellArgs ( environment.LIBVIRTD_ARGS = escapeShellArgs (

View file

@ -66,7 +66,7 @@ in {
type = types.bool; type = types.bool;
default = false; default = false;
description = '' description = ''
enables various settings to avoid common pitfalls when Enables various settings to avoid common pitfalls when
running containers requiring many file operations. running containers requiring many file operations.
Fixes errors like "Too many open files" or Fixes errors like "Too many open files" or
"neighbour: ndisc_cache: neighbor table overflow!". "neighbour: ndisc_cache: neighbor table overflow!".
@ -74,6 +74,17 @@ in {
for details. for details.
''; '';
}; };
startTimeout = mkOption {
type = types.int;
default = 600;
apply = toString;
description = ''
Time to wait (in seconds) for LXD to become ready to process requests.
If LXD does not reply within the configured time, lxd.service will be
considered failed and systemd will attempt to restart it.
'';
};
}; };
}; };
@ -81,40 +92,58 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ]; environment.systemPackages = [ cfg.package ];
security.apparmor = { # Note: the following options are also declared in virtualisation.lxc, but
enable = true; # the latter can't be simply enabled to reuse the formers, because it
profiles = [ # does a bunch of unrelated things.
"${cfg.lxcPackage}/etc/apparmor.d/usr.bin.lxc-start" systemd.tmpfiles.rules = [ "d /var/lib/lxc/rootfs 0755 root root -" ];
security.apparmor.packages = [ cfg.lxcPackage ];
security.apparmor.profiles = [
"${cfg.lxcPackage}/etc/apparmor.d/lxc-containers" "${cfg.lxcPackage}/etc/apparmor.d/lxc-containers"
"${cfg.lxcPackage}/etc/apparmor.d/usr.bin.lxc-start"
]; ];
packages = [ cfg.lxcPackage ];
};
# TODO: remove once LXD gets proper support for cgroupsv2 # TODO: remove once LXD gets proper support for cgroupsv2
# (currently most of the e.g. CPU accounting stuff doesn't work) # (currently most of the e.g. CPU accounting stuff doesn't work)
systemd.enableUnifiedCgroupHierarchy = false; systemd.enableUnifiedCgroupHierarchy = false;
systemd.sockets.lxd = {
description = "LXD UNIX socket";
wantedBy = [ "sockets.target" ];
socketConfig = {
ListenStream = "/var/lib/lxd/unix.socket";
SocketMode = "0660";
SocketGroup = "lxd";
Service = "lxd.service";
};
};
systemd.services.lxd = { systemd.services.lxd = {
description = "LXD Container Management Daemon"; description = "LXD Container Management Daemon";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
after = [ "systemd-udev-settle.service" ]; after = [ "network-online.target" "lxcfs.service" ];
requires = [ "network-online.target" "lxd.socket" "lxcfs.service" ];
documentation = [ "man:lxd(1)" ];
path = lib.optional config.boot.zfs.enabled config.boot.zfs.package; path = optional cfg.zfsSupport config.boot.zfs.package;
preStart = ''
mkdir -m 0755 -p /var/lib/lxc/rootfs
'';
serviceConfig = { serviceConfig = {
ExecStart = "@${cfg.package}/bin/lxd lxd --group lxd"; ExecStart = "@${cfg.package}/bin/lxd lxd --group lxd";
Type = "simple"; ExecStartPost = "${cfg.package}/bin/lxd waitready --timeout=${cfg.startTimeout}";
ExecStop = "${cfg.package}/bin/lxd shutdown";
KillMode = "process"; # when stopping, leave the containers alone KillMode = "process"; # when stopping, leave the containers alone
LimitMEMLOCK = "infinity"; LimitMEMLOCK = "infinity";
LimitNOFILE = "1048576"; LimitNOFILE = "1048576";
LimitNPROC = "infinity"; LimitNPROC = "infinity";
TasksMax = "infinity"; TasksMax = "infinity";
Restart = "on-failure";
TimeoutStartSec = "${cfg.startTimeout}s";
TimeoutStopSec = "30s";
# By default, `lxd` loads configuration files from hard-coded # By default, `lxd` loads configuration files from hard-coded
# `/usr/share/lxc/config` - since this is a no-go for us, we have to # `/usr/share/lxc/config` - since this is a no-go for us, we have to
# explicitly tell it where the actual configuration files are # explicitly tell it where the actual configuration files are
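
The reworked unit is socket-activated and waits for readiness via `lxd waitready`; enabling it remains a one-liner, with the new timeout tunable for slow storage backends:

```nix
{
  virtualisation.lxd = {
    enable = true;
    # Give slow storage backends more time before systemd restarts lxd.
    startTimeout = 900;
  };
}
```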

View file

@ -271,8 +271,8 @@ let
DeviceAllow = map (d: "${d.node} ${d.modifier}") cfg.allowedDevices; DeviceAllow = map (d: "${d.node} ${d.modifier}") cfg.allowedDevices;
}; };
system = config.nixpkgs.localSystem.system; system = config.nixpkgs.localSystem.system;
kernelVersion = config.boot.kernelPackages.kernel.version;
bindMountOpts = { name, ... }: { bindMountOpts = { name, ... }: {
@ -321,7 +321,6 @@ let
}; };
}; };
mkBindFlag = d: mkBindFlag = d:
let flagPrefix = if d.isReadOnly then " --bind-ro=" else " --bind="; let flagPrefix = if d.isReadOnly then " --bind-ro=" else " --bind=";
mountstr = if d.hostPath != null then "${d.hostPath}:${d.mountPoint}" else "${d.mountPoint}"; mountstr = if d.hostPath != null then "${d.hostPath}:${d.mountPoint}" else "${d.mountPoint}";
@ -482,11 +481,16 @@ in
networking.useDHCP = false; networking.useDHCP = false;
assertions = [ assertions = [
{ {
assertion = config.privateNetwork -> stringLength name < 12; assertion =
(builtins.compareVersions kernelVersion "5.8" <= 0)
-> config.privateNetwork
-> stringLength name <= 11;
message = '' message = ''
Container name `${name}` is too long: When `privateNetwork` is enabled, container names can Container name `${name}` is too long: When `privateNetwork` is enabled, container names can
not be longer than 11 characters, because the container's interface name is derived from it. not be longer than 11 characters, because the container's interface name is derived from it.
This might be fixed in the future. See https://github.com/NixOS/nixpkgs/issues/38509 You should either make the container name shorter or upgrade to a more recent kernel that
supports interface altnames (i.e. at least Linux 5.8 - please see https://github.com/NixOS/nixpkgs/issues/38509
for details).
''; '';
} }
]; ];
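
Under the relaxed assertion, a long container name is accepted once the kernel supports interface altnames; a sketch mirroring the containers-names test added later in this diff (addresses are illustrative):

```nix
{ pkgs, ... }: {
  # Interface altnames need Linux >= 5.8 when privateNetwork is enabled.
  boot.kernelPackages = pkgs.linuxPackages_latest;
  containers.really-long-name = {
    autoStart = true;
    privateNetwork = true;
    hostAddress = "192.168.10.1";
    localAddress = "192.168.10.2";
    config = { };
  };
}
```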

View file

@ -277,6 +277,18 @@ in
''; '';
}; };
virtualisation.msize =
mkOption {
default = null;
type = types.nullOr types.ints.unsigned;
description =
''
msize (maximum packet size) option passed to 9p file systems, in
bytes. Increasing this should increase performance significantly,
at the cost of higher RAM usage.
'';
};
virtualisation.diskSize = virtualisation.diskSize =
mkOption { mkOption {
default = 512; default = 512;
@ -666,7 +678,7 @@ in
${if cfg.writableStore then "/nix/.ro-store" else "/nix/store"} = ${if cfg.writableStore then "/nix/.ro-store" else "/nix/store"} =
{ device = "store"; { device = "store";
fsType = "9p"; fsType = "9p";
options = [ "trans=virtio" "version=9p2000.L" "cache=loose" ]; options = [ "trans=virtio" "version=9p2000.L" "cache=loose" ] ++ lib.optional (cfg.msize != null) "msize=${toString cfg.msize}";
neededForBoot = true; neededForBoot = true;
}; };
"/tmp" = mkIf config.boot.tmpOnTmpfs "/tmp" = mkIf config.boot.tmpOnTmpfs
@ -679,13 +691,13 @@ in
"/tmp/xchg" = "/tmp/xchg" =
{ device = "xchg"; { device = "xchg";
fsType = "9p"; fsType = "9p";
options = [ "trans=virtio" "version=9p2000.L" ]; options = [ "trans=virtio" "version=9p2000.L" ] ++ lib.optional (cfg.msize != null) "msize=${toString cfg.msize}";
neededForBoot = true; neededForBoot = true;
}; };
"/tmp/shared" = "/tmp/shared" =
{ device = "shared"; { device = "shared";
fsType = "9p"; fsType = "9p";
options = [ "trans=virtio" "version=9p2000.L" ]; options = [ "trans=virtio" "version=9p2000.L" ] ++ lib.optional (cfg.msize != null) "msize=${toString cfg.msize}";
neededForBoot = true; neededForBoot = true;
}; };
} // optionalAttrs (cfg.writableStore && cfg.writableStoreUseTmpfs) } // optionalAttrs (cfg.writableStore && cfg.writableStoreUseTmpfs)
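
The new knob is passed straight through to every 9p mount above; for example, in a test VM:

```nix
{
  # 256 KiB 9p packets; the value is illustrative — larger tends to be faster
  # at the cost of RAM.
  virtualisation.msize = 262144;
}
```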

View file

@ -73,6 +73,7 @@ in
containers-imperative = handleTest ./containers-imperative.nix {}; containers-imperative = handleTest ./containers-imperative.nix {};
containers-ip = handleTest ./containers-ip.nix {}; containers-ip = handleTest ./containers-ip.nix {};
containers-macvlans = handleTest ./containers-macvlans.nix {}; containers-macvlans = handleTest ./containers-macvlans.nix {};
containers-names = handleTest ./containers-names.nix {};
containers-physical_interfaces = handleTest ./containers-physical_interfaces.nix {}; containers-physical_interfaces = handleTest ./containers-physical_interfaces.nix {};
containers-portforward = handleTest ./containers-portforward.nix {}; containers-portforward = handleTest ./containers-portforward.nix {};
containers-reloadable = handleTest ./containers-reloadable.nix {}; containers-reloadable = handleTest ./containers-reloadable.nix {};
@ -196,6 +197,7 @@ in
keymap = handleTest ./keymap.nix {}; keymap = handleTest ./keymap.nix {};
knot = handleTest ./knot.nix {}; knot = handleTest ./knot.nix {};
krb5 = discoverTests (import ./krb5 {}); krb5 = discoverTests (import ./krb5 {});
ksm = handleTest ./ksm.nix {};
kubernetes.dns = handleTestOn ["x86_64-linux"] ./kubernetes/dns.nix {}; kubernetes.dns = handleTestOn ["x86_64-linux"] ./kubernetes/dns.nix {};
# kubernetes.e2e should eventually replace kubernetes.rbac when it works # kubernetes.e2e should eventually replace kubernetes.rbac when it works
#kubernetes.e2e = handleTestOn ["x86_64-linux"] ./kubernetes/e2e.nix {}; #kubernetes.e2e = handleTestOn ["x86_64-linux"] ./kubernetes/e2e.nix {};
@ -238,6 +240,7 @@ in
mosquitto = handleTest ./mosquitto.nix {}; mosquitto = handleTest ./mosquitto.nix {};
mpd = handleTest ./mpd.nix {}; mpd = handleTest ./mpd.nix {};
mumble = handleTest ./mumble.nix {}; mumble = handleTest ./mumble.nix {};
musescore = handleTest ./musescore.nix {};
munin = handleTest ./munin.nix {}; munin = handleTest ./munin.nix {};
mutableUsers = handleTest ./mutable-users.nix {}; mutableUsers = handleTest ./mutable-users.nix {};
mxisd = handleTest ./mxisd.nix {}; mxisd = handleTest ./mxisd.nix {};
@ -304,9 +307,13 @@ in
pgjwt = handleTest ./pgjwt.nix {}; pgjwt = handleTest ./pgjwt.nix {};
pgmanage = handleTest ./pgmanage.nix {}; pgmanage = handleTest ./pgmanage.nix {};
php = handleTest ./php {}; php = handleTest ./php {};
php73 = handleTest ./php { php = pkgs.php73; };
php74 = handleTest ./php { php = pkgs.php74; };
php80 = handleTest ./php { php = pkgs.php80; };
pinnwand = handleTest ./pinnwand.nix {}; pinnwand = handleTest ./pinnwand.nix {};
plasma5 = handleTest ./plasma5.nix {}; plasma5 = handleTest ./plasma5.nix {};
pleroma = handleTestOn [ "x86_64-linux" "aarch64-linux" ] ./pleroma.nix {}; pleroma = handleTestOn [ "x86_64-linux" "aarch64-linux" ] ./pleroma.nix {};
plikd = handleTest ./plikd.nix {};
plotinus = handleTest ./plotinus.nix {}; plotinus = handleTest ./plotinus.nix {};
podman = handleTestOn ["x86_64-linux"] ./podman.nix {}; podman = handleTestOn ["x86_64-linux"] ./podman.nix {};
postfix = handleTest ./postfix.nix {}; postfix = handleTest ./postfix.nix {};

View file

@ -1,5 +1,3 @@
# Test for NixOS' container support.
let let
hostIp = "192.168.0.1"; hostIp = "192.168.0.1";
containerIp = "192.168.0.100/24"; containerIp = "192.168.0.100/24";
@ -7,10 +5,10 @@ let
containerIp6 = "fc00::2/7"; containerIp6 = "fc00::2/7";
in in
import ./make-test-python.nix ({ pkgs, ...} : { import ./make-test-python.nix ({ pkgs, lib, ... }: {
name = "containers-bridge"; name = "containers-bridge";
meta = with pkgs.lib.maintainers; { meta = {
maintainers = [ aristid aszlig eelco kampfschlaefer ]; maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ];
}; };
machine = machine =

View file

@ -8,8 +8,8 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : let
in { in {
name = "containers-custom-pkgs"; name = "containers-custom-pkgs";
meta = with lib.maintainers; { meta = {
maintainers = [ adisbladis earvstedt ]; maintainers = with lib.maintainers; [ adisbladis earvstedt ];
}; };
machine = { config, ... }: { machine = { config, ... }: {

View file

@ -1,7 +1,8 @@
# Test for NixOS' container support. import ./make-test-python.nix ({ pkgs, lib, ... }: {
import ./make-test-python.nix ({ pkgs, ...} : {
name = "containers-ephemeral"; name = "containers-ephemeral";
meta = {
maintainers = with lib.maintainers; [ patryk27 ];
};
machine = { pkgs, ... }: { machine = { pkgs, ... }: {
virtualisation.memorySize = 768; virtualisation.memorySize = 768;

View file

@ -1,9 +1,7 @@
# Test for NixOS' container support. import ./make-test-python.nix ({ pkgs, lib, ... }: {
import ./make-test-python.nix ({ pkgs, ...} : {
name = "containers-extra_veth"; name = "containers-extra_veth";
meta = with pkgs.lib.maintainers; { meta = {
maintainers = [ kampfschlaefer ]; maintainers = with lib.maintainers; [ kampfschlaefer ];
}; };
machine = machine =

View file

@ -1,9 +1,7 @@
# Test for NixOS' container support. import ./make-test-python.nix ({ pkgs, lib, ... }: {
import ./make-test-python.nix ({ pkgs, ...} : {
name = "containers-hosts"; name = "containers-hosts";
meta = with pkgs.lib.maintainers; { meta = {
maintainers = [ montag451 ]; maintainers = with lib.maintainers; [ montag451 ];
}; };
machine = machine =

View file

@ -1,9 +1,7 @@
# Test for NixOS' container support. import ./make-test-python.nix ({ pkgs, lib, ... }: {
import ./make-test-python.nix ({ pkgs, ...} : {
name = "containers-imperative"; name = "containers-imperative";
meta = with pkgs.lib.maintainers; { meta = {
maintainers = [ aristid aszlig eelco kampfschlaefer ]; maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ];
}; };
machine = machine =

View file

@ -1,5 +1,3 @@
# Test for NixOS' container support.
let let
webserverFor = hostAddress: localAddress: { webserverFor = hostAddress: localAddress: {
inherit hostAddress localAddress; inherit hostAddress localAddress;
@ -13,10 +11,10 @@ let
}; };
}; };
in import ./make-test-python.nix ({ pkgs, ...} : { in import ./make-test-python.nix ({ pkgs, lib, ... }: {
name = "containers-ipv4-ipv6"; name = "containers-ipv4-ipv6";
meta = with pkgs.lib.maintainers; { meta = {
maintainers = [ aristid aszlig eelco kampfschlaefer ]; maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ];
}; };
machine = machine =

View file

@ -1,15 +1,13 @@
# Test for NixOS' container support.
let let
# containers IP on VLAN 1 # containers IP on VLAN 1
containerIp1 = "192.168.1.253"; containerIp1 = "192.168.1.253";
containerIp2 = "192.168.1.254"; containerIp2 = "192.168.1.254";
in in
import ./make-test-python.nix ({ pkgs, ...} : { import ./make-test-python.nix ({ pkgs, lib, ... }: {
name = "containers-macvlans"; name = "containers-macvlans";
meta = with pkgs.lib.maintainers; { meta = {
maintainers = [ montag451 ]; maintainers = with lib.maintainers; [ montag451 ];
}; };
nodes = { nodes = {

View file

@@ -0,0 +1,37 @@
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
+  name = "containers-names";
+
+  meta = {
+    maintainers = with lib.maintainers; [ patryk27 ];
+  };
+
+  machine = { ... }: {
+    # We're using the newest kernel, so that we can test containers with long names.
+    # Please see https://github.com/NixOS/nixpkgs/issues/38509 for details.
+    boot.kernelPackages = pkgs.linuxPackages_latest;
+
+    containers = let
+      container = subnet: {
+        autoStart = true;
+        privateNetwork = true;
+        hostAddress = "192.168.${subnet}.1";
+        localAddress = "192.168.${subnet}.2";
+        config = { };
+      };
+    in {
+      first = container "1";
+      second = container "2";
+      really-long-name = container "3";
+      really-long-long-name-2 = container "4";
+    };
+  };
+
+  testScript = ''
+    machine.wait_for_unit("default.target")
+
+    machine.succeed("ip link show | grep ve-first")
+    machine.succeed("ip link show | grep ve-second")
+    machine.succeed("ip link show | grep ve-really-lFYWO")
+    machine.succeed("ip link show | grep ve-really-l3QgY")
+  '';
+})
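
The last two assertions expect truncated names (`ve-really-lFYWO`, `ve-really-l3QgY`) rather than the full container names: Linux limits network interface names to 15 characters (IFNAMSIZ), so NixOS shortens long container names and appends a short hash to keep the two `really-long…` interfaces distinguishable. A rough sketch of the idea, purely illustrative — the suffixes above come from NixOS's own naming code, whose hash encoding clearly differs from the hex used here:

```nix
# Illustrative only: keep a prefix of the container name and append a
# short hash so two long names still map to distinct 15-character
# interface names. builtins.hashString returns hex, so this snippet
# will not reproduce the FYWO/3QgY suffixes from the test above.
let
  vethName = name:
    if builtins.stringLength name <= 12
    then "ve-" + name
    else "ve-" + builtins.substring 0 8 name
               + builtins.substring 0 4 (builtins.hashString "sha256" name);
in vethName "really-long-long-name-2"   # 15 chars: "ve-really-l" + hash
```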

View file

@@ -1,8 +1,7 @@
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-physical_interfaces";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ kampfschlaefer ];
   };
   nodes = {

View file

@@ -1,5 +1,3 @@
-# Test for NixOS' container support.
-
 let
   hostIp = "192.168.0.1";
   hostPort = 10080;
@@ -7,10 +5,10 @@
   containerPort = 80;
 in
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-portforward";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ aristid aszlig eelco kampfschlaefer ianwookim ];
+  meta = {
+    maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ianwookim ];
   };
   machine =

View file

@@ -1,7 +1,6 @@
 import ./make-test-python.nix ({ pkgs, lib, ... }:
-
 let
   client_base = {
     containers.test1 = {
       autoStart = true;
       config = {
@@ -16,8 +15,8 @@ let
   };
 in {
   name = "containers-reloadable";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ danbst ];
+  meta = {
+    maintainers = with lib.maintainers; [ danbst ];
   };
   nodes = {

View file

@@ -1,5 +1,3 @@
-# Test for NixOS' container support.
-
 let
   client_base = {
     networking.firewall.enable = false;
@@ -16,11 +14,11 @@
       };
     };
   };
-in import ./make-test-python.nix ({ pkgs, ...} :
+in import ./make-test-python.nix ({ pkgs, lib, ... }:
 {
   name = "containers-restart_networking";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ kampfschlaefer ];
   };
   nodes = {

View file

@@ -1,9 +1,7 @@
-# Test for NixOS' container support.
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-tmpfs";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ ];
+  meta = {
+    maintainers = with lib.maintainers; [ patryk27 ];
   };
   machine =

View file

@@ -11,6 +11,8 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : with lib; {
   nodes = {
     gitlab = { ... }: {
+      imports = [ common/user-account.nix ];
+
       virtualisation.memorySize = if pkgs.stdenv.is64bit then 4096 else 2047;
       systemd.services.gitlab.serviceConfig.Restart = mkForce "no";
       systemd.services.gitlab-workhorse.serviceConfig.Restart = mkForce "no";
@@ -27,11 +29,31 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : with lib; {
        };
      };
+      services.dovecot2 = {
+        enable = true;
+        enableImap = true;
+      };
+
       services.gitlab = {
         enable = true;
         databasePasswordFile = pkgs.writeText "dbPassword" "xo0daiF4";
         initialRootPasswordFile = pkgs.writeText "rootPassword" initialRootPassword;
         smtp.enable = true;
+        extraConfig = {
+          incoming_email = {
+            enabled = true;
+            mailbox = "inbox";
+            address = "alice@localhost";
+            user = "alice";
+            password = "foobar";
+            host = "localhost";
+            port = 143;
+          };
+          pages = {
+            enabled = true;
+            host = "localhost";
+          };
+        };
         secrets = {
           secretFile = pkgs.writeText "secret" "r8X9keSKynU7p4aKlh4GO1Bo77g5a7vj";
           otpFile = pkgs.writeText "otpsecret" "Zu5hGx3YvQx40DvI8WoZJQpX2paSDOlG";
@@ -64,12 +86,16 @@
   in
   ''
     gitlab.start()
    gitlab.wait_for_unit("gitaly.service")
    gitlab.wait_for_unit("gitlab-workhorse.service")
+    gitlab.wait_for_unit("gitlab-pages.service")
+    gitlab.wait_for_unit("gitlab-mailroom.service")
    gitlab.wait_for_unit("gitlab.service")
    gitlab.wait_for_unit("gitlab-sidekiq.service")
    gitlab.wait_for_file("/var/gitlab/state/tmp/sockets/gitlab.socket")
    gitlab.wait_until_succeeds("curl -sSf http://gitlab/users/sign_in")
    gitlab.succeed(
        "curl -isSf http://gitlab | grep -i location | grep -q http://gitlab/users/sign_in"
    )
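
The added `incoming_email` settings rely on the rest of this change: `common/user-account.nix` supplies the shared `alice` test account (whose password, `foobar`, is what the mailroom settings above use), and the new Dovecot service serves her inbox over IMAP on port 143, which `gitlab-mailroom` then polls. Condensed from the hunks above, the pairing is:

```nix
# An IMAP inbox for gitlab-mailroom to watch, condensed from this diff.
{
  imports = [ common/user-account.nix ];  # defines the "alice" test user
  services.dovecot2 = {
    enable = true;
    enableImap = true;  # imap://localhost:143, matching incoming_email
  };
}
```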

View file

@@ -24,6 +24,8 @@ in {
     services.home-assistant = {
       inherit configDir;
       enable = true;
+      # includes the package with all tests enabled
+      package = pkgs.home-assistant;
       config = {
         homeassistant = {
           name = "Home";

Some files were not shown because too many files have changed in this diff.