Project import generated by Copybara.
GitOrigin-RevId: 29b0d4d0b600f8f5dd0b86e3362a33d4181938f9
Parent: f9be99903a
Commit: 75ca762b89
2152 changed files with 41155 additions and 20654 deletions
third_party/nixpkgs/doc/Makefile (vendored): 2 changes
@@ -1,4 +1,4 @@
-MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$')))
+MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$' -not -name README.md)))
 
 .PHONY: all
 all: validate format out/html/index.html out/epub/manual.epub
third_party/nixpkgs/doc/README.md (vendored, new file): 12 additions
@@ -0,0 +1,12 @@
# Nixpkgs/doc

This directory houses the source files for the Nixpkgs manual.

You can find the [rendered documentation for Nixpkgs `unstable` on nixos.org](https://nixos.org/manual/nixpkgs/unstable/).

[Docs for Nixpkgs stable](https://nixos.org/manual/nixpkgs/stable/) are also available.

If you want to contribute to the documentation, [here's how to do it](https://nixos.org/manual/nixpkgs/unstable/#chap-contributing).

If you're only getting started with Nix, go to [nixos.org/learn](https://nixos.org/learn).
third_party/nixpkgs/doc/builders/images.xml (vendored): 2 changes
@@ -6,7 +6,7 @@
   This chapter describes tools for creating various types of images.
 </para>
 <xi:include href="images/appimagetools.xml" />
-<xi:include href="images/dockertools.xml" />
+<xi:include href="images/dockertools.section.xml" />
 <xi:include href="images/ocitools.xml" />
 <xi:include href="images/snaptools.xml" />
 </chapter>
third_party/nixpkgs/doc/builders/images/dockertools.section.md (vendored, new file): 298 additions
@@ -0,0 +1,298 @@
# pkgs.dockerTools {#sec-pkgs-dockerTools}

`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.

## buildImage {#ssec-pkgs-dockerTools-buildImage}

This function is analogous to the `docker build` command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with `docker load`.

The parameters of `buildImage`, with example values, are described below:

[]{#ex-dockerTools-buildImage}
[]{#ex-dockerTools-buildImage-runAsRoot}

```nix
buildImage {
  name = "redis";
  tag = "latest";

  fromImage = someBaseImage;
  fromImageName = null;
  fromImageTag = "latest";

  contents = pkgs.redis;
  runAsRoot = ''
    #!${pkgs.runtimeShell}
    mkdir -p /data
  '';

  config = {
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
    Volumes = { "/data" = { }; };
  };
}
```

The above example will build a Docker image `redis/latest` from the given base image. Loading and running this image in Docker results in `redis-server` being started automatically.

- `name` specifies the name of the resulting image. This is the only required argument for `buildImage`.

- `tag` specifies the tag of the resulting image. By default it's `null`, which indicates that the Nix output hash will be used as the tag.

- `fromImage` is the repository tarball containing the base image. It must be a valid Docker image, such as one exported by `docker save`. By default it's `null`, which can be seen as equivalent to `FROM scratch` in a `Dockerfile`.

- `fromImageName` can be used to further specify the base image within the repository, in case it contains multiple images. By default it's `null`, in which case `buildImage` will pick the first image available in the repository.

- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will pick the first tag available for the base image.

- `contents` is a derivation that will be copied into the new layer of the resulting image. This can be seen as analogous to `ADD contents/ /` in a `Dockerfile`. By default it's `null`.

- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This can be seen as analogous to `RUN ...` in a `Dockerfile`.

  > **_NOTE:_** Using this parameter requires the `kvm` device to be available.

- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).

After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied into the layer itself. Only new dependencies that are not already in the existing layers will be copied.

At the end of the process, only one new single layer will be produced and added to the resulting image.

The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.

It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.

> **_NOTE:_** If you see errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)` you may need to add `pkgs.iana-etc` to `contents`.

> **_NOTE:_** If you see errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)` you may need to add `pkgs.cacert` to `contents`.

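Since `buildArgs` gets only one sentence above, a short illustrative sketch may help; the binding `redisImage` is hypothetical, not part of the original text:

```nix
# Sketch only: bind the buildImage call from the example above to a name.
let
  redisImage = pkgs.dockerTools.buildImage {
    name = "redis";
    tag = "latest";
    contents = pkgs.redis;
    config.Cmd = [ "/bin/redis-server" ];
  };
in
  # buildArgs echoes back the arguments the image was built with;
  # this expression evaluates to "redis".
  redisImage.buildArgs.name
```
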
By default `buildImage` will use a static date of one second past the UNIX epoch. This allows `buildImage` to produce binary-reproducible images. When listing images with `docker images`, the newly created images will be listed like this:

```ShellSession
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
hello        latest   08c791c7846e   48 years ago   25.2MB
```

You can break binary reproducibility but get a sorted, meaningful `CREATED` column by setting `created` to `now`.

```nix
pkgs.dockerTools.buildImage {
  name = "hello";
  tag = "latest";
  created = "now";
  contents = pkgs.hello;

  config.Cmd = [ "/bin/hello" ];
}
```

Now the Docker CLI will display a reasonable date and sort the images as expected:

```ShellSession
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
hello        latest   de2bf4786de6   About a minute ago   25.2MB
```

However, the produced images will not be binary reproducible.

## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}

Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use `streamLayeredImage` instead, which this function uses internally.

`name`

: The name of the resulting image.

`tag` _optional_

: Tag of the generated image.

  *Default:* the output path's hash

`contents` _optional_

: Top-level paths in the container. Either a single derivation, or a list of derivations.

  *Default:* `[]`

`config` _optional_

: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).

  *Default:* `{}`

`created` _optional_

: Date and time the layers were created. Follows the same `now` exception supported by `buildImage`.

  *Default:* `1970-01-01T00:00:01Z`

`maxLayers` _optional_

: Maximum number of layers to create.

  *Default:* `100`

  *Maximum:* `125`

`extraCommands` _optional_

: Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so they can create additional directories and files.

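As a hedged sketch, the parameters above can be combined like this (all values are illustrative, not defaults):

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";                      # required
  tag = "latest";                              # optional; defaults to the output path's hash
  contents = [ pkgs.hello ];                   # optional; defaults to []
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];  # optional; defaults to {}
  created = "now";                             # optional; "now" breaks reproducibility, as with buildImage
  maxLayers = 110;                             # optional; default 100, maximum 125
  extraCommands = ''
    # Runs while building the final layer; paths are relative to the image root.
    mkdir -p tmp
  '';
}
```
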
### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}

Each path directly listed in `contents` will have a symlink in the root of the image.

For example:

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  contents = [ pkgs.hello ];
}
```

will create symlinks for all the paths in the `hello` package:

```ShellSession
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
```

### Automatic inclusion of `config` references {#dockerTools-buildLayeredImage-arg-config}

The closure of `config` is automatically included in the closure of the final image.

This allows you to make very simple Docker images with very little code. This container will start up and run `hello`:

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

### Adjusting `maxLayers` {#dockerTools-buildLayeredImage-arg-maxLayers}

Increasing `maxLayers` increases the number of layers which have a chance to be shared between different images.

Modern Docker installations support up to 128 layers; however, older versions support as few as 42.

If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`; however, it will then be impossible to extend the image further.

The first `maxLayers - 2` most "popular" paths will each get their own individual layer, then layer number `maxLayers - 1` will contain all the remaining "unpopular" paths, and finally layer number `maxLayers` will contain the image configuration.

Docker's layers are not inherently ordered; they are content-addressable and are not explicitly ordered until they are composed into an image.

## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage}

Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are the same as for `buildLayeredImage`. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.

The image produced by running the output script can be piped directly into `docker load`, to load it into the local Docker daemon:

```ShellSession
$(nix-build) | docker load
```

Alternatively, the image can be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:

```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
```

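For comparison with `buildLayeredImage`, a minimal sketch of a streaming image; it takes the same arguments, but the build result is the streaming script rather than a tarball in the store:

```nix
pkgs.dockerTools.streamLayeredImage {
  name = "hello";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```
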
## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}

This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.

Its parameters are described in the example below:

```nix
pullImage {
  imageName = "nixos/nix";
  imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
  finalImageName = "nix";
  finalImageTag = "1.11";
  sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  os = "linux";
  arch = "x86_64";
}
```

- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.

- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.

- `finalImageName`, if specified, is the name of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.

- `finalImageTag`, if specified, is the tag of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's `latest`.

- `sha256` is the checksum of the whole fetched image. This argument is required.

- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.

- `arch`, if specified, is the CPU architecture of the fetched image. By default it's `x86_64`.

The `nix-prefetch-docker` command can be used to get the required image parameters:

```ShellSession
$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
```

Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```

The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
```

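As a sketch of how `pullImage` composes with the builders above, its result can serve as the `fromImage` base of a `buildImage` call (the binding `nixFromDockerHub` and the derived image name are illustrative):

```nix
let
  # Hypothetical binding of the pullImage example shown earlier.
  nixFromDockerHub = pkgs.dockerTools.pullImage {
    imageName = "nixos/nix";
    imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
    sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  };
in pkgs.dockerTools.buildImage {
  name = "nix-plus-hello";  # illustrative name
  fromImage = nixFromDockerHub;
  contents = pkgs.hello;
}
```
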
## exportImage {#ssec-pkgs-dockerTools-exportImage}

This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. The result is the merge of all the layers of the image. As such, it is suitable for being imported in Docker with `docker import`.

> **_NOTE:_** Using this function requires the `kvm` device to be available.

The parameters of `exportImage` are the following:

```nix
exportImage {
  fromImage = someLayeredImage;
  fromImageName = null;
  fromImageTag = null;

  name = someLayeredImage.name;
}
```

The parameters relative to the base image have the same meaning as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.

The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.

## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}

This constant string is a helper for setting up the base files for managing users and groups, but only if such files don't exist already. It is suitable for use in a [`buildImage` `runAsRoot`](#ex-dockerTools-buildImage-runAsRoot) script, as in the example below:

```nix
buildImage {
  name = "shadow-basic";

  runAsRoot = ''
    #!${pkgs.runtimeShell}
    ${shadowSetup}
    groupadd -r redis
    useradd -r -g redis redis
    mkdir /data
    chown redis:redis /data
  '';
}
```

Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.

@@ -1,499 +0,0 @@
(Deleted: the chapter's old DocBook source, `doc/builders/images/dockertools.xml`, 499 lines. Its prose is reproduced verbatim by the new `dockertools.section.md` above; only the markup differs.)

@@ -17,9 +17,11 @@

 <section xml:id="sec-citrix-selfservice">
  <title>Citrix Selfservice</title>

  <para>
   The <link xlink:href="https://support.citrix.com/article/CTX200337">selfservice</link> is an application for managing Citrix desktops and applications. Please note that this feature only works with <package>citrix_workspace_20_06_0</package> and later versions.
  </para>

  <para>
   In order to set this up, you first have to <link xlink:href="https://its.uiowa.edu/support/article/102186">download the <literal>.cr</literal> file from the Netscaler Gateway</link>. After that you can configure the <command>selfservice</command> like this:
  <screen>
@@ -36,7 +36,7 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t
 ;; load some packages

 (use-package company
-  :bind ("<C-tab>" . company-complete)
+  :bind ("<C-tab>" . company-complete)
   :diminish company-mode
   :commands (company-mode global-company-mode)
   :defer 1
@@ -180,17 +180,12 @@ args.stdenv.mkDerivation (args // {
 </listitem>
 <listitem>
  <para>
-   Arguments should be listed in the order they are used, with the
-   exception of <varname>lib</varname>, which always goes first.
+   Arguments should be listed in the order they are used, with the exception of <varname>lib</varname>, which always goes first.
  </para>
 </listitem>
 <listitem>
  <para>
-   Prefer using the top-level <varname>lib</varname> over its alias
-   <literal>stdenv.lib</literal>. <varname>lib</varname> is unrelated to
-   <varname>stdenv</varname>, and so <literal>stdenv.lib</literal> should only
-   be used as a convenience alias when developing to avoid having to modify
-   the function inputs just to test something out.
+   Prefer using the top-level <varname>lib</varname> over its alias <literal>stdenv.lib</literal>. <varname>lib</varname> is unrelated to <varname>stdenv</varname>, and so <literal>stdenv.lib</literal> should only be used as a convenience alias when developing to avoid having to modify the function inputs just to test something out.
  </para>
 </listitem>
 </itemizedlist>
@@ -689,8 +684,7 @@ args.stdenv.mkDerivation (args // {
 </varlistentry>
 <varlistentry>
  <term>
-   If it’s a <emphasis>theme</emphasis> for a <emphasis>desktop environment</emphasis>,
-   a <emphasis>window manager</emphasis> or a <emphasis>display manager</emphasis>:
+   If it’s a <emphasis>theme</emphasis> for a <emphasis>desktop environment</emphasis>, a <emphasis>window manager</emphasis> or a <emphasis>display manager</emphasis>:
  </term>
 <listitem>
  <para>
@@ -1677,8 +1677,7 @@ recursiveUpdate
 <xi:include href="./locations.xml" xpointer="lib.attrsets.recurseIntoAttrs" />

 <para>
-  Make various Nix tools consider the contents of the resulting
-  attribute set when looking for what to build, find, etc.
+  Make various Nix tools consider the contents of the resulting attribute set when looking for what to build, find, etc.
 </para>

 <para>
|
|||
]]></programlisting>
|
||||
</example>
|
||||
</section>
|
||||
|
||||
</section>
|
||||
|
|
|
@@ -611,7 +611,7 @@ Using the example above, the analogous pytestCheckHook usage would be:
   "update"
 ];

-disabledTestFiles = [
+disabledTestPaths = [
   "tests/test_failing.py"
 ];
@@ -1188,7 +1188,8 @@ community to help save time. No tool is preferred at the moment.
   expressions for your Python project. Note that [sharing derivations from
   pypi2nix with nixpkgs is possible but not
   encouraged](https://github.com/nix-community/pypi2nix/issues/222#issuecomment-443497376).
 - [python2nix](https://github.com/proger/python2nix) by Vladimir Kirillov.
 - [nixpkgs-pytools](https://github.com/nix-community/nixpkgs-pytools)
+- [poetry2nix](https://github.com/nix-community/poetry2nix)

 ### Deterministic builds
@@ -1554,9 +1555,9 @@ Following rules are desired to be respected:

 * Python libraries are called from `python-packages.nix` and packaged with
   `buildPythonPackage`. The expression of a library should be in
-  `pkgs/development/python-modules/<name>/default.nix`. Libraries in
-  `pkgs/top-level/python-packages.nix` are sorted quasi-alphabetically to avoid
-  merge conflicts.
+  `pkgs/development/python-modules/<name>/default.nix`.
+* Libraries in `pkgs/top-level/python-packages.nix` are sorted
+  alphanumerically to avoid merge conflicts and ease locating attributes.
 * Python applications live outside of `python-packages.nix` and are packaged
   with `buildPythonApplication`.
 * Make sure libraries build for all Python interpreters.
@@ -1570,3 +1571,4 @@ Following rules are desired to be respected:
   [PEP 0503](https://www.python.org/dev/peps/pep-0503/#normalized-names). This
   means that characters should be converted to lowercase and `.` and `_` should
   be replaced by a single `-` (foo-bar-baz instead of Foo__Bar.baz)
+* Attribute names in `python-packages.nix` should be sorted alphanumerically.
@@ -121,7 +121,7 @@ Use the `meta.broken` attribute to disable the package for unsupported Qt versio

 stdenv.mkDerivation {
   # ...
-  # Disable this library with Qt < 5.9.0
+  # Disable this library with Qt < 5.9.0
   meta.broken = lib.versionOlder qtbase.version "5.9.0";
 }
 ```
@@ -223,7 +223,7 @@ sometimes it may be necessary to disable this so the tests run consecutively.
 ```nix
 rustPlatform.buildRustPackage {
   /* ... */
-  cargoParallelTestThreads = false;
+  dontUseCargoParallelTests = true;
 }
 ```

@@ -264,6 +264,198 @@ rustPlatform.buildRustPackage rec {
 }
 ```

## Compiling non-Rust packages that include Rust code

Several non-Rust packages incorporate Rust code for performance- or
security-sensitive parts. `rustPlatform` exposes several functions and
hooks that can be used to integrate Cargo in non-Rust packages.

### Vendoring of dependencies

Since network access is not allowed in sandboxed builds, Rust crate
dependencies need to be retrieved using a fetcher. `rustPlatform`
provides the `fetchCargoTarball` fetcher, which vendors all
dependencies of a crate. For example, given a source path `src`
containing `Cargo.toml` and `Cargo.lock`, `fetchCargoTarball`
can be used as follows:

```nix
cargoDeps = rustPlatform.fetchCargoTarball {
  inherit src;
  hash = "sha256-BoHIN/519Top1NUBjpB/oEMqi86Omt3zTQcXFWqrek0=";
};
```

The `src` attribute is required, as well as a hash specified through
one of the `sha256` or `hash` attributes. The following optional
attributes can also be used:

* `name`: the name that is used for the dependencies tarball. If
  `name` is not specified, then the name `cargo-deps` will be used.
* `sourceRoot`: when the `Cargo.lock`/`Cargo.toml` are in a
  subdirectory, `sourceRoot` specifies the relative path to these
  files.
* `patches`: patches to apply before vendoring. This is useful when
  the `Cargo.lock`/`Cargo.toml` files need to be patched before
  vendoring.

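For illustration, a short sketch exercising these optional attributes (the name, subdirectory, patch file, and hash are placeholders):

```nix
cargoDeps = rustPlatform.fetchCargoTarball {
  inherit src;
  name = "myproject-cargo-deps";         # would otherwise default to "cargo-deps"
  sourceRoot = "source/rust";            # Cargo.toml/Cargo.lock live in a subdirectory
  patches = [ ./pin-cargo-lock.patch ];  # hypothetical patch applied before vendoring
  hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder
};
```
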
### Hooks

`rustPlatform` provides the following hooks to automate Cargo builds:

* `cargoSetupHook`: configure Cargo to use dependencies vendored
  through `fetchCargoTarball`. This hook uses the `cargoDeps`
  environment variable to find the vendored dependencies. If a project
  already vendors its dependencies, the variable `cargoVendorDir` can
  be used instead. When the `Cargo.toml`/`Cargo.lock` files are not in
  `sourceRoot`, then the optional `cargoRoot` is used to specify the
  Cargo root directory relative to `sourceRoot`.
* `cargoBuildHook`: use Cargo to build a crate. If the crate to be
  built is a crate in e.g. a Cargo workspace, the relative path to the
  crate to build can be set through the optional `buildAndTestSubdir`
  environment variable. Additional Cargo build flags can be passed
  through `cargoBuildFlags`.
* `maturinBuildHook`: use [Maturin](https://github.com/PyO3/maturin)
  to build a Python wheel. Similar to `cargoBuildHook`, the optional
  variable `buildAndTestSubdir` can be used to build a crate in a
  Cargo workspace. Additional Maturin flags can be passed through
  `maturinBuildFlags`.
* `cargoCheckHook`: run tests using Cargo. Additional flags can be
  passed to Cargo using `checkFlags` and `checkFlagsArray`. By
  default, tests are run in parallel. This can be disabled by setting
  `dontUseCargoParallelTests`.
* `cargoInstallHook`: install binaries and static/shared libraries
  that were built using `cargoBuildHook`.

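A hedged sketch of how these hooks compose in a non-Rust derivation; the package, source, subdirectory, and hash are placeholders, not a real package:

```nix
stdenv.mkDerivation rec {
  pname = "c-tool-with-rust-core";  # hypothetical package
  version = "0.1.0";
  src = ./.;                        # placeholder source

  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    name = "${pname}-${version}";
    hash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";  # placeholder
  };

  cargoRoot = "rust";  # assume Cargo.toml/Cargo.lock live in rust/

  nativeBuildInputs = [
    rustPlatform.cargoSetupHook    # points Cargo at the vendored dependencies
    rustPlatform.cargoBuildHook    # runs the Cargo build
    rustPlatform.cargoInstallHook  # installs the produced binaries/libraries
    rustPlatform.rust.cargo
    rustPlatform.rust.rustc
  ];
}
```
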
### Examples

#### Python package using `setuptools-rust`

For Python packages using `setuptools-rust`, you can use
`fetchCargoTarball` and `cargoSetupHook` to retrieve and set up Cargo
dependencies. The build itself is then performed by
`buildPythonPackage`.

The following example outlines how the `tokenizers` Python package is
built. Since the Python package is in the `source/bindings/python`
directory of the *tokenizers* project's source archive, we use
`sourceRoot` to point the tooling to this directory:

```nix
{ fetchFromGitHub
, buildPythonPackage
, rustPlatform
, setuptools-rust
}:

buildPythonPackage rec {
  pname = "tokenizers";
  version = "0.10.0";

  src = fetchFromGitHub {
    owner = "huggingface";
    repo = pname;
    rev = "python-v${version}";
    hash = "sha256-rQ2hRV52naEf6PvRsWVCTN7B1oXAQGmnpJw4iIdhamw=";
  };

  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src sourceRoot;
    name = "${pname}-${version}";
    hash = "sha256-BoHIN/519Top1NUBjpB/oEMqi86Omt3zTQcXFWqrek0=";
  };

  sourceRoot = "source/bindings/python";

  nativeBuildInputs = [ setuptools-rust ] ++ (with rustPlatform; [
    cargoSetupHook
    rust.cargo
    rust.rustc
  ]);

  # ...
}
```

In some projects, the Rust crate is not in the main Python source
directory. In such cases, the `cargoRoot` attribute can be used to
specify the crate's directory relative to `sourceRoot`. In the
following example, the crate is in `src/rust`, as specified in the
`cargoRoot` attribute. Note that we also need to specify the correct
path for `fetchCargoTarball`.

```nix
{ buildPythonPackage
, fetchPypi
, rustPlatform
, setuptools-rust
, openssl
}:

buildPythonPackage rec {
  pname = "cryptography";
  version = "3.4.2"; # Also update the hash in vectors.nix

  src = fetchPypi {
    inherit pname version;
    sha256 = "1i1mx5y9hkyfi9jrrkcw804hmkcglxi6rmf7vin7jfnbr2bf4q64";
  };

  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    sourceRoot = "${pname}-${version}/${cargoRoot}";
    name = "${pname}-${version}";
    hash = "sha256-PS562W4L1NimqDV2H0jl5vYhL08H9est/pbIxSdYVfo=";
  };

  cargoRoot = "src/rust";

  # ...
}
```

#### Python package using `maturin`

Python packages that use [Maturin](https://github.com/PyO3/maturin)
can be built with `fetchCargoTarball`, `cargoSetupHook`, and
`maturinBuildHook`. For example, the following (partial) derivation
builds the `retworkx` Python package. `fetchCargoTarball` and
`cargoSetupHook` are used to fetch and set up the crate dependencies.
`maturinBuildHook` is used to perform the build.

```nix
{ lib
, buildPythonPackage
, rustPlatform
, fetchFromGitHub
}:

buildPythonPackage rec {
  pname = "retworkx";
  version = "0.6.0";

  src = fetchFromGitHub {
    owner = "Qiskit";
    repo = "retworkx";
    rev = version;
    sha256 = "11n30ldg3y3y6qxg3hbj837pnbwjkqw3nxq6frds647mmmprrd20";
  };

  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    name = "${pname}-${version}";
    hash = "sha256-heOBK8qi2nuc/Ib+I/vLzZ1fUUD/G/KTw9d7M4Hz5O0=";
  };

  format = "pyproject";

  nativeBuildInputs = with rustPlatform; [ cargoSetupHook maturinBuildHook ];

  # ...
}
```

## Compiling Rust crates using Nix instead of Cargo

### Simple operation

@@ -26,7 +26,6 @@
 <para>
  A number of attributes can be used to work with a derivation with multiple outputs. The attribute <varname>outputs</varname> is a list of strings, which are the names of the outputs. For each of these names, an identically named attribute is created, corresponding to that output. The attribute <varname>meta.outputsToInstall</varname> is used to determine the default set of outputs to install when using the derivation name unqualified.
 </para>
-
 </section>
 <section xml:id="sec-multiple-outputs-installing">
 <title>Installing a split package</title>
@@ -154,7 +153,7 @@
 </term>
 <listitem>
  <para>
-  is for development-only files. These include C(++) headers, pkg-config, cmake and aclocal files. They go to <varname>dev</varname> or <varname>out</varname> by default.
+  is for development-only files. These include C(++) headers (<filename>include/</filename>), pkg-config (<filename>lib/pkgconfig/</filename>), cmake (<filename>lib/cmake/</filename>) and aclocal files (<filename>share/aclocal/</filename>). They go to <varname>dev</varname> or <varname>out</varname> by default.
  </para>
 </listitem>
 </varlistentry>
@@ -164,7 +163,7 @@
 </term>
 <listitem>
  <para>
-  is meant for user-facing binaries, typically residing in bin/. They go to <varname>bin</varname> or <varname>out</varname> by default.
+  is meant for user-facing binaries, typically residing in <filename>bin/</filename>. They go to <varname>bin</varname> or <varname>out</varname> by default.
  </para>
 </listitem>
 </varlistentry>
@@ -194,7 +193,7 @@
 </term>
 <listitem>
  <para>
-  is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and devhelp books in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
+  is for <emphasis>developer</emphasis> documentation. Currently we count gtk-doc and devhelp books, typically residing in <filename>share/gtk-doc/</filename> and <filename>share/devhelp/</filename>, in there. It goes to <varname>devdoc</varname> or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
  </para>
 </listitem>
 </varlistentry>
@@ -204,7 +203,7 @@
 </term>
 <listitem>
  <para>
-  is for man pages (except for section 3). They go to <varname>man</varname> or <varname>$outputBin</varname> by default.
+  is for man pages (except for section 3), typically residing in <filename>share/man/man[0-9]/</filename>. They go to <varname>man</varname> or <varname>$outputBin</varname> by default.
  </para>
 </listitem>
 </varlistentry>
@@ -214,7 +213,7 @@
 </term>
 <listitem>
  <para>
-  is for section 3 man pages. They go to <varname>devman</varname> or <varname>$outputMan</varname> by default.
+  is for section 3 man pages, typically residing in <filename>share/man/man3/</filename>. They go to <varname>devman</varname> or <varname>$outputMan</varname> by default.
  </para>
 </listitem>
 </varlistentry>
@ -224,7 +223,7 @@
|
|||
</term>
|
||||
<listitem>
|
||||
<para>
|
||||
is for info pages. They go to <varname>info</varname> or <varname>$outputBin</varname> by default.
|
||||
is for info pages, typically residing in <filename>share/info/</filename>. They go to <varname>info</varname> or <varname>$outputBin</varname> by default.
|
||||
</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
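The hunks above document which classes of files flow to which outputs. As a rough sketch (not part of this commit, with a placeholder URL and `lib.fakeSha256` standing in for a real hash), a package opts into split outputs simply by declaring them; the fixup phases then route files as described:

```nix
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation {
  pname = "libexample";   # hypothetical package
  version = "1.0";
  src = fetchurl {
    url = "https://example.org/libexample-1.0.tar.gz";  # placeholder
    sha256 = lib.fakeSha256;                            # placeholder
  };

  # Declaring outputs is enough; headers end up in dev, binaries in
  # bin, manual pages in man, with out as the catch-all.
  outputs = [ "bin" "dev" "out" "man" ];
}
```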
third_party/nixpkgs/doc/stdenv/stdenv.xml (vendored, 8 changes)
@@ -1839,10 +1839,7 @@ addEnvHooks "$hostOffset" myBashFunction
    </term>
    <listitem>
     <para>
-     This setup hook moves any systemd user units installed in the lib
-     subdirectory into share. In addition, a link is provided from share to
-     lib for compatibility. This is needed for systemd to find user services
-     when installed into the user profile.
+     This setup hook moves any systemd user units installed in the lib subdirectory into share. In addition, a link is provided from share to lib for compatibility. This is needed for systemd to find user services when installed into the user profile.
    </para>
   </listitem>
  </varlistentry>
@@ -2022,8 +2019,7 @@ addEnvHooks "$hostOffset" myBashFunction
    This is a special setup hook which helps in packaging proprietary software in that it automatically tries to find missing shared library dependencies of ELF files based on the given <varname>buildInputs</varname> and <varname>nativeBuildInputs</varname>.
   </para>
   <para>
-    You can also specify a <varname>runtimeDependencies</varname> variable which lists dependencies to be unconditionally added to <glossterm>rpath</glossterm> of all executables.
-    This is useful for programs that use <citerefentry>
+    You can also specify a <varname>runtimeDependencies</varname> variable which lists dependencies to be unconditionally added to <glossterm>rpath</glossterm> of all executables. This is useful for programs that use <citerefentry>
    <refentrytitle>dlopen</refentrytitle>
    <manvolnum>3</manvolnum> </citerefentry> to load libraries at runtime.
   </para>
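A minimal sketch of the autoPatchelfHook usage described in that hunk (package name and tarball are hypothetical):

```nix
{ stdenv, autoPatchelfHook, curl }:

stdenv.mkDerivation {
  pname = "prebuilt-tool";       # hypothetical proprietary binary
  version = "1.0";
  src = ./prebuilt-tool.tar.gz;  # placeholder tarball

  nativeBuildInputs = [ autoPatchelfHook ];

  # Libraries only loaded via dlopen(3) are invisible to the ELF scan,
  # so they are appended to every executable's rpath unconditionally:
  runtimeDependencies = [ curl ];

  installPhase = ''
    install -Dm755 tool $out/bin/tool
  '';
}
```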
third_party/nixpkgs/doc/using/overlays.xml (vendored, 124 changes)
@@ -28,8 +28,7 @@
  </para>

  <para>
-   NOTE: DO NOT USE THIS in nixpkgs.
-   Further overlays can be added by calling the <literal>pkgs.extend</literal> or <literal>pkgs.appendOverlays</literal>, although it is often preferable to avoid these functions, because they recompute the Nixpkgs fixpoint, which is somewhat expensive to do.
+   NOTE: DO NOT USE THIS in nixpkgs. Further overlays can be added by calling the <literal>pkgs.extend</literal> or <literal>pkgs.appendOverlays</literal>, although it is often preferable to avoid these functions, because they recompute the Nixpkgs fixpoint, which is somewhat expensive to do.
  </para>
 </section>
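For illustration only (not part of this commit), a `pkgs.extend` call as mentioned in that paragraph looks roughly like this; the overlay body is a made-up example:

```nix
let
  pkgs = import <nixpkgs> { };
  # Recomputes the fixpoint, so prefer passing overlays to the import
  # of Nixpkgs itself where possible:
  extended = pkgs.extend (self: super: {
    myAlias = super.hello;   # illustrative overlay body
  });
in
extended.myAlias
```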
@@ -140,36 +139,31 @@ self: super:
 </section>
 <section xml:id="sec-overlays-alternatives">
  <title>Using overlays to configure alternatives</title>

  <para>
-   Certain software packages have different implementations of the
-   same interface. Other distributions have functionality to switch
-   between these. For example, Debian provides <link
-   xlink:href="https://wiki.debian.org/DebianAlternatives">DebianAlternatives</link>.
-   Nixpkgs has what we call <literal>alternatives</literal>, which
-   are configured through overlays.
+   Certain software packages have different implementations of the same interface. Other distributions have functionality to switch between these. For example, Debian provides <link
+   xlink:href="https://wiki.debian.org/DebianAlternatives">DebianAlternatives</link>. Nixpkgs has what we call <literal>alternatives</literal>, which are configured through overlays.
  </para>

  <section xml:id="sec-overlays-alternatives-blas-lapack">
   <title>BLAS/LAPACK</title>

   <para>
-    In Nixpkgs, we have multiple implementations of the BLAS/LAPACK
-    numerical linear algebra interfaces. They are:
+    In Nixpkgs, we have multiple implementations of the BLAS/LAPACK numerical linear algebra interfaces. They are:
   </para>

   <itemizedlist>
    <listitem>
     <para>
      <link xlink:href="https://www.openblas.net/">OpenBLAS</link>
     </para>
     <para>
-     The Nixpkgs attribute is <literal>openblas</literal> for
-     ILP64 (integer width = 64 bits) and
-     <literal>openblasCompat</literal> for LP64 (integer width =
-     32 bits). <literal>openblasCompat</literal> is the default.
+     The Nixpkgs attribute is <literal>openblas</literal> for ILP64 (integer width = 64 bits) and <literal>openblasCompat</literal> for LP64 (integer width = 32 bits). <literal>openblasCompat</literal> is the default.
     </para>
    </listitem>
    <listitem>
     <para>
-     <link xlink:href="http://www.netlib.org/lapack/">LAPACK
-     reference</link> (also provides BLAS)
+     <link xlink:href="http://www.netlib.org/lapack/">LAPACK reference</link> (also provides BLAS)
     </para>
     <para>
      The Nixpkgs attribute is <literal>lapack-reference</literal>.
@@ -178,8 +172,7 @@ self: super:
    <listitem>
     <para>
      <link
-     xlink:href="https://software.intel.com/en-us/mkl">Intel
-     MKL</link> (only works on the x86_64 architecture, unfree)
+     xlink:href="https://software.intel.com/en-us/mkl">Intel MKL</link> (only works on the x86_64 architecture, unfree)
     </para>
     <para>
      The Nixpkgs attribute is <literal>mkl</literal>.
@@ -191,46 +184,26 @@ self: super:
      xlink:href="https://github.com/flame/blis">BLIS</link>
     </para>
     <para>
-     BLIS, available through the attribute
-     <literal>blis</literal>, is a framework for linear algebra kernels. In
-     addition, it implements the BLAS interface.
+     BLIS, available through the attribute <literal>blis</literal>, is a framework for linear algebra kernels. In addition, it implements the BLAS interface.
     </para>
    </listitem>
    <listitem>
     <para>
      <link
-     xlink:href="https://developer.amd.com/amd-aocl/blas-library/">AMD
-     BLIS/LIBFLAME</link> (optimized for modern AMD x86_64 CPUs)
+     xlink:href="https://developer.amd.com/amd-aocl/blas-library/">AMD BLIS/LIBFLAME</link> (optimized for modern AMD x86_64 CPUs)
     </para>
     <para>
-     The AMD fork of the BLIS library, with attribute
-     <literal>amd-blis</literal>, extends BLIS with optimizations for
-     modern AMD CPUs. The changes are usually submitted to
-     the upstream BLIS project after some time. However, AMD BLIS
-     typically provides some performance improvements on AMD Zen CPUs.
-     The complementary AMD LIBFLAME library, with attribute
-     <literal>amd-libflame</literal>, provides a LAPACK implementation.
+     The AMD fork of the BLIS library, with attribute <literal>amd-blis</literal>, extends BLIS with optimizations for modern AMD CPUs. The changes are usually submitted to the upstream BLIS project after some time. However, AMD BLIS typically provides some performance improvements on AMD Zen CPUs. The complementary AMD LIBFLAME library, with attribute <literal>amd-libflame</literal>, provides a LAPACK implementation.
     </para>
    </listitem>
   </itemizedlist>

   <para>
    Introduced in <link
-   xlink:href="https://github.com/NixOS/nixpkgs/pull/83888">PR
-   #83888</link>, we are able to override the <literal>blas</literal>
-   and <literal>lapack</literal> packages to use different implementations,
-   through the <literal>blasProvider</literal> and
-   <literal>lapackProvider</literal> argument. This can be used
-   to select a different provider. BLAS providers will have
-   symlinks in <literal>$out/lib/libblas.so.3</literal> and
-   <literal>$out/lib/libcblas.so.3</literal> to their respective
-   BLAS libraries. Likewise, LAPACK providers will have symlinks
-   in <literal>$out/lib/liblapack.so.3</literal> and
-   <literal>$out/lib/liblapacke.so.3</literal> to their respective
-   LAPACK libraries. For example, Intel MKL is both a BLAS and
-   LAPACK provider. An overlay can be created to use Intel MKL
-   that looks like:
+   xlink:href="https://github.com/NixOS/nixpkgs/pull/83888">PR #83888</link>, we are able to override the <literal>blas</literal> and <literal>lapack</literal> packages to use different implementations, through the <literal>blasProvider</literal> and <literal>lapackProvider</literal> argument. This can be used to select a different provider. BLAS providers will have symlinks in <literal>$out/lib/libblas.so.3</literal> and <literal>$out/lib/libcblas.so.3</literal> to their respective BLAS libraries. Likewise, LAPACK providers will have symlinks in <literal>$out/lib/liblapack.so.3</literal> and <literal>$out/lib/liblapacke.so.3</literal> to their respective LAPACK libraries. For example, Intel MKL is both a BLAS and LAPACK provider. An overlay can be created to use Intel MKL that looks like:
   </para>
-  <programlisting>
+
+  <programlisting>
 self: super:

 {
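The body of that listing is elided in this rendering. Based on the surrounding text, such an overlay would look roughly like the following sketch (not the verbatim hunk):

```nix
self: super:

{
  # blasProvider/lapackProvider select the implementation, as the
  # paragraph above describes:
  blas = super.blas.override { blasProvider = self.mkl; };
  lapack = super.lapack.override { lapackProvider = self.mkl; };
}
```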
@@ -243,46 +216,24 @@ self: super:
   };
 }
 </programlisting>

  <para>
-   This overlay uses Intel’s MKL library for both BLAS and LAPACK
-   interfaces. Note that the same can be accomplished at runtime
-   using <literal>LD_LIBRARY_PATH</literal> of
-   <literal>libblas.so.3</literal> and
-   <literal>liblapack.so.3</literal>. For instance:
+   This overlay uses Intel’s MKL library for both BLAS and LAPACK interfaces. Note that the same can be accomplished at runtime using <literal>LD_LIBRARY_PATH</literal> of <literal>libblas.so.3</literal> and <literal>liblapack.so.3</literal>. For instance:
  </para>

  <screen>
 <prompt>$ </prompt>LD_LIBRARY_PATH=$(nix-build -A mkl)/lib:$LD_LIBRARY_PATH nix-shell -p octave --run octave
 </screen>

  <para>
-   Intel MKL requires an <literal>openmp</literal> implementation
-   when running with multiple processors. By default,
-   <literal>mkl</literal> will use Intel’s <literal>iomp</literal>
-   implementation if no other is specified, but this is a
-   runtime-only dependency and binary compatible with the LLVM
-   implementation. To use that one instead, Intel recommends users
-   set it with <literal>LD_PRELOAD</literal>. Note that
-   <literal>mkl</literal> is only available on
-   <literal>x86_64-linux</literal> and
-   <literal>x86_64-darwin</literal>. Moreover, Hydra is not
-   building and distributing pre-compiled binaries using it.
+   Intel MKL requires an <literal>openmp</literal> implementation when running with multiple processors. By default, <literal>mkl</literal> will use Intel’s <literal>iomp</literal> implementation if no other is specified, but this is a runtime-only dependency and binary compatible with the LLVM implementation. To use that one instead, Intel recommends users set it with <literal>LD_PRELOAD</literal>. Note that <literal>mkl</literal> is only available on <literal>x86_64-linux</literal> and <literal>x86_64-darwin</literal>. Moreover, Hydra is not building and distributing pre-compiled binaries using it.
  </para>

  <para>
-   For BLAS/LAPACK switching to work correctly, all packages must
-   depend on <literal>blas</literal> or <literal>lapack</literal>.
-   This ensures that only one BLAS/LAPACK library is used at one
-   time. There are two versions versions of BLAS/LAPACK currently
-   in the wild, <literal>LP64</literal> (integer size = 32 bits)
-   and <literal>ILP64</literal> (integer size = 64 bits). Some
-   software needs special flags or patches to work with
-   <literal>ILP64</literal>. You can check if
-   <literal>ILP64</literal> is used in Nixpkgs with
-   <varname>blas.isILP64</varname> and
-   <varname>lapack.isILP64</varname>. Some software does NOT work
-   with <literal>ILP64</literal>, and derivations need to specify
-   an assertion to prevent this. You can prevent
-   <literal>ILP64</literal> from being used with the following:
+   For BLAS/LAPACK switching to work correctly, all packages must depend on <literal>blas</literal> or <literal>lapack</literal>. This ensures that only one BLAS/LAPACK library is used at one time. There are two versions versions of BLAS/LAPACK currently in the wild, <literal>LP64</literal> (integer size = 32 bits) and <literal>ILP64</literal> (integer size = 64 bits). Some software needs special flags or patches to work with <literal>ILP64</literal>. You can check if <literal>ILP64</literal> is used in Nixpkgs with <varname>blas.isILP64</varname> and <varname>lapack.isILP64</varname>. Some software does NOT work with <literal>ILP64</literal>, and derivations need to specify an assertion to prevent this. You can prevent <literal>ILP64</literal> from being used with the following:
  </para>
-  <programlisting>
+
+  <programlisting>
 { stdenv, blas, lapack, ... }:

 assert (!blas.isILP64) && (!lapack.isILP64);
@@ -292,34 +243,31 @@ stdenv.mkDerivation {
 }
 </programlisting>
 </section>

 <section xml:id="sec-overlays-alternatives-mpi">
  <title>Switching the MPI implementation</title>

  <para>
-   All programs that are built with
-   <link xlink:href="https://en.wikipedia.org/wiki/Message_Passing_Interface">MPI</link>
-   support use the generic attribute <varname>mpi</varname>
-   as an input. At the moment Nixpkgs natively provides two different
-   MPI implementations:
+   All programs that are built with <link xlink:href="https://en.wikipedia.org/wiki/Message_Passing_Interface">MPI</link> support use the generic attribute <varname>mpi</varname> as an input. At the moment Nixpkgs natively provides two different MPI implementations:
   <itemizedlist>
    <listitem>
     <para>
-     <link xlink:href="https://www.open-mpi.org/">Open MPI</link>
-     (default), attribute name <varname>openmpi</varname>
+     <link xlink:href="https://www.open-mpi.org/">Open MPI</link> (default), attribute name <varname>openmpi</varname>
     </para>
    </listitem>
    <listitem>
     <para>
-     <link xlink:href="https://www.mpich.org/">MPICH</link>,
-     attribute name <varname>mpich</varname>
+     <link xlink:href="https://www.mpich.org/">MPICH</link>, attribute name <varname>mpich</varname>
     </para>
    </listitem>
   </itemizedlist>
  </para>

  <para>
-   To provide MPI enabled applications that use <literal>MPICH</literal>, instead
-   of the default <literal>Open MPI</literal>, simply use the following overlay:
+   To provide MPI enabled applications that use <literal>MPICH</literal>, instead of the default <literal>Open MPI</literal>, simply use the following overlay:
  </para>
-  <programlisting>
+
+  <programlisting>
 self: super:

 {
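The rest of that listing is elided here; following the surrounding text, the overlay plausibly amounts to rebinding the generic attribute (a sketch, not the verbatim hunk):

```nix
self: super:

{
  # Every package that takes `mpi` as an input now gets MPICH:
  mpi = self.mpich;
}
```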
third_party/nixpkgs/lib/licenses.nix (vendored, 4 changes)
@@ -7,7 +7,8 @@ let

 in

-lib.mapAttrs (n: v: v // { shortName = n; }) {
+lib.mapAttrs (n: v: v // { shortName = n; }) ({
   /* License identifiers from spdx.org where possible.
    * If you cannot find your license here, then look for a similar license or
    * add it to this list. The URL mentioned above is a good source for inspiration.
@@ -877,4 +877,4 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
     fullName = "GNU Lesser General Public License v3.0";
     deprecated = true;
   };
-}
+})

third_party/nixpkgs/maintainers/maintainer-list.nix (vendored, 132 changes)
@@ -194,6 +194,12 @@
     githubId = 124545;
     name = "Anthony Cowley";
   };
+  adamlwgriffiths = {
+    email = "adam.lw.griffiths@gmail.com";
+    github = "adamlwgriffiths";
+    githubId = 1239156;
+    name = "Adam Griffiths";
+  };
   adamt = {
     email = "mail@adamtulinius.dk";
     github = "adamtulinius";
@@ -273,7 +279,7 @@
     name = "James Alexander Feldman-Crough";
   };
   aforemny = {
-    email = "alexanderforemny@googlemail.com";
+    email = "aforemny@posteo.de";
     github = "aforemny";
     githubId = 610962;
     name = "Alexander Foremny";
@@ -1096,6 +1102,12 @@
     githubId = 1432730;
     name = "Benjamin Staffin";
   };
+  benneti = {
+    name = "Benedikt Tissot";
+    email = "benedikt.tissot@googlemail.com";
+    github = "benneti";
+    githubId = 11725645;
+  };
   bennofs = {
     email = "benno.fuenfstueck@gmail.com";
     github = "bennofs";
@@ -1711,6 +1723,12 @@
     githubId = 2245737;
     name = "Christopher Mark Poole";
   };
+  chuahou = {
+    email = "human+github@chuahou.dev";
+    github = "chuahou";
+    githubId = 12386805;
+    name = "Chua Hou";
+  };
   chvp = {
     email = "nixpkgs@cvpetegem.be";
     github = "chvp";
@@ -2417,6 +2435,16 @@
     githubId = 6806011;
     name = "Robert Schütz";
   };
+  dottedmag = {
+    email = "dottedmag@dottedmag.net";
+    github = "dottedmag";
+    githubId = 16120;
+    name = "Misha Gusarov";
+    keys = [{
+      longkeyid = "rsa4096/0x9D20F6503E338888";
+      fingerprint = "A8DF 1326 9E5D 9A38 E57C FAC2 9D20 F650 3E33 8888";
+    }];
+  };
   doublec = {
     email = "chris.double@double.co.nz";
     github = "doublec";
@@ -3061,6 +3089,12 @@
     githubId = 1276854;
     name = "Florian Peter";
   };
+  fbrs = {
+    email = "yuuki@protonmail.com";
+    github = "cideM";
+    githubId = 4246921;
+    name = "Florian Beeres";
+  };
   fdns = {
     email = "fdns02@gmail.com";
     github = "fdns";
@@ -3073,6 +3107,12 @@
     githubId = 9959940;
     name = "Andreas Fehn";
   };
+  felixscheinost = {
+    name = "Felix Scheinost";
+    email = "felix.scheinost@posteo.de";
+    github = "felixscheinost";
+    githubId = 31761492;
+  };
   felixsinger = {
     email = "felixsinger@posteo.net";
     github = "felixsinger";
@@ -4051,6 +4091,16 @@
     fingerprint = "7311 2700 AB4F 4CDF C68C F6A5 79C3 C47D C652 EA54";
   }];
   };
+  ivankovnatsky = {
+    email = "ikovnatsky@protonmail.ch";
+    github = "ivankovnatsky";
+    githubId = 75213;
+    name = "Ivan Kovnatsky";
+    keys = [{
+      longkeyid = "rsa4096/0x3A33FA4C82ED674F";
+      fingerprint = "6BD3 7248 30BD 941E 9180 C1A3 3A33 FA4C 82ED 674F";
+    }];
+  };
   ivar = {
     email = "ivar.scholten@protonmail.com";
     github = "IvarWithoutBones";
@@ -6055,7 +6105,7 @@
     name = "Celine Mercier";
   };
   metadark = {
-    email = "kira.bruneau@gmail.com";
+    email = "kira.bruneau@pm.me";
     name = "Kira Bruneau";
     github = "metadark";
     githubId = 382041;
@@ -7203,6 +7253,12 @@
     githubId = 157610;
     name = "Piotr Bogdan";
   };
+  pborzenkov = {
+    email = "pavel@borzenkov.net";
+    github = "pborzenkov";
+    githubId = 434254;
+    name = "Pavel Borzenkov";
+  };
   pblkt = {
     email = "pebblekite@gmail.com";
     github = "pblkt";
@@ -7227,6 +7283,12 @@
     githubId = 13225611;
     name = "Nicolas Martin";
   };
+  p3psi = {
+    name = "Elliot Boo";
+    email = "p3psi.boo@gmail.com";
+    github = "p3psi-boo";
+    githubId = 43925055;
+  };
   periklis = {
     email = "theopompos@gmail.com";
     github = "periklis";
@@ -7419,6 +7481,16 @@
     githubId = 103822;
     name = "Patrick Mahoney";
   };
+  pmenke = {
+    email = "nixos@pmenke.de";
+    github = "pmenke-de";
+    githubId = 898922;
+    name = "Philipp Menke";
+    keys = [{
+      longkeyid = "rsa4096/0xEB7F2D4CCBE23B69";
+      fingerprint = "ED54 5EFD 64B6 B5AA EC61 8C16 EB7F 2D4C CBE2 3B69";
+    }];
+  };
   pmeunier = {
     email = "pierre-etienne.meunier@inria.fr";
     github = "P-E-Meunier";
@@ -7449,6 +7521,16 @@
     githubId = 11365056;
     name = "Kevin Liu";
   };
+  pnotequalnp = {
+    email = "kevin@pnotequalnp.com";
+    github = "pnotequalnp";
+    githubId = 46154511;
+    name = "Kevin Mullins";
+    keys = [{
+      longkeyid = "rsa4096/361820A45DB41E9A";
+      fingerprint = "2CD2 B030 BD22 32EF DF5A 008A 3618 20A4 5DB4 1E9A";
+    }];
+  };
   polyrod = {
     email = "dc1mdp@gmail.com";
     github = "polyrod";
@@ -8033,6 +8115,12 @@
     githubId = 3708689;
     name = "Roberto Di Remigio";
   };
+  robertoszek = {
+    email = "robertoszek@robertoszek.xyz";
+    github = "robertoszek";
+    githubId = 1080963;
+    name = "Roberto";
+  };
   robgssp = {
     email = "robgssp@gmail.com";
     github = "robgssp";
@@ -8075,6 +8163,16 @@
     githubId = 1312525;
     name = "Rongcui Dong";
   };
+  ronthecookie = {
+    name = "Ron B";
+    email = "me@ronthecookie.me";
+    github = "ronthecookie";
+    githubId = 2526321;
+    keys = [{
+      longkeyid = "rsa2048/0x6F5B32DE5E5FA80C";
+      fingerprint = "4B2C DDA5 FA35 642D 956D 7294 6F5B 32DE 5E5F A80C";
+    }];
+  };
   roosemberth = {
     email = "roosembert.palacios+nixpkgs@gmail.com";
     github = "roosemberth";
@@ -9564,7 +9662,7 @@
     name = "Tom Smeets";
   };
   toonn = {
-    email = "nnoot@toonn.io";
+    email = "nixpkgs@toonn.io";
     github = "toonn";
     githubId = 1486805;
     name = "Toon Nolten";
@@ -10012,6 +10110,12 @@
     githubId = 7677567;
     name = "Victor SENE";
   };
+  vtuan10 = {
+    email = "mail@tuan-vo.de";
+    github = "vtuan10";
+    githubId = 16415673;
+    name = "Van Tuan Vo";
+  };
   vyorkin = {
     email = "vasiliy.yorkin@gmail.com";
     github = "vyorkin";
@@ -10458,6 +10562,12 @@
     githubId = 1141948;
     name = "Zack Grannan";
   };
+  zhaofengli = {
+    email = "hello@zhaofeng.li";
+    github = "zhaofengli";
+    githubId = 2189609;
+    name = "Zhaofeng Li";
+  };
   zimbatm = {
     email = "zimbatm@zimbatm.com";
     github = "zimbatm";
@@ -10750,16 +10860,20 @@
     github = "pulsation";
     githubId = 1838397;
   };
+  zseri = {
+    name = "zseri";
+    email = "zseri.devel@ytrizja.de";
+    github = "zseri";
+    githubId = 1618343;
+    keys = [{
+      longkeyid = "rsa4096/0x229E63AE5644A96D";
+      fingerprint = "7AFB C595 0D3A 77BD B00F 947B 229E 63AE 5644 A96D";
+    }];
+  };
   zupo = {
     name = "Nejc Zupan";
     email = "nejczupan+nix@gmail.com";
     github = "zupo";
     githubId = 311580;
   };
-  felixscheinost = {
-    name = "Felix Scheinost";
-    email = "felix.scheinost@posteo.de";
-    github = "felixscheinost";
-    githubId = 31761492;
-  };
 }
@@ -3,7 +3,8 @@
 stdenv.mkDerivation {
   name = "nixpkgs-lint-1";

-  buildInputs = [ makeWrapper perl perlPackages.XMLSimple ];
+  nativeBuildInputs = [ makeWrapper ];
+  buildInputs = [ perl perlPackages.XMLSimple ];

   dontUnpack = true;
   buildPhase = "true";
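The hunk above moves `makeWrapper` to `nativeBuildInputs` because it is a tool run on the build machine, not a dependency of the output. A minimal sketch of the distinction (hypothetical package, illustrating the convention rather than this exact file):

```nix
{ stdenv, makeWrapper, perl }:

stdenv.mkDerivation {
  pname = "wrapped-script";   # hypothetical
  version = "1";

  # Build-time tools (run on the build platform) go here:
  nativeBuildInputs = [ makeWrapper ];
  # Dependencies of the built artifact (host platform) go here:
  buildInputs = [ perl ];

  dontUnpack = true;
  installPhase = ''
    mkdir -p $out/bin
    makeWrapper ${perl}/bin/perl $out/bin/my-perl --set PERL5LIB $out/lib
  '';
}
```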
@@ -16,9 +16,10 @@
 The first line (<literal>{ config, pkgs, ... }:</literal>) denotes that this
 is actually a function that takes at least the two arguments
 <varname>config</varname> and <varname>pkgs</varname>. (These are explained
-later.) The function returns a <emphasis>set</emphasis> of option definitions
-(<literal>{ <replaceable>...</replaceable> }</literal>). These definitions
-have the form <literal><replaceable>name</replaceable> =
+later, in chapter <xref linkend="sec-writing-modules" />) The function returns
+a <emphasis>set</emphasis> of option definitions (<literal>{
+<replaceable>...</replaceable> }</literal>). These definitions have the form
+<literal><replaceable>name</replaceable> =
 <replaceable>value</replaceable></literal>, where
 <replaceable>name</replaceable> is the name of an option and
 <replaceable>value</replaceable> is its value. For example,
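A minimal configuration file of the shape this passage describes (the specific options chosen are illustrative):

```nix
{ config, pkgs, ... }:

{
  # Two option definitions of the form `name = value`:
  services.openssh.enable = true;
  environment.systemPackages = [ pkgs.git ];
}
```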
@@ -74,7 +74,10 @@ linkend="sec-configuration-syntax"/>, we saw the following structure
 <callout arearefs='module-syntax-1'>
  <para>
   This line makes the current Nix expression a function. The variable
-  <varname>pkgs</varname> contains Nixpkgs, while <varname>config</varname>
+  <varname>pkgs</varname> contains Nixpkgs (by default, it takes the
+  <varname>nixpkgs</varname> entry of <envar>NIX_PATH</envar>, see the <link
+  xlink:href="https://nixos.org/manual/nix/stable/#sec-common-env">Nix
+  manual</link> for further details), while <varname>config</varname>
   contains the full system configuration. This line can be omitted if there
   is no reference to <varname>pkgs</varname> and <varname>config</varname>
   inside the module.
@@ -523,6 +523,21 @@ self: super:
     as an hardware RNG, as it will automatically run the krngd task to periodically collect random
     data from the device and mix it into the kernel's RNG.
    </para>
+   <para>
+    The default SMTP port for GitLab has been changed to
+    <literal>25</literal> from its previous default of
+    <literal>465</literal>. If you depended on this default, you
+    should now set the <xref linkend="opt-services.gitlab.smtp.port" />
+    option.
+   </para>
+  </listitem>
+  <listitem>
+   <para>
+    The default version of ImageMagick has been updated from 6 to 7.
+    You can use <package>imagemagick6</package>,
+    <package>imagemagick6_light</package>, and
+    <package>imagemagick6Big</package> if you need the older version.
+   </para>
   </listitem>
  </itemizedlist>
 </section>
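For anyone affected by the GitLab change above, restoring the old behaviour is a one-line option definition:

```nix
{
  # Restore the pre-change default if your mail setup relied on it:
  services.gitlab.smtp.port = 465;
}
```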
@@ -558,14 +573,16 @@ self: super:
    </listitem>
    <listitem>
     <para>
-     The default-version of <literal>nextcloud</literal> is <package>nextcloud20</package>.
+     The default-version of <literal>nextcloud</literal> is <package>nextcloud21</package>.
      Please note that it's <emphasis>not</emphasis> possible to upgrade <literal>nextcloud</literal>
      across multiple major versions! This means that it's e.g. not possible to upgrade
-     from <package>nextcloud18</package> to <package>nextcloud20</package> in a single deploy.
+     from <package>nextcloud18</package> to <package>nextcloud20</package> in a single deploy and
+     most <literal>20.09</literal> users will have to upgrade to <package>nextcloud20</package>
+     first.
     </para>
     <para>
      The package can be manually upgraded by setting <xref linkend="opt-services.nextcloud.package" />
-     to <package>nextcloud20</package>.
+     to <package>nextcloud21</package>.
     </para>
    </listitem>
    <listitem>
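The manual upgrade described in that note is a single option definition, stepping one major version at a time:

```nix
{ pkgs, ... }:

{
  # Pin explicitly; go 20 before 21 when coming from 20.09:
  services.nextcloud.package = pkgs.nextcloud21;
}
```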
@@ -730,6 +747,56 @@ self: super:
     terminology has been deprecated and should be replaced with Far/Near in the configuration file.
    </para>
   </listitem>
+  <listitem>
+   <para>
+    The nix-gc service now accepts randomizedDelaySec (default: 0) and persistent (default: true) parameters.
+    By default nix-gc will now run immediately if it would have been triggered at least
+    once during the time when the timer was inactive.
+   </para>
+  </listitem>
+  <listitem>
+   <para>
+    The <literal>rustPlatform.buildRustPackage</literal> function is split into several hooks:
+    <package>cargoSetupHook</package> to set up vendoring for Cargo-based projects,
+    <package>cargoBuildHook</package> to build a project using Cargo,
+    <package>cargoInstallHook</package> to install a project using Cargo, and
+    <package>cargoCheckHook</package> to run tests in Cargo-based projects. With this change,
+    mixed-language projects can use the relevant hooks within builders other than
+    <literal>buildRustPackage</literal>. However, these changes also required several API changes to
+    <literal>buildRustPackage</literal> itself:
+
+    <itemizedlist>
+     <listitem>
+      <para>
+       The <literal>target</literal> argument was removed. Instead, <literal>buildRustPackage</literal>
+       will always use the same target as the C/C++ compiler that is used.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       The <literal>cargoParallelTestThreads</literal> argument was removed. Parallel tests are
+       now disabled through <literal>dontUseCargoParallelTests</literal>.
+      </para>
+     </listitem>
+    </itemizedlist>
+   </para>
+  </listitem>
+  <listitem>
+   <para>
+    The <literal>rustPlatform.maturinBuildHook</literal> hook was added. This hook can be used
+    with <literal>buildPythonPackage</literal> to build Python packages that are written in Rust
+    and use Maturin as their build tool.
+   </para>
+  </listitem>
+  <listitem>
+   <para>
+    Kubernetes has <link xlink:href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/">deprecated docker</link> as container runtime.
+    As a consequence, the Kubernetes module now has support for configuration of custom remote container runtimes and enables containerd by default.
+    Note that containerd is more strict regarding container image OCI-compliance.
+    As an example, images with CMD or ENTRYPOINT defined as strings (not lists) will fail on containerd, while working fine on docker.
+    Please test your setup and container images with containerd prior to upgrading.
+   </para>
+  </listitem>
  </itemizedlist>
 </section>
 </section>
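A sketch of the new hooks in use with `buildPythonPackage`, as the maturin note above describes (package name is hypothetical and `lib.fakeSha256` stands in for real hashes):

```nix
{ lib, python3Packages, rustPlatform, fetchPypi }:

python3Packages.buildPythonPackage rec {
  pname = "hello-rust";    # hypothetical mixed-language package
  version = "0.1.0";
  format = "pyproject";

  src = fetchPypi {
    inherit pname version;
    sha256 = lib.fakeSha256;   # placeholder
  };

  # Vendored crate dependencies, consumed by cargoSetupHook:
  cargoDeps = rustPlatform.fetchCargoTarball {
    inherit src;
    sha256 = lib.fakeSha256;   # placeholder
  };

  nativeBuildInputs = with rustPlatform; [ cargoSetupHook maturinBuildHook ];
}
```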
@@ -23,6 +23,6 @@ stdenv.mkDerivation {

     # Generate the squashfs image.
     mksquashfs nix-path-registration $(cat $closureInfo/store-paths) $out \
-      -keep-as-directory -all-root -b 1048576 -comp ${comp}
+      -no-hardlinks -keep-as-directory -all-root -b 1048576 -comp ${comp}
   '';
 }
third_party/nixpkgs/nixos/lib/qemu-flags.nix (vendored, 4 changes)
@@ -18,13 +18,15 @@ rec {
   ];

   qemuSerialDevice = if pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64 then "ttyS0"
-    else if pkgs.stdenv.isAarch32 || pkgs.stdenv.isAarch64 then "ttyAMA0"
+    else if (with pkgs.stdenv.hostPlatform; isAarch32 || isAarch64 || isPower) then "ttyAMA0"
     else throw "Unknown QEMU serial device for system '${pkgs.stdenv.hostPlatform.system}'";

   qemuBinary = qemuPkg: {
     x86_64-linux = "${qemuPkg}/bin/qemu-kvm -cpu max";
     armv7l-linux = "${qemuPkg}/bin/qemu-system-arm -enable-kvm -machine virt -cpu host";
     aarch64-linux = "${qemuPkg}/bin/qemu-system-aarch64 -enable-kvm -machine virt,gic-version=host -cpu host";
+    powerpc64le-linux = "${qemuPkg}/bin/qemu-system-ppc64 -machine powernv";
+    powerpc64-linux = "${qemuPkg}/bin/qemu-system-ppc64 -machine powernv";
     x86_64-darwin = "${qemuPkg}/bin/qemu-kvm -cpu max";
   }.${pkgs.stdenv.hostPlatform.system} or "${qemuPkg}/bin/qemu-kvm";
 }
@@ -26,12 +26,12 @@ in {
   systemd.services.enable-ksm = {
     description = "Enable Kernel Same-Page Merging";
     wantedBy = [ "multi-user.target" ];
+    after = [ "systemd-udev-settle.service" ];
-    script = ''
-      if [ -e /sys/kernel/mm/ksm ]; then
+    script =
+      ''
         echo 1 > /sys/kernel/mm/ksm/run
-        ${optionalString (cfg.sleep != null) ''echo ${toString cfg.sleep} > /sys/kernel/mm/ksm/sleep_millisecs''}
-      fi
-    '';
+      '' + optionalString (cfg.sleep != null)
+      ''
+        echo ${toString cfg.sleep} > /sys/kernel/mm/ksm/sleep_millisecs
+      '';
   };
 };
@@ -257,6 +257,7 @@
   ./services/backup/zfs-replication.nix
   ./services/backup/znapzend.nix
   ./services/blockchain/ethereum/geth.nix
+  ./services/backup/zrepl.nix
   ./services/cluster/hadoop/default.nix
   ./services/cluster/k3s/default.nix
   ./services/cluster/kubernetes/addons/dns.nix
@@ -381,6 +382,7 @@
   ./services/hardware/sane.nix
   ./services/hardware/sane_extra_backends/brscan4.nix
   ./services/hardware/sane_extra_backends/dsseries.nix
+  ./services/hardware/spacenavd.nix
   ./services/hardware/tcsd.nix
   ./services/hardware/tlp.nix
   ./services/hardware/thinkfan.nix
@@ -488,6 +490,7 @@
   ./services/misc/logkeys.nix
   ./services/misc/leaps.nix
   ./services/misc/lidarr.nix
+  ./services/misc/lifecycled.nix
   ./services/misc/mame.nix
   ./services/misc/matrix-appservice-discord.nix
   ./services/misc/matrix-synapse.nix
@@ -510,6 +513,7 @@
   ./services/misc/paperless.nix
   ./services/misc/parsoid.nix
   ./services/misc/plex.nix
+  ./services/misc/plikd.nix
   ./services/misc/tautulli.nix
   ./services/misc/pinnwand.nix
   ./services/misc/pykms.nix
@@ -1049,6 +1053,7 @@
   ./testing/service-runner.nix
   ./virtualisation/anbox.nix
   ./virtualisation/container-config.nix
+  ./virtualisation/containerd.nix
   ./virtualisation/containers.nix
   ./virtualisation/nixos-containers.nix
   ./virtualisation/oci-containers.nix
@@ -1,13 +1,14 @@
 --- a/create_manpage_completions.py
 +++ b/create_manpage_completions.py
-@@ -844,10 +844,6 @@ def parse_manpage_at_path(manpage_path, output_directory):
-
-     built_command_output.insert(0, "# " + CMDNAME)
+@@ -879,10 +879,6 @@ def parse_manpage_at_path(manpage_path, output_directory):
+         )
+         return False
 
 -    # Output the magic word Autogenerated so we can tell if we can overwrite this
 -    built_command_output.insert(
--        1, "# Autogenerated from man page " + manpage_path
+-        0, "# " + CMDNAME + "\n# Autogenerated from man page " + manpage_path
 -    )
-    # built_command_output.insert(2, "# using " + parser.__class__.__name__) # XXX MISATTRIBUTES THE CULPABILE PARSER! Was really using Type2 but reporting TypeDeroffManParser
+    # built_command_output.insert(2, "# using " + parser.__class__.__name__) # XXX MISATTRIBUTES THE CULPABLE PARSER! Was really using Type2 but reporting TypeDeroffManParser
 
     for line in built_command_output:
@@ -12,11 +12,30 @@ let
     else [ package32 ] ++ extraPackages32;
   };
 in {
-  options.programs.steam.enable = mkEnableOption "steam";
+  options.programs.steam = {
+    enable = mkEnableOption "steam";
+
+    remotePlay.openFirewall = mkOption {
+      type = types.bool;
+      default = false;
+      description = ''
+        Open ports in the firewall for Steam Remote Play.
+      '';
+    };
+
+    dedicatedServer.openFirewall = mkOption {
+      type = types.bool;
+      default = false;
+      description = ''
+        Open ports in the firewall for Source Dedicated Server.
+      '';
+    };
+  };

   config = mkIf cfg.enable {
     hardware.opengl = { # this fixes the "glXChooseVisual failed" bug, context: https://github.com/NixOS/nixpkgs/issues/47932
       enable = true;
       driSupport = true;
       driSupport32Bit = true;
     };
@@ -26,6 +45,18 @@ in {
     hardware.steam-hardware.enable = true;

     environment.systemPackages = [ steam steam.run ];
+
+    networking.firewall = lib.mkMerge [
+      (mkIf cfg.remotePlay.openFirewall {
+        allowedTCPPorts = [ 27036 ];
+        allowedUDPPortRanges = [ { from = 27031; to = 27036; } ];
+      })
+
+      (mkIf cfg.dedicatedServer.openFirewall {
+        allowedTCPPorts = [ 27015 ]; # SRCDS Rcon port
+        allowedUDPPorts = [ 27015 ]; # Gameplay traffic
+      })
+    ];
   };

   meta.maintainers = with maintainers; [ mkg20001 ];
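Using the new options added above amounts to:

```nix
{
  programs.steam = {
    enable = true;
    remotePlay.openFirewall = true;        # TCP 27036, UDP 27031-27036
    dedicatedServer.openFirewall = true;   # TCP/UDP 27015
  };
}
```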
third_party/nixpkgs/nixos/modules/services/backup/zrepl.nix (vendored, new file, 54 lines)
@@ -0,0 +1,54 @@
+{ config, pkgs, lib, ... }:
+
+with lib;
+let
+  cfg = config.services.zrepl;
+  format = pkgs.formats.yaml { };
+  configFile = format.generate "zrepl.yml" cfg.settings;
+in
+{
+  meta.maintainers = with maintainers; [ cole-h ];
+
+  options = {
+    services.zrepl = {
+      enable = mkEnableOption "zrepl";
+
+      settings = mkOption {
+        default = { };
+        description = ''
+          Configuration for zrepl. See <link
+          xlink:href="https://zrepl.github.io/configuration.html"/>
+          for more information.
+        '';
+        type = types.submodule {
+          freeformType = format.type;
+        };
+      };
+    };
+  };
+
+  ### Implementation ###
+
+  config = mkIf cfg.enable {
+    environment.systemPackages = [ pkgs.zrepl ];
+
+    # zrepl looks for its config in this location by default. This
+    # allows the use of e.g. `zrepl signal wakeup <job>` without having
+    # to specify the storepath of the config.
+    environment.etc."zrepl/zrepl.yml".source = configFile;
+
+    systemd.packages = [ pkgs.zrepl ];
+    systemd.services.zrepl = {
+      requires = [ "local-fs.target" ];
+      wantedBy = [ "zfs.target" ];
+      after = [ "zfs.target" ];
+
+      path = [ config.boot.zfs.package ];
+      restartTriggers = [ configFile ];
+
+      serviceConfig = {
+        Restart = "on-failure";
+      };
+    };
+  };
+}
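Since `settings` is a freeform YAML mapping, using the new module looks roughly like the following; the job shown is illustrative only, and the real schema is defined by the zrepl configuration docs linked in the option description:

```nix
{
  services.zrepl = {
    enable = true;
    settings.jobs = [{
      name = "local_snapshots";          # illustrative job
      type = "snap";
      filesystems."rpool/home<" = true;
      snapshotting = {
        type = "periodic";
        interval = "10m";
        prefix = "zrepl_";
      };
      pruning.keep = [{ type = "last_n"; count = 10; }];
    }];
  };
}
```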
@@ -3,7 +3,7 @@
 with lib;

 let
-  version = "1.6.4";
+  version = "1.7.1";
   cfg = config.services.kubernetes.addons.dns;
   ports = {
     dns = 10053;
@@ -55,9 +55,9 @@ in {
       type = types.attrs;
       default = {
         imageName = "coredns/coredns";
-        imageDigest = "sha256:493ee88e1a92abebac67cbd4b5658b4730e0f33512461442d8d9214ea6734a9b";
+        imageDigest = "sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef";
         finalImageTag = version;
-        sha256 = "0fm9zdjavpf5hni8g7fkdd3csjbhd7n7py7llxjc66sbii087028";
+        sha256 = "02r440xcdsgi137k5lmmvp0z5w5fmk8g9mysq5pnysq1wl8sj6mw";
       };
     };
   };
@@ -156,7 +156,6 @@ in {
         health :${toString ports.health}
         kubernetes ${cfg.clusterDomain} in-addr.arpa ip6.arpa {
           pods insecure
-          upstream
           fallthrough in-addr.arpa ip6.arpa
         }
         prometheus :${toString ports.metrics}
@@ -238,14 +238,40 @@ in
       type = int;
     };

+    apiAudiences = mkOption {
+      description = ''
+        Kubernetes apiserver ServiceAccount issuer.
+      '';
+      default = "api,https://kubernetes.default.svc";
+      type = str;
+    };
+
+    serviceAccountIssuer = mkOption {
+      description = ''
+        Kubernetes apiserver ServiceAccount issuer.
+      '';
+      default = "https://kubernetes.default.svc";
+      type = str;
+    };
+
+    serviceAccountSigningKeyFile = mkOption {
+      description = ''
+        Path to the file that contains the current private key of the service
+        account token issuer. The issuer will sign issued ID tokens with this
+        private key.
+      '';
+      type = path;
+    };
+
     serviceAccountKeyFile = mkOption {
       description = ''
-        Kubernetes apiserver PEM-encoded x509 RSA private or public key file,
-        used to verify ServiceAccount tokens. By default tls private key file
-        is used.
+        File containing PEM-encoded x509 RSA or ECDSA private or public keys,
+        used to verify ServiceAccount tokens. The specified file can contain
+        multiple keys, and the flag can be specified multiple times with
+        different files. If unspecified, --tls-private-key-file is used.
+        Must be specified when --service-account-signing-key is provided
       '';
-      default = null;
-      type = nullOr path;
+      type = path;
     };

     serviceClusterIpRange = mkOption {
@@ -357,8 +383,10 @@ in
           ${optionalString (cfg.runtimeConfig != "")
             "--runtime-config=${cfg.runtimeConfig}"} \
           --secure-port=${toString cfg.securePort} \
-          ${optionalString (cfg.serviceAccountKeyFile!=null)
-            "--service-account-key-file=${cfg.serviceAccountKeyFile}"} \
+          --api-audiences=${toString cfg.apiAudiences} \
+          --service-account-issuer=${toString cfg.serviceAccountIssuer} \
+          --service-account-signing-key-file=${cfg.serviceAccountSigningKeyFile} \
+          --service-account-key-file=${cfg.serviceAccountKeyFile} \
           --service-cluster-ip-range=${cfg.serviceClusterIpRange} \
           --storage-backend=${cfg.storageBackend} \
           ${optionalString (cfg.tlsCertFile != null)
@@ -5,6 +5,29 @@ with lib;
 let
   cfg = config.services.kubernetes;

+  defaultContainerdConfigFile = pkgs.writeText "containerd.toml" ''
+    version = 2
+    root = "/var/lib/containerd/daemon"
+    state = "/var/run/containerd/daemon"
+    oom_score = 0
+
+    [grpc]
+      address = "/var/run/containerd/containerd.sock"
+
+    [plugins."io.containerd.grpc.v1.cri"]
+      sandbox_image = "pause:latest"
+
+    [plugins."io.containerd.grpc.v1.cri".cni]
+      bin_dir = "/opt/cni/bin"
+      max_conf_num = 0
+
+    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
+      runtime_type = "io.containerd.runc.v2"
+
+    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes."io.containerd.runc.v2".options]
+      SystemdCgroup = true
+  '';
+
   mkKubeConfig = name: conf: pkgs.writeText "${name}-kubeconfig" (builtins.toJSON {
     apiVersion = "v1";
     kind = "Config";
@@ -222,14 +245,9 @@ in {
     })

     (mkIf cfg.kubelet.enable {
-      virtualisation.docker = {
+      virtualisation.containerd = {
         enable = mkDefault true;
-
-        # kubernetes needs access to logs
-        logDriver = mkDefault "json-file";
-
-        # iptables must be disabled for kubernetes
-        extraOptions = "--iptables=false --ip-masq=false";
+        configFile = mkDefault defaultContainerdConfigFile;
       };
     })

@@ -269,7 +287,6 @@ in {
     users.users.kubernetes = {
       uid = config.ids.uids.kubernetes;
       description = "Kubernetes user";
-      extraGroups = [ "docker" ];
       group = "kubernetes";
       home = cfg.dataDir;
       createHome = true;
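Because the containerd config is set with `mkDefault` above, it can be replaced wholesale from user configuration. A sketch (the TOML body is illustrative):

```nix
{ pkgs, ... }:

{
  virtualisation.containerd = {
    enable = true;
    # Override the module's default shown above with a custom TOML:
    configFile = pkgs.writeText "containerd.toml" ''
      version = 2

      [grpc]
        address = "/var/run/containerd/containerd.sock"
    '';
  };
}
```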
@@ -8,16 +8,6 @@ let

   # we want flannel to use kubernetes itself as configuration backend, not direct etcd
   storageBackend = "kubernetes";
-
-  # needed for flannel to pass options to docker
-  mkDockerOpts = pkgs.runCommand "mk-docker-opts" {
-    buildInputs = [ pkgs.makeWrapper ];
-  } ''
-    mkdir -p $out
-
-    # bashInteractive needed for `compgen`
-    makeWrapper ${pkgs.bashInteractive}/bin/bash $out/mk-docker-opts --add-flags "${pkgs.kubernetes}/bin/mk-docker-opts.sh"
-  '';
 in
 {
   ###### interface
@@ -43,43 +33,17 @@ in
         cniVersion = "0.3.1";
         delegate = {
           isDefaultGateway = true;
-          bridge = "docker0";
+          bridge = "mynet";
         };
       }];
     };

-    systemd.services.mk-docker-opts = {
-      description = "Pre-Docker Actions";
-      path = with pkgs; [ gawk gnugrep ];
-      script = ''
-        ${mkDockerOpts}/mk-docker-opts -d /run/flannel/docker
-        systemctl restart docker
-      '';
-      serviceConfig.Type = "oneshot";
-    };
-
-    systemd.paths.flannel-subnet-env = {
-      wantedBy = [ "flannel.service" ];
-      pathConfig = {
-        PathModified = "/run/flannel/subnet.env";
-        Unit = "mk-docker-opts.service";
-      };
-    };
-
-    systemd.services.docker = {
-      environment.DOCKER_OPTS = "-b none";
-      serviceConfig.EnvironmentFile = "-/run/flannel/docker";
-    };
-
-    # read environment variables generated by mk-docker-opts
-    virtualisation.docker.extraOptions = "$DOCKER_OPTS";
-
     networking = {
       firewall.allowedUDPPorts = [
         8285 # flannel udp
         8472 # flannel vxlan
       ];
-      dhcpcd.denyInterfaces = [ "docker*" "flannel*" ];
+      dhcpcd.denyInterfaces = [ "mynet*" "flannel*" ];
     };

     services.kubernetes.pki.certs = {
@@ -23,7 +23,7 @@ let
     name = "pause";
     tag = "latest";
     contents = top.package.pause;
-    config.Cmd = "/bin/pause";
+    config.Cmd = ["/bin/pause"];
   };

   kubeconfig = top.lib.mkKubeConfig "kubelet" cfg.kubeconfig;
@@ -125,6 +125,18 @@ in
       };
     };

+    containerRuntime = mkOption {
+      description = "Which container runtime type to use";
+      type = enum ["docker" "remote"];
+      default = "remote";
+    };
+
+    containerRuntimeEndpoint = mkOption {
+      description = "Endpoint at which to find the container runtime api interface/socket";
+      type = str;
+      default = "unix:///var/run/containerd/containerd.sock";
+    };
+
     enable = mkEnableOption "Kubernetes kubelet.";

     extraOpts = mkOption {
@@ -235,16 +247,24 @@ in
   ###### implementation
   config = mkMerge [
     (mkIf cfg.enable {
       environment.etc."cni/net.d".source = cniConfig;

       services.kubernetes.kubelet.seedDockerImages = [infraContainer];

+      boot.kernel.sysctl = {
+        "net.bridge.bridge-nf-call-iptables"  = 1;
+        "net.ipv4.ip_forward"                 = 1;
+        "net.bridge.bridge-nf-call-ip6tables" = 1;
+      };
+
       systemd.services.kubelet = {
         description = "Kubernetes Kubelet Service";
         wantedBy = [ "kubernetes.target" ];
-        after = [ "network.target" "docker.service" "kube-apiserver.service" ];
+        after = [ "containerd.service" "network.target" "kube-apiserver.service" ];
         path = with pkgs; [
           gitMinimal
           openssh
-          docker
           util-linux
           iproute
           ethtool
@@ -254,8 +274,12 @@ in
         ] ++ lib.optional config.boot.zfs.enabled config.boot.zfs.package ++ top.path;
         preStart = ''
           ${concatMapStrings (img: ''
-            echo "Seeding docker image: ${img}"
-            docker load <${img}
+            echo "Seeding container image: ${img}"
+            ${if (lib.hasSuffix "gz" img) then
+              ''${pkgs.gzip}/bin/zcat "${img}" | ${pkgs.containerd}/bin/ctr -n k8s.io image import -''
+            else
+              ''${pkgs.coreutils}/bin/cat "${img}" | ${pkgs.containerd}/bin/ctr -n k8s.io image import -''
+            }
           '') cfg.seedDockerImages}

           rm /opt/cni/bin/* || true
@@ -306,6 +330,9 @@ in
             ${optionalString (cfg.tlsKeyFile != null)
               "--tls-private-key-file=${cfg.tlsKeyFile}"} \
             ${optionalString (cfg.verbosity != null) "--v=${toString cfg.verbosity}"} \
+            --container-runtime=${cfg.containerRuntime} \
+            --container-runtime-endpoint=${cfg.containerRuntimeEndpoint} \
+            --cgroup-driver=systemd \
             ${cfg.extraOpts}
           '';
           WorkingDirectory = top.dataDir;
@@ -315,7 +342,7 @@ in
       # Allways include cni plugins
       services.kubernetes.kubelet.cni.packages = [pkgs.cni-plugins];

-      boot.kernelModules = ["br_netfilter"];
+      boot.kernelModules = ["br_netfilter" "overlay"];

       services.kubernetes.kubelet.hostname = with config.networking;
         mkDefault (hostName + optionalString (domain != null) ".${domain}");
|
|||
tlsCertFile = mkDefault cert;
|
||||
tlsKeyFile = mkDefault key;
|
||||
serviceAccountKeyFile = mkDefault cfg.certs.serviceAccount.cert;
|
||||
serviceAccountSigningKeyFile = mkDefault cfg.certs.serviceAccount.key;
|
||||
kubeletClientCaFile = mkDefault caCert;
|
||||
kubeletClientCertFile = mkDefault cfg.certs.apiserverKubeletClient.cert;
|
||||
kubeletClientKeyFile = mkDefault cfg.certs.apiserverKubeletClient.key;
|
||||
|
|
|
@@ -89,6 +89,11 @@ in
       example = "dbi:Pg:dbname=hydra;host=postgres.example.org;user=foo;";
       description = ''
         The DBI string for Hydra database connection.
+
+        NOTE: Attempts to set `application_name` will be overridden by
+        `hydra-TYPE` (where TYPE is e.g. `evaluator`, `queue-runner`,
+        etc.) in all hydra services to more easily distinguish where
+        queries are coming from.
       '';
     };

@@ -284,7 +289,9 @@ in
       { wantedBy = [ "multi-user.target" ];
         requires = optional haveLocalDB "postgresql.service";
         after = optional haveLocalDB "postgresql.service";
-        environment = env;
+        environment = env // {
+          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-init";
+        };
         preStart = ''
           mkdir -p ${baseDir}
           chown hydra.hydra ${baseDir}
@@ -339,7 +346,9 @@ in
       { wantedBy = [ "multi-user.target" ];
         requires = [ "hydra-init.service" ];
         after = [ "hydra-init.service" ];
-        environment = serverEnv;
+        environment = serverEnv // {
+          HYDRA_DBI = "${serverEnv.HYDRA_DBI};application_name=hydra-server";
+        };
         restartTriggers = [ hydraConf ];
         serviceConfig =
           { ExecStart =
@@ -361,6 +370,7 @@ in
         environment = env // {
           PGPASSFILE = "${baseDir}/pgpass-queue-runner"; # grrr
           IN_SYSTEMD = "1"; # to get log severity levels
+          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-queue-runner";
         };
         serviceConfig =
           { ExecStart = "@${hydra-package}/bin/hydra-queue-runner hydra-queue-runner -v";
@@ -380,7 +390,9 @@ in
         after = [ "hydra-init.service" "network.target" ];
         path = with pkgs; [ hydra-package nettools jq ];
         restartTriggers = [ hydraConf ];
-        environment = env;
+        environment = env // {
+          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-evaluator";
+        };
         serviceConfig =
           { ExecStart = "@${hydra-package}/bin/hydra-evaluator hydra-evaluator";
             User = "hydra";
@@ -392,7 +404,9 @@ in
     systemd.services.hydra-update-gc-roots =
       { requires = [ "hydra-init.service" ];
         after = [ "hydra-init.service" ];
-        environment = env;
+        environment = env // {
+          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-update-gc-roots";
+        };
         serviceConfig =
           { ExecStart = "@${hydra-package}/bin/hydra-update-gc-roots hydra-update-gc-roots";
             User = "hydra";
@@ -403,7 +417,9 @@ in
     systemd.services.hydra-send-stats =
       { wantedBy = [ "multi-user.target" ];
         after = [ "hydra-init.service" ];
-        environment = env;
+        environment = env // {
+          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-send-stats";
+        };
         serviceConfig =
           { ExecStart = "@${hydra-package}/bin/hydra-send-stats hydra-send-stats";
             User = "hydra";
@@ -417,6 +433,7 @@ in
         restartTriggers = [ hydraConf ];
         environment = env // {
           PGPASSFILE = "${baseDir}/pgpass-queue-runner";
+          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-notify";
        };
         serviceConfig =
           { ExecStart = "@${hydra-package}/bin/hydra-notify hydra-notify";
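Given the new NOTE above, a user-supplied DBI string should simply omit `application_name`; each Hydra service appends its own. A sketch (hostnames are hypothetical):

```nix
{
  services.hydra = {
    enable = true;
    hydraURL = "https://hydra.example.org";     # hypothetical
    notificationSender = "hydra@example.org";   # hypothetical
    # No application_name here; hydra-init, hydra-server,
    # hydra-queue-runner, etc. each append their own:
    dbi = "dbi:Pg:dbname=hydra;host=postgres.example.org;user=hydra;";
  };
}
```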
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/alsa-monitor.conf.json (vendored, new file, 34 lines)
@@ -0,0 +1,34 @@
+{
+  "properties": {},
+  "rules": [
+    {
+      "matches": [
+        {
+          "device.name": "~alsa_card.*"
+        }
+      ],
+      "actions": {
+        "update-props": {
+          "api.alsa.use-acp": true,
+          "api.acp.auto-profile": false,
+          "api.acp.auto-port": false
+        }
+      }
+    },
+    {
+      "matches": [
+        {
+          "node.name": "~alsa_input.*"
+        },
+        {
+          "node.name": "~alsa_output.*"
+        }
+      ],
+      "actions": {
+        "update-props": {
+          "node.pause-on-idle": false
+        }
+      }
+    }
+  ]
+}
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/bluez-monitor.conf.json (vendored, new file, 30 lines)
@@ -0,0 +1,30 @@
+{
+  "properties": {},
+  "rules": [
+    {
+      "matches": [
+        {
+          "device.name": "~bluez_card.*"
+        }
+      ],
+      "actions": {
+        "update-props": {}
+      }
+    },
+    {
+      "matches": [
+        {
+          "node.name": "~bluez_input.*"
+        },
+        {
+          "node.name": "~bluez_output.*"
+        }
+      ],
+      "actions": {
+        "update-props": {
+          "node.pause-on-idle": false
+        }
+      }
+    }
+  ]
+}
|
26
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/client-rt.conf.json
vendored
Normal file
26
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/client-rt.conf.json
vendored
Normal file
|
@ -0,0 +1,26 @@
|
|||
{
|
||||
"context.properties": {
|
||||
"log.level": 0
|
||||
},
|
||||
"context.spa-libs": {
|
||||
"audio.convert.*": "audioconvert/libspa-audioconvert",
|
||||
"support.*": "support/libspa-support"
|
||||
},
|
||||
"context.modules": {
|
||||
"libpipewire-module-rtkit": {
|
||||
"args": {},
|
||||
"flags": [
|
||||
"ifexists",
|
||||
"nofail"
|
||||
]
|
||||
},
|
||||
"libpipewire-module-protocol-native": null,
|
||||
"libpipewire-module-client-node": null,
|
||||
"libpipewire-module-client-device": null,
|
||||
"libpipewire-module-adapter": null,
|
||||
"libpipewire-module-metadata": null,
|
||||
"libpipewire-module-session-manager": null
|
||||
},
|
||||
"filter.properties": {},
|
||||
"stream.properties": {}
|
||||
}
|
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/client.conf.json (vendored, new file, 19 lines)
@@ -0,0 +1,19 @@
+{
+  "context.properties": {
+    "log.level": 0
+  },
+  "context.spa-libs": {
+    "audio.convert.*": "audioconvert/libspa-audioconvert",
+    "support.*": "support/libspa-support"
+  },
+  "context.modules": {
+    "libpipewire-module-protocol-native": null,
+    "libpipewire-module-client-node": null,
+    "libpipewire-module-client-device": null,
+    "libpipewire-module-adapter": null,
+    "libpipewire-module-metadata": null,
+    "libpipewire-module-session-manager": null
+  },
+  "filter.properties": {},
+  "stream.properties": {}
+}
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/jack.conf.json (vendored, new file, 21 lines)
@@ -0,0 +1,21 @@
+{
+  "context.properties": {
+    "log.level": 0
+  },
+  "context.spa-libs": {
+    "support.*": "support/libspa-support"
+  },
+  "context.modules": {
+    "libpipewire-module-rtkit": {
+      "args": {},
+      "flags": [
+        "ifexists",
+        "nofail"
+      ]
+    },
+    "libpipewire-module-protocol-native": null,
+    "libpipewire-module-client-node": null,
+    "libpipewire-module-metadata": null
+  },
+  "jack.properties": {}
+}
|
53
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/media-session.conf.json
vendored
Normal file
53
third_party/nixpkgs/nixos/modules/services/desktops/pipewire/media-session.conf.json
vendored
Normal file
|
@ -0,0 +1,53 @@
|
{
  "context.properties": {},
  "context.spa-libs": {
    "api.bluez5.*": "bluez5/libspa-bluez5",
    "api.alsa.*": "alsa/libspa-alsa",
    "api.v4l2.*": "v4l2/libspa-v4l2",
    "api.libcamera.*": "libcamera/libspa-libcamera"
  },
  "context.modules": {
    "libpipewire-module-rtkit": {
      "args": {},
      "flags": [
        "ifexists",
        "nofail"
      ]
    },
    "libpipewire-module-protocol-native": null,
    "libpipewire-module-client-node": null,
    "libpipewire-module-client-device": null,
    "libpipewire-module-adapter": null,
    "libpipewire-module-metadata": null,
    "libpipewire-module-session-manager": null
  },
  "session.modules": {
    "default": [
      "flatpak",
      "portal",
      "v4l2",
      "suspend-node",
      "policy-node"
    ],
    "with-audio": [
      "metadata",
      "default-nodes",
      "default-profile",
      "default-routes",
      "alsa-seq",
      "alsa-monitor"
    ],
    "with-alsa": [
      "with-audio"
    ],
    "with-jack": [
      "with-audio"
    ],
    "with-pulseaudio": [
      "with-audio",
      "bluez5",
      "restore-stream",
      "streams-follow-default"
    ]
  }
}
@@ -9,18 +9,36 @@ let
     && pkgs.stdenv.isx86_64
     && pkgs.pkgsi686Linux.pipewire != null;

+  prioritizeNativeProtocol = {
+    "context.modules" = {
+      "libpipewire-module-protocol-native" = {
+        _priority = -100;
+        _content = null;
+      };
+    };
+  };
+
+  # Use upstream config files passed through spa-json-dump as the base
+  # Patched here as necessary for them to work with this module
+  defaults = {
+    alsa-monitor = (builtins.fromJSON (builtins.readFile ./alsa-monitor.conf.json));
+    bluez-monitor = (builtins.fromJSON (builtins.readFile ./bluez-monitor.conf.json));
+    media-session = recursiveUpdate (builtins.fromJSON (builtins.readFile ./media-session.conf.json)) prioritizeNativeProtocol;
+    v4l2-monitor = (builtins.fromJSON (builtins.readFile ./v4l2-monitor.conf.json));
+  };
+
   # Helpers for generating the pipewire JSON config file
   mkSPAValueString = v:
     if builtins.isList v then "[${lib.concatMapStringsSep " " mkSPAValueString v}]"
     else if lib.types.attrs.check v then
       "{${lib.concatStringsSep " " (mkSPAKeyValue v)}}"
     else if builtins.isString v then "\"${lib.generators.mkValueStringDefault { } v}\""
     else lib.generators.mkValueStringDefault { } v;

   mkSPAKeyValue = attrs: map (def: def.content) (
     lib.sortProperties
       (
         lib.mapAttrsToList
-          (k: v: lib.mkOrder (v._priority or 1000) "${lib.escape [ "=" ] k} = ${mkSPAValueString (v._content or v)}")
+          (k: v: lib.mkOrder (v._priority or 1000) "${lib.escape [ "=" ":" ] k} = ${mkSPAValueString (v._content or v)}")
           attrs
       )
   );
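Reviewer note: the `_priority`/`_content` convention consumed by `mkSPAKeyValue` above is easy to miss. A minimal sketch of how it behaves (a plain Nix value, names taken from the diff; the rendered output comment is approximate):

```nix
# Keys sort by _priority (default 1000); _content then replaces the wrapper
# attrset as the actual value once ordering is decided.
{
  "context.modules" = {
    # Emitted first, rendered as: libpipewire-module-protocol-native = null
    "libpipewire-module-protocol-native" = { _priority = -100; _content = null; };
    # No wrapper: keeps the default priority of 1000 and is rendered as given.
    "libpipewire-module-metadata" = null;
  };
}
```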
@@ -51,272 +69,41 @@ in {
       '';
     };

-    config = mkOption {
-      type = types.attrs;
-      description = ''
-        Configuration for the media session core.
-      '';
-      default = {
-        # media-session config file
-        properties = {
-          # Properties to configure the session and some
-          # modules
-          #mem.mlock-all = false;
-          #context.profile.modules = "default,rtkit";
-        };
-
-        spa-libs = {
-          # Mapping from factory name to library.
-          "api.bluez5.*" = "bluez5/libspa-bluez5";
-          "api.alsa.*" = "alsa/libspa-alsa";
-          "api.v4l2.*" = "v4l2/libspa-v4l2";
-          "api.libcamera.*" = "libcamera/libspa-libcamera";
-        };
-
-        modules = {
-          # These are the modules that are enabled when a file with
-          # the key name is found in the media-session.d config directory.
-          # the default bundle is always enabled.
-
-          default = [
-            "flatpak" # manages flatpak access
-            "portal" # manage portal permissions
-            "v4l2" # video for linux udev detection
-            #"libcamera" # libcamera udev detection
-            "suspend-node" # suspend inactive nodes
-            "policy-node" # configure and link nodes
-            #"metadata" # export metadata API
-            #"default-nodes" # restore default nodes
-            #"default-profile" # restore default profiles
-            #"default-routes" # restore default route
-            #"streams-follow-default" # move streams when default changes
-            #"alsa-seq" # alsa seq midi support
-            #"alsa-monitor" # alsa udev detection
-            #"bluez5" # bluetooth support
-            #"restore-stream" # restore stream settings
-          ];
-          "with-audio" = [
-            "metadata"
-            "default-nodes"
-            "default-profile"
-            "default-routes"
-            "alsa-seq"
-            "alsa-monitor"
-          ];
-          "with-alsa" = [
-            "with-audio"
-          ];
-          "with-jack" = [
-            "with-audio"
-          ];
-          "with-pulseaudio" = [
-            "with-audio"
-            "bluez5"
-            "restore-stream"
-            "streams-follow-default"
-          ];
-        };
-      };
-    };
-
-    alsaMonitorConfig = mkOption {
-      type = types.attrs;
-      description = ''
-        Configuration for the alsa monitor.
-      '';
-      default = {
-        # alsa-monitor config file
-        properties = {
-          #alsa.jack-device = true
-        };
-
-        rules = [
-          # an array of matches/actions to evaluate
-          {
-            # rules for matching a device or node. It is an array of
-            # properties that all need to match the regexp. If any of the
-            # matches work, the actions are executed for the object.
-            matches = [
-              {
-                # this matches all cards
-                device.name = "~alsa_card.*";
-              }
-            ];
-            actions = {
-              # actions can update properties on the matched object.
-              update-props = {
-                api.alsa.use-acp = true;
-                #api.alsa.use-ucm = true;
-                #api.alsa.soft-mixer = false;
-                #api.alsa.ignore-dB = false;
-                #device.profile-set = "profileset-name";
-                #device.profile = "default profile name";
-                api.acp.auto-profile = false;
-                api.acp.auto-port = false;
-                #device.nick = "My Device";
-              };
-            };
-          }
-          {
-            matches = [
-              {
-                # matches all sinks
-                node.name = "~alsa_input.*";
-              }
-              {
-                # matches all sources
-                node.name = "~alsa_output.*";
-              }
-            ];
-            actions = {
-              update-props = {
-                #node.nick = "My Node";
-                #node.nick = null;
-                #priority.driver = 100;
-                #priority.session = 100;
-                #node.pause-on-idle = false;
-                #resample.quality = 4;
-                #channelmix.normalize = false;
-                #channelmix.mix-lfe = false;
-                #audio.channels = 2;
-                #audio.format = "S16LE";
-                #audio.rate = 44100;
-                #audio.position = "FL,FR";
-                #api.alsa.period-size = 1024;
-                #api.alsa.headroom = 0;
-                #api.alsa.disable-mmap = false;
-                #api.alsa.disable-batch = false;
-              };
-            };
-          }
-        ];
-      };
-    };
-
-    bluezMonitorConfig = mkOption {
-      type = types.attrs;
-      description = ''
-        Configuration for the bluez5 monitor.
-      '';
-      default = {
-        # bluez-monitor config file
-        properties = {
-          # msbc is not expected to work on all headset + adapter combinations.
-          #bluez5.msbc-support = true;
-          #bluez5.sbc-xq-support = true;
-
-          # Enabled headset roles (default: [ hsp_hs hfp_ag ]), this
-          # property only applies to native backend. Currently some headsets
-          # (Sony WH-1000XM3) are not working with both hsp_ag and hfp_ag
-          # enabled, disable either hsp_ag or hfp_ag to work around it.
-          #
-          # Supported headset roles: hsp_hs (HSP Headset),
-          #                          hsp_ag (HSP Audio Gateway),
-          #                          hfp_ag (HFP Audio Gateway)
-          #bluez5.headset-roles = [ "hsp_hs" "hsp_ag" "hfp_ag" ];
-
-          # Enabled A2DP codecs (default: all)
-          #bluez5.codecs = [ "sbc" "aac" "ldac" "aptx" "aptx_hd" ];
-        };
-
-        rules = [
-          # an array of matches/actions to evaluate
-          {
-            # rules for matching a device or node. It is an array of
-            # properties that all need to match the regexp. If any of the
-            # matches work, the actions are executed for the object.
-            matches = [
-              {
-                # this matches all cards
-                device.name = "~bluez_card.*";
-              }
-            ];
-            actions = {
-              # actions can update properties on the matched object.
-              update-props = {
-                #device.nick = "My Device";
-              };
-            };
-          }
-          {
-            matches = [
-              {
-                # matches all sinks
-                node.name = "~bluez_input.*";
-              }
-              {
-                # matches all sources
-                node.name = "~bluez_output.*";
-              }
-            ];
-            actions = {
-              update-props = {
-                #node.nick = "My Node"
-                #node.nick = null;
-                #priority.driver = 100;
-                #priority.session = 100;
-                #node.pause-on-idle = false;
-                #resample.quality = 4;
-                #channelmix.normalize = false;
-                #channelmix.mix-lfe = false;
-              };
-            };
-          }
-        ];
-      };
-    };
-
-    v4l2MonitorConfig = mkOption {
-      type = types.attrs;
-      description = ''
-        Configuration for the V4L2 monitor.
-      '';
-      default = {
-        # v4l2-monitor config file
-        properties = {
-        };
-
-        rules = [
-          # an array of matches/actions to evaluate
-          {
-            # rules for matching a device or node. It is an array of
-            # properties that all need to match the regexp. If any of the
-            # matches work, the actions are executed for the object.
-            matches = [
-              {
-                # this matches all devices
-                device.name = "~v4l2_device.*";
-              }
-            ];
-            actions = {
-              # actions can update properties on the matched object.
-              update-props = {
-                #device.nick = "My Device";
-              };
-            };
-          }
-          {
-            matches = [
-              {
-                # matches all sinks
-                node.name = "~v4l2_input.*";
-              }
-              {
-                # matches all sources
-                node.name = "~v4l2_output.*";
-              }
-            ];
-            actions = {
-              update-props = {
-                #node.nick = "My Node";
-                #node.nick = null;
-                #priority.driver = 100;
-                #priority.session = 100;
-                #node.pause-on-idle = true;
-              };
-            };
-          }
-        ];
-      };
-    };
+    config = {
+      media-session = mkOption {
+        type = types.attrs;
+        description = ''
+          Configuration for the media session core. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/media-session.conf
+        '';
+        default = {};
+      };
+
+      alsa-monitor = mkOption {
+        type = types.attrs;
+        description = ''
+          Configuration for the alsa monitor. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/alsa-monitor.conf
+        '';
+        default = {};
+      };
+
+      bluez-monitor = mkOption {
+        type = types.attrs;
+        description = ''
+          Configuration for the bluez5 monitor. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/bluez-monitor.conf
+        '';
+        default = {};
+      };
+
+      v4l2-monitor = mkOption {
+        type = types.attrs;
+        description = ''
+          Configuration for the V4L2 monitor. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/media-session.d/v4l2-monitor.conf
+        '';
+        default = {};
+      };
+    };
@@ -325,16 +112,17 @@ in {
   ###### implementation
   config = mkIf cfg.enable {
     environment.systemPackages = [ cfg.package ];
+    services.pipewire.sessionManagerExecutable = "${cfg.package}/bin/pipewire-media-session";
     systemd.packages = [ cfg.package ];
     systemd.user.services.pipewire-media-session.wantedBy = [ "pipewire.service" ];

-    environment.etc."pipewire/media-session.d/media-session.conf" = { text = toSPAJSON cfg.config; };
-    environment.etc."pipewire/media-session.d/v4l2-monitor.conf" = { text = toSPAJSON cfg.v4l2MonitorConfig; };
+    environment.etc."pipewire/media-session.d/media-session.conf" = { text = toSPAJSON (recursiveUpdate defaults.media-session cfg.config.media-session); };
+    environment.etc."pipewire/media-session.d/v4l2-monitor.conf" = { text = toSPAJSON (recursiveUpdate defaults.v4l2-monitor cfg.config.v4l2-monitor); };

     environment.etc."pipewire/media-session.d/with-alsa" = mkIf config.services.pipewire.alsa.enable { text = ""; };
-    environment.etc."pipewire/media-session.d/alsa-monitor.conf" = mkIf config.services.pipewire.alsa.enable { text = toSPAJSON cfg.alsaMonitorConfig; };
+    environment.etc."pipewire/media-session.d/alsa-monitor.conf" = mkIf config.services.pipewire.alsa.enable { text = toSPAJSON (recursiveUpdate defaults.alsa-monitor cfg.config.alsa-monitor); };

     environment.etc."pipewire/media-session.d/with-pulseaudio" = mkIf config.services.pipewire.pulse.enable { text = ""; };
-    environment.etc."pipewire/media-session.d/bluez-monitor.conf" = mkIf config.services.pipewire.pulse.enable { text = toSPAJSON cfg.bluezMonitorConfig; };
+    environment.etc."pipewire/media-session.d/bluez-monitor.conf" = mkIf config.services.pipewire.pulse.enable { text = toSPAJSON (recursiveUpdate defaults.bluez-monitor cfg.config.bluez-monitor); };

     environment.etc."pipewire/media-session.d/with-jack" = mkIf config.services.pipewire.jack.enable { text = ""; };
   };
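Reviewer note: since each generated file is `recursiveUpdate defaults.<name> cfg.config.<name>`, an override only needs to mention the keys it changes. A hedged sketch, assuming the module's option prefix is `services.pipewire.media-session`:

```nix
# Sketch: override the alsa-monitor rules; unmentioned top-level keys still
# come from the bundled alsa-monitor.conf.json defaults. Note that list values
# such as rules are replaced wholesale by recursiveUpdate, not merged.
{
  services.pipewire.media-session.config.alsa-monitor = {
    rules = [{
      matches = [{ "device.name" = "~alsa_card.*"; }];
      actions."update-props"."api.alsa.use-acp" = true;
    }];
  };
}
```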
28  third_party/nixpkgs/nixos/modules/services/desktops/pipewire/pipewire-pulse.conf.json  vendored  Normal file
@@ -0,0 +1,28 @@
{
  "context.properties": {},
  "context.spa-libs": {
    "audio.convert.*": "audioconvert/libspa-audioconvert",
    "support.*": "support/libspa-support"
  },
  "context.modules": {
    "libpipewire-module-rtkit": {
      "args": {},
      "flags": [
        "ifexists",
        "nofail"
      ]
    },
    "libpipewire-module-protocol-native": null,
    "libpipewire-module-client-node": null,
    "libpipewire-module-adapter": null,
    "libpipewire-module-metadata": null,
    "libpipewire-module-protocol-pulse": {
      "args": {
        "server.address": [
          "unix:native"
        ]
      }
    }
  },
  "stream.properties": {}
}
55  third_party/nixpkgs/nixos/modules/services/desktops/pipewire/pipewire.conf.json  vendored  Normal file
@@ -0,0 +1,55 @@
{
  "context.properties": {
    "link.max-buffers": 16,
    "core.daemon": true,
    "core.name": "pipewire-0"
  },
  "context.spa-libs": {
    "audio.convert.*": "audioconvert/libspa-audioconvert",
    "api.alsa.*": "alsa/libspa-alsa",
    "api.v4l2.*": "v4l2/libspa-v4l2",
    "api.libcamera.*": "libcamera/libspa-libcamera",
    "api.bluez5.*": "bluez5/libspa-bluez5",
    "api.vulkan.*": "vulkan/libspa-vulkan",
    "api.jack.*": "jack/libspa-jack",
    "support.*": "support/libspa-support"
  },
  "context.modules": {
    "libpipewire-module-rtkit": {
      "args": {},
      "flags": [
        "ifexists",
        "nofail"
      ]
    },
    "libpipewire-module-protocol-native": null,
    "libpipewire-module-profiler": null,
    "libpipewire-module-metadata": null,
    "libpipewire-module-spa-device-factory": null,
    "libpipewire-module-spa-node-factory": null,
    "libpipewire-module-client-node": null,
    "libpipewire-module-client-device": null,
    "libpipewire-module-portal": {
      "flags": [
        "ifexists",
        "nofail"
      ]
    },
    "libpipewire-module-access": {
      "args": {}
    },
    "libpipewire-module-adapter": null,
    "libpipewire-module-link-factory": null,
    "libpipewire-module-session-manager": null
  },
  "context.objects": {
    "spa-node-factory": {
      "args": {
        "factory.name": "support.node.driver",
        "node.name": "Dummy-Driver",
        "priority.driver": 8000
      }
    }
  },
  "context.exec": {}
}
@@ -18,11 +18,53 @@ let
     ln -s "${cfg.package.jack}/lib" "$out/lib/pipewire"
   '';

+  prioritizeNativeProtocol = {
+    "context.modules" = {
+      # Most other modules depend on this, so put it first
+      "libpipewire-module-protocol-native" = {
+        _priority = -100;
+        _content = null;
+      };
+    };
+  };
+
+  fixDaemonModulePriorities = {
+    "context.modules" = {
+      # Most other modules depend on this, so put it first
+      "libpipewire-module-protocol-native" = {
+        _priority = -100;
+        _content = null;
+      };
+      # Needs to be before libpipewire-module-access
+      "libpipewire-module-portal" = {
+        _priority = -50;
+        _content = {
+          flags = [
+            "ifexists"
+            "nofail"
+          ];
+        };
+      };
+    };
+  };
+
+  # Use upstream config files passed through spa-json-dump as the base
+  # Patched here as necessary for them to work with this module
+  defaults = {
+    client = recursiveUpdate (builtins.fromJSON (builtins.readFile ./client.conf.json)) prioritizeNativeProtocol;
+    client-rt = recursiveUpdate (builtins.fromJSON (builtins.readFile ./client-rt.conf.json)) prioritizeNativeProtocol;
+    jack = recursiveUpdate (builtins.fromJSON (builtins.readFile ./jack.conf.json)) prioritizeNativeProtocol;
+    # Remove session manager invocation from the upstream generated file, it points to the wrong path
+    pipewire = recursiveUpdate (builtins.fromJSON (builtins.readFile ./pipewire.conf.json)) fixDaemonModulePriorities;
+    pipewire-pulse = recursiveUpdate (builtins.fromJSON (builtins.readFile ./pipewire-pulse.conf.json)) prioritizeNativeProtocol;
+  };
+
   # Helpers for generating the pipewire JSON config file
   mkSPAValueString = v:
     if builtins.isList v then "[${lib.concatMapStringsSep " " mkSPAValueString v}]"
     else if lib.types.attrs.check v then
       "{${lib.concatStringsSep " " (mkSPAKeyValue v)}}"
     else if builtins.isString v then "\"${lib.generators.mkValueStringDefault { } v}\""
     else lib.generators.mkValueStringDefault { } v;

   mkSPAKeyValue = attrs: map (def: def.content) (
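Reviewer note: to make the serializer concrete, here is a sketch of the mapping `mkSPAValueString` implements (outputs shown as comments; spacing approximate):

```nix
# mkSPAValueString 16         => "16"
# mkSPAValueString "x"        => "\"x\""
# mkSPAValueString [ 1 2 ]    => "[1 2]"
# mkSPAValueString { a = 1; } => "{a = 1}"
# Keys additionally pass through lib.escape [ "=" ":" ], so a key
# like "a:b" is emitted as "a\:b".
{ example = { a = 1; }; }
```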
@@ -64,131 +106,53 @@ in {
       '';
     };

-    config = mkOption {
-      type = types.attrs;
-      description = ''
-        Configuration for the pipewire daemon.
-      '';
-      default = {
-        properties = {
-          ## set-prop is used to configure properties in the system
-          #
-          # "library.name.system" = "support/libspa-support";
-          # "context.data-loop.library.name.system" = "support/libspa-support";
-          "link.max-buffers" = 16; # version < 3 clients can't handle more than 16
-          #"mem.allow-mlock" = false;
-          #"mem.mlock-all" = true;
-          ## https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/pipewire/pipewire.h#L93
-          #"log.level" = 2; # 5 is trace, which is verbose as hell, default is 2 which is warnings, 4 is debug output, 3 is info
-
-          ## Properties for the DSP configuration
-          #
-          #"default.clock.rate" = 48000;
-          #"default.clock.quantum" = 1024;
-          #"default.clock.min-quantum" = 32;
-          #"default.clock.max-quantum" = 8192;
-          #"default.video.width" = 640;
-          #"default.video.height" = 480;
-          #"default.video.rate.num" = 25;
-          #"default.video.rate.denom" = 1;
-        };
-
-        spa-libs = {
-          ## add-spa-lib <factory-name regex> <library-name>
-          #
-          # used to find spa factory names. It maps an spa factory name
-          # regular expression to a library name that should contain
-          # that factory.
-          #
-          "audio.convert*" = "audioconvert/libspa-audioconvert";
-          "api.alsa.*" = "alsa/libspa-alsa";
-          "api.v4l2.*" = "v4l2/libspa-v4l2";
-          "api.libcamera.*" = "libcamera/libspa-libcamera";
-          "api.bluez5.*" = "bluez5/libspa-bluez5";
-          "api.vulkan.*" = "vulkan/libspa-vulkan";
-          "api.jack.*" = "jack/libspa-jack";
-          "support.*" = "support/libspa-support";
-          # "videotestsrc" = "videotestsrc/libspa-videotestsrc";
-          # "audiotestsrc" = "audiotestsrc/libspa-audiotestsrc";
-        };
-
-        modules = {
-          ## <module-name> = { [args = "<key>=<value> ..."]
-          #                    [flags = ifexists] }
-          #                    [flags = [ifexists]|[nofail]}
-          #
-          # Loads a module with the given parameters.
-          # If ifexists is given, the module is ignored when it is not found.
-          # If nofail is given, module initialization failures are ignored.
-          #
-          libpipewire-module-rtkit = {
-            args = {
-              #rt.prio = 20;
-              #rt.time.soft = 200000;
-              #rt.time.hard = 200000;
-              #nice.level = -11;
-            };
-            flags = "ifexists|nofail";
-          };
-          libpipewire-module-protocol-native = { _priority = -100; _content = "null"; };
-          libpipewire-module-profiler = "null";
-          libpipewire-module-metadata = "null";
-          libpipewire-module-spa-device-factory = "null";
-          libpipewire-module-spa-node-factory = "null";
-          libpipewire-module-client-node = "null";
-          libpipewire-module-client-device = "null";
-          libpipewire-module-portal = "null";
-          libpipewire-module-access = {
-            args.access = {
-              allowed = ["${builtins.unsafeDiscardStringContext cfg.sessionManagerExecutable}"];
-              rejected = [];
-              restricted = [];
-              force = "flatpak";
-            };
-          };
-          libpipewire-module-adapter = "null";
-          libpipewire-module-link-factory = "null";
-          libpipewire-module-session-manager = "null";
-        };
-
-        objects = {
-          ## create-object [-nofail] <factory-name> [<key>=<value> ...]
-          #
-          # Creates an object from a PipeWire factory with the given parameters.
-          # If -nofail is given, errors are ignored (and no object is created)
-          #
-        };
-
-        exec = {
-          ## exec <program-name>
-          #
-          # Execute the given program. This is usually used to start the
-          # session manager. run the session manager with -h for options
-          #
-          "${builtins.unsafeDiscardStringContext cfg.sessionManagerExecutable}" = { args = "\"${lib.concatStringsSep " " cfg.sessionManagerArguments}\""; };
-        };
-      };
-    };
-
-    sessionManagerExecutable = mkOption {
-      type = types.str;
-      default = "";
-      example = literalExample ''${pkgs.pipewire.mediaSession}/bin/pipewire-media-session'';
-      description = ''
-        Path to the session manager executable.
-      '';
-    };
-
-    sessionManagerArguments = mkOption {
-      type = types.listOf types.str;
-      default = [];
-      example = literalExample ''["-p" "bluez5.msbc-support=true"]'';
-      description = ''
-        Arguments passed to the pipewire session manager.
-      '';
-    };
+    config = {
+      client = mkOption {
+        type = types.attrs;
+        default = {};
+        description = ''
+          Configuration for pipewire clients. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/client.conf.in
+        '';
+      };
+
+      client-rt = mkOption {
+        type = types.attrs;
+        default = {};
+        description = ''
+          Configuration for realtime pipewire clients. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/client-rt.conf.in
+        '';
+      };
+
+      jack = mkOption {
+        type = types.attrs;
+        default = {};
+        description = ''
+          Configuration for the pipewire daemon's jack module. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/jack.conf.in
+        '';
+      };
+
+      pipewire = mkOption {
+        type = types.attrs;
+        default = {};
+        description = ''
+          Configuration for the pipewire daemon. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/pipewire.conf.in
+        '';
+      };
+
+      pipewire-pulse = mkOption {
+        type = types.attrs;
+        default = {};
+        description = ''
+          Configuration for the pipewire-pulse daemon. For details see
+          https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/${cfg.package.version}/src/daemon/pipewire-pulse.conf.in
+        '';
+      };
+    };

     alsa = {
       enable = mkEnableOption "ALSA support";
       support32Bit = mkEnableOption "32-bit ALSA support on 64-bit systems";
@@ -253,13 +217,16 @@ in {
       source = "${cfg.package}/share/alsa/alsa.conf.d/99-pipewire-default.conf";
     };

+    environment.etc."pipewire/client.conf" = { text = toSPAJSON (recursiveUpdate defaults.client cfg.config.client); };
+    environment.etc."pipewire/client-rt.conf" = { text = toSPAJSON (recursiveUpdate defaults.client-rt cfg.config.client-rt); };
+    environment.etc."pipewire/jack.conf" = { text = toSPAJSON (recursiveUpdate defaults.jack cfg.config.jack); };
+    environment.etc."pipewire/pipewire.conf" = { text = toSPAJSON (recursiveUpdate defaults.pipewire cfg.config.pipewire); };
+    environment.etc."pipewire/pipewire-pulse.conf" = { text = toSPAJSON (recursiveUpdate defaults.pipewire-pulse cfg.config.pipewire-pulse); };

     environment.sessionVariables.LD_LIBRARY_PATH =
       lib.optional cfg.jack.enable "/run/current-system/sw/lib/pipewire";

     # https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/464#note_723554
-    systemd.user.services.pipewire.environment = {
-      "PIPEWIRE_LINK_PASSIVE" = "1";
-      "PIPEWIRE_CONFIG_FILE" = pkgs.writeText "pipewire.conf" (toSPAJSON cfg.config);
-    };
+    systemd.user.services.pipewire.environment."PIPEWIRE_LINK_PASSIVE" = "1";
   };
 }
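Reviewer note: with the per-file options in place, daemon tuning no longer means replacing the whole config attrset. A sketch, assuming the option prefix `services.pipewire` used by this module (the value is illustrative only):

```nix
# Sketch: raise the default sample rate; merged over the pipewire.conf.json
# defaults via lib.recursiveUpdate, so everything else stays upstream.
{
  services.pipewire.config.pipewire = {
    "context.properties" = {
      "default.clock.rate" = 96000; # illustrative value
    };
  };
}
```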
30  third_party/nixpkgs/nixos/modules/services/desktops/pipewire/v4l2-monitor.conf.json  vendored  Normal file
@@ -0,0 +1,30 @@
{
  "properties": {},
  "rules": [
    {
      "matches": [
        {
          "device.name": "~v4l2_device.*"
        }
      ],
      "actions": {
        "update-props": {}
      }
    },
    {
      "matches": [
        {
          "node.name": "~v4l2_input.*"
        },
        {
          "node.name": "~v4l2_output.*"
        }
      ],
      "actions": {
        "update-props": {
          "node.pause-on-idle": false
        }
      }
    }
  ]
}
@@ -4,7 +4,7 @@ with lib;

 let
   cfg = config.services.minetest-server;
-  flag = val: name: if val != null then "--${name} ${val} " else "";
+  flag = val: name: if val != null then "--${name} ${toString val} " else "";
   flags = [
     (flag cfg.gameId "gameid")
     (flag cfg.world "world")
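Reviewer note: the `toString` matters because Nix does not coerce integers inside string interpolation. A sketch of the fixed helper evaluated by hand:

```nix
# Evaluating the helper with an integer-typed option value (e.g. a port).
let
  flag = val: name: if val != null then "--${name} ${toString val} " else "";
in
  flag 30000 "port" # => "--port 30000 "; without toString this interpolation aborts evaluation
```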
@@ -3,21 +3,22 @@
 with lib;

 let
+  cfg = config.services.acpid;
+
   canonicalHandlers = {
     powerEvent = {
       event = "button/power.*";
-      action = config.services.acpid.powerEventCommands;
+      action = cfg.powerEventCommands;
     };

     lidEvent = {
       event = "button/lid.*";
-      action = config.services.acpid.lidEventCommands;
+      action = cfg.lidEventCommands;
     };

     acEvent = {
       event = "ac_adapter.*";
-      action = config.services.acpid.acEventCommands;
+      action = cfg.acEventCommands;
     };
   };

@@ -33,7 +34,7 @@ let
         echo "event=${handler.event}" > $fn
         echo "action=${pkgs.writeShellScriptBin "${name}.sh" handler.action }/bin/${name}.sh '%e'" >> $fn
       '';
-    in concatStringsSep "\n" (mapAttrsToList f (canonicalHandlers // config.services.acpid.handlers))
+    in concatStringsSep "\n" (mapAttrsToList f (canonicalHandlers // cfg.handlers))
   }
 '';

@@ -47,11 +48,7 @@ in

   services.acpid = {

-    enable = mkOption {
-      type = types.bool;
-      default = false;
-      description = "Whether to enable the ACPI daemon.";
-    };
+    enable = mkEnableOption "the ACPI daemon";

     logEvents = mkOption {
       type = types.bool;

@@ -129,26 +126,28 @@ in

   ###### implementation

-  config = mkIf config.services.acpid.enable {
+  config = mkIf cfg.enable {

     systemd.services.acpid = {
       description = "ACPI Daemon";
+      documentation = [ "man:acpid(8)" ];

       wantedBy = [ "multi-user.target" ];
       after = [ "systemd-udev-settle.service" ];

       path = [ pkgs.acpid ];

       serviceConfig = {
         Type = "forking";
+        ExecStart = escapeShellArgs
+          ([ "${pkgs.acpid}/bin/acpid"
+             "--foreground"
+             "--netlink"
+             "--confdir" "${acpiConfDir}"
+           ] ++ optional cfg.logEvents "--logevents"
+          );
       };

-      script = "acpid ${optionalString config.services.acpid.logEvents "--logevents"} --confdir ${acpiConfDir}";
+      unitConfig = {
+        ConditionVirtualization = "!systemd-nspawn";
+        ConditionPathExists = [ "/proc/acpi" ];
+      };
     };

   };
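Reviewer note: handlers supplied through `services.acpid.handlers` are merged with the canonical ones above and use the same `{ event, action }` shape. A sketch (event pattern and action are illustrative only):

```nix
# Sketch: an extra ACPI handler alongside the canonical power/lid/AC ones.
{
  services.acpid = {
    enable = true;
    handlers.muteButton = {
      event = "button/mute.*"; # illustrative event pattern
      action = ''
        logger "mute button pressed: $1"
      '';
    };
  };
}
```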
26  third_party/nixpkgs/nixos/modules/services/hardware/spacenavd.nix  vendored  Normal file
@@ -0,0 +1,26 @@
{ config, lib, pkgs, ... }:

with lib;

let cfg = config.hardware.spacenavd;

in {

  options = {
    hardware.spacenavd = {
      enable = mkEnableOption "spacenavd to support 3DConnexion devices";
    };
  };

  config = mkIf cfg.enable {
    systemd.user.services.spacenavd = {
      description = "Daemon for the Spacenavigator 6DOF mice by 3Dconnexion";
      after = [ "syslog.target" ];
      wantedBy = [ "graphical.target" ];
      serviceConfig = {
        ExecStart = "${pkgs.spacenavd}/bin/spacenavd -d -l syslog";
        StandardError = "syslog";
      };
    };
  };
}
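Reviewer note: usage of the new module is a one-line sketch:

```nix
# Enables the user-level spacenavd service defined above.
{
  hardware.spacenavd.enable = true;
}
```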
@@ -48,7 +48,7 @@ in {

   systemd.services.trezord = {
     description = "Trezor Bridge";
-    after = [ "systemd-udev-settle.service" "network.target" ];
+    after = [ "network.target" ];
     wantedBy = [ "multi-user.target" ];
     path = [];
     serviceConfig = {
@@ -1,69 +0,0 @@
-worker_processes 3
-
-listen ENV["UNICORN_PATH"] + "/tmp/sockets/gitlab.socket", :backlog => 1024
-listen "/run/gitlab/gitlab.socket", :backlog => 1024
-
-working_directory ENV["GITLAB_PATH"]
-
-pid ENV["UNICORN_PATH"] + "/tmp/pids/unicorn.pid"
-
-timeout 60
-
-# combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings
-# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
-preload_app true
-GC.respond_to?(:copy_on_write_friendly=) and
-  GC.copy_on_write_friendly = true
-
-check_client_connection false
-
-before_fork do |server, worker|
-  # the following is highly recommended for Rails + "preload_app true"
-  # as there's no need for the master process to hold a connection
-  defined?(ActiveRecord::Base) and
-    ActiveRecord::Base.connection.disconnect!
-
-  # The following is only recommended for memory/DB-constrained
-  # installations. It is not needed if your system can house
-  # twice as many worker_processes as you have configured.
-  #
-  # This allows a new master process to incrementally
-  # phase out the old master process with SIGTTOU to avoid a
-  # thundering herd (especially in the "preload_app false" case)
-  # when doing a transparent upgrade. The last worker spawned
-  # will then kill off the old master process with a SIGQUIT.
-  old_pid = "#{server.config[:pid]}.oldbin"
-  if old_pid != server.pid
-    begin
-      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
-      Process.kill(sig, File.read(old_pid).to_i)
-    rescue Errno::ENOENT, Errno::ESRCH
-    end
-  end
-
-  # Throttle the master from forking too quickly by sleeping. Due
-  # to the implementation of standard Unix signal handlers, this
-  # helps (but does not completely) prevent identical, repeated signals
-  # from being lost when the receiving process is busy.
-  # sleep 1
-end
-
-after_fork do |server, worker|
-  # per-process listener ports for debugging/admin/migrations
-  # addr = "127.0.0.1:#{9293 + worker.nr}"
-  # server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true)
-
-  # the following is *required* for Rails + "preload_app true",
-  defined?(ActiveRecord::Base) and
-    ActiveRecord::Base.establish_connection
-
-  # reset prometheus client, this will cause any opened metrics files to be closed
-  defined?(::Prometheus::Client.reinitialize_on_pid_change) &&
-    Prometheus::Client.reinitialize_on_pid_change
-
-  # if preload_app is true, then you may also want to check and
-  # restart any other shared sockets/descriptors such as Memcached,
-  # and Redis. TokyoCabinet file handles are safe to reuse
-  # between any number of forked children (assuming your kernel
-  # correctly implements pread()/pwrite() system calls)
-end
@@ -142,7 +142,7 @@ let

   gitlabEnv = {
     HOME = "${cfg.statePath}/home";
-    UNICORN_PATH = "${cfg.statePath}/";
+    PUMA_PATH = "${cfg.statePath}/";
     GITLAB_PATH = "${cfg.packages.gitlab}/share/gitlab/";
     SCHEMA = "${cfg.statePath}/db/structure.sql";
     GITLAB_UPLOADS_PATH = "${cfg.statePath}/uploads";
@@ -424,7 +424,7 @@ in {

     port = mkOption {
       type = types.int;
-      default = 465;
+      default = 25;
       description = "Port of the SMTP server for Gitlab.";
     };
@@ -641,6 +641,11 @@ in {

     environment.systemPackages = [ pkgs.git gitlab-rake gitlab-rails cfg.packages.gitlab-shell ];

+    systemd.targets.gitlab = {
+      description = "Common target for all GitLab services.";
+      wantedBy = [ "multi-user.target" ];
+    };
+
     # Redis is required for the sidekiq queue runner.
     services.redis.enable = mkDefault true;
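Reviewer note: the new `gitlab.target` gives all GitLab units one switch to hang off, which also makes it easy to attach site-local units. A sketch (the unit name is invented for the example):

```nix
# Sketch: a hypothetical custom unit that starts and stops with GitLab.
{
  systemd.services.gitlab-backup-hook = {
    wantedBy = [ "gitlab.target" ];
    partOf = [ "gitlab.target" ];
    serviceConfig.Type = "oneshot";
    script = "echo 'GitLab is up, running backup hook'";
  };
}
```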
@@ -655,36 +660,45 @@ in {
     # here.
     systemd.services.gitlab-postgresql = let pgsql = config.services.postgresql; in mkIf databaseActuallyCreateLocally {
       after = [ "postgresql.service" ];
-      wantedBy = [ "multi-user.target" ];
-      path = [ pgsql.package ];
+      bindsTo = [ "postgresql.service" ];
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
+      path = [
+        pgsql.package
+        pkgs.util-linux
+      ];
       script = ''
         set -eu

-        PSQL="${pkgs.util-linux}/bin/runuser -u ${pgsql.superUser} -- psql --port=${toString pgsql.port}"
+        PSQL() {
+          psql --port=${toString pgsql.port} "$@"
+        }

-        $PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${cfg.databaseName}'" | grep -q 1 || $PSQL -tAc 'CREATE DATABASE "${cfg.databaseName}" OWNER "${cfg.databaseUsername}"'
-        current_owner=$($PSQL -tAc "SELECT pg_catalog.pg_get_userbyid(datdba) FROM pg_catalog.pg_database WHERE datname = '${cfg.databaseName}'")
+        PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${cfg.databaseName}'" | grep -q 1 || PSQL -tAc 'CREATE DATABASE "${cfg.databaseName}" OWNER "${cfg.databaseUsername}"'
+        current_owner=$(PSQL -tAc "SELECT pg_catalog.pg_get_userbyid(datdba) FROM pg_catalog.pg_database WHERE datname = '${cfg.databaseName}'")
         if [[ "$current_owner" != "${cfg.databaseUsername}" ]]; then
-          $PSQL -tAc 'ALTER DATABASE "${cfg.databaseName}" OWNER TO "${cfg.databaseUsername}"'
+          PSQL -tAc 'ALTER DATABASE "${cfg.databaseName}" OWNER TO "${cfg.databaseUsername}"'
           if [[ -e "${config.services.postgresql.dataDir}/.reassigning_${cfg.databaseName}" ]]; then
             echo "Reassigning ownership of database ${cfg.databaseName} to user ${cfg.databaseUsername} failed on last boot. Failing..."
             exit 1
           fi
           touch "${config.services.postgresql.dataDir}/.reassigning_${cfg.databaseName}"
-          $PSQL "${cfg.databaseName}" -tAc "REASSIGN OWNED BY \"$current_owner\" TO \"${cfg.databaseUsername}\""
+          PSQL "${cfg.databaseName}" -tAc "REASSIGN OWNED BY \"$current_owner\" TO \"${cfg.databaseUsername}\""
           rm "${config.services.postgresql.dataDir}/.reassigning_${cfg.databaseName}"
         fi
-        $PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS pg_trgm"
-        $PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS btree_gist;"
+        PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS pg_trgm"
+        PSQL '${cfg.databaseName}' -tAc "CREATE EXTENSION IF NOT EXISTS btree_gist;"
       '';

       serviceConfig = {
+        User = pgsql.superUser;
         Type = "oneshot";
         RemainAfterExit = true;
       };
     };

     # Use postfix to send out mails.
-    services.postfix.enable = mkDefault true;
+    services.postfix.enable = mkDefault (cfg.smtp.enable && cfg.smtp.address == "localhost");

     users.users.${cfg.user} =
       { group = cfg.group;
@@ -703,7 +717,6 @@ in {
       "d ${cfg.statePath} 0750 ${cfg.user} ${cfg.group} -"
       "d ${cfg.statePath}/builds 0750 ${cfg.user} ${cfg.group} -"
       "d ${cfg.statePath}/config 0750 ${cfg.user} ${cfg.group} -"
-      "d ${cfg.statePath}/config/initializers 0750 ${cfg.user} ${cfg.group} -"
       "d ${cfg.statePath}/db 0750 ${cfg.user} ${cfg.group} -"
       "d ${cfg.statePath}/log 0750 ${cfg.user} ${cfg.group} -"
       "d ${cfg.statePath}/repositories 2770 ${cfg.user} ${cfg.group} -"
@@ -726,13 +739,156 @@ in {
       "L+ /run/gitlab/uploads - - - - ${cfg.statePath}/uploads"

       "L+ /run/gitlab/shell-config.yml - - - - ${pkgs.writeText "config.yml" (builtins.toJSON gitlabShellConfig)}"
-
-      "L+ ${cfg.statePath}/config/unicorn.rb - - - - ${./defaultUnicornConfig.rb}"
     ];

+    systemd.services.gitlab-config = {
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
+      path = with pkgs; [
+        jq
+        openssl
+        replace
+        git
+      ];
+      serviceConfig = {
+        Type = "oneshot";
+        User = cfg.user;
+        Group = cfg.group;
+        TimeoutSec = "infinity";
+        Restart = "on-failure";
+        WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
+        RemainAfterExit = true;
+
+        ExecStartPre = let
+          preStartFullPrivileges = ''
+            shopt -s dotglob nullglob
+            set -eu
+
+            chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/*
+            if [[ -n "$(ls -A '${cfg.statePath}'/config/)" ]]; then
+              chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/config/*
+            fi
+          '';
+        in "+${pkgs.writeShellScript "gitlab-pre-start-full-privileges" preStartFullPrivileges}";
+
+        ExecStart = pkgs.writeShellScript "gitlab-config" ''
+          set -eu
+
+          umask u=rwx,g=rx,o=
+
+          cp -f ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
+          rm -rf ${cfg.statePath}/db/*
+          rm -f ${cfg.statePath}/lib
+          find '${cfg.statePath}/config/' -maxdepth 1 -mindepth 1 -type d -execdir rm -rf {} \;
+          cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
+          cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/db/* ${cfg.statePath}/db
+          ln -sf ${extraGitlabRb} ${cfg.statePath}/config/initializers/extra-gitlab.rb
+
+          ${cfg.packages.gitlab-shell}/bin/install
+
+          ${optionalString cfg.smtp.enable ''
+            install -m u=rw ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
+            ${optionalString (cfg.smtp.passwordFile != null) ''
+              smtp_password=$(<'${cfg.smtp.passwordFile}')
+              replace-literal -e '@smtpPassword@' "$smtp_password" '${cfg.statePath}/config/initializers/smtp_settings.rb'
+            ''}
+          ''}
+
+          (
+            umask u=rwx,g=,o=
+
+            openssl rand -hex 32 > ${cfg.statePath}/gitlab_shell_secret
+
+            rm -f '${cfg.statePath}/config/database.yml'
+
+            ${if cfg.databasePasswordFile != null then ''
+                export db_password="$(<'${cfg.databasePasswordFile}')"
+
+                if [[ -z "$db_password" ]]; then
+                  >&2 echo "Database password was an empty string!"
+                  exit 1
+                fi
+
+                jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
+                   '.production.password = $ENV.db_password' \
+                   >'${cfg.statePath}/config/database.yml'
+              ''
+              else ''
+                jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
+                   >'${cfg.statePath}/config/database.yml'
+              ''
+            }
+
+            ${utils.genJqSecretsReplacementSnippet
+                gitlabConfig
+                "${cfg.statePath}/config/gitlab.yml"
+            }
+
+            rm -f '${cfg.statePath}/config/secrets.yml'
+
+            export secret="$(<'${cfg.secrets.secretFile}')"
+            export db="$(<'${cfg.secrets.dbFile}')"
+            export otp="$(<'${cfg.secrets.otpFile}')"
+            export jws="$(<'${cfg.secrets.jwsFile}')"
+            jq -n '{production: {secret_key_base: $ENV.secret,
+                                 otp_key_base: $ENV.otp,
+                                 db_key_base: $ENV.db,
+                                 openid_connect_signing_key: $ENV.jws}}' \
+               > '${cfg.statePath}/config/secrets.yml'
+          )
+
+          # We remove potentially broken links to old gitlab-shell versions
+          rm -Rf ${cfg.statePath}/repositories/**/*.git/hooks
+
+          git config --global core.autocrlf "input"
+        '';
+      };
+    };
+
+    systemd.services.gitlab-db-config = {
+      after = [ "gitlab-config.service" "gitlab-postgresql.service" "postgresql.service" ];
+      bindsTo = [
+        "gitlab-config.service"
+      ] ++ optional (cfg.databaseHost == "") "postgresql.service"
+        ++ optional databaseActuallyCreateLocally "gitlab-postgresql.service";
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
+      serviceConfig = {
+        Type = "oneshot";
+        User = cfg.user;
+        Group = cfg.group;
+        TimeoutSec = "infinity";
+        Restart = "on-failure";
+        WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
+        RemainAfterExit = true;
+
+        ExecStart = pkgs.writeShellScript "gitlab-db-config" ''
+          set -eu
+          umask u=rwx,g=rx,o=
+
+          initial_root_password="$(<'${cfg.initialRootPasswordFile}')"
+          ${gitlab-rake}/bin/gitlab-rake gitlab:db:configure GITLAB_ROOT_PASSWORD="$initial_root_password" \
+                                                             GITLAB_ROOT_EMAIL='${cfg.initialRootEmail}' > /dev/null
+        '';
+      };
+    };
+
     systemd.services.gitlab-sidekiq = {
-      after = [ "network.target" "redis.service" "gitlab.service" ];
-      wantedBy = [ "multi-user.target" ];
+      after = [
+        "network.target"
+        "redis.service"
+        "postgresql.service"
+        "gitlab-config.service"
+        "gitlab-db-config.service"
+      ];
+      bindsTo = [
+        "redis.service"
+        "gitlab-config.service"
+        "gitlab-db-config.service"
+      ] ++ optional (cfg.databaseHost == "") "postgresql.service";
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
       environment = gitlabEnv;
       path = with pkgs; [
         postgresqlPackage
@@ -758,9 +914,10 @@ in {
     };

     systemd.services.gitaly = {
-      after = [ "network.target" "gitlab.service" ];
-      bindsTo = [ "gitlab.service" ];
-      wantedBy = [ "multi-user.target" ];
+      after = [ "network.target" "gitlab-config.service" ];
+      bindsTo = [ "gitlab-config.service" ];
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
       path = with pkgs; [
         openssh
         procps # See https://gitlab.com/gitlab-org/gitaly/issues/1562
@@ -783,8 +940,10 @@ in {

     systemd.services.gitlab-pages = mkIf (gitlabConfig.production.pages.enabled or false) {
       description = "GitLab static pages daemon";
-      after = [ "network.target" "redis.service" "gitlab.service" ]; # gitlab.service creates configs
-      wantedBy = [ "multi-user.target" ];
+      after = [ "network.target" "gitlab-config.service" ];
+      bindsTo = [ "gitlab-config.service" ];
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];

       path = [ pkgs.unzip ];
@@ -803,7 +962,8 @@ in {

     systemd.services.gitlab-workhorse = {
       after = [ "network.target" ];
-      wantedBy = [ "multi-user.target" ];
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
       path = with pkgs; [
         exiftool
         git
@@ -832,8 +992,10 @@ in {

     systemd.services.gitlab-mailroom = mkIf (gitlabConfig.production.incoming_email.enabled or false) {
       description = "GitLab incoming mail daemon";
-      after = [ "network.target" "redis.service" "gitlab.service" ]; # gitlab.service creates configs
-      wantedBy = [ "multi-user.target" ];
+      after = [ "network.target" "redis.service" "gitlab-config.service" ];
+      bindsTo = [ "gitlab-config.service" ];
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
       environment = gitlabEnv;
       serviceConfig = {
         Type = "simple";
@@ -842,15 +1004,26 @@ in {

         User = cfg.user;
         Group = cfg.group;
-        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/bundle exec mail_room -c ${cfg.packages.gitlab}/share/gitlab/config.dist/mail_room.yml";
+        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/bundle exec mail_room -c ${cfg.statePath}/config/mail_room.yml";
         WorkingDirectory = gitlabEnv.HOME;
       };
     };

     systemd.services.gitlab = {
-      after = [ "gitlab-workhorse.service" "network.target" "gitlab-postgresql.service" "redis.service" ];
-      requires = [ "gitlab-sidekiq.service" ];
-      wantedBy = [ "multi-user.target" ];
+      after = [
+        "gitlab-workhorse.service"
+        "network.target"
+        "redis.service"
+        "gitlab-config.service"
+        "gitlab-db-config.service"
+      ];
+      bindsTo = [
+        "redis.service"
+        "gitlab-config.service"
+        "gitlab-db-config.service"
+      ] ++ optional (cfg.databaseHost == "") "postgresql.service";
+      wantedBy = [ "gitlab.target" ];
+      partOf = [ "gitlab.target" ];
       environment = gitlabEnv;
       path = with pkgs; [
         postgresqlPackage
@@ -868,96 +1041,7 @@ in {
         TimeoutSec = "infinity";
         Restart = "on-failure";
         WorkingDirectory = "${cfg.packages.gitlab}/share/gitlab";
-        ExecStartPre = let
-          preStartFullPrivileges = ''
-            shopt -s dotglob nullglob
-            set -eu
-
-            chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/*
-            chown --no-dereference '${cfg.user}':'${cfg.group}' '${cfg.statePath}'/config/*
-          '';
-          preStart = ''
-            set -eu
-
-            cp -f ${cfg.packages.gitlab}/share/gitlab/VERSION ${cfg.statePath}/VERSION
-            rm -rf ${cfg.statePath}/db/*
-            rm -rf ${cfg.statePath}/config/initializers/*
-            rm -f ${cfg.statePath}/lib
-            cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/config.dist/* ${cfg.statePath}/config
-            cp -rf --no-preserve=mode ${cfg.packages.gitlab}/share/gitlab/db/* ${cfg.statePath}/db
-            ln -sf ${extraGitlabRb} ${cfg.statePath}/config/initializers/extra-gitlab.rb
-
-            ${cfg.packages.gitlab-shell}/bin/install
-
-            ${optionalString cfg.smtp.enable ''
-              install -m u=rw ${smtpSettings} ${cfg.statePath}/config/initializers/smtp_settings.rb
-              ${optionalString (cfg.smtp.passwordFile != null) ''
-                smtp_password=$(<'${cfg.smtp.passwordFile}')
-                ${pkgs.replace}/bin/replace-literal -e '@smtpPassword@' "$smtp_password" '${cfg.statePath}/config/initializers/smtp_settings.rb'
-              ''}
-            ''}
-
-            (
-              umask u=rwx,g=,o=
-
-              ${pkgs.openssl}/bin/openssl rand -hex 32 > ${cfg.statePath}/gitlab_shell_secret
-
-              if [[ -h '${cfg.statePath}/config/database.yml' ]]; then
-                rm '${cfg.statePath}/config/database.yml'
-              fi
-
-              ${if cfg.databasePasswordFile != null then ''
-                  export db_password="$(<'${cfg.databasePasswordFile}')"
-
-                  if [[ -z "$db_password" ]]; then
-                    >&2 echo "Database password was an empty string!"
-                    exit 1
-                  fi
-
-                  ${pkgs.jq}/bin/jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
-                     '.production.password = $ENV.db_password' \
-                     >'${cfg.statePath}/config/database.yml'
-                ''
-                else ''
-                  ${pkgs.jq}/bin/jq <${pkgs.writeText "database.yml" (builtins.toJSON databaseConfig)} \
-                     >'${cfg.statePath}/config/database.yml'
-                ''
-              }
-
-              ${utils.genJqSecretsReplacementSnippet
-                  gitlabConfig
-                  "${cfg.statePath}/config/gitlab.yml"
-              }
-
-              if [[ -h '${cfg.statePath}/config/secrets.yml' ]]; then
-                rm '${cfg.statePath}/config/secrets.yml'
-              fi
-
-              export secret="$(<'${cfg.secrets.secretFile}')"
-              export db="$(<'${cfg.secrets.dbFile}')"
-              export otp="$(<'${cfg.secrets.otpFile}')"
-              export jws="$(<'${cfg.secrets.jwsFile}')"
-              ${pkgs.jq}/bin/jq -n '{production: {secret_key_base: $ENV.secret,
-                                                  otp_key_base: $ENV.otp,
-                                                  db_key_base: $ENV.db,
-                                                  openid_connect_signing_key: $ENV.jws}}' \
-                 > '${cfg.statePath}/config/secrets.yml'
-            )
-
-            initial_root_password="$(<'${cfg.initialRootPasswordFile}')"
-            ${gitlab-rake}/bin/gitlab-rake gitlab:db:configure GITLAB_ROOT_PASSWORD="$initial_root_password" \
-                                                               GITLAB_ROOT_EMAIL='${cfg.initialRootEmail}' > /dev/null
-
-            # We remove potentially broken links to old gitlab-shell versions
-            rm -Rf ${cfg.statePath}/repositories/**/*.git/hooks
-
-            ${pkgs.git}/bin/git config --global core.autocrlf "input"
-          '';
-        in [
-          "+${pkgs.writeShellScript "gitlab-pre-start-full-privileges" preStartFullPrivileges}"
-          "${pkgs.writeShellScript "gitlab-pre-start" preStart}"
-        ];
-        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/unicorn -c ${cfg.statePath}/config/unicorn.rb -E production";
+        ExecStart = "${cfg.packages.gitlab.rubyEnv}/bin/puma -C ${cfg.statePath}/config/puma.rb -e production";
       };

     };
@@ -115,4 +115,6 @@ in
       };
     };
   };
+
+  meta.maintainers = with lib.maintainers; [ erictapen ];
 }
@@ -183,8 +183,14 @@ in {
     };

     package = mkOption {
-      default = pkgs.home-assistant;
-      defaultText = "pkgs.home-assistant";
+      default = pkgs.home-assistant.overrideAttrs (oldAttrs: {
+        doInstallCheck = false;
+      });
+      defaultText = literalExample ''
+        pkgs.home-assistant.overrideAttrs (oldAttrs: {
+          doInstallCheck = false;
+        })
+      '';
       type = types.package;
       example = literalExample ''
         pkgs.home-assistant.override {

@@ -192,7 +198,7 @@ in {
         }
       '';
       description = ''
-        Home Assistant package to use.
+        Home Assistant package to use. By default the tests are disabled, as they take a considerable amount of time to complete.
         Override <literal>extraPackages</literal> or <literal>extraComponents</literal> in order to add additional dependencies.
         If you specify <option>config</option> and do not set <option>autoExtraComponents</option>
         to <literal>false</literal>, overriding <literal>extraComponents</literal> will have no effect.
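Reviewer note: if the skipped install checks are wanted after all, the default can simply be overridden back. A sketch:

```nix
# Sketch: opt back in to the (slow) install checks disabled by default above.
{
  services.home-assistant = {
    enable = true;
    package = pkgs.home-assistant.overrideAttrs (oldAttrs: {
      doInstallCheck = true;
    });
  };
}
```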
164  third_party/nixpkgs/nixos/modules/services/misc/lifecycled.nix  vendored  Normal file
@@ -0,0 +1,164 @@
|||
{ config, pkgs, lib, ... }:
|
||||
|
||||
with lib;
|
||||
let
|
||||
cfg = config.services.lifecycled;
|
||||
|
||||
# TODO: Add the ability to extend this with an rfc 42-like interface.
|
||||
# In the meantime, one can modify the environment (as
|
||||
# long as it's not overriding anything from here) with
|
||||
# systemd.services.lifecycled.serviceConfig.Environment
|
||||
configFile = pkgs.writeText "lifecycled" ''
|
||||
LIFECYCLED_HANDLER=${cfg.handler}
|
||||
${lib.optionalString (cfg.cloudwatchGroup != null) "LIFECYCLED_CLOUDWATCH_GROUP=${cfg.cloudwatchGroup}"}
|
||||
${lib.optionalString (cfg.cloudwatchStream != null) "LIFECYCLED_CLOUDWATCH_STREAM=${cfg.cloudwatchStream}"}
|
||||
${lib.optionalString cfg.debug "LIFECYCLED_DEBUG=${lib.boolToString cfg.debug}"}
|
||||
${lib.optionalString (cfg.instanceId != null) "LIFECYCLED_INSTANCE_ID=${cfg.instanceId}"}
|
||||
${lib.optionalString cfg.json "LIFECYCLED_JSON=${lib.boolToString cfg.json}"}
|
||||
${lib.optionalString cfg.noSpot "LIFECYCLED_NO_SPOT=${lib.boolToString cfg.noSpot}"}
|
||||
${lib.optionalString (cfg.snsTopic != null) "LIFECYCLED_SNS_TOPIC=${cfg.snsTopic}"}
|
||||
${lib.optionalString (cfg.awsRegion != null) "AWS_REGION=${cfg.awsRegion}"}
|
||||
'';
|
||||
in
|
||||
{
|
||||
meta.maintainers = with maintainers; [ cole-h grahamc ];
|
||||
|
||||
options = {
|
||||
services.lifecycled = {
|
||||
enable = mkEnableOption "lifecycled";
|
||||
|
||||
queueCleaner = {
|
||||
        enable = mkEnableOption "lifecycled-queue-cleaner";

        frequency = mkOption {
          type = types.str;
          default = "hourly";
          description = ''
            How often to trigger the queue cleaner.

            NOTE: This string should be a valid value for a systemd
            timer's <literal>OnCalendar</literal> configuration. See
            <citerefentry><refentrytitle>systemd.timer</refentrytitle><manvolnum>5</manvolnum></citerefentry>
            for more information.
          '';
        };

        parallel = mkOption {
          type = types.ints.unsigned;
          default = 20;
          description = ''
            The number of parallel deletes to run.
          '';
        };
      };

      instanceId = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          The instance ID to listen for events for.
        '';
      };

      snsTopic = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          The SNS topic that receives events.
        '';
      };

      noSpot = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Disable the spot termination listener.
        '';
      };

      handler = mkOption {
        type = types.path;
        description = ''
          The script to invoke to handle events.
        '';
      };

      json = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Enable JSON logging.
        '';
      };

      cloudwatchGroup = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          Write logs to a specific Cloudwatch Logs group.
        '';
      };

      cloudwatchStream = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          Write logs to a specific Cloudwatch Logs stream. Defaults to the instance ID.
        '';
      };

      debug = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Enable debugging information.
        '';
      };

      # XXX: Can be removed if / when
      # https://github.com/buildkite/lifecycled/pull/91 is merged.
      awsRegion = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          The region used for accessing AWS services.
        '';
      };
    };
  };

  ### Implementation ###

  config = mkMerge [
    (mkIf cfg.enable {
      environment.etc."lifecycled".source = configFile;

      systemd.packages = [ pkgs.lifecycled ];
      systemd.services.lifecycled = {
        wantedBy = [ "network-online.target" ];
        restartTriggers = [ configFile ];
      };
    })

    (mkIf cfg.queueCleaner.enable {
      systemd.services.lifecycled-queue-cleaner = {
        description = "Lifecycle Daemon Queue Cleaner";
        environment = optionalAttrs (cfg.awsRegion != null) { AWS_REGION = cfg.awsRegion; };
        serviceConfig = {
          Type = "oneshot";
          ExecStart = "${pkgs.lifecycled}/bin/lifecycled-queue-cleaner -parallel ${toString cfg.queueCleaner.parallel}";
        };
      };

      systemd.timers.lifecycled-queue-cleaner = {
        description = "Lifecycle Daemon Queue Cleaner Timer";
        wantedBy = [ "timers.target" ];
        after = [ "network-online.target" ];
        timerConfig = {
          Unit = "lifecycled-queue-cleaner.service";
          OnCalendar = "${cfg.queueCleaner.frequency}";
        };
      };
    })
  ];
}
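A minimal sketch of how a host configuration might drive this module, assuming the option tree above lives under `services.lifecycled` (the option path and the handler path are illustrative, not taken from this diff):

```nix
{
  services.lifecycled = {
    enable = true;
    handler = "/run/my-drain-script";  # hypothetical handler script
    queueCleaner = {
      enable = true;
      frequency = "hourly";  # any valid systemd OnCalendar value
      parallel = 20;
    };
  };
}
```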
@@ -21,13 +21,45 @@ in
    };

    dates = mkOption {
      default = "03:15";
      type = types.str;
      default = "03:15";
      example = "weekly";
      description = ''
        Specification (in the format described by
        How often or when garbage collection is performed. For most desktop and server systems
        a sufficient garbage collection is once a week.

        The format is described in
        <citerefentry><refentrytitle>systemd.time</refentrytitle>
        <manvolnum>7</manvolnum></citerefentry>) of the time at
        which the garbage collector will run.
        <manvolnum>7</manvolnum></citerefentry>.
      '';
    };

    randomizedDelaySec = mkOption {
      default = "0";
      type = types.str;
      example = "45min";
      description = ''
        Add a randomized delay before each automatic upgrade.
        The delay will be chosen between zero and this value.
        This value must be a time span in the format specified by
        <citerefentry><refentrytitle>systemd.time</refentrytitle>
        <manvolnum>7</manvolnum></citerefentry>
      '';
    };

    persistent = mkOption {
      default = true;
      type = types.bool;
      example = false;
      description = ''
        Takes a boolean argument. If true, the time when the service
        unit was last triggered is stored on disk. When the timer is
        activated, the service unit is triggered immediately if it
        would have been triggered at least once during the time when
        the timer was inactive. Such triggering is nonetheless
        subject to the delay imposed by RandomizedDelaySec=. This is
        useful to catch up on missed runs of the service when the
        system was powered down.
      '';
    };

@@ -50,12 +82,19 @@ in

  config = {

    systemd.services.nix-gc =
      { description = "Nix Garbage Collector";
    systemd.services.nix-gc = {
      description = "Nix Garbage Collector";
      script = "exec ${config.nix.package.out}/bin/nix-collect-garbage ${cfg.options}";
      startAt = optional cfg.automatic cfg.dates;
    };

    systemd.timers.nix-gc = lib.mkIf cfg.automatic {
      timerConfig = {
        RandomizedDelaySec = cfg.randomizedDelaySec;
        Persistent = cfg.persistent;
      };
    };

  };

}
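These options compose into a single timer-driven service. A short sketch using only option names that appear in this hunk (values illustrative):

```nix
{
  nix.gc = {
    automatic = true;
    dates = "weekly";              # systemd.time(7) calendar expression
    randomizedDelaySec = "45min";  # spread GC runs across machines
    persistent = true;             # catch up on runs missed while powered off
  };
}
```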
82 third_party/nixpkgs/nixos/modules/services/misc/plikd.nix vendored Normal file

@@ -0,0 +1,82 @@
{ config, pkgs, lib, ... }:

with lib;

let
  cfg = config.services.plikd;

  format = pkgs.formats.toml {};
  plikdCfg = format.generate "plikd.cfg" cfg.settings;
in
{
  options = {
    services.plikd = {
      enable = mkEnableOption "the plikd server";

      openFirewall = mkOption {
        type = types.bool;
        default = false;
        description = "Open ports in the firewall for the plikd.";
      };

      settings = mkOption {
        type = format.type;
        default = {};
        description = ''
          Configuration for plikd, see <link xlink:href="https://github.com/root-gg/plik/blob/master/server/plikd.cfg"/>
          for supported values.
        '';
      };
    };
  };

  config = mkIf cfg.enable {
    services.plikd.settings = mapAttrs (name: mkDefault) {
      ListenPort = 8080;
      ListenAddress = "localhost";
      DataBackend = "file";
      DataBackendConfig = {
        Directory = "/var/lib/plikd";
      };
      MetadataBackendConfig = {
        Driver = "sqlite3";
        ConnectionString = "/var/lib/plikd/plik.db";
      };
    };

    systemd.services.plikd = {
      description = "Plikd file sharing server";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        Type = "simple";
        ExecStart = "${pkgs.plikd}/bin/plikd --config ${plikdCfg}";
        Restart = "on-failure";
        StateDirectory = "plikd";
        LogsDirectory = "plikd";
        DynamicUser = true;

        # Basic hardening
        NoNewPrivileges = "yes";
        PrivateTmp = "yes";
        PrivateDevices = "yes";
        DevicePolicy = "closed";
        ProtectSystem = "strict";
        ProtectHome = "read-only";
        ProtectControlGroups = "yes";
        ProtectKernelModules = "yes";
        ProtectKernelTunables = "yes";
        RestrictAddressFamilies = "AF_UNIX AF_INET AF_INET6 AF_NETLINK";
        RestrictNamespaces = "yes";
        RestrictRealtime = "yes";
        RestrictSUIDSGID = "yes";
        MemoryDenyWriteExecute = "yes";
        LockPersonality = "yes";
      };
    };

    networking.firewall = mkIf cfg.openFirewall {
      allowedTCPPorts = [ cfg.settings.ListenPort ];
    };
  };
}
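Since `settings` is typed with `pkgs.formats.toml` and the module only applies `mkDefault` values, anything a user sets wins and is rendered straight into `plikd.cfg`. A minimal usage sketch (values illustrative):

```nix
{
  services.plikd = {
    enable = true;
    openFirewall = true;
    # Overrides the module's mkDefault of 8080; the firewall rule above
    # follows this value via cfg.settings.ListenPort.
    settings.ListenPort = 9000;
  };
}
```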
@@ -95,13 +95,13 @@ in
        ALERTA_SVR_CONF_FILE = alertaConf;
      };
      serviceConfig = {
        ExecStart = "${pkgs.python36Packages.alerta-server}/bin/alertad run --port ${toString cfg.port} --host ${cfg.bind}";
        ExecStart = "${pkgs.alerta-server}/bin/alertad run --port ${toString cfg.port} --host ${cfg.bind}";
        User = "alerta";
        Group = "alerta";
      };
    };

    environment.systemPackages = [ pkgs.python36Packages.alerta ];
    environment.systemPackages = [ pkgs.alerta ];

    users.users.alerta = {
      uid = config.ids.uids.alerta;
@@ -65,10 +65,18 @@ let

  dashboardFile = pkgs.writeText "dashboard.yaml" (builtins.toJSON dashboardConfiguration);

  notifierConfiguration = {
    apiVersion = 1;
    notifiers = cfg.provision.notifiers;
  };

  notifierFile = pkgs.writeText "notifier.yaml" (builtins.toJSON notifierConfiguration);

  provisionConfDir = pkgs.runCommand "grafana-provisioning" { } ''
    mkdir -p $out/{datasources,dashboards}
    mkdir -p $out/{datasources,dashboards,notifiers}
    ln -sf ${datasourceFile} $out/datasources/datasource.yaml
    ln -sf ${dashboardFile} $out/dashboards/dashboard.yaml
    ln -sf ${notifierFile} $out/notifiers/notifier.yaml
  '';

  # Get a submodule without any embedded metadata:

@@ -79,80 +87,80 @@ let
    options = {
      name = mkOption {
        type = types.str;
        description = "Name of the datasource. Required";
        description = "Name of the datasource. Required.";
      };
      type = mkOption {
        type = types.enum ["graphite" "prometheus" "cloudwatch" "elasticsearch" "influxdb" "opentsdb" "mysql" "mssql" "postgres" "loki"];
        description = "Datasource type. Required";
        description = "Datasource type. Required.";
      };
      access = mkOption {
        type = types.enum ["proxy" "direct"];
        default = "proxy";
        description = "Access mode. proxy or direct (Server or Browser in the UI). Required";
        description = "Access mode. proxy or direct (Server or Browser in the UI). Required.";
      };
      orgId = mkOption {
        type = types.int;
        default = 1;
        description = "Org id. will default to orgId 1 if not specified";
        description = "Org id. will default to orgId 1 if not specified.";
      };
      url = mkOption {
        type = types.str;
        description = "Url of the datasource";
        description = "Url of the datasource.";
      };
      password = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Database password, if used";
        description = "Database password, if used.";
      };
      user = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Database user, if used";
        description = "Database user, if used.";
      };
      database = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Database name, if used";
        description = "Database name, if used.";
      };
      basicAuth = mkOption {
        type = types.nullOr types.bool;
        default = null;
        description = "Enable/disable basic auth";
        description = "Enable/disable basic auth.";
      };
      basicAuthUser = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Basic auth username";
        description = "Basic auth username.";
      };
      basicAuthPassword = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = "Basic auth password";
        description = "Basic auth password.";
      };
      withCredentials = mkOption {
        type = types.bool;
        default = false;
        description = "Enable/disable with credentials headers";
        description = "Enable/disable with credentials headers.";
      };
      isDefault = mkOption {
        type = types.bool;
        default = false;
        description = "Mark as default datasource. Max one per org";
        description = "Mark as default datasource. Max one per org.";
      };
      jsonData = mkOption {
        type = types.nullOr types.attrs;
        default = null;
        description = "Datasource specific configuration";
        description = "Datasource specific configuration.";
      };
      secureJsonData = mkOption {
        type = types.nullOr types.attrs;
        default = null;
        description = "Datasource specific secure configuration";
        description = "Datasource specific secure configuration.";
      };
      version = mkOption {
        type = types.int;
        default = 1;
        description = "Version";
        description = "Version.";
      };
      editable = mkOption {
        type = types.bool;

@@ -168,41 +176,99 @@ let
      name = mkOption {
        type = types.str;
        default = "default";
        description = "Provider name";
        description = "Provider name.";
      };
      orgId = mkOption {
        type = types.int;
        default = 1;
        description = "Organization ID";
        description = "Organization ID.";
      };
      folder = mkOption {
        type = types.str;
        default = "";
        description = "Add dashboards to the specified folder";
        description = "Add dashboards to the specified folder.";
      };
      type = mkOption {
        type = types.str;
        default = "file";
        description = "Dashboard provider type";
        description = "Dashboard provider type.";
      };
      disableDeletion = mkOption {
        type = types.bool;
        default = false;
        description = "Disable deletion when JSON file is removed";
        description = "Disable deletion when JSON file is removed.";
      };
      updateIntervalSeconds = mkOption {
        type = types.int;
        default = 10;
        description = "How often Grafana will scan for changed dashboards";
        description = "How often Grafana will scan for changed dashboards.";
      };
      options = {
        path = mkOption {
          type = types.path;
          description = "Path grafana will watch for dashboards";
          description = "Path grafana will watch for dashboards.";
        };
      };
    };
  };

  grafanaTypes.notifierConfig = types.submodule {
    options = {
      name = mkOption {
        type = types.str;
        default = "default";
        description = "Notifier name.";
      };
      type = mkOption {
        type = types.enum ["dingding" "discord" "email" "googlechat" "hipchat" "kafka" "line" "teams" "opsgenie" "pagerduty" "prometheus-alertmanager" "pushover" "sensu" "sensugo" "slack" "telegram" "threema" "victorops" "webhook"];
        description = "Notifier type.";
      };
      uid = mkOption {
        type = types.str;
        description = "Unique notifier identifier.";
      };
      org_id = mkOption {
        type = types.int;
        default = 1;
        description = "Organization ID.";
      };
      org_name = mkOption {
        type = types.str;
        default = "Main Org.";
        description = "Organization name.";
      };
      is_default = mkOption {
        type = types.bool;
        description = "Is the default notifier.";
        default = false;
      };
      send_reminder = mkOption {
        type = types.bool;
        default = true;
        description = "Should the notifier be sent reminder notifications while alerts continue to fire.";
      };
      frequency = mkOption {
        type = types.str;
        default = "5m";
        description = "How frequently should the notifier be sent reminders.";
      };
      disable_resolve_message = mkOption {
        type = types.bool;
        default = false;
        description = "Turn off the message that sends when an alert returns to OK.";
      };
      settings = mkOption {
        type = types.nullOr types.attrs;
        default = null;
        description = "Settings for the notifier type.";
      };
      secure_settings = mkOption {
        type = types.nullOr types.attrs;
        default = null;
        description = "Secure settings for the notifier type.";
      };
    };
  };
in {
  options.services.grafana = {
    enable = mkEnableOption "grafana";

@@ -337,17 +403,23 @@ in {
    provision = {
      enable = mkEnableOption "provision";
      datasources = mkOption {
        description = "Grafana datasources configuration";
        description = "Grafana datasources configuration.";
        default = [];
        type = types.listOf grafanaTypes.datasourceConfig;
        apply = x: map _filter x;
      };
      dashboards = mkOption {
        description = "Grafana dashboard configuration";
        description = "Grafana dashboard configuration.";
        default = [];
        type = types.listOf grafanaTypes.dashboardConfig;
        apply = x: map _filter x;
      };
      notifiers = mkOption {
        description = "Grafana notifier configuration.";
        default = [];
        type = types.listOf grafanaTypes.notifierConfig;
        apply = x: map _filter x;
      };
    };

    security = {

@@ -391,12 +463,12 @@ in {
    smtp = {
      enable = mkEnableOption "smtp";
      host = mkOption {
        description = "Host to connect to";
        description = "Host to connect to.";
        default = "localhost:25";
        type = types.str;
      };
      user = mkOption {
        description = "User used for authentication";
        description = "User used for authentication.";
        default = "";
        type = types.str;
      };

@@ -417,7 +489,7 @@ in {
        type = types.nullOr types.path;
      };
      fromAddress = mkOption {
        description = "Email address used for sending";
        description = "Email address used for sending.";
        default = "admin@grafana.localhost";
        type = types.str;
      };

@@ -425,7 +497,7 @@ in {

    users = {
      allowSignUp = mkOption {
        description = "Disable user signup / registration";
        description = "Disable user signup / registration.";
        default = false;
        type = types.bool;
      };

@@ -451,17 +523,17 @@ in {

    auth.anonymous = {
      enable = mkOption {
        description = "Whether to allow anonymous access";
        description = "Whether to allow anonymous access.";
        default = false;
        type = types.bool;
      };
      org_name = mkOption {
        description = "Which organization to allow anonymous access to";
        description = "Which organization to allow anonymous access to.";
        default = "Main Org.";
        type = types.str;
      };
      org_role = mkOption {
        description = "Which role anonymous users have in the organization";
        description = "Which role anonymous users have in the organization.";
        default = "Viewer";
        type = types.str;
      };

@@ -470,7 +542,7 @@ in {

    analytics.reporting = {
      enable = mkOption {
        description = "Whether to allow anonymous usage reporting to stats.grafana.net";
        description = "Whether to allow anonymous usage reporting to stats.grafana.net.";
        default = true;
        type = types.bool;
      };

@@ -496,6 +568,9 @@ in {
      (optional (
        any (x: x.password != null || x.basicAuthPassword != null || x.secureJsonData != null) cfg.provision.datasources
      ) "Datasource passwords will be stored as plaintext in the Nix store!")
      (optional (
        any (x: x.secure_settings != null) cfg.provision.notifiers
      ) "Notifier secure settings will be stored as plaintext in the Nix store!")
    ];

    environment.systemPackages = [ cfg.package ];
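The new `notifiers` list is serialized to `notifier.yaml` and linked into the provisioning directory alongside datasources and dashboards. A hedged sketch of declaring one notifier (the `settings` schema depends on the notifier type and is assumed here, not defined by this module):

```nix
{
  services.grafana.provision = {
    enable = true;
    notifiers = [
      {
        uid = "smtp-alerts";   # illustrative unique identifier
        name = "ops-email";
        type = "email";
        # Passed through to Grafana as-is; "addresses" is an assumed
        # key for the email notifier type.
        settings.addresses = "ops@example.org";
      }
    ];
  };
}
```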
@@ -468,7 +468,7 @@ let
        '';
      };

      value = mkOption {
      values = mkOption {
        type = types.listOf types.str;
        default = [];
        description = ''
@@ -316,7 +316,7 @@ in
    client = {
      enable = mkEnableOption "Ceph client configuration";
      extraConfig = mkOption {
        type = with types; attrsOf str;
        type = with types; attrsOf (attrsOf str);
        default = {};
        example = ''
          {
@@ -162,10 +162,7 @@ in {
        NODE_NAME = cfg.nodeName;
      };
      path = [ pkgs.iptables ];
      preStart = ''
        mkdir -p /run/flannel
        touch /run/flannel/docker
      '' + optionalString (cfg.storageBackend == "etcd") ''
      preStart = optionalString (cfg.storageBackend == "etcd") ''
        echo "setting network configuration"
        until ${pkgs.etcdctl}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
        do

@@ -177,6 +174,7 @@ in {
        ExecStart = "${cfg.package}/bin/flannel";
        Restart = "always";
        RestartSec = "10s";
        RuntimeDirectory = "flannel";
      };
    };
@@ -8,9 +8,9 @@ let
  # Convert systemd-style address specification to kresd config line(s).
  # On Nix level we don't attempt to precisely validate the address specifications.
  mkListen = kind: addr: let
    al_v4 = builtins.match "([0-9.]\+):([0-9]\+)" addr;
    al_v6 = builtins.match "\\[(.\+)]:([0-9]\+)" addr;
    al_portOnly = builtins.match "()([0-9]\+)" addr;
    al_v4 = builtins.match "([0-9.]+):([0-9]+)" addr;
    al_v6 = builtins.match "\\[(.+)]:([0-9]+)" addr;
    al_portOnly = builtins.match "()([0-9]+)" addr;
    al = findFirst (a: a != null)
      (throw "services.kresd.*: incorrect address specification '${addr}'")
      [ al_v4 al_v6 al_portOnly ];
@@ -8,30 +8,19 @@ let
  cfg = config.services.clamav;
  pkg = pkgs.clamav;

  clamdConfigFile = pkgs.writeText "clamd.conf" ''
    DatabaseDirectory ${stateDir}
    LocalSocket ${runDir}/clamd.ctl
    PidFile ${runDir}/clamd.pid
    TemporaryDirectory /tmp
    User clamav
    Foreground yes
  toKeyValue = generators.toKeyValue {
    mkKeyValue = generators.mkKeyValueDefault {} " ";
    listsAsDuplicateKeys = true;
  };

    ${cfg.daemon.extraConfig}
  '';

  freshclamConfigFile = pkgs.writeText "freshclam.conf" ''
    DatabaseDirectory ${stateDir}
    Foreground yes
    Checks ${toString cfg.updater.frequency}

    ${cfg.updater.extraConfig}

    DatabaseMirror database.clamav.net
  '';
  clamdConfigFile = pkgs.writeText "clamd.conf" (toKeyValue cfg.daemon.settings);
  freshclamConfigFile = pkgs.writeText "freshclam.conf" (toKeyValue cfg.updater.settings);
in
{
  imports = [
    (mkRenamedOptionModule [ "services" "clamav" "updater" "config" ] [ "services" "clamav" "updater" "extraConfig" ])
    (mkRemovedOptionModule [ "services" "clamav" "updater" "config" ] "Use services.clamav.updater.settings instead.")
    (mkRemovedOptionModule [ "services" "clamav" "updater" "extraConfig" ] "Use services.clamav.updater.settings instead.")
    (mkRemovedOptionModule [ "services" "clamav" "daemon" "extraConfig" ] "Use services.clamav.daemon.settings instead.")
  ];

  options = {

@@ -39,12 +28,12 @@ in
    daemon = {
      enable = mkEnableOption "ClamAV clamd daemon";

      extraConfig = mkOption {
        type = types.lines;
        default = "";
      settings = mkOption {
        type = with types; attrsOf (oneOf [ bool int str (listOf str) ]);
        default = {};
        description = ''
          Extra configuration for clamd. Contents will be added verbatim to the
          configuration file.
          ClamAV configuration. Refer to <link xlink:href="https://linux.die.net/man/5/clamd.conf"/>,
          for details on supported values.
        '';
      };
    };

@@ -68,12 +57,12 @@ in
      '';
    };

    extraConfig = mkOption {
      type = types.lines;
      default = "";
    settings = mkOption {
      type = with types; attrsOf (oneOf [ bool int str (listOf str) ]);
      default = {};
      description = ''
        Extra configuration for freshclam. Contents will be added verbatim to the
        configuration file.
        freshclam configuration. Refer to <link xlink:href="https://linux.die.net/man/5/freshclam.conf"/>,
        for details on supported values.
      '';
    };
  };

@@ -93,6 +82,22 @@ in
    users.groups.${clamavGroup} =
      { gid = config.ids.gids.clamav; };

    services.clamav.daemon.settings = {
      DatabaseDirectory = stateDir;
      LocalSocket = "${runDir}/clamd.ctl";
      PidFile = "${runDir}/clamd.pid";
      TemporaryDirectory = "/tmp";
      User = "clamav";
      Foreground = true;
    };

    services.clamav.updater.settings = {
      DatabaseDirectory = stateDir;
      Foreground = true;
      Checks = cfg.updater.frequency;
      DatabaseMirror = [ "database.clamav.net" ];
    };

    environment.etc."clamav/freshclam.conf".source = freshclamConfigFile;
    environment.etc."clamav/clamd.conf".source = clamdConfigFile;
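With the `toKeyValue` generator above (space-separated keys, `listsAsDuplicateKeys = true`), user-supplied attributes merge with the module defaults and lists render as repeated keys. A small sketch, assuming `LogTime` and `ExtendedDetectionInfo` as illustrative clamd.conf keys:

```nix
{
  services.clamav.daemon = {
    enable = true;
    settings = {
      LogTime = true;                 # rendered as: LogTime true
      ExtendedDetectionInfo = true;
      # A list value would emit one "Key value" line per element.
    };
  };
}
```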
@@ -329,7 +329,7 @@ in
        extraConfig = "internal;";
      };

      locations."~ ^/lib.*\.(js|css|gif|png|ico|jpg|jpeg)$" = {
      locations."~ ^/lib.*\\.(js|css|gif|png|ico|jpg|jpeg)$" = {
        extraConfig = "expires 365d;";
      };

@@ -349,7 +349,7 @@ in
        '';
      };

      locations."~ \.php$" = {
      locations."~ \\.php$" = {
        extraConfig = ''
          try_files $uri $uri/ /doku.php;
          include ${pkgs.nginx}/conf/fastcgi_params;
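The fix here is about Nix string escaping, not nginx: in a double-quoted Nix string an unrecognized escape like `\.` collapses to a bare `.`, so the regex silently matched any character. A self-contained sketch of the difference:

```nix
let
  wrong = "~ \.php$";   # "\." collapses to "." → nginx regex matches any char
  right = "~ \\.php$";  # "\\." yields "\." → nginx regex matches a literal dot
in { inherit wrong right; }
# evaluates to: { right = "~ \\.php$"; wrong = "~ .php$"; }
```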
@@ -28,7 +28,10 @@ let
    upload_max_filesize = cfg.maxUploadSize;
    post_max_size = cfg.maxUploadSize;
    memory_limit = cfg.maxUploadSize;
  } // cfg.phpOptions;
  } // cfg.phpOptions
    // optionalAttrs cfg.caching.apcu {
      "apc.enable_cli" = "1";
    };

  occ = pkgs.writeScriptBin "nextcloud-occ" ''
    #! ${pkgs.runtimeShell}

@@ -86,7 +89,7 @@ in {
    package = mkOption {
      type = types.package;
      description = "Which package to use for the Nextcloud instance.";
      relatedPackages = [ "nextcloud18" "nextcloud19" "nextcloud20" ];
      relatedPackages = [ "nextcloud19" "nextcloud20" "nextcloud21" ];
    };

    maxUploadSize = mkOption {

@@ -280,6 +283,24 @@ in {
        may be served via HTTPS.
      '';
    };

    defaultPhoneRegion = mkOption {
      default = null;
      type = types.nullOr types.str;
      example = "DE";
      description = ''
        <warning>
        <para>This option exists since Nextcloud 21! If older versions are used,
        this will throw an eval-error!</para>
        </warning>

        <link xlink:href="https://www.iso.org/iso-3166-country-codes.html">ISO 3611-1</link>
        country codes for automatic phone-number detection without a country code.

        With e.g. <literal>DE</literal> set, the <literal>+49</literal> can be omitted for
        phone-numbers.
      '';
    };
  };

  caching = {

@@ -345,10 +366,13 @@ in {
        && !(acfg.adminpass != null && acfg.adminpassFile != null));
        message = "Please specify exactly one of adminpass or adminpassFile";
      }
      { assertion = versionOlder cfg.package.version "21" -> cfg.config.defaultPhoneRegion == null;
        message = "The `defaultPhoneRegion'-setting is only supported for Nextcloud >=21!";
      }
    ];

    warnings = let
      latest = 20;
      latest = 21;
      upgradeWarning = major: nixos:
        ''
          A legacy Nextcloud install (from before NixOS ${nixos}) may be installed.

@@ -366,9 +390,9 @@ in {
        Using config.services.nextcloud.poolConfig is deprecated and will become unsupported in a future release.
        Please migrate your configuration to config.services.nextcloud.poolSettings.
      '')
      ++ (optional (versionOlder cfg.package.version "18") (upgradeWarning 17 "20.03"))
      ++ (optional (versionOlder cfg.package.version "19") (upgradeWarning 18 "20.09"))
      ++ (optional (versionOlder cfg.package.version "20") (upgradeWarning 19 "21.05"));
      ++ (optional (versionOlder cfg.package.version "20") (upgradeWarning 19 "21.05"))
      ++ (optional (versionOlder cfg.package.version "21") (upgradeWarning 20 "21.05"));

    services.nextcloud.package = with pkgs;
      mkDefault (

@@ -378,14 +402,13 @@ in {
          nextcloud defined in an overlay, please set `services.nextcloud.package` to
          `pkgs.nextcloud`.
        ''
      else if versionOlder stateVersion "20.03" then nextcloud17
      else if versionOlder stateVersion "20.09" then nextcloud18
      # 21.03 will not be an official release - it was instead 21.05.
      # This versionOlder statement remains set to 21.03 for backwards compatibility.
      # See https://github.com/NixOS/nixpkgs/pull/108899 and
      # https://github.com/NixOS/rfcs/blob/master/rfcs/0080-nixos-release-schedule.md.
      else if versionOlder stateVersion "21.03" then nextcloud19
      else nextcloud20
      else nextcloud21
      );
  }

@@ -443,6 +466,7 @@ in {
      'dbtype' => '${c.dbtype}',
      'trusted_domains' => ${writePhpArrary ([ cfg.hostName ] ++ c.extraTrustedDomains)},
      'trusted_proxies' => ${writePhpArrary (c.trustedProxies)},
      ${optionalString (c.defaultPhoneRegion != null) "'default_phone_region' => '${c.defaultPhoneRegion}',"}
    ];
  '';
  occInstallCmd = let

@@ -591,6 +615,14 @@ in {
        access_log off;
      '';
    };
    "= /" = {
      priority = 100;
      extraConfig = ''
        if ( $http_user_agent ~ ^DavClnt ) {
          return 302 /remote.php/webdav/$is_args$args;
        }
      '';
    };
    "/" = {
      priority = 900;
      extraConfig = "rewrite ^ /index.php;";

@@ -609,6 +641,9 @@ in {
        location = /.well-known/caldav {
          return 301 /remote.php/dav;
        }
        location ~ ^/\.well-known/(?!acme-challenge|pki-validation) {
          return 301 /index.php$request_uri;
        }
        try_files $uri $uri/ =404;
      '';
    };
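A minimal sketch of the new option in context; the assertion above means this only evaluates for Nextcloud 21 or newer (hostName is illustrative):

```nix
{
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.org";    # illustrative
    package = pkgs.nextcloud21;
    # Lets users enter "0151..." instead of "+49151..." in phone fields.
    config.defaultPhoneRegion = "DE";
  };
}
```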
@@ -11,7 +11,7 @@
   desktop client is packaged at <literal>pkgs.nextcloud-client</literal>.
  </para>
  <para>
   The current default by NixOS is <package>nextcloud20</package> which is also the latest
   The current default by NixOS is <package>nextcloud21</package> which is also the latest
   major version available.
  </para>
  <section xml:id="module-services-nextcloud-basic-usage">
@@ -22,7 +22,9 @@ let

  php = cfg.phpPackage.override { apacheHttpd = pkg; };

  phpMajorVersion = lib.versions.major (lib.getVersion php);
  phpModuleName = let
    majorVersion = lib.versions.major (lib.getVersion php);
  in (if majorVersion == "8" then "php" else "php${majorVersion}");

  mod_perl = pkgs.apacheHttpdPackages.mod_perl.override { apacheHttpd = pkg; };

@@ -63,7 +65,7 @@ let
    ++ optional enableSSL "ssl"
    ++ optional enableUserDir "userdir"
    ++ optional cfg.enableMellon { name = "auth_mellon"; path = "${pkgs.apacheHttpdPackages.mod_auth_mellon}/modules/mod_auth_mellon.so"; }
    ++ optional cfg.enablePHP { name = "php${phpMajorVersion}"; path = "${php}/modules/libphp${phpMajorVersion}.so"; }
    ++ optional cfg.enablePHP { name = phpModuleName; path = "${php}/modules/lib${phpModuleName}.so"; }
    ++ optional cfg.enablePerl { name = "perl"; path = "${mod_perl}/modules/mod_perl.so"; }
    ++ cfg.extraModules;
@@ -804,7 +804,7 @@ in
        ProtectControlGroups = true;
        RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
        LockPersonality = true;
        MemoryDenyWriteExecute = !(builtins.any (mod: (mod.allowMemoryWriteExecute or false)) pkgs.nginx.modules);
        MemoryDenyWriteExecute = !(builtins.any (mod: (mod.allowMemoryWriteExecute or false)) cfg.package.modules);
        RestrictRealtime = true;
        RestrictSUIDSGID = true;
        PrivateMounts = true;
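The point of the change: the sandbox flag is now derived from the package actually in use rather than the default `pkgs.nginx`. A hedged sketch of a configuration where that matters, assuming the swapped-in package exposes a `modules` attribute like the default one does:

```nix
{ pkgs, ... }: {
  services.nginx = {
    enable = true;
    # MemoryDenyWriteExecute is now computed from this package's modules,
    # not from pkgs.nginx's.
    package = pkgs.nginxMainline;
  };
}
```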
@@ -58,7 +58,7 @@ in
    noDesktop = mkOption {
      type = types.bool;
      default = false;
      description = "Don't install XFCE desktop components (xfdesktop, panel and notification daemon).";
      description = "Don't install XFCE desktop components (xfdesktop and panel).";
    };

    enableXfwm = mkOption {

@@ -98,6 +98,7 @@ in
      parole
      ristretto
      xfce4-appfinder
      xfce4-notifyd
      xfce4-screenshooter
      xfce4-session
      xfce4-settings

@@ -119,7 +120,6 @@ in
      xfwm4
      xfwm4-themes
    ] ++ optionals (!cfg.noDesktop) [
      xfce4-notifyd
      xfce4-panel
      xfdesktop
    ];

@@ -166,7 +166,8 @@ in
    # Systemd services
    systemd.packages = with pkgs.xfce; [
      (thunar.override { thunarPlugins = cfg.thunarPlugins; })
    ] ++ optional (!cfg.noDesktop) xfce4-notifyd;
      xfce4-notifyd
    ];

  };
}
@@ -37,6 +37,11 @@ let
    . /etc/profile
    cd "$HOME"

    # Allow the user to execute commands at the beginning of the X session.
    if test -f ~/.xprofile; then
      source ~/.xprofile
    fi

    ${optionalString cfg.displayManager.job.logToJournal ''
      if [ -z "$_DID_SYSTEMD_CAT" ]; then
        export _DID_SYSTEMD_CAT=1

@@ -64,22 +69,23 @@ let

    # Speed up application start by 50-150ms according to
    # http://kdemonkey.blogspot.nl/2008/04/magic-trick.html
    rm -rf "$HOME/.compose-cache"
    mkdir "$HOME/.compose-cache"
    compose_cache="''${XCOMPOSECACHE:-$HOME/.compose-cache}"
    mkdir -p "$compose_cache"
    # To avoid accidentally deleting a wrongly set up XCOMPOSECACHE directory,
    # defensively try to delete cache *files* only, following the file format specified in
    # https://gitlab.freedesktop.org/xorg/lib/libx11/-/blob/master/modules/im/ximcp/imLcIm.c#L353-358
    # sprintf (*res, "%s/%c%d_%03x_%08x_%08x", dir, _XimGetMyEndian(), XIM_CACHE_VERSION, (unsigned int)sizeof (DefTree), hash, hash2);
    ${pkgs.findutils}/bin/find "$compose_cache" -maxdepth 1 -regextype posix-extended -regex '.*/[Bl][0-9]+_[0-9a-f]{3}_[0-9a-f]{8}_[0-9a-f]{8}' -delete
    unset compose_cache

    # Work around KDE errors when a user first logs in and
    # .local/share doesn't exist yet.
    mkdir -p "$HOME/.local/share"
    mkdir -p "''${XDG_DATA_HOME:-$HOME/.local/share}"

    unset _DID_SYSTEMD_CAT

    ${cfg.displayManager.sessionCommands}

    # Allow the user to execute commands at the beginning of the X session.
    if test -f ~/.xprofile; then
      source ~/.xprofile
    fi

    # Start systemd user services for graphical sessions
    /run/current-system/systemd/bin/systemctl --user start graphical-session.target
@@ -2,24 +2,6 @@

with lib;
let
  findWinner = candidates: winner:
    any (x: x == winner) candidates;

  # winners is an ordered list where first item wins over 2nd etc
  mergeAnswer = winners: locs: defs:
    let
      values = map (x: x.value) defs;
      inter = intersectLists values winners;
      winner = head winners;
    in
    if defs == [] then abort "This case should never happen."
    else if winner == [] then abort "Give a valid list of winner"
    else if inter == [] then mergeOneOption locs defs
    else if findWinner values winner then
      winner
    else
      mergeAnswer (tail winners) locs defs;

  mergeFalseByDefault = locs: defs:
    if defs == [] then abort "This case should never happen."
    else if any (x: x == false) (getValues defs) then false

@@ -28,9 +10,7 @@ let
  kernelItem = types.submodule {
    options = {
      tristate = mkOption {
        type = types.enum [ "y" "m" "n" null ] // {
          merge = mergeAnswer [ "y" "m" "n" ];
        };
        type = types.enum [ "y" "m" "n" null ];
        default = null;
        internal = true;
        visible = true;
@@ -436,7 +436,8 @@ let
          "IPv4ProxyARP"
          "IPv6ProxyNDP"
          "IPv6ProxyNDPAddress"
          "IPv6PrefixDelegation"
          "IPv6SendRA"
          "DHCPv6PrefixDelegation"
          "IPv6MTUBytes"
          "Bridge"
          "Bond"

@@ -477,7 +478,8 @@ let
        (assertMinimum "IPv6HopLimit" 0)
        (assertValueOneOf "IPv4ProxyARP" boolValues)
        (assertValueOneOf "IPv6ProxyNDP" boolValues)
        (assertValueOneOf "IPv6PrefixDelegation" ["static" "dhcpv6" "yes" "false"])
        (assertValueOneOf "IPv6SendRA" boolValues)
        (assertValueOneOf "DHCPv6PrefixDelegation" boolValues)
        (assertByteFormat "IPv6MTUBytes")
        (assertValueOneOf "ActiveSlave" boolValues)
        (assertValueOneOf "PrimarySlave" boolValues)

@@ -643,18 +645,63 @@ let

      sectionDHCPv6 = checkUnitConfig "DHCPv6" [
        (assertOnlyFields [
          "UseAddress"
          "UseDNS"
          "UseNTP"
          "RouteMetric"
          "RapidCommit"
          "MUDURL"
          "RequestOptions"
          "SendVendorOption"
          "ForceDHCPv6PDOtherInformation"
          "PrefixDelegationHint"
          "RouteMetric"
          "WithoutRA"
          "SendOption"
          "UserClass"
          "VendorClass"
        ])
        (assertValueOneOf "UseAddress" boolValues)
        (assertValueOneOf "UseDNS" boolValues)
        (assertValueOneOf "UseNTP" boolValues)
        (assertInt "RouteMetric")
        (assertValueOneOf "RapidCommit" boolValues)
        (assertValueOneOf "ForceDHCPv6PDOtherInformation" boolValues)
        (assertInt "RouteMetric")
        (assertValueOneOf "WithoutRA" ["solicit" "information-request"])
        (assertRange "SendOption" 1 65536)
      ];

      sectionDHCPv6PrefixDelegation = checkUnitConfig "DHCPv6PrefixDelegation" [
        (assertOnlyFields [
          "SubnetId"
          "Announce"
          "Assign"
          "Token"
        ])
        (assertValueOneOf "Announce" boolValues)
        (assertValueOneOf "Assign" boolValues)
      ];

      sectionIPv6AcceptRA = checkUnitConfig "IPv6AcceptRA" [
        (assertOnlyFields [
          "UseDNS"
          "UseDomains"
          "RouteTable"
          "UseAutonomousPrefix"
          "UseOnLinkPrefix"
          "RouterDenyList"
          "RouterAllowList"
          "PrefixDenyList"
          "PrefixAllowList"
          "RouteDenyList"
          "RouteAllowList"
          "DHCPv6Client"
        ])
        (assertValueOneOf "UseDNS" boolValues)
        (assertValueOneOf "UseDomains" (boolValues ++ ["route"]))
        (assertRange "RouteTable" 0 4294967295)
        (assertValueOneOf "UseAutonomousPrefix" boolValues)
        (assertValueOneOf "UseOnLinkPrefix" boolValues)
        (assertValueOneOf "DHCPv6Client" (boolValues ++ ["always"]))
      ];

      sectionDHCPServer = checkUnitConfig "DHCPServer" [

@@ -685,7 +732,7 @@ let
        (assertValueOneOf "EmitTimezone" boolValues)
      ];

      sectionIPv6PrefixDelegation = checkUnitConfig "IPv6PrefixDelegation" [
      sectionIPv6SendRA = checkUnitConfig "IPv6SendRA" [
        (assertOnlyFields [
          "Managed"
          "OtherInformation"

@@ -1090,6 +1137,30 @@ let
      '';
    };

    dhcpV6PrefixDelegationConfig = mkOption {
      default = {};
      example = { SubnetId = "auto"; Announce = true; };
      type = types.addCheck (types.attrsOf unitOption) check.network.sectionDHCPv6PrefixDelegation;
      description = ''
        Each attribute in this set specifies an option in the
        <literal>[DHCPv6PrefixDelegation]</literal> section of the unit. See
        <citerefentry><refentrytitle>systemd.network</refentrytitle>
        <manvolnum>5</manvolnum></citerefentry> for details.
      '';
    };

    ipv6AcceptRAConfig = mkOption {
      default = {};
      example = { UseDNS = true; DHCPv6Client = "always"; };
      type = types.addCheck (types.attrsOf unitOption) check.network.sectionIPv6AcceptRA;
      description = ''
        Each attribute in this set specifies an option in the
        <literal>[IPv6AcceptRA]</literal> section of the unit. See
        <citerefentry><refentrytitle>systemd.network</refentrytitle>
        <manvolnum>5</manvolnum></citerefentry> for details.
      '';
    };

    dhcpServerConfig = mkOption {
      default = {};
      example = { PoolOffset = 50; EmitDNS = false; };

@@ -1102,13 +1173,20 @@ let
      '';
    };

    # systemd.network.networks.*.ipv6PrefixDelegationConfig has been deprecated
    # in 247 in favor of systemd.network.networks.*.ipv6SendRAConfig.
    ipv6PrefixDelegationConfig = mkOption {
      visible = false;
      apply = _: throw "The option `systemd.network.networks.*.ipv6PrefixDelegationConfig` has been replaced by `systemd.network.networks.*.ipv6SendRAConfig`.";
    };

    ipv6SendRAConfig = mkOption {
      default = {};
      example = { EmitDNS = true; Managed = true; OtherInformation = true; };
      type = types.addCheck (types.attrsOf unitOption) check.network.sectionIPv6PrefixDelegation;
      type = types.addCheck (types.attrsOf unitOption) check.network.sectionIPv6SendRA;
      description = ''
        Each attribute in this set specifies an option in the
        <literal>[IPv6PrefixDelegation]</literal> section of the unit. See
        <literal>[IPv6SendRA]</literal> section of the unit. See
        <citerefentry><refentrytitle>systemd.network</refentrytitle>
        <manvolnum>5</manvolnum></citerefentry> for details.
      '';

@@ -1457,13 +1535,21 @@ let
        [DHCPv6]
        ${attrsToSection def.dhcpV6Config}
      ''
      + optionalString (def.dhcpV6PrefixDelegationConfig != { }) ''
        [DHCPv6PrefixDelegation]
        ${attrsToSection def.dhcpV6PrefixDelegationConfig}
      ''
      + optionalString (def.ipv6AcceptRAConfig != { }) ''
        [IPv6AcceptRA]
        ${attrsToSection def.ipv6AcceptRAConfig}
      ''
      + optionalString (def.dhcpServerConfig != { }) ''
        [DHCPServer]
        ${attrsToSection def.dhcpServerConfig}
      ''
      + optionalString (def.ipv6PrefixDelegationConfig != { }) ''
        [IPv6PrefixDelegation]
        ${attrsToSection def.ipv6PrefixDelegationConfig}
      + optionalString (def.ipv6SendRAConfig != { }) ''
        [IPv6SendRA]
        ${attrsToSection def.ipv6SendRAConfig}
      ''
      + flip concatMapStrings def.ipv6Prefixes (x: ''
        [IPv6Prefix]

@@ -1479,7 +1565,6 @@ let
in

{

  options = {

    systemd.network.enable = mkOption {
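Putting the rename and the new sections together, a host configuration migrating from the old option would look roughly like this (the network name "10-lan" is illustrative; the attribute values are the examples declared above):

```nix
{
  systemd.network.networks."10-lan" = {
    # Formerly ipv6PrefixDelegationConfig; that name now throws an error
    # pointing at the replacement, matching the systemd 247 rename.
    ipv6SendRAConfig = {
      Managed = true;
      OtherInformation = true;
    };
    ipv6AcceptRAConfig = {
      UseDNS = true;
      DHCPv6Client = "always";
    };
    dhcpV6PrefixDelegationConfig = {
      SubnetId = "auto";
      Announce = true;
    };
  };
}
```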
@@ -4,8 +4,7 @@ with lib;

let

  inherit (pkgs) plymouth;
  inherit (pkgs) nixos-icons;
  inherit (pkgs) plymouth nixos-icons;

  cfg = config.boot.plymouth;

@@ -16,14 +15,37 @@ let
    osVersion = config.system.nixos.release;
  };

  plymouthLogos = pkgs.runCommand "plymouth-logos" { inherit (cfg) logo; } ''
    mkdir -p $out

    # For themes that are compiled with PLYMOUTH_LOGO_FILE
    mkdir -p $out/etc/plymouth
    ln -s $logo $out/etc/plymouth/logo.png

    # Logo for bgrt theme
    # Note this is technically an abuse of watermark for the bgrt theme
    # See: https://gitlab.freedesktop.org/plymouth/plymouth/-/issues/95#note_813768
    mkdir -p $out/share/plymouth/themes/spinner
    ln -s $logo $out/share/plymouth/themes/spinner/watermark.png

    # Logo for spinfinity theme
    # See: https://gitlab.freedesktop.org/plymouth/plymouth/-/issues/106
    mkdir -p $out/share/plymouth/themes/spinfinity
    ln -s $logo $out/share/plymouth/themes/spinfinity/header-image.png
  '';

  themesEnv = pkgs.buildEnv {
    name = "plymouth-themes";
    paths = [ plymouth ] ++ cfg.themePackages;
    paths = [
      plymouth
      plymouthLogos
    ] ++ cfg.themePackages;
  };

  configFile = pkgs.writeText "plymouthd.conf" ''
    [Daemon]
    ShowDelay=0
    DeviceTimeout=8
    Theme=${cfg.theme}
    ${cfg.extraConfig}
  '';

@@ -47,7 +69,7 @@ in
    };

    themePackages = mkOption {
      default = [ nixosBreezePlymouth ];
      default = lib.optional (cfg.theme == "breeze") nixosBreezePlymouth;
      type = types.listOf types.package;
      description = ''
        Extra theme packages for plymouth.

@@ -55,7 +77,7 @@ in
    };

    theme = mkOption {
      default = "breeze";
      default = "bgrt";
      type = types.str;
      description = ''
        Splash screen theme.

@@ -64,7 +86,8 @@ in

    logo = mkOption {
      type = types.path;
      default = "${nixos-icons}/share/icons/hicolor/128x128/apps/nix-snowflake.png";
      # Dimensions are 48x48 to match GDM logo
      default = "${nixos-icons}/share/icons/hicolor/48x48/apps/nix-snowflake-white.png";
      defaultText = ''pkgs.fetchurl {
        url = "https://nixos.org/logo/nixos-hires.png";
        sha256 = "1ivzgd7iz0i06y36p8m5w48fd8pjqwxhdaavc0pxs7w1g7mcy5si";

@@ -110,12 +133,18 @@ in
    systemd.services.plymouth-poweroff.wantedBy = [ "poweroff.target" ];
    systemd.services.plymouth-reboot.wantedBy = [ "reboot.target" ];
    systemd.services.plymouth-read-write.wantedBy = [ "sysinit.target" ];
    systemd.services.systemd-ask-password-plymouth.wantedBy = ["multi-user.target"];
    systemd.paths.systemd-ask-password-plymouth.wantedBy = ["multi-user.target"];
    systemd.services.systemd-ask-password-plymouth.wantedBy = [ "multi-user.target" ];
    systemd.paths.systemd-ask-password-plymouth.wantedBy = [ "multi-user.target" ];

    boot.initrd.extraUtilsCommands = ''
      copy_bin_and_libs ${pkgs.plymouth}/bin/plymouthd
      copy_bin_and_libs ${pkgs.plymouth}/bin/plymouth
      copy_bin_and_libs ${plymouth}/bin/plymouth
      copy_bin_and_libs ${plymouth}/bin/plymouthd

      # Check if the actual requested theme is here
      if [[ ! -d ${themesEnv}/share/plymouth/themes/${cfg.theme} ]]; then
        echo "The requested theme: ${cfg.theme} is not provided by any of the packages in boot.plymouth.themePackages"
        exit 1
      fi

      moduleName="$(sed -n 's,ModuleName *= *,,p' ${themesEnv}/share/plymouth/themes/${cfg.theme}/${cfg.theme}.plymouth)"

@@ -127,21 +156,29 @@ in
      mkdir -p $out/share/plymouth/themes
      cp ${plymouth}/share/plymouth/plymouthd.defaults $out/share/plymouth

      # copy themes into working directory for patching
      # Copy themes into working directory for patching
      mkdir themes
      # use -L to copy the directories proper, not the symlinks to them
      cp -r -L ${themesEnv}/share/plymouth/themes/{text,details,${cfg.theme}} themes

      # patch out any attempted references to the theme or plymouth's themes directory
      # Use -L to copy the directories proper, not the symlinks to them.
      # Copy all themes because they're not large assets, and bgrt depends on the ImageDir of
      # the spinner theme.
      cp -r -L ${themesEnv}/share/plymouth/themes/* themes

      # Patch out any attempted references to the theme or plymouth's themes directory
      chmod -R +w themes
      find themes -type f | while read file
      do
        sed -i "s,/nix/.*/share/plymouth/themes,$out/share/plymouth/themes,g" $file
      done

      # Install themes
      cp -r themes/* $out/share/plymouth/themes
      cp ${cfg.logo} $out/share/plymouth/logo.png

      # Install logo
      mkdir -p $out/etc/plymouth
      cp -r -L ${themesEnv}/etc/plymouth $out

      # Setup font
      mkdir -p $out/share/fonts
      cp ${cfg.font} $out/share/fonts
      mkdir -p $out/etc/fonts
@@ -614,11 +614,16 @@ echo /sbin/modprobe > /proc/sys/kernel/modprobe


# Start stage 2. `switch_root' deletes all files in the ramfs on the
# current root. Note that $stage2Init might be an absolute symlink,
# in which case "-e" won't work because we're not in the chroot yet.
if [ ! -e "$targetRoot/$stage2Init" ] && [ ! -L "$targetRoot/$stage2Init" ] ; then
# current root. The path has to be valid in the chroot not outside.
if [ ! -e "$targetRoot/$stage2Init" ]; then
    stage2Check=${stage2Init}
    while [ "$stage2Check" != "${stage2Check%/*}" ] && [ ! -L "$targetRoot/$stage2Check" ]; do
        stage2Check=${stage2Check%/*}
    done
    if [ ! -L "$targetRoot/$stage2Check" ]; then
        echo "stage 2 init script ($targetRoot/$stage2Init) not found"
        fail
    fi
fi

mkdir -m 0755 -p $targetRoot/proc $targetRoot/sys $targetRoot/dev $targetRoot/run
@@ -93,17 +93,7 @@ in
        (if i.useDHCP != null then i.useDHCP else false));
      address = forEach (interfaceIps i)
        (ip: "${ip.address}/${toString ip.prefixLength}");
      # IPv6PrivacyExtensions=kernel seems to be broken with networkd.
      # Instead of using IPv6PrivacyExtensions=kernel, configure it according to the value of
      # `tempAddress`:
      networkConfig.IPv6PrivacyExtensions = {
        # generate temporary addresses and use them by default
        "default" = true;
        # generate temporary addresses but keep using the standard EUI-64 ones by default
        "enabled" = "prefer-public";
        # completely disable temporary addresses
        "disabled" = false;
      }.${i.tempAddress};
      networkConfig.IPv6PrivacyExtensions = "kernel";
      linkConfig = optionalAttrs (i.macAddress != null) {
        MACAddress = i.macAddress;
      } // optionalAttrs (i.mtu != null) {
@@ -1,6 +1,10 @@
{ config, pkgs, ... }:
{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.virtualisation.amazon-init;

  script = ''
    #!${pkgs.runtimeShell} -eu

@@ -41,6 +45,18 @@ let
    nixos-rebuild switch
  '';
in {

  options.virtualisation.amazon-init = {
    enable = mkOption {
      default = true;
      type = types.bool;
      description = ''
        Enable or disable the amazon-init service.
      '';
    };
  };

  config = mkIf cfg.enable {
    systemd.services.amazon-init = {
      inherit script;
      description = "Reconfigure the system from EC2 userdata on startup";

@@ -57,4 +73,5 @@ in {
      RemainAfterExit = true;
    };
  };
  };
}
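Since the service now sits behind an option that defaults to `true` (preserving the old always-on behaviour), opting out is a one-liner:

```nix
{
  # Skip re-configuring the system from EC2 user data at boot.
  virtualisation.amazon-init.enable = false;
}
```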
@@ -98,7 +98,6 @@ in
      environment.XDG_RUNTIME_DIR="${anboxloc}";

      wantedBy = [ "multi-user.target" ];
      after = [ "systemd-udev-settle.service" ];
      preStart = let
        initsh = pkgs.writeText "nixos-init" (''
          #!/system/bin/sh
|
|
60
third_party/nixpkgs/nixos/modules/virtualisation/containerd.nix
vendored
Normal file
60
third_party/nixpkgs/nixos/modules/virtualisation/containerd.nix
vendored
Normal file
|
@ -0,0 +1,60 @@
|
|||
{ pkgs, lib, config, ... }:
|
||||
let
|
||||
cfg = config.virtualisation.containerd;
|
||||
containerdConfigChecked = pkgs.runCommand "containerd-config-checked.toml" { nativeBuildInputs = [pkgs.containerd]; } ''
|
||||
containerd -c ${cfg.configFile} config dump >/dev/null
|
||||
ln -s ${cfg.configFile} $out
|
||||
'';
|
||||
in
|
||||
{
|
||||
|
||||
options.virtualisation.containerd = with lib.types; {
|
||||
enable = lib.mkEnableOption "containerd container runtime";
|
||||
|
||||
configFile = lib.mkOption {
|
||||
default = null;
|
||||
description = "path to containerd config file";
|
||||
type = nullOr path;
|
||||
};
|
||||
|
||||
args = lib.mkOption {
|
||||
default = {};
|
||||
description = "extra args to append to the containerd cmdline";
|
||||
type = attrsOf str;
|
||||
};
|
||||
};
|
||||
|
||||
config = lib.mkIf cfg.enable {
|
||||
virtualisation.containerd.args.config = lib.mkIf (cfg.configFile != null) (toString containerdConfigChecked);
|
||||
|
||||
environment.systemPackages = [pkgs.containerd];
|
||||
|
||||
systemd.services.containerd = {
|
||||
description = "containerd - container runtime";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
after = [ "network.target" ];
|
||||
path = with pkgs; [
|
||||
containerd
|
||||
runc
|
||||
iptables
|
||||
];
|
||||
serviceConfig = {
|
||||
ExecStart = ''${pkgs.containerd}/bin/containerd ${lib.concatStringsSep " " (lib.cli.toGNUCommandLine {} cfg.args)}'';
|
||||
Delegate = "yes";
|
||||
KillMode = "process";
|
||||
Type = "notify";
|
||||
Restart = "always";
|
||||
RestartSec = "5";
|
||||
StartLimitBurst = "8";
|
||||
StartLimitIntervalSec = "120s";
|
||||
|
||||
# "limits" defined below are adopted from upstream: https://github.com/containerd/containerd/blob/master/containerd.service
|
||||
LimitNPROC = "infinity";
|
||||
LimitCORE = "infinity";
|
||||
LimitNOFILE = "infinity";
|
||||
TasksMax = "infinity";
|
||||
OOMScoreAdjust = "-999";
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
|
@ -221,7 +221,7 @@ in {
|
|||
|
||||
systemd.services.libvirtd = {
|
||||
requires = [ "libvirtd-config.service" ];
|
||||
after = [ "systemd-udev-settle.service" "libvirtd-config.service" ]
|
||||
after = [ "libvirtd-config.service" ]
|
||||
++ optional vswitch.enable "ovs-vswitchd.service";
|
||||
|
||||
environment.LIBVIRTD_ARGS = escapeShellArgs (
|
||||
|
|
|
@ -66,7 +66,7 @@ in {
|
|||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
enables various settings to avoid common pitfalls when
|
||||
Enables various settings to avoid common pitfalls when
|
||||
running containers requiring many file operations.
|
||||
Fixes errors like "Too many open files" or
|
||||
"neighbour: ndisc_cache: neighbor table overflow!".
|
||||
|
@ -74,6 +74,17 @@ in {
|
|||
for details.
|
||||
'';
|
||||
};
|
||||
|
||||
startTimeout = mkOption {
|
||||
type = types.int;
|
||||
default = 600;
|
||||
apply = toString;
|
||||
description = ''
|
||||
Time to wait (in seconds) for LXD to become ready to process requests.
|
||||
If LXD does not reply within the configured time, lxd.service will be
|
||||
considered failed and systemd will attempt to restart it.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -81,40 +92,58 @@ in {
|
|||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
|
||||
security.apparmor = {
|
||||
enable = true;
|
||||
profiles = [
|
||||
"${cfg.lxcPackage}/etc/apparmor.d/usr.bin.lxc-start"
|
||||
# Note: the following options are also declared in virtualisation.lxc, but
|
||||
# the latter can't be simply enabled to reuse the formers, because it
|
||||
# does a bunch of unrelated things.
|
||||
systemd.tmpfiles.rules = [ "d /var/lib/lxc/rootfs 0755 root root -" ];
|
||||
|
||||
security.apparmor.packages = [ cfg.lxcPackage ];
|
||||
security.apparmor.profiles = [
|
||||
"${cfg.lxcPackage}/etc/apparmor.d/lxc-containers"
|
||||
"${cfg.lxcPackage}/etc/apparmor.d/usr.bin.lxc-start"
|
||||
];
|
||||
packages = [ cfg.lxcPackage ];
|
||||
};
|
||||
|
||||
# TODO: remove once LXD gets proper support for cgroupsv2
|
||||
# (currently most of the e.g. CPU accounting stuff doesn't work)
|
||||
systemd.enableUnifiedCgroupHierarchy = false;
|
||||
|
||||
systemd.sockets.lxd = {
|
||||
description = "LXD UNIX socket";
|
||||
wantedBy = [ "sockets.target" ];
|
||||
|
||||
socketConfig = {
|
||||
ListenStream = "/var/lib/lxd/unix.socket";
|
||||
SocketMode = "0660";
|
||||
SocketGroup = "lxd";
|
||||
Service = "lxd.service";
|
||||
};
|
||||
};
|
||||
|
||||
systemd.services.lxd = {
|
||||
description = "LXD Container Management Daemon";
|
||||
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
after = [ "systemd-udev-settle.service" ];
|
||||
after = [ "network-online.target" "lxcfs.service" ];
|
||||
requires = [ "network-online.target" "lxd.socket" "lxcfs.service" ];
|
||||
documentation = [ "man:lxd(1)" ];
|
||||
|
||||
path = lib.optional config.boot.zfs.enabled config.boot.zfs.package;
|
||||
|
||||
preStart = ''
|
||||
mkdir -m 0755 -p /var/lib/lxc/rootfs
|
||||
'';
|
||||
path = optional cfg.zfsSupport config.boot.zfs.package;
|
||||
|
||||
serviceConfig = {
|
||||
ExecStart = "@${cfg.package}/bin/lxd lxd --group lxd";
|
||||
Type = "simple";
|
||||
ExecStartPost = "${cfg.package}/bin/lxd waitready --timeout=${cfg.startTimeout}";
|
||||
ExecStop = "${cfg.package}/bin/lxd shutdown";
|
||||
|
||||
KillMode = "process"; # when stopping, leave the containers alone
|
||||
LimitMEMLOCK = "infinity";
|
||||
LimitNOFILE = "1048576";
|
||||
LimitNPROC = "infinity";
|
||||
TasksMax = "infinity";
|
||||
|
||||
Restart = "on-failure";
|
||||
TimeoutStartSec = "${cfg.startTimeout}s";
|
||||
TimeoutStopSec = "30s";
|
||||
|
||||
# By default, `lxd` loads configuration files from hard-coded
|
||||
# `/usr/share/lxc/config` - since this is a no-go for us, we have to
|
||||
# explicitly tell it where the actual configuration files are
|
||||
|
|
|
@ -271,8 +271,8 @@ let
|
|||
DeviceAllow = map (d: "${d.node} ${d.modifier}") cfg.allowedDevices;
|
||||
};
|
||||
|
||||
|
||||
system = config.nixpkgs.localSystem.system;
|
||||
kernelVersion = config.boot.kernelPackages.kernel.version;
|
||||
|
||||
bindMountOpts = { name, ... }: {
|
||||
|
||||
|
@ -321,7 +321,6 @@ let
|
|||
};
|
||||
};
|
||||
|
||||
|
||||
mkBindFlag = d:
|
||||
let flagPrefix = if d.isReadOnly then " --bind-ro=" else " --bind=";
|
||||
mountstr = if d.hostPath != null then "${d.hostPath}:${d.mountPoint}" else "${d.mountPoint}";
|
||||
|
@ -482,11 +481,16 @@ in
|
|||
networking.useDHCP = false;
|
||||
assertions = [
|
||||
{
|
||||
assertion = config.privateNetwork -> stringLength name < 12;
|
||||
assertion =
|
||||
(builtins.compareVersions kernelVersion "5.8" <= 0)
|
||||
-> config.privateNetwork
|
||||
-> stringLength name <= 11;
|
||||
message = ''
|
||||
Container name `${name}` is too long: When `privateNetwork` is enabled, container names can
|
||||
not be longer than 11 characters, because the container's interface name is derived from it.
|
||||
This might be fixed in the future. See https://github.com/NixOS/nixpkgs/issues/38509
|
||||
You should either make the container name shorter or upgrade to a more recent kernel that
|
||||
supports interface altnames (i.e. at least Linux 5.8 - please see https://github.com/NixOS/nixpkgs/issues/38509
|
||||
for details).
|
||||
'';
|
||||
}
|
||||
];
|
||||
|
|
|
@ -277,6 +277,18 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
virtualisation.msize =
|
||||
mkOption {
|
||||
default = null;
|
||||
type = types.nullOr types.ints.unsigned;
|
||||
description =
|
||||
''
|
||||
msize (maximum packet size) option passed to 9p file systems, in
|
||||
bytes. Increasing this should increase performance significantly,
|
||||
at the cost of higher RAM usage.
|
||||
'';
|
||||
};
|
||||
|
||||
virtualisation.diskSize =
|
||||
mkOption {
|
||||
default = 512;
|
||||
|
@ -666,7 +678,7 @@ in
|
|||
${if cfg.writableStore then "/nix/.ro-store" else "/nix/store"} =
|
||||
{ device = "store";
|
||||
fsType = "9p";
|
||||
options = [ "trans=virtio" "version=9p2000.L" "cache=loose" ];
|
||||
options = [ "trans=virtio" "version=9p2000.L" "cache=loose" ] ++ lib.optional (cfg.msize != null) "msize=${toString cfg.msize}";
|
||||
neededForBoot = true;
|
||||
};
|
||||
"/tmp" = mkIf config.boot.tmpOnTmpfs
|
||||
|
@@ -679,13 +691,13 @@ in
       "/tmp/xchg" =
         { device = "xchg";
           fsType = "9p";
-          options = [ "trans=virtio" "version=9p2000.L" ];
+          options = [ "trans=virtio" "version=9p2000.L" ] ++ lib.optional (cfg.msize != null) "msize=${toString cfg.msize}";
           neededForBoot = true;
         };
       "/tmp/shared" =
         { device = "shared";
           fsType = "9p";
-          options = [ "trans=virtio" "version=9p2000.L" ];
+          options = [ "trans=virtio" "version=9p2000.L" ] ++ lib.optional (cfg.msize != null) "msize=${toString cfg.msize}";
           neededForBoot = true;
         };
     } // optionalAttrs (cfg.writableStore && cfg.writableStoreUseTmpfs)
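All three mounts use the same `lib.optional` idiom: it yields a one-element list when the condition holds and an empty list otherwise, so the `++` appends the `msize` mount option only when one was configured. A standalone sketch (assumes `<nixpkgs>` is on `NIX_PATH`):

```nix
let
  lib = (import <nixpkgs> { }).lib;
  msize = 262144; # example value; null would leave the list unchanged
in
# => [ "trans=virtio" "version=9p2000.L" "msize=262144" ]
[ "trans=virtio" "version=9p2000.L" ]
++ lib.optional (msize != null) "msize=${toString msize}"
```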
@@ -73,6 +73,7 @@ in
   containers-imperative = handleTest ./containers-imperative.nix {};
   containers-ip = handleTest ./containers-ip.nix {};
   containers-macvlans = handleTest ./containers-macvlans.nix {};
+  containers-names = handleTest ./containers-names.nix {};
   containers-physical_interfaces = handleTest ./containers-physical_interfaces.nix {};
   containers-portforward = handleTest ./containers-portforward.nix {};
   containers-reloadable = handleTest ./containers-reloadable.nix {};
@@ -196,6 +197,7 @@ in
   keymap = handleTest ./keymap.nix {};
   knot = handleTest ./knot.nix {};
   krb5 = discoverTests (import ./krb5 {});
   ksm = handleTest ./ksm.nix {};
   kubernetes.dns = handleTestOn ["x86_64-linux"] ./kubernetes/dns.nix {};
   # kubernetes.e2e should eventually replace kubernetes.rbac when it works
   #kubernetes.e2e = handleTestOn ["x86_64-linux"] ./kubernetes/e2e.nix {};
@@ -238,6 +240,7 @@ in
   mosquitto = handleTest ./mosquitto.nix {};
   mpd = handleTest ./mpd.nix {};
   mumble = handleTest ./mumble.nix {};
+  musescore = handleTest ./musescore.nix {};
   munin = handleTest ./munin.nix {};
   mutableUsers = handleTest ./mutable-users.nix {};
   mxisd = handleTest ./mxisd.nix {};
@@ -304,9 +307,13 @@ in
   pgjwt = handleTest ./pgjwt.nix {};
   pgmanage = handleTest ./pgmanage.nix {};
   php = handleTest ./php {};
+  php73 = handleTest ./php { php = pkgs.php73; };
+  php74 = handleTest ./php { php = pkgs.php74; };
+  php80 = handleTest ./php { php = pkgs.php80; };
   pinnwand = handleTest ./pinnwand.nix {};
   plasma5 = handleTest ./plasma5.nix {};
   pleroma = handleTestOn [ "x86_64-linux" "aarch64-linux" ] ./pleroma.nix {};
+  plikd = handleTest ./plikd.nix {};
   plotinus = handleTest ./plotinus.nix {};
   podman = handleTestOn ["x86_64-linux"] ./podman.nix {};
   postfix = handleTest ./postfix.nix {};
@@ -1,5 +1,3 @@
-# Test for NixOS' container support.
-
 let
   hostIp = "192.168.0.1";
   containerIp = "192.168.0.100/24";
@@ -7,10 +5,10 @@ let
   containerIp6 = "fc00::2/7";
 in

-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-bridge";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ aristid aszlig eelco kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ];
   };

   machine =
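This `meta` rewrite repeats across the test files below. Both forms denote the same attribute set; the new one simply narrows the `with` scope from the whole `meta` block down to the maintainers list, which avoids accidentally resolving unrelated names through `pkgs.lib.maintainers`. Schematically:

```nix
# before: every name inside the braces may resolve via pkgs.lib.maintainers
meta = with pkgs.lib.maintainers; { maintainers = [ aristid ]; };

# after: only the list literal sees lib.maintainers
meta = { maintainers = with lib.maintainers; [ aristid ]; };
```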
@@ -1,4 +1,4 @@
-import ./make-test-python.nix ({ pkgs, lib, ...} : let
+import ./make-test-python.nix ({ pkgs, lib, ... }: let

   customPkgs = pkgs.appendOverlays [ (self: super: {
     hello = super.hello.overrideAttrs (old: {
@@ -8,8 +8,8 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : let

 in {
   name = "containers-custom-pkgs";
-  meta = with lib.maintainers; {
-    maintainers = [ adisbladis earvstedt ];
+  meta = {
+    maintainers = with lib.maintainers; [ adisbladis earvstedt ];
   };

   machine = { config, ... }: {
@@ -1,7 +1,8 @@
-# Test for NixOS' container support.
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-ephemeral";
+  meta = {
+    maintainers = with lib.maintainers; [ patryk27 ];
+  };

   machine = { pkgs, ... }: {
     virtualisation.memorySize = 768;
@@ -1,9 +1,7 @@
-# Test for NixOS' container support.
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-extra_veth";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ kampfschlaefer ];
   };

   machine =
@@ -1,9 +1,7 @@
-# Test for NixOS' container support.
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-hosts";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ montag451 ];
+  meta = {
+    maintainers = with lib.maintainers; [ montag451 ];
   };

   machine =
@@ -1,9 +1,7 @@
-# Test for NixOS' container support.
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-imperative";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ aristid aszlig eelco kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ];
   };

   machine =
@@ -1,5 +1,3 @@
-# Test for NixOS' container support.
-
 let
   webserverFor = hostAddress: localAddress: {
     inherit hostAddress localAddress;
@@ -13,10 +11,10 @@ let
     };
   };

-in import ./make-test-python.nix ({ pkgs, ...} : {
+in import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-ipv4-ipv6";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ aristid aszlig eelco kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ];
   };

   machine =
@@ -1,15 +1,13 @@
-# Test for NixOS' container support.
-
 let
   # containers IP on VLAN 1
   containerIp1 = "192.168.1.253";
   containerIp2 = "192.168.1.254";
 in

-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-macvlans";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ montag451 ];
+  meta = {
+    maintainers = with lib.maintainers; [ montag451 ];
   };

   nodes = {
37  third_party/nixpkgs/nixos/tests/containers-names.nix  vendored  Normal file

@@ -0,0 +1,37 @@
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
+  name = "containers-names";
+  meta = {
+    maintainers = with lib.maintainers; [ patryk27 ];
+  };
+
+  machine = { ... }: {
+    # We're using the newest kernel, so that we can test containers with long names.
+    # Please see https://github.com/NixOS/nixpkgs/issues/38509 for details.
+    boot.kernelPackages = pkgs.linuxPackages_latest;
+
+    containers = let
+      container = subnet: {
+        autoStart = true;
+        privateNetwork = true;
+        hostAddress = "192.168.${subnet}.1";
+        localAddress = "192.168.${subnet}.2";
+        config = { };
+      };
+
+    in {
+      first = container "1";
+      second = container "2";
+      really-long-name = container "3";
+      really-long-long-name-2 = container "4";
+    };
+  };
+
+  testScript = ''
+    machine.wait_for_unit("default.target")
+
+    machine.succeed("ip link show | grep ve-first")
+    machine.succeed("ip link show | grep ve-second")
+    machine.succeed("ip link show | grep ve-really-lFYWO")
+    machine.succeed("ip link show | grep ve-really-l3QgY")
+  '';
+})
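The `container` helper in this new test is an ordinary function from a subnet octet to a container definition; for instance, `container "3"` evaluates to the attribute set below. The test script then checks that the two long names show up as truncated, hash-suffixed veth interfaces (`ve-really-lFYWO`, `ve-really-l3QgY`).

```nix
# What `container "3"` from the test above evaluates to:
{
  autoStart = true;
  privateNetwork = true;
  hostAddress = "192.168.3.1";
  localAddress = "192.168.3.2";
  config = { };
}
```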
@@ -1,8 +1,7 @@
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-physical_interfaces";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ kampfschlaefer ];
   };

   nodes = {
@@ -1,5 +1,3 @@
-# Test for NixOS' container support.
-
 let
   hostIp = "192.168.0.1";
   hostPort = 10080;
@@ -7,10 +5,10 @@ let
   containerPort = 80;
 in

-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-portforward";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ aristid aszlig eelco kampfschlaefer ianwookim ];
+  meta = {
+    maintainers = with lib.maintainers; [ aristid aszlig eelco kampfschlaefer ianwookim ];
   };

   machine =
@@ -1,7 +1,6 @@
-import ./make-test-python.nix ({ pkgs, lib, ...} :
+import ./make-test-python.nix ({ pkgs, lib, ... }:
 let
   client_base = {
-
     containers.test1 = {
       autoStart = true;
       config = {
@@ -16,8 +15,8 @@ let
   };
 in {
   name = "containers-reloadable";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ danbst ];
+  meta = {
+    maintainers = with lib.maintainers; [ danbst ];
   };

   nodes = {
@@ -1,5 +1,3 @@
-# Test for NixOS' container support.
-
 let
   client_base = {
     networking.firewall.enable = false;
@@ -16,11 +14,11 @@ let
       };
     };
   };
-in import ./make-test-python.nix ({ pkgs, ...} :
+in import ./make-test-python.nix ({ pkgs, lib, ... }:
 {
   name = "containers-restart_networking";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ kampfschlaefer ];
+  meta = {
+    maintainers = with lib.maintainers; [ kampfschlaefer ];
   };

   nodes = {
@@ -1,9 +1,7 @@
-# Test for NixOS' container support.
-
-import ./make-test-python.nix ({ pkgs, ...} : {
+import ./make-test-python.nix ({ pkgs, lib, ... }: {
   name = "containers-tmpfs";
-  meta = with pkgs.lib.maintainers; {
-    maintainers = [ ];
+  meta = {
+    maintainers = with lib.maintainers; [ patryk27 ];
   };

   machine =
26  third_party/nixpkgs/nixos/tests/gitlab.nix  vendored

@@ -11,6 +11,8 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : with lib; {

   nodes = {
     gitlab = { ... }: {
       imports = [ common/user-account.nix ];

+      virtualisation.memorySize = if pkgs.stdenv.is64bit then 4096 else 2047;
+      systemd.services.gitlab.serviceConfig.Restart = mkForce "no";
       systemd.services.gitlab-workhorse.serviceConfig.Restart = mkForce "no";
@@ -27,11 +29,31 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : with lib; {
         };
       };

+      services.dovecot2 = {
+        enable = true;
+        enableImap = true;
+      };
+
       services.gitlab = {
         enable = true;
         databasePasswordFile = pkgs.writeText "dbPassword" "xo0daiF4";
         initialRootPasswordFile = pkgs.writeText "rootPassword" initialRootPassword;
         smtp.enable = true;
+        extraConfig = {
+          incoming_email = {
+            enabled = true;
+            mailbox = "inbox";
+            address = "alice@localhost";
+            user = "alice";
+            password = "foobar";
+            host = "localhost";
+            port = 143;
+          };
+          pages = {
+            enabled = true;
+            host = "localhost";
+          };
+        };
         secrets = {
           secretFile = pkgs.writeText "secret" "r8X9keSKynU7p4aKlh4GO1Bo77g5a7vj";
           otpFile = pkgs.writeText "otpsecret" "Zu5hGx3YvQx40DvI8WoZJQpX2paSDOlG";
@@ -64,12 +86,16 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : with lib; {
   in
   ''
     gitlab.start()

     gitlab.wait_for_unit("gitaly.service")
     gitlab.wait_for_unit("gitlab-workhorse.service")
+    gitlab.wait_for_unit("gitlab-pages.service")
+    gitlab.wait_for_unit("gitlab-mailroom.service")
     gitlab.wait_for_unit("gitlab.service")
     gitlab.wait_for_unit("gitlab-sidekiq.service")
     gitlab.wait_for_file("/var/gitlab/state/tmp/sockets/gitlab.socket")
     gitlab.wait_until_succeeds("curl -sSf http://gitlab/users/sign_in")

     gitlab.succeed(
         "curl -isSf http://gitlab | grep -i location | grep -q http://gitlab/users/sign_in"
     )
@@ -24,6 +24,8 @@ in {
     services.home-assistant = {
       inherit configDir;
       enable = true;
+      # includes the package with all tests enabled
+      package = pkgs.home-assistant;
       config = {
         homeassistant = {
           name = "Home";
Some files were not shown because too many files have changed in this diff.