Project import generated by Copybara.
GitOrigin-RevId: f8e2ebd66d097614d51a56a755450d4ae1632df1
Parent: 3156e14105
Commit: 60f07311b9
2515 changed files with 67233 additions and 43683 deletions
7 third_party/nixpkgs/.github/CODEOWNERS (vendored)
@@ -67,8 +67,8 @@
 /nixos/lib/make-disk-image.nix @raitobezarius

 # Nix, the package manager
-pkgs/tools/package-management/nix/ @raitobezarius
-nixos/modules/installer/tools/nix-fallback-paths.nix @raitobezarius
+pkgs/tools/package-management/nix/ @raitobezarius @ma27
+nixos/modules/installer/tools/nix-fallback-paths.nix @raitobezarius @ma27

 # Nixpkgs documentation
 /maintainers/scripts/db-to-md.sh @jtojnar @ryantm

@@ -325,6 +325,9 @@ pkgs/applications/version-management/forgejo @bendlas @emilylange
 /pkgs/build-support/node/fetch-npm-deps @lilyinstarlight @winterqt
 /doc/languages-frameworks/javascript.section.md @lilyinstarlight @winterqt

+# environment.noXlibs option aka NoX
+/nixos/modules/config/no-x-libs.nix @SuperSandro2000
+
 # OCaml
 /pkgs/build-support/ocaml @ulrikstrid
 /pkgs/development/compilers/ocaml @ulrikstrid
@@ -35,10 +35,6 @@ jobs:
       pairs:
         - from: master
           into: haskell-updates
-        - from: release-23.05
-          into: staging-next-23.05
-        - from: staging-next-23.05
-          into: staging-23.05
         - from: release-23.11
           into: staging-next-23.11
         - from: staging-next-23.11

@@ -56,7 +52,7 @@ jobs:
           github_token: ${{ secrets.GITHUB_TOKEN }}

       - name: Comment on failure
-        uses: peter-evans/create-or-update-comment@23ff15729ef2fc348714a3bb66d2f655ca9066f2 # v3.1.0
+        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
         if: ${{ failure() }}
         with:
           issue-number: 105153

@@ -50,7 +50,7 @@ jobs:
           github_token: ${{ secrets.GITHUB_TOKEN }}

       - name: Comment on failure
-        uses: peter-evans/create-or-update-comment@23ff15729ef2fc348714a3bb66d2f655ca9066f2 # v3.1.0
+        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043 # v4.0.0
         if: ${{ failure() }}
         with:
           issue-number: 105153
145 third_party/nixpkgs/doc/README.md (vendored)
@@ -71,6 +71,11 @@ If you **omit a link text** for a link pointing to a section, the text will be s
 This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).

+#### HTML
+
+Inlining HTML is not allowed. Parts of the documentation get rendered to various non-HTML formats, such as man pages in the case of the NixOS manual.
+
 #### Roles

 If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``. The references will turn into links when a mapping exists in [`doc/manpage-urls.json`](./manpage-urls.json).
@@ -157,6 +162,9 @@ watermelon

 In an effort to keep the Nixpkgs manual in a consistent style, please follow the conventions below, unless they prevent you from properly documenting something.
 In that case, please open an issue about the particular documentation convention and tag it with a "needs: documentation" label.
+When needed, each convention explains why it exists, so you can decide whether to follow it or not based on your particular case.
+Note that these conventions are about the **structure** of the manual (and its source files), not about the content that goes in it.
+You, as the writer of documentation, are still in charge of its content.

 - Put each sentence in its own line.
   This makes reviews and suggestions much easier, since GitHub's review system is based on lines.
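As a small illustration of the sentence-per-line convention above (the text itself is hypothetical):

```markdown
<!-- One sentence per line keeps diffs and review comments focused. -->
Nixpkgs is a collection of packages.
Each sentence lives on its own line, so a review comment maps to exactly one sentence.
```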
@@ -188,26 +196,153 @@ In that case, please open an issue about the particular documentation convention
  }
  ```

- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments. For example:
- When showing inputs/outputs of any [REPL](https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop), such as a shell or the Nix REPL, use a format as you'd see in the REPL, while trying to visually separate inputs from outputs.
  This means that for a shell, you should use a format like the following:
  ```shell
  $ nix-build -A hello '<nixpkgs>' \
      --option require-sigs false \
      --option trusted-substituters file:///tmp/hello-cache \
      --option substituters file:///tmp/hello-cache
  /nix/store/zhl06z4lrfrkw5rp0hnjjfrgsclzvxpm-hello-2.12.1
  ```
  Note how the input is preceded by `$` on the first line and indented on subsequent lines, and how the output is provided as you'd see on the shell.

  For the Nix REPL, you should use a format like the following:
  ```shell
  nix-repl> builtins.attrNames { a = 1; b = 2; }
  [ "a" "b" ]
  ```
  Note how the input is preceded by `nix-repl>` and the output is provided as you'd see on the Nix REPL.
- When documenting functions or anything that has inputs/outputs and example usage, use nested headings to clearly separate inputs, outputs, and examples.
  Keep examples as the last nested heading, and link to the examples wherever applicable in the documentation.

  The purpose of this convention is to provide a familiar structure for navigating the manual, so any reader can expect to find content related to inputs in an "inputs" heading, examples in an "examples" heading, and so on.
  An example:
  ```
  ## buildImage

  Some explanation about the function here.
  Describe a particular scenario, and point to [](#ex-dockerTools-buildImage), which is an example demonstrating it.

  ### Inputs

  Documentation for the inputs of `buildImage`.
  Perhaps even point to [](#ex-dockerTools-buildImage) again when talking about something specifically linked to it.

  ### Passthru outputs

  Documentation for any passthru outputs of `buildImage`.

  ### Examples

  Note that this is the last nested heading in the `buildImage` section.

  :::{.example #ex-dockerTools-buildImage}

  # Using `buildImage`

  Example of how to use `buildImage` goes here.

  :::
  ```
- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments as well as their [types](https://nixos.org/manual/nix/stable/language/values).
  For example:

  ```markdown
  # pkgs.coolFunction

  Description of what `coolFunction` does.

  ## Inputs

  `coolFunction` expects a single argument which should be an attribute set, with the following possible attributes:

  `name`
  `name` (String)

  : The name of the resulting image.

  `tag` _optional_
  `tag` (String; _optional_)

  : Tag of the generated image.

    _Default value:_ the output path's hash.
    _Default:_ the output path's hash.
  ```
#### Examples

To define a referenceable example, use the following fencing:

```markdown
:::{.example #an-attribute-set-example}
# An attribute set example

You can add text before

```nix
{ a = 1; b = 2; }
```

and after code fencing
:::
```

Defining examples through the `example` fencing class adds them to a "List of Examples" section after the Table of Contents.
However, this is not shown in the rendered documentation on nixos.org.
#### Figures

To define a referenceable figure, use the following fencing:

```markdown
::: {.figure #nixos-logo}
# NixOS Logo
![NixOS logo](./nixos_logo.png)
:::
```

Defining figures through the `figure` fencing class adds them to a `List of Figures` after the `Table of Contents`.
However, this is not shown in the rendered documentation on nixos.org.
#### Footnotes

To add a footnote explanation, use the following syntax:

```markdown
Sometimes it's better to add context [^context] in a footnote.

[^context]: This explanation will be rendered at the end of the chapter.
```
#### Inline comments

Inline comments are supported with the following syntax:

```markdown
<!-- This is an inline comment -->
```

Comments will not appear in the rendered HTML.
#### Link reference definitions

Links can reference a label, for example, to make the link target reusable:

```markdown
::: {.note}
Reference links can also be used to [shorten URLs][url-id] and keep the markdown readable.
:::

[url-id]: https://github.com/NixOS/nixpkgs/blob/19d4f7dc485f74109bd66ef74231285ff797a823/doc/README.md
```

This syntax is taken from [CommonMark](https://spec.commonmark.org/0.30/#link-reference-definitions).
#### Typographic replacements

Typographic replacements are enabled. Check the [list of possible replacement patterns](https://github.com/executablebooks/markdown-it-py/blob/3613e8016ecafe21709471ee0032a90a4157c2d1/markdown_it/rules_core/replacements.py#L1-L15).
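For instance, assuming the default markdown-it replacement set (the exact patterns are listed in the linked file), plain ASCII sequences are turned into their typographic equivalents:

```markdown
Copyright (c) the authors.      <!-- "(c)" is replaced by the copyright sign -->
Accuracy is 95 +- 2 percent...  <!-- "+-" becomes a plus-minus sign, "..." an ellipsis -->
```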
## Getting help

If you need documentation-specific help or reviews, ping [@NixOS/documentation-reviewers](https://github.com/orgs/nixos/teams/documentation-reviewers) on your pull request.
@@ -676,6 +676,7 @@ If our package sets `includeStorePaths` to `false`, we'll end up with only the f
dockerTools.streamLayeredImage {
  name = "hello";
  contents = [ hello ];
  includeStorePaths = false;
}
```
@@ -714,78 +715,376 @@ dockerTools.streamLayeredImage {
```
:::

## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}
[]{#ssec-pkgs-dockerTools-fetchFromRegistry}
## pullImage {#ssec-pkgs-dockerTools-pullImage}

This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.
This function is similar to the `docker pull` command, which means it can be used to pull a Docker image from a registry that implements the [Docker Registry HTTP API V2](https://distribution.github.io/distribution/spec/api/).
By default, the `docker.io` registry is used.

Its parameters are described in the example below:
The image will be downloaded as an uncompressed Docker-compatible repository tarball, which is suitable for use with other `dockerTools` functions such as [`buildImage`](#ssec-pkgs-dockerTools-buildImage), [`buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage), and [`streamLayeredImage`](#ssec-pkgs-dockerTools-streamLayeredImage).

This function requires two different types of hashes/digests to be specified:

- One of them is used to identify a unique image within the registry (see the documentation for the `imageDigest` attribute).
- The other is used by Nix to ensure the contents of the output haven't changed (see the documentation for the `sha256` attribute).

Both hashes are required because they must uniquely identify some content in two completely different systems (the Docker registry and the Nix store), but their values will not be the same.
See [](#ex-dockerTools-pullImage-nixprefetchdocker) for a tool that can help gather these values.
### Inputs {#ssec-pkgs-dockerTools-pullImage-inputs}

`pullImage` expects a single argument with the following attributes:

`imageName` (String)

: Specifies the name of the image to be downloaded, as well as the registry endpoint.
  By default, the `docker.io` registry is used.
  To specify a different registry, prepend the endpoint to `imageName`, separated by a slash (`/`).
  See [](#ex-dockerTools-pullImage-differentregistry) for how to do that.

`imageDigest` (String)

: Specifies the digest of the image to be downloaded.

  :::{.tip}
  **Why can't I specify a tag to pull from, and have to use a digest instead?**

  Tags are often updated to point to different image contents.
  The most common example is the `latest` tag, which is usually updated whenever a newer image version is available.

  An image tag isn't enough to guarantee the contents of an image won't change, but a digest guarantees this.
  Providing a digest helps ensure that you will still be able to build the same Nix code and get the same output even if newer versions of an image are released.
  :::

`sha256` (String)

: The hash of the image after it is downloaded.
  Internally, this is passed to the [`outputHash`](https://nixos.org/manual/nix/stable/language/advanced-attributes#adv-attr-outputHash) attribute of the resulting derivation.
  This is needed to provide a guarantee to Nix that the contents of the image haven't changed, because Nix doesn't support the value in `imageDigest`.

`finalImageName` (String; _optional_)

: Specifies the name that will be used for the image after it has been downloaded.
  This only applies after the image is downloaded, and is not used to identify the image to be downloaded in the registry.
  Use `imageName` for that instead.

  _Default value:_ the same value specified in `imageName`.

`finalImageTag` (String; _optional_)

: Specifies the tag that will be used for the image after it has been downloaded.
  This only applies after the image is downloaded, and is not used to identify the image to be downloaded in the registry.

  _Default value:_ `"latest"`.

`os` (String; _optional_)

: Specifies the operating system of the image to pull.
  If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
  According to the linked specification, all possible values for `$GOOS` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `darwin` or `linux`.

  _Default value:_ `"linux"`.

`arch` (String; _optional_)

: Specifies the architecture of the image to pull.
  If specified, its value should follow the [OCI Image Configuration Specification](https://github.com/opencontainers/image-spec/blob/main/config.md#properties), which should still be compatible with Docker.
  According to the linked specification, all possible values for `$GOARCH` in [the Go docs](https://go.dev/doc/install/source#environment) should be valid, but will commonly be one of `386`, `amd64`, `arm`, or `arm64`.

  _Default value:_ the same value from `pkgs.go.GOARCH`.

`tlsVerify` (Boolean; _optional_)

: Used to enable or disable HTTPS and TLS certificate verification when communicating with the chosen Docker registry.
  Setting this to `false` will make `pullImage` connect to the registry through HTTP.

  _Default value:_ `true`.

`name` (String; _optional_)

: The name used for the output in the Nix store path.

  _Default value:_ a value derived from `finalImageName` and `finalImageTag`, with some symbols replaced.
  It is recommended to treat the default as an opaque value.
### Examples {#ssec-pkgs-dockerTools-pullImage-examples}

::: {.example #ex-dockerTools-pullImage-niximage}
# Pulling the nixos/nix Docker image from the default registry

This example pulls the [`nixos/nix` image](https://hub.docker.com/r/nixos/nix) and saves it in the Nix store.

```nix
-pullImage {
+{ dockerTools }:
+dockerTools.pullImage {
   imageName = "nixos/nix";
-  imageDigest =
-    "sha256:473a2b527958665554806aea24d0131bacec46d23af09fef4598eeab331850fa";
+  imageDigest = "sha256:b8ea88f763f33dfda2317b55eeda3b1a4006692ee29e60ee54ccf6d07348c598";
   finalImageName = "nix";
-  finalImageTag = "2.11.1";
-  sha256 = "sha256-qvhj+Hlmviz+KEBVmsyPIzTB3QlVAFzwAY1zDPIBGxc=";
-  os = "linux";
-  arch = "x86_64";
+  finalImageTag = "2.19.3";
+  sha256 = "zRwlQs1FiKrvHPaf8vWOR/Tlp1C5eLn1d9pE4BZg3oA=";
 }
```
:::

::: {.example #ex-dockerTools-pullImage-differentregistry}
# Pulling the nixos/nix Docker image from a specific registry

This example pulls the [`coreos/etcd` image](https://quay.io/repository/coreos/etcd) from the `quay.io` registry.

```nix
{ dockerTools }:
dockerTools.pullImage {
  imageName = "quay.io/coreos/etcd";
  imageDigest = "sha256:24a23053f29266fb2731ebea27f915bb0fb2ae1ea87d42d890fe4e44f2e27c5d";
  finalImageName = "etcd";
  finalImageTag = "v3.5.11";
  sha256 = "Myw+85f2/EVRyMB3axECdmQ5eh9p1q77FWYKy8YpRWU=";
}
```
:::
::: {.example #ex-dockerTools-pullImage-nixprefetchdocker}
# Finding the digest and hash values to use for `dockerTools.pullImage`

Since [`dockerTools.pullImage`](#ssec-pkgs-dockerTools-pullImage) requires two different hashes, one can run the `nix-prefetch-docker` tool to find out the values for the hashes.
The tool outputs some text for an attribute set which you can pass directly to `pullImage`.

```shell
$ nix run nixpkgs#nix-prefetch-docker -- --image-name nixos/nix --image-tag 2.19.3 --arch amd64 --os linux
(some output removed for clarity)
Writing manifest to image destination
-> ImageName: nixos/nix
-> ImageDigest: sha256:498fa2d7f2b5cb3891a4edf20f3a8f8496e70865099ba72540494cd3e2942634
-> FinalImageName: nixos/nix
-> FinalImageTag: latest
-> ImagePath: /nix/store/4mxy9mn6978zkvlc670g5703nijsqc95-docker-image-nixos-nix-latest.tar
-> ImageHash: 1q6cf2pdrasa34zz0jw7pbs6lvv52rq2aibgxccbwcagwkg2qj1q
{
  imageName = "nixos/nix";
  imageDigest = "sha256:498fa2d7f2b5cb3891a4edf20f3a8f8496e70865099ba72540494cd3e2942634";
  sha256 = "1q6cf2pdrasa34zz0jw7pbs6lvv52rq2aibgxccbwcagwkg2qj1q";
  finalImageName = "nixos/nix";
  finalImageTag = "latest";
}
```

- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.

It is important to supply the `--arch` and `--os` arguments to `nix-prefetch-docker` to filter to a single image, in case there are multiple architectures and/or operating systems supported by the image name and tags specified.
By default, `nix-prefetch-docker` will set `os` to `linux` and `arch` to `amd64`.

- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.

- `finalImageName`, if specified, is the name of the image to be created. Note it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.

- `finalImageTag`, if specified, is the tag of the image to be created. Note it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's `latest`.

- `sha256` is the checksum of the whole fetched image. This argument is required.

- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.

- `arch`, if specified, is the CPU architecture of the fetched image. By default it's `x86_64`.

The `nix-prefetch-docker` command can be used to get the required image parameters:

```ShellSession
$ nix run nixpkgs#nix-prefetch-docker -- --image-name mysql --image-tag 5
```

Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```

The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
```

Run `nix-prefetch-docker --help` for a list of all supported arguments:

```shell
$ nix run nixpkgs#nix-prefetch-docker -- --help
(output removed for clarity)
```
:::
## exportImage {#ssec-pkgs-dockerTools-exportImage}

This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with `docker import`.
This function is similar to the `docker container export` command, which means it can be used to export an image's filesystem as an uncompressed tarball archive.
The difference is that `docker container export` is applied to containers, but `dockerTools.exportImage` applies to Docker images.
The resulting archive will not contain any image metadata (such as the command to run with `docker container run`), only the filesystem contents.

> **_NOTE:_** Using this function requires the `kvm` device to be available.

You can use this function to import an archive in Docker with `docker image import`.
See [](#ex-dockerTools-exportImage-importingDocker) to understand how to do that.

The parameters of `exportImage` are the following:
:::{.caution}
`exportImage` works by unpacking the given image inside a VM.
Because of this, using this function requires the `kvm` device to be available, see [`system-features`](https://nixos.org/manual/nix/stable/command-ref/conf-file.html#conf-system-features).
:::
### Inputs {#ssec-pkgs-dockerTools-exportImage-inputs}

`exportImage` expects an argument with the following attributes:

`fromImage` (Attribute Set or String)

: The repository tarball of the image whose filesystem will be exported.
  It must be a valid Docker image, such as one exported by `docker image save`, or another image built with the `dockerTools` utility functions.

  If `name` is not specified, `fromImage` must be an Attribute Set corresponding to a derivation, i.e. it can't be a path to a tarball.
  If `name` is specified, `fromImage` can be either an Attribute Set corresponding to a derivation or simply a path to a tarball.

  See [](#ex-dockerTools-exportImage-naming) and [](#ex-dockerTools-exportImage-fromImagePath) to understand the connection between `fromImage`, `name`, and the name used for the output of `exportImage`.

`fromImageName` (String or Null; _optional_)

: Used to specify the image within the repository tarball in case it contains multiple images.
  A value of `null` means that `exportImage` will use the first image available in the repository.

  :::{.note}
  This must be used with `fromImageTag`. Using only `fromImageName` without `fromImageTag` will make `exportImage` use the first image available in the repository.
  :::

  _Default value:_ `null`.

`fromImageTag` (String or Null; _optional_)

: Used to specify the image within the repository tarball in case it contains multiple images.
  A value of `null` means that `exportImage` will use the first image available in the repository.

  :::{.note}
  This must be used with `fromImageName`. Using only `fromImageTag` without `fromImageName` will make `exportImage` use the first image available in the repository.
  :::

  _Default value:_ `null`.

`diskSize` (Number; _optional_)

: Controls the disk size (in megabytes) of the VM used to unpack the image.

  _Default value:_ 1024.

`name` (String; _optional_)

: The name used for the output in the Nix store path.

  _Default value:_ the value of `fromImage.name`.
### Examples {#ssec-pkgs-dockerTools-exportImage-examples}

:::{.example #ex-dockerTools-exportImage-hello}
# Exporting a Docker image with `dockerTools.exportImage`

This example first builds a layered image with [`dockerTools.buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage), and then exports its filesystem with `dockerTools.exportImage`.

```nix
-exportImage {
-  fromImage = someLayeredImage;
-  fromImageName = null;
-  fromImageTag = null;
-
-  name = someLayeredImage.name;
+{ dockerTools, hello }:
+dockerTools.exportImage {
+  name = "hello";
+  fromImage = dockerTools.buildLayeredImage {
+    name = "hello";
+    contents = [ hello ];
+  };
 }
```

The parameters relative to the base image have the same synopsis as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.
When building the package above, we can see the layers of the Docker image being unpacked to produce the final output:

The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.

```shell
$ nix-build
(some output removed for clarity)
Unpacking base image...
From-image name or tag wasn't set. Reading the first ID.
Unpacking layer 5731199219418f175d1580dbca05677e69144425b2d9ecb60f416cd57ca3ca42/layer.tar
tar: Removing leading `/' from member names
Unpacking layer e2897bf34bb78c4a65736510204282d9f7ca258ba048c183d665bd0f3d24c5ec/layer.tar
tar: Removing leading `/' from member names
Unpacking layer 420aa5876dca4128cd5256da7dea0948e30ef5971712f82601718cdb0a6b4cda/layer.tar
tar: Removing leading `/' from member names
Unpacking layer ea5f4e620e7906c8ecbc506b5e6f46420e68d4b842c3303260d5eb621b5942e5/layer.tar
tar: Removing leading `/' from member names
Unpacking layer 65807b9abe8ab753fa97da8fb74a21fcd4725cc51e1b679c7973c97acd47ebcf/layer.tar
tar: Removing leading `/' from member names
Unpacking layer b7da2076b60ebc0ea6824ef641978332b8ac908d47b2d07ff31b9cc362245605/layer.tar
Executing post-mount steps...
Packing raw image...
[    1.660036] reboot: Power down
/nix/store/x6a5m7c6zdpqz1d8j7cnzpx9glzzvd2h-hello
```
The following command lists some of the contents of the output to verify that the structure of the archive is as expected:

```shell
$ tar --exclude '*/share/*' --exclude 'nix/store/*/*' -tvf /nix/store/x6a5m7c6zdpqz1d8j7cnzpx9glzzvd2h-hello
drwxr-xr-x root/0           0 1979-12-31 16:00 ./
drwxr-xr-x root/0           0 1979-12-31 16:00 ./bin/
lrwxrwxrwx root/0           0 1979-12-31 16:00 ./bin/hello -> /nix/store/h92a9jd0lhhniv2q417hpwszd4jhys7q-hello-2.12.1/bin/hello
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/store/
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/store/05zbwhz8a7i2v79r9j21pl6m6cj0xi8k-libunistring-1.1/
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/store/ayg5rhjhi9ic73hqw33mjqjxwv59ndym-xgcc-13.2.0-libgcc/
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/store/h92a9jd0lhhniv2q417hpwszd4jhys7q-hello-2.12.1/
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/store/m59xdgkgnjbk8kk6k6vbxmqnf82mk9s0-libidn2-2.3.4/
dr-xr-xr-x root/0           0 1979-12-31 16:00 ./nix/store/p3jshbwxiwifm1py0yq544fmdyy98j8a-glibc-2.38-27/
drwxr-xr-x root/0           0 1979-12-31 16:00 ./share/
```
:::
:::{.example #ex-dockerTools-exportImage-importingDocker}
# Importing an archive built with `dockerTools.exportImage` in Docker

We will use the same package from [](#ex-dockerTools-exportImage-hello) and import it into Docker.

```nix
{ dockerTools, hello }:
dockerTools.exportImage {
  name = "hello";
  fromImage = dockerTools.buildLayeredImage {
    name = "hello";
    contents = [ hello ];
  };
}
```

Building and importing it into Docker:

```shell
$ nix-build
(output removed for clarity)
/nix/store/x6a5m7c6zdpqz1d8j7cnzpx9glzzvd2h-hello
$ docker image import /nix/store/x6a5m7c6zdpqz1d8j7cnzpx9glzzvd2h-hello
sha256:1d42dba415e9b298ea0decf6497fbce954de9b4fcb2984f91e307c8fedc1f52f
$ docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
<none>       <none>    1d42dba415e9   4 seconds ago   32.6MB
```
:::
:::{.example #ex-dockerTools-exportImage-naming}
# Exploring output naming with `dockerTools.exportImage`

`exportImage` does not require a `name` attribute if `fromImage` is a derivation, which means that the following works:

```nix
{ dockerTools, hello }:
dockerTools.exportImage {
  fromImage = dockerTools.buildLayeredImage {
    name = "hello";
    contents = [ hello ];
  };
}
```

However, since [`dockerTools.buildLayeredImage`](#ssec-pkgs-dockerTools-buildLayeredImage)'s output ends with `.tar.gz`, the output of `exportImage` will also end with `.tar.gz`, even though the archive created with `exportImage` is uncompressed:

```shell
$ nix-build
(output removed for clarity)
/nix/store/by3f40xvc4l6bkis74l0fj4zsy0djgkn-hello.tar.gz
$ file /nix/store/by3f40xvc4l6bkis74l0fj4zsy0djgkn-hello.tar.gz
/nix/store/by3f40xvc4l6bkis74l0fj4zsy0djgkn-hello.tar.gz: POSIX tar archive (GNU)
```

If the archive were actually compressed, the output of `file` would have mentioned that fact.
Because of this, it may be important to set a proper `name` attribute when using `exportImage` with other functions from `dockerTools`.
:::
:::{.example #ex-dockerTools-exportImage-fromImagePath}
# Using `dockerTools.exportImage` with a path as `fromImage`

It is possible to use a path as the value of the `fromImage` attribute when calling `dockerTools.exportImage`.
However, when doing so, a `name` attribute **MUST** be specified, or you'll encounter an error when evaluating the Nix code.

For this example, we'll assume a Docker tarball image named `image.tar.gz` exists in the same directory where our package is defined:

```nix
{ dockerTools }:
dockerTools.exportImage {
  name = "filesystem.tar";
  fromImage = ./image.tar.gz;
}
```

Building this will give us the expected output:

```shell
$ nix-build
(output removed for clarity)
/nix/store/w13l8h3nlkg0zv56k7rj0ai0l2zlf7ss-filesystem.tar
```

If you don't specify a `name` attribute, you'll encounter an evaluation error and the package won't build.
:::
## Environment Helpers {#ssec-pkgs-dockerTools-helpers}
|
||||
|
||||
|
@@ -845,6 +1144,18 @@ buildImage {

Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.

When using `buildLayeredImage`, you can put this in `fakeRootCommands` if you `enableFakechroot`:
```nix
buildLayeredImage {
  name = "shadow-layered";

  fakeRootCommands = ''
    ${pkgs.dockerTools.shadowSetup}
  '';
  enableFakechroot = true;
}
```

## fakeNss {#ssec-pkgs-dockerTools-fakeNss}

If your primary goal is providing a basic skeleton for user lookups to work,
@@ -502,9 +502,14 @@ concatScript "my-file" [ file1 file2 ]

## `writeShellApplication` {#trivial-builder-writeShellApplication}

This can be used to easily produce a shell script that has some dependencies (`runtimeInputs`). It automatically sets the `PATH` of the script to contain all of the listed inputs, sets some sanity shellopts (`errexit`, `nounset`, `pipefail`), and checks the resulting script with [`shellcheck`](https://github.com/koalaman/shellcheck).
`writeShellApplication` is similar to `writeShellScriptBin` and `writeScriptBin` but supports runtime dependencies with `runtimeInputs`.
Writes an executable shell script to `/nix/store/<store path>/bin/<name>` and checks its syntax with [`shellcheck`](https://github.com/koalaman/shellcheck) and `bash`'s `-n` option.
Some basic Bash options are set by default (`errexit`, `nounset`, and `pipefail`), but can be overridden with `bashOptions`.

For example, look at the following code:
Extra arguments may be passed to `stdenv.mkDerivation` by setting `derivationArgs`; note that variables set in this manner will be set when the shell script is _built,_ not when it's run.
Runtime environment variables can be set with the `runtimeEnv` argument.

For example, the following shell application can refer to `curl` directly, rather than needing to write `${curl}/bin/curl`:

```nix
writeShellApplication {

@@ -518,10 +523,6 @@ writeShellApplication {
}
```

Unlike with normal `writeShellScriptBin`, there is no need to manually write out `${curl}/bin/curl`; setting the `PATH`
was handled by `writeShellApplication`. Moreover, the script is checked with `shellcheck` for stricter
validation.

## `symlinkJoin` {#trivial-builder-symlinkJoin}

This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, `name`, and `paths`. `name` is the name used in the Nix store path for the created derivation. `paths` is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
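As a complement to the description above, a minimal `symlinkJoin` sketch might look as follows. This is an illustrative assumption, not text from the patch; `hello` and `figlet` are just example packages:

```nix
{ symlinkJoin, hello, figlet }:

symlinkJoin {
  name = "hello-and-figlet";
  # Each listed path is symlinked into the combined output,
  # so bin/, share/, etc. from both packages appear under one store path.
  paths = [ hello figlet ];
}
```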
1 third_party/nixpkgs/doc/default.nix vendored

@@ -25,6 +25,7 @@ let
      { name = "gvariant"; description = "GVariant formatted string serialization functions"; }
      { name = "customisation"; description = "Functions to customise (derivation-related) functions, derivations, or attribute sets"; }
      { name = "meta"; description = "functions for derivation metadata"; }
      { name = "derivations"; description = "miscellaneous derivation-specific functions"; }
    ];
  };
|
@@ -29,7 +29,7 @@ stdenv.mkDerivation {
    mkdir -p "$out"

    cat > "$out/index.md" << 'EOF'
    ```{=include=} sections
    ```{=include=} sections auto-id-prefix=auto-generated
    EOF

    ${lib.concatMapStrings ({ name, baseName ? name, description }: ''
|
@@ -45,6 +45,7 @@ Bash-only variables:
- `postgresqlTestSetupCommands`: bash commands to run after database start, defaults to running `$postgresqlTestSetupSQL` as database administrator.
- `postgresqlEnableTCP`: set to `1` to enable TCP listening. Flaky; not recommended.
- `postgresqlStartCommands`: defaults to `pg_ctl start`.
- `postgresqlExtraSettings`: additional configuration to add to `postgresql.conf`.
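As a hedged sketch of how these hook variables are typically wired into a package's check phase (the package name and setup SQL below are illustrative assumptions, not part of the patch):

```nix
{ stdenv, postgresql, postgresqlTestHook }:

stdenv.mkDerivation {
  pname = "my-app-tests";  # hypothetical package
  version = "0.1";
  src = ./.;

  # The hook starts a throwaway PostgreSQL server before checkPhase.
  nativeCheckInputs = [ postgresql postgresqlTestHook ];

  # Runs as the database administrator after the server starts.
  postgresqlTestSetupSQL = ''
    CREATE ROLE app LOGIN;
  '';

  doCheck = true;
}
```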

## Hooks {#sec-postgresqlTestHook-hooks}
@@ -1,3 +1,3 @@
# Python {#setup-hook-python}

Adds the `lib/${python.libPrefix}/site-packages` subdirectory of each build input to the `PYTHONPATH` environment variable.
Adds the `python.sitePackages` subdirectory (i.e. `lib/pythonX.Y/site-packages`) of each build input to the `PYTHONPATH` environment variable.
@@ -216,7 +216,7 @@ in packages.mixRelease {
Setup will require the following steps:

- Move your secrets to runtime environment variables. For more information refer to the [runtime.exs docs](https://hexdocs.pm/mix/Mix.Tasks.Release.html#module-runtime-configuration). On a fresh Phoenix build that would mean that both `DATABASE_URL` and `SECRET_KEY` need to be moved to `runtime.exs`.
- `cd assets` and `nix-shell -p node2nix --run node2nix --development` will generate a Nix expression containing your frontend dependencies
- `cd assets` and `nix-shell -p node2nix --run "node2nix --development"` will generate a Nix expression containing your frontend dependencies
- commit and push those changes
- you can now `nix-build .`
- To run the release, set the `RELEASE_TMP` environment variable to a directory that your program has write access to. It will be used to store the BEAM settings.
@@ -103,7 +103,7 @@ See the [Dart documentation](#ssec-dart-applications) for more details on requir

flutter.buildFlutterApplication {
  pname = "firmware-updater";
  version = "unstable-2023-04-30";
  version = "0-unstable-2023-04-30";

  # To build for the Web, use the targetFlutterPlatform argument.
  # targetFlutterPlatform = "web";
@@ -144,7 +144,7 @@ in buildDotnetModule rec {

  projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure.

  dotnet-sdk = dotnetCorePackages.sdk_6.0;
  dotnet-sdk = dotnetCorePackages.sdk_6_0;
  dotnet-runtime = dotnetCorePackages.runtime_6_0;

  executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
@@ -70,39 +70,42 @@ compilers like this:
```console
$ nix-env -f '<nixpkgs>' -qaP -A haskell.compiler
haskell.compiler.ghc810                  ghc-8.10.7
haskell.compiler.ghc88                   ghc-8.8.4
haskell.compiler.ghc90                   ghc-9.0.2
haskell.compiler.ghc924                  ghc-9.2.4
haskell.compiler.ghc925                  ghc-9.2.5
haskell.compiler.ghc926                  ghc-9.2.6
haskell.compiler.ghc92                   ghc-9.2.7
haskell.compiler.ghc942                  ghc-9.4.2
haskell.compiler.ghc943                  ghc-9.4.3
haskell.compiler.ghc94                   ghc-9.4.4
haskell.compiler.ghcHEAD                 ghc-9.7.20221224
haskell.compiler.ghc8102Binary           ghc-binary-8.10.2
haskell.compiler.ghc8102BinaryMinimal    ghc-binary-8.10.2
haskell.compiler.ghc8107BinaryMinimal    ghc-binary-8.10.7
haskell.compiler.ghc927                  ghc-9.2.7
haskell.compiler.ghc92                   ghc-9.2.8
haskell.compiler.ghc945                  ghc-9.4.5
haskell.compiler.ghc946                  ghc-9.4.6
haskell.compiler.ghc947                  ghc-9.4.7
haskell.compiler.ghc94                   ghc-9.4.8
haskell.compiler.ghc963                  ghc-9.6.3
haskell.compiler.ghc96                   ghc-9.6.4
haskell.compiler.ghc98                   ghc-9.8.1
haskell.compiler.ghcHEAD                 ghc-9.9.20231121
haskell.compiler.ghc8107Binary           ghc-binary-8.10.7
haskell.compiler.ghc865Binary            ghc-binary-8.6.5
haskell.compiler.ghc924Binary            ghc-binary-9.2.4
haskell.compiler.ghc924BinaryMinimal     ghc-binary-9.2.4
haskell.compiler.integer-simple.ghc810   ghc-integer-simple-8.10.7
haskell.compiler.integer-simple.ghc8107  ghc-integer-simple-8.10.7
haskell.compiler.integer-simple.ghc88    ghc-integer-simple-8.8.4
haskell.compiler.integer-simple.ghc884   ghc-integer-simple-8.8.4
haskell.compiler.integer-simple.ghc810   ghc-integer-simple-8.10.7
haskell.compiler.native-bignum.ghc90     ghc-native-bignum-9.0.2
haskell.compiler.native-bignum.ghc902    ghc-native-bignum-9.0.2
haskell.compiler.native-bignum.ghc924    ghc-native-bignum-9.2.4
haskell.compiler.native-bignum.ghc925    ghc-native-bignum-9.2.5
haskell.compiler.native-bignum.ghc926    ghc-native-bignum-9.2.6
haskell.compiler.native-bignum.ghc92     ghc-native-bignum-9.2.7
haskell.compiler.native-bignum.ghc927    ghc-native-bignum-9.2.7
haskell.compiler.native-bignum.ghc942    ghc-native-bignum-9.4.2
haskell.compiler.native-bignum.ghc943    ghc-native-bignum-9.4.3
haskell.compiler.native-bignum.ghc94     ghc-native-bignum-9.4.4
haskell.compiler.native-bignum.ghc944    ghc-native-bignum-9.4.4
haskell.compiler.native-bignum.ghcHEAD   ghc-native-bignum-9.7.20221224
haskell.compiler.native-bignum.ghc92     ghc-native-bignum-9.2.8
haskell.compiler.native-bignum.ghc928    ghc-native-bignum-9.2.8
haskell.compiler.native-bignum.ghc945    ghc-native-bignum-9.4.5
haskell.compiler.native-bignum.ghc946    ghc-native-bignum-9.4.6
haskell.compiler.native-bignum.ghc947    ghc-native-bignum-9.4.7
haskell.compiler.native-bignum.ghc94     ghc-native-bignum-9.4.8
haskell.compiler.native-bignum.ghc948    ghc-native-bignum-9.4.8
haskell.compiler.native-bignum.ghc963    ghc-native-bignum-9.6.3
haskell.compiler.native-bignum.ghc96     ghc-native-bignum-9.6.4
haskell.compiler.native-bignum.ghc964    ghc-native-bignum-9.6.4
haskell.compiler.native-bignum.ghc98     ghc-native-bignum-9.8.1
haskell.compiler.native-bignum.ghc981    ghc-native-bignum-9.8.1
haskell.compiler.native-bignum.ghcHEAD   ghc-native-bignum-9.9.20231121
haskell.compiler.ghcjs                   ghcjs-8.10.7
```
@@ -1226,12 +1229,14 @@ in
  in
  {
    haskell = lib.recursiveUpdate prev.haskell {
      compiler.${ghcName} = prev.haskell.compiler.${ghcName}.override {
    haskell = prev.haskell // {
      compiler = prev.haskell.compiler // {
        ${ghcName} = prev.haskell.compiler.${ghcName}.override {
          # Unfortunately, the GHC setting is named differently for historical reasons
          enableProfiledLibs = enableProfiling;
        };
      };
    };
  })

  (final: prev:
@@ -1241,8 +1246,9 @@ in
  in
  {
    haskell = lib.recursiveUpdate prev.haskell {
      packages.${ghcName} = prev.haskell.packages.${ghcName}.override {
    haskell = prev.haskell // {
      packages = prev.haskell.packages // {
        ${ghcName} = prev.haskell.packages.${ghcName}.override {
          overrides = hfinal: hprev: {
            mkDerivation = args: hprev.mkDerivation (args // {
              # Since we are forcing our ideas upon mkDerivation, this change will
@@ -1269,6 +1275,7 @@ in
          };
        };
      };
    };
  })
]
```
@@ -2,7 +2,7 @@

In addition to exposing the Idris2 compiler itself, Nixpkgs exposes an `idris2Packages.buildIdris` helper to make it a bit more ergonomic to build Idris2 executables or libraries.

The `buildIdris` function takes a package set that defines at a minimum the `src` and `projectName` of the package to be built and any `idrisLibraries` required to build it. The `src` is the same source you're familiar with but the `projectName` must be the name of the `ipkg` file for the project (omitting the `.ipkg` extension). The `idrisLibraries` is a list of other library derivations created with `buildIdris`. You can optionally specify other derivation properties as needed but sensible defaults for `configurePhase`, `buildPhase`, and `installPhase` are provided.
The `buildIdris` function takes an attribute set that defines at a minimum the `src` and `ipkgName` of the package to be built and any `idrisLibraries` required to build it. The `src` is the same source you're familiar with, and the `ipkgName` must be the name of the `ipkg` file for the project (omitting the `.ipkg` extension). The `idrisLibraries` is a list of other library derivations created with `buildIdris`. You can optionally specify other derivation properties as needed, but sensible defaults for `configurePhase`, `buildPhase`, and `installPhase` are provided.

Importantly, `buildIdris` does not create a single derivation but rather an attribute set with two properties: `executable` and `library`. The `executable` property is a derivation and the `library` property is a function that will return a derivation for the library with or without source code included. Source code need not be included unless you are aiming to use IDE or LSP features that are able to jump to definitions within an editor.

@@ -10,7 +10,7 @@ A simple example of a fully packaged library would be the [`LSP-lib`](https://gi
```nix
{ fetchFromGitHub, idris2Packages }:
let lspLibPkg = idris2Packages.buildIdris {
  projectName = "lsp-lib";
  ipkgName = "lsp-lib";
  src = fetchFromGitHub {
    owner = "idris-community";
    repo = "LSP-lib";

@@ -31,7 +31,7 @@ A slightly more involved example of a fully packaged executable would be the [`i
# Assuming the previous example lives in `lsp-lib.nix`:
let lspLib = callPackage ./lsp-lib.nix { };
    lspPkg = idris2Packages.buildIdris {
      projectName = "idris2-lsp";
      ipkgName = "idris2-lsp";
      src = fetchFromGitHub {
        owner = "idris-community";
        repo = "idris2-lsp";
12 third_party/nixpkgs/doc/preface.chapter.md vendored

@@ -27,18 +27,18 @@ With these expressions the Nix package manager can build binary packages.
Packages, including the Nix packages collection, are distributed through
[channels](https://nixos.org/nix/manual/#sec-channels). The collection is
distributed for users of Nix on non-NixOS distributions through the channel
`nixpkgs`. Users of NixOS generally use one of the `nixos-*` channels, e.g.
`nixos-22.11`, which includes all packages and modules for the stable NixOS
`nixpkgs-unstable`. Users of NixOS generally use one of the `nixos-*` channels,
e.g. `nixos-22.11`, which includes all packages and modules for the stable NixOS
22.11. Stable NixOS releases are generally only given
security updates. More up to date packages and modules are available via the
`nixos-unstable` channel.

Both `nixos-unstable` and `nixpkgs` follow the `master` branch of the Nixpkgs
repository, although both do lag the `master` branch by generally
Both `nixos-unstable` and `nixpkgs-unstable` follow the `master` branch of the
nixpkgs repository, although both do lag the `master` branch by generally
[a couple of days](https://status.nixos.org/). Updates to a channel are
distributed as soon as all tests for that channel pass, e.g.
[this table](https://hydra.nixos.org/job/nixpkgs/trunk/unstable#tabs-constituents)
shows the status of tests for the `nixpkgs` channel.
shows the status of tests for the `nixpkgs-unstable` channel.

The tests are conducted by a cluster called [Hydra](https://nixos.org/hydra/),
which also builds binary packages from the Nix expressions in Nixpkgs for

@@ -46,5 +46,5 @@ which also builds binary packages from the Nix expressions in Nixpkgs for
The binaries are made available via a [binary cache](https://cache.nixos.org).

The current Nix expressions of the channels are available in the
[`nixpkgs`](https://github.com/NixOS/nixpkgs) repository in branches
[nixpkgs repository](https://github.com/NixOS/nixpkgs) in branches
that correspond to the channel names (e.g. `nixos-22.11-small`).
2 third_party/nixpkgs/lib/default.nix vendored

@@ -116,7 +116,7 @@ let
    inherit (self.customisation) overrideDerivation makeOverridable
      callPackageWith callPackagesWith extendDerivation hydraJob
      makeScope makeScopeWithSplicing makeScopeWithSplicing';
    inherit (self.derivations) lazyDerivation;
    inherit (self.derivations) lazyDerivation optionalDrvAttr;
    inherit (self.meta) addMetaAttrs dontDistribute setName updateName
      appendToName mapDerivationAttrset setPrio lowPrio lowPrioSet hiPrio
      hiPrioSet getLicenseFromSpdxId getExe getExe';
26 third_party/nixpkgs/lib/derivations.nix vendored

@@ -98,4 +98,30 @@ in
    # `lazyDerivation` caller knew a shortcut, be taken from there.
    meta = args.meta or checked.meta;
  } // passthru;

  /* Conditionally set a derivation attribute.

     Because `mkDerivation` sets `__ignoreNulls = true`, a derivation
     attribute set to `null` will not impact the derivation output hash.
     Thus, this function passes through its `value` argument if the `cond`
     is `true`, but returns `null` if not.

     Type: optionalDrvAttr :: Bool -> a -> a | Null

     Example:
       (stdenv.mkDerivation {
         name = "foo";
         x = optionalDrvAttr true 1;
         y = optionalDrvAttr false 1;
       }).drvPath == (stdenv.mkDerivation {
         name = "foo";
         x = 1;
       }).drvPath
       => true
  */
  optionalDrvAttr =
    # Condition
    cond:
    # Attribute value
    value: if cond then value else null;
}
19 third_party/nixpkgs/lib/fileset/internal.nix vendored

@@ -5,6 +5,7 @@ let
    isAttrs
    isPath
    isString
    nixVersion
    pathExists
    readDir
    split

@@ -17,6 +18,7 @@ let
    attrNames
    attrValues
    mapAttrs
    optionalAttrs
    zipAttrsWith
    ;

@@ -56,6 +58,7 @@ let
    substring
    stringLength
    hasSuffix
    versionAtLeast
    ;

  inherit (lib.trivial)

@@ -840,6 +843,10 @@ rec {
  # https://github.com/NixOS/nix/commit/55cefd41d63368d4286568e2956afd535cb44018
  _fetchGitSubmodulesMinver = "2.4";

  # Support for `builtins.fetchGit` with `shallow = true` was introduced in 2.4
  # https://github.com/NixOS/nix/commit/d1165d8791f559352ff6aa7348e1293b2873db1c
  _fetchGitShallowMinver = "2.4";

  # Mirrors the contents of a Nix store path relative to a local path as a file set.
  # Some notes:
  # - The store path is read at evaluation time.

@@ -894,7 +901,17 @@ rec {
      # However a simpler alternative still would be [a builtins.gitLsFiles](https://github.com/NixOS/nix/issues/2944).
      fetchResult = fetchGit ({
        url = path;
      } // extraFetchGitAttrs);
      }
      # In older Nix versions, repositories were always assumed to be deep clones, which made `fetchGit` fail for shallow clones
      # For newer versions this was fixed, but the `shallow` flag is required.
      # The only behavioral difference is that for shallow clones, `fetchGit` doesn't return a `revCount`,
      # which we don't need here, so it's fine to always pass it.

      # Unfortunately this means older Nix versions get a poor error message for shallow repositories, and there's no good way to improve that.
      # Checking for `.git/shallow` doesn't seem worth it, especially since that's more of an implementation detail,
      # and would also require more code to handle worktrees where `.git` is a file.
      // optionalAttrs (versionAtLeast nixVersion _fetchGitShallowMinver) { shallow = true; }
      // extraFetchGitAttrs);
    in
    # We can identify local working directories by checking for .git,
    # see https://git-scm.com/docs/gitrepository-layout#_description.
13 third_party/nixpkgs/lib/fileset/tests.sh vendored

@@ -1439,6 +1439,19 @@ if [[ -n "$fetchGitSupportsSubmodules" ]]; then
fi
rm -rf -- *

# shallow = true is not supported on all Nix versions
# and older versions don't support shallow clones at all
if [[ "$(nix-instantiate --eval --expr "$prefixExpression (versionAtLeast builtins.nixVersion _fetchGitShallowMinver)")" == true ]]; then
    createGitRepo full
    # Extra commit such that there's a commit that won't be in the shallow clone
    git -C full commit --allow-empty -q -m extra
    git clone -q --depth 1 "file://${PWD}/full" shallow
    cd shallow
    checkGitTracked
    cd ..
    rm -rf -- *
fi

# Go through all stages of Git files
# See https://www.git-scm.com/book/en/v2/Git-Basics-Recording-Changes-to-the-Repository
5 third_party/nixpkgs/lib/licenses.nix vendored

@@ -337,6 +337,11 @@ in mkLicense lset) ({
    fullName = "Creative Commons Attribution 1.0";
  };

  cc-by-20 = {
    spdxId = "CC-BY-2.0";
    fullName = "Creative Commons Attribution 2.0";
  };

  cc-by-30 = {
    spdxId = "CC-BY-3.0";
    fullName = "Creative Commons Attribution 3.0";
6 third_party/nixpkgs/lib/modules.nix vendored

@@ -1256,7 +1256,7 @@ let
      (opt.highestPrio or defaultOverridePriority)
      (f opt.value);

  doRename = { from, to, visible, warn, use, withPriority ? true }:
  doRename = { from, to, visible, warn, use, withPriority ? true, condition ? true }:
    { config, options, ... }:
    let
      fromOpt = getAttrFromPath from options;

@@ -1272,7 +1272,7 @@ let
      } // optionalAttrs (toType != null) {
        type = toType;
      });
      config = mkMerge [
      config = mkIf condition (mkMerge [
        (optionalAttrs (options ? warnings) {
          warnings = optional (warn && fromOpt.isDefined)
            "The option `${showOption from}' defined in ${showFiles fromOpt.files} has been renamed to `${showOption to}'.";

@@ -1280,7 +1280,7 @@ let
        (if withPriority
          then mkAliasAndWrapDefsWithPriority (setAttrByPath to) fromOpt
          else mkAliasAndWrapDefinitions (setAttrByPath to) fromOpt)
      ];
      ]);
    };

  /* Use this function to import a JSON file as NixOS configuration.
20 third_party/nixpkgs/lib/tests/misc.nix vendored

@@ -1902,7 +1902,7 @@ runTests {
    expected = true;
  };

  # lazyDerivation
  # DERIVATIONS

  testLazyDerivationIsLazyInDerivationForAttrNames = {
    expr = attrNames (lazyDerivation {

@@ -1955,6 +1955,24 @@ runTests {
    expected = derivation;
  };

  testOptionalDrvAttr = let
    mkDerivation = args: derivation (args // {
      builder = "builder";
      system = "system";
      __ignoreNulls = true;
    });
  in {
    expr = (mkDerivation {
      name = "foo";
      x = optionalDrvAttr true 1;
      y = optionalDrvAttr false 1;
    }).drvPath;
    expected = (mkDerivation {
      name = "foo";
      x = 1;
    }).drvPath;
  };

  testTypeDescriptionInt = {
    expr = (with types; int).description;
    expected = "signed integer";
4 third_party/nixpkgs/lib/tests/modules.sh vendored

@@ -101,6 +101,7 @@ checkConfigError 'It seems as if you.re trying to declare an option by placing i
checkConfigError 'It seems as if you.re trying to declare an option by placing it into .config. rather than .options.' config.nest.wrong2 ./error-mkOption-in-config.nix
checkConfigError 'The option .sub.wrong2. does not exist. Definition values:' config.sub ./error-mkOption-in-submodule-config.nix
checkConfigError '.*This can happen if you e.g. declared your options in .types.submodule.' config.sub ./error-mkOption-in-submodule-config.nix
checkConfigError '.*A definition for option .bad. is not of type .non-empty .list of .submodule...\.' config.bad ./error-nonEmptyListOf-submodule.nix

# types.pathInStore
checkConfigOutput '".*/store/0lz9p8xhf89kb1c1kk6jxrzskaiygnlh-bash-5.2-p15.drv"' config.pathInStore.ok1 ./types.nix

@@ -464,6 +465,9 @@ checkConfigOutput '^1234$' config.c.d.e ./doRename-basic.nix
checkConfigOutput '^"The option `a\.b. defined in `.*/doRename-warnings\.nix. has been renamed to `c\.d\.e.\."$' \
  config.result \
  ./doRename-warnings.nix
checkConfigOutput "^true$" config.result ./doRename-condition.nix ./doRename-condition-enable.nix
checkConfigOutput "^true$" config.result ./doRename-condition.nix ./doRename-condition-no-enable.nix
checkConfigOutput "^true$" config.result ./doRename-condition.nix ./doRename-condition-migrated.nix

# Anonymous modules get deduplicated by key
checkConfigOutput '^"pear"$' config.once.raw ./merge-module-with-key.nix
10 third_party/nixpkgs/lib/tests/modules/doRename-condition-enable.nix vendored Normal file

@@ -0,0 +1,10 @@
{ config, lib, ... }:
{
  config = {
    services.foo.enable = true;
    services.foo.bar = "baz";
    result =
      assert config.services.foos == { "" = { bar = "baz"; }; };
      true;
  };
}

10 third_party/nixpkgs/lib/tests/modules/doRename-condition-migrated.nix vendored Normal file

@@ -0,0 +1,10 @@
{ config, lib, ... }:
{
  config = {
    services.foos."".bar = "baz";
    result =
      assert config.services.foos == { "" = { bar = "baz"; }; };
      assert config.services.foo.bar == "baz";
      true;
  };
}
9 third_party/nixpkgs/lib/tests/modules/doRename-condition-no-enable.nix vendored Normal file

@@ -0,0 +1,9 @@
{ config, lib, options, ... }:
{
  config = {
    result =
      assert config.services.foos == { };
      assert ! options.services.foo.bar.isDefined;
      true;
  };
}

42 third_party/nixpkgs/lib/tests/modules/doRename-condition.nix vendored Normal file

@@ -0,0 +1,42 @@
/*
  Simulate a migration from a single-instance `services.foo` to a multi-instance
  `services.foos.<name>` module, where `name = ""` serves as the legacy /
  compatibility instance.

  - No instances must exist, unless one is defined in the multi-instance module,
    or if the legacy enable option is set to true.
  - The legacy instance options must be renamed to the new instance, if it exists.

  The relevant scenarios are tested in separate files:
  - ./doRename-condition-enable.nix
  - ./doRename-condition-no-enable.nix
*/
{ config, lib, ... }:
let
  inherit (lib) mkOption mkEnableOption types doRename;
in
{
  options = {
    services.foo.enable = mkEnableOption "foo";
    services.foos = mkOption {
      type = types.attrsOf (types.submodule {
        options = {
          bar = mkOption { type = types.str; };
        };
      });
      default = { };
    };
    result = mkOption {};
  };
  imports = [
    (doRename {
      from = [ "services" "foo" "bar" ];
      to = [ "services" "foos" "" "bar" ];
      visible = true;
      warn = false;
      use = x: x;
      withPriority = true;
      condition = config.services.foo.enable;
    })
  ];
}
7
third_party/nixpkgs/lib/tests/modules/error-nonEmptyListOf-submodule.nix
vendored
Normal file
7
third_party/nixpkgs/lib/tests/modules/error-nonEmptyListOf-submodule.nix
vendored
Normal file
|
@ -0,0 +1,7 @@
|
|||
{ lib, ... }:
|
||||
{
|
||||
options.bad = lib.mkOption {
|
||||
type = lib.types.nonEmptyListOf (lib.types.submodule { });
|
||||
default = [ ];
|
||||
};
|
||||
}
|
|
@ -0,0 +1 @@
|
|||
{ }
|
|
@ -0,0 +1 @@
|
|||
{ }
|
2 third_party/nixpkgs/lib/trivial.nix vendored

@@ -189,7 +189,7 @@ in {
     they take effect as soon as the oldest release reaches end of life. */
  oldestSupportedRelease =
    # Update on master only. Do not backport.
    2305;
    2311;

  /* Whether a feature is supported in all supported releases (at the time of
     release branch-off, if applicable). See `oldestSupportedRelease`. */
1 third_party/nixpkgs/lib/types.nix vendored

@@ -557,6 +557,7 @@ rec {
  in list // {
    description = "non-empty ${optionDescriptionPhrase (class: class == "noun") list}";
    emptyValue = { }; # no .value attr, meaning unset
    substSubModules = m: nonEmptyListOf (elemType.substSubModules m);
  };

  attrsOf = elemType: mkOptionType rec {
255 third_party/nixpkgs/maintainers/maintainer-list.nix vendored

@@ -60,6 +60,18 @@
  See `./scripts/check-maintainer-github-handles.sh` for an example on how to work with this data.
*/
{
  _0b11stan = {
    name = "Tristan Auvinet Pinaudeau";
    email = "tristan@tic.sh";
    github = "0b11stan";
    githubId = 27831931;
  };
  _0nyr = {
    email = "onyr.maintainer@gmail.com";
    github = "0nyr";
    githubId = 47721040;
    name = "Florian Rascoussier";
  };
  _0qq = {
    email = "0qqw0qqw@gmail.com";
    github = "0qq";

@@ -1408,6 +1420,20 @@
      fingerprint = "7083 E268 4BFD 845F 2B84 9E74 B695 8918 ED23 32CE";
    }];
  };
  applejag = {
    email = "applejag.luminance905@passmail.com";
    github = "applejag";
    githubId = 2477952;
    name = "Kalle Fagerberg";
    keys = [
      {
        fingerprint = "F68E 6DB3 79FB 1FF0 7C72 6479 9874 DEDD 3592 5ED0";
      }
      {
        fingerprint = "8DDB 3994 0A34 4FE5 4F3B 3E77 F161 001D EE78 1051";
      }
    ];
  };
  applePrincess = {
    email = "appleprincess@appleprincess.io";
    github = "applePrincess";

@@ -1890,6 +1916,12 @@
    githubId = 1217745;
    name = "Aldwin Vlasblom";
  };
  averagebit = {
    email = "averagebit@pm.me";
    github = "averagebit";
    githubId = 97070581;
    name = "averagebit";
  };
  averelld = {
    email = "averell+nixos@rxd4.com";
    github = "averelld";

@@ -2297,6 +2329,12 @@
      fingerprint = "D35E C9CE E631 638F F1D8 B401 6F0E 410D C3EE D02";
    }];
  };
  benhiemer = {
    name = "Benedikt Hiemer";
    email = "ben.email@posteo.de";
    github = "benhiemer";
    githubId = 16649926;
  };
  benjaminedwardwebb = {
    name = "Ben Webb";
    email = "benjaminedwardwebb@gmail.com";

@@ -2781,6 +2819,12 @@
    githubId = 40476330;
    name = "brokenpip3";
  };
  brpaz = {
    email = "oss@brunopaz.dev";
    github = "brpaz";
    githubId = 184563;
    name = "Bruno Paz";
  };
  bryanasdev000 = {
    email = "bryanasdev000@gmail.com";
    matrix = "@bryanasdev000:matrix.org";

@@ -3239,6 +3283,9 @@
    github = "LostAttractor";
    githubId = 46527539;
    name = "ChaosAttractor";
    keys = [{
      fingerprint = "A137 4415 DB7C 6439 10EA 5BF1 0FEE 4E47 5940 E125";
    }];
  };
  charlesbaynham = {
    email = "charlesbaynham@gmail.com";

@@ -3318,6 +3365,13 @@
    githubId = 4526429;
    name = "Philipp Dargel";
  };
  chito = {
    email = "iamchito@protonmail.com";
    github = "chitochi";
    githubId = 153365419;
    matrix = "@chito:nichijou.dev";
    name = "Chito";
  };
  chivay = {
    email = "hubert.jasudowicz@gmail.com";
    github = "chivay";

@@ -4350,6 +4404,15 @@
    githubId = 3179832;
    name = "D. Bohdan";
  };
  dbrgn = {
    email = "nix@dbrgn.ch";
    github = "dbrgn";
    githubId = 105168;
    name = "Danilo B.";
    keys = [{
      fingerprint = "20EE 002D 778A E197 EF7D 0D2C B993 FF98 A90C 9AB1";
    }];
  };
  dbrock = {
    email = "daniel@brockman.se";
    github = "dbrock";

@@ -4600,6 +4663,12 @@
    githubId = 30475873;
    name = "Andrei Hava";
  };
  devplayer0 = {
    email = "dev@nul.ie";
    github = "devplayer0";
    githubId = 1427254;
    name = "Jack O'Sullivan";
  };
  devusb = {
    email = "mhelton@devusb.us";
|
||||
github = "devusb";
|
||||
|
@ -4885,6 +4954,14 @@
|
|||
fingerprint = "EE7D 158E F9E7 660E 0C33 86B2 8FC5 F7D9 0A5D 8F4D";
|
||||
}];
|
||||
};
|
||||
donteatoreo = {
|
||||
name = "DontEatOreo";
|
||||
github = "DontEatOreo";
|
||||
githubId = 57304299;
|
||||
keys = [{
|
||||
fingerprint = "33CD 5C0A 673C C54D 661E 5E4C 0DB5 361B EEE5 30AB";
|
||||
}];
|
||||
};
|
||||
doriath = {
|
||||
email = "tomasz.zurkowski@gmail.com";
|
||||
github = "doriath";
|
||||
|
@ -5245,6 +5322,13 @@
|
|||
github = "edlimerkaj";
|
||||
githubId = 71988351;
|
||||
};
|
||||
edmundmiller = {
|
||||
name = "Edmund Miller";
|
||||
email = "git@edmundmiller.dev";
|
||||
matrix = "@emiller:beeper.com";
|
||||
github = "edmundmiller";
|
||||
githubId = 20095261;
|
||||
};
|
||||
edrex = {
|
||||
email = "ericdrex@gmail.com";
|
||||
github = "edrex";
|
||||
|
@ -5468,6 +5552,12 @@
|
|||
githubId = 428026;
|
||||
name = "embr";
|
||||
};
|
||||
emilioziniades = {
|
||||
email = "emilioziniades@protonmail.com";
|
||||
github = "emilioziniades";
|
||||
githubId = 75438244;
|
||||
name = "Emilio Ziniades";
|
||||
};
|
||||
emily = {
|
||||
email = "nixpkgs@emily.moe";
|
||||
github = "emilazy";
|
||||
|
@ -6608,6 +6698,12 @@
|
|||
githubId = 293586;
|
||||
name = "Adam Gamble";
|
||||
};
|
||||
gangaram = {
|
||||
email = "Ganga.Ram@tii.ae";
|
||||
github = "gangaram-tii";
|
||||
githubId = 131853076;
|
||||
name = "Ganga Ram";
|
||||
};
|
||||
garaiza-93 = {
|
||||
email = "araizagustavo93@gmail.com";
|
||||
github = "garaiza-93";
|
||||
|
@ -6918,6 +7014,12 @@
|
|||
email = "nix@quidecco.pl";
|
||||
name = "Isidor Zeuner";
|
||||
};
|
||||
gmacon = {
|
||||
name = "George Macon";
|
||||
matrix = "@gmacon:matrix.org";
|
||||
github = "gmacon";
|
||||
githubId = 238853;
|
||||
};
|
||||
gmemstr = {
|
||||
email = "git@gmem.ca";
|
||||
github = "gmemstr";
|
||||
|
@ -9585,6 +9687,11 @@
|
|||
matrix = "@katexochen:matrix.org";
|
||||
name = "Paul Meyer";
|
||||
};
|
||||
katrinafyi = {
|
||||
name = "katrinafyi";
|
||||
github = "katrinafyi";
|
||||
githubId = 39479354;
|
||||
};
|
||||
kayhide = {
|
||||
email = "kayhide@gmail.com";
|
||||
github = "kayhide";
|
||||
|
@ -10046,6 +10153,12 @@
|
|||
githubId = 264372;
|
||||
name = "Jan van den Berg";
|
||||
};
|
||||
koppor = {
|
||||
email = "kopp.dev@gmail.com";
|
||||
github = "koppor";
|
||||
githubId = 1366654;
|
||||
name = "Oliver Kopp";
|
||||
};
|
||||
koral = {
|
||||
email = "koral@mailoo.org";
|
||||
github = "k0ral";
|
||||
|
@ -10136,6 +10249,13 @@
|
|||
githubId = 22116767;
|
||||
name = "Kritnich";
|
||||
};
|
||||
krloer = {
|
||||
email = "kriloneri@gmail.com";
|
||||
github = "krloer";
|
||||
githubId = 45591621;
|
||||
name = "Kristoffer Longva Eriksen";
|
||||
matrix = "@krisleri:pvv.ntnu.no";
|
||||
};
|
||||
kroell = {
|
||||
email = "nixosmainter@makroell.de";
|
||||
github = "rokk4";
|
||||
|
@ -10478,6 +10598,14 @@
|
|||
githubId = 31388299;
|
||||
name = "Leonardo Eugênio";
|
||||
};
|
||||
leo248 = {
|
||||
github = "leo248";
|
||||
githubId = 95365184;
|
||||
keys = [{
|
||||
fingerprint = "81E3 418D C1A2 9687 2C4D 96DC BB1A 818F F295 26D2";
|
||||
}];
|
||||
name = "leo248";
|
||||
};
|
||||
leo60228 = {
|
||||
email = "leo@60228.dev";
|
||||
matrix = "@leo60228:matrix.org";
|
||||
|
@ -10576,6 +10704,15 @@
|
|||
githubId = 1769386;
|
||||
name = "Liam Diprose";
|
||||
};
|
||||
liassica = {
|
||||
email = "git-commit.jingle869@aleeas.com";
|
||||
github = "Liassica";
|
||||
githubId = 115422798;
|
||||
name = "Liassica";
|
||||
keys = [{
|
||||
fingerprint = "83BE 3033 6164 B971 FA82 7036 0D34 0E59 4980 7BDD";
|
||||
}];
|
||||
};
|
||||
liberatys = {
|
||||
email = "liberatys@hey.com";
|
||||
name = "Nick Anthony Flueckiger";
|
||||
|
@ -11172,6 +11309,12 @@
|
|||
githubId = 7910815;
|
||||
name = "Alex McGrath";
|
||||
};
|
||||
lychee = {
|
||||
email = "itslychee+nixpkgs@protonmail.com";
|
||||
githubId = 82718618;
|
||||
github = "itslychee";
|
||||
name = "Lychee";
|
||||
};
|
||||
lynty = {
|
||||
email = "ltdong93+nix@gmail.com";
|
||||
github = "Lynty";
|
||||
|
@ -11209,6 +11352,15 @@
|
|||
githubId = 42545625;
|
||||
name = "Maas Lalani";
|
||||
};
|
||||
mabster314 = {
|
||||
name = "Max Haland";
|
||||
email = "max@haland.org";
|
||||
github = "mabster314";
|
||||
githubId = 5741741;
|
||||
keys = [{
|
||||
fingerprint = "71EF 8F1F 0C24 8B4D 5CDC 1B47 74B3 D790 77EE 37A8";
|
||||
}];
|
||||
};
|
||||
macalinao = {
|
||||
email = "me@ianm.com";
|
||||
name = "Ian Macalinao";
|
||||
|
@ -11352,6 +11504,12 @@
|
|||
githubId = 346094;
|
||||
name = "Michael Alyn Miller";
|
||||
};
|
||||
mandos = {
|
||||
email = "marek.maksimczyk@mandos.net.pl";
|
||||
github = "mandos";
|
||||
githubId = 115060;
|
||||
name = "Marek Maksimczyk";
|
||||
};
|
||||
mangoiv = {
|
||||
email = "contact@mangoiv.com";
|
||||
github = "mangoiv";
|
||||
|
@ -11948,6 +12106,12 @@
|
|||
githubId = 4641445;
|
||||
name = "Carlo Nucera";
|
||||
};
|
||||
medv = {
|
||||
email = "mikhail.advent@gmail.com";
|
||||
github = "medv";
|
||||
githubId = 1631737;
|
||||
name = "Mikhail Medvedev";
|
||||
};
|
||||
megheaiulian = {
|
||||
email = "iulian.meghea@gmail.com";
|
||||
github = "megheaiulian";
|
||||
|
@ -12310,6 +12474,12 @@
|
|||
githubId = 92937;
|
||||
name = "Breland Miley";
|
||||
};
|
||||
minersebas = {
|
||||
email = "scherthan_sebastian@web.de";
|
||||
github = "MinerSebas";
|
||||
githubId = 66798382;
|
||||
name = "Sebastian Maximilian Scherthan";
|
||||
};
|
||||
minijackson = {
|
||||
email = "minijackson@riseup.net";
|
||||
github = "minijackson";
|
||||
|
@ -13037,6 +13207,12 @@
|
|||
githubId = 1222539;
|
||||
name = "Roman Naumann";
|
||||
};
|
||||
nanotwerp = {
|
||||
email = "nanotwerp@gmail.com";
|
||||
github = "nanotwerp";
|
||||
githubId = 17240342;
|
||||
name = "Nano Twerpus";
|
||||
};
|
||||
naphta = {
|
||||
github = "naphta";
|
||||
githubId = 6709831;
|
||||
|
@ -14438,6 +14614,12 @@
|
|||
github = "pbsds";
|
||||
githubId = 140964;
|
||||
};
|
||||
pca006132 = {
|
||||
name = "pca006132";
|
||||
email = "john.lck40@gmail.com";
|
||||
github = "pca006132";
|
||||
githubId = 12198657;
|
||||
};
|
||||
pcarrier = {
|
||||
email = "pc@rrier.ca";
|
||||
github = "pcarrier";
|
||||
|
@ -14480,6 +14662,11 @@
|
|||
github = "pennae";
|
||||
githubId = 82953136;
|
||||
};
|
||||
peret = {
|
||||
name = "Peter Retzlaff";
|
||||
github = "peret";
|
||||
githubId = 617977;
|
||||
};
|
||||
periklis = {
|
||||
email = "theopompos@gmail.com";
|
||||
github = "periklis";
|
||||
|
@ -15055,6 +15242,16 @@
|
|||
githubId = 11898437;
|
||||
name = "Florian Ströger";
|
||||
};
|
||||
presto8 = {
|
||||
name = "Preston Hunt";
|
||||
email = "me@prestonhunt.com";
|
||||
matrix = "@presto8:matrix.org";
|
||||
github = "presto8";
|
||||
githubId = 246631;
|
||||
keys = [{
|
||||
fingerprint = "3E46 7EF1 54AA A1D0 C7DF A694 E45C B17F 1940 CA52";
|
||||
}];
|
||||
};
|
||||
priegger = {
|
||||
email = "philipp@riegger.name";
|
||||
github = "priegger";
|
||||
|
@ -15357,7 +15554,7 @@
|
|||
name = "Jonathan Wright";
|
||||
};
|
||||
quantenzitrone = {
|
||||
email = "quantenzitrone@protonmail.com";
|
||||
email = "nix@dev.quantenzitrone.eu";
|
||||
github = "quantenzitrone";
|
||||
githubId = 74491719;
|
||||
matrix = "@quantenzitrone:matrix.org";
|
||||
|
@ -15724,6 +15921,12 @@
|
|||
githubId = 801525;
|
||||
name = "rembo10";
|
||||
};
|
||||
remexre = {
|
||||
email = "nathan+nixpkgs@remexre.com";
|
||||
github = "remexre";
|
||||
githubId = 4196789;
|
||||
name = "Nathan Ringo";
|
||||
};
|
||||
renatoGarcia = {
|
||||
email = "fgarcia.renato@gmail.com";
|
||||
github = "renatoGarcia";
|
||||
|
@ -15790,6 +15993,11 @@
|
|||
githubId = 811827;
|
||||
name = "Gabriel Lievano";
|
||||
};
|
||||
rgri = {
|
||||
name = "shortcut";
|
||||
github = "rgri";
|
||||
githubId = 45253749;
|
||||
};
|
||||
rgrinberg = {
|
||||
name = "Rudi Grinberg";
|
||||
email = "me@rgrinberg.com";
|
||||
|
@ -16053,7 +16261,7 @@
|
|||
name = "Robert Walter";
|
||||
};
|
||||
roconnor = {
|
||||
email = "roconnor@theorem.ca";
|
||||
email = "roconnor@r6.ca";
|
||||
github = "roconnor";
|
||||
githubId = 852967;
|
||||
name = "Russell O'Connor";
|
||||
|
@ -16723,12 +16931,6 @@
|
|||
fingerprint = "E173 237A C782 296D 98F5 ADAC E13D FD4B 4712 7951";
|
||||
}];
|
||||
};
|
||||
scubed2 = {
|
||||
email = "scubed2@gmail.com";
|
||||
github = "scubed2";
|
||||
githubId = 7401858;
|
||||
name = "Sterling Stein";
|
||||
};
|
||||
sdier = {
|
||||
email = "scott@dier.name";
|
||||
matrix = "@sdier:matrix.org";
|
||||
|
@ -17143,6 +17345,12 @@
|
|||
github = "shymega";
|
||||
githubId = 1334592;
|
||||
};
|
||||
siddharthdhakane = {
|
||||
email = "siddharthdhakane@gmail.com";
|
||||
github = "siddharthdhakane";
|
||||
githubId = 28101092;
|
||||
name = "Siddharth Dhakane";
|
||||
};
|
||||
siddharthist = {
|
||||
email = "langston.barrett@gmail.com";
|
||||
github = "langston-barrett";
|
||||
|
@ -17935,6 +18143,12 @@
|
|||
githubId = 38893265;
|
||||
name = "StrikerLulu";
|
||||
};
|
||||
struan = {
|
||||
email = "contact@struanrobertson.co.uk";
|
||||
github = "struan-robertson";
|
||||
githubId = 7543617;
|
||||
name = "Struan Robertson";
|
||||
};
|
||||
stteague = {
|
||||
email = "stteague505@yahoo.com";
|
||||
github = "stteague";
|
||||
|
@ -18955,6 +19169,7 @@
|
|||
tomasajt = {
|
||||
github = "TomaSajt";
|
||||
githubId = 62384384;
|
||||
matrix = "@tomasajt:matrix.org";
|
||||
name = "TomaSajt";
|
||||
keys = [{
|
||||
fingerprint = "8CA9 8016 F44D B717 5B44 6032 F011 163C 0501 22A1";
|
||||
|
@ -20376,6 +20591,22 @@
|
|||
githubId = 13489144;
|
||||
name = "Calle Rosenquist";
|
||||
};
|
||||
xbz = {
|
||||
email = "renatochavez7@gmail.com";
|
||||
github = "Xbz-24";
|
||||
githubId = 68678258;
|
||||
name = "Renato German Chavez Chicoma";
|
||||
};
|
||||
xddxdd = {
|
||||
email = "b980120@hotmail.com";
|
||||
github = "xddxdd";
|
||||
githubId = 5778879;
|
||||
keys = [
|
||||
{ fingerprint = "2306 7C13 B6AE BDD7 C0BB 5673 27F3 1700 E751 EC22"; }
|
||||
{ fingerprint = "B195 E8FB 873E 6020 DCD1 C0C6 B50E C319 385F CB0D"; }
|
||||
];
|
||||
name = "Yuhui Xu";
|
||||
};
|
||||
xdhampus = {
|
||||
name = "Hampus";
|
||||
github = "xdHampus";
|
||||
|
@ -20396,7 +20627,6 @@
|
|||
};
|
||||
xfix = {
|
||||
email = "kamila@borowska.pw";
|
||||
matrix = "@xfix:matrix.org";
|
||||
github = "KamilaBorowska";
|
||||
githubId = 1297598;
|
||||
name = "Kamila Borowska";
|
||||
|
@ -20570,6 +20800,13 @@
|
|||
githubId = 11229748;
|
||||
name = "Lin Yinfeng";
|
||||
};
|
||||
yisraeldov = {
|
||||
email = "lebow@lebowtech.com";
|
||||
name = "Yisrael Dov Lebow";
|
||||
github = "yisraeldov";
|
||||
githubId = 138219;
|
||||
matrix = "@yisraeldov:matrix.org";
|
||||
};
|
||||
yisuidenghua = {
|
||||
email = "bileiner@gmail.com";
|
||||
name = "Milena Yisui";
|
||||
|
|
85
third_party/nixpkgs/maintainers/scripts/bootstrap-files/README.md
vendored
Normal file
|
@ -0,0 +1,85 @@
# Bootstrap files

Currently `nixpkgs` builds most of its packages using bootstrap seed
binaries (without relying on external inputs):

- `bootstrap-tools`: an archive with the compiler toolchain and other
  helper tools, enough to build the rest of `nixpkgs`.
- initial binaries needed to unpack `bootstrap-tools.*`. On `linux`
  it's just `busybox`; on `darwin` it's `sh`, `bzip2`, `mkdir` and
  `cpio`. These binaries can be executed directly from the store.

These are called "bootstrap files".

Bootstrap files should always be fetched from hydra and uploaded to
`tarballs.nixos.org` to guarantee that all the binaries were built from
the code committed into the `nixpkgs` repository.

The uploads to `tarballs.nixos.org` are done by `@lovesegfault` today.

This document describes the procedure for updating bootstrap files in
`nixpkgs`.

## How to request the bootstrap seed update

To get the tarballs updated, let's use the `i686-unknown-linux-gnu`
target as an example:

1. Create a local update:

   ```
   $ maintainers/scripts/bootstrap-files/refresh-tarballs.bash --commit --targets=i686-unknown-linux-gnu
   ```

2. Test the update locally, e.g. by building the local `hello`
   derivation with the result:

   ```
   $ nix-build -A hello --argstr system i686-linux
   ```

   To validate cross targets, the `binfmt` `NixOS` helper can be useful.
   For `riscv64-unknown-linux-gnu` the `/etc/nixos/configuration.nix`
   entry would be `boot.binfmt.emulatedSystems = [ "riscv64-linux" ]`.

3. Propose the commit as a PR to update bootstrap tarballs, tag people
   who can help you test the updated architecture, and once it is
   reviewed tag `@lovesegfault` to upload the tarballs.

## Bootstrap files job definitions

There are two types of bootstrap files:

- natively built `stdenvBootstrapTools.build` hydra jobs in the
  [`nixpkgs:trunk`](https://hydra.nixos.org/jobset/nixpkgs/trunk#tabs-jobs)
  jobset. An incomplete list of examples:

  * `aarch64-unknown-linux-musl.nix`
  * `i686-unknown-linux-gnu.nix`

  These are Tier 1 hydra platforms.

- cross-built `bootstrapTools.build` hydra jobs in the
  [`nixpkgs:cross-trunk`](https://hydra.nixos.org/jobset/nixpkgs/cross-trunk#tabs-jobs)
  jobset. An incomplete list of examples:

  * `mips64el-unknown-linux-gnuabi64.nix`
  * `mips64el-unknown-linux-gnuabin32.nix`
  * `mipsel-unknown-linux-gnu.nix`
  * `powerpc64le-unknown-linux-gnu.nix`
  * `riscv64-unknown-linux-gnu.nix`

  These are usually Tier 2 and lower targets.

The `.build` job contains an `/on-server/` subdirectory with the binaries
to be uploaded to `tarballs.nixos.org`.
The files are uploaded to `tarballs.nixos.org` by writers to the `S3` store.

## TODOs

- The `pkgs/stdenv/darwin` file layout is slightly different from
  `pkgs/stdenv/linux`. Once the `linux` seed update becomes a routine we
  can bring `darwin` in sync if it's feasible.
- The `darwin` definition of the `.build` `on-server/` directory layout
  differs and should be updated.
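As a rough illustration of the jobset/job mapping described in the job definitions above, here is a minimal shell sketch (not part of the repository; the target lists are abbreviated, and job names follow the hydra naming used above):

```shell
#!/usr/bin/env bash
# Sketch: map a target triple to its hydra jobset and job name, as the
# "Bootstrap files job definitions" section describes. Lists abbreviated.
NATIVE_TARGETS=(i686-unknown-linux-gnu x86_64-unknown-linux-gnu)
CROSS_TARGETS=(riscv64-unknown-linux-gnu powerpc64le-unknown-linux-gnu)

job_for() {
    local t target=$1
    for t in "${NATIVE_TARGETS[@]}"; do
        [[ $t == "$target" ]] && { echo "nixpkgs/trunk stdenvBootstrapTools.$target.build"; return 0; }
    done
    for t in "${CROSS_TARGETS[@]}"; do
        [[ $t == "$target" ]] && { echo "nixpkgs/cross-trunk bootstrapTools.$target.build"; return 0; }
    done
    return 1
}

job_for i686-unknown-linux-gnu      # native target -> nixpkgs/trunk
job_for riscv64-unknown-linux-gnu   # cross target  -> nixpkgs/cross-trunk
```

Any target not in either list is rejected, which is why the refresher script asks you to add new targets to one of the two lists first.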
282
third_party/nixpkgs/maintainers/scripts/bootstrap-files/refresh-tarballs.bash
vendored
Executable file
|
@ -0,0 +1,282 @@
|
|||
#!/usr/bin/env nix-shell
|
||||
#! nix-shell --pure
|
||||
#! nix-shell -i bash
|
||||
#! nix-shell -p curl cacert
|
||||
#! nix-shell -p git
|
||||
#! nix-shell -p nix
|
||||
#! nix-shell -p jq
|
||||
|
||||
# How the refresher works:
|
||||
#
|
||||
# For a given list of <targets>:
|
||||
# 1. fetch latest successful '.build` job
|
||||
# 2. fetch oldest evaluation that contained that '.build', extract nixpkgs commit
|
||||
# 3. fetch all the `.build` artifacts from '$out/on-server/' directory
|
||||
# 4. calculate hashes and craft the commit message with the details on
|
||||
# how to upload the result to 'tarballs.nixos.org'
|
||||
|
||||
usage() {
|
||||
cat >&2 <<EOF
|
||||
Usage:
|
||||
$0 [ --commit ] --targets=<target>[,<target>,...]
|
||||
|
||||
The tool must be run from the root directory of the 'nixpkgs' repository.
|
||||
|
||||
Synopsis:
|
||||
'refresh-tarballs.bash' script fetches latest bootstrapFiles built
|
||||
by hydra, registers them in 'nixpkgs' and provides commands to
|
||||
upload seed files to 'tarballs.nixos.org'.
|
||||
|
||||
This is usually done in the following cases:
|
||||
|
||||
1. Single target fix: current bootstrap files for a single target
|
||||
are problematic for some reason (target-specific bug). In this
|
||||
case we can refresh just that target as:
|
||||
|
||||
\$ $0 --commit --targets=i686-unknown-linux-gnu
|
||||
|
||||
2. Routine refresh: all bootstrap files should be refreshed to avoid
|
||||
debugging problems that only occur on very old binaries.
|
||||
|
||||
\$ $0 --commit --all-targets
|
||||
|
||||
To get help on uploading refreshed binaries to 'tarballs.nixos.org'
|
||||
please have a look at <maintainers/scripts/bootstrap-files/README.md>.
|
||||
EOF
|
||||
exit 1
|
||||
}
|
||||
|
||||
# log helpers
|
||||
|
||||
die() {
|
||||
echo "ERROR: $*" >&2
|
||||
exit 1
|
||||
}
|
||||
|
||||
info() {
|
||||
echo "INFO: $*" >&2
|
||||
}
|
||||
|
||||
[[ ${#@} -eq 0 ]] && usage
|
||||
|
||||
# known targets
|
||||
|
||||
NATIVE_TARGETS=(
|
||||
aarch64-unknown-linux-gnu
|
||||
aarch64-unknown-linux-musl
|
||||
i686-unknown-linux-gnu
|
||||
x86_64-unknown-linux-gnu
|
||||
x86_64-unknown-linux-musl
|
||||
|
||||
# TODO: add darwin here once a few prerequisites are satisfied:
|
||||
# - bootstrap-files are factored out into a separate file
|
||||
# - the build artifacts are factored out into an `on-server`
|
||||
#   directory. Right now it does not match the `linux` layout.
|
||||
#
|
||||
#aarch64-apple-darwin
|
||||
#x86_64-apple-darwin
|
||||
)
|
||||
|
||||
is_native() {
|
||||
local t target=$1
|
||||
for t in "${NATIVE_TARGETS[@]}"; do
|
||||
[[ $t == $target ]] && return 0
|
||||
done
|
||||
return 1
|
||||
}
|
||||
|
||||
CROSS_TARGETS=(
|
||||
armv5tel-unknown-linux-gnueabi
|
||||
armv6l-unknown-linux-gnueabihf
|
||||
armv6l-unknown-linux-musleabihf
|
||||
armv7l-unknown-linux-gnueabihf
|
||||
mips64el-unknown-linux-gnuabi64
|
||||
mips64el-unknown-linux-gnuabin32
|
||||
mipsel-unknown-linux-gnu
|
||||
powerpc64le-unknown-linux-gnu
|
||||
riscv64-unknown-linux-gnu
|
||||
)
|
||||
|
||||
is_cross() {
|
||||
local t target=$1
|
||||
for t in "${CROSS_TARGETS[@]}"; do
|
||||
[[ $t == $target ]] && return 0
|
||||
done
|
||||
return 1
|
||||
}
|
||||
|
||||
# collect passed options
|
||||
|
||||
targets=()
|
||||
commit=no
|
||||
|
||||
for arg in "$@"; do
|
||||
case "$arg" in
|
||||
--all-targets)
|
||||
targets+=(
|
||||
${CROSS_TARGETS[@]}
|
||||
${NATIVE_TARGETS[@]}
|
||||
)
|
||||
;;
|
||||
--targets=*)
|
||||
# Convert "--targets=a,b,c" to targets=(a b c) bash array.
|
||||
comma_targets=${arg#--targets=}
|
||||
targets+=(${comma_targets//,/ })
|
||||
;;
|
||||
--commit)
|
||||
commit=yes
|
||||
;;
|
||||
*)
|
||||
usage
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
for target in "${targets[@]}"; do
|
||||
# Native and cross jobsets differ a bit. We'll have to pick the
|
||||
# one based on target name:
|
||||
if is_native $target; then
|
||||
jobset=nixpkgs/trunk
|
||||
job="stdenvBootstrapTools.${target}.build"
|
||||
elif is_cross $target; then
|
||||
jobset=nixpkgs/cross-trunk
|
||||
job="bootstrapTools.${target}.build"
|
||||
else
|
||||
die "'$target' is not present in either of 'NATIVE_TARGETS' or 'CROSS_TARGETS'. Please add one."
|
||||
fi
|
||||
|
||||
# 'nixpkgs' prefix where we will write new tarball hashes
|
||||
case "$target" in
|
||||
*linux*) nixpkgs_prefix="pkgs/stdenv/linux" ;;
|
||||
*darwin*) nixpkgs_prefix="pkgs/stdenv/darwin" ;;
|
||||
*) die "don't know where to put '$target'" ;;
|
||||
esac
|
||||
|
||||
# We enforce s3 prefix for all targets here. This slightly differs
|
||||
# from manual uploads targets where names were chosen inconsistently.
|
||||
s3_prefix="stdenv/$target"
|
||||
|
||||
# resolve 'latest' build to the build 'id', construct the link.
|
||||
latest_build_uri="https://hydra.nixos.org/job/$jobset/$job/latest"
|
||||
latest_build="$target.latest-build"
|
||||
info "Fetching latest successful build from '${latest_build_uri}'"
|
||||
curl -s -H "Content-Type: application/json" -L "$latest_build_uri" > "$latest_build"
|
||||
[[ $? -ne 0 ]] && die "Failed to fetch latest successful build"
|
||||
latest_build_id=$(jq '.id' < "$latest_build")
|
||||
[[ $? -ne 0 ]] && die "Did not find 'id' in latest build"
|
||||
build_uri="https://hydra.nixos.org/build/${latest_build_id}"
|
||||
|
||||
# We pick the oldest jobset evaluation and extract the 'nixpkgs' commit.
|
||||
#
|
||||
# We use oldest instead of latest to make the result more stable
|
||||
# across unrelated 'nixpkgs' updates. Ideally two subsequent runs of
|
||||
# this refresher should produce the same output (provided there are
|
||||
# no bootstrapTools updates committed between the two runs).
|
||||
oldest_eval_id=$(jq '.jobsetevals|min' < "$latest_build")
|
||||
[[ $? -ne 0 ]] && die "Did not find 'jobsetevals' in latest build"
|
||||
eval_uri="https://hydra.nixos.org/eval/${oldest_eval_id}"
|
||||
eval_meta="$target.eval-meta"
|
||||
info "Fetching oldest eval details from '${eval_uri}' (can take a minute)"
|
||||
curl -s -H "Content-Type: application/json" -L "${eval_uri}" > "$eval_meta"
|
||||
[[ $? -ne 0 ]] && die "Failed to fetch eval metadata"
|
||||
nixpkgs_revision=$(jq --raw-output ".jobsetevalinputs.nixpkgs.revision" < "$eval_meta")
|
||||
[[ $? -ne 0 ]] && die "Failed to fetch revision"
|
||||
|
||||
# Extract the build paths out of the build metadata
|
||||
drvpath=$(jq --raw-output '.drvpath' < "${latest_build}")
|
||||
[[ $? -ne 0 ]] && die "Did not find 'drvpath' in latest build"
|
||||
outpath=$(jq --raw-output '.buildoutputs.out.path' < "${latest_build}")
|
||||
[[ $? -ne 0 ]] && die "Did not find 'buildoutputs' in latest build"
|
||||
build_timestamp=$(jq --raw-output '.timestamp' < "${latest_build}")
|
||||
[[ $? -ne 0 ]] && die "Did not find 'timestamp' in latest build"
|
||||
build_time=$(TZ=UTC LANG=C date --date="@${build_timestamp}" --rfc-email)
|
||||
[[ $? -ne 0 ]] && die "Failed to format timestamp"
|
||||
|
||||
info "Fetching bootstrap tools to calculate hashes from '${outpath}'"
|
||||
nix-store --realize "$outpath"
|
||||
[[ $? -ne 0 ]] && die "Failed to fetch '${outpath}' from hydra"
|
||||
|
||||
fnames=()
|
||||
|
||||
target_file="${nixpkgs_prefix}/bootstrap-files/${target}.nix"
|
||||
info "Writing '${target_file}'"
|
||||
{
|
||||
# header
|
||||
cat <<EOF
|
||||
# Autogenerated by maintainers/scripts/bootstrap-files/refresh-tarballs.bash as:
|
||||
# $ ./refresh-tarballs.bash --targets=${target}
|
||||
#
|
||||
# Metadata:
|
||||
# - nixpkgs revision: ${nixpkgs_revision}
|
||||
# - hydra build: ${latest_build_uri}
|
||||
# - resolved hydra build: ${build_uri}
|
||||
# - instantiated derivation: ${drvpath}
|
||||
# - output directory: ${outpath}
|
||||
# - build time: ${build_time}
|
||||
{
|
||||
EOF
|
||||
for p in "${outpath}/on-server"/*; do
|
||||
fname=$(basename "$p")
|
||||
fnames+=("$fname")
|
||||
case "$fname" in
|
||||
bootstrap-tools.tar.xz) attr=bootstrapTools ;;
|
||||
busybox) attr=$fname ;;
|
||||
*) die "Don't know how to map '$fname' to attribute name. Please update me."
|
||||
esac
|
||||
|
||||
executable_arg=
|
||||
executable_nix=
|
||||
if [[ -x "$p" ]]; then
|
||||
executable_arg="--executable"
|
||||
executable_nix=" executable = true;"
|
||||
fi
|
||||
sha256=$(nix-prefetch-url $executable_arg --name "$fname" "file://$p")
|
||||
[[ $? -ne 0 ]] && die "Failed to get the hash for '$p'"
|
||||
sri=$(nix-hash --to-sri "sha256:$sha256")
|
||||
[[ $? -ne 0 ]] && die "Failed to convert '$sha256' hash to an SRI form"
|
||||
|
||||
# individual file entries
|
||||
cat <<EOF
|
||||
$attr = import <nix/fetchurl.nix> {
|
||||
url = "http://tarballs.nixos.org/${s3_prefix}/${nixpkgs_revision}/$fname";
|
||||
hash = "${sri}";$(printf "\n%s" "${executable_nix}")
|
||||
};
|
||||
EOF
|
||||
done
|
||||
# footer
|
||||
cat <<EOF
|
||||
}
|
||||
EOF
|
||||
} > "${target_file}"
|
||||
|
||||
target_file_commit_msg=${target}.commit_message
|
||||
cat > "$target_file_commit_msg" <<EOF
|
||||
${nixpkgs_prefix}: update ${target} bootstrap-files
|
||||
|
||||
sha256sum of files to be uploaded:
|
||||
|
||||
$(
|
||||
echo "$ sha256sum ${outpath}/on-server/*"
|
||||
sha256sum ${outpath}/on-server/*
|
||||
)
|
||||
|
||||
Suggested commands to upload files to 'tarballs.nixos.org':
|
||||
|
||||
$ nix-store --realize ${outpath}
|
||||
$ aws s3 cp --recursive --acl public-read ${outpath}/on-server/ s3://nixpkgs-tarballs/${s3_prefix}/${nixpkgs_revision}
|
||||
$ aws s3 cp --recursive s3://nixpkgs-tarballs/${s3_prefix}/${nixpkgs_revision} ./
|
||||
$ sha256sum ${fnames[*]}
|
||||
$ sha256sum ${outpath}/on-server/*
|
||||
EOF
|
||||
|
||||
cat "$target_file_commit_msg"
|
||||
if [[ $commit == yes ]]; then
|
||||
git commit "${target_file}" -F "$target_file_commit_msg"
|
||||
else
|
||||
info "DRY RUN: git commit ${target_file} -F $target_file_commit_msg"
|
||||
fi
|
||||
rm -- "$target_file_commit_msg"
|
||||
|
||||
# delete temp files
|
||||
rm -- "$latest_build" "$eval_meta"
|
||||
done
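The option loop in the script above turns a comma-separated `--targets=` value into a bash array via parameter expansion and word splitting; a standalone sketch of that idiom:

```shell
#!/usr/bin/env bash
# Sketch of the script's "--targets=a,b,c" parsing idiom.
arg="--targets=i686-unknown-linux-gnu,riscv64-unknown-linux-gnu"
comma_targets=${arg#--targets=}   # strip the "--targets=" prefix
targets=(${comma_targets//,/ })   # replace commas with spaces; word splitting builds the array
printf '%s\n' "${targets[@]}"
```

The array expansion is deliberately unquoted so that word splitting produces one array element per target.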
|
|
@ -1,5 +1,5 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p jq -I nixpkgs=../../../..
#!nix-shell -i bash -p jq

set -o pipefail -o errexit -o nounset
|
||||
|
|
|
@ -100,9 +100,11 @@ moonscript,https://github.com/leafo/moonscript.git,dev-1,,,,arobyn
nlua,,,,,,teto
nui.nvim,,,,,,mrcjkb
nvim-cmp,https://github.com/hrsh7th/nvim-cmp,,,,,
nvim-nio,,,,,,mrcjkb
penlight,https://github.com/lunarmodules/Penlight.git,,,,,alerque
plenary.nvim,https://github.com/nvim-lua/plenary.nvim.git,,,,5.1,
rapidjson,https://github.com/xpol/lua-rapidjson.git,,,,,
rocks.nvim,,,,,5.1,teto mrcjkb
rest.nvim,,,,,5.1,teto
rustaceanvim,,,,,,mrcjkb
say,https://github.com/Olivine-Labs/say.git,,,,,

@ -119,3 +121,4 @@ toml,,,,,,mrcjkb
toml-edit,,,,,5.1,mrcjkb
vstruct,https://github.com/ToxicFrog/vstruct.git,,,,,
vusted,,,,,,figsoda
xml2lua,,,,,,teto
|
|
|
|
@ -1,6 +1,6 @@
# Contributing to this manual {#chap-contributing}

The [DocBook] and CommonMark sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
The sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
This manual uses the [Nixpkgs manual syntax](https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup).

You can quickly check your edits with the following:
|
|
|
@ -7,7 +7,7 @@ worthy contribution to the project.

## Building the Manual {#sec-writing-docs-building-the-manual}

The DocBook sources of the [](#book-nixos-manual) are in the
The sources of the [](#book-nixos-manual) are in the
[`nixos/doc/manual`](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual)
subdirectory of the Nixpkgs repository.
|
@ -29,65 +29,3 @@ nix-build nixos/release.nix -A manual.x86_64-linux
When this command successfully finishes, it will tell you where the
manual got generated. The HTML will be accessible through the `result`
symlink at `./result/share/doc/nixos/index.html`.

## Editing DocBook XML {#sec-writing-docs-editing-docbook-xml}

For general information on how to write in DocBook, see [DocBook 5: The
Definitive Guide](https://tdg.docbook.org/tdg/5.1/).

Emacs nXML Mode is very helpful for editing DocBook XML because it
validates the document as you write, and precisely locates errors. To
use it, see [](#sec-emacs-docbook-xml).

[Pandoc](https://pandoc.org/) can generate DocBook XML from a multitude of
formats, which makes a good starting point. Here is an example of a Pandoc
invocation to convert GitHub-Flavoured MarkDown to DocBook 5 XML:

```ShellSession
pandoc -f markdown_github -t docbook5 docs.md -o my-section.xml
```

Pandoc can also quickly convert a single `section.xml` to HTML, which is
helpful when drafting.

Sometimes writing valid DocBook is too difficult. In this case,
submit your documentation updates in a [GitHub
Issue](https://github.com/NixOS/nixpkgs/issues/new) and someone will
handle the conversion to XML for you.

## Creating a Topic {#sec-writing-docs-creating-a-topic}

You can use an existing topic as a basis for the new topic or create a
topic from scratch.

Keep the following guidelines in mind when you create and add a topic:

- The NixOS [`book`](https://tdg.docbook.org/tdg/5.0/book.html)
  element is in `nixos/doc/manual/manual.xml`. It includes several
  [`parts`](https://tdg.docbook.org/tdg/5.0/book.html) which are in
  subdirectories.

- Store the topic file in the same directory as the `part` to which it
  belongs. If your topic is about configuring a NixOS module, then the
  XML file can be stored alongside the module definition `nix` file.

- If you include multiple words in the file name, separate the words
  with a dash. For example: `ipv6-config.xml`.

- Make sure that the `xml:id` value is unique. You can use abbreviations
  if the ID is too long. For example: `nixos-config`.

- Determine whether your topic is a chapter or a section. If you are
  unsure, open an existing topic file and check whether the main
  element is chapter or section.

## Adding a Topic to the Book {#sec-writing-docs-adding-a-topic}

Open the parent CommonMark file and add a line to the list of
chapters with the file name of the topic that you created. If you
created a `section`, you add the file to the `chapter` file. If you created
a `chapter`, you add the file to the `part` file.

If the topic is about configuring a NixOS module, it can be
automatically included in the manual by using the `meta.doc` attribute.
See [](#sec-meta-attributes) for an explanation.
|
|
|
@@ -38,18 +38,26 @@ In addition to numerous new and upgraded packages, this release has the followin

<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->

- [Handheld Daemon](https://github.com/hhd-dev/hhd), support for gaming handhelds like the Legion Go, ROG Ally, and GPD Win. Available as [services.handheld-daemon](#opt-services.handheld-daemon.enable).

- [Guix](https://guix.gnu.org), a functional package manager inspired by Nix. Available as [services.guix](#opt-services.guix.enable).

- [pyLoad](https://pyload.net/), a FOSS download manager written in Python. Available as [services.pyload](#opt-services.pyload.enable).

- [maubot](https://github.com/maubot/maubot), a plugin-based Matrix bot framework. Available as [services.maubot](#opt-services.maubot.enable).

- systemd's gateway, upload, and remote services, which provide ways of sending journals across the network. Enable using [services.journald.gateway](#opt-services.journald.gateway.enable), [services.journald.upload](#opt-services.journald.upload.enable), and [services.journald.remote](#opt-services.journald.remote.enable).

- [GNS3](https://www.gns3.com/), a network software emulator. Available as [services.gns3-server](#opt-services.gns3-server.enable).

- [pretalx](https://github.com/pretalx/pretalx), a conference planning tool. Available as [services.pretalx](#opt-services.pretalx.enable).

- [rspamd-trainer](https://gitlab.com/onlime/rspamd-trainer), a script, triggered by a helper, which reads mails from a specific mail inbox and feeds them into rspamd for spam/ham training.

- [ollama](https://ollama.ai), a server for running large language models locally.

- [hebbot](https://github.com/haecker-felix/hebbot), a Matrix bot to generate "This Week in X"-style blog posts. Available as [services.hebbot](#opt-services.hebbot.enable).

- [Anki Sync Server](https://docs.ankiweb.net/sync-server.html), the official sync server built into recent versions of Anki. Available as [services.anki-sync-server](#opt-services.anki-sync-server.enable).
  The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been marked deprecated and will be dropped after 24.05 due to lack of maintenance of the anki-sync-server software.
@@ -63,8 +71,12 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m

- [TuxClocker](https://github.com/Lurkki14/tuxclocker), a hardware control and monitoring program. Available as [programs.tuxclocker](#opt-programs.tuxclocker.enable).

- [ALVR](https://github.com/alvr-org/alvr), a VR desktop streamer. Available as [programs.alvr](#opt-programs.alvr.enable).

- [RustDesk](https://rustdesk.com), a full-featured open-source remote control application and alternative to TeamViewer, designed for self-hosting and security with minimal configuration.

- [systemd-lock-handler](https://git.sr.ht/~whynothugo/systemd-lock-handler/), a bridge between logind D-Bus events and systemd targets. Available as [services.systemd-lock-handler.enable](#opt-services.systemd-lock-handler.enable).
## Backward Incompatibilities {#sec-release-24.05-incompatibilities}

<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
@@ -81,8 +93,19 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m

- `idris2` was updated to v0.7.0. This version introduces breaking changes. Check out the [changelog](https://github.com/idris-lang/Idris2/blob/v0.7.0/CHANGELOG.md#v070) for details.

- `neo4j` has been updated to 5; you may want to read the [release notes for Neo4j 5](https://neo4j.com/release-notes/database/neo4j-5/).

- `services.neo4j.allowUpgrade` was removed and no longer has any effect. Neo4j 5 supports automatic rolling upgrades.

- `nitter` requires a `guest_accounts.jsonl` to be provided as a path or loaded into the default location at `/var/lib/nitter/guest_accounts.jsonl`. See [Guest Account Branch Deployment](https://github.com/zedeus/nitter/wiki/Guest-Account-Branch-Deployment) for details.

- `services.aria2.rpcSecret` has been replaced with `services.aria2.rpcSecretFile`.
  This was done so that secrets aren't stored in the world-readable Nix store.
  To migrate, you will have to create a file with the exact same string, and change
  your module options to point to that file. For example, `services.aria2.rpcSecret =
  "mysecret"` becomes `services.aria2.rpcSecretFile = "/path/to/secret_file"`
  where the file `secret_file` contains the string `mysecret`.
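The aria2 migration above can be sketched as a minimal NixOS configuration fragment (the secret file path is illustrative, not prescribed by the module):

```nix
{
  # Before (removed in 24.05):
  #   services.aria2.rpcSecret = "mysecret";

  # After: point the module at a file outside the Nix store that
  # contains the exact same string, e.g. created once with:
  #   echo -n "mysecret" > /var/lib/secrets/aria2-rpc
  services.aria2.rpcSecretFile = "/var/lib/secrets/aria2-rpc";
}
```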
- Invidious has changed its default database username from `kemal` to `invidious`. Setups involving an externally provisioned database (i.e. `services.invidious.database.createLocally == false`) should adjust their configuration accordingly. The old `kemal` user will not be removed automatically even when the database is provisioned automatically. ([PR #265857](https://github.com/NixOS/nixpkgs/pull/265857))

- `inetutils` now has a lower priority to avoid shadowing the commonly used `util-linux`. If one wishes to restore the default priority, simply use `lib.setPrio 5 inetutils` or override with `meta.priority = 5`.
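As a sketch, restoring the old priority via an overlay might look like this (a minimal illustration of the `lib.setPrio` call named above):

```nix
{
  nixpkgs.overlays = [
    (final: prev: {
      # Give inetutils back its previous priority so its tools win
      # collisions with util-linux again.
      inetutils = final.lib.setPrio 5 prev.inetutils;
    })
  ];
}
```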
@@ -101,6 +124,8 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
  release notes of [v19](https://github.com/systemd/mkosi/releases/tag/v19) and
  [v20](https://github.com/systemd/mkosi/releases/tag/v20) for a list of changes.

- The `woodpecker-*` packages have been updated to v2, which includes [breaking changes](https://woodpecker-ci.org/docs/next/migrations#200).

- `services.nginx` will no longer advertise HTTP/3 availability automatically. This must now be manually added, preferably to each location block.
  Example:

@@ -113,6 +138,9 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
  '';

  ```
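The example body is elided by the diff context above. A common way to advertise HTTP/3 manually is an `Alt-Svc` response header; this fragment is illustrative only and is not the elided example:

```nix
{
  services.nginx.virtualHosts."example.org".locations."/".extraConfig = ''
    # Advertise HTTP/3 on UDP 443 for one day.
    add_header Alt-Svc 'h3=":443"; ma=86400';
  '';
}
```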
- The package `optparse-bash` is now dropped due to upstream inactivity. Alternatives available in Nixpkgs include [`argc`](https://github.com/sigoden/argc), [`argbash`](https://github.com/matejak/argbash), [`bashly`](https://github.com/DannyBen/bashly) and [`gum`](https://github.com/charmbracelet/gum), to name a few.

- The `kanata` package has been updated to v1.5.0, which includes [breaking changes](https://github.com/jtroo/kanata/releases/tag/v1.5.0).

- The `craftos-pc` package has been updated to v2.8, which includes [breaking changes](https://github.com/MCJack123/craftos2/releases/tag/v2.8).
@@ -138,12 +166,10 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- `services.avahi.nssmdns` got split into `services.avahi.nssmdns4` and `services.avahi.nssmdns6`, which enable the mDNS NSS switch for IPv4 and IPv6 respectively.
  Since most mDNS responders only register IPv4 addresses, most users want to keep the IPv6 support disabled to avoid long timeouts.

- `multi-user.target` no longer depends on `network-online.target`.
  This will potentially break services that assumed this was the case in the past.
  This was changed for consistency with other distributions as well as improved boot times.

  We have added a warning for services that are
  `after = [ "network-online.target" ]` but do not depend on it (e.g. using `wants`).
- A warning has been added for services that are
  `after = [ "network-online.target" ]` but do not depend on it (e.g. using
  `wants`), because the dependency that `multi-user.target` has on
  `network-online.target` is planned for removal.
- `services.archisteamfarm` no longer uses the abbreviation `asf` for its state directory (`/var/lib/asf`), user and group (both `asf`). Instead the long name `archisteamfarm` is used.
  Configurations with `system.stateVersion` 23.11 or earlier default to the old state directory until the 24.11 release and must either set the option explicitly or move the data to the new directory.

@@ -196,6 +222,19 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- The `-data` path is no longer required to run the package, and will be set to point to a folder in `$TMP` if missing.

- `nomad` has been updated. Note that HashiCorp recommends updating one minor version at a time. Please check [their upgrade guide](https://developer.hashicorp.com/nomad/docs/upgrade) for information on safely updating clusters and potential breaking changes.

  - `nomad` is now Nomad 1.7.x.

  - `nomad_1_4` has been removed, as it is now unsupported upstream.

- The `livebook` package is now built as a `mix release` instead of an `escript`.
  This means that configuration now has to be done using [environment variables](https://hexdocs.pm/livebook/readme.html#environment-variables) instead of command line arguments.
  This has the further implication that the `livebook` service configuration has changed:

  - The `erlang_node_short_name`, `erlang_node_name`, `port` and `options` configuration parameters are gone, and have been replaced with an `environment` parameter.
    Use the appropriate [environment variables](https://hexdocs.pm/livebook/readme.html#environment-variables) inside `environment` to configure the service instead.
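A minimal sketch of the new Livebook service shape (variable names follow Livebook's environment-variable documentation; the values are illustrative):

```nix
{
  services.livebook.environment = {
    LIVEBOOK_PORT = "20123";
    # Illustrative only; prefer keeping secrets out of the Nix store.
    LIVEBOOK_PASSWORD = "changeme-changeme";
  };
}
```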
## Other Notable Changes {#sec-release-24.05-notable-changes}

<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->

@@ -220,26 +259,33 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- [Lilypond](https://lilypond.org/index.html) and [Denemo](https://www.denemo.org) are now compiled with Guile 3.0.

- The following options of the Nextcloud module were moved into [`services.nextcloud.extraOptions`](#opt-services.nextcloud.extraOptions) and renamed to match the name from Nextcloud's `config.php`:
  - `logLevel` -> [`loglevel`](#opt-services.nextcloud.extraOptions.loglevel),
  - `logType` -> [`log_type`](#opt-services.nextcloud.extraOptions.log_type),
  - `defaultPhoneRegion` -> [`default_phone_region`](#opt-services.nextcloud.extraOptions.default_phone_region),
  - `overwriteProtocol` -> [`overwriteprotocol`](#opt-services.nextcloud.extraOptions.overwriteprotocol),
  - `skeletonDirectory` -> [`skeletondirectory`](#opt-services.nextcloud.extraOptions.skeletondirectory),
  - `globalProfiles` -> [`profile.enabled`](#opt-services.nextcloud.extraOptions._profile.enabled_),
  - `extraTrustedDomains` -> [`trusted_domains`](#opt-services.nextcloud.extraOptions.trusted_domains) and
  - `trustedProxies` -> [`trusted_proxies`](#opt-services.nextcloud.extraOptions.trusted_proxies).
- The following options of the Nextcloud module were moved into [`services.nextcloud.settings`](#opt-services.nextcloud.settings) and renamed to match the name from Nextcloud's `config.php`:
  - `logLevel` -> [`loglevel`](#opt-services.nextcloud.settings.loglevel),
  - `logType` -> [`log_type`](#opt-services.nextcloud.settings.log_type),
  - `defaultPhoneRegion` -> [`default_phone_region`](#opt-services.nextcloud.settings.default_phone_region),
  - `overwriteProtocol` -> [`overwriteprotocol`](#opt-services.nextcloud.settings.overwriteprotocol),
  - `skeletonDirectory` -> [`skeletondirectory`](#opt-services.nextcloud.settings.skeletondirectory),
  - `globalProfiles` -> [`profile.enabled`](#opt-services.nextcloud.settings._profile.enabled_),
  - `extraTrustedDomains` -> [`trusted_domains`](#opt-services.nextcloud.settings.trusted_domains) and
  - `trustedProxies` -> [`trusted_proxies`](#opt-services.nextcloud.settings.trusted_proxies).
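A minimal sketch of the renamed options under `services.nextcloud.settings` (values are illustrative):

```nix
{
  services.nextcloud.settings = {
    loglevel = 2;                       # was services.nextcloud.logLevel
    default_phone_region = "DE";        # was defaultPhoneRegion
    trusted_domains = [ "cloud.example.org" ];  # was extraTrustedDomains
  };
}
```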
- The option `services.nextcloud.config.dbport` of the Nextcloud module was removed to match upstream.
  The port can be specified in [`services.nextcloud.config.dbhost`](#opt-services.nextcloud.config.dbhost).

- `stdenv`: The `--replace` flag in `substitute`, `substituteInPlace`, `substituteAll`, `substituteAllStream`, and `substituteStream` is now deprecated in favor of the new `--replace-fail`, `--replace-warn` and `--replace-quiet`. The deprecated `--replace` equates to `--replace-warn`.

- New options were added to the dnsdist module to enable and configure a DNSCrypt endpoint (see `services.dnsdist.dnscrypt.enable`, etc.).
  The module can generate the DNSCrypt provider key pair and certificates, and also performs their rotation automatically with no downtime.

- With a bump to `sonarr` v4, existing config database files will be upgraded automatically, but note that some old apparently-working configs [might actually be corrupt and fail to upgrade cleanly](https://forums.sonarr.tv/t/sonarr-v4-released/33089).
- The Yama LSM is now enabled by default in the kernel, which prevents ptracing non-child processes. This means you will not be able to attach gdb to an existing process, but will need to start that process from gdb (so it is a child). Alternatively, you can set `boot.kernel.sysctl."kernel.yama.ptrace_scope"` to 0.
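Relaxing the sysctl named above looks like this (a sketch; value 0 restores the classic ptrace semantics):

```nix
{
  # 0 = any process may ptrace other processes running under the same UID,
  # matching the pre-24.05 default behaviour.
  boot.kernel.sysctl."kernel.yama.ptrace_scope" = 0;
}
```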
- The netbird module now allows running multiple tunnels in parallel through [`services.netbird.tunnels`](#opt-services.netbird.tunnels).

- [Nginx virtual hosts](#opt-services.nginx.virtualHosts) using `forceSSL` or `globalRedirect` can now have redirect codes other than 301 through `redirectCode`.
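A minimal sketch of the new `redirectCode` option (the host name is illustrative):

```nix
{
  services.nginx.virtualHosts."example.org" = {
    forceSSL = true;
    # 308 is a permanent redirect that preserves the request method,
    # instead of the previous hard-coded 301.
    redirectCode = 308;
  };
}
```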
@@ -263,6 +309,8 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
  - Custom themes and other assets that were previously stored in `custom/public/*` now belong in `custom/public/assets/*`
  - New instances of Gitea using MySQL now ignore the `[database].CHARSET` config option and always use the `utf8mb4` charset; existing instances should migrate via the `gitea doctor convert` CLI command.

- The `services.paperless` module no longer uses the previously downloaded NLTK data stored in `/var/cache/paperless/nltk`. This directory can be removed.

- The `hardware.pulseaudio` module now sets the permissions of the pulse user's home directory to 755 when running in "systemWide" mode. This fixes [issue 114399](https://github.com/NixOS/nixpkgs/issues/114399).

- The `btrbk` module now automatically selects and provides required compression
@@ -272,5 +320,8 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m

- The `mpich` package expression now requires `withPm` to be a list, e.g. `"hydra:gforker"` becomes `[ "hydra" "gforker" ]`.

- YouTrack is bumped to 2023.3. The update is not performed automatically; it requires manual interaction. See the YouTrack section in the manual for details.

- QtMultimedia has changed its default backend to `QT_MEDIA_BACKEND=ffmpeg` (previously `gstreamer` on Linux or `darwin` on macOS).
  The previous native backends remain available but are now minimally maintained. Refer to [upstream documentation](https://doc.qt.io/qt-6/qtmultimedia-index.html#ffmpeg-as-the-default-backend) for further details about each platform.
@@ -768,6 +768,32 @@ class Machine:
        self.booted = False
        self.connected = False

    def wait_for_qmp_event(
        self, event_filter: Callable[[dict[str, Any]], bool], timeout: int = 60 * 10
    ) -> dict[str, Any]:
        """
        Wait for a QMP event which you can filter with the `event_filter` function.
        The function takes as an input a dictionary of the event; if it returns True, we return that event,
        and if it does not, we wait for the next event and retry.

        It will skip all events received in the meantime; if you want to keep them,
        you have to do the bookkeeping yourself and store them somewhere.

        By default, it will wait up to 10 minutes; `timeout` is in seconds.
        """
        if self.qmp_client is None:
            raise RuntimeError("QMP API is not ready yet, is the VM ready?")

        start = time.time()
        while True:
            evt = self.qmp_client.wait_for_event(timeout=timeout)
            if event_filter(evt):
                return evt

            elapsed = time.time() - start
            if elapsed >= timeout:
                raise TimeoutError

    def get_tty_text(self, tty: str) -> str:
        status, output = self.execute(
            f"fold -w$(stty -F /dev/tty{tty} size | "
third_party/nixpkgs/nixos/lib/utils.nix (vendored, 1 change)
@@ -109,6 +109,7 @@ rec {
    recurse = prefix: item:
      if item ? ${attr} then
        nameValuePair prefix item.${attr}
      else if isDerivation item then []
      else if isAttrs item then
        map (name:
          let
@@ -30,6 +30,7 @@ with lib;
    beam = super.beam_nox;
    cairo = super.cairo.override { x11Support = false; };
    dbus = super.dbus.override { x11Support = false; };
    fastfetch = super.fastfetch.override { vulkanSupport = false; waylandSupport = false; x11Support = false; };
    ffmpeg_4 = super.ffmpeg_4.override { ffmpegVariant = "headless"; };
    ffmpeg_5 = super.ffmpeg_5.override { ffmpegVariant = "headless"; };
    # dep of graphviz, libXpm is optional for Xpm support

@@ -37,6 +38,7 @@ with lib;
    ghostscript = super.ghostscript.override { cupsSupport = false; x11Support = false; };
    gjs = super.gjs.overrideAttrs { doCheck = false; installTests = false; }; # avoid test dependency on gtk3
    gobject-introspection = super.gobject-introspection.override { x11Support = false; };
    gpg-tui = super.gpg-tui.override { x11Support = false; };
    gpsd = super.gpsd.override { guiSupport = false; };
    graphviz = super.graphviz-nox;
    gst_all_1 = super.gst_all_1 // {
@@ -32,6 +32,7 @@
, split
, seed
, definitionsDirectory
, sectorSize
}:

let

@@ -94,6 +95,7 @@ runCommand imageFileBasename
      --definitions="$amendedRepartDefinitions" \
      --split="${lib.boolToString split}" \
      --json=pretty \
      ${lib.optionalString (sectorSize != null) "--sector-size=${toString sectorSize}"} \
      ${imageFileBasename}.raw \
      | tee repart-output.json
@@ -135,6 +135,16 @@ in
      '';
    };

    sectorSize = lib.mkOption {
      type = with lib.types; nullOr int;
      default = 512;
      example = lib.literalExpression "4096";
      description = lib.mdDoc ''
        The sector size of the disk image produced by systemd-repart. This
        value must be a power of 2 between 512 and 4096.
      '';
    };

    package = lib.mkPackageOption pkgs "systemd-repart" {
      # We use buildPackages so that repart images are built with the build
      # platform's systemd, allowing for cross-compiled systems to work.

@@ -232,7 +242,7 @@ in
    in
    pkgs.callPackage ./repart-image.nix {
      systemd = cfg.package;
      inherit (cfg) imageFileBasename compression split seed;
      inherit (cfg) imageFileBasename compression split seed sectorSize;
      inherit fileSystems definitionsDirectory partitions;
    };
@@ -139,6 +139,7 @@
  ./programs/_1password-gui.nix
  ./programs/_1password.nix
  ./programs/adb.nix
  ./programs/alvr.nix
  ./programs/appgate-sdp.nix
  ./programs/atop.nix
  ./programs/ausweisapp.nix

@@ -214,9 +215,11 @@
  ./programs/minipro.nix
  ./programs/miriway.nix
  ./programs/mosh.nix
  ./programs/mouse-actions.nix
  ./programs/msmtp.nix
  ./programs/mtr.nix
  ./programs/nano.nix
  ./programs/nautilus-open-any-terminal.nix
  ./programs/nbd.nix
  ./programs/neovim.nix
  ./programs/nethoscope.nix

@@ -427,6 +430,7 @@
  ./services/databases/couchdb.nix
  ./services/databases/dgraph.nix
  ./services/databases/dragonflydb.nix
  ./services/databases/etcd.nix
  ./services/databases/ferretdb.nix
  ./services/databases/firebird.nix
  ./services/databases/foundationdb.nix

@@ -533,6 +537,7 @@
  ./services/hardware/fancontrol.nix
  ./services/hardware/freefall.nix
  ./services/hardware/fwupd.nix
  ./services/hardware/handheld-daemon.nix
  ./services/hardware/hddfancontrol.nix
  ./services/hardware/illum.nix
  ./services/hardware/interception-tools.nix

@@ -634,6 +639,7 @@
  ./services/matrix/appservice-irc.nix
  ./services/matrix/conduit.nix
  ./services/matrix/dendrite.nix
  ./services/matrix/hebbot.nix
  ./services/matrix/maubot.nix
  ./services/matrix/mautrix-facebook.nix
  ./services/matrix/mautrix-telegram.nix

@@ -675,7 +681,6 @@
  ./services/misc/dwm-status.nix
  ./services/misc/dysnomia.nix
  ./services/misc/errbot.nix
  ./services/misc/etcd.nix
  ./services/misc/etebase-server.nix
  ./services/misc/etesync-dav.nix
  ./services/misc/evdevremapkeys.nix

@@ -1059,6 +1064,7 @@
  ./services/networking/openvpn.nix
  ./services/networking/ostinato.nix
  ./services/networking/owamp.nix
  ./services/networking/pyload.nix
  ./services/networking/pdns-recursor.nix
  ./services/networking/pdnsd.nix
  ./services/networking/peroxide.nix

@@ -1197,6 +1203,7 @@
  ./services/security/hologram-agent.nix
  ./services/security/hologram-server.nix
  ./services/security/infnoise.nix
  ./services/security/intune.nix
  ./services/security/jitterentropy-rngd.nix
  ./services/security/kanidm.nix
  ./services/security/munge.nix

@@ -1234,6 +1241,7 @@
  ./services/system/saslauthd.nix
  ./services/system/self-deploy.nix
  ./services/system/systembus-notify.nix
  ./services/system/systemd-lock-handler.nix
  ./services/system/uptimed.nix
  ./services/system/zram-generator.nix
  ./services/torrent/deluge.nix

@@ -1338,6 +1346,7 @@
  ./services/web-apps/plantuml-server.nix
  ./services/web-apps/plausible.nix
  ./services/web-apps/powerdns-admin.nix
  ./services/web-apps/pretalx.nix
  ./services/web-apps/prosody-filer.nix
  ./services/web-apps/restya-board.nix
  ./services/web-apps/rimgo.nix
@@ -39,14 +39,17 @@ with lib;
  security.apparmor.killUnconfinedConfinables = mkDefault true;

  boot.kernelParams = [
    # Slab/slub sanity checks, redzoning, and poisoning
    "slub_debug=FZP"
    # Don't merge slabs
    "slab_nomerge"

    # Overwrite free'd memory
    # Overwrite free'd pages
    "page_poison=1"

    # Enable page allocator randomization
    "page_alloc.shuffle=1"

    # Disable debugfs
    "debugfs=off"
  ];

  boot.blacklistedKernelModules = [
@@ -39,6 +39,9 @@ with lib;
  # Allow the user to log in as root without a password.
  users.users.root.initialHashedPassword = "";

  # Don't require sudo/root to `reboot` or `poweroff`.
  security.polkit.enable = true;

  # Allow passwordless sudo from nixos user
  security.sudo = {
    enable = mkDefault true;
third_party/nixpkgs/nixos/modules/programs/alvr.nix (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
{ config, pkgs, lib, ... }:

with lib;

let
  cfg = config.programs.alvr;
in
{
  options = {
    programs.alvr = {
      enable = mkEnableOption (lib.mdDoc "ALVR, the VR desktop streamer");

      package = mkPackageOption pkgs "alvr" { };

      openFirewall = mkOption {
        type = types.bool;
        default = false;
        description = lib.mdDoc ''
          Whether to open the default ports in the firewall for the ALVR server.
        '';
      };
    };
  };

  config = mkIf cfg.enable {
    environment.systemPackages = [ cfg.package ];

    networking.firewall = mkIf cfg.openFirewall {
      allowedTCPPorts = [ 9943 9944 ];
      allowedUDPPorts = [ 9943 9944 ];
    };
  };

  meta.maintainers = with maintainers; [ passivelemon ];
}
@@ -90,6 +90,8 @@ in
        ];
      };
    };

    users.groups.gamemode = { };
  };

  meta = {
@@ -9,6 +9,7 @@ in
{
  options = {
    programs.light = {

      enable = mkOption {
        default = false;
        type = types.bool;

@@ -17,11 +18,60 @@ in
          and udev rules granting access to members of the "video" group.
        '';
      };

      brightnessKeys = {
        enable = mkOption {
          type = types.bool;
          default = false;
          description = ''
            Whether to enable brightness control with keyboard keys.

            This is mainly useful for minimalistic (desktop) environments. You
            may want to leave this disabled if you run a feature-rich desktop
            environment such as KDE, GNOME or Xfce as those handle the
            brightness keys themselves. However, enabling brightness control
            with this setting makes the control independent of X, so the keys
            work in non-graphical ttys, so you might want to consider using this
            instead of the default offered by the desktop environment.

            Enabling this will turn on {option}`services.actkbd`.
          '';
        };

        step = mkOption {
          type = types.int;
          default = 10;
          description = ''
            The percentage value by which to increase/decrease brightness.
          '';
        };

      };

    };
  };

  config = mkIf cfg.enable {
    environment.systemPackages = [ pkgs.light ];
    services.udev.packages = [ pkgs.light ];
    services.actkbd = mkIf cfg.brightnessKeys.enable {
      enable = true;
      bindings = let
        light = "${pkgs.light}/bin/light";
        step = toString cfg.brightnessKeys.step;
      in [
        {
          keys = [ 224 ];
          events = [ "key" ];
          # Use minimum brightness 0.1 so the display won't go totally black.
          command = "${light} -N 0.1 && ${light} -U ${step}";
        }
        {
          keys = [ 225 ];
          events = [ "key" ];
          command = "${light} -A ${step}";
        }
      ];
    };
  };
}
third_party/nixpkgs/nixos/modules/programs/mouse-actions.nix (vendored, new file, 15 lines)
@@ -0,0 +1,15 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.programs.mouse-actions;
in
{
  options.programs.mouse-actions = {
    enable = lib.mkEnableOption ''
      mouse-actions udev rules. This is a prerequisite for using mouse-actions without being root.
    '';
  };
  config = lib.mkIf cfg.enable {
    services.udev.packages = [ pkgs.mouse-actions ];
  };
}
third_party/nixpkgs/nixos/modules/programs/nautilus-open-any-terminal.nix (vendored, new file, 36 lines)
@@ -0,0 +1,36 @@
{ config, lib, pkgs, ... }:

let
  cfg = config.programs.nautilus-open-any-terminal;
in
{
  options.programs.nautilus-open-any-terminal = {
    enable = lib.mkEnableOption (lib.mdDoc "nautilus-open-any-terminal");

    terminal = lib.mkOption {
      type = with lib.types; nullOr str;
      default = null;
      description = lib.mdDoc ''
        The terminal emulator to add to context-entry of nautilus. Supported terminal
        emulators are listed in https://github.com/Stunkymonkey/nautilus-open-any-terminal#supported-terminal-emulators.
      '';
    };
  };

  config = lib.mkIf cfg.enable {
    environment.systemPackages = with pkgs; [
      gnome.nautilus-python
      nautilus-open-any-terminal
    ];
    programs.dconf = lib.optionalAttrs (cfg.terminal != null) {
      enable = true;
      profiles.user.databases = [{
        settings."com/github/stunkymonkey/nautilus-open-any-terminal".terminal = cfg.terminal;
        lockAll = true;
      }];
    };
  };
  meta = {
    maintainers = with lib.maintainers; [ stunkymonkey linsui ];
  };
}
@@ -78,11 +78,15 @@
      else settingsFormat.generate "regreet.toml" cfg.settings;
  };

  systemd.tmpfiles.rules = let
  systemd.tmpfiles.settings."10-regreet" = let
    defaultConfig = {
      user = "greeter";
      group = config.users.users.${config.services.greetd.settings.default_session.user}.group;
  in [
    "d /var/log/regreet 0755 greeter ${group} - -"
    "d /var/cache/regreet 0755 greeter ${group} - -"
  ];
      mode = "0755";
    };
  in {
    "/var/log/regreet".d = defaultConfig;
    "/var/cache/regreet".d = defaultConfig;
  };
};
}
@@ -22,7 +22,7 @@ let
  serverOptions = { name, config, ... }: {
    freeformType = attrsOf (either scalarType (listOf scalarType));
    # Client system-options file directives are explained here:
    # https://www.ibm.com/docs/en/storage-protect/8.1.20?topic=commands-processing-options
    # https://www.ibm.com/docs/en/storage-protect/8.1.21?topic=commands-processing-options
    options.servername = mkOption {
      type = servernameType;
      default = name;
@@ -545,12 +545,14 @@ let
    };

    server = mkOption {
      type = types.nullOr types.str;
      inherit (defaultAndText "server" null) default defaultText;
      type = types.str;
      inherit (defaultAndText "server" "https://acme-v02.api.letsencrypt.org/directory") default defaultText;
      example = "https://acme-staging-v02.api.letsencrypt.org/directory";
      description = lib.mdDoc ''
        ACME Directory Resource URI. Defaults to Let's Encrypt's
        production endpoint,
        <https://acme-v02.api.letsencrypt.org/directory>, if unset.
        ACME Directory Resource URI.
        Defaults to Let's Encrypt's production endpoint.
        For testing Let's Encrypt's [staging endpoint](https://letsencrypt.org/docs/staging-environment/)
        should be used to avoid the rather tight rate limit on the production endpoint.
      '';
    };
@@ -700,6 +700,7 @@ let
        || cfg.pamMount
        || cfg.enableKwallet
        || cfg.enableGnomeKeyring
        || config.services.intune.enable
        || cfg.googleAuthenticator.enable
        || cfg.gnupg.enable
        || cfg.failDelay.enable

@@ -726,6 +727,7 @@ let
          kwalletd = "${pkgs.plasma5Packages.kwallet.bin}/bin/kwalletd5";
        }; }
        { name = "gnome_keyring"; enable = cfg.enableGnomeKeyring; control = "optional"; modulePath = "${pkgs.gnome.gnome-keyring}/lib/security/pam_gnome_keyring.so"; }
        { name = "intune"; enable = config.services.intune.enable; control = "optional"; modulePath = "${pkgs.intune-portal}/lib/security/pam_intune.so"; }
        { name = "gnupg"; enable = cfg.gnupg.enable; control = "optional"; modulePath = "${pkgs.pam_gnupg}/lib/security/pam_gnupg.so"; settings = {
          store-only = cfg.gnupg.storeOnly;
        }; }

@@ -867,9 +869,7 @@ let
        { name = "gnupg"; enable = cfg.gnupg.enable; control = "optional"; modulePath = "${pkgs.pam_gnupg}/lib/security/pam_gnupg.so"; settings = {
          no-autostart = cfg.gnupg.noAutostart;
        }; }
        { name = "cgfs"; enable = config.virtualisation.lxc.lxcfs.enable; control = "optional"; modulePath = "${pkgs.lxc}/lib/security/pam_cgfs.so"; args = [
          "-c" "all"
        ]; }
        { name = "intune"; enable = config.services.intune.enable; control = "optional"; modulePath = "${pkgs.intune-portal}/lib/security/pam_intune.so"; }
      ];
    };
  };
@@ -172,6 +172,13 @@ static int make_caps_ambient(const char *self_path) {
int main(int argc, char **argv) {
ASSERT(argc >= 1);

// argv[0] goes into a lot of places, to a far greater degree than other elements
// of argv. glibc has had buffer overflows relating to argv[0], eg CVE-2023-6246.
// Since we expect the wrappers to be invoked from either $PATH or /run/wrappers/bin,
// there should be no reason to pass any particularly large values here, so we can
// be strict for strictness' sake.
ASSERT(strlen(argv[0]) < 512);

int debug = getenv(wrapper_debug) != NULL;

// Drop insecure environment variables explicitly
@@ -14,6 +14,15 @@ let

in
{

imports = [
(mkRemovedOptionModule [ "services" "rabbitmq" "cookie" ] ''
This option wrote the Erlang cookie to the store, while it should be kept secret.
Please remove it from your NixOS configuration and deploy a cookie securely instead.
The renamed `unsafeCookie` must ONLY be used in isolated non-production environments such as NixOS VM tests.
'')
];

###### interface
options = {
services.rabbitmq = {

@@ -62,13 +71,18 @@ in
'';
};

cookie = mkOption {
unsafeCookie = mkOption {
default = "";
type = types.str;
description = lib.mdDoc ''
Erlang cookie is a string of arbitrary length which must
be the same for several nodes to be allowed to communicate.
Leave empty to generate automatically.

Setting the cookie via this option exposes the cookie to the store, which
is not recommended for security reasons.
Only use this option in an isolated non-production environment such as
NixOS VM tests.
'';
};

@@ -209,9 +223,8 @@ in
};

preStart = ''
${optionalString (cfg.cookie != "") ''
echo -n ${cfg.cookie} > ${cfg.dataDir}/.erlang.cookie
chmod 600 ${cfg.dataDir}/.erlang.cookie
${optionalString (cfg.unsafeCookie != "") ''
install -m 600 <(echo -n ${cfg.unsafeCookie}) ${cfg.dataDir}/.erlang.cookie
''}
'';
};
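A minimal sketch of the renamed option in the only context where it is acceptable, an isolated NixOS VM test (option path as introduced by this diff; the cookie value is illustrative):

```nix
{
  services.rabbitmq = {
    enable = true;
    # The cookie value ends up world-readable in the Nix store;
    # use unsafeCookie ONLY in throwaway test environments.
    unsafeCookie = "test-only-cookie";
  };
}
```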
@@ -70,9 +70,10 @@ in {

config = mkIf cfg.enable {

systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' - mopidy mopidy - -"
];
systemd.tmpfiles.settings."10-mopidy".${cfg.dataDir}.d = {
user = "mopidy";
group = "mopidy";
};

systemd.services.mopidy = {
wantedBy = [ "multi-user.target" ];
@@ -53,6 +53,7 @@ in {
RuntimeDirectory = "navidrome";
RootDirectory = "/run/navidrome";
ReadWritePaths = "";
BindPaths = lib.optional (cfg.settings ? DataFolder) cfg.settings.DataFolder;
BindReadOnlyPaths = [
# navidrome uses online services to download additional album metadata / covers
"${config.environment.etc."ssl/certs/ca-certificates.crt".source}:/etc/ssl/certs/ca-certificates.crt"
@@ -90,7 +90,7 @@ in
environment.HOME = "/var/lib/tsm-backup";
serviceConfig = {
# for exit status description see
# https://www.ibm.com/docs/en/storage-protect/8.1.20?topic=clients-client-return-codes
# https://www.ibm.com/docs/en/storage-protect/8.1.21?topic=clients-client-return-codes
SuccessExitStatus = "4 8";
# The `-se` option must come after the command.
# The `-optfile` option suppresses a `dsm.opt`-not-found warning.
@@ -174,9 +174,8 @@ in
'')
(optionalString cfg.genCfsslAPIToken ''
if [ ! -f "${cfsslAPITokenPath}" ]; then
head -c ${toString (cfsslAPITokenLength / 2)} /dev/urandom | od -An -t x | tr -d ' ' >"${cfsslAPITokenPath}"
install -u cfssl -m 400 <(head -c ${toString (cfsslAPITokenLength / 2)} /dev/urandom | od -An -t x | tr -d ' ') "${cfsslAPITokenPath}"
fi
chown cfssl "${cfsslAPITokenPath}" && chmod 400 "${cfsslAPITokenPath}"
'')]);

systemd.services.kube-certmgr-bootstrap = {

@@ -194,7 +193,7 @@ in
if [ -f "${cfsslAPITokenPath}" ]; then
ln -fs "${cfsslAPITokenPath}" "${certmgrAPITokenPath}"
else
touch "${certmgrAPITokenPath}" && chmod 600 "${certmgrAPITokenPath}"
install -m 600 /dev/null "${certmgrAPITokenPath}"
fi
''
(optionalString (cfg.pkiTrustOnBootstrap) ''

@@ -297,8 +296,7 @@ in
exit 1
fi

echo $token > ${certmgrAPITokenPath}
chmod 600 ${certmgrAPITokenPath}
install -m 0600 <(echo $token) ${certmgrAPITokenPath}

echo "Restarting certmgr..." >&1
systemctl restart certmgr
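The recurring change in these hunks is replacing `echo > file; chmod` with a single `install` call, so the secret file never exists with open permissions. A standalone sketch of the pattern (path and token size are illustrative, not the module's real values):

```shell
# Generate a hex token and write it with its final mode in one step.
token=$(head -c 32 /dev/urandom | od -An -t x | tr -d ' \n')
install -m 600 <(printf '%s' "$token") /tmp/demo-api-token
stat -c '%a' /tmp/demo-api-token   # prints 600
```

Unlike the redirect-then-chmod sequence, there is no window in which the file is readable with the default umask.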
@@ -99,6 +99,17 @@ in {
type = types.nullOr types.path;
};

openFirewall = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Open etcd ports in the firewall.
Ports opened:
- 2379/tcp for client requests
- 2380/tcp for peer communication
'';
};

peerCertFile = mkOption {
description = lib.mdDoc "Cert file to use for peer to peer communication";
default = cfg.certFile;

@@ -152,14 +163,18 @@ in {
};

config = mkIf cfg.enable {
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' 0700 etcd - - -"
];
systemd.tmpfiles.settings."10-etcd".${cfg.dataDir}.d = {
user = "etcd";
mode = "0700";
};

systemd.services.etcd = {
description = "etcd key-value store";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
after = [ "network-online.target" ]
++ lib.optional config.networking.firewall.enable "firewall.service";
wants = [ "network-online.target" ]
++ lib.optional config.networking.firewall.enable "firewall.service";

environment = (filterAttrs (n: v: v != null) {
ETCD_NAME = cfg.name;

@@ -189,6 +204,8 @@ in {

serviceConfig = {
Type = "notify";
Restart = "always";
RestartSec = "30s";
ExecStart = "${cfg.package}/bin/etcd";
User = "etcd";
LimitNOFILE = 40000;

@@ -197,6 +214,13 @@ in {

environment.systemPackages = [ cfg.package ];

networking.firewall = lib.mkIf cfg.openFirewall {
allowedTCPPorts = [
2379 # for client requests
2380 # for peer communication
];
};

users.users.etcd = {
isSystemUser = true;
group = "etcd";
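With the new option, opening the two ports shown in the hunk reduces to a one-line opt-in (a sketch using only options this diff introduces):

```nix
{
  services.etcd = {
    enable = true;
    openFirewall = true; # opens 2379/tcp (clients) and 2380/tcp (peers)
  };
}
```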
@@ -35,65 +35,64 @@ let

serverConfig = pkgs.writeText "neo4j.conf" ''
# General
dbms.allow_upgrade=${boolToString cfg.allowUpgrade}
dbms.default_listen_address=${cfg.defaultListenAddress}
dbms.databases.default_to_read_only=${boolToString cfg.readOnly}
server.default_listen_address=${cfg.defaultListenAddress}
server.databases.default_to_read_only=${boolToString cfg.readOnly}
${optionalString (cfg.workerCount > 0) ''
dbms.threads.worker_count=${toString cfg.workerCount}
''}

# Directories (readonly)
dbms.directories.certificates=${cfg.directories.certificates}
dbms.directories.plugins=${cfg.directories.plugins}
dbms.directories.lib=${cfg.package}/share/neo4j/lib
# dbms.directories.certificates=${cfg.directories.certificates}
server.directories.plugins=${cfg.directories.plugins}
server.directories.lib=${cfg.package}/share/neo4j/lib
${optionalString (cfg.constrainLoadCsv) ''
dbms.directories.import=${cfg.directories.imports}
server.directories.import=${cfg.directories.imports}
''}

# Directories (read and write)
dbms.directories.data=${cfg.directories.data}
dbms.directories.logs=${cfg.directories.home}/logs
dbms.directories.run=${cfg.directories.home}/run
server.directories.data=${cfg.directories.data}
server.directories.logs=${cfg.directories.home}/logs
server.directories.run=${cfg.directories.home}/run

# HTTP Connector
${optionalString (cfg.http.enable) ''
dbms.connector.http.enabled=${boolToString cfg.http.enable}
dbms.connector.http.listen_address=${cfg.http.listenAddress}
dbms.connector.http.advertised_address=${cfg.http.listenAddress}
server.http.enabled=${boolToString cfg.http.enable}
server.http.listen_address=${cfg.http.listenAddress}
server.http.advertised_address=${cfg.http.listenAddress}
''}

# HTTPS Connector
dbms.connector.https.enabled=${boolToString cfg.https.enable}
dbms.connector.https.listen_address=${cfg.https.listenAddress}
dbms.connector.https.advertised_address=${cfg.https.listenAddress}
server.https.enabled=${boolToString cfg.https.enable}
server.https.listen_address=${cfg.https.listenAddress}
server.https.advertised_address=${cfg.https.listenAddress}

# BOLT Connector
dbms.connector.bolt.enabled=${boolToString cfg.bolt.enable}
dbms.connector.bolt.listen_address=${cfg.bolt.listenAddress}
dbms.connector.bolt.advertised_address=${cfg.bolt.listenAddress}
dbms.connector.bolt.tls_level=${cfg.bolt.tlsLevel}
server.bolt.enabled=${boolToString cfg.bolt.enable}
server.bolt.listen_address=${cfg.bolt.listenAddress}
server.bolt.advertised_address=${cfg.bolt.listenAddress}
server.bolt.tls_level=${cfg.bolt.tlsLevel}

# SSL Policies
${concatStringsSep "\n" sslPolicies}

# Default retention policy from neo4j.conf
dbms.tx_log.rotation.retention_policy=1 days
db.tx_log.rotation.retention_policy=1 days

# Default JVM parameters from neo4j.conf
dbms.jvm.additional=-XX:+UseG1GC
dbms.jvm.additional=-XX:-OmitStackTraceInFastThrow
dbms.jvm.additional=-XX:+AlwaysPreTouch
dbms.jvm.additional=-XX:+UnlockExperimentalVMOptions
dbms.jvm.additional=-XX:+TrustFinalNonStaticFields
dbms.jvm.additional=-XX:+DisableExplicitGC
dbms.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048
dbms.jvm.additional=-Djdk.tls.rejectClientInitiatedRenegotiation=true
dbms.jvm.additional=-Dunsupported.dbms.udc.source=tarball
server.jvm.additional=-XX:+UseG1GC
server.jvm.additional=-XX:-OmitStackTraceInFastThrow
server.jvm.additional=-XX:+AlwaysPreTouch
server.jvm.additional=-XX:+UnlockExperimentalVMOptions
server.jvm.additional=-XX:+TrustFinalNonStaticFields
server.jvm.additional=-XX:+DisableExplicitGC
server.jvm.additional=-Djdk.tls.ephemeralDHKeySize=2048
server.jvm.additional=-Djdk.tls.rejectClientInitiatedRenegotiation=true
server.jvm.additional=-Dunsupported.dbms.udc.source=tarball

#dbms.memory.heap.initial_size=12000m
#dbms.memory.heap.max_size=12000m
#dbms.memory.pagecache.size=4g
#dbms.tx_state.max_off_heap_memory=8000m
#server.memory.off_heap.transaction_max_size=12000m
#server.memory.heap.max_size=12000m
#server.memory.pagecache.size=4g
#server.tx_state.max_off_heap_memory=8000m

# Extra Configuration
${cfg.extraServerConfig}

@@ -127,14 +126,6 @@ in {
'';
};

allowUpgrade = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Allow upgrade of Neo4j database files from an older version.
'';
};

constrainLoadCsv = mkOption {
type = types.bool;
default = true;
@@ -15,11 +15,12 @@ which runs the server.
{
services.livebook = {
enableUserService = true;
port = 20123;
environment = {
LIVEBOOK_PORT = 20123;
LIVEBOOK_PASSWORD = "mypassword";
};
# See note below about security
environmentFile = pkgs.writeText "livebook.env" ''
LIVEBOOK_PASSWORD = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
'';
environmentFile = "/var/lib/livebook.env";
};
}
```

@@ -30,14 +31,19 @@ The Livebook server has the ability to run any command as the user it
is running under, so securing access to it with a password is highly
recommended.

Putting the password in the Nix configuration like above is an easy
way to get started but it is not recommended in the real world because
the `livebook.env` file will be added to the world-readable Nix store.
A better approach would be to put the password in some secure
user-readable location and set `environmentFile = /home/user/secure/livebook.env`.
Putting the password in the Nix configuration like above is an easy way to get
started but it is not recommended in the real world because the resulting
environment variables can be read by unprivileged users. A better approach
would be to put the password in some secure user-readable location and set
`environmentFile = /home/user/secure/livebook.env`.

:::

The [Livebook
documentation](https://hexdocs.pm/livebook/readme.html#environment-variables)
lists all the applicable environment variables. It is recommended to at least
set `LIVEBOOK_PASSWORD` or `LIVEBOOK_TOKEN_ENABLED=false`.

### Extra dependencies {#module-services-livebook-extra-dependencies}

By default, the Livebook service is run with minimum dependencies, but
@@ -14,58 +14,64 @@ in

package = mkPackageOption pkgs "livebook" { };

environmentFile = mkOption {
type = types.path;
description = lib.mdDoc ''
Environment file as defined in {manpage}`systemd.exec(5)` passed to the service.

This must contain at least `LIVEBOOK_PASSWORD` or
`LIVEBOOK_TOKEN_ENABLED=false`. See `livebook server --help`
for other options.'';
};

erlang_node_short_name = mkOption {
type = with types; nullOr str;
default = null;
example = "livebook";
description = "A short name for the distributed node.";
};

erlang_node_name = mkOption {
type = with types; nullOr str;
default = null;
example = "livebook@127.0.0.1";
description = "The name for the app distributed node.";
};

port = mkOption {
type = types.port;
default = 8080;
description = "The port to start the web application on.";
};

address = mkOption {
type = types.str;
default = "127.0.0.1";
description = lib.mdDoc ''
The address to start the web application on. Must be a valid IPv4 or
IPv6 address.
'';
};

options = mkOption {
type = with types; attrsOf str;
environment = mkOption {
type = with types; attrsOf (nullOr (oneOf [ bool int str ]));
default = { };
description = lib.mdDoc ''
Additional options to pass as command-line arguments to the server.
Environment variables to set.

Livebook is configured through the use of environment variables. The
available configuration options can be found in the [Livebook
documentation](https://hexdocs.pm/livebook/readme.html#environment-variables).

Note that all environment variables set through this configuration
parameter will be readable by anyone with access to the host
machine. Therefore, sensitive information like {env}`LIVEBOOK_PASSWORD`
or {env}`LIVEBOOK_COOKIE` should never be set using this configuration
option, but should instead use
[](#opt-services.livebook.environmentFile). See the documentation for
that option for more information.

Any environment variables specified in the
[](#opt-services.livebook.environmentFile) will supersede environment
variables specified in this option.
'';

example = literalExpression ''
{
cookie = "a value shared by all nodes in this cluster";
LIVEBOOK_PORT = 8080;
}
'';
};

environmentFile = mkOption {
type = with types; nullOr types.path;
default = null;
description = lib.mdDoc ''
Additional environment file as defined in {manpage}`systemd.exec(5)`.

Secrets like {env}`LIVEBOOK_PASSWORD` (which is used to specify the
password needed to access the livebook site) or {env}`LIVEBOOK_COOKIE`
(which is used to specify the
[cookie](https://www.erlang.org/doc/reference_manual/distributed.html#security)
used to connect to the running Elixir system) may be passed to the
service without making them readable to everyone with access to
systemctl by using this configuration parameter.

Note that this file needs to be available on the host on which
`livebook` is running.

For security purposes, this file should contain at least
{env}`LIVEBOOK_PASSWORD` or {env}`LIVEBOOK_TOKEN_ENABLED=false`.

See the [Livebook
documentation](https://hexdocs.pm/livebook/readme.html#environment-variables)
and the [](#opt-services.livebook.environment) configuration parameter
for further options.
'';
example = "/var/lib/livebook.env";
};

extraPackages = mkOption {
type = with types; listOf package;
default = [ ];

@@ -81,17 +87,12 @@ in
serviceConfig = {
Restart = "always";
EnvironmentFile = cfg.environmentFile;
ExecStart =
let
args = lib.cli.toGNUCommandLineShell { } ({
inherit (cfg) port;
ip = cfg.address;
name = cfg.erlang_node_name;
sname = cfg.erlang_node_short_name;
} // cfg.options);
in
"${cfg.package}/bin/livebook server ${args}";
ExecStart = "${cfg.package}/bin/livebook start";
KillMode = "mixed";
};
environment = mapAttrs (name: value:
if isBool value then boolToString value else toString value)
cfg.environment;
path = [ pkgs.bash ] ++ cfg.extraPackages;
wantedBy = [ "default.target" ];
};
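The split between `environment` and `environmentFile` introduced above can be sketched as (values are illustrative):

```nix
{
  services.livebook = {
    enableUserService = true;
    # Non-secret settings may go in the world-readable environment.
    environment.LIVEBOOK_PORT = 8080;
    # Secrets stay out of the store, in a restricted file on the host.
    environmentFile = "/var/lib/livebook.env";
  };
}
```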
@@ -20,10 +20,11 @@ let
mkBot = n: c:
format.generate "${n}.json" (c.settings // {
SteamLogin = if c.username == "" then n else c.username;
Enabled = c.enabled;
} // lib.optionalAttrs (c.passwordFile != null) {
SteamPassword = c.passwordFile;
# sets the password format to file (https://github.com/JustArchiNET/ArchiSteamFarm/wiki/Security#file)
PasswordFormat = 4;
Enabled = c.enabled;
});
in
{

@@ -127,8 +128,12 @@ in
default = "";
};
passwordFile = lib.mkOption {
type = lib.types.path;
description = lib.mdDoc "Path to a file containing the password. The file must be readable by the `archisteamfarm` user/group.";
type = with lib.types; nullOr path;
default = null;
description = lib.mdDoc ''
Path to a file containing the password. The file must be readable by the `archisteamfarm` user/group.
Omit or set to null to provide the password a different way, such as through the web-ui.
'';
};
enabled = lib.mkOption {
type = lib.types.bool;
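With `passwordFile` now nullable, a bot can be declared without any password on disk (a sketch; the bot name is hypothetical and the `bots` attribute path follows this module's conventions):

```nix
{
  services.archisteamfarm.bots.example-bot = {
    enabled = true;
    # passwordFile omitted (defaults to null): enter the password
    # through the ArchiSteamFarm web UI instead.
  };
}
```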
@@ -135,7 +135,6 @@ in
wantedBy = [ "multi-user.target" ];

serviceConfig = {
PrivateNetwork = true;
ExecStart = escapeShellArgs
([ "${pkgs.acpid}/bin/acpid"
"--foreground"
44 third_party/nixpkgs/nixos/modules/services/hardware/handheld-daemon.nix vendored Normal file

@@ -0,0 +1,44 @@
{ config
, lib
, pkgs
, ...
}:
with lib; let
cfg = config.services.handheld-daemon;
in
{
options.services.handheld-daemon = {
enable = mkEnableOption "Enable Handheld Daemon";
package = mkPackageOption pkgs "handheld-daemon" { };

user = mkOption {
type = types.str;
description = lib.mdDoc ''
The user to run Handheld Daemon with.
'';
};
};

config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
services.udev.packages = [ cfg.package ];
systemd.packages = [ cfg.package ];

systemd.services.handheld-daemon = {
description = "Handheld Daemon";

wantedBy = [ "multi-user.target" ];

restartIfChanged = true;

serviceConfig = {
ExecStart = "${ lib.getExe cfg.package } --user ${ cfg.user }";
Nice = "-12";
Restart = "on-failure";
RestartSec = "10";
};
};
};

meta.maintainers = [ maintainers.appsforartists ];
}
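Enabling the new module reduces to (a sketch; the user name is hypothetical):

```nix
{
  services.handheld-daemon = {
    enable = true;
    user = "alice"; # hypothetical account the daemon should run as
  };
}
```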
@@ -11,6 +11,8 @@ in
options = {
services.ratbagd = {
enable = mkEnableOption (lib.mdDoc "ratbagd for configuring gaming mice");

package = mkPackageOption pkgs "libratbag" { };
};
};

@@ -18,10 +20,10 @@ in

config = mkIf cfg.enable {
# Give users access to the "ratbagctl" tool
environment.systemPackages = [ pkgs.libratbag ];
environment.systemPackages = [ cfg.package ];

services.dbus.packages = [ pkgs.libratbag ];
services.dbus.packages = [ cfg.package ];

systemd.packages = [ pkgs.libratbag ];
systemd.packages = [ cfg.package ];
};
}
@@ -63,6 +63,12 @@ in
'';
type = types.listOf types.str;
};

usePing = mkOption {
default = false;
type = types.bool;
description = lib.mdDoc "Use ping to check online status of devices instead of mDNS";
};
};

config = mkIf cfg.enable {

@@ -74,8 +80,10 @@ in
wantedBy = ["multi-user.target"];
path = [cfg.package];

environment = {
# platformio fails to determine the home directory when using DynamicUser
environment.PLATFORMIO_CORE_DIR = "${stateDir}/.platformio";
PLATFORMIO_CORE_DIR = "${stateDir}/.platformio";
} // lib.optionalAttrs cfg.usePing { ESPHOME_DASHBOARD_USE_PING = "true"; };

serviceConfig = {
ExecStart = "${cfg.package}/bin/esphome dashboard ${esphomeParams} ${stateDir}";
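The new `usePing` switch maps directly onto the `ESPHOME_DASHBOARD_USE_PING` variable set above; opting in looks like:

```nix
{
  services.esphome = {
    enable = true;
    usePing = true; # check device status via ICMP ping instead of mDNS
  };
}
```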
@@ -71,6 +71,7 @@ in
after = [ "network.target" ];
environment.ZIGBEE2MQTT_DATA = cfg.dataDir;
serviceConfig = {
Type = "notify";
ExecStart = "${cfg.package}/bin/zigbee2mqtt";
User = "zigbee2mqtt";
Group = "zigbee2mqtt";
@@ -1,10 +1,12 @@
{ options, config, lib, pkgs, ... }:
{ config, lib, pkgs, ... }:

let
inherit (lib) any attrValues concatMapStringsSep concatStrings
concatStringsSep flatten imap1 isList literalExpression mapAttrsToList
inherit (lib) attrValues concatMapStringsSep concatStrings
concatStringsSep flatten imap1 literalExpression mapAttrsToList
mkEnableOption mkIf mkOption mkRemovedOptionModule optional optionalAttrs
optionalString singleton types;
optionalString singleton types mkRenamedOptionModule nameValuePair
mapAttrs' listToAttrs filter;
inherit (lib.strings) match;

cfg = config.services.dovecot2;
dovecotPkg = pkgs.dovecot;

@@ -12,6 +14,58 @@ let
baseDir = "/run/dovecot2";
stateDir = "/var/lib/dovecot";

sieveScriptSettings = mapAttrs' (to: _: nameValuePair "sieve_${to}" "${stateDir}/sieve/${to}") cfg.sieve.scripts;
imapSieveMailboxSettings = listToAttrs (flatten (imap1 (idx: el:
singleton {
name = "imapsieve_mailbox${toString idx}_name";
value = el.name;
} ++ optional (el.from != null) {
name = "imapsieve_mailbox${toString idx}_from";
value = el.from;
} ++ optional (el.causes != []) {
name = "imapsieve_mailbox${toString idx}_causes";
value = concatStringsSep "," el.causes;
} ++ optional (el.before != null) {
name = "imapsieve_mailbox${toString idx}_before";
value = "file:${stateDir}/imapsieve/before/${baseNameOf el.before}";
} ++ optional (el.after != null) {
name = "imapsieve_mailbox${toString idx}_after";
value = "file:${stateDir}/imapsieve/after/${baseNameOf el.after}";
}
) cfg.imapsieve.mailbox));

mkExtraConfigCollisionWarning = term: ''
You referred to ${term} in `services.dovecot2.extraConfig`.

Due to gradual transition to structured configuration for plugin configuration, it is possible
this will cause your plugin configuration to be ignored.

Consider setting `services.dovecot2.pluginSettings.${term}` instead.
'';

# Those settings are automatically set based on other parts
# of this module.
automaticallySetPluginSettings = [
"sieve_plugins"
"sieve_extensions"
"sieve_global_extensions"
"sieve_pipe_bin_dir"
]
++ (builtins.attrNames sieveScriptSettings)
++ (builtins.attrNames imapSieveMailboxSettings);

# The idea is to match everything that looks like `$term =`
# but not `# $term something something`
# or `# $term = some value` because those are comments.
configContainsSetting = lines: term: (match "^[^#]*\b${term}\b.*=" lines) != null;

warnAboutExtraConfigCollisions = map mkExtraConfigCollisionWarning (filter (configContainsSetting cfg.extraConfig) automaticallySetPluginSettings);

sievePipeBinScriptDirectory = pkgs.linkFarm "sieve-pipe-bins" (map (el: {
name = builtins.unsafeDiscardStringContext (baseNameOf el);
path = el;
}) cfg.sieve.pipeBins);

dovecotConf = concatStrings [
''
base_dir = ${baseDir}
@@ -77,14 +131,6 @@ let
''
)

(
optionalString (cfg.sieveScripts != {}) ''
plugin {
${concatStringsSep "\n" (mapAttrsToList (to: from: "sieve_${to} = ${stateDir}/sieve/${to}") cfg.sieveScripts)}
}
''
)

(
optionalString (cfg.mailboxes != {}) ''
namespace inbox {

@@ -116,33 +162,12 @@ let
''
)

# General plugin settings:
# - sieve is mostly generated here, refer to `pluginSettings` to follow
# the control flow.
''
plugin {
sieve_plugins = ${concatStringsSep " " cfg.sieve.plugins}
sieve_extensions = ${concatStringsSep " " (map (el: "+${el}") cfg.sieve.extensions)}
sieve_global_extensions = ${concatStringsSep " " (map (el: "+${el}") cfg.sieve.globalExtensions)}
''
(optionalString (cfg.imapsieve.mailbox != []) ''
${
concatStringsSep "\n" (flatten (imap1 (
idx: el:
singleton "imapsieve_mailbox${toString idx}_name = ${el.name}"
++ optional (el.from != null) "imapsieve_mailbox${toString idx}_from = ${el.from}"
++ optional (el.causes != null) "imapsieve_mailbox${toString idx}_causes = ${el.causes}"
++ optional (el.before != null) "imapsieve_mailbox${toString idx}_before = file:${stateDir}/imapsieve/before/${baseNameOf el.before}"
++ optional (el.after != null) "imapsieve_mailbox${toString idx}_after = file:${stateDir}/imapsieve/after/${baseNameOf el.after}"
)
cfg.imapsieve.mailbox))
}
'')
(optionalString (cfg.sieve.pipeBins != []) ''
sieve_pipe_bin_dir = ${pkgs.linkFarm "sieve-pipe-bins" (map (el: {
name = builtins.unsafeDiscardStringContext (baseNameOf el);
path = el;
})
cfg.sieve.pipeBins)}
'')
''
${concatStringsSep "\n" (mapAttrsToList (key: value: " ${key} = ${value}") cfg.pluginSettings)}
}
''
@@ -199,6 +224,7 @@ in
{
imports = [
(mkRemovedOptionModule [ "services" "dovecot2" "package" ] "")
(mkRenamedOptionModule [ "services" "dovecot2" "sieveScripts" ] [ "services" "dovecot2" "sieve" "scripts" ])
];

options.services.dovecot2 = {

@@ -337,12 +363,6 @@ in

enableDHE = mkEnableOption (lib.mdDoc "ssl_dh and generation of primes for the key exchange") // { default = true; };

sieveScripts = mkOption {
type = types.attrsOf types.path;
default = {};
description = lib.mdDoc "Sieve scripts to be executed. Key is a sequence, e.g. 'before2', 'after' etc.";
};

showPAMFailure = mkEnableOption (lib.mdDoc "showing the PAM failure message on authentication error (useful for OTPW)");

mailboxes = mkOption {
@@ -376,6 +396,26 @@ in
description = lib.mdDoc "Quota limit for the user in bytes. Supports suffixes b, k, M, G, T and %.";
};

pluginSettings = mkOption {
# types.str does not coerce from packages, like `sievePipeBinScriptDirectory`.
type = types.attrsOf (types.oneOf [ types.str types.package ]);
default = {};
example = literalExpression ''
{
sieve = "file:~/sieve;active=~/.dovecot.sieve";
}
'';
description = ''
Plugin settings for dovecot in general, e.g. `sieve`, `sieve_default`, etc.

Some of the other knobs of this module will influence by default the plugin settings, but you
can still override any plugin settings.

If you override a plugin setting, its value is cleared and you have to copy over the defaults.
'';
};

imapsieve.mailbox = mkOption {
default = [];
description = "Configure Sieve filtering rules on IMAP actions";
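Using the new structured option instead of `extraConfig` (the `sieve` value is taken from the option's own example above):

```nix
{
  services.dovecot2.pluginSettings = {
    # Overriding a key replaces the module's default for that key entirely.
    sieve = "file:~/sieve;active=~/.dovecot.sieve";
  };
}
```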
@@ -405,14 +445,14 @@ in
};

causes = mkOption {
default = null;
default = [ ];
description = ''
Only execute the administrator Sieve scripts for the mailbox configured with services.dovecot2.imapsieve.mailbox.<name>.name when one of the listed IMAPSIEVE causes apply.

This has no effect on the user script, which is always executed no matter the cause.
'';
example = "COPY";
type = types.nullOr (types.enum [ "APPEND" "COPY" "FLAG" ]);
example = [ "COPY" "APPEND" ];
type = types.listOf (types.enum [ "APPEND" "COPY" "FLAG" ]);
};

before = mkOption {
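After this change `causes` takes a list, so a mailbox rule reacting to several IMAP actions can be sketched as (the mailbox name and script path are hypothetical):

```nix
{
  services.dovecot2.imapsieve.mailbox = [{
    name = "Junk";
    causes = [ "COPY" "APPEND" ]; # now a list instead of a single value
    before = ./report-spam.sieve; # hypothetical administrator script
  }];
}
```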
@@ -462,6 +502,12 @@ in
type = types.listOf types.str;
};

scripts = mkOption {
type = types.attrsOf types.path;
default = {};
description = lib.mdDoc "Sieve scripts to be executed. Key is a sequence, e.g. 'before2', 'after' etc.";
};

pipeBins = mkOption {
default = [];
example = literalExpression ''

@@ -476,7 +522,6 @@ in
};
};

config = mkIf cfg.enable {
security.pam.services.dovecot2 = mkIf cfg.enablePAM {};
@ -501,6 +546,13 @@ in
|
|||
++ optional (cfg.sieve.pipeBins != []) "sieve_extprograms";
|
||||
|
||||
sieve.globalExtensions = optional (cfg.sieve.pipeBins != []) "vnd.dovecot.pipe";
|
||||
|
||||
pluginSettings = lib.mapAttrs (n: lib.mkDefault) ({
|
||||
sieve_plugins = concatStringsSep " " cfg.sieve.plugins;
|
||||
sieve_extensions = concatStringsSep " " (map (el: "+${el}") cfg.sieve.extensions);
|
||||
sieve_global_extensions = concatStringsSep " " (map (el: "+${el}") cfg.sieve.globalExtensions);
|
||||
sieve_pipe_bin_dir = sievePipeBinScriptDirectory;
|
||||
} // sieveScriptSettings // imapSieveMailboxSettings);
|
||||
};
|
||||
|
||||
users.users = {
|
||||
|
@ -556,7 +608,7 @@ in
|
|||
# the source file and Dovecot won't try to compile it.
|
||||
preStart = ''
|
||||
rm -rf ${stateDir}/sieve ${stateDir}/imapsieve
|
||||
'' + optionalString (cfg.sieveScripts != {}) ''
|
||||
'' + optionalString (cfg.sieve.scripts != {}) ''
|
||||
mkdir -p ${stateDir}/sieve
|
||||
${concatStringsSep "\n" (
|
||||
mapAttrsToList (
|
||||
|
@ -569,7 +621,7 @@ in
|
|||
fi
|
||||
${pkgs.dovecot_pigeonhole}/bin/sievec '${stateDir}/sieve/${to}'
|
||||
''
|
||||
) cfg.sieveScripts
|
||||
) cfg.sieve.scripts
|
||||
)}
|
||||
chown -R '${cfg.mailUser}:${cfg.mailGroup}' '${stateDir}/sieve'
|
||||
''
|
||||
|
@ -600,9 +652,7 @@ in
|
|||
|
||||
environment.systemPackages = [ dovecotPkg ];
|
||||
|
||||
warnings = mkIf (any isList options.services.dovecot2.mailboxes.definitions) [
|
||||
"Declaring `services.dovecot2.mailboxes' as a list is deprecated and will break eval in 21.05! See the release notes for more info for migration."
|
||||
];
|
||||
warnings = warnAboutExtraConfigCollisions;
|
||||
|
||||
assertions = [
|
||||
{
|
||||
|
@ -615,8 +665,8 @@ in
|
|||
message = "dovecot is configured with showPAMFailure while enablePAM is disabled";
|
||||
}
|
||||
{
|
||||
assertion = cfg.sieveScripts != {} -> (cfg.mailUser != null && cfg.mailGroup != null);
|
||||
message = "dovecot requires mailUser and mailGroup to be set when sieveScripts is set";
|
||||
assertion = cfg.sieve.scripts != {} -> (cfg.mailUser != null && cfg.mailGroup != null);
|
||||
message = "dovecot requires mailUser and mailGroup to be set when `sieve.scripts` is set";
|
||||
}
|
||||
];
|
||||
|
||||
|
|
|
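Taken together, the renamed `sieve.scripts` option and the new `pluginSettings` attribute can be exercised from a host configuration roughly like this (a sketch; the script path and the user/group names are placeholders, not values from this change):

```nix
# Hypothetical host configuration exercising the renamed option
# (sieveScripts -> sieve.scripts) and the new pluginSettings attribute.
{
  services.dovecot2 = {
    enable = true;
    mailUser = "vmail";   # required whenever sieve.scripts is set
    mailGroup = "vmail";
    # Keys are sequence names ('before', 'before2', 'after', ...), values are paths.
    sieve.scripts.before = ./spam.sieve;   # placeholder path
    # Overrides the module's mkDefault'ed plugin settings for this key only.
    pluginSettings.sieve = "file:~/sieve;active=~/.dovecot.sieve";
  };
}
```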
@@ -143,11 +143,13 @@ in

    environment.systemPackages = [ pkgs.mlmmj ];

-   systemd.tmpfiles.rules = [
-     ''d "${stateDir}" -''
-     ''d "${spoolDir}/${cfg.listDomain}" -''
-     ''Z "${spoolDir}" - "${cfg.user}" "${cfg.group}" -''
-   ];
+   systemd.tmpfiles.settings."10-mlmmj" = {
+     ${stateDir}.d = { };
+     "${spoolDir}/${cfg.listDomain}".d = { };
+     ${spoolDir}.Z = {
+       inherit (cfg) user group;
+     };
+   };

    systemd.services.mlmmj-maintd = {
      description = "mlmmj maintenance daemon";
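The change above follows the general migration from string-based `systemd.tmpfiles.rules` entries to the structured `systemd.tmpfiles.settings` option. In isolation, with hypothetical paths and names, the mapping looks like:

```nix
# Old style: type, path, mode, user, group, age packed into one string.
{
  systemd.tmpfiles.rules = [ "d /var/lib/example 0750 alice users - -" ];
}

# New style: the same directory rule as an attribute set, keyed by a
# config-file name ("10-example"), then path, then tmpfiles type ("d", "f", "Z", ...).
{
  systemd.tmpfiles.settings."10-example"."/var/lib/example".d = {
    mode = "0750";
    user = "alice";
    group = "users";
  };
}
```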
@@ -99,7 +99,11 @@ in
      ${cfg.extraConfig}
    '';

-   systemd.tmpfiles.rules = [ "d /var/cache/postfixadmin/templates_c 700 ${user} ${user}" ];
+   systemd.tmpfiles.settings."10-postfixadmin"."/var/cache/postfixadmin/templates_c".d = {
+     inherit user;
+     group = user;
+     mode = "700";
+   };

    services.nginx = {
      enable = true;
@@ -95,9 +95,11 @@ in {

    services.rss2email.config.to = cfg.to;

-   systemd.tmpfiles.rules = [
-     "d /var/rss2email 0700 rss2email rss2email - -"
-   ];
+   systemd.tmpfiles.settings."10-rss2email"."/var/rss2email".d = {
+     user = "rss2email";
+     group = "rss2email";
+     mode = "0700";
+   };

    systemd.services.rss2email = let
      conf = pkgs.writeText "rss2email.cfg" (lib.generators.toINI {} ({
@@ -93,7 +93,11 @@ in {

    environment.etc."zeyple.conf".source = ini.generate "zeyple.conf" cfg.settings;

-   systemd.tmpfiles.rules = [ "f '${cfg.settings.zeyple.log_file}' 0600 ${cfg.user} ${cfg.group} - -" ];
+   systemd.tmpfiles.settings."10-zeyple".${cfg.settings.zeyple.log_file}.f = {
+     inherit (cfg) user group;
+     mode = "0600";
+   };

    services.logrotate = mkIf cfg.rotateLogs {
      enable = true;
      settings.zeyple = {
78 third_party/nixpkgs/nixos/modules/services/matrix/hebbot.nix vendored Normal file
@@ -0,0 +1,78 @@
{ lib
, config
, pkgs
, ...
}:

let
  inherit (lib) mkEnableOption mkOption mkIf types;
  format = pkgs.formats.toml { };
  cfg = config.services.hebbot;
  settingsFile = format.generate "config.toml" cfg.settings;
  mkTemplateOption = templateName: mkOption {
    type = types.path;
    description = lib.mdDoc ''
      A path to the Markdown file for the ${templateName}.
    '';
  };
in
{
  meta.maintainers = [ lib.maintainers.raitobezarius ];
  options.services.hebbot = {
    enable = mkEnableOption "hebbot";
    botPasswordFile = mkOption {
      type = types.path;
      description = lib.mdDoc ''
        A path to the password file for your bot.

        Consider using a path that does not end up in your Nix store
        as it would be world readable.
      '';
    };
    templates = {
      project = mkTemplateOption "project template";
      report = mkTemplateOption "report template";
      section = mkTemplateOption "section template";
    };
    settings = mkOption {
      type = format.type;
      default = { };
      description = lib.mdDoc ''
        Configuration for Hebbot; for examples, see:

        - <https://github.com/matrix-org/twim-config/blob/master/config.toml>
        - <https://gitlab.gnome.org/Teams/Websites/thisweek.gnome.org/-/blob/main/hebbot/config.toml>
      '';
    };
  };

  config = mkIf cfg.enable {
    systemd.services.hebbot = {
      description = "hebbot - a TWIM-style Matrix bot written in Rust";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];

      preStart = ''
        ln -sf ${cfg.templates.project} ./project_template.md
        ln -sf ${cfg.templates.report} ./report_template.md
        ln -sf ${cfg.templates.section} ./section_template.md
        ln -sf ${settingsFile} ./config.toml
      '';

      script = ''
        export BOT_PASSWORD="$(cat $CREDENTIALS_DIRECTORY/bot-password-file)"
        ${lib.getExe pkgs.hebbot}
      '';

      serviceConfig = {
        DynamicUser = true;
        Restart = "on-failure";
        LoadCredential = "bot-password-file:${cfg.botPasswordFile}";
        RestartSec = "10s";
        StateDirectory = "hebbot";
        WorkingDirectory = "hebbot";
      };
    };
  };
}
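A minimal deployment of the new module might look as follows (a sketch; the secret path and template files are placeholders, and the `settings` keys come from the example configs linked in the option description, not from this change):

```nix
# Hypothetical use of the new services.hebbot module.
{
  services.hebbot = {
    enable = true;
    # Kept outside the Nix store; loaded via systemd LoadCredential.
    botPasswordFile = "/run/secrets/hebbot-password";
    templates = {
      project = ./project_template.md;   # placeholder template files
      report = ./report_template.md;
      section = ./section_template.md;
    };
    # TOML settings; see the twim-config and GNOME examples for the schema.
    settings = { };
  };
}
```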
@@ -45,9 +45,10 @@ in
    };

    config = mkIf cfg.enable {
-     systemd.tmpfiles.rules = [
-       "d '${cfg.dataDir}' 0700 ${cfg.user} ${cfg.group} - -"
-     ];
+     systemd.tmpfiles.settings."10-lidarr".${cfg.dataDir}.d = {
+       inherit (cfg) user group;
+       mode = "0700";
+     };

      systemd.services.lidarr = {
        description = "Lidarr";
@@ -103,17 +103,18 @@ in {

  config = mkIf cfg.enable {
    warnings = []
-     ++ optional (cfg.settings.update_manager.enable_system_updates or false)
-       ''Enabling update_manager is not supported on NixOS and will lead to non-removable warnings in some clients.''
-     ++ optional (cfg.configDir != null)
-       ''
+     ++ (optional (head (cfg.settings.update_manager.enable_system_updates or [false])) ''
+       Enabling system updates is not supported on NixOS and will lead to non-removable warnings in some clients.
+     '')
+     ++ (optional (cfg.configDir != null) ''
        services.moonraker.configDir has been deprecated upstream and will be removed.

        Action: ${
-         if cfg.configDir == unifiedConfigDir then "Simply remove services.moonraker.configDir from your config."
+         if cfg.configDir == unifiedConfigDir
+         then "Simply remove services.moonraker.configDir from your config."
          else "Move files from `${cfg.configDir}` to `${unifiedConfigDir}` then remove services.moonraker.configDir from your config."
        }
-     '';
+     '');

    assertions = [
      {
@@ -13,7 +13,7 @@ let
    (iniFmt.generate "PackageKit.conf" (recursiveUpdate
      {
        Daemon = {
-         DefaultBackend = "nix";
+         DefaultBackend = "test_nop";
          KeepCache = false;
        };
      }

@@ -35,7 +35,7 @@ let
in
{
  imports = [
-   (mkRemovedOptionModule [ "services" "packagekit" "backend" ] "Always set to Nix.")
+   (mkRemovedOptionModule [ "services" "packagekit" "backend" ] "Always set to test_nop; the Nix backend is broken, see #177946.")
  ];

  options.services.packagekit = {
@@ -6,7 +6,6 @@ let
  pkg = cfg.package;

  defaultUser = "paperless";
- nltkDir = "/var/cache/paperless/nltk";
  defaultFont = "${pkgs.liberation_ttf}/share/fonts/truetype/LiberationSerif-Regular.ttf";

  # Don't start a redis instance if the user sets a custom redis connection

@@ -17,13 +16,17 @@ let
    PAPERLESS_DATA_DIR = cfg.dataDir;
    PAPERLESS_MEDIA_ROOT = cfg.mediaDir;
    PAPERLESS_CONSUMPTION_DIR = cfg.consumptionDir;
-   PAPERLESS_NLTK_DIR = nltkDir;
    PAPERLESS_THUMBNAIL_FONT_NAME = defaultFont;
    GUNICORN_CMD_ARGS = "--bind=${cfg.address}:${toString cfg.port}";
  } // optionalAttrs (config.time.timeZone != null) {
    PAPERLESS_TIME_ZONE = config.time.timeZone;
  } // optionalAttrs enableRedis {
    PAPERLESS_REDIS = "unix://${redisServer.unixSocket}";
+ } // optionalAttrs (cfg.settings.PAPERLESS_ENABLE_NLTK or true) {
+   PAPERLESS_NLTK_DIR = pkgs.symlinkJoin {
+     name = "paperless_ngx_nltk_data";
+     paths = pkg.nltkData;
+   };
  } // (lib.mapAttrs (_: s:
    if (lib.isAttrs s || lib.isList s) then builtins.toJSON s
    else if lib.isBool s then lib.boolToString s

@@ -141,12 +144,12 @@ in
      `''${dataDir}/paperless-manage createsuperuser`.

      The default superuser name is `admin`. To change it, set
-     option {option}`extraConfig.PAPERLESS_ADMIN_USER`.
+     option {option}`settings.PAPERLESS_ADMIN_USER`.
      WARNING: When changing the superuser name after the initial setup, the old superuser
      will continue to exist.

      To disable login for the web interface, set the following:
-     `extraConfig.PAPERLESS_AUTO_LOGIN_USERNAME = "admin";`.
+     `settings.PAPERLESS_AUTO_LOGIN_USERNAME = "admin";`.
      WARNING: Only use this on a trusted system without internet access to Paperless.
    '';
  };

@@ -292,23 +295,6 @@ in
      };
    };

-   # Download NLTK corpus data
-   systemd.services.paperless-download-nltk-data = {
-     wantedBy = [ "paperless-scheduler.service" ];
-     before = [ "paperless-scheduler.service" ];
-     after = [ "network-online.target" ];
-     wants = [ "network-online.target" ];
-     serviceConfig = defaultServiceConfig // {
-       User = cfg.user;
-       Type = "oneshot";
-       # Enable internet access
-       PrivateNetwork = false;
-       ExecStart = let pythonWithNltk = pkg.python.withPackages (ps: [ ps.nltk ]); in ''
-         ${pythonWithNltk}/bin/python -m nltk.downloader -d '${nltkDir}' punkt snowball_data stopwords
-       '';
-     };
-   };

    systemd.services.paperless-consumer = {
      description = "Paperless document consumer";
      # Bind to `paperless-scheduler` so that the consumer never runs
@@ -37,6 +37,15 @@ in
    '';
  };

+ seedSettings = lib.mkOption {
+   type = with lib.types; nullOr (attrsOf (listOf (attrsOf anything)));
+   default = null;
+   description = lib.mdDoc ''
+     Seed settings for users and groups.
+     See upstream for the format: <https://github.com/majewsky/portunus#seeding-users-and-groups-from-static-configuration>
+   '';
+ };

  stateDir = mkOption {
    type = types.path;
    default = "/var/lib/portunus";

@@ -172,7 +181,8 @@ in
      "127.0.0.1" = [ cfg.domain ];
    };

-   services.dex = mkIf cfg.dex.enable {
+   services = {
+     dex = mkIf cfg.dex.enable {
        enable = true;
        settings = {
          issuer = "https://${cfg.domain}/dex";

@@ -217,6 +227,9 @@ in
        };
      };

+     portunus.seedPath = lib.mkIf (cfg.seedSettings != null) (pkgs.writeText "seed.json" (builtins.toJSON cfg.seedSettings));
+   };

    systemd.services = {
      dex.serviceConfig = mkIf cfg.dex.enable {
        # `dex.service` is super locked down out of the box, but we need some
@@ -40,9 +40,10 @@ in
    };

    config = mkIf cfg.enable {
-     systemd.tmpfiles.rules = [
-       "d '${cfg.dataDir}' 0700 ${cfg.user} ${cfg.group} - -"
-     ];
+     systemd.tmpfiles.settings."10-radarr".${cfg.dataDir}.d = {
+       inherit (cfg) user group;
+       mode = "0700";
+     };

      systemd.services.radarr = {
        description = "Radarr";
@@ -45,9 +45,10 @@ in
    };

    config = mkIf cfg.enable {
-     systemd.tmpfiles.rules = [
-       "d '${cfg.dataDir}' 0700 ${cfg.user} ${cfg.group} - -"
-     ];
+     systemd.tmpfiles.settings."10-readarr".${cfg.dataDir}.d = {
+       inherit (cfg) user group;
+       mode = "0700";
+     };

      systemd.services.readarr = {
        description = "Readarr";
@@ -79,9 +79,10 @@ in
    };

    config = mkIf cfg.enable {
-     systemd.tmpfiles.rules = [
-       "d '${cfg.logDir}' - alerta alerta - -"
-     ];
+     systemd.tmpfiles.settings."10-alerta".${cfg.logDir}.d = {
+       user = "alerta";
+       group = "alerta";
+     };

      systemd.services.alerta = {
        description = "Alerta Monitoring System";
@@ -160,9 +160,9 @@ in
  config = mkIf cfg.enable {
    environment.systemPackages = [ pkgs.kapacitor ];

-   systemd.tmpfiles.rules = [
-     "d '${cfg.dataDir}' - ${cfg.user} ${cfg.group} - -"
-   ];
+   systemd.tmpfiles.settings."10-kapacitor".${cfg.dataDir}.d = {
+     inherit (cfg) user group;
+   };

    systemd.services.kapacitor = {
      description = "Kapacitor Real-Time Stream Processing Engine";
@@ -374,7 +374,11 @@ in
    };

    # munin_stats plugin breaks as of 2.0.33 when this doesn't exist
-   systemd.tmpfiles.rules = [ "d /run/munin 0755 munin munin -" ];
+   systemd.tmpfiles.settings."10-munin"."/run/munin".d = {
+     mode = "0755";
+     user = "munin";
+     group = "munin";
+   };

  }) (mkIf cronCfg.enable {

@@ -399,11 +403,17 @@ in
      };
    };

-   systemd.tmpfiles.rules = [
-     "d /run/munin 0755 munin munin -"
-     "d /var/log/munin 0755 munin munin -"
-     "d /var/www/munin 0755 munin munin -"
-     "d /var/lib/munin 0755 munin munin -"
-   ];
+   systemd.tmpfiles.settings."20-munin" = let
+     defaultConfig = {
+       mode = "0755";
+       user = "munin";
+       group = "munin";
+     };
+   in {
+     "/run/munin".d = defaultConfig;
+     "/var/log/munin".d = defaultConfig;
+     "/var/www/munin".d = defaultConfig;
+     "/var/lib/munin".d = defaultConfig;
+   };
  })];
}
@@ -90,8 +90,10 @@ in
      };
      wantedBy = [ "multi-user.target" ];
    };
-   systemd.tmpfiles.rules = [
-     "d ${dirname (cfg.flags.pidfile)} 0755 root root -"
-   ];
+   systemd.tmpfiles.settings."10-osquery".${dirname (cfg.flags.pidfile)}.d = {
+     user = "root";
+     group = "root";
+     mode = "0755";
+   };
  };
}
@@ -60,7 +60,6 @@ let
    "node"
    "nut"
    "openldap"
-   "openvpn"
    "pgbouncer"
    "php-fpm"
    "pihole"

@@ -71,6 +70,7 @@ let
    "pve"
    "py-air-control"
    "redis"
+   "restic"
    "rspamd"
    "rtl_433"
    "sabnzbd"
@@ -1,39 +0,0 @@
{ config, pkgs, lib, ... }:

with lib;

let
  cfg = config.services.prometheus.exporters.openvpn;
in {
  port = 9176;
  extraOpts = {
    statusPaths = mkOption {
      type = types.listOf types.str;
      description = lib.mdDoc ''
        Paths to OpenVPN status files. Please configure the OpenVPN option
        `status` accordingly.
      '';
    };
    telemetryPath = mkOption {
      type = types.str;
      default = "/metrics";
      description = lib.mdDoc ''
        Path under which to expose metrics.
      '';
    };
  };

  serviceOpts = {
    serviceConfig = {
      PrivateDevices = true;
      ProtectKernelModules = true;
      NoNewPrivileges = true;
      ExecStart = ''
        ${pkgs.prometheus-openvpn-exporter}/bin/openvpn_exporter \
          -openvpn.status_paths "${concatStringsSep "," cfg.statusPaths}" \
          -web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
          -web.telemetry-path ${cfg.telemetryPath}
      '';
    };
  };
}
@@ -21,7 +21,7 @@ in
      type = with types; nullOr path;
      default = null;
      example = "/etc/prometheus-pve-exporter/pve.env";
-     description = lib.mdDoc ''
+     description = ''
        Path to the service's environment file. This path can either be a computed path in /nix/store or a path in the local filesystem.

        The environment file should NOT be stored in /nix/store as it contains passwords and/or keys in plain text.

@@ -34,7 +34,7 @@ in
      type = with types; nullOr path;
      default = null;
      example = "/etc/prometheus-pve-exporter/pve.yml";
-     description = lib.mdDoc ''
+     description = ''
        Path to the service's config file. This path can either be a computed path in /nix/store or a path in the local filesystem.

        The config file should NOT be stored in /nix/store as it will contain passwords and/or keys in plain text.

@@ -45,46 +45,66 @@ in
      '';
    };

+   server = {
+     keyFile = mkOption {
+       type = with types; nullOr path;
+       default = null;
+       example = "/var/lib/prometheus-pve-exporter/privkey.key";
+       description = ''
+         Path to an SSL private key file for the server.
+       '';
+     };
+
+     certFile = mkOption {
+       type = with types; nullOr path;
+       default = null;
+       example = "/var/lib/prometheus-pve-exporter/full-chain.pem";
+       description = ''
+         Path to an SSL certificate file for the server.
+       '';
+     };
+   };

    collectors = {
      status = mkOption {
        type = types.bool;
        default = true;
-       description = lib.mdDoc ''
+       description = ''
          Collect Node/VM/CT status
        '';
      };
      version = mkOption {
        type = types.bool;
        default = true;
-       description = lib.mdDoc ''
+       description = ''
          Collect PVE version info
        '';
      };
      node = mkOption {
        type = types.bool;
        default = true;
-       description = lib.mdDoc ''
+       description = ''
          Collect PVE node info
        '';
      };
      cluster = mkOption {
        type = types.bool;
        default = true;
-       description = lib.mdDoc ''
+       description = ''
          Collect PVE cluster info
        '';
      };
      resources = mkOption {
        type = types.bool;
        default = true;
-       description = lib.mdDoc ''
+       description = ''
          Collect PVE resources info
        '';
      };
      config = mkOption {
        type = types.bool;
        default = true;
-       description = lib.mdDoc ''
+       description = ''
          Collect PVE onboot status
        '';
      };

@@ -102,8 +122,10 @@ in
          --${optionalString (!cfg.collectors.cluster) "no-"}collector.cluster \
          --${optionalString (!cfg.collectors.resources) "no-"}collector.resources \
          --${optionalString (!cfg.collectors.config) "no-"}collector.config \
-         %d/configFile \
-         ${toString cfg.port} ${cfg.listenAddress}
+         ${optionalString (cfg.server.keyFile != null) "--server.keyfile ${cfg.server.keyFile}"} \
+         ${optionalString (cfg.server.certFile != null) "--server.certfile ${cfg.server.certFile}"} \
+         --config.file %d/configFile \
+         --web.listen-address ${cfg.listenAddress}:${toString cfg.port}
      '';
    } // optionalAttrs (cfg.environmentFile != null) {
      EnvironmentFile = cfg.environmentFile;
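With the new `server` options, a configuration enabling TLS for the exporter could look like this (a sketch reusing the example paths from the option declarations above; `enable` and `configFile` are the exporter's usual options, not part of this hunk):

```nix
# Hypothetical configuration exercising the new server.{keyFile,certFile} options.
{
  services.prometheus.exporters.pve = {
    enable = true;
    configFile = "/etc/prometheus-pve-exporter/pve.yml";  # kept outside the Nix store
    server = {
      keyFile = "/var/lib/prometheus-pve-exporter/privkey.key";
      certFile = "/var/lib/prometheus-pve-exporter/full-chain.pem";
    };
  };
}
```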
131 third_party/nixpkgs/nixos/modules/services/monitoring/prometheus/exporters/restic.nix vendored Normal file
@@ -0,0 +1,131 @@
{ config, lib, pkgs, options }:

with lib;

let
  cfg = config.services.prometheus.exporters.restic;
in
{
  port = 9753;
  extraOpts = {
    repository = mkOption {
      type = types.str;
      description = lib.mdDoc ''
        URI pointing to the repository to monitor.
      '';
      example = "sftp:backup@192.168.1.100:/backups/example";
    };

    passwordFile = mkOption {
      type = types.path;
      description = lib.mdDoc ''
        File containing the password to the repository.
      '';
      example = "/etc/nixos/restic-password";
    };

    environmentFile = mkOption {
      type = with types; nullOr path;
      default = null;
      description = lib.mdDoc ''
        File containing the credentials to access the repository, in the
        format of an EnvironmentFile as described by systemd.exec(5).
      '';
    };

    refreshInterval = mkOption {
      type = types.ints.unsigned;
      default = 60;
      description = lib.mdDoc ''
        Refresh interval for the metrics in seconds.
        Computing the metrics is an expensive task; keep this value as high as possible.
      '';
    };

    rcloneOptions = mkOption {
      type = with types; attrsOf (oneOf [ str bool ]);
      default = { };
      description = lib.mdDoc ''
        Options to pass to rclone to control its behavior.
        See <https://rclone.org/docs/#options> for
        available options. When specifying option names, strip the
        leading `--`. To set a flag such as
        `--drive-use-trash`, which does not take a value,
        set the value to the Boolean `true`.
      '';
    };

    rcloneConfig = mkOption {
      type = with types; attrsOf (oneOf [ str bool ]);
      default = { };
      description = lib.mdDoc ''
        Configuration for the rclone remote being used for backup.
        See the remote's specific options under rclone's docs at
        <https://rclone.org/docs/>. When specifying
        option names, use the "config" name specified in the docs.
        For example, to set `--b2-hard-delete` for a B2
        remote, use `hard_delete = true` in the
        attribute set.

        ::: {.warning}
        Secrets set in here will be world-readable in the Nix
        store! Consider using the {option}`rcloneConfigFile`
        option instead to specify secret values separately. Note that
        options set here will override those set in the config file.
        :::
      '';
    };

    rcloneConfigFile = mkOption {
      type = with types; nullOr path;
      default = null;
      description = lib.mdDoc ''
        Path to the file containing rclone configuration. This file
        must contain configuration for the remote specified in this backup
        set and also must be readable by root.

        ::: {.caution}
        Options set in `rcloneConfig` will override those set in this
        file.
        :::
      '';
    };
  };

  serviceOpts = {
    serviceConfig = {
      ExecStart = ''
        ${pkgs.prometheus-restic-exporter}/bin/restic-exporter.py \
          ${concatStringsSep " \\\n " cfg.extraFlags}
      '';
      EnvironmentFile = mkIf (cfg.environmentFile != null) cfg.environmentFile;
    };
    environment =
      let
        rcloneRemoteName = builtins.elemAt (splitString ":" cfg.repository) 1;
        rcloneAttrToOpt = v: "RCLONE_" + toUpper (builtins.replaceStrings [ "-" ] [ "_" ] v);
        rcloneAttrToConf = v: "RCLONE_CONFIG_" + toUpper (rcloneRemoteName + "_" + v);
        toRcloneVal = v: if lib.isBool v then lib.boolToString v else v;
      in
      {
        RESTIC_REPO_URL = cfg.repository;
        RESTIC_REPO_PASSWORD_FILE = cfg.passwordFile;
        LISTEN_ADDRESS = cfg.listenAddress;
        LISTEN_PORT = toString cfg.port;
        REFRESH_INTERVAL = toString cfg.refreshInterval;
      }
      // (mapAttrs'
        (name: value:
          nameValuePair (rcloneAttrToOpt name) (toRcloneVal value)
        )
        cfg.rcloneOptions)
      // optionalAttrs (cfg.rcloneConfigFile != null) {
        RCLONE_CONFIG = cfg.rcloneConfigFile;
      }
      // (mapAttrs'
        (name: value:
          nameValuePair (rcloneAttrToConf name) (toRcloneVal value)
        )
        cfg.rcloneConfig);
  };
}
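A configuration using the new exporter might look like this (a sketch built from the option examples declared above; the repository URI and password file are the declared examples, not real endpoints):

```nix
# Hypothetical use of the new services.prometheus.exporters.restic module.
{
  services.prometheus.exporters.restic = {
    enable = true;
    repository = "sftp:backup@192.168.1.100:/backups/example";
    passwordFile = "/etc/nixos/restic-password";
    # Metrics are expensive to compute; refresh sparingly.
    refreshInterval = 300;
  };
}
```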
@@ -3,6 +3,7 @@
with lib;

let
+ logPrefix = "services.prometheus.exporters.snmp";
  cfg = config.services.prometheus.exporters.snmp;

  # This ensures that we can deal with string paths, path types and
@@ -59,9 +59,10 @@ in {
    group = "riemanndash";
  };

- systemd.tmpfiles.rules = [
-   "d '${cfg.dataDir}' - riemanndash riemanndash - -"
- ];
+ systemd.tmpfiles.settings."10-riemanndash".${cfg.dataDir}.d = {
+   user = "riemanndash";
+   group = "riemanndash";
+ };

  systemd.services.riemann-dash = {
    wantedBy = [ "multi-user.target" ];
@@ -299,10 +299,7 @@ in
      fi
    '' + optionalString (cfg.database.passwordFile != null) ''
      # create a copy of the supplied password file in a format zabbix can consume
-     touch ${passwordFile}
-     chmod 0600 ${passwordFile}
-     echo -n "DBPassword = " > ${passwordFile}
-     cat ${cfg.database.passwordFile} >> ${passwordFile}
+     install -m 0600 <(echo "DBPassword = $(cat ${cfg.database.passwordFile})") ${passwordFile}
    '';

    serviceConfig = {
|
|||
fi
|
||||
'' + optionalString (cfg.database.passwordFile != null) ''
|
||||
# create a copy of the supplied password file in a format zabbix can consume
|
||||
touch ${passwordFile}
|
||||
chmod 0600 ${passwordFile}
|
||||
echo -n "DBPassword = " > ${passwordFile}
|
||||
cat ${cfg.database.passwordFile} >> ${passwordFile}
|
||||
install -m 0600 <(echo "DBPassword = $(cat ${cfg.database.passwordFile})") ${passwordFile}
|
||||
'';
|
||||
|
||||
serviceConfig = {
|
||||
|
|
|
@@ -56,8 +56,10 @@ in
      };
    };

-   systemd.tmpfiles.rules = [
-     "d ${cfg.cacheDir} 0700 root root - -"
-   ];
+   systemd.tmpfiles.settings."10-cachefilesd".${cfg.cacheDir}.d = {
+     user = "root";
+     group = "root";
+     mode = "0700";
+   };
  };
}
Some files were not shown because too many files have changed in this diff.