Project import generated by Copybara.

GitOrigin-RevId: a100acd7bbf105915b0004427802286c37738fef
Default email 2023-02-02 18:25:31 +00:00
parent 7c6bdab11c
commit a0cb138ada
7481 changed files with 69895 additions and 47213 deletions


@ -134,7 +134,9 @@
/pkgs/development/ruby-modules @marsam
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq @winterqt @figsoda
/pkgs/build-support/rust @zowoq @winterqt @figsoda
/doc/languages-frameworks/rust.section.md @zowoq @winterqt @figsoda
# C compilers
/pkgs/development/compilers/gcc @matthewbauer


@ -24,7 +24,7 @@ jobs:
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Create backport PRs
uses: korthout/backport-action@v1.0.1
uses: korthout/backport-action@v1.1.0
with:
# Config README: https://github.com/korthout/backport-action#backport-action
pull_description: |-


@ -11,7 +11,7 @@ on:
jobs:
tests:
runs-on: ubuntu-latest
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip editorconfig]')"
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip treewide]')"
steps:
- name: Get list of changed files from PR
env:


@ -16,7 +16,7 @@ permissions:
jobs:
labels:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip treewide]')"
steps:
- uses: actions/labeler@v4
with:


@ -145,7 +145,7 @@ Create a Docker image with many of the store paths being on their own layer to i
`architecture` is _optional_ and used to specify the image architecture, this is useful for multi-architecture builds that don't need cross compiling. If not specified it will default to `hostPlatform`.
: Run-time configuration of the container. A full list of the options are available at in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
: Run-time configuration of the container. A full list of the options available is in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
*Default:* `{}`
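A minimal sketch of the `architecture` parameter described above; the image name, contents, and target architecture are illustrative only:

```nix
dockerTools.buildLayeredImage {
  name = "hello-arm64";       # illustrative image name
  contents = [ pkgs.hello ];
  # request an aarch64 image without cross compiling the builder itself
  architecture = "arm64";
}
```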


@ -4,7 +4,7 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a
## Basic usage {#sec-citrix-base}
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.com/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
## Citrix Self-service {#sec-citrix-selfservice}
@ -19,7 +19,7 @@ $ selfservice
## Custom certificates {#sec-citrix-custom-certs}
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://citrix.github.io/receiver-for-linux-command-reference/), however this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
```nix
with import <nixpkgs> { config.allowUnfree = true; };


@ -4,7 +4,7 @@ This package is an ibus-based completion method to speed up typing.
## Activating the engine {#sec-ibus-typing-booster-activate}
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/).
On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
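A minimal sketch of such a configuration, assuming the standard `i18n.inputMethod` options:

```nix
{ pkgs, ... }:
{
  i18n.inputMethod = {
    enabled = "ibus";
    # make the typing-booster engine available to ibus
    ibus.engines = with pkgs.ibus-engines; [ typing-booster ];
  };
}
```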


@ -1,6 +1,19 @@
# Testers {#chap-testers}
This chapter describes several testing builders which are available in the <literal>testers</literal> namespace.
## `hasPkgConfigModule` {#tester-hasPkgConfigModule}
Checks whether a package exposes a certain `pkg-config` module.
Example:
```nix
passthru.tests.pkg-config = testers.hasPkgConfigModule {
  package = finalAttrs.finalPackage;
  moduleName = "libfoo";
};
```
## `testVersion` {#tester-testVersion}
Checks that the command output contains the specified version.
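A hedged sketch of a typical invocation; the package, command, and version shown are placeholders:

```nix
passthru.tests.version = testers.testVersion {
  package = hello;
  # command and version are optional: command defaults to a `--version`
  # call and version defaults to package.version
  command = "hello --version";
  version = "2.12.1";
};
```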


@ -27,7 +27,7 @@ If the build succeeds, the manual will be in `./result/share/doc/nixpkgs/manual.
As per [RFC 0072](https://github.com/NixOS/rfcs/pull/72), all new documentation content should be written in [CommonMark](https://commonmark.org/) Markdown dialect.
Additional syntax extensions are available, though not all extensions can be used in NixOS option documentation. The following extensions are currently used:
Additional syntax extensions are available, all of which can be used in NixOS option documentation. The following extensions are currently used:
- []{#ssec-contributing-markup-anchors}
Explicitly defined **anchors** on headings, to allow linking to sections. These should be always used, to ensure the anchors can be linked even when the heading text changes, and to prevent conflicts between [automatically assigned identifiers](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/auto_identifiers.md).
@ -38,6 +38,10 @@ Additional syntax extensions are available, though not all extensions can be use
## Syntax {#sec-contributing-markup}
```
::: {.note}
NixOS option documentation does not support headings in general.
:::
- []{#ssec-contributing-markup-anchors-inline}
**Inline anchors**, which allow linking arbitrary place in the text (e.g. individual list items, sentences…).
@ -67,10 +71,6 @@ Additional syntax extensions are available, though not all extensions can be use
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point). Though, the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.
::: {.note}
Inline roles are available for option documentation.
:::
- []{#ssec-contributing-markup-admonitions}
**Admonitions**, set off from the text to bring attention to something.
@ -96,10 +96,6 @@ Additional syntax extensions are available, though not all extensions can be use
- [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
- [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)
::: {.note}
Admonitions are available for option documentation.
:::
- []{#ssec-contributing-markup-definition-lists}
[**Definition lists**](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md), for defining a group of terms:


@ -9,7 +9,7 @@ stdenv.mkDerivation {
# ...
checkInputs = [
nativeCheckInputs = [
postgresql
postgresqlTestHook
];
@ -46,6 +46,12 @@ Bash-only variables:
- `postgresqlEnableTCP`: set to `1` to enable TCP listening. Flaky; not recommended.
- `postgresqlStartCommands`: defaults to `pg_ctl start`.
## Hooks {#sec-postgresqlTestHook-hooks}
A number of additional hooks are run in `postgresqlTestHook`:
- `postgresqlTestSetupPost`: run after postgresql has been set up (see the sketch below).
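As an illustration, a sketch of setting that hook from a derivation; the SQL is only an example and assumes `psql` can reach the test instance prepared by the hook:

```nix
postgresqlTestSetupPost = ''
  # runs once the test database is up; load an extension the tests need
  psql -c 'CREATE EXTENSION IF NOT EXISTS pg_trgm;'
'';
```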
## TCP and the Nix sandbox {#sec-postgresqlTestHook-tcp}
`postgresqlEnableTCP` relies on network sandboxing, which is not available on macOS and some custom Nix installations, resulting in flaky tests.


@ -13,6 +13,7 @@ with import <nixpkgs> {};
let
androidComposition = androidenv.composeAndroidPackages {
cmdLineToolsVersion = "8.0";
toolsVersion = "26.1.1";
platformToolsVersion = "30.0.5";
buildToolsVersions = [ "30.0.3" ];
@ -42,7 +43,10 @@ exceptions are the tools, platform-tools and build-tools sub packages.
The following parameters are supported:
* `toolsVersion`, specifies the version of the tools package to use
* `cmdLineToolsVersion `, specifies the version of the `cmdline-tools` package to use
* `toolsVersion`, specifies the version of the `tools` package. Notice `tools` is
obsolete, and currently only `26.1.1` is available, so there's not a lot of
options here, however, you can set it as `null` if you don't want it.
* `platformsToolsVersion` specifies the version of the `platform-tools` plugin
* `buildToolsVersions` specifies the versions of the `build-tools` plugins to
use.


@ -128,7 +128,7 @@ You will need to run the build process once to fix the hash to correspond to you
###### FOD {#fixed-output-derivation}
A fixed output derivation will download mix dependencies from the internet. To ensure reproducibility, a hash will be supplied. Note that mix is relatively reproducible. An FOD generating a different hash on each run hasn't been observed (as opposed to npm where the chances are relatively high). See [elixir_ls](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/beam-modules/elixir_ls.nix) for a usage example of FOD.
A fixed output derivation will download mix dependencies from the internet. To ensure reproducibility, a hash will be supplied. Note that mix is relatively reproducible. An FOD generating a different hash on each run hasn't been observed (as opposed to npm where the chances are relatively high). See [elixir_ls](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/beam-modules/elixir-ls/default.nix) for a usage example of FOD.
Practical steps


@ -28,13 +28,13 @@ mkShell {
packages = [
(with dotnetCorePackages; combinePackages [
sdk_3_1
sdk_5_0
sdk_6_0
])
];
}
```
This will produce a dotnet installation that has the dotnet 3.1, 3.0, and 2.1 sdk. The first sdk listed will have it's cli utility present in the resulting environment. Example info output:
This will produce a dotnet installation that has the dotnet 3.1 and 6.0 sdk. The first sdk listed will have its cli utility present in the resulting environment. Example info output:
```ShellSession
$ dotnet --info
@ -120,7 +120,7 @@ in buildDotnetModule rec {
projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure.
dotnet-sdk = dotnetCorePackages.sdk_3_1;
dotnet-runtime = dotnetCorePackages.net_5_0;
dotnet-runtime = dotnetCorePackages.net_6_0;
executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
executables = []; # Don't install any executables.


@ -34,7 +34,7 @@ To allow software to use various virtual file systems, `gvfs` package can be als
### GdkPixbuf loaders {#ssec-gnome-gdk-pixbuf-loaders}
GTK applications typically use [GdkPixbuf](https://developer.gnome.org/gdk-pixbuf/stable/) to load images. But `gdk-pixbuf` package only supports basic bitmap formats like JPEG, PNG or TIFF, requiring to use third-party loader modules for other formats. This is especially painful since GTK itself includes SVG icons, which cannot be rendered without a loader provided by `librsvg`.
GTK applications typically use [GdkPixbuf](https://gitlab.gnome.org/GNOME/gdk-pixbuf/) to load images. But `gdk-pixbuf` package only supports basic bitmap formats like JPEG, PNG or TIFF, requiring to use third-party loader modules for other formats. This is especially painful since GTK itself includes SVG icons, which cannot be rendered without a loader provided by `librsvg`.
Unlike other libraries mentioned in this section, GdkPixbuf only supports a single value in its controlling environment variable `GDK_PIXBUF_MODULE_FILE`. It is supposed to point to a cache file containing information about the available loaders. Each loader package will contain a `lib/gdk-pixbuf-2.0/2.10.0/loaders.cache` file describing the default loaders in `gdk-pixbuf` package plus the loader contained in the package itself. If you want to use multiple third-party loaders, you will need to create your own cache file manually. Fortunately, this is pretty rare as [not many loaders exist](https://gitlab.gnome.org/federico/gdk-pixbuf-survey/blob/master/src/modules.md).
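As a hedged illustration of that mechanism, a package could point the variable at the cache shipped by `librsvg` (the wrapped binary name is hypothetical, the cache path assumes librsvg's conventional layout, and `wrapGAppsHook` normally does this automatically):

```nix
postFixup = ''
  # requires makeWrapper in nativeBuildInputs
  wrapProgram $out/bin/foo \
    --set GDK_PIXBUF_MODULE_FILE \
    "${librsvg}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache"
'';
```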
@ -70,7 +70,7 @@ Also make sure that `icon-theme.cache` is installed for each theme provided by t
### GTK Themes {#ssec-gnome-themes}
Previously, a GTK theme needed to be in `XDG_DATA_DIRS`. This is no longer necessary for most programs since GTK incorporated Adwaita theme. Some programs (for example, those designed for [elementary HIG](https://elementary.io/docs/human-interface-guidelines#human-interface-guidelines)) might require a special theme like `pantheon.elementary-gtk-theme`.
Previously, a GTK theme needed to be in `XDG_DATA_DIRS`. This is no longer necessary for most programs since GTK incorporated Adwaita theme. Some programs (for example, those designed for [elementary HIG](https://docs.elementary.io/hig)) might require a special theme like `pantheon.elementary-gtk-theme`.
### GObject introspection typelibs {#ssec-gnome-typelibs}


@ -32,6 +32,7 @@
<xi:include href="octave.section.xml" />
<xi:include href="perl.section.xml" />
<xi:include href="php.section.xml" />
<xi:include href="pkg-config.section.xml" />
<xi:include href="python.section.xml" />
<xi:include href="qt.section.xml" />
<xi:include href="r.section.xml" />


@ -129,3 +129,8 @@ packaged libraries may still use the old spelling: maintainers are invited to
fix this when updating packages. Massive renaming is strongly discouraged as it
would be challenging to review, difficult to test, and will cause unnecessary
rebuild.
The build will automatically fail if two distinct versions of the same library
are added to `buildInputs` (which usually happens transitively because of
`propagatedBuildInputs`). Set `dontDetectOcamlConflicts` to true to disable this
behavior.
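For example, a package that knowingly mixes library versions could opt out of the check like this:

```nix
# disable the duplicate-version detection described above
dontDetectOcamlConflicts = true;
```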


@ -0,0 +1,9 @@
# pkg-config {#sec-pkg-config}
*pkg-config* is a unified interface for declaring and querying built C/C++ libraries.
Nixpkgs provides a couple of facilities for working with this tool.
- A [setup hook](#setup-hook-pkg-config) bundled with the `pkg-config` package, to bring a derivation's declared build inputs into the environment (see the sketch after this list).
- The [`validatePkgConfig` setup hook](https://nixos.org/manual/nixpkgs/stable/#validatepkgconfig), for packages that provide pkg-config modules.
- The `defaultPkgConfigPackages` package set: a set of aliases, named after the modules they provide. This is meant to be used by language-to-nix integrations. Hand-written packages should use the normal Nixpkgs attribute name instead.
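A minimal sketch of the setup hook in use; the package names are illustrative:

```nix
{ stdenv, pkg-config, openssl }:

stdenv.mkDerivation {
  pname = "libfoo-consumer";   # hypothetical package
  version = "0.1";
  src = ./.;

  # the setup hook ships with pkg-config itself
  nativeBuildInputs = [ pkg-config ];
  # the .pc files of declared build inputs become discoverable at build time
  buildInputs = [ openssl ];
}
```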


@ -58,7 +58,7 @@ with a nix-shell that has `numpy` and `toolz` in Python 3.9; then we will create
a re-usable environment in a single-file Python script; then we will create a
full Python environment for development with this same environment.
Philosphically, this should be familiar to users who are used to a `venv` style
Philosophically, this should be familiar to users who are used to a `venv` style
of development: individual projects create their own Python environments without
impacting the global environment or each other.
@ -436,7 +436,7 @@ arguments `buildInputs` and `propagatedBuildInputs` to specify dependencies. If
something is exclusively a build-time dependency, then the dependency should be
included in `buildInputs`, but if it is (also) a runtime dependency, then it
should be added to `propagatedBuildInputs`. Test dependencies are considered
build-time dependencies and passed to `checkInputs`.
build-time dependencies and passed to `nativeCheckInputs`.
The following example shows which arguments are given to `buildPythonPackage` in
order to build [`datashape`](https://github.com/blaze/datashape).
@ -453,7 +453,7 @@ buildPythonPackage rec {
hash = "sha256-FLLvdm1MllKrgTGC6Gb0k0deZeVYvtCCLji/B7uhong=";
};
checkInputs = [ pytest ];
nativeCheckInputs = [ pytest ];
propagatedBuildInputs = [ numpy multipledispatch python-dateutil ];
meta = with lib; {
@ -466,7 +466,7 @@ buildPythonPackage rec {
```
We can see several runtime dependencies, `numpy`, `multipledispatch`, and
`python-dateutil`. Furthermore, we have one `checkInputs`, i.e. `pytest`. `pytest` is a
`python-dateutil`. Furthermore, we have one `nativeCheckInputs`, i.e. `pytest`. `pytest` is a
test runner and is only used during the `checkPhase` and is therefore not added
to `propagatedBuildInputs`.
@ -569,7 +569,7 @@ Pytest is the most common test runner for python repositories. A trivial
test run would be:
```
checkInputs = [ pytest ];
nativeCheckInputs = [ pytest ];
checkPhase = ''
runHook preCheck
@ -585,7 +585,7 @@ sandbox, and will generally need many tests to be disabled.
To filter tests using pytest, one can do the following:
```
checkInputs = [ pytest ];
nativeCheckInputs = [ pytest ];
# avoid tests which need additional data or touch network
checkPhase = ''
runHook preCheck
@ -618,7 +618,7 @@ when a package may need many items disabled to run the test suite.
Using the example above, the analogous `pytestCheckHook` usage would be:
```
checkInputs = [ pytestCheckHook ];
nativeCheckInputs = [ pytestCheckHook ];
# requires additional data
pytestFlagsArray = [ "tests/" "--ignore=tests/integration" ];
@ -744,17 +744,17 @@ work in any of the formats supported by `buildPythonPackage` currently,
with the exception of `other` (see `format` in
[`buildPythonPackage` parameters](#buildpythonpackage-parameters) for more details).
### Using unittestCheckHook {#using-unittestcheckhook}
#### Using unittestCheckHook {#using-unittestcheckhook}
`unittestCheckHook` is a hook which will substitute the setuptools `test` command for a `checkPhase` which runs `python -m unittest discover`:
```
checkInputs = [ unittestCheckHook ];
nativeCheckInputs = [ unittestCheckHook ];
unittestFlags = [ "-s" "tests" "-v" ];
unittestFlagsArray = [ "-s" "tests" "-v" ];
```
##### Using sphinxHook {#using-sphinxhook}
#### Using sphinxHook {#using-sphinxhook}
The `sphinxHook` is a helpful tool to build documentation and manpages
using the popular Sphinx documentation generator.
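A hedged sketch of typical usage; the `docs` directory is an assumption about where the package keeps its Sphinx sources:

```nix
nativeBuildInputs = [ sphinxHook ];
# where conf.py lives, if it is not the source root
sphinxRoot = "docs";
```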
@ -1006,7 +1006,7 @@ buildPythonPackage rec {
rm testing/test_argcomplete.py
'';
checkInputs = [ hypothesis ];
nativeCheckInputs = [ hypothesis ];
nativeBuildInputs = [ setuptools-scm ];
propagatedBuildInputs = [ attrs py setuptools six pluggy ];
@ -1028,7 +1028,7 @@ The `buildPythonPackage` mainly does four things:
* In the `installCheck` phase, `${python.interpreter} setup.py test` is run.
By default tests are run because `doCheck = true`. Test dependencies, like
e.g. the test runner, should be added to `checkInputs`.
e.g. the test runner, should be added to `nativeCheckInputs`.
By default `meta.platforms` is set to the same value
as the interpreter unless overridden otherwise.
@ -1082,7 +1082,7 @@ because their behaviour is different:
* `buildInputs ? []`: Build and/or run-time dependencies that need to be
compiled for the host machine. Typically non-Python libraries which are being
linked.
* `checkInputs ? []`: Dependencies needed for running the `checkPhase`. These
* `nativeCheckInputs ? []`: Dependencies needed for running the `checkPhase`. These
are added to `nativeBuildInputs` when `doCheck = true`. Items listed in
`tests_require` go here.
* `propagatedBuildInputs ? []`: Aside from propagating dependencies,
@ -1416,7 +1416,7 @@ example of such a situation is when `py.test` is used.
buildPythonPackage {
# ...
# assumes the tests are located in tests
checkInputs = [ pytest ];
nativeCheckInputs = [ pytest ];
checkPhase = ''
runHook preCheck
@ -1768,7 +1768,7 @@ In a `setup.py` or `setup.cfg` it is common to declare dependencies:
* `setup_requires` corresponds to `nativeBuildInputs`
* `install_requires` corresponds to `propagatedBuildInputs`
* `tests_require` corresponds to `checkInputs`
* `tests_require` corresponds to `nativeCheckInputs`
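A hedged sketch of how those three fields typically map onto a `buildPythonPackage` call; the package and its dependencies are placeholders:

```nix
buildPythonPackage rec {
  pname = "example";   # hypothetical package
  version = "1.0";
  src = ./.;

  nativeBuildInputs = [ setuptools-scm ];   # setup_requires
  propagatedBuildInputs = [ requests ];     # install_requires
  nativeCheckInputs = [ pytest ];           # tests_require
}
```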
## Contributing {#contributing}


@ -1,5 +1,8 @@
{
"nix.conf(5)": "https://nixos.org/manual/nix/stable/#sec-conf-file",
"gnunet.conf(5)": "https://docs.gnunet.org/users/configuration.html",
"mpd(1)": "https://mpd.readthedocs.io/en/latest/mpd.1.html",
"mpd.conf(5)": "https://mpd.readthedocs.io/en/latest/mpd.conf.5.html",
"nix.conf(5)": "https://nixos.org/manual/nix/stable/command-ref/conf-file.html",
"journald.conf(5)": "https://www.freedesktop.org/software/systemd/man/journald.conf.html",
"logind.conf(5)": "https://www.freedesktop.org/software/systemd/man/logind.conf.html",


@ -150,7 +150,7 @@ depsBuildBuild = [ buildPackages.stdenv.cc ];
Add the following to your `mkDerivation` invocation.
```nix
doCheck = stdenv.hostPlatform == stdenv.buildPlatform;
doCheck = stdenv.buildPlatform.canExecute stdenv.hostPlatform;
```
#### Package using Meson needs to run binaries for the host platform during build. {#cross-meson-runs-host-code}


@ -253,7 +253,7 @@ The propagated equivalent of `depsTargetTarget`. This is prefixed for the same r
#### `NIX_DEBUG` {#var-stdenv-NIX_DEBUG}
A natural number indicating how much information to log. If set to 1 or higher, `stdenv` will print moderate debugging information during the build. In particular, the `gcc` and `ld` wrapper scripts will print out the complete command line passed to the wrapped tools. If set to 6 or higher, the `stdenv` setup script will be run with `set -x` tracing. If set to 7 or higher, the `gcc` and `ld` wrapper scripts will also be run with `set -x` tracing.
A number between 0 and 7 indicating how much information to log. If set to 1 or higher, `stdenv` will print moderate debugging information during the build. In particular, the `gcc` and `ld` wrapper scripts will print out the complete command line passed to the wrapped tools. If set to 6 or higher, the `stdenv` setup script will be run with `set -x` tracing. If set to 7 or higher, the `gcc` and `ld` wrapper scripts will also be run with `set -x` tracing.
### Attributes affecting build properties {#attributes-affecting-build-properties}
@ -626,7 +626,7 @@ Before and after running `make`, the hooks `preBuild` and `postBuild` are called
### The check phase {#ssec-check-phase}
The check phase checks whether the package was built correctly by running its test suite. The default `checkPhase` calls `make check`, but only if the `doCheck` variable is enabled.
The check phase checks whether the package was built correctly by running its test suite. The default `checkPhase` calls `make $checkTarget`, but only if the `doCheck` variable is enabled (see below).
#### Variables controlling the check phase {#variables-controlling-the-check-phase}
@ -646,7 +646,7 @@ See the [build phase](#var-stdenv-makeFlags) for details.
##### `checkTarget` {#var-stdenv-checkTarget}
The make target that runs the tests. Defaults to `check`.
The make target that runs the tests. Defaults to `check` if it exists, otherwise `test`; if neither is found, do nothing.
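For example, a project whose Makefile uses a non-standard target could set it explicitly (the target name here is hypothetical):

```nix
checkTarget = "test-suite";
```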
##### `checkFlags` / `checkFlagsArray` {#var-stdenv-checkFlags}
@ -654,7 +654,11 @@ A list of strings passed as additional flags to `make`. Like `makeFlags` and `ma
##### `checkInputs` {#var-stdenv-checkInputs}
A list of dependencies used by the phase. This gets included in `nativeBuildInputs` when `doCheck` is set.
A list of host dependencies used by the phase, usually libraries linked into executables built during tests. This gets included in `buildInputs` when `doCheck` is set.
##### `nativeCheckInputs` {#var-stdenv-nativeCheckInputs}
A list of native dependencies used by the phase, notably tools needed on `$PATH`. This gets included in `nativeBuildInputs` when `doCheck` is set.
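A minimal sketch contrasting the two kinds of test dependencies; the concrete packages are only examples:

```nix
doCheck = true;
# tools executed from $PATH while the tests run (build platform)
nativeCheckInputs = [ git ];
# libraries linked into test executables (host platform)
checkInputs = [ zlib ];
```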
##### `preCheck` {#var-stdenv-preCheck}
@ -821,7 +825,11 @@ A list of strings passed as additional flags to `make`. Like `makeFlags` and `ma
##### `installCheckInputs` {#var-stdenv-installCheckInputs}
A list of dependencies used by the phase. This gets included in `nativeBuildInputs` when `doInstallCheck` is set.
A list of host dependencies used by the phase, usually libraries linked into executables built during tests. This gets included in `buildInputs` when `doInstallCheck` is set.
##### `nativeInstallCheckInputs` {#var-stdenv-nativeInstallCheckInputs}
A list of native dependencies used by the phase, notably tools needed on `$PATH`. This gets included in `nativeBuildInputs` when `doInstallCheck` is set.
##### `preInstallCheck` {#var-stdenv-preInstallCheck}


@ -168,7 +168,7 @@ rec {
] { a.b.c = 0; }
=> { a = { b = { d = 1; }; }; x = { y = "xy"; }; }
Type: updateManyAttrsByPath :: [{ path :: [String], update :: (Any -> Any) }] -> AttrSet -> AttrSet
Type: updateManyAttrsByPath :: [{ path :: [String]; update :: (Any -> Any); }] -> AttrSet -> AttrSet
*/
updateManyAttrsByPath = let
# When recursing into attributes, instead of updating the `path` of each
@ -414,7 +414,7 @@ rec {
=> { name = "some"; value = 6; }
Type:
nameValuePair :: String -> Any -> { name :: String, value :: Any }
nameValuePair :: String -> Any -> { name :: String; value :: Any; }
*/
nameValuePair =
# Attribute name
@ -449,7 +449,7 @@ rec {
=> { foo_x = "bar-a"; foo_y = "bar-b"; }
Type:
mapAttrs' :: (String -> Any -> { name = String; value = Any }) -> AttrSet -> AttrSet
mapAttrs' :: (String -> Any -> { name :: String; value :: Any; }) -> AttrSet -> AttrSet
*/
mapAttrs' =
# A function, given an attribute's name and value, returns a new `nameValuePair`.
@ -480,8 +480,13 @@ rec {
/* Like `mapAttrs`, except that it recursively applies itself to
attribute sets. Also, the first argument of the argument
function is a *list* of the names of the containing attributes.
the *leaf* attributes of a potentially-nested attribute set:
the second argument of the function will never be an attrset.
Also, the first argument of the argument function is a *list*
of the attribute names that form the path to the leaf attribute.
For a function that gives you control over what counts as a leaf,
see `mapAttrsRecursiveCond`.
Example:
mapAttrsRecursive (path: value: concatStringsSep "-" (path ++ [value]))
@ -644,7 +649,7 @@ rec {
Example:
zipAttrsWith (name: values: values) [{a = "x";} {a = "y"; b = "z";}]
=> { a = ["x" "y"]; b = ["z"] }
=> { a = ["x" "y"]; b = ["z"]; }
Type:
zipAttrsWith :: (String -> [ Any ] -> Any) -> [ AttrSet ] -> AttrSet
@ -659,7 +664,7 @@ rec {
Example:
zipAttrs [{a = "x";} {a = "y"; b = "z";}]
=> { a = ["x" "y"]; b = ["z"] }
=> { a = ["x" "y"]; b = ["z"]; }
Type:
zipAttrs :: [ AttrSet ] -> AttrSet


@ -252,7 +252,8 @@ rec {
outputsList = map makeOutput outputs;
drv' = (lib.head outputsList).value;
in lib.deepSeq drv' drv';
in if drv == null then null else
lib.deepSeq drv' drv';
/* Make a set of packages with a common scope. All packages called
with the provided `callPackage` will be evaluated with the same


@ -94,7 +94,7 @@ let
subtractLists mutuallyExclusive groupBy groupBy';
inherit (self.strings) concatStrings concatMapStrings concatImapStrings
intersperse concatStringsSep concatMapStringsSep
concatImapStringsSep makeSearchPath makeSearchPathOutput
concatImapStringsSep concatLines makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath optionalString
hasInfix hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs


@ -558,6 +558,12 @@ in mkLicense lset) ({
redistributable = false;
};
fair = {
fullName = "Fair License";
spdxId = "Fair";
free = true;
};
issl = {
fullName = "Intel Simplified Software License";
url = "https://software.intel.com/en-us/license/intel-simplified-software-license";
@ -709,7 +715,12 @@ in mkLicense lset) ({
ncsa = {
spdxId = "NCSA";
fullName = "University of Illinois/NCSA Open Source License";
fullName = "University of Illinois/NCSA Open Source License";
};
nlpl = {
spdxId = "NLPL";
fullName = "No Limit Public License";
};
nposl3 = {


@ -306,7 +306,7 @@ rec {
/* Splits the elements of a list in two lists, `right` and
`wrong`, depending on the evaluation of a predicate.
Type: (a -> bool) -> [a] -> { right :: [a], wrong :: [a] }
Type: (a -> bool) -> [a] -> { right :: [a]; wrong :: [a]; }
Example:
partition (x: x > 2) [ 5 1 2 3 4 ]
@ -374,7 +374,7 @@ rec {
/* Merges two lists of the same size together. If the sizes aren't the same
the merging stops at the shortest.
Type: zipLists :: [a] -> [b] -> [{ fst :: a, snd :: b}]
Type: zipLists :: [a] -> [b] -> [{ fst :: a; snd :: b; }]
Example:
zipLists [ 1 2 ] [ "a" "b" ]


@ -76,7 +76,9 @@ rec {
1. (legacy) a system string.
2. (modern) a pattern for the platform `parsed` field.
2. (modern) a pattern for the entire platform structure (see `lib.systems.inspect.platformPatterns`).
3. (modern) a pattern for the platform `parsed` field (see `lib.systems.inspect.patterns`).
We can inject these into a pattern for the whole of a structured platform,
and then match that.
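A hedged sketch mixing the three accepted forms in one `meta.platforms` list, purely to illustrate their shapes:

```nix
meta.platforms = [
  "x86_64-linux"                                  # 1. legacy system string
  lib.systems.inspect.platformPatterns.isStatic   # 2. whole-platform pattern
  lib.systems.inspect.patterns.isDarwin           # 3. pattern for the parsed field
];
```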
@ -85,6 +87,8 @@ rec {
pattern =
if builtins.isString elem
then { system = elem; }
else if elem?parsed
then elem
else { parsed = elem; };
in lib.matchAttrs pattern platform;


@ -114,7 +114,7 @@ rec {
You can omit the default path if the name of the option is also attribute path in nixpkgs.
Type: mkPackageOption :: pkgs -> string -> { default :: [string], example :: null | string | [string] } -> option
Type: mkPackageOption :: pkgs -> string -> { default :: [string]; example :: null | string | [string]; } -> option
Example:
mkPackageOption pkgs "hello" { }
@ -201,7 +201,7 @@ rec {
/* Extracts values of all "value" keys of the given list.
Type: getValues :: [ { value :: a } ] -> [a]
Type: getValues :: [ { value :: a; } ] -> [a]
Example:
getValues [ { value = 1; } { value = 2; } ] // => [ 1 2 ]
@ -211,7 +211,7 @@ rec {
/* Extracts values of all "file" keys of the given list
Type: getFiles :: [ { file :: a } ] -> [a]
Type: getFiles :: [ { file :: a; } ] -> [a]
Example:
getFiles [ { file = "file1"; } { file = "file2"; } ] // => [ "file1" "file2" ]


@ -128,6 +128,17 @@ rec {
# List of input strings
list: concatStringsSep sep (lib.imap1 f list);
/* Concatenate a list of strings, adding a newline at the end of each one.
Defined as `concatMapStrings (s: s + "\n")`.
Type: concatLines :: [string] -> string
Example:
concatLines [ "foo" "bar" ]
=> "foo\nbar\n"
*/
concatLines = concatMapStrings (s: s + "\n");
/* Construct a Unix-style, colon-separated search path consisting of
the given `subDir` appended to each of the given paths.


@ -7,6 +7,7 @@ let abis_ = abis; in
let abis = lib.mapAttrs (_: abi: builtins.removeAttrs abi [ "assertions" ]) abis_; in
rec {
# these patterns are to be matched against {host,build,target}Platform.parsed
patterns = rec {
isi686 = { cpu = cpuTypes.i686; };
isx86_32 = { cpu = { family = "x86"; bits = 32; }; };
@ -81,8 +82,13 @@ rec {
isMusl = with abis; map (a: { abi = a; }) [ musl musleabi musleabihf muslabin32 muslabi64 ];
isUClibc = with abis; map (a: { abi = a; }) [ uclibc uclibceabi uclibceabihf ];
isEfi = map (family: { cpu.family = family; })
[ "x86" "arm" "aarch64" "riscv" ];
isEfi = [
{ cpu = { family = "arm"; version = "6"; }; }
{ cpu = { family = "arm"; version = "7"; }; }
{ cpu = { family = "arm"; version = "8"; }; }
{ cpu = { family = "riscv"; }; }
{ cpu = { family = "x86"; }; }
];
};
matchAnyAttrs = patterns:
@ -90,4 +96,13 @@ rec {
else matchAttrs patterns;
predicates = mapAttrs (_: matchAnyAttrs) patterns;
# these patterns are to be matched against the entire
# {host,build,target}Platform structure; they include a `parsed={}` marker so
# that `lib.meta.availableOn` can distinguish them from the patterns which
# apply only to the `parsed` field.
platformPatterns = mapAttrs (_: p: { parsed = {}; } // p) {
isStatic = { isStatic = true; };
};
}


@ -7,7 +7,8 @@ in {
type = types.str;
};
email = lib.mkOption {
type = types.str;
type = types.nullOr types.str;
default = null;
};
matrix = lib.mkOption {
type = types.nullOr types.str;


@ -1,5 +1,6 @@
# to run these tests (and the others)
# nix-build nixpkgs/lib/tests/release.nix
# These tests should stay in sync with the comment in maintainers/maintainers-list.nix
{ # The pkgs used for dependencies for the testing itself
pkgs ? import ../.. {}
, lib ? pkgs.lib
@ -20,7 +21,7 @@ let
];
}).config;
checkGithubId = lib.optional (checkedAttrs.github != null && checkedAttrs.githubId == null) ''
checks = lib.optional (checkedAttrs.github != null && checkedAttrs.githubId == null) ''
echo ${lib.escapeShellArg (lib.showOption prefix)}': If `github` is specified, `githubId` must be too.'
# Calling this too often would hit non-authenticated API limits, but this
# shouldn't happen since such errors will get fixed rather quickly
@ -28,8 +29,12 @@ let
id=$(jq -r '.id' <<< "$info")
echo "The GitHub ID for GitHub user ${checkedAttrs.github} is $id:"
echo -e " githubId = $id;\n"
'' ++ lib.optional (checkedAttrs.email == null && checkedAttrs.github == null && checkedAttrs.matrix == null) ''
echo ${lib.escapeShellArg (lib.showOption prefix)}': At least one of `email`, `github` or `matrix` must be specified, so that users know how to reach you.'
'' ++ lib.optional (checkedAttrs.email != null && lib.hasSuffix "noreply.github.com" checkedAttrs.email) ''
echo ${lib.escapeShellArg (lib.showOption prefix)}': If an email address is given, it should allow people to reach you. If you do not want that, you can just provide `github` or `matrix` instead.'
'';
in lib.deepSeq checkedAttrs checkGithubId;
in lib.deepSeq checkedAttrs checks;
missingGithubIds = lib.concatLists (lib.mapAttrsToList checkMaintainer lib.maintainers);


@ -153,6 +153,11 @@ runTests {
expected = "a,b,c";
};
testConcatLines = {
expr = concatLines ["a" "b" "c"];
expected = "a\nb\nc\n";
};
testSplitStringsSimple = {
expr = strings.splitString "." "a.b.c.d";
expected = [ "a" "b" "c" "d" ];

File diff suppressed because it is too large.


@ -17,6 +17,7 @@ let
if (builtins.tryEval attrs.drvPath).success
then { inherit (attrs) name drvPath; }
else { failed = true; }
else if attrs == null then {}
else { recurseForDerivations = true; } //
mapAttrs (n: v: let path' = path ++ [n]; in trace path' (recurse path' v)) attrs
else { };


@ -65,6 +65,7 @@ luaevent,,,,,,
luaexpat,,,,1.4.1-1,,arobyn flosse
luaffi,,,http://luarocks.org/dev,,,
luafilesystem,,,,1.8.0-1,,flosse
lualdap,,,,,,aanderse
lualogging,,,,,,
luaossl,,,,,5.1,
luaposix,,,,34.1.1-1,,vyp lblasc



@ -9,15 +9,14 @@ stdenv.mkDerivation {
perl GetoptLongDescriptive CPANPLUS Readonly LogLog4perl
];
phases = [ "installPhase" ];
dontUnpack = true;
installPhase =
''
mkdir -p $out/bin
cp ${./nix-generate-from-cpan.pl} $out/bin/nix-generate-from-cpan
patchShebangs $out/bin/nix-generate-from-cpan
wrapProgram $out/bin/nix-generate-from-cpan --set PERL5LIB $PERL5LIB
'';
installPhase = ''
mkdir -p $out/bin
cp ${./nix-generate-from-cpan.pl} $out/bin/nix-generate-from-cpan
patchShebangs $out/bin/nix-generate-from-cpan
wrapProgram $out/bin/nix-generate-from-cpan --set PERL5LIB $PERL5LIB
'';
meta = {
maintainers = with lib.maintainers; [ eelco ];


@ -168,6 +168,15 @@ with lib.maintainers; {
shortName = "Cosmopolitan";
};
deepin = {
members = [
rewine
];
scope = "Maintain deepin desktop environment and related packages.";
shortName = "DDE";
enableFeatureFreezePing = true;
};
deshaw = {
# Verify additions to this team with at least one already existing member of the team.
members = [
@ -398,6 +407,19 @@ with lib.maintainers; {
shortName = "Linux Kernel";
};
llvm = {
members = [
ericson2314
sternenseemann
lovek323
dtzWill
primeos
];
scope = "Maintain LLVM package sets and related packages";
shortName = "LLVM";
enableFeatureFreezePing = true;
};
lumiguide = {
# Verify additions by approval of an already existing member of the team.
members = [
@ -676,9 +698,11 @@ with lib.maintainers; {
rust = {
members = [
andir
figsoda
lnl7
mic92
tjni
winter
zowoq
];
scope = "Maintain the Rust compiler toolchain and nixpkgs integration.";


@ -170,6 +170,6 @@ Packages
```
The latter option definition changes the default PostgreSQL package
used by NixOS's PostgreSQL service to 10.x. For more information on
used by NixOS's PostgreSQL service to 14.x. For more information on
packages, including how to add new ones, see
[](#sec-custom-packages).


@ -68,12 +68,15 @@ let
sources = lib.sourceFilesBySuffices ./. [".xml"];
modulesDoc = builtins.toFile "modules.xml" ''
<section xmlns:xi="http://www.w3.org/2001/XInclude" id="modules">
${(lib.concatMapStrings (path: ''
<xi:include href="${path}" />
'') (lib.catAttrs "value" config.meta.doc))}
</section>
modulesDoc = runCommand "modules.xml" {
nativeBuildInputs = [ pkgs.nixos-render-docs ];
} ''
nixos-render-docs manual docbook \
--manpage-urls ${pkgs.path + "/doc/manpage-urls.json"} \
"$out" \
--section \
--section-id modules \
--chapters ${lib.concatMapStrings (p: "${p.value} ") config.meta.doc}
'';
generatedSources = runCommand "generated-docbook" {} ''
@ -176,40 +179,10 @@ let
lintrng $out/man-pages-combined.xml
'';
olinkDB = runCommand "manual-olinkdb"
{ inherit sources;
nativeBuildInputs = [ buildPackages.libxml2.bin buildPackages.libxslt.bin ];
}
''
xsltproc \
${manualXsltprocOptions} \
--stringparam collect.xref.targets only \
--stringparam targets.filename "$out/manual.db" \
--nonet \
${docbook_xsl_ns}/xml/xsl/docbook/xhtml/chunktoc.xsl \
${manual-combined}/manual-combined.xml
cat > "$out/olinkdb.xml" <<EOF
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE targetset SYSTEM
"file://${docbook_xsl_ns}/xml/xsl/docbook/common/targetdatabase.dtd" [
<!ENTITY manualtargets SYSTEM "file://$out/manual.db">
]>
<targetset>
<targetsetinfo>
Allows for cross-referencing olinks between the manpages
and manual.
</targetsetinfo>
<document targetdoc="manual">&manualtargets;</document>
</targetset>
EOF
'';
in rec {
inherit generatedSources;
inherit (optionsDoc) optionsJSON optionsNix optionsDocBook;
inherit (optionsDoc) optionsJSON optionsNix optionsDocBook optionsUsedDocbook;
# Generate the NixOS manual.
manualHTML = runCommand "nixos-manual-html"
@ -224,7 +197,6 @@ in rec {
mkdir -p $dst
xsltproc \
${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--stringparam id.warnings "1" \
--nonet --output $dst/ \
${docbook_xsl_ns}/xml/xsl/docbook/xhtml/chunktoc.xsl \
@ -261,7 +233,6 @@ in rec {
xsltproc \
${manualXsltprocOptions} \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \
--nonet --xinclude --output $dst/epub/ \
${docbook_xsl_ns}/xml/xsl/docbook/epub/docbook.xsl \
${manual-combined}/manual-combined.xml
@ -295,7 +266,6 @@ in rec {
--param man.output.base.dir "'$out/share/man/'" \
--param man.endnotes.are.numbered 0 \
--param man.break.after.slash 1 \
--stringparam target.database.document "${olinkDB}/olinkdb.xml" \
${docbook_xsl_ns}/xml/xsl/docbook/manpages/docbook.xsl \
${manual-combined}/man-pages-combined.xml
'';


@ -23,7 +23,7 @@ file.
meta = {
maintainers = with lib.maintainers; [ ericsagnes ];
doc = ./default.xml;
doc = ./default.md;
buildDocsInSandbox = true;
};
}
@ -31,7 +31,9 @@ file.
- `maintainers` contains a list of the module maintainers.
- `doc` points to a valid DocBook file containing the module
- `doc` points to a valid [Nixpkgs-flavored CommonMark](
https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup
) file containing the module
documentation. Its contents is automatically added to
[](#ch-configuration). Changes to a module documentation have to
be checked to not break building the NixOS manual:
@ -40,26 +42,6 @@ file.
$ nix-build nixos/release.nix -A manual.x86_64-linux
```
This file should *not* usually be written by hand. Instead it is preferred
to write documentation using CommonMark and converting it to CommonMark
using pandoc. The simplest documentation can be converted using just
```ShellSession
$ pandoc doc.md -t docbook --top-level-division=chapter -f markdown+smart > doc.xml
```
More elaborate documentation may wish to add one or more of the pandoc
filters used to build the remainder of the manual, for example the GNOME
desktop uses
```ShellSession
$ pandoc gnome.md -t docbook --top-level-division=chapter \
--extract-media=media -f markdown+smart \
--lua-filter ../../../../../doc/build-aux/pandoc-filters/myst-reader/roles.lua \
--lua-filter ../../../../../doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
> gnome.xml
```
- `buildDocsInSandbox` indicates whether the option documentation for the
module can be built in a derivation sandbox. This option is currently only
honored for modules shipped by nixpkgs. User modules and modules taken from


@ -78,7 +78,7 @@ For example:
::: {#ex-options-declarations-util-mkEnableOption-magic .example}
```nix
lib.mkEnableOption "magic"
lib.mkEnableOption (lib.mdDoc "magic")
# is like
lib.mkOption {
type = lib.types.bool;
@ -113,7 +113,7 @@ Examples:
::: {#ex-options-declarations-util-mkPackageOption-hello .example}
```nix
lib.mkPackageOption pkgs "hello" { }
lib.mkPackageOptionMD pkgs "hello" { }
# is like
lib.mkOption {
type = lib.types.package;
@ -125,7 +125,7 @@ lib.mkOption {
::: {#ex-options-declarations-util-mkPackageOption-ghc .example}
```nix
lib.mkPackageOption pkgs "GHC" {
lib.mkPackageOptionMD pkgs "GHC" {
default = [ "ghc" ];
example = "pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])";
}


@ -24,6 +24,39 @@ back into the test driver command line upon its completion. This allows
you to inspect the state of the VMs after the test (e.g. to debug the
test script).
## Shell access in interactive mode {#sec-nixos-test-shell-access}
The function `<yourmachine>.shell_interact()` grants access to a shell running
inside a virtual machine. To use it, replace `<yourmachine>` with the name of a
virtual machine defined in the test, for example: `machine.shell_interact()`.
Keep in mind that this shell may not display everything correctly as it is
running within an interactive Python REPL, and logging output from the virtual
machine may overwrite input and output from the guest shell:
```py
>>> machine.shell_interact()
machine: Terminal is ready (there is no initial prompt):
$ hostname
machine
```
As an alternative, you can proxy the guest shell to a local TCP server by first
starting a TCP server in a terminal using the command:
```ShellSession
$ socat 'READLINE,PROMPT=$ ' tcp-listen:4444,reuseaddr
```
In the terminal where the test driver is running, connect to this server by
using:
```py
>>> machine.shell_interact("tcp:127.0.0.1:4444")
```
Once the connection is established, you can enter commands in the socat terminal
where socat is running.
## Reuse VM state {#sec-nixos-test-reuse-vm-state}
You can re-use the VM states coming from a previous run by setting the


@ -221,7 +221,7 @@ services.postgresql.package = pkgs.postgresql_14;
</programlisting>
<para>
The latter option definition changes the default PostgreSQL
package used by NixOS's PostgreSQL service to 10.x. For more
package used by NixOS's PostgreSQL service to 14.x. For more
information on packages, including how to add new ones, see
<xref linkend="sec-custom-packages" />.
</para>


@ -28,7 +28,7 @@
meta = {
maintainers = with lib.maintainers; [ ericsagnes ];
doc = ./default.xml;
doc = ./default.md;
buildDocsInSandbox = true;
};
}
@ -42,35 +42,16 @@
</listitem>
<listitem>
<para>
<literal>doc</literal> points to a valid DocBook file containing
the module documentation. Its contents is automatically added to
<literal>doc</literal> points to a valid
<link xlink:href="https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup">Nixpkgs-flavored
CommonMark</link> file containing the module documentation. Its
contents is automatically added to
<xref linkend="ch-configuration" />. Changes to a module
documentation have to be checked to not break building the NixOS
manual:
</para>
<programlisting>
$ nix-build nixos/release.nix -A manual.x86_64-linux
</programlisting>
<para>
This file should <emphasis>not</emphasis> usually be written by
hand. Instead it is preferred to write documentation using
CommonMark and converting it to CommonMark using pandoc. The
simplest documentation can be converted using just
</para>
<programlisting>
$ pandoc doc.md -t docbook --top-level-division=chapter -f markdown+smart &gt; doc.xml
</programlisting>
<para>
More elaborate documentation may wish to add one or more of the
pandoc filters used to build the remainder of the manual, for
example the GNOME desktop uses
</para>
<programlisting>
$ pandoc gnome.md -t docbook --top-level-division=chapter \
--extract-media=media -f markdown+smart \
--lua-filter ../../../../../doc/build-aux/pandoc-filters/myst-reader/roles.lua \
--lua-filter ../../../../../doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
&gt; gnome.xml
</programlisting>
</listitem>
<listitem>


@ -128,7 +128,7 @@ options = {
</para>
<anchor xml:id="ex-options-declarations-util-mkEnableOption-magic" />
<programlisting language="nix">
lib.mkEnableOption &quot;magic&quot;
lib.mkEnableOption (lib.mdDoc &quot;magic&quot;)
# is like
lib.mkOption {
type = lib.types.bool;
@ -188,7 +188,7 @@ mkPackageOption pkgs &quot;name&quot; { default = [ &quot;path&quot; &quot;in&qu
</para>
<anchor xml:id="ex-options-declarations-util-mkPackageOption-hello" />
<programlisting language="nix">
lib.mkPackageOption pkgs &quot;hello&quot; { }
lib.mkPackageOptionMD pkgs &quot;hello&quot; { }
# is like
lib.mkOption {
type = lib.types.package;
@ -199,7 +199,7 @@ lib.mkOption {
</programlisting>
<anchor xml:id="ex-options-declarations-util-mkPackageOption-ghc" />
<programlisting language="nix">
lib.mkPackageOption pkgs &quot;GHC&quot; {
lib.mkPackageOptionMD pkgs &quot;GHC&quot; {
default = [ &quot;ghc&quot; ];
example = &quot;pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])&quot;;
}


@ -25,6 +25,46 @@ $ ./result/bin/nixos-test-driver
completion. This allows you to inspect the state of the VMs after
the test (e.g. to debug the test script).
</para>
<section xml:id="sec-nixos-test-shell-access">
<title>Shell access in interactive mode</title>
<para>
The function
<literal>&lt;yourmachine&gt;.shell_interact()</literal> grants
access to a shell running inside a virtual machine. To use it,
replace <literal>&lt;yourmachine&gt;</literal> with the name of a
virtual machine defined in the test, for example:
<literal>machine.shell_interact()</literal>. Keep in mind that
this shell may not display everything correctly as it is running
within an interactive Python REPL, and logging output from the
virtual machine may overwrite input and output from the guest
shell:
</para>
<programlisting language="python">
&gt;&gt;&gt; machine.shell_interact()
machine: Terminal is ready (there is no initial prompt):
$ hostname
machine
</programlisting>
<para>
As an alternative, you can proxy the guest shell to a local TCP
server by first starting a TCP server in a terminal using the
command:
</para>
<programlisting>
$ socat 'READLINE,PROMPT=$ ' tcp-listen:4444,reuseaddr
</programlisting>
<para>
In the terminal where the test driver is running, connect to this
server by using:
</para>
<programlisting language="python">
&gt;&gt;&gt; machine.shell_interact(&quot;tcp:127.0.0.1:4444&quot;)
</programlisting>
<para>
Once the connection is established, you can enter commands in the
socat terminal where socat is running.
</para>
</section>
<section xml:id="sec-nixos-test-reuse-vm-state">
<title>Reuse VM state</title>
<para>


@ -61,6 +61,13 @@
<link linkend="opt-services.printing.cups-pdf.enable">services.printing.cups-pdf</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://www.magicbug.co.uk/cloudlog/">Cloudlog</link>,
a web-based Amateur Radio logging application. Available as
<link linkend="opt-services.cloudlog.enable">services.cloudlog</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/junegunn/fzf">fzf</link>,
@ -83,6 +90,14 @@
<link xlink:href="options.html#opt-networking.stevenblack.enable">networking.stevenblack</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/slurdge/goeland">goeland</link>,
an alternative to rss2email written in golang with many
filters. Available as
<link linkend="opt-services.goeland.enable">services.goeland</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/ellie/atuin">atuin</link>,
@ -98,6 +113,14 @@
<link linkend="opt-services.mmsd.enable">services.mmsd</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://dm3mat.darc.de/qdmr/">QDMR</link>, a
gui application and command line tool for programming cheap
DMR radios. Available as
<link linkend="opt-programs.qdmr.enable">programs.qdmr</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://v2raya.org">v2rayA</link>, a Linux
@ -122,6 +145,13 @@
<link xlink:href="options.html#opt-services.photoprism.enable">services.photoprism</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/languitar/autosuspend">autosuspend</link>,
a python daemon that suspends a system if certain conditions
are met, or not met.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-23.05-incompatibilities">
@ -138,6 +168,30 @@
instead.
</para>
</listitem>
<listitem>
<para>
<literal>checkInputs</literal> have been renamed to
<literal>nativeCheckInputs</literal>, because they behave the
same as <literal>nativeBuildInputs</literal> when
<literal>doCheck</literal> is set.
<literal>checkInputs</literal> now denote a new type of
dependencies, added to <literal>buildInputs</literal> when
<literal>doCheck</literal> is set. As a rule of thumb,
<literal>nativeCheckInputs</literal> are tools on
<literal>$PATH</literal> used during the tests, and
<literal>checkInputs</literal> are libraries which are linked
to executables built as part of the tests. Similarly,
<literal>installCheckInputs</literal> are renamed to
<literal>nativeInstallCheckInputs</literal>, corresponding to
<literal>nativeBuildInputs</literal>, and
<literal>installCheckInputs</literal> are a new type of
dependencies added to <literal>buildInputs</literal> when
<literal>doInstallCheck</literal> is set. (Note that this
change will not cause breakage to derivations with
<literal>strictDeps</literal> unset, which are most packages
except python, rust and go packages).
</para>
</listitem>
<listitem>
<para>
<literal>borgbackup</literal> module now has an option for
@ -163,6 +217,18 @@
to upgrade existing repositories.
</para>
</listitem>
<listitem>
<para>
The <literal>services.kubo.settings</literal> option is now no
longer stateful. If you changed any of the options in
<literal>services.kubo.settings</literal> in the past and then
removed them from your NixOS configuration again, those
changes are still in your Kubo configuration file but will now
be reset to the default. If you're unsure, you may want to
make a backup of your configuration file (probably
/var/lib/ipfs/config) and compare after the update.
</para>
</listitem>
<listitem>
<para>
The EC2 image module no longer fetches instance metadata in
@ -237,6 +303,33 @@
or configure your firewall.
</para>
</listitem>
<listitem>
<para>
Kime has been updated from 2.5.6 to 3.0.2 and the
<literal>i18n.inputMethod.kime.config</literal> option has
been removed. Users should use
<literal>daemonModules</literal>,
<literal>iconColor</literal>, and
<literal>extraConfig</literal> options under
<literal>i18n.inputMethod.kime</literal> instead.
</para>
</listitem>
<listitem>
<para>
<literal>tut</literal> has been updated from 1.0.34 to 2.0.0,
and now uses the TOML format for the configuration file
instead of INI. Additional information can be found
<link xlink:href="https://github.com/RasmusLindroth/tut/releases/tag/2.0.0">here</link>.
</para>
</listitem>
<listitem>
<para>
The <literal>wordpress</literal> derivation no longer contains
any builtin plugins or themes. If you need them you have to
add them back to prevent your site from breaking. You can find
them in <literal>wordpressPackages.{plugins,themes}</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>llvmPackages_rocm.llvm</literal> will not contain
@ -248,16 +341,6 @@
<literal>llvmPackages_rocm.clang-unwrapped</literal>.
</para>
</listitem>
<listitem>
<para>
The Nginx module now validates the syntax of config files at
build time. For more complex configurations (using
<literal>include</literal> with out-of-store files notably)
you may need to disable this check by setting
<link linkend="opt-services.nginx.validateConfig">services.nginx.validateConfig</link>
to <literal>false</literal>.
</para>
</listitem>
<listitem>
<para>
The EC2 image module previously detected and automatically
@ -270,6 +353,16 @@
stage-2.
</para>
</listitem>
<listitem>
<para>
<literal>teleport</literal> has been upgraded to major version
11. Please see upstream
<link xlink:href="https://goteleport.com/docs/setup/operations/upgrading/">upgrade
instructions</link> and
<link xlink:href="https://goteleport.com/docs/changelog/#1100">release
notes</link>.
</para>
</listitem>
<listitem>
<para>
The EC2 image module previously detected and activated
@ -278,6 +371,12 @@
relying on this should provide their own implementation.
</para>
</listitem>
<listitem>
<para>
Calling <literal>makeSetupHook</literal> without passing a
<literal>name</literal> argument is deprecated.
</para>
</listitem>
<listitem>
<para>
Qt 5.12 and 5.14 have been removed, as the corresponding
@ -287,6 +386,17 @@
updated manually.
</para>
</listitem>
<listitem>
<para>
The
<link linkend="opt-services.wordpress.sites._name_.plugins">services.wordpress.sites.&lt;name&gt;.plugins</link>
and
<link linkend="opt-services.wordpress.sites._name_.themes">services.wordpress.sites.&lt;name&gt;.themes</link>
options have been converted from sets to attribute sets to
allow for consumers to specify explicit install paths via
attribute name.
</para>
</listitem>
<listitem>
<para>
In <literal>mastodon</literal> it is now necessary to specify
@ -298,6 +408,15 @@
been changed to <literal>null</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>--target-host</literal> and
<literal>--build-host</literal> options of
<literal>nixos-rebuild</literal> no longer treat the
<literal>localhost</literal> value specially; to build
on/deploy to the local machine, omit the relevant flag.
</para>
</listitem>
<listitem>
<para>
The <literal>nix.readOnlyStore</literal> option has been
@ -313,6 +432,24 @@
<literal>freetype</literal> and others.
</para>
</listitem>
<listitem>
<para>
.NET 5.0 was removed because it is end-of-life; use a newer,
supported .NET version. See
https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core
</para>
</listitem>
<listitem>
<para>
The iputils package, which is installed by default, no longer
provides the <literal>ninfod</literal>,
<literal>rarpd</literal> and <literal>rdisc</literal> tools.
See
<link xlink:href="https://github.com/iputils/iputils/releases/tag/20221126">upstreams
release notes</link> for more details and available
replacements.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-23.05-notable-changes">
@ -374,6 +511,32 @@
sudo and sources the environment variables.
</para>
</listitem>
<listitem>
<para>
DocBook option documentation, which has been deprecated since
22.11, will now cause a warning when documentation is built.
Out-of-tree modules should migrate to using CommonMark
documentation as outlined in
<xref linkend="sec-option-declarations" /> to silence this
warning.
</para>
<para>
DocBook option documentation support will be removed in the
next release and CommonMark will become the default. DocBook
option documentation that has not been migrated by then
will no longer render properly or will cause errors.
</para>
</listitem>
<listitem>
<para>
NixOS now defaults to using nsncd (a non-caching
reimplementation in Rust) as NSS lookup dispatcher, instead of
the buggy and deprecated glibc-provided nscd. If you need to
switch back, set
<literal>services.nscd.enableNsncd = false</literal>, but
please open an issue in nixpkgs so the problem can be fixed.
</para>
</listitem>
<listitem>
<para>
The <literal>dnsmasq</literal> service now takes configuration
@ -422,6 +585,17 @@
<literal>nixos/modules/profiles/minimal.nix</literal> profile.
</para>
</listitem>
<listitem>
<para>
The <literal>ghcWithPackages</literal> and
<literal>ghcWithHoogle</literal> wrappers will now also
symlink GHC's and all included libraries' documentation to
<literal>$out/share/doc</literal> for convenience. If
undesired, the old behavior can be restored by overriding the
builders with
<literal>{ installDocumentation = false; }</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>mastodon</literal> now supports connection to a
@ -455,6 +629,13 @@
security.
</para>
</listitem>
<listitem>
<para>
The <literal>services.dhcpcd</literal> service no longer solicits
or accepts IPv6 Router Advertisements on interfaces that use
static IPv6 addresses.
</para>
</listitem>
<listitem>
<para>
The module <literal>services.headscale</literal> was
@ -546,6 +727,36 @@
<link xlink:href="https://github.com/google/ngx_brotli/blob/master/README.md">here</link>.
</para>
</listitem>
<listitem>
<para>
Updated recommended settings in
<literal>services.nginx.recommendedGzipSettings</literal>:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
Enables gzip compression for only certain proxied
requests.
</para>
</listitem>
<listitem>
<para>
Allow checking and loading of precompressed files.
</para>
</listitem>
<listitem>
<para>
Updated gzip mime-types.
</para>
</listitem>
<listitem>
<para>
Increased the minimum length of a response that will be
gzipped.
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>
<link xlink:href="https://garagehq.deuxfleurs.fr/">Garage</link>
@ -568,6 +779,13 @@
<literal>hipcc</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>services.nginx.recommendedProxySettings</literal> now
removes the <literal>Connection</literal> header, preventing
clients from closing backend connections.
</para>
</listitem>
<listitem>
<para>
Resilio sync secret keys can now be provided using a secrets

View file

@ -583,15 +583,15 @@
<listitem>
<para>
Specifies the NixOS target host. By setting this to something other than
<replaceable>localhost</replaceable>, the system activation will happen
an empty string, the system activation will happen
on the remote host instead of the local machine. The remote host needs to
be accessible over ssh, and for the commands <option>switch</option>,
<option>boot</option> and <option>test</option> you need root access.
</para>
<para>
If <option>--build-host</option> is not explicitly specified, building
will take place locally.
If <option>--build-host</option> is not explicitly specified or empty,
building will take place locally.
</para>
<para>

View file

@ -50,21 +50,3 @@ for mf in ${MD_FILES[*]}; do
done
popd
# now handle module chapters. we'll need extra checks to ensure that we don't process
# markdown files we're not interested in, so we'll require an x.nix file for ever x.md
# that we'll convert to xml.
pushd "$DIR/../../modules"
mapfile -t MD_FILES < <(find . -type f -regex '.*\.md$')
for mf in ${MD_FILES[*]}; do
[ -f "${mf%.md}.nix" ] || continue
pandoc --top-level-division=chapter "$mf" "${pandoc_flags[@]}" -o "${mf%.md}.xml"
sed -i -e '1 i <!-- Do not edit this file directly, edit its companion .md instead\
and regenerate this file using nixos/doc/manual/md-to-db.sh -->' \
"${mf%.md}.xml"
done
popd

View file

@ -24,34 +24,46 @@ In addition to numerous new and upgraded packages, this release has the followin
- [cups-pdf-to-pdf](https://github.com/alexivkin/CUPS-PDF-to-PDF), a pdf-generating cups backend based on [cups-pdf](https://www.cups-pdf.de/). Available as [services.printing.cups-pdf](#opt-services.printing.cups-pdf.enable).
- [Cloudlog](https://www.magicbug.co.uk/cloudlog/), a web-based Amateur Radio logging application. Available as [services.cloudlog](#opt-services.cloudlog.enable).
- [fzf](https://github.com/junegunn/fzf), a command line fuzzyfinder. Available as [programs.fzf](#opt-programs.fzf.fuzzyCompletion).
- [gmediarender](https://github.com/hzeller/gmrender-resurrect), a simple, headless UPnP/DLNA renderer. Available as [services.gmediarender](options.html#opt-services.gmediarender.enable).
- [stevenblack-blocklist](https://github.com/StevenBlack/hosts), a unified hosts file with base extensions for blocking unwanted websites. Available as [networking.stevenblack](options.html#opt-networking.stevenblack.enable).
- [goeland](https://github.com/slurdge/goeland), an alternative to rss2email written in golang with many filters. Available as [services.goeland](#opt-services.goeland.enable).
- [atuin](https://github.com/ellie/atuin), a sync server for shell history. Available as [services.atuin](#opt-services.atuin.enable).
- [mmsd](https://gitlab.com/kop316/mmsd), a lower level daemon that transmits and receives MMSes. Available as [services.mmsd](#opt-services.mmsd.enable).
- [QDMR](https://dm3mat.darc.de/qdmr/), a GUI application and command line tool for programming cheap DMR radios. Available as [programs.qdmr](#opt-programs.qdmr.enable).
- [v2rayA](https://v2raya.org), a Linux web GUI client of Project V which supports V2Ray, Xray, SS, SSR, Trojan and Pingtunnel. Available as [services.v2raya](options.html#opt-services.v2raya.enable).
- [ulogd](https://www.netfilter.org/projects/ulogd/index.html), a userspace logging daemon for netfilter/iptables related logging. Available as [services.ulogd](options.html#opt-services.ulogd.enable).
- [photoprism](https://photoprism.app/), an AI-Powered Photos App for the Decentralized Web. Available as [services.photoprism](options.html#opt-services.photoprism.enable). A short example of enabling a few of these new modules follows this list.
- [autosuspend](https://github.com/languitar/autosuspend), a Python daemon that suspends a system if certain conditions are met (or not met).
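As a rough sketch, enabling a few of the new modules looks like the following. Only the option paths named in the list above are taken from it; whether you want each module, and any further configuration, is up to you:

```nix
{
  # Option paths as referenced in the list above.
  services.photoprism.enable = true;
  services.atuin.enable = true;
  programs.fzf.fuzzyCompletion = true;
}
```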
## Backward Incompatibilities {#sec-release-23.05-incompatibilities}
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
- `carnix` and `cratesIO` have been removed due to being unmaintained; use alternatives such as [naersk](https://github.com/nix-community/naersk) and [crate2nix](https://github.com/kolloch/crate2nix) instead.
- `checkInputs` have been renamed to `nativeCheckInputs`, because they behave the same as `nativeBuildInputs` when `doCheck` is set. `checkInputs` now denote a new type of dependencies, added to `buildInputs` when `doCheck` is set. As a rule of thumb, `nativeCheckInputs` are tools on `$PATH` used during the tests, and `checkInputs` are libraries which are linked to executables built as part of the tests. Similarly, `installCheckInputs` are renamed to `nativeInstallCheckInputs`, corresponding to `nativeBuildInputs`, and `installCheckInputs` are a new type of dependencies added to `buildInputs` when `doInstallCheck` is set. (Note that this change will not cause breakage to derivations with `strictDeps` unset, which are most packages except python, rust and go packages).
- `borgbackup` module now has an option for inhibiting system sleep while backups are running, defaulting to off (not inhibiting sleep), available as [`services.borgbackup.jobs.<name>.inhibitsSleep`](#opt-services.borgbackup.jobs._name_.inhibitsSleep).
- `podman` now uses the `netavark` network stack. Users will need to delete all of their local containers, images, volumes, etc, by running `podman system reset --force` once before upgrading their systems.
- `git-bug` has been updated to at least version 0.8.0, which includes backwards incompatible changes. The `git-bug-migration` package can be used to upgrade existing repositories.
- The `services.kubo.settings` option is no longer stateful. If you changed any of the options in `services.kubo.settings` in the past and then removed them from your NixOS configuration again, those changes are still in your Kubo configuration file but will now be reset to the default. If you're unsure, you may want to make a backup of your configuration file (probably `/var/lib/ipfs/config`) and compare after the update.
- The EC2 image module no longer fetches instance metadata in stage-1. This results in a significantly smaller initramfs, since network drivers no longer need to be included, and faster boots, since metadata fetching can happen in parallel with startup of other services.
This breaks services which rely on metadata being present by the time stage-2 is entered. Anything which reads EC2 metadata from `/etc/ec2-metadata` should now have an `after` dependency on `fetch-ec2-metadata.service`.
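A minimal sketch of such an ordering, for a hypothetical unit that reads the metadata files (the unit name and script are made up for illustration; only the dependency on `fetch-ec2-metadata.service` is the point):

```nix
{
  systemd.services.my-metadata-consumer = {
    wantedBy = [ "multi-user.target" ];
    # Order this hypothetical unit after the metadata fetcher and pull it in.
    after = [ "fetch-ec2-metadata.service" ];
    wants = [ "fetch-ec2-metadata.service" ];
    script = "ls /etc/ec2-metadata";
  };
}
```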
@ -65,22 +77,41 @@ In addition to numerous new and upgraded packages, this release has the followin
- The [services.unifi-video.openFirewall](#opt-services.unifi-video.openFirewall) module option default value has been changed from `true` to `false`. You will need to explicitly set this option to `true`, or configure your firewall.
- `llvmPackages_rocm.llvm` will not contain `clang` or `compiler-rt`. `llvmPackages_rocm.clang` will not contain `llvm`. `llvmPackages_rocm.clangNoCompilerRt` has been removed in favor of using `llvmPackages_rocm.clang-unwrapped`.
- Kime has been updated from 2.5.6 to 3.0.2 and the `i18n.inputMethod.kime.config` option has been removed. Users should use `daemonModules`, `iconColor`, and `extraConfig` options under `i18n.inputMethod.kime` instead. A minimal sketch of the new options is shown after this list.
- The Nginx module now validates the syntax of config files at build time. For more complex configurations (notably those using `include` with out-of-store files) you may need to disable this check by setting [services.nginx.validateConfig](#opt-services.nginx.validateConfig) to `false`.
- `tut` has been updated from 1.0.34 to 2.0.0, and now uses the TOML format for the configuration file instead of INI. Additional information can be found [here](https://github.com/RasmusLindroth/tut/releases/tag/2.0.0).
- The `wordpress` derivation no longer contains any builtin plugins or themes. If you need them you have to add them back to prevent your site from breaking. You can find them in `wordpressPackages.{plugins,themes}`.
- The EC2 image module previously detected and automatically mounted ext3-formatted instance store devices and partitions in stage-1 (initramfs), storing `/tmp` on the first discovered device. This behaviour, which only catered to very specific use cases and could not be disabled, has been removed. Users relying on this should provide their own implementation, and probably use ext4 and perform the mount in stage-2.
- `teleport` has been upgraded to major version 11. Please see upstream [upgrade instructions](https://goteleport.com/docs/setup/operations/upgrading/) and [release notes](https://goteleport.com/docs/changelog/#1100).
- The EC2 image module previously detected and activated swap-formatted instance store devices and partitions in stage-1 (initramfs). This behaviour has been removed. Users relying on this should provide their own implementation.
- Calling `makeSetupHook` without passing a `name` argument is deprecated.
- Qt 5.12 and 5.14 have been removed, as the corresponding branches have been EOL upstream for a long time. This affected under 10 packages in nixpkgs, largely unmaintained upstream as well; however, out-of-tree package expressions may need to be updated manually.
- The [services.wordpress.sites.&lt;name&gt;.plugins](#opt-services.wordpress.sites._name_.plugins) and [services.wordpress.sites.&lt;name&gt;.themes](#opt-services.wordpress.sites._name_.themes) options have been converted from sets to attribute sets to allow consumers to specify explicit install paths via attribute name.
- In `mastodon` it is now necessary to specify the location of the file containing the `PostgreSQL` database password. The default value of the `services.mastodon.database.passwordFile` parameter has been changed from `/var/lib/mastodon/secrets/db-password` to `null`.
- The `--target-host` and `--build-host` options of `nixos-rebuild` no longer treat the `localhost` value specially. To build on or deploy to the local machine, omit the relevant flag.
- The `nix.readOnlyStore` option has been renamed to `boot.readOnlyNixStore` to clarify that it configures the NixOS boot process, not the Nix daemon.
- Deprecated `xlibsWrapper` transitional package has been removed in favour of direct use of its constituents: `xorg.libX11`, `freetype` and others.
- .NET 5.0 was removed due to being end-of-life; use a newer, supported .NET version - https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core
- The iputils package, which is installed by default, no longer provides the
`ninfod`, `rarpd` and `rdisc` tools. See
[upstream's release notes](https://github.com/iputils/iputils/releases/tag/20221126)
for more details and available replacements.
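As mentioned in the Kime item above, the `i18n.inputMethod.kime.config` attribute set is gone. A minimal sketch of the replacement options follows; the option names and accepted values are taken from the new module, while the `extraConfig` body is only an example of passing further raw YAML through:

```nix
{
  i18n.inputMethod = {
    enabled = "kime";
    kime = {
      daemonModules = [ "Xim" "Indicator" ];
      iconColor = "White";
      # Anything not covered by the dedicated options can still be written as raw YAML.
      extraConfig = ''
        engine:
          hangul:
            layout: dubeolsik
      '';
    };
  };
}
```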
## Other Notable Changes {#sec-release-23.05-notable-changes}
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
@ -95,6 +126,12 @@ In addition to numerous new and upgraded packages, this release has the followin
- `services.mastodon` gained a tootctl wrapper named `mastodon-tootctl`, similar to `nextcloud-occ`, which can be executed by any user and switches to the configured mastodon user with sudo and sources the environment variables.
- DocBook option documentation, which has been deprecated since 22.11, will now cause a warning when documentation is built. Out-of-tree modules should migrate to using CommonMark documentation as outlined in [](#sec-option-declarations) to silence this warning.
DocBook option documentation support will be removed in the next release and CommonMark will become the default. DocBook option documentation that has not been migrated by then will no longer render properly or will cause errors.
- NixOS now defaults to using nsncd (a non-caching reimplementation in Rust) as the NSS lookup dispatcher, instead of the buggy and deprecated glibc-provided nscd. If you need to switch back, set `services.nscd.enableNsncd = false`, but please open an issue in nixpkgs so the problem can be fixed.
- The `dnsmasq` service now takes configuration via the
`services.dnsmasq.settings` attribute set. The option
`services.dnsmasq.extraConfig` will be deprecated when NixOS 22.11 reaches
@ -110,6 +147,11 @@ In addition to numerous new and upgraded packages, this release has the followin
- The minimal ISO image now uses the `nixos/modules/profiles/minimal.nix` profile.
- The `ghcWithPackages` and `ghcWithHoogle` wrappers will now also symlink GHC's
and all included libraries' documentation to `$out/share/doc` for convenience.
If undesired, the old behavior can be restored by overriding the builders with
`{ installDocumentation = false; }`.
- `mastodon` now supports connection to a remote `PostgreSQL` database.
- `services.peertube` now requires you to specify the secret file `secrets.secretsFile`. It can be generated by running `openssl rand -hex 32`.
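For the PeerTube change above, a minimal sketch (the path is purely illustrative; point the option at wherever you store the generated secret):

```nix
{
  # Hypothetical location; fill the file with the output of `openssl rand -hex 32`.
  services.peertube.secrets.secretsFile = "/run/keys/peertube-secret";
}
```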
@ -120,6 +162,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- `services.chronyd` is now started with additional systemd sandbox/hardening options for better security.
- The `services.dhcpcd` service no longer solicits or accepts IPv6 Router Advertisements on interfaces that use static IPv6 addresses.
- The module `services.headscale` was refactored to be compliant with [RFC 0042](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md). To be precise, this means that the following things have changed:
- Most settings have been migrated under [services.headscale.settings](#opt-services.headscale.settings) which is an attribute-set that
@ -141,10 +185,18 @@ In addition to numerous new and upgraded packages, this release has the followin
- A new option `recommendedBrotliSettings` has been added to `services.nginx`. Learn more about compression in Brotli format [here](https://github.com/google/ngx_brotli/blob/master/README.md).
- Updated recommended settings in `services.nginx.recommendedGzipSettings`:
- Enables gzip compression for only certain proxied requests.
- Allow checking and loading of precompressed files.
- Updated gzip mime-types.
- Increased the minimum length of a response that will be gzipped.
- [Garage](https://garagehq.deuxfleurs.fr/) version is based on [system.stateVersion](options.html#opt-system.stateVersion), existing installations will keep using version 0.7. New installations will use version 0.8. In order to upgrade a Garage cluster, please follow [upstream instructions](https://garagehq.deuxfleurs.fr/documentation/cookbook/upgrading/) and force [services.garage.package](options.html#opt-services.garage.package) or upgrade accordingly [system.stateVersion](options.html#opt-system.stateVersion).
- `hip` has been separated into `hip`, `hip-common` and `hipcc`.
- `services.nginx.recommendedProxySettings` now removes the `Connection` header, preventing clients from closing backend connections.
- Resilio sync secret keys can now be provided using a secrets file at runtime, preventing these secrets from ending up in the Nix store.
- The `firewall` and `nat` modules now have an nftables-based implementation. Enable `networking.nftables` to use it.
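A minimal sketch of opting in, assuming the rest of the firewall configuration stays at its defaults (`networking.firewall.enable` is shown only for clarity, it already defaults to `true`):

```nix
{
  # The usual firewall options keep working; only the backend changes.
  networking.firewall.enable = true;
  networking.nftables.enable = true;
}
```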

View file

@ -78,16 +78,13 @@ let
title = args.title or null;
name = args.name or (lib.concatStringsSep "." args.path);
in ''
<listitem>
<para>
<link xlink:href="https://search.nixos.org/packages?show=${name}&amp;sort=relevance&amp;query=${name}">
<literal>${lib.optionalString (title != null) "${title} aka "}pkgs.${name}</literal>
</link>
</para>
${lib.optionalString (args ? comment) "<para>${args.comment}</para>"}
</listitem>
- [`${lib.optionalString (title != null) "${title} aka "}pkgs.${name}`](
https://search.nixos.org/packages?show=${name}&sort=relevance&query=${name}
)${
lib.optionalString (args ? comment) "\n\n ${args.comment}"
}
'';
in "<itemizedlist>${lib.concatStringsSep "\n" (map (p: describe (unpack p)) packages)}</itemizedlist>";
in lib.concatMapStrings (p: describe (unpack p)) packages;
optionsNix = builtins.listToAttrs (map (o: { name = o.name; value = removeAttrs o ["name" "visible" "internal"]; }) optionsList);
@ -112,16 +109,7 @@ in rec {
{ meta.description = "List of NixOS options in JSON format";
nativeBuildInputs = [
pkgs.brotli
(let
# python3Minimal can't be overridden with packages on Darwin, due to a missing framework.
# Instead of modifying stdenv, we take the easy way out, since most people on Darwin will
# just be hacking on the Nixpkgs manual (which also uses make-options-doc).
python = if pkgs.stdenv.isDarwin then pkgs.python3 else pkgs.python3Minimal;
self = (python.override {
inherit self;
includeSiteCustomize = true;
});
in self.withPackages (p: [ p.mistune ]))
pkgs.python3Minimal
];
options = builtins.toFile "options.json"
(builtins.unsafeDiscardStringContext (builtins.toJSON optionsNix));
@ -131,18 +119,16 @@ in rec {
if baseOptionsJSON == null
then builtins.toFile "base.json" "{}"
else baseOptionsJSON;
MANPAGE_URLS = pkgs.path + "/doc/manpage-urls.json";
}
''
# Export list of options in different format.
dst=$out/share/doc/nixos
mkdir -p $dst
TOUCH_IF_DB=$dst/.used-docbook \
python ${./mergeJSON.py} \
${lib.optionalString warningsAreErrors "--warnings-are-errors"} \
${lib.optionalString (! allowDocBook) "--error-on-docbook"} \
${lib.optionalString markdownByDefault "--markdown-by-default"} \
${if allowDocBook then "--warn-on-docbook" else "--error-on-docbook"} \
$baseJSON $options \
> $dst/options.json
@ -153,21 +139,30 @@ in rec {
echo "file json-br $dst/options.json.br" >> $out/nix-support/hydra-build-products
'';
# Convert options.json into an XML file.
# The actual generation of the xml file is done in nix purely for the convenience
# of not having to generate the xml some other way
optionsXML = pkgs.runCommand "options.xml" {} ''
export NIX_STORE_DIR=$TMPDIR/store
export NIX_STATE_DIR=$TMPDIR/state
${pkgs.nix}/bin/nix-instantiate \
--eval --xml --strict ${./optionsJSONtoXML.nix} \
--argstr file ${optionsJSON}/share/doc/nixos/options.json \
> "$out"
optionsUsedDocbook = pkgs.runCommand "options-used-docbook" {} ''
if [ -e ${optionsJSON}/share/doc/nixos/.used-docbook ]; then
echo 1
else
echo 0
fi >"$out"
'';
optionsDocBook = pkgs.runCommand "options-docbook.xml" {} ''
optionsXML=${optionsXML}
if grep /nixpkgs/nixos/modules $optionsXML; then
optionsDocBook = pkgs.runCommand "options-docbook.xml" {
nativeBuildInputs = [
pkgs.nixos-render-docs
];
} ''
nixos-render-docs options docbook \
--manpage-urls ${pkgs.path + "/doc/manpage-urls.json"} \
--revision ${lib.escapeShellArg revision} \
--document-type ${lib.escapeShellArg documentType} \
--varlist-id ${lib.escapeShellArg variablelistId} \
--id-prefix ${lib.escapeShellArg optionIdPrefix} \
${lib.optionalString markdownByDefault "--markdown-by-default"} \
${optionsJSON}/share/doc/nixos/options.json \
options.xml
if grep /nixpkgs/nixos/modules options.xml; then
echo "The manual appears to depend on the location of Nixpkgs, which is bad"
echo "since this prevents sharing via the NixOS channel. This is typically"
echo "caused by an option default that refers to a relative path (see above"
@ -175,14 +170,7 @@ in rec {
exit 1
fi
${pkgs.python3Minimal}/bin/python ${./sortXML.py} $optionsXML sorted.xml
${pkgs.libxslt.bin}/bin/xsltproc \
--stringparam documentType '${documentType}' \
--stringparam revision '${revision}' \
--stringparam variablelistId '${variablelistId}' \
--stringparam optionIdPrefix '${optionIdPrefix}' \
-o intermediate.xml ${./options-to-docbook.xsl} sorted.xml
${pkgs.libxslt.bin}/bin/xsltproc \
-o "$out" ${./postprocess-option-descriptions.xsl} intermediate.xml
-o "$out" ${./postprocess-option-descriptions.xsl} options.xml
'';
}

View file

@ -4,11 +4,6 @@ import os
import sys
from typing import Any, Dict, List
# for MD conversion
import mistune
import re
from xml.sax.saxutils import escape, quoteattr
JSON = Dict[str, Any]
class Key:
@ -47,200 +42,20 @@ def unpivot(options: Dict[Key, Option]) -> Dict[str, JSON]:
result[opt.name] = opt.value
return result
manpage_urls = json.load(open(os.getenv('MANPAGE_URLS')))
admonitions = {
'.warning': 'warning',
'.important': 'important',
'.note': 'note'
}
class Renderer(mistune.renderers.BaseRenderer):
def _get_method(self, name):
try:
return super(Renderer, self)._get_method(name)
except AttributeError:
def not_supported(*args, **kwargs):
raise NotImplementedError("md node not supported yet", name, args, **kwargs)
return not_supported
def text(self, text):
return escape(text)
def paragraph(self, text):
return text + "\n\n"
def newline(self):
return "<literallayout>\n</literallayout>"
def codespan(self, text):
return f"<literal>{escape(text)}</literal>"
def block_code(self, text, info=None):
info = f" language={quoteattr(info)}" if info is not None else ""
return f"<programlisting{info}>\n{escape(text)}</programlisting>"
def link(self, link, text=None, title=None):
tag = "link"
if link[0:1] == '#':
if text == "":
tag = "xref"
attr = "linkend"
link = quoteattr(link[1:])
else:
# try to faithfully reproduce links that were of the form <link href="..."/>
# in docbook format
if text == link:
text = ""
attr = "xlink:href"
link = quoteattr(link)
return f"<{tag} {attr}={link}>{text}</{tag}>"
def list(self, text, ordered, level, start=None):
if ordered:
raise NotImplementedError("ordered lists not supported yet")
return f"<itemizedlist>\n{text}\n</itemizedlist>"
def list_item(self, text, level):
return f"<listitem><para>{text}</para></listitem>\n"
def block_text(self, text):
return text
def emphasis(self, text):
return f"<emphasis>{text}</emphasis>"
def strong(self, text):
return f"<emphasis role=\"strong\">{text}</emphasis>"
def admonition(self, text, kind):
if kind not in admonitions:
raise NotImplementedError(f"admonition {kind} not supported yet")
tag = admonitions[kind]
# we don't keep whitespace here because usually we'll contain only
# a single paragraph and the original docbook string is no longer
# available to restore the trailer.
return f"<{tag}><para>{text.rstrip()}</para></{tag}>"
def block_quote(self, text):
return f"<blockquote><para>{text}</para></blockquote>"
def command(self, text):
return f"<command>{escape(text)}</command>"
def option(self, text):
return f"<option>{escape(text)}</option>"
def file(self, text):
return f"<filename>{escape(text)}</filename>"
def var(self, text):
return f"<varname>{escape(text)}</varname>"
def env(self, text):
return f"<envar>{escape(text)}</envar>"
def manpage(self, page, section):
man = f"{page}({section})"
title = f"<refentrytitle>{escape(page)}</refentrytitle>"
vol = f"<manvolnum>{escape(section)}</manvolnum>"
ref = f"<citerefentry>{title}{vol}</citerefentry>"
if man in manpage_urls:
return self.link(manpage_urls[man], text=ref)
else:
return ref
def finalize(self, data):
return "".join(data)
def p_command(md):
COMMAND_PATTERN = r'\{command\}`(.*?)`'
def parse(self, m, state):
return ('command', m.group(1))
md.inline.register_rule('command', COMMAND_PATTERN, parse)
md.inline.rules.append('command')
def p_file(md):
FILE_PATTERN = r'\{file\}`(.*?)`'
def parse(self, m, state):
return ('file', m.group(1))
md.inline.register_rule('file', FILE_PATTERN, parse)
md.inline.rules.append('file')
def p_var(md):
VAR_PATTERN = r'\{var\}`(.*?)`'
def parse(self, m, state):
return ('var', m.group(1))
md.inline.register_rule('var', VAR_PATTERN, parse)
md.inline.rules.append('var')
def p_env(md):
ENV_PATTERN = r'\{env\}`(.*?)`'
def parse(self, m, state):
return ('env', m.group(1))
md.inline.register_rule('env', ENV_PATTERN, parse)
md.inline.rules.append('env')
def p_option(md):
OPTION_PATTERN = r'\{option\}`(.*?)`'
def parse(self, m, state):
return ('option', m.group(1))
md.inline.register_rule('option', OPTION_PATTERN, parse)
md.inline.rules.append('option')
def p_manpage(md):
MANPAGE_PATTERN = r'\{manpage\}`(.*?)\((.+?)\)`'
def parse(self, m, state):
return ('manpage', m.group(1), m.group(2))
md.inline.register_rule('manpage', MANPAGE_PATTERN, parse)
md.inline.rules.append('manpage')
def p_admonition(md):
ADMONITION_PATTERN = re.compile(r'^::: \{([^\n]*?)\}\n(.*?)^:::$\n*', flags=re.MULTILINE|re.DOTALL)
def parse(self, m, state):
return {
'type': 'admonition',
'children': self.parse(m.group(2), state),
'params': [ m.group(1) ],
}
md.block.register_rule('admonition', ADMONITION_PATTERN, parse)
md.block.rules.append('admonition')
md = mistune.create_markdown(renderer=Renderer(), plugins=[
p_command, p_file, p_var, p_env, p_option, p_manpage, p_admonition
])
# converts in-place!
def convertMD(options: Dict[str, Any]) -> str:
def convertString(path: str, text: str) -> str:
try:
rendered = md(text)
# keep trailing spaces so we can diff the generated XML to check for conversion bugs.
return rendered.rstrip() + text[len(text.rstrip()):]
except:
print(f"error in {path}")
raise
def optionIs(option: Dict[str, Any], key: str, typ: str) -> bool:
if key not in option: return False
if type(option[key]) != dict: return False
if '_type' not in option[key]: return False
return option[key]['_type'] == typ
for (name, option) in options.items():
try:
if optionIs(option, 'description', 'mdDoc'):
option['description'] = convertString(name, option['description']['text'])
elif markdownByDefault:
option['description'] = convertString(name, option['description'])
if optionIs(option, 'example', 'literalMD'):
docbook = convertString(name, option['example']['text'])
option['example'] = { '_type': 'literalDocBook', 'text': docbook }
if optionIs(option, 'default', 'literalMD'):
docbook = convertString(name, option['default']['text'])
option['default'] = { '_type': 'literalDocBook', 'text': docbook }
except Exception as e:
raise Exception(f"Failed to render option {name}: {str(e)}")
return options
warningsAreErrors = False
warnOnDocbook = False
errorOnDocbook = False
markdownByDefault = False
optOffset = 0
for arg in sys.argv[1:]:
if arg == "--warnings-are-errors":
optOffset += 1
warningsAreErrors = True
if arg == "--error-on-docbook":
if arg == "--warn-on-docbook":
optOffset += 1
warnOnDocbook = True
elif arg == "--error-on-docbook":
optOffset += 1
errorOnDocbook = True
if arg == "--markdown-by-default":
optOffset += 1
markdownByDefault = True
options = pivot(json.load(open(sys.argv[1 + optOffset], 'r')))
overrides = pivot(json.load(open(sys.argv[2 + optOffset], 'r')))
@ -278,26 +93,27 @@ def is_docbook(o, key):
# check that every option has a description
hasWarnings = False
hasErrors = False
hasDocBookErrors = False
hasDocBook = False
for (k, v) in options.items():
if errorOnDocbook:
if warnOnDocbook or errorOnDocbook:
kind = "error" if errorOnDocbook else "warning"
if isinstance(v.value.get('description', {}), str):
hasErrors = True
hasDocBookErrors = True
hasErrors |= errorOnDocbook
hasDocBook = True
print(
f"\x1b[1;31merror: option {v.name} description uses DocBook\x1b[0m",
f"\x1b[1;31m{kind}: option {v.name} description uses DocBook\x1b[0m",
file=sys.stderr)
elif is_docbook(v.value, 'defaultText'):
hasErrors = True
hasDocBookErrors = True
hasErrors |= errorOnDocbook
hasDocBook = True
print(
f"\x1b[1;31merror: option {v.name} default uses DocBook\x1b[0m",
f"\x1b[1;31m{kind}: option {v.name} default uses DocBook\x1b[0m",
file=sys.stderr)
elif is_docbook(v.value, 'example'):
hasErrors = True
hasDocBookErrors = True
hasErrors |= errorOnDocbook
hasDocBook = True
print(
f"\x1b[1;31merror: option {v.name} example uses DocBook\x1b[0m",
f"\x1b[1;31m{kind}: option {v.name} example uses DocBook\x1b[0m",
file=sys.stderr)
if v.value.get('description', None) is None:
@ -310,10 +126,14 @@ for (k, v) in options.items():
f"\x1b[1;31m{severity}: option {v.name} has no type. Please specify a valid type, see " +
"https://nixos.org/manual/nixos/stable/index.html#sec-option-types\x1b[0m", file=sys.stderr)
if hasDocBookErrors:
if hasDocBook:
(why, what) = (
("disallowed for in-tree modules", "contribution") if errorOnDocbook
else ("deprecated for option documentation", "module")
)
print("Explanation: The documentation contains descriptions, examples, or defaults written in DocBook. " +
"NixOS is in the process of migrating from DocBook to Markdown, and " +
"DocBook is disallowed for in-tree modules. To change your contribution to "+
f"DocBook is {why}. To change your {what} to "+
"use Markdown, apply mdDoc and literalMD and use the *MD variants of option creation " +
"functions where they are available. For example:\n" +
"\n" +
@ -326,6 +146,9 @@ if hasDocBookErrors:
" example.package = mkPackageOptionMD pkgs \"your-package\" {};\n" +
" imports = [ (mkAliasOptionModuleMD [ \"example\" \"args\" ] [ \"example\" \"settings\" ]) ];",
file = sys.stderr)
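# Leave a marker file (TOUCH_IF_DB points into the derivation output) so the
# documentation build can record that DocBook docs were encountered and NixOS
# can warn about the deprecation at activation time.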
with open(os.getenv('TOUCH_IF_DB'), 'x'):
# just make sure it exists
pass
if hasErrors:
sys.exit(1)
@ -338,4 +161,4 @@ if hasWarnings and warningsAreErrors:
file=sys.stderr)
sys.exit(1)
json.dump(convertMD(unpivot(options)), fp=sys.stdout)
json.dump(unpivot(options), fp=sys.stdout)

View file

@ -1,202 +0,0 @@
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:str="http://exslt.org/strings"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:nixos="tag:nixos.org"
xmlns="http://docbook.org/ns/docbook"
extension-element-prefixes="str"
>
<xsl:output method='xml' encoding="UTF-8" />
<xsl:param name="revision" />
<xsl:param name="documentType" />
<xsl:param name="program" />
<xsl:param name="variablelistId" />
<xsl:param name="optionIdPrefix" />
<xsl:template match="/expr/list">
<xsl:choose>
<xsl:when test="$documentType = 'appendix'">
<appendix xml:id="appendix-configuration-options">
<title>Configuration Options</title>
<xsl:call-template name="variable-list"/>
</appendix>
</xsl:when>
<xsl:otherwise>
<xsl:call-template name="variable-list"/>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template name="variable-list">
<variablelist>
<xsl:attribute name="id" namespace="http://www.w3.org/XML/1998/namespace"><xsl:value-of select="$variablelistId"/></xsl:attribute>
<xsl:for-each select="attrs">
<xsl:variable name="id" select="
concat($optionIdPrefix,
translate(
attr[@name = 'name']/string/@value,
'*&lt; >[]:&quot;',
'________'
))" />
<varlistentry>
<term xlink:href="#{$id}">
<xsl:attribute name="xml:id"><xsl:value-of select="$id"/></xsl:attribute>
<option>
<xsl:value-of select="attr[@name = 'name']/string/@value" />
</option>
</term>
<listitem>
<nixos:option-description>
<para>
<xsl:value-of disable-output-escaping="yes"
select="attr[@name = 'description']/string/@value" />
</para>
</nixos:option-description>
<xsl:if test="attr[@name = 'type']">
<para>
<emphasis>Type:</emphasis>
<xsl:text> </xsl:text>
<xsl:value-of select="attr[@name = 'type']/string/@value"/>
<xsl:if test="attr[@name = 'readOnly']/bool/@value = 'true'">
<xsl:text> </xsl:text>
<emphasis>(read only)</emphasis>
</xsl:if>
</para>
</xsl:if>
<xsl:if test="attr[@name = 'default']">
<para>
<emphasis>Default:</emphasis>
<xsl:text> </xsl:text>
<xsl:apply-templates select="attr[@name = 'default']/*" mode="top" />
</para>
</xsl:if>
<xsl:if test="attr[@name = 'example']">
<para>
<emphasis>Example:</emphasis>
<xsl:text> </xsl:text>
<xsl:apply-templates select="attr[@name = 'example']/*" mode="top" />
</para>
</xsl:if>
<xsl:if test="attr[@name = 'relatedPackages']">
<para>
<emphasis>Related packages:</emphasis>
<xsl:text> </xsl:text>
<xsl:value-of disable-output-escaping="yes"
select="attr[@name = 'relatedPackages']/string/@value" />
</para>
</xsl:if>
<xsl:if test="count(attr[@name = 'declarations']/list/*) != 0">
<para>
<emphasis>Declared by:</emphasis>
</para>
<xsl:apply-templates select="attr[@name = 'declarations']" />
</xsl:if>
<xsl:if test="count(attr[@name = 'definitions']/list/*) != 0">
<para>
<emphasis>Defined by:</emphasis>
</para>
<xsl:apply-templates select="attr[@name = 'definitions']" />
</xsl:if>
</listitem>
</varlistentry>
</xsl:for-each>
</variablelist>
</xsl:template>
<xsl:template match="attrs[attr[@name = '_type' and string[@value = 'literalExpression']]]" mode = "top">
<xsl:choose>
<xsl:when test="contains(attr[@name = 'text']/string/@value, '&#010;')">
<programlisting><xsl:value-of select="attr[@name = 'text']/string/@value" /></programlisting>
</xsl:when>
<xsl:otherwise>
<literal><xsl:value-of select="attr[@name = 'text']/string/@value" /></literal>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
<xsl:template match="attrs[attr[@name = '_type' and string[@value = 'literalDocBook']]]" mode = "top">
<xsl:value-of disable-output-escaping="yes" select="attr[@name = 'text']/string/@value" />
</xsl:template>
<xsl:template match="attr[@name = 'declarations' or @name = 'definitions']">
<simplelist>
<!--
Example:
opt.declarations = [ { name = "foo/bar.nix"; url = "https://github.com/....."; } ];
-->
<xsl:for-each select="list/attrs[attr[@name = 'name']]">
<member><filename>
<xsl:if test="attr[@name = 'url']">
<xsl:attribute name="xlink:href"><xsl:value-of select="attr[@name = 'url']/string/@value"/></xsl:attribute>
</xsl:if>
<xsl:value-of select="attr[@name = 'name']/string/@value"/>
</filename></member>
</xsl:for-each>
<!--
When the declarations/definitions are raw strings,
fall back to hardcoded location logic, specific to Nixpkgs.
-->
<xsl:for-each select="list/string">
<member><filename>
<!-- Hyperlink the filename either to the NixOS Subversion
repository (if its a module and we have a revision number),
or to the local filesystem. -->
<xsl:choose>
<xsl:when test="not(starts-with(@value, '/'))">
<xsl:choose>
<xsl:when test="$revision = 'local'">
<xsl:attribute name="xlink:href">https://github.com/NixOS/nixpkgs/blob/master/<xsl:value-of select="@value"/></xsl:attribute>
</xsl:when>
<xsl:otherwise>
<xsl:attribute name="xlink:href">https://github.com/NixOS/nixpkgs/blob/<xsl:value-of select="$revision"/>/<xsl:value-of select="@value"/></xsl:attribute>
</xsl:otherwise>
</xsl:choose>
</xsl:when>
<xsl:when test="$revision != 'local' and $program = 'nixops' and contains(@value, '/nix/')">
<xsl:attribute name="xlink:href">https://github.com/NixOS/nixops/blob/<xsl:value-of select="$revision"/>/nix/<xsl:value-of select="substring-after(@value, '/nix/')"/></xsl:attribute>
</xsl:when>
<xsl:otherwise>
<xsl:attribute name="xlink:href">file://<xsl:value-of select="@value"/></xsl:attribute>
</xsl:otherwise>
</xsl:choose>
<!-- Print the filename and make it user-friendly by replacing the
/nix/store/<hash> prefix by the default location of nixos
sources. -->
<xsl:choose>
<xsl:when test="not(starts-with(@value, '/'))">
&lt;nixpkgs/<xsl:value-of select="@value"/>&gt;
</xsl:when>
<xsl:when test="contains(@value, 'nixops') and contains(@value, '/nix/')">
&lt;nixops/<xsl:value-of select="substring-after(@value, '/nix/')"/>&gt;
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="@value" />
</xsl:otherwise>
</xsl:choose>
</filename></member>
</xsl:for-each>
</simplelist>
</xsl:template>
</xsl:stylesheet>

View file

@ -1,6 +0,0 @@
{ file }:
builtins.attrValues
(builtins.mapAttrs
(name: def: def // { inherit name; })
(builtins.fromJSON (builtins.readFile file)))

View file

@ -1,27 +0,0 @@
import xml.etree.ElementTree as ET
import sys
tree = ET.parse(sys.argv[1])
# the xml tree is of the form
# <expr><list> {all options, each an attrs} </list></expr>
options = list(tree.getroot().find('list'))
def sortKey(opt):
def order(s):
if s.startswith("enable"):
return 0
if s.startswith("package"):
return 1
return 2
return [
(order(p.attrib['value']), p.attrib['value'])
for p in opt.findall('attr[@name="loc"]/list/string')
]
options.sort(key=sortKey)
doc = ET.Element("expr")
newOptions = ET.SubElement(doc, "list")
newOptions.extend(options)
ET.ElementTree(doc).write(sys.argv[2], encoding='utf-8')

View file

@ -31,7 +31,7 @@ python3Packages.buildPythonApplication rec {
++ extraPythonPackages python3Packages;
doCheck = true;
checkInputs = with python3Packages; [ mypy pylint black ];
nativeCheckInputs = with python3Packages; [ mypy pylint black ];
checkPhase = ''
mypy --disallow-untyped-defs \
--no-implicit-optional \

View file

@ -549,18 +549,27 @@ class Machine:
return (rc, output.decode())
def shell_interact(self) -> None:
"""Allows you to interact with the guest shell
def shell_interact(self, address: Optional[str] = None) -> None:
"""Allows you to interact with the guest shell for debugging purposes.
Should only be used during test development, not in the production test."""
@address string passed to socat that will be connected to the guest shell.
Check the `Running Tests interactively` chapter of the NixOS manual for an example.
"""
self.connect()
self.log("Terminal is ready (there is no initial prompt):")
if address is None:
address = "READLINE,prompt=$ "
self.log("Terminal is ready (there is no initial prompt):")
assert self.shell
subprocess.run(
["socat", "READLINE,prompt=$ ", f"FD:{self.shell.fileno()}"],
pass_fds=[self.shell.fileno()],
)
try:
subprocess.run(
["socat", address, f"FD:{self.shell.fileno()}"],
pass_fds=[self.shell.fileno()],
)
# allow users to cancel this command without breaking the test
except KeyboardInterrupt:
pass
def console_interact(self) -> None:
"""Allows you to interact with QEMU's stdin

View file

@ -7,7 +7,7 @@ in
options = {
testScript = mkOption {
type = either str (functionTo str);
description = ''
description = mdDoc ''
A series of python declarations and statements that you write to perform
the test.
'';

View file

@ -168,6 +168,7 @@ in
"${config.boot.initrd.systemd.package.kbd}/bin/setfont"
"${config.boot.initrd.systemd.package.kbd}/bin/loadkeys"
"${config.boot.initrd.systemd.package.kbd.gzip}/bin/gzip" # Fonts and keyboard layouts are compressed
"${config.boot.initrd.systemd.package.kbd.gzip}/bin/.gzip-wrapped"
] ++ optionals (hasPrefix builtins.storeDir cfg.font) [
"${cfg.font}"
] ++ optionals (hasPrefix builtins.storeDir cfg.keyMap) [

View file

@ -181,7 +181,7 @@ in
example = "pid";
description = lib.mdDoc ''
The name of the column in the log table to which the pid of the
process utilising the `pam_mysql's` authentication
process utilising the `pam_mysql` authentication
service is stored.
'';
};

View file

@ -32,13 +32,17 @@ with lib;
dbus = super.dbus.override { x11Support = false; };
ffmpeg_4 = super.ffmpeg_4-headless;
ffmpeg_5 = super.ffmpeg_5-headless;
# dep of graphviz, libXpm is optional for Xpm support
gd = super.gd.override { withXorg = false; };
gobject-introspection = super.gobject-introspection.override { x11Support = false; };
gpsd = super.gpsd.override { guiSupport = false; };
graphviz = super.graphviz-nox;
gst_all_1 = super.gst_all_1 // {
gst-plugins-base = super.gst_all_1.gst-plugins-base.override { enableX11 = false; };
};
gpsd = super.gpsd.override { guiSupport = false; };
imagemagick = super.imagemagick.override { libX11Support = false; libXtSupport = false; };
imagemagickBig = super.imagemagickBig.override { libX11Support = false; libXtSupport = false; };
libdevil = super.libdevil-nox;
libextractor = super.libextractor.override { gtkSupport = false; };
libva = super.libva-minimal;
limesuite = super.limesuite.override { withGui = false; };
@ -51,9 +55,16 @@ with lib;
networkmanager-openvpn = super.networkmanager-openvpn.override { withGnome = false; };
networkmanager-sstp = super.networkmanager-vpnc.override { withGnome = false; };
networkmanager-vpnc = super.networkmanager-vpnc.override { withGnome = false; };
pango = super.pango.override { x11Support = false; };
pinentry = super.pinentry.override { enabledFlavors = [ "curses" "tty" "emacs" ]; withLibsecret = false; };
qemu = super.qemu.override { gtkSupport = false; spiceSupport = false; sdlSupport = false; };
qrencode = super.qrencode.overrideAttrs (_: { doCheck = false; });
qt5 = super.qt5.overrideScope' (self': super': {
qtbase = super'.qtbase.override { withGtk3 = false; };
});
stoken = super.stoken.override { withGTK3 = false; };
# translateManpages -> perlPackages.po4a -> texlive-combined-basic -> texlive-core-big -> libX11
util-linux = super.util-linux.override { translateManpages = false; };
zbar = super.zbar.override { enableVideo = false; withXorg = false; };
}));
};

View file

@ -90,7 +90,7 @@ let
only has an effect if {option}`uid` is
{option}`null`, in which case it determines whether
the user's UID is allocated in the range for system users
(below 500) or in the range for normal users (starting at
(below 1000) or in the range for normal users (starting at
1000).
Exactly one of `isNormalUser` and
`isSystemUser` must be true.
@ -677,7 +677,7 @@ in {
{
assertion = let
xor = a: b: a && !b || b && !a;
isEffectivelySystemUser = user.isSystemUser || (user.uid != null && user.uid < 500);
isEffectivelySystemUser = user.isSystemUser || (user.uid != null && user.uid < 1000);
in xor isEffectivelySystemUser user.isNormalUser;
message = ''
Exactly one of users.users.${user.name}.isSystemUser and users.users.${user.name}.isNormalUser must be set.

View file

@ -66,7 +66,7 @@ in
meta = {
maintainers = with lib.maintainers; [ ericsagnes ];
doc = ./default.xml;
doc = ./default.md;
};
}

View file

@ -1,275 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-input-methods">
<title>Input Methods</title>
<para>
Input methods are an operating system component that allows any
data, such as keyboard strokes or mouse movements, to be received as
input. In this way users can enter characters and symbols not found
on their input devices. Using an input method is obligatory for any
language that has more graphemes than there are keys on the
keyboard.
</para>
<para>
The following input methods are available in NixOS:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
IBus: The intelligent input bus.
</para>
</listitem>
<listitem>
<para>
Fcitx: A customizable lightweight input method.
</para>
</listitem>
<listitem>
<para>
Nabi: A Korean input method based on XIM.
</para>
</listitem>
<listitem>
<para>
Uim: The universal input method, is a library with a XIM bridge.
</para>
</listitem>
<listitem>
<para>
Hime: An extremely easy-to-use input method framework.
</para>
</listitem>
<listitem>
<para>
Kime: Korean IME
</para>
</listitem>
</itemizedlist>
<section xml:id="module-services-input-methods-ibus">
<title>IBus</title>
<para>
IBus is an Intelligent Input Bus. It provides full featured and
user friendly input method user interface.
</para>
<para>
The following snippet can be used to configure IBus:
</para>
<programlisting>
i18n.inputMethod = {
enabled = &quot;ibus&quot;;
ibus.engines = with pkgs.ibus-engines; [ anthy hangul mozc ];
};
</programlisting>
<para>
<literal>i18n.inputMethod.ibus.engines</literal> is optional and
can be used to add extra IBus engines.
</para>
<para>
Available extra IBus engines are:
</para>
<itemizedlist>
<listitem>
<para>
Anthy (<literal>ibus-engines.anthy</literal>): Anthy is a
system for Japanese input method. It converts Hiragana text to
Kana Kanji mixed text.
</para>
</listitem>
<listitem>
<para>
Hangul (<literal>ibus-engines.hangul</literal>): Korean input
method.
</para>
</listitem>
<listitem>
<para>
m17n (<literal>ibus-engines.m17n</literal>): m17n is an input
method that uses input methods and corresponding icons in the
m17n database.
</para>
</listitem>
<listitem>
<para>
mozc (<literal>ibus-engines.mozc</literal>): A Japanese input
method from Google.
</para>
</listitem>
<listitem>
<para>
Table (<literal>ibus-engines.table</literal>): An input method
that load tables of input methods.
</para>
</listitem>
<listitem>
<para>
table-others (<literal>ibus-engines.table-others</literal>):
Various table-based input methods. To use this, and any other
table-based input methods, it must appear in the list of
engines along with <literal>table</literal>. For example:
</para>
<programlisting>
ibus.engines = with pkgs.ibus-engines; [ table table-others ];
</programlisting>
</listitem>
</itemizedlist>
<para>
To use any input method, the package must be added in the
configuration, as shown above, and also (after running
<literal>nixos-rebuild</literal>) the input method must be added
from IBus preference dialog.
</para>
<section xml:id="module-services-input-methods-troubleshooting">
<title>Troubleshooting</title>
<para>
If IBus works in some applications but not others, a likely
cause of this is that IBus is depending on a different version
of <literal>glib</literal> to what the applications are
depending on. This can be checked by running
<literal>nix-store -q --requisites &lt;path&gt; | grep glib</literal>,
where <literal>&lt;path&gt;</literal> is the path of either IBus
or an application in the Nix store. The <literal>glib</literal>
packages must match exactly. If they do not, uninstalling and
reinstalling the application is a likely fix.
</para>
</section>
</section>
<section xml:id="module-services-input-methods-fcitx">
<title>Fcitx</title>
<para>
Fcitx is an input method framework with extension support. It has
three built-in Input Method Engine, Pinyin, QuWei and Table-based
input methods.
</para>
<para>
The following snippet can be used to configure Fcitx:
</para>
<programlisting>
i18n.inputMethod = {
enabled = &quot;fcitx&quot;;
fcitx.engines = with pkgs.fcitx-engines; [ mozc hangul m17n ];
};
</programlisting>
<para>
<literal>i18n.inputMethod.fcitx.engines</literal> is optional and
can be used to add extra Fcitx engines.
</para>
<para>
Available extra Fcitx engines are:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
Anthy (<literal>fcitx-engines.anthy</literal>): Anthy is a
system for Japanese input method. It converts Hiragana text to
Kana Kanji mixed text.
</para>
</listitem>
<listitem>
<para>
Chewing (<literal>fcitx-engines.chewing</literal>): Chewing is
an intelligent Zhuyin input method. It is one of the most
popular input methods among Traditional Chinese Unix users.
</para>
</listitem>
<listitem>
<para>
Hangul (<literal>fcitx-engines.hangul</literal>): Korean input
method.
</para>
</listitem>
<listitem>
<para>
Unikey (<literal>fcitx-engines.unikey</literal>): Vietnamese
input method.
</para>
</listitem>
<listitem>
<para>
m17n (<literal>fcitx-engines.m17n</literal>): m17n is an input
method that uses input methods and corresponding icons in the
m17n database.
</para>
</listitem>
<listitem>
<para>
mozc (<literal>fcitx-engines.mozc</literal>): A Japanese input
method from Google.
</para>
</listitem>
<listitem>
<para>
table-others (<literal>fcitx-engines.table-others</literal>):
Various table-based input methods.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="module-services-input-methods-nabi">
<title>Nabi</title>
<para>
Nabi is an easy to use Korean X input method. It allows you to
enter phonetic Korean characters (hangul) and pictographic Korean
characters (hanja).
</para>
<para>
The following snippet can be used to configure Nabi:
</para>
<programlisting>
i18n.inputMethod = {
enabled = &quot;nabi&quot;;
};
</programlisting>
</section>
<section xml:id="module-services-input-methods-uim">
<title>Uim</title>
<para>
Uim (short for <quote>universal input method</quote>) is a
multilingual input method framework. Applications can use it
through so-called bridges.
</para>
<para>
The following snippet can be used to configure uim:
</para>
<programlisting>
i18n.inputMethod = {
enabled = &quot;uim&quot;;
};
</programlisting>
<para>
Note: The <xref linkend="opt-i18n.inputMethod.uim.toolbar" />
option can be used to choose uim toolbar.
</para>
</section>
<section xml:id="module-services-input-methods-hime">
<title>Hime</title>
<para>
Hime is an extremely easy-to-use input method framework. It is
lightweight, stable, powerful and supports many commonly used
input methods, including Cangjie, Zhuyin, Dayi, Rank, Shrimp,
Greek, Korean Pinyin, Latin Alphabet, etc…
</para>
<para>
The following snippet can be used to configure Hime:
</para>
<programlisting>
i18n.inputMethod = {
enabled = &quot;hime&quot;;
};
</programlisting>
</section>
<section xml:id="module-services-input-methods-kime">
<title>Kime</title>
<para>
Kime is Korean IME. its built with Rust language and let you get
simple, safe, fast Korean typing
</para>
<para>
The following snippet can be used to configure Kime:
</para>
<programlisting>
i18n.inputMethod = {
enabled = &quot;kime&quot;;
};
</programlisting>
</section>
</chapter>

View file

@ -1,40 +1,37 @@
{ config, pkgs, lib, generators, ... }:
with lib;
let
cfg = config.i18n.inputMethod.kime;
yamlFormat = pkgs.formats.yaml { };
in
{
options = {
i18n.inputMethod.kime = {
config = mkOption {
type = yamlFormat.type;
default = { };
example = literalExpression ''
{
daemon = {
modules = ["Xim" "Indicator"];
};
let imcfg = config.i18n.inputMethod;
in {
imports = [
(lib.mkRemovedOptionModule [ "i18n" "inputMethod" "kime" "config" ] "Use i18n.inputMethod.kime.* instead")
];
indicator = {
icon_color = "White";
};
engine = {
hangul = {
layout = "dubeolsik";
};
};
}
'';
description = lib.mdDoc ''
kime configuration. Refer to <https://github.com/Riey/kime/blob/v${pkgs.kime.version}/docs/CONFIGURATION.md> for details on supported values.
'';
};
options.i18n.inputMethod.kime = {
daemonModules = lib.mkOption {
type = lib.types.listOf (lib.types.enum [ "Xim" "Wayland" "Indicator" ]);
default = [ "Xim" "Wayland" "Indicator" ];
example = [ "Xim" "Indicator" ];
description = lib.mdDoc ''
List of enabled daemon modules
'';
};
iconColor = lib.mkOption {
type = lib.types.enum [ "Black" "White" ];
default = "Black";
example = "White";
description = lib.mdDoc ''
Color of the indicator icon
'';
};
extraConfig = lib.mkOption {
type = lib.types.lines;
default = "";
description = lib.mdDoc ''
extra kime configuration. Refer to <https://github.com/Riey/kime/blob/v${pkgs.kime.version}/docs/CONFIGURATION.md> for details on supported values.
'';
};
};
config = mkIf (config.i18n.inputMethod.enabled == "kime") {
config = lib.mkIf (imcfg.enabled == "kime") {
i18n.inputMethod.package = pkgs.kime;
environment.variables = {
@ -43,7 +40,12 @@ in
XMODIFIERS = "@im=kime";
};
environment.etc."xdg/kime/config.yaml".text = replaceStrings [ "\\\\" ] [ "\\" ] (builtins.toJSON cfg.config);
environment.etc."xdg/kime/config.yaml".text = ''
daemon:
modules: [${lib.concatStringsSep "," imcfg.kime.daemonModules}]
indicator:
icon_color: ${imcfg.kime.iconColor}
'' + imcfg.kime.extraConfig;
};
# uses attributes of the linked package

View file

@ -42,7 +42,7 @@ in
# see discussion in https://github.com/NixOS/nixpkgs/pull/204178#issuecomment-1336289021
nix.registry.nixpkgs.to = {
type = "path";
path = nixpkgs;
path = "${channelSources}/nixos";
};
# Provide the NixOS/Nixpkgs sources in /etc/nixos. This is required

View file

@ -1,7 +1,7 @@
{
x86_64-linux = "/nix/store/h88w1442c7hzkbw8sgpcsbqp4lhz6l5p-nix-2.12.0";
i686-linux = "/nix/store/j23527l1c3hfx17nssc0v53sq6c741zs-nix-2.12.0";
aarch64-linux = "/nix/store/zgzmdymyh934y3r4vqh8z337ba4cwsjb-nix-2.12.0";
x86_64-darwin = "/nix/store/wnlrzllazdyg1nrw9na497p4w0m7i7mm-nix-2.12.0";
aarch64-darwin = "/nix/store/7n5yamgzg5dpp5vb6ipdqgfh6cf30wmn-nix-2.12.0";
x86_64-linux = "/nix/store/lsr79q5xqd9dv97wn87x12kzax8s8i1s-nix-2.13.2";
i686-linux = "/nix/store/wky9xjwiwzpifgk0s3f2nrg8nr67bi7x-nix-2.13.2";
aarch64-linux = "/nix/store/v8drr3x1ia6bdr8y4vl79mlz61xynrpm-nix-2.13.2";
x86_64-darwin = "/nix/store/1l14si31p4aw7c1gwgjy0nq55k38j9nj-nix-2.13.2";
aarch64-darwin = "/nix/store/6x7nr1r780fgn254zhkwhih3f3i8cr45-nix-2.13.2";
}

View file

@ -188,17 +188,6 @@ nix-env --store "$mountPoint" "${extraBuildFlags[@]}" \
mkdir -m 0755 -p "$mountPoint/etc"
touch "$mountPoint/etc/NIXOS"
# Create a bind mount for each of the mount points inside the target file
# system. This preserves the validity of their absolute paths after changing
# the root with `nixos-enter`.
# Without this the bootloader installation may fail due to options that
# contain paths referenced during evaluation, like initrd.secrets.
if (( EUID == 0 )); then
mount --rbind --mkdir "$mountPoint" "$mountPoint$mountPoint"
mount --make-rslave "$mountPoint$mountPoint"
trap 'umount -R "$mountPoint$mountPoint" && rmdir "$mountPoint$mountPoint"' EXIT
fi
# Switch to the new system configuration. This will install Grub with
# a menu default pointing at the kernel/initrd/etc of the new
# configuration.
@ -206,7 +195,20 @@ if [[ -z $noBootLoader ]]; then
echo "installing the boot loader..."
# Grub needs an mtab.
ln -sfn /proc/mounts "$mountPoint"/etc/mtab
NIXOS_INSTALL_BOOTLOADER=1 nixos-enter --root "$mountPoint" -- /run/current-system/bin/switch-to-configuration boot
export mountPoint
NIXOS_INSTALL_BOOTLOADER=1 nixos-enter --root "$mountPoint" -c "$(cat <<'EOF'
# Create a bind mount for each of the mount points inside the target file
# system. This preserves the validity of their absolute paths after changing
# the root with `nixos-enter`.
# Without this the bootloader installation may fail due to options that
# contain paths referenced during evaluation, like initrd.secrets.
# when not root, re-execute the script in an unshared namespace
mount --rbind --mkdir / "$mountPoint"
mount --make-rslave "$mountPoint"
/run/current-system/bin/switch-to-configuration boot
umount -R "$mountPoint" && rmdir "$mountPoint"
EOF
)"
fi
# Ask the user to set a root password, but only if the passwd command

View file

@ -357,6 +357,14 @@ in
(mkIf cfg.nixos.enable {
system.build.manual = manual;
system.activationScripts.check-manual-docbook = ''
if [[ $(cat ${manual.optionsUsedDocbook}) = 1 ]]; then
echo -e "\e[31;1mwarning\e[0m: This configuration contains option documentation in docbook." \
"Support for docbook is deprecated and will be removed after NixOS 23.05." \
"See nix-store --read-log ${builtins.unsafeDiscardStringContext manual.optionsJSON.drvPath}"
fi
'';
environment.systemPackages = []
++ optional cfg.man.enable manual.manpages
++ optionals cfg.doc.enable [ manual.manualHTML nixos-help ];

View file

@ -47,7 +47,7 @@ in
doc = mkOption {
type = docFile;
internal = true;
example = "./meta.chapter.xml";
example = "./meta.chapter.md";
description = lib.mdDoc ''
Documentation prologue for the set of options of each module. This
option should be defined at most once per module.

View file

@ -172,7 +172,6 @@
./programs/geary.nix
./programs/git.nix
./programs/gnome-disks.nix
./programs/gnome-documents.nix
./programs/gnome-terminal.nix
./programs/gnupg.nix
./programs/gpaste.nix
@ -215,6 +214,7 @@
./programs/partition-manager.nix
./programs/plotinus.nix
./programs/proxychains.nix
./programs/qdmr.nix
./programs/qt5ct.nix
./programs/rog-control-center.nix
./programs/rust-motd.nix
@ -530,6 +530,7 @@
./services/mail/dovecot.nix
./services/mail/dspam.nix
./services/mail/exim.nix
./services/mail/goeland.nix
./services/mail/listmonk.nix
./services/mail/maddy.nix
./services/mail/mail.nix
@ -570,6 +571,7 @@
./services/misc/atuin.nix
./services/misc/autofs.nix
./services/misc/autorandr.nix
./services/misc/autosuspend.nix
./services/misc/bazarr.nix
./services/misc/beanstalkd.nix
./services/misc/bees.nix
@ -1117,6 +1119,7 @@
./services/web-apps/bookstack.nix
./services/web-apps/calibre-web.nix
./services/web-apps/changedetection-io.nix
./services/web-apps/cloudlog.nix
./services/web-apps/code-server.nix
./services/web-apps/convos.nix
./services/web-apps/dex.nix

View file

@ -33,7 +33,7 @@ in
};
meta = {
doc = ./default.xml;
doc = ./default.md;
maintainers = with lib.maintainers; [ vidbina ];
};
}

View file

@ -1,70 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-programs-digitalbitbox">
<title>Digital Bitbox</title>
<para>
Digital Bitbox is a hardware wallet and second-factor authenticator.
</para>
<para>
The <literal>digitalbitbox</literal> programs module may be
enabled by setting <literal>programs.digitalbitbox</literal> to
<literal>true</literal> in a manner similar to
</para>
<programlisting>
programs.digitalbitbox.enable = true;
</programlisting>
<para>
and bundles the <literal>digitalbitbox</literal> package (see
<xref linkend="sec-digitalbitbox-package" />), which contains the
<literal>dbb-app</literal> and <literal>dbb-cli</literal> binaries,
along with the hardware module (see
<xref linkend="sec-digitalbitbox-hardware-module" />) which sets up
the necessary udev rules to access the device.
</para>
<para>
Enabling the digitalbitbox module is pretty much the easiest way to
get a Digital Bitbox device working on your system.
</para>
<para>
For more information, see
<link xlink:href="https://digitalbitbox.com/start_linux">https://digitalbitbox.com/start_linux</link>.
</para>
<section xml:id="sec-digitalbitbox-package">
<title>Package</title>
<para>
The binaries, <literal>dbb-app</literal> (a GUI tool) and
<literal>dbb-cli</literal> (a CLI tool), are available through the
<literal>digitalbitbox</literal> package which could be installed
as follows:
</para>
<programlisting>
environment.systemPackages = [
pkgs.digitalbitbox
];
</programlisting>
</section>
<section xml:id="sec-digitalbitbox-hardware-module">
<title>Hardware</title>
<para>
The digitalbitbox hardware package enables the udev rules for
Digital Bitbox devices and may be installed as follows:
</para>
<programlisting>
hardware.digitalbitbox.enable = true;
</programlisting>
<para>
In order to alter the udev rules, one may provide different values
for the <literal>udevRule51</literal> and
<literal>udevRule52</literal> attributes by means of overriding as
follows:
</para>
<programlisting>
programs.digitalbitbox = {
enable = true;
package = pkgs.digitalbitbox.override {
udevRule51 = &quot;something else&quot;;
};
};
</programlisting>
</section>
</chapter>

View file

@ -1,54 +0,0 @@
# GNOME Documents.
{ config, pkgs, lib, ... }:
with lib;
{
meta = {
maintainers = teams.gnome.members;
};
# Added 2019-08-09
imports = [
(mkRenamedOptionModule
[ "services" "gnome" "gnome-documents" "enable" ]
[ "programs" "gnome-documents" "enable" ])
];
###### interface
options = {
programs.gnome-documents = {
enable = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Whether to enable GNOME Documents, a document
manager application for GNOME.
'';
};
};
};
###### implementation
config = mkIf config.programs.gnome-documents.enable {
environment.systemPackages = [ pkgs.gnome.gnome-documents ];
services.dbus.packages = [ pkgs.gnome.gnome-documents ];
services.gnome.gnome-online-accounts.enable = true;
services.gnome.gnome-online-miners.enable = true;
};
}

View file

@ -8,7 +8,7 @@ in
{
meta = {
maintainers = pkgs.plotinus.meta.maintainers;
doc = ./plotinus.xml;
doc = ./plotinus.md;
};
###### interface

View file

@ -1,30 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-program-plotinus">
<title>Plotinus</title>
<para>
<emphasis>Source:</emphasis>
<filename>modules/programs/plotinus.nix</filename>
</para>
<para>
<emphasis>Upstream documentation:</emphasis>
<link xlink:href="https://github.com/p-e-w/plotinus">https://github.com/p-e-w/plotinus</link>
</para>
<para>
Plotinus is a searchable command palette in every modern GTK
application.
</para>
<para>
When in a GTK 3 application and Plotinus is enabled, you can press
<literal>Ctrl+Shift+P</literal> to open the command palette. The
command palette provides a searchable list of all menu items in
the application.
</para>
<para>
To enable Plotinus, add the following to your
<filename>configuration.nix</filename>:
</para>
<programlisting>
programs.plotinus.enable = true;
</programlisting>
</chapter>

View file

@ -0,0 +1,25 @@
{
config,
lib,
pkgs,
...
}:
let
cfg = config.programs.qdmr;
in {
meta.maintainers = [ lib.maintainers.janik ];
options = {
programs.qdmr = {
enable = lib.mkEnableOption (lib.mdDoc "QDMR - a GUI application and command line tool for programming DMR radios");
package = lib.mkPackageOptionMD pkgs "qdmr" { };
};
};
config = lib.mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
services.udev.packages = [ cfg.package ];
users.groups.wireshark = {};
};
}
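A minimal sketch of how a host configuration might use the module added above (illustrative only, not part of this change):

{
  # Pulls in the qdmr package and its udev rules via the new module.
  programs.qdmr.enable = true;
}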

View file

@ -142,5 +142,5 @@ in
};
meta.doc = ./oh-my-zsh.xml;
meta.doc = ./oh-my-zsh.md;
}

View file

@ -1,154 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-programs-zsh-ohmyzsh">
<title>Oh my ZSH</title>
<para>
<link xlink:href="https://ohmyz.sh/"><literal>oh-my-zsh</literal></link>
is a framework to manage your
<link xlink:href="https://www.zsh.org/">ZSH</link> configuration
including completion scripts for several CLI tools or custom prompt
themes.
</para>
<section xml:id="module-programs-oh-my-zsh-usage">
<title>Basic usage</title>
<para>
The module uses the <literal>oh-my-zsh</literal> package with all
available features. The initial setup using Nix expressions is
fairly similar to the configuration format of
<literal>oh-my-zsh</literal>.
</para>
<programlisting>
{
programs.zsh.ohMyZsh = {
enable = true;
plugins = [ &quot;git&quot; &quot;python&quot; &quot;man&quot; ];
theme = &quot;agnoster&quot;;
};
}
</programlisting>
<para>
For a detailed explanation of these arguments please refer to the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki"><literal>oh-my-zsh</literal>
docs</link>.
</para>
<para>
The expression generates the needed configuration and writes it
into your <literal>/etc/zshrc</literal>.
</para>
</section>
<section xml:id="module-programs-oh-my-zsh-additions">
<title>Custom additions</title>
<para>
Sometimes third-party or custom scripts such as a modified theme
may be needed. <literal>oh-my-zsh</literal> provides the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization#overriding-internals"><literal>ZSH_CUSTOM</literal></link>
environment variable for this which points to a directory with
additional scripts.
</para>
<para>
The module can do this as well:
</para>
<programlisting>
{
programs.zsh.ohMyZsh.custom = &quot;~/path/to/custom/scripts&quot;;
}
</programlisting>
</section>
<section xml:id="module-programs-oh-my-zsh-environments">
<title>Custom environments</title>
<para>
There are several extensions for <literal>oh-my-zsh</literal>
packaged in <literal>nixpkgs</literal>. One of them is
<link xlink:href="https://github.com/spwhitt/nix-zsh-completions">nix-zsh-completions</link>
which bundles completion scripts and a plugin for
<literal>oh-my-zsh</literal>.
</para>
<para>
Rather than using a single mutable path for
<literal>ZSH_CUSTOM</literal>, it's also possible to generate this
path from a list of Nix packages:
</para>
<programlisting>
{ pkgs, ... }:
{
programs.zsh.ohMyZsh.customPkgs = [
pkgs.nix-zsh-completions
# and even more...
];
}
</programlisting>
<para>
Internally a single store path will be created using
<literal>buildEnv</literal>. Please refer to the docs of
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-building-environment"><literal>buildEnv</literal></link>
for further reference.
</para>
<para>
<emphasis>Please keep in mind that this is not compatible with
<literal>programs.zsh.ohMyZsh.custom</literal> as it requires an
immutable store path while <literal>custom</literal> shall remain
mutable! An evaluation failure will be thrown if both
<literal>custom</literal> and <literal>customPkgs</literal> are
set.</emphasis>
</para>
</section>
<section xml:id="module-programs-oh-my-zsh-packaging-customizations">
<title>Package your own customizations</title>
<para>
If third-party customizations (e.g. new themes) are supposed to be
added to <literal>oh-my-zsh</literal> there are several pitfalls
to keep in mind:
</para>
<itemizedlist>
<listitem>
<para>
To comply with the default structure of <literal>ZSH</literal>
the entire output needs to be written to
<literal>$out/share/zsh</literal>.
</para>
</listitem>
<listitem>
<para>
Completion scripts are supposed to be stored at
<literal>$out/share/zsh/site-functions</literal>. This
directory is part of the
<link xlink:href="http://zsh.sourceforge.net/Doc/Release/Functions.html"><literal>fpath</literal></link>
and the package should be compatible with pure
<literal>ZSH</literal> setups. The module will automatically
link the contents of <literal>site-functions</literal> to
the completions directory in the proper store path.
</para>
</listitem>
<listitem>
<para>
The <literal>plugins</literal> directory needs the structure
<literal>pluginname/pluginname.plugin.zsh</literal> as
structured in the
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/tree/91b771914bc7c43dd7c7a43b586c5de2c225ceb7/plugins">upstream
repo.</link>
</para>
</listitem>
</itemizedlist>
<para>
A derivation for <literal>oh-my-zsh</literal> may look like this:
</para>
<programlisting>
{ stdenv, fetchFromGitHub }:
stdenv.mkDerivation rec {
name = &quot;exemplary-zsh-customization-${version}&quot;;
version = &quot;1.0.0&quot;;
src = fetchFromGitHub {
# path to the upstream repository
};
dontBuild = true;
installPhase = ''
mkdir -p $out/share/zsh/site-functions
cp -R {themes,plugins} $out/share/zsh
cp -R completions/* $out/share/zsh/site-functions
'';
}
</programlisting>
</section>
</chapter>

View file

@ -36,6 +36,7 @@ with lib;
'')
(mkRemovedOptionModule [ "networking" "vpnc" ] "Use environment.etc.\"vpnc/service.conf\" instead.")
(mkRemovedOptionModule [ "networking" "wicd" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "programs" "gnome-documents" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "programs" "tilp2" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "programs" "way-cooler" ] ("way-cooler is abandoned by its author: " +
"https://way-cooler.org/blog/2020/01/09/way-cooler-post-mortem.html"))
@ -49,7 +50,6 @@ with lib;
(mkRemovedOptionModule [ "services" "chronos" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "couchpotato" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "dd-agent" ] "dd-agent was removed from nixpkgs in favor of the newer datadog-agent.")
(mkRemovedOptionModule [ "services" "deepin" ] "The corresponding packages were removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "dnscrypt-proxy" ] "Use services.dnscrypt-proxy2 instead")
(mkRemovedOptionModule [ "services" "firefox" "syncserver" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "flashpolicyd" ] "The flashpolicyd module has been removed. Adobe Flash Player is deprecated.")

View file

@ -727,7 +727,7 @@ in {
Default values inheritable by all configured certs. You can
use this to define options shared by all your certs. These defaults
can also be ignored on a per-cert basis using the
`security.acme.certs.''${cert}.inheritDefaults' option.
{option}`security.acme.certs.''${cert}.inheritDefaults` option.
'';
};
@ -916,6 +916,6 @@ in {
meta = {
maintainers = lib.teams.acme.members;
doc = ./default.xml;
doc = ./default.md;
};
}

View file

@ -1,395 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-security-acme">
<title>SSL/TLS Certificates with ACME</title>
<para>
NixOS supports automatic domain validation &amp; certificate
retrieval and renewal using the ACME protocol. Any provider can be
used, but by default NixOS uses Let's Encrypt. The alternative ACME
client
<link xlink:href="https://go-acme.github.io/lego/">lego</link> is
used under the hood.
</para>
<para>
Automatic cert validation and configuration for Apache and Nginx
virtual hosts is included in NixOS, however if you would like to
generate a wildcard cert or you are not using a web server you will
have to configure DNS based validation.
</para>
<section xml:id="module-security-acme-prerequisites">
<title>Prerequisites</title>
<para>
To use the ACME module, you must accept the provider's terms of
service by setting
<xref linkend="opt-security.acme.acceptTerms" /> to
<literal>true</literal>. The Let's Encrypt ToS can be found
<link xlink:href="https://letsencrypt.org/repository/">here</link>.
</para>
<para>
You must also set an email address to be used when creating
accounts with Let's Encrypt. You can set this for all certs with
<xref linkend="opt-security.acme.defaults.email" /> and/or on a
per-cert basis with
<xref linkend="opt-security.acme.certs._name_.email" />. This
address is only used for registration and renewal reminders, and
cannot be used to administer the certificates in any way.
</para>
<para>
Alternatively, you can use a different ACME server by changing the
<xref linkend="opt-security.acme.defaults.server" /> option to a
provider of your choosing, or just change the server for one cert
with <xref linkend="opt-security.acme.certs._name_.server" />.
</para>
<para>
You will need an HTTP server or DNS server for verification. For
HTTP, the server must have a webroot defined that can serve
<filename>.well-known/acme-challenge</filename>. This directory
must be writeable by the user that will run the ACME client. For
DNS, you must set up credentials with your provider/server for use
with lego.
</para>
</section>
<section xml:id="module-security-acme-nginx">
<title>Using ACME certificates in Nginx</title>
<para>
NixOS supports fetching ACME certificates for you by setting
<literal>enableACME = true;</literal> in a virtualHost config. We
first create self-signed placeholder certificates in place of the
real ACME certs. The placeholder certs are overwritten when the
ACME certs arrive. For <literal>foo.example.com</literal> the
config would look like this:
</para>
<programlisting>
security.acme.acceptTerms = true;
security.acme.defaults.email = &quot;admin+acme@example.com&quot;;
services.nginx = {
enable = true;
virtualHosts = {
&quot;foo.example.com&quot; = {
forceSSL = true;
enableACME = true;
# All serverAliases will be added as extra domain names on the certificate.
serverAliases = [ &quot;bar.example.com&quot; ];
locations.&quot;/&quot; = {
root = &quot;/var/www&quot;;
};
};
# We can also add a different vhost and reuse the same certificate
# but we have to append extraDomainNames manually beforehand:
# security.acme.certs.&quot;foo.example.com&quot;.extraDomainNames = [ &quot;baz.example.com&quot; ];
&quot;baz.example.com&quot; = {
forceSSL = true;
useACMEHost = &quot;foo.example.com&quot;;
locations.&quot;/&quot; = {
root = &quot;/var/www&quot;;
};
};
};
}
</programlisting>
</section>
<section xml:id="module-security-acme-httpd">
<title>Using ACME certificates in Apache/httpd</title>
<para>
Using ACME certificates with Apache virtual hosts is identical to
using them with Nginx. The attribute names are all the same, just
replace <quote>nginx</quote> with <quote>httpd</quote> where
appropriate.
</para>
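  <para>
    For example, a minimal Apache virtual host using ACME might look
    like the following sketch (the host name, admin address and
    document root are placeholders):
  </para>
  <programlisting>
services.httpd = {
  enable = true;
  adminAddr = &quot;webmaster@example.com&quot;;
  virtualHosts.&quot;foo.example.com&quot; = {
    forceSSL = true;
    enableACME = true;
    documentRoot = &quot;/var/www&quot;;
  };
};
</programlisting>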
</section>
<section xml:id="module-security-acme-configuring">
<title>Manual configuration of HTTP-01 validation</title>
<para>
First off you will need to set up a virtual host to serve the
challenges. This example uses a vhost called
<literal>certs.example.com</literal>, with the intent that you
will generate certs for all your vhosts and redirect everyone to
HTTPS.
</para>
<programlisting>
security.acme.acceptTerms = true;
security.acme.defaults.email = &quot;admin+acme@example.com&quot;;
# /var/lib/acme/.challenges must be writable by the ACME user
# and readable by the Nginx user. The easiest way to achieve
# this is to add the Nginx user to the ACME group.
users.users.nginx.extraGroups = [ &quot;acme&quot; ];
services.nginx = {
enable = true;
virtualHosts = {
&quot;acmechallenge.example.com&quot; = {
# Catchall vhost, will redirect users to HTTPS for all vhosts
serverAliases = [ &quot;*.example.com&quot; ];
locations.&quot;/.well-known/acme-challenge&quot; = {
root = &quot;/var/lib/acme/.challenges&quot;;
};
locations.&quot;/&quot; = {
return = &quot;301 https://$host$request_uri&quot;;
};
};
};
}
# Alternative config for Apache
users.users.wwwrun.extraGroups = [ &quot;acme&quot; ];
services.httpd = {
enable = true;
virtualHosts = {
&quot;acmechallenge.example.com&quot; = {
# Catchall vhost, will redirect users to HTTPS for all vhosts
serverAliases = [ &quot;*.example.com&quot; ];
# /var/lib/acme/.challenges must be writable by the ACME user and readable by the Apache user.
# By default, this is the case.
documentRoot = &quot;/var/lib/acme/.challenges&quot;;
extraConfig = ''
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge [NC]
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301]
'';
};
};
}
</programlisting>
<para>
Now you need to configure ACME to generate a certificate.
</para>
<programlisting>
security.acme.certs.&quot;foo.example.com&quot; = {
webroot = &quot;/var/lib/acme/.challenges&quot;;
email = &quot;foo@example.com&quot;;
# Ensure that the web server you use can read the generated certs
# Take a look at the group option for the web server you choose.
group = &quot;nginx&quot;;
# Since we have a wildcard vhost to handle port 80,
# we can generate certs for anything!
# Just make sure your DNS resolves them.
extraDomainNames = [ &quot;mail.example.com&quot; ];
};
</programlisting>
<para>
The private key <filename>key.pem</filename> and certificate
<filename>fullchain.pem</filename> will be put into
<filename>/var/lib/acme/foo.example.com</filename>.
</para>
<para>
Refer to <xref linkend="ch-options" /> for all available
configuration options for the
<link linkend="opt-security.acme.certs">security.acme</link>
module.
</para>
</section>
<section xml:id="module-security-acme-config-dns">
<title>Configuring ACME for DNS validation</title>
<para>
This is useful if you want to generate a wildcard certificate,
since ACME servers will only hand out wildcard certs over DNS
validation. There are a number of supported DNS providers and
servers you can utilise, see the
<link xlink:href="https://go-acme.github.io/lego/dns/">lego
docs</link> for provider/server specific configuration values. For
the sake of these docs, we will provide a fully self-hosted
example using bind.
</para>
<programlisting>
services.bind = {
enable = true;
extraConfig = ''
include &quot;/var/lib/secrets/dnskeys.conf&quot;;
'';
zones = [
rec {
name = &quot;example.com&quot;;
file = &quot;/var/db/bind/${name}&quot;;
master = true;
extraConfig = &quot;allow-update { key rfc2136key.example.com.; };&quot;;
}
];
}
# Now we can configure ACME
security.acme.acceptTerms = true;
security.acme.defaults.email = &quot;admin+acme@example.com&quot;;
security.acme.certs.&quot;example.com&quot; = {
domain = &quot;*.example.com&quot;;
dnsProvider = &quot;rfc2136&quot;;
credentialsFile = &quot;/var/lib/secrets/certs.secret&quot;;
# We don't need to wait for propagation since this is a local DNS server
dnsPropagationCheck = false;
};
</programlisting>
<para>
The <filename>dnskeys.conf</filename> and
<filename>certs.secret</filename> must be kept secure and thus you
should not keep their contents in your Nix config. Instead,
generate them one time with a systemd service:
</para>
<programlisting>
systemd.services.dns-rfc2136-conf = {
requiredBy = [&quot;acme-example.com.service&quot; &quot;bind.service&quot;];
before = [&quot;acme-example.com.service&quot; &quot;bind.service&quot;];
unitConfig = {
ConditionPathExists = &quot;!/var/lib/secrets/dnskeys.conf&quot;;
};
serviceConfig = {
Type = &quot;oneshot&quot;;
UMask = 0077;
};
path = [ pkgs.bind ];
script = ''
mkdir -p /var/lib/secrets
chmod 755 /var/lib/secrets
tsig-keygen rfc2136key.example.com &gt; /var/lib/secrets/dnskeys.conf
chown named:root /var/lib/secrets/dnskeys.conf
chmod 400 /var/lib/secrets/dnskeys.conf
# extract secret value from the dnskeys.conf
while read x y; do if [ &quot;$x&quot; = &quot;secret&quot; ]; then secret=&quot;''${y:1:''${#y}-3}&quot;; fi; done &lt; /var/lib/secrets/dnskeys.conf
cat &gt; /var/lib/secrets/certs.secret &lt;&lt; EOF
RFC2136_NAMESERVER='127.0.0.1:53'
RFC2136_TSIG_ALGORITHM='hmac-sha256.'
RFC2136_TSIG_KEY='rfc2136key.example.com'
RFC2136_TSIG_SECRET='$secret'
EOF
chmod 400 /var/lib/secrets/certs.secret
'';
};
</programlisting>
<para>
Now you're all set to generate certs! You should monitor the first
invocation by running
<literal>systemctl start acme-example.com.service &amp; journalctl -fu acme-example.com.service</literal>
and watching its log output.
</para>
</section>
<section xml:id="module-security-acme-config-dns-with-vhosts">
<title>Using DNS validation with web server virtual hosts</title>
<para>
It is possible to use DNS-01 validation with all certificates,
including those automatically configured via the Nginx/Apache
<link linkend="opt-services.nginx.virtualHosts._name_.enableACME"><literal>enableACME</literal></link>
option. This configuration pattern is fully supported and part of
the module's test suite for Nginx + Apache.
</para>
<para>
You must follow the guide above on configuring DNS-01 validation
first, however instead of setting the options for one certificate
(e.g.
<xref linkend="opt-security.acme.certs._name_.dnsProvider" />) you
will set them as defaults (e.g.
<xref linkend="opt-security.acme.defaults.dnsProvider" />).
</para>
<programlisting>
# Configure ACME appropriately
security.acme.acceptTerms = true;
security.acme.defaults.email = &quot;admin+acme@example.com&quot;;
security.acme.defaults = {
dnsProvider = &quot;rfc2136&quot;;
credentialsFile = &quot;/var/lib/secrets/certs.secret&quot;;
# We don't need to wait for propagation since this is a local DNS server
dnsPropagationCheck = false;
};
# For each virtual host you would like to use DNS-01 validation with,
# set acmeRoot = null
services.nginx = {
enable = true;
virtualHosts = {
&quot;foo.example.com&quot; = {
enableACME = true;
acmeRoot = null;
};
};
}
</programlisting>
<para>
And that's it! Next time your configuration is rebuilt, or when
you add a new virtualHost, it will be DNS-01 validated.
</para>
</section>
<section xml:id="module-security-acme-root-owned">
<title>Using ACME with services demanding root owned
certificates</title>
<para>
Some services refuse to start if the configured certificate files
are not owned by root. PostgreSQL and OpenSMTPD are examples of
these. There is no way to change the user the ACME module uses (it
will always be <literal>acme</literal>), however you can use
systemd's <literal>LoadCredential</literal> feature to resolve
this elegantly. Below is an example configuration for OpenSMTPD,
but this pattern can be applied to any service.
</para>
<programlisting>
# Configure ACME however you like (DNS or HTTP validation), adding
# the following configuration for the relevant certificate.
# Note: You cannot use `systemctl reload` here as that would mean
# the LoadCredential configuration below would be skipped and
# the service would continue to use old certificates.
security.acme.certs.&quot;mail.example.com&quot;.postRun = ''
systemctl restart opensmtpd
'';
# Now you must augment OpenSMTPD's systemd service to load
# the certificate files.
systemd.services.opensmtpd.requires = [&quot;acme-finished-mail.example.com.target&quot;];
systemd.services.opensmtpd.serviceConfig.LoadCredential = let
certDir = config.security.acme.certs.&quot;mail.example.com&quot;.directory;
in [
&quot;cert.pem:${certDir}/cert.pem&quot;
&quot;key.pem:${certDir}/key.pem&quot;
];
# Finally, configure OpenSMTPD to use these certs.
services.opensmtpd = let
credsDir = &quot;/run/credentials/opensmtpd.service&quot;;
in {
enable = true;
setSendmail = false;
serverConfiguration = ''
pki mail.example.com cert &quot;${credsDir}/cert.pem&quot;
pki mail.example.com key &quot;${credsDir}/key.pem&quot;
listen on localhost tls pki mail.example.com
action act1 relay host smtp://127.0.0.1:10027
match for local action act1
'';
};
</programlisting>
</section>
<section xml:id="module-security-acme-regenerate">
<title>Regenerating certificates</title>
<para>
Should you need to regenerate a particular certificate in a hurry,
such as when a vulnerability is found in Let's Encrypt, there is
now a convenient mechanism for doing so. Running
<literal>systemctl clean --what=state acme-example.com.service</literal>
will remove all certificate files and the account data for the
given domain, allowing you to then
<literal>systemctl start acme-example.com.service</literal> to
generate fresh ones.
</para>
</section>
<section xml:id="module-security-acme-fix-jws">
<title>Fixing JWS Verification error</title>
<para>
It is possible that your account credentials file may become
corrupt and need to be regenerated. In this scenario lego will
produce the error <literal>JWS verification error</literal>. The
solution is to simply delete the associated accounts file and
re-run the affected service(s).
</para>
<programlisting>
# Find the accounts folder for the certificate
systemctl cat acme-example.com.service | grep -Po 'accounts/[^:]*'
export accountdir=&quot;$(!!)&quot;
# Move this folder to some place else
mv /var/lib/acme/.lego/$accountdir{,.bak}
# Recreate the folder using systemd-tmpfiles
systemd-tmpfiles --create
# Get a new account and reissue certificates
# Note: Do this for all certs that share the same account email address
systemctl start acme-example.com.service
</programlisting>
</section>
</chapter>

View file

@ -57,7 +57,7 @@ in {
type = types.enum [ false true "lock" ];
default = false;
description = lib.mdDoc ''
Whether to enable the Linux audit system. The special `lock' value can be used to
Whether to enable the Linux audit system. The special `lock` value can be used to
enable auditing and prevent disabling it until a restart. Be careful about locking
this, as it will prevent you from changing your audit configuration until you
restart. If possible, test your configuration using build-vm beforehand.
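For illustration, a host configuration could lock the audit system as described above (a sketch, not part of this change):

# Enable auditing and prevent it from being disabled until the next restart.
security.audit.enable = "lock";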

View file

@ -94,7 +94,6 @@ in {
};
config = let
rootName = "${mkPathSafeName name}-chroot";
inherit (config.confinement) binSh fullUnit;
wantsAPIVFS = lib.mkDefault (config.confinement.mode == "full-apivfs");
in lib.mkIf config.confinement.enable {

View file

@ -7,20 +7,19 @@ let
cfg = config.services.activemq;
activemqBroker = stdenv.mkDerivation {
name = "activemq-broker";
phases = [ "installPhase" ];
buildInputs = [ jdk ];
installPhase = ''
mkdir -p $out/lib
source ${activemq}/lib/classpath.env
export CLASSPATH
ln -s "${./ActiveMQBroker.java}" ActiveMQBroker.java
javac -d $out/lib ActiveMQBroker.java
'';
};
activemqBroker = runCommand "activemq-broker"
{
nativeBuildInputs = [ jdk ];
} ''
mkdir -p $out/lib
source ${activemq}/lib/classpath.env
export CLASSPATH
ln -s "${./ActiveMQBroker.java}" ActiveMQBroker.java
javac -d $out/lib ActiveMQBroker.java
'';
in {
in
{
options = {
services.activemq = {

View file

@ -82,7 +82,6 @@ in
etc = {
"hqplayer/hqplayerd.xml" = mkIf (cfg.config != null) { source = pkgs.writeText "hqplayerd.xml" cfg.config; };
"hqplayer/hqplayerd4-key.xml" = mkIf (cfg.licenseFile != null) { source = cfg.licenseFile; };
"modules-load.d/taudio2.conf".source = "${pkg}/etc/modules-load.d/taudio2.conf";
};
systemPackages = [ pkg ];
};
@ -91,8 +90,6 @@ in
allowedTCPPorts = [ 8088 4321 ];
};
services.udev.packages = [ pkg ];
systemd = {
tmpfiles.rules = [
"d ${configDir} 0755 hqplayer hqplayer - -"

View file

@ -102,7 +102,7 @@ in {
Extra directives added to the end of MPD's configuration file,
mpd.conf. Basic configuration like file location and uid/gid
is added automatically to the beginning of the file. For available
options see `man 5 mpd.conf`'.
options see {manpage}`mpd.conf(5)`.
'';
};
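A sketch of how this option is typically used; the PulseAudio output block below is illustrative, not part of this change:

services.mpd.extraConfig = ''
  audio_output {
    type "pulse"
    name "PulseAudio"
  }
'';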

View file

@ -42,7 +42,7 @@ in {
environment.ROON_DATAROOT = "/var/lib/${name}";
serviceConfig = {
ExecStart = "${pkgs.roon-bridge}/start.sh";
ExecStart = "${pkgs.roon-bridge}/bin/RoonBridge";
LimitNOFILE = 8192;
User = cfg.user;
Group = cfg.group;

View file

@ -226,7 +226,7 @@ let
in {
meta.maintainers = with maintainers; [ dotlambda ];
meta.doc = ./borgbackup.xml;
meta.doc = ./borgbackup.md;
###### interface

View file

@ -1,215 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-borgbase">
<title>BorgBackup</title>
<para>
<emphasis>Source:</emphasis>
<filename>modules/services/backup/borgbackup.nix</filename>
</para>
<para>
<emphasis>Upstream documentation:</emphasis>
<link xlink:href="https://borgbackup.readthedocs.io/">https://borgbackup.readthedocs.io/</link>
</para>
<para>
<link xlink:href="https://www.borgbackup.org/">BorgBackup</link>
(short: Borg) is a deduplicating backup program. Optionally, it
supports compression and authenticated encryption.
</para>
<para>
The main goal of Borg is to provide an efficient and secure way to
backup data. The data deduplication technique used makes Borg
suitable for daily backups since only changes are stored. The
authenticated encryption technique makes it suitable for backups to
not fully trusted targets.
</para>
<section xml:id="module-services-backup-borgbackup-configuring">
<title>Configuring</title>
<para>
A complete list of options for the Borgbase module may be found
<link linkend="opt-services.borgbackup.jobs">here</link>.
</para>
</section>
<section xml:id="opt-services-backup-borgbackup-local-directory">
<title>Basic usage for a local backup</title>
<para>
A very basic configuration for backing up to a locally accessible
directory is:
</para>
<programlisting>
{
  services.borgbackup.jobs = {
    rootBackup = {
      paths = &quot;/&quot;;
      exclude = [ &quot;/nix&quot; &quot;/path/to/local/repo&quot; ];
      repo = &quot;/path/to/local/repo&quot;;
      doInit = true;
      encryption = {
        mode = &quot;repokey&quot;;
        passphrase = &quot;secret&quot;;
      };
      compression = &quot;auto,lzma&quot;;
      startAt = &quot;weekly&quot;;
    };
  };
}
</programlisting>
<warning>
<para>
If you do not want the passphrase to be stored in the
world-readable Nix store, use passCommand. You find an example
below.
</para>
</warning>
</section>
<section xml:id="opt-services-backup-create-server">
<title>Create a borg backup server</title>
<para>
You should use a different SSH key for each repository you write
to, because the specified keys are restricted to running borg
serve and can only access this single repository. You need the
contents of the generated public key file.
</para>
<programlisting>
# sudo ssh-keygen -N '' -t ed25519 -f /run/keys/id_ed25519_my_borg_repo
# cat /run/keys/id_ed25519_my_borg_repo
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID78zmOyA+5uPG4Ot0hfAy+sLDPU1L4AiIoRYEIVbbQ/ root@nixos
</programlisting>
<para>
Add the following snippet to your NixOS configuration:
</para>
<programlisting>
{
services.borgbackup.repos = {
my_borg_repo = {
authorizedKeys = [
&quot;ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID78zmOyA+5uPG4Ot0hfAy+sLDPU1L4AiIoRYEIVbbQ/ root@nixos&quot;
] ;
path = &quot;/var/lib/my_borg_repo&quot; ;
};
};
}
</programlisting>
</section>
<section xml:id="opt-services-backup-borgbackup-remote-server">
<title>Backup to the borg repository server</title>
<para>
The following NixOS snippet creates an hourly backup to the
service (on the host nixos) as created in the section above. We
assume that you have stored a secret passphrase in the file
<filename>/run/keys/borgbackup_passphrase</filename>, which should
only be accessible by root.
</para>
<programlisting>
{
services.borgbackup.jobs = {
backupToLocalServer = {
paths = [ &quot;/etc/nixos&quot; ];
doInit = true;
repo = &quot;borg@nixos:.&quot; ;
encryption = {
mode = &quot;repokey-blake2&quot;;
passCommand = &quot;cat /run/keys/borgbackup_passphrase&quot;;
};
environment = { BORG_RSH = &quot;ssh -i /run/keys/id_ed25519_my_borg_repo&quot;; };
compression = &quot;auto,lzma&quot;;
startAt = &quot;hourly&quot;;
};
};
};
</programlisting>
<para>
The following few commands (run as root) let you test your backup.
</para>
<programlisting>
&gt; nixos-rebuild switch
...restarting the following units: polkit.service
&gt; systemctl restart borgbackup-job-backupToLocalServer
&gt; sleep 10
&gt; systemctl restart borgbackup-job-backupToLocalServer
&gt; export BORG_PASSPHRASE=topSecret
&gt; borg list --rsh='ssh -i /run/keys/id_ed25519_my_borg_repo' borg@nixos:.
nixos-backupToLocalServer-2020-03-30T21:46:17 Mon, 2020-03-30 21:46:19 [84feb97710954931ca384182f5f3cb90665f35cef214760abd7350fb064786ac]
nixos-backupToLocalServer-2020-03-30T21:46:30 Mon, 2020-03-30 21:46:32 [e77321694ecd160ca2228611747c6ad1be177d6e0d894538898de7a2621b6e68]
</programlisting>
</section>
<section xml:id="opt-services-backup-borgbackup-borgbase">
<title>Backup to a hosting service</title>
<para>
Several companies offer
<link xlink:href="https://www.borgbackup.org/support/commercial.html">(paid)
hosting services</link> for Borg repositories.
</para>
<para>
To backup your home directory to borgbase you have to:
</para>
<itemizedlist>
<listitem>
<para>
Generate an SSH key without a password, to access the remote
server. E.g.
</para>
<programlisting>
sudo ssh-keygen -N '' -t ed25519 -f /run/keys/id_ed25519_borgbase
</programlisting>
</listitem>
<listitem>
<para>
Create the repository on the server by following the
instructions for your hosting server.
</para>
</listitem>
<listitem>
<para>
Initialize the repository on the server. E.g.
</para>
<programlisting>
sudo borg init --encryption=repokey-blake2 \
--rsh &quot;ssh -i /run/keys/id_ed25519_borgbase&quot; \
zzz2aaaaa@zzz2aaaaa.repo.borgbase.com:repo
</programlisting>
</listitem>
<listitem>
<para>
Add it to your NixOS configuration, e.g.
</para>
<programlisting>
{
services.borgbackup.jobs = {
my_Remote_Backup = {
paths = [ &quot;/&quot; ];
exclude = [ &quot;/nix&quot; &quot;'**/.cache'&quot; ];
repo = &quot;zzz2aaaaa@zzz2aaaaa.repo.borgbase.com:repo&quot;;
encryption = {
mode = &quot;repokey-blake2&quot;;
passCommand = &quot;cat /run/keys/borgbackup_passphrase&quot;;
};
environment = { BORG_RSH = &quot;ssh -i /run/keys/id_ed25519_borgbase&quot;; };
compression = &quot;auto,lzma&quot;;
startAt = &quot;daily&quot;;
};
};
}
</programlisting>
</listitem>
</itemizedlist>
</section>
<section xml:id="opt-services-backup-borgbackup-vorta">
<title>Vorta backup client for the desktop</title>
<para>
Vorta is a backup client for macOS and Linux desktops. It
integrates the mighty BorgBackup with your desktop environment to
protect your data from disk failure, ransomware and theft.
</para>
<para>
It can be installed in NixOS e.g. by adding
<literal>pkgs.vorta</literal> to
<xref linkend="opt-environment.systemPackages" />.
</para>
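  <para>
    A minimal sketch of such a configuration:
  </para>
  <programlisting>
environment.systemPackages = [ pkgs.vorta ];
</programlisting>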
<para>
Details about using Vorta can be found under
<link xlink:href="https://vorta.borgbase.com/usage">https://vorta.borgbase.com</link>
.
</para>
</section>
</chapter>

View file

@ -126,6 +126,21 @@ in
];
};
exclude = mkOption {
type = types.listOf types.str;
default = [ ];
description = lib.mdDoc ''
Patterns to exclude when backing up. See
https://restic.readthedocs.io/en/latest/040_backup.html#excluding-files for
details on syntax.
'';
example = [
"/var/cache"
"/home/*/.cache"
".git"
];
};
timerConfig = mkOption {
type = types.attrsOf unitOption;
default = {
@ -249,6 +264,7 @@ in
example = {
localbackup = {
paths = [ "/home" ];
exclude = [ "/home/*/.cache" ];
repository = "/mnt/backup-hdd";
passwordFile = "/etc/nixos/secrets/restic-password";
initialize = true;
@ -270,12 +286,17 @@ in
config = {
warnings = mapAttrsToList (n: v: "services.restic.backups.${n}.s3CredentialsFile is deprecated, please use services.restic.backups.${n}.environmentFile instead.") (filterAttrs (n: v: v.s3CredentialsFile != null) config.services.restic.backups);
assertions = mapAttrsToList (n: v: {
assertion = (v.repository == null) != (v.repositoryFile == null);
message = "services.restic.backups.${n}: exactly one of repository or repositoryFile should be set";
}) config.services.restic.backups;
systemd.services =
mapAttrs'
(name: backup:
let
extraOptions = concatMapStrings (arg: " -o ${arg}") backup.extraOptions;
resticCmd = "${backup.package}/bin/restic${extraOptions}";
excludeFlags = if (backup.exclude != []) then ["--exclude-file=${pkgs.writeText "exclude-patterns" (concatStringsSep "\n" backup.exclude)}"] else [];
filesFromTmpFile = "/run/restic-backups-${name}/includes";
backupPaths =
if (backup.dynamicFilesFrom == null)
@ -311,7 +332,7 @@ in
restartIfChanged = false;
serviceConfig = {
Type = "oneshot";
ExecStart = (optionals (backupPaths != "") [ "${resticCmd} backup --cache-dir=%C/restic-backups-${name} ${concatStringsSep " " backup.extraBackupArgs} ${backupPaths}" ])
ExecStart = (optionals (backupPaths != "") [ "${resticCmd} backup --cache-dir=%C/restic-backups-${name} ${concatStringsSep " " (backup.extraBackupArgs ++ excludeFlags)} ${backupPaths}" ])
++ pruneCmd;
User = backup.user;
RuntimeDirectory = "restic-backups-${name}";
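The assertion added above requires exactly one of repository or repositoryFile per backup job. A sketch of a job using repositoryFile; the job name and all paths below are placeholders:

services.restic.backups.remotebackup = {
  # Keeps the repository URL out of the Nix store; do not set `repository` as well.
  repositoryFile = "/etc/nixos/secrets/restic-repo";
  passwordFile = "/etc/nixos/secrets/restic-password";
  paths = [ "/home" ];
  exclude = [ "/home/*/.cache" ];
};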

View file

@ -424,6 +424,6 @@ in
};
};
meta.doc = ./foundationdb.xml;
meta.doc = ./foundationdb.md;
meta.maintainers = with lib.maintainers; [ thoughtpolice ];
}

View file

@ -1,425 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-foundationdb">
<title>FoundationDB</title>
<para>
<emphasis>Source:</emphasis>
<filename>modules/services/databases/foundationdb.nix</filename>
</para>
<para>
<emphasis>Upstream documentation:</emphasis>
<link xlink:href="https://apple.github.io/foundationdb/">https://apple.github.io/foundationdb/</link>
</para>
<para>
<emphasis>Maintainer:</emphasis> Austin Seipp
</para>
<para>
<emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x
</para>
<para>
FoundationDB (or <quote>FDB</quote>) is an open source, distributed,
transactional key-value store.
</para>
<section xml:id="module-services-foundationdb-configuring">
<title>Configuring and basic setup</title>
<para>
To enable FoundationDB, add the following to your
<filename>configuration.nix</filename>:
</para>
<programlisting>
services.foundationdb.enable = true;
services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
</programlisting>
<para>
The <option>services.foundationdb.package</option> option is
required, and must always be specified. Due to the fact
FoundationDB network protocols and on-disk storage formats may
change between (major) versions, and upgrades must be explicitly
handled by the user, you must always manually specify this
yourself so that the NixOS module will use the proper version.
Note that minor, bugfix releases are always compatible.
</para>
<para>
After running <command>nixos-rebuild</command>, you can verify
whether FoundationDB is running by executing
<command>fdbcli</command> (which is added to
<option>environment.systemPackages</option>):
</para>
<programlisting>
$ sudo -u foundationdb fdbcli
Using cluster file `/etc/foundationdb/fdb.cluster'.
The database is available.
Welcome to the fdbcli. For help, type `help'.
fdb&gt; status
Using cluster file `/etc/foundationdb/fdb.cluster'.
Configuration:
Redundancy mode - single
Storage engine - memory
Coordinators - 1
Cluster:
FoundationDB processes - 1
Machines - 1
Memory availability - 5.4 GB per process on machine with least available
Fault Tolerance - 0 machines
Server time - 04/20/18 15:21:14
...
fdb&gt;
</programlisting>
<para>
You can also write programs using the available client libraries.
For example, the following Python program can be run in order to
grab the cluster status, as a quick example. (This example uses
<command>nix-shell</command> shebang support to automatically
supply the necessary Python modules).
</para>
<programlisting>
a@link&gt; cat fdb-status.py
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python pythonPackages.foundationdb52
import fdb
import json
def main():
fdb.api_version(520)
db = fdb.open()
@fdb.transactional
def get_status(tr):
return str(tr['\xff\xff/status/json'])
obj = json.loads(get_status(db))
print('FoundationDB available: %s' % obj['client']['database_status']['available'])
if __name__ == &quot;__main__&quot;:
main()
a@link&gt; chmod +x fdb-status.py
a@link&gt; ./fdb-status.py
FoundationDB available: True
a@link&gt;
</programlisting>
<para>
FoundationDB is run under the <command>foundationdb</command> user
and group by default, but this may be changed in the NixOS
configuration. The systemd unit
<command>foundationdb.service</command> controls the
<command>fdbmonitor</command> process.
</para>
<para>
By default, the NixOS module for FoundationDB creates a single
SSD-storage based database for development and basic usage. This
storage engine is designed for SSDs and will perform poorly on
HDDs; however it can handle far more data than the alternative
<quote>memory</quote> engine and is a better default choice for
most deployments. (Note that you can change the storage backend
on-the-fly for a given FoundationDB cluster using
<command>fdbcli</command>.)
</para>
<para>
Furthermore, only 1 server process and 1 backup agent are started
in the default configuration. See below for more on scaling to
increase this.
</para>
<para>
FoundationDB stores all data for all server processes under
<filename>/var/lib/foundationdb</filename>. You can override this
using <option>services.foundationdb.dataDir</option>, e.g.
</para>
<programlisting>
services.foundationdb.dataDir = &quot;/data/fdb&quot;;
</programlisting>
<para>
Similarly, logs are stored under
<filename>/var/log/foundationdb</filename> by default, and there
is a corresponding <option>services.foundationdb.logDir</option>
as well.
</para>
</section>
<section xml:id="module-services-foundationdb-scaling">
<title>Scaling processes and backup agents</title>
<para>
Scaling the number of server processes is quite easy; simply
specify <option>services.foundationdb.serverProcesses</option> to
be the number of FoundationDB worker processes that should be
started on the machine.
</para>
<para>
FoundationDB worker processes typically require 4GB of RAM
per-process at minimum for good performance, so this option is set
to 1 by default since the maximum amount of RAM is unknown. You're
advised to abide by this restriction, so pick a number of
processes so that each has 4GB or more.
</para>
<para>
A similar option exists in order to scale backup agent processes,
<option>services.foundationdb.backupProcesses</option>. Backup
agents are not as performance/RAM sensitive, so feel free to
experiment with the number of available backup processes.
</para>
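  <para>
    As a brief sketch, raising both values from their defaults might
    look like this (the numbers are illustrative):
  </para>
  <programlisting>
services.foundationdb.serverProcesses = 4;
services.foundationdb.backupProcesses = 2;
</programlisting>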
</section>
<section xml:id="module-services-foundationdb-clustering">
<title>Clustering</title>
<para>
FoundationDB on NixOS works similarly to other Linux systems, so
this section will be brief. Please refer to the full FoundationDB
documentation for more on clustering.
</para>
<para>
FoundationDB organizes clusters using a set of
<emphasis>coordinators</emphasis>, which are just
specially-designated worker processes. By default, every
installation of FoundationDB on NixOS will start as its own
individual cluster, with a single coordinator: the first worker
process on <command>localhost</command>.
</para>
<para>
Coordinators are specified globally using the
<command>/etc/foundationdb/fdb.cluster</command> file, which all
servers and client applications will use to find and join
coordinators. Note that this file <emphasis>can not</emphasis> be
managed by NixOS so easily: FoundationDB is designed so that it
will rewrite the file at runtime for all clients and nodes when
cluster coordinators change, with clients transparently handling
this without intervention. It is fundamentally a mutable file, and
you should not try to manage it in any way in NixOS.
</para>
<para>
When dealing with a cluster, there are two main things you want to
do:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
Add a node to the cluster for storage/compute.
</para>
</listitem>
<listitem>
<para>
Promote an ordinary worker to a coordinator.
</para>
</listitem>
</itemizedlist>
<para>
A node must already be a member of the cluster in order to
properly be promoted to a coordinator, so you must always add it
first if you wish to promote it.
</para>
<para>
To add a machine to a FoundationDB cluster:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
Choose one of the servers to start as the initial coordinator.
</para>
</listitem>
<listitem>
<para>
Copy the <command>/etc/foundationdb/fdb.cluster</command> file
from this server to all the other servers. Restart
FoundationDB on all of these other servers, so they join the
cluster.
</para>
</listitem>
<listitem>
<para>
All of these servers are now connected and working together in
the cluster, under the chosen coordinator.
</para>
</listitem>
</itemizedlist>
<para>
At this point, you can add as many nodes as you want by just
repeating the above steps. By default there will still be a single
coordinator: you can use <command>fdbcli</command> to change this
and add new coordinators.
</para>
<para>
As a convenience, FoundationDB can automatically assign
coordinators based on the redundancy mode you wish to achieve for
the cluster. Once all the nodes have been joined, simply set the
replication policy, and then issue the
<command>coordinators auto</command> command.
</para>
<para>
For example, assuming we have 3 nodes available, we can enable
double redundancy mode, then auto-select coordinators. For double
redundancy, 3 coordinators is ideal: therefore FoundationDB will
make <emphasis>every</emphasis> node a coordinator automatically:
</para>
<programlisting>
fdbcli&gt; configure double ssd
fdbcli&gt; coordinators auto
</programlisting>
<para>
This will transparently update all the servers within seconds, and
appropriately rewrite the <command>fdb.cluster</command> file, as
well as informing all client processes to do the same.
</para>
</section>
<section xml:id="module-services-foundationdb-connectivity">
<title>Client connectivity</title>
<para>
By default, all clients must use the current
<command>fdb.cluster</command> file to access a given FoundationDB
cluster. This file is located by default in
<command>/etc/foundationdb/fdb.cluster</command> on all machines
with the FoundationDB service enabled, so you may copy the active
one from your cluster to a new node in order to connect, if it is
not part of the cluster.
</para>
</section>
<section xml:id="module-services-foundationdb-authorization">
<title>Client authorization and TLS</title>
<para>
By default, any user who can connect to a FoundationDB process
with the correct cluster configuration can access anything.
FoundationDB uses a pluggable design to transport security, and
out of the box it supports a LibreSSL-based plugin for TLS
support. This plugin not only does in-flight encryption, but also
performs client authorization based on the given endpoint's
certificate chain. For example, a FoundationDB server may be
configured to only accept client connections over TLS, where the
client TLS certificate is from organization <emphasis>Acme
Co</emphasis> in the <emphasis>Research and Development</emphasis>
unit.
</para>
<para>
Configuring TLS with FoundationDB is done using the
<option>services.foundationdb.tls</option> options in order to
control the peer verification string, as well as the certificate
and its private key.
</para>
<para>
Note that the certificate and its private key must be accessible
to the FoundationDB user account that the server runs under. These
files are also NOT managed by NixOS, as putting them into the
store may reveal private information.
</para>
<para>
After you have a key and certificate file in place, it is not
enough to simply set the NixOS module options; you must also
configure the <command>fdb.cluster</command> file to specify that
a given set of coordinators use TLS. This is as simple as adding
the suffix <command>:tls</command> to your cluster coordinator
configuration, after the port number. For example, assuming you
have a coordinator on localhost with the default configuration,
simply specifying:
</para>
<programlisting>
XXXXXX:XXXXXX@127.0.0.1:4500:tls
</programlisting>
<para>
will configure all clients and server processes to use TLS from
now on.
</para>
</section>
<section xml:id="module-services-foundationdb-disaster-recovery">
<title>Backups and Disaster Recovery</title>
<para>
The usual rules for doing FoundationDB backups apply on NixOS as
written in the FoundationDB manual. However, one important
difference is the security profile for NixOS: by default, the
<command>foundationdb</command> systemd unit uses <emphasis>Linux
namespaces</emphasis> to restrict write access to the system,
except for the log directory, data directory, and the
<command>/etc/foundationdb/</command> directory. This is enforced
by default and cannot be disabled.
</para>
<para>
However, a side effect of this is that the
<command>fdbbackup</command> command doesn't work properly for
local filesystem backups: FoundationDB uses a server process
alongside the database processes to perform backups and copy the
backups to the filesystem. As a result, this process is put under
the restricted namespaces above: the backup process can only write
to a limited number of paths.
</para>
<para>
In order to allow flexible backup locations on local disks, the
FoundationDB NixOS module supports a
<option>services.foundationdb.extraReadWritePaths</option> option.
This option takes a list of paths, and adds them to the systemd
unit, allowing the processes inside the service to write (and
read) the specified directories.
</para>
<para>
For example, to create backups in
<command>/opt/fdb-backups</command>, first set up the paths in the
module options:
</para>
<programlisting>
services.foundationdb.extraReadWritePaths = [ &quot;/opt/fdb-backups&quot; ];
</programlisting>
<para>
Restart the FoundationDB service, and it will now be able to write
to this directory (even if it does not yet exist.) Note: this path
<emphasis>must</emphasis> exist before restarting the unit.
Otherwise, systemd will not include it in the private FoundationDB
namespace (and it will not add it dynamically at runtime).
</para>
<para>
You can now perform a backup:
</para>
<programlisting>
$ sudo -u foundationdb fdbbackup start -t default -d file:///opt/fdb-backups
$ sudo -u foundationdb fdbbackup status -t default
</programlisting>
</section>
<section xml:id="module-services-foundationdb-limitations">
<title>Known limitations</title>
<para>
The FoundationDB setup for NixOS should currently be considered
beta. FoundationDB is not new software, but the NixOS compilation
and integration has only undergone fairly basic testing of all the
available functionality.
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
There is no way to specify individual parameters for
individual <command>fdbserver</command> processes. Currently,
all server processes inherit all the global
<command>fdbmonitor</command> settings.
</para>
</listitem>
<listitem>
<para>
Ruby bindings are not currently installed.
</para>
</listitem>
<listitem>
<para>
Go bindings are not currently installed.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="module-services-foundationdb-options">
<title>Options</title>
<para>
NixOS's FoundationDB module allows you to configure all of the
most relevant configuration options for
<command>fdbmonitor</command>, matching it quite closely. A
complete list of options for the FoundationDB module may be found
<link linkend="opt-services.foundationdb.enable">here</link>. You
should also read the FoundationDB documentation as well.
</para>
</section>
<section xml:id="module-services-foundationdb-full-docs">
<title>Full documentation</title>
<para>
FoundationDB is a complex piece of software, and requires careful
administration to properly use. Full documentation for
administration can be found here:
<link xlink:href="https://apple.github.io/foundationdb/">https://apple.github.io/foundationdb/</link>.
</para>
</section>
</chapter>

View file

@ -585,6 +585,6 @@ in
};
meta.doc = ./postgresql.xml;
meta.doc = ./postgresql.md;
meta.maintainers = with lib.maintainers; [ thoughtpolice danbst ];
}

View file

@ -1,250 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-postgresql">
<title>PostgreSQL</title>
<para>
<emphasis>Source:</emphasis>
<filename>modules/services/databases/postgresql.nix</filename>
</para>
<para>
<emphasis>Upstream documentation:</emphasis>
<link xlink:href="http://www.postgresql.org/docs/">http://www.postgresql.org/docs/</link>
</para>
<para>
PostgreSQL is an advanced, free relational database.
</para>
<section xml:id="module-services-postgres-configuring">
<title>Configuring</title>
<para>
To enable PostgreSQL, add the following to your
<filename>configuration.nix</filename>:
</para>
<programlisting>
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_11;
</programlisting>
<para>
Note that you are required to specify the desired version of
PostgreSQL (e.g. <literal>pkgs.postgresql_11</literal>). Since
upgrading your PostgreSQL version requires a database dump and
reload (see below), NixOS cannot provide a default value for
<xref linkend="opt-services.postgresql.package" /> such as the
most recent release of PostgreSQL.
</para>
<para>
By default, PostgreSQL stores its databases in
<filename>/var/lib/postgresql/$psqlSchema</filename>. You can
override this using
<xref linkend="opt-services.postgresql.dataDir" />, e.g.
</para>
<programlisting>
services.postgresql.dataDir = &quot;/data/postgresql&quot;;
</programlisting>
</section>
<section xml:id="module-services-postgres-upgrading">
<title>Upgrading</title>
<note>
<para>
The steps below demonstrate how to upgrade from an older version
to <literal>pkgs.postgresql_13</literal>. These instructions are
also applicable to other versions.
</para>
</note>
<para>
Major PostgreSQL upgrades require a downtime and a few imperative
steps to be called. This is the case because each major version
has some internal changes in the databases' state during major
releases. Because of that, NixOS places the state into
<filename>/var/lib/postgresql/&lt;version&gt;</filename> where
each <literal>version</literal> can be obtained like this:
</para>
<programlisting>
$ nix-instantiate --eval -A postgresql_13.psqlSchema
&quot;13&quot;
</programlisting>
<para>
For an upgrade, a script like this can be used to simplify the
process:
</para>
<programlisting>
{ config, pkgs, ... }:
{
environment.systemPackages = [
(let
# XXX specify the postgresql package you'd like to upgrade to.
# Do not forget to list the extensions you need.
newPostgres = pkgs.postgresql_13.withPackages (pp: [
# pp.plv8
]);
in pkgs.writeScriptBin &quot;upgrade-pg-cluster&quot; ''
set -eux
# XXX it's perhaps advisable to stop all services that depend on postgresql
systemctl stop postgresql
export NEWDATA=&quot;/var/lib/postgresql/${newPostgres.psqlSchema}&quot;
export NEWBIN=&quot;${newPostgres}/bin&quot;
export OLDDATA=&quot;${config.services.postgresql.dataDir}&quot;
export OLDBIN=&quot;${config.services.postgresql.package}/bin&quot;
install -d -m 0700 -o postgres -g postgres &quot;$NEWDATA&quot;
cd &quot;$NEWDATA&quot;
sudo -u postgres $NEWBIN/initdb -D &quot;$NEWDATA&quot;
sudo -u postgres $NEWBIN/pg_upgrade \
--old-datadir &quot;$OLDDATA&quot; --new-datadir &quot;$NEWDATA&quot; \
--old-bindir $OLDBIN --new-bindir $NEWBIN \
&quot;$@&quot;
'')
];
}
</programlisting>
<para>
The upgrade process is:
</para>
<orderedlist numeration="arabic">
<listitem>
<para>
Rebuild the NixOS configuration with the snippet above added to
your <filename>configuration.nix</filename>. Alternatively, put
it into a separate file and reference it in the
<literal>imports</literal> list.
</para>
</listitem>
<listitem>
<para>
Log in as root (<literal>sudo su -</literal>)
</para>
</listitem>
<listitem>
<para>
Run <literal>upgrade-pg-cluster</literal>. It will stop the old
PostgreSQL instance, initialize a new one and migrate the old
data to it. You may supply arguments like
<literal>--jobs 4</literal> and <literal>--link</literal> to
speed up the migration process, as shown below. See
<link xlink:href="https://www.postgresql.org/docs/current/pgupgrade.html">https://www.postgresql.org/docs/current/pgupgrade.html</link>
for details.
</para>
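<para>
For example, since the extra flags are passed straight through to
<literal>pg_upgrade</literal>, a migration using hard links and
four parallel jobs can be requested like this (run as root):
</para>
<programlisting>
# upgrade-pg-cluster --jobs 4 --link
</programlisting>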
</listitem>
<listitem>
<para>
Change the PostgreSQL package in your NixOS configuration to the
one you upgraded to via
<xref linkend="opt-services.postgresql.package" /> (see the
example below) and rebuild NixOS. This should start the new
PostgreSQL instance using the upgraded data directory, along with
all services you stopped during the upgrade.
</para>
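<para>
Continuing the example above, that change would look like this:
</para>
<programlisting>
services.postgresql.package = pkgs.postgresql_13;
</programlisting>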
</listitem>
<listitem>
<para>
After the upgrade it’s advisable to analyze the new cluster.
</para>
<itemizedlist>
<listitem>
<para>
For PostgreSQL ≥ 14, use the <literal>vacuumdb</literal>
command printed by the upgrade script (see the sketch after
this list).
</para>
</listitem>
<listitem>
<para>
For PostgreSQL &lt; 14, run (as
<literal>su -l postgres</literal> in the
<xref linkend="opt-services.postgresql.dataDir" />, in
this example <filename>/var/lib/postgresql/13</filename>):
</para>
<programlisting>
$ ./analyze_new_cluster.sh
</programlisting>
</listitem>
</itemizedlist>
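<para>
For PostgreSQL ≥ 14, the printed command is typically along these
lines (this is a sketch; take the exact invocation from the
script’s output):
</para>
<programlisting>
$ sudo -u postgres vacuumdb --all --analyze-in-stages
</programlisting>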
<warning>
<para>
The next step removes the old state-directory!
</para>
</warning>
<programlisting>
$ ./delete_old_cluster.sh
</programlisting>
</listitem>
</orderedlist>
</section>
<section xml:id="module-services-postgres-options">
<title>Options</title>
<para>
A complete list of options for the PostgreSQL module may be found
<link linkend="opt-services.postgresql.enable">here</link>.
</para>
</section>
<section xml:id="module-services-postgres-plugins">
<title>Plugins</title>
<para>
The plugin collection for each PostgreSQL version can be accessed
with <literal>.pkgs</literal>. For example, for the
<literal>pkgs.postgresql_11</literal> package, its plugin
collection is accessed as
<literal>pkgs.postgresql_11.pkgs</literal>:
</para>
<programlisting>
$ nix repl '&lt;nixpkgs&gt;'
Loading '&lt;nixpkgs&gt;'...
Added 10574 variables.
nix-repl&gt; postgresql_11.pkgs.&lt;TAB&gt;&lt;TAB&gt;
postgresql_11.pkgs.cstore_fdw postgresql_11.pkgs.pg_repack
postgresql_11.pkgs.pg_auto_failover postgresql_11.pkgs.pg_safeupdate
postgresql_11.pkgs.pg_bigm postgresql_11.pkgs.pg_similarity
postgresql_11.pkgs.pg_cron postgresql_11.pkgs.pg_topn
postgresql_11.pkgs.pg_hll postgresql_11.pkgs.pgjwt
postgresql_11.pkgs.pg_partman postgresql_11.pkgs.pgroonga
...
</programlisting>
<para>
To add plugins via NixOS configuration, set
<literal>services.postgresql.extraPlugins</literal>:
</para>
<programlisting>
services.postgresql.package = pkgs.postgresql_11;
services.postgresql.extraPlugins = with pkgs.postgresql_11.pkgs; [
pg_repack
postgis
];
</programlisting>
<para>
You can build a custom PostgreSQL-with-plugins package (to be
used outside of NixOS) using the <literal>.withPackages</literal>
function. For example, creating a custom PostgreSQL package in an
overlay can look like this:
</para>
<programlisting>
self: super: {
postgresql_custom = self.postgresql_11.withPackages (ps: [
ps.pg_repack
ps.postgis
]);
}
</programlisting>
<para>
Heres a recipe on how to override a particular plugin through an
overlay:
</para>
<programlisting>
self: super: {
postgresql_11 = super.postgresql_11.override { this = self.postgresql_11; } // {
pkgs = super.postgresql_11.pkgs // {
pg_repack = super.postgresql_11.pkgs.pg_repack.overrideAttrs (_: {
name = &quot;pg_repack-v20181024&quot;;
src = self.fetchzip {
url = &quot;https://github.com/reorg/pg_repack/archive/923fa2f3c709a506e111cc963034bf2fd127aa00.tar.gz&quot;;
sha256 = &quot;17k6hq9xaax87yz79j773qyigm4fwk8z4zh5cyp6z0sxnwfqxxw5&quot;;
};
});
};
};
}
</programlisting>
</section>
</chapter>

View file

@ -7,7 +7,7 @@ let
cfg = config.services.flatpak;
in {
meta = {
doc = ./flatpak.xml;
doc = ./flatpak.md;
maintainers = pkgs.flatpak.meta.maintainers;
};

View file

@ -1,59 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-flatpak">
<title>Flatpak</title>
<para>
<emphasis>Source:</emphasis>
<filename>modules/services/desktop/flatpak.nix</filename>
</para>
<para>
<emphasis>Upstream documentation:</emphasis>
<link xlink:href="https://github.com/flatpak/flatpak/wiki">https://github.com/flatpak/flatpak/wiki</link>
</para>
<para>
Flatpak is a system for building, distributing, and running
sandboxed desktop applications on Linux.
</para>
<para>
To enable Flatpak, add the following to your
<filename>configuration.nix</filename>:
</para>
<programlisting>
services.flatpak.enable = true;
</programlisting>
<para>
For the sandboxed apps to work correctly, desktop integration
portals need to be installed. If you run GNOME, this will be handled
automatically for you; in other cases, you will need to add
something like the following to your
<filename>configuration.nix</filename>:
</para>
<programlisting>
xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];
</programlisting>
<para>
Then, you will need to add a repository, for example
<link xlink:href="https://github.com/flatpak/flatpak/wiki">Flathub</link>,
either by using the following commands:
</para>
<programlisting>
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak update
</programlisting>
<para>
or by opening the
<link xlink:href="https://flathub.org/repo/flathub.flatpakrepo">repository
file</link> in GNOME Software.
</para>
<para>
Finally, you can search and install programs:
</para>
<programlisting>
$ flatpak search bustle
$ flatpak install flathub org.freedesktop.Bustle
$ flatpak run org.freedesktop.Bustle
</programlisting>
<para>
Again, GNOME Software offers a graphical interface for these tasks.
</para>
</chapter>

View file

@ -35,5 +35,20 @@
}
],
"filter.properties": {},
"stream.properties": {}
"stream.properties": {},
"alsa.properties": {},
"alsa.rules": [
{
"matches": [
{
"application.process.binary": "resolve"
}
],
"actions": {
"update-props": {
"alsa.buffer-bytes": 131072
}
}
}
]
}

View file

@ -58,6 +58,18 @@
"node.passive": true
}
}
},
{
"matches": [
{
"client.name": "Mixxx"
}
],
"actions": {
"update-props": {
"jack.merge-monitor": false
}
}
}
]
}

Some files were not shown because too many files have changed in this diff