Project import generated by Copybara.

GitOrigin-RevId: 20fc948445a6c22d4e8d5178e9a6bc6e1f5417c8
Default email 2022-11-21 19:40:18 +02:00
parent 8d4ed3dc15
commit 01ed8ef136
2418 changed files with 66508 additions and 47847 deletions

View file

@ -49,11 +49,16 @@
/pkgs/build-support/writers @lassulus @Profpatsch
# Nixpkgs documentation
/doc @fricklerhandwerk
/maintainers/scripts/db-to-md.sh @jtojnar @ryantm
/maintainers/scripts/doc @jtojnar @ryantm
/doc/* @fricklerhandwerk
/doc/build-aux/pandoc-filters @jtojnar
-/doc/contributing/contributing-to-documentation.chapter.md @jtojnar
+/doc/builders/trivial-builders.chapter.md @fricklerhandwerk
/doc/contributing/ @fricklerhandwerk
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar @fricklerhandwerk
/doc/stdenv @fricklerhandwerk
/doc/using @fricklerhandwerk
# NixOS Internals
/nixos/default.nix @nbp @infinisil
@ -289,3 +294,8 @@
# Dotnet
/pkgs/build-support/dotnet @IvarWithoutBones
/pkgs/development/compilers/dotnet @IvarWithoutBones
# Node.js
/pkgs/build-support/node/build-npm-package @winterqt
/pkgs/build-support/node/fetch-npm-deps @winterqt
/doc/languages-frameworks/javascript.section.md @winterqt

View file

@ -0,0 +1,31 @@
---
name: Unreproducible package
about: A package that does not produce a bit-by-bit reproducible result each time it is built
title: ''
labels: '0.kind: enhancement', '6.topic: reproducible builds'
assignees: ''
---
Building this package twice does not produce a bit-by-bit identical result, making it harder to detect CI breaches. You can read more about this at https://reproducible-builds.org/ .
Fixing bit-by-bit reproducibility also has additional advantages, such as avoiding hard-to-reproduce bugs, making content-addressed storage more effective and reducing rebuilds in such systems.
### Steps To Reproduce
```
nix-build '<nixpkgs>' -A ... --check --keep-failed
```
You can use `diffoscope` to analyze the differences in the output of the two builds.
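For example (placeholder store paths), when the `--check` rebuild differs, `--keep-failed` leaves the second result next to the original output with a `.check` suffix, so the two can be compared directly:
```console
$ diffoscope /nix/store/<hash>-<pkg> /nix/store/<hash>-<pkg>.check
```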
To view the build log of the build that produced the artifact in the binary cache:
```
nix-store --read-log $(nix-instantiate '<nixpkgs>' -A ...)
```
### Additional context
(please share the relevant fragment of the diffoscope output here,
and any additional analysis you may have done)

View file

@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"

View file

@ -51,7 +51,7 @@ See the nixpkgs manual for more details on [standard meta-attributes](https://ni
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
-For package version upgrades and such a one-line commit message is usually sufficient.
+Package version upgrades usually allow for simpler commit messages, including attribute name, old and new version, as well as a reference to the relevant release notes/changelog. Every once in a while a package upgrade requires more extensive changes, and that subsequently warrants a more verbose message.
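For instance, a typical version-bump commit message might look like this (hypothetical package and versions):
```
libfoo: 1.2.3 -> 1.3.0

Changelog: https://github.com/example/libfoo/releases/tag/v1.3.0
```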
## Rebasing between branches (i.e. from master to staging)

View file

@ -37,7 +37,7 @@ dependencies of the two derivations in the `units` list.
`units` must be a list of derivations, and their names must be prefixed with the service name (`"demo"` in this case).
Otherwise `systemd-portabled` will ignore them.
-:::{.Note}
+::: {.note}
The `.raw` file extension of the image is required by the portable services specification.
:::
@ -76,6 +76,6 @@ portablectl attach demo_1.0.raw
systemctl enable --now demo.socket
systemctl enable --now demo.service
```
-:::{.Note}
+::: {.note}
See the [man page](https://www.freedesktop.org/software/systemd/man/portablectl.html) of `portablectl` for more info on its usage.
:::

View file

@ -35,6 +35,70 @@ passthru.tests.version = testers.testVersion {
};
```
## `testBuildFailure` {#tester-testBuildFailure}
Make sure that a build does not succeed. This is useful for testing testers.
This returns a derivation with an override on the builder, with the following effects:
- Fail the build when the original builder succeeds
- Move `$out` to `$out/result`, if it exists (assuming `out` is the default output)
- Save the build log to `$out/testBuildFailure.log` (same)
Example:
```nix
runCommand "example" {
failed = testers.testBuildFailure (runCommand "fail" {} ''
echo ok-ish >$out
echo failing though
exit 3
'');
} ''
grep -F 'ok-ish' $failed/result
grep -F 'failing though' $failed/testBuildFailure.log
[[ 3 = $(cat $failed/testBuildFailure.exit) ]]
touch $out
'';
```
While `testBuildFailure` is designed to keep changes to the original builder's
environment to a minimum, some small changes are inevitable.
- The file `$TMPDIR/testBuildFailure.log` is present. It should not be deleted.
- `stdout` and `stderr` are a pipe instead of a tty. This could be improved.
- One or two extra processes are present in the sandbox during the original
builder's execution.
- The derivation and output hashes are different, but not unusual.
- The derivation includes a dependency on `buildPackages.bash` and
`expect-failure.sh`, which is built to include a transitive dependency on
`buildPackages.coreutils` and possibly more. These are not added to `PATH`
or any other environment variable, so they should be hard to observe.
## `testEqualContents` {#tester-equalContents}
Check that two paths have the same contents.
Example:
```nix
testers.testEqualContents {
assertion = "sed -e performs replacement";
expected = writeText "expected" ''
foo baz baz
'';
actual = runCommand "actual" {
# not really necessary for a package that's in stdenv
nativeBuildInputs = [ gnused ];
base = writeText "base" ''
foo bar baz
'';
} ''
sed -e 's/bar/baz/g' $base >$out
'';
}
```
## `testEqualDerivation` {#tester-testEqualDerivation}
Checks that two packages produce the exact same build instructions.

View file

@ -22,6 +22,7 @@
<xi:include href="./libxml2.section.xml" />
<xi:include href="./meson.section.xml" />
<xi:include href="./ninja.section.xml" />
<xi:include href="./patch-rc-path-hooks.section.xml" />
<xi:include href="./perl.section.xml" />
<xi:include href="./pkg-config.section.xml" />
<xi:include href="./postgresql-test-hook.section.xml" />

View file

@ -0,0 +1,50 @@
# `patchRcPath` hooks {#sec-patchRcPathHooks}
These hooks provide shell-specific utilities (with the same name as the hook) to patch shell scripts meant to be sourced by software users.
The typical usage is to patch initialisation or [rc](https://unix.stackexchange.com/questions/3467/what-does-rc-in-bashrc-stand-for) scripts inside `$out/bin` or `$out/etc`.
Such scripts, when being sourced, would insert the binary locations of certain commands into `PATH`, modify other environment variables or run a series of start-up commands.
As shipped by upstream, they sometimes use commands that might not be available in the environment in which they are sourced.
The compatible shells for each hook are:
- `patchRcPathBash`: [Bash](https://www.gnu.org/software/bash/), [ksh](http://www.kornshell.org/), [zsh](https://www.zsh.org/) and other shells supporting the Bash-like parameter expansions.
- `patchRcPathCsh`: Csh scripts, such as those targeting [tcsh](https://www.tcsh.org/).
- `patchRcPathFish`: [Fish](https://fishshell.com/) scripts.
- `patchRcPathPosix`: POSIX-conformant shells supporting the limited parameter expansions specified by the POSIX standard. Current implementation uses the parameter expansion `${foo-}` only.
For each supported shell, it modifies the script with a `PATH` prefix that is later removed when the script ends.
It allows nested patching, which guarantees that a patched script may source another patched script.
Syntax to apply the utility to a script:
```sh
patchRcPath<shell> <file> <PATH-prefix>
```
Example usage:
Given a package `foo` containing an init script `this-foo.fish` that depends on `coreutils`, `man` and `which`,
patch the init script for users to source without having the above dependencies in their `PATH`:
```nix
{ lib, stdenv, patchRcPathFish, coreutils, man, which }:
stdenv.mkDerivation {
# ...
nativeBuildInputs = [
patchRcPathFish
];
postFixup = ''
patchRcPathFish $out/bin/this-foo.fish ${lib.makeBinPath [ coreutils man which ]}
'';
}
```
::: {.note}
The `patchRcPathCsh` and `patchRcPathPosix` implementations depend on `sed` for string processing.
The others are in vanilla shell and have no third-party dependencies.
:::

View file

@ -157,6 +157,61 @@ git config --global url."https://github.com/".insteadOf git://github.com/
## Tool specific instructions {#javascript-tool-specific}
### buildNpmPackage {#javascript-buildNpmPackage}
`buildNpmPackage` allows you to package npm-based projects in Nixpkgs without the use of an auto-generated dependencies file (as used in [node2nix](#javascript-node2nix)). It works by utilizing npm's cache functionality -- creating a reproducible cache that contains the dependencies of a project, and pointing npm to it.
```nix
{ lib, buildNpmPackage, fetchFromGitHub }:
buildNpmPackage rec {
pname = "flood";
version = "4.7.0";
src = fetchFromGitHub {
owner = "jesec";
repo = pname;
rev = "v${version}";
hash = "sha256-BR+ZGkBBfd0dSQqAvujsbgsEPFYw/ThrylxUbOksYxM=";
};
patches = [ ./remove-prepack-script.patch ];
npmDepsHash = "sha256-s8SpZY/1tKZVd3vt7sA9vsqHvEaNORQBMrSyhWpj048=";
NODE_OPTIONS = "--openssl-legacy-provider";
meta = with lib; {
description = "A modern web UI for various torrent clients with a Node.js backend and React frontend";
homepage = "https://flood.js.org";
license = licenses.gpl3Only;
maintainers = with maintainers; [ winter ];
};
}
```
#### Arguments {#javascript-buildNpmPackage-arguments}
* `npmDepsHash`: The output hash of the dependencies for this project. Can be calculated in advance with [`prefetch-npm-deps`](#javascript-buildNpmPackage-prefetch-npm-deps).
* `makeCacheWritable`: Whether to make the cache writable prior to installing dependencies. Don't set this unless npm tries to write to the cache directory, as it can slow down the build.
* `npmBuildScript`: The script to run to build the project. Defaults to `"build"`.
* `npmFlags`: Flags to pass to all npm commands.
* `npmInstallFlags`: Flags to pass to `npm ci`.
* `npmBuildFlags`: Flags to pass to `npm run ${npmBuildScript}`.
* `npmPackFlags`: Flags to pass to `npm pack`.
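As a rough sketch (hypothetical package, placeholder hash) of how some of these arguments fit together:
```nix
buildNpmPackage {
  pname = "example";
  version = "0.1.0";
  src = ./.;
  npmDepsHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  # run `npm run build:prod` instead of the default `npm run build`
  npmBuildScript = "build:prod";
  # extra flags passed to every npm invocation
  npmFlags = [ "--legacy-peer-deps" ];
}
```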
#### prefetch-npm-deps {#javascript-buildNpmPackage-prefetch-npm-deps}
`prefetch-npm-deps` can calculate the hash of the dependencies of an npm project ahead of time.
```console
$ ls
package.json package-lock.json index.js
$ prefetch-npm-deps package-lock.json
...
sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
```
### node2nix {#javascript-node2nix}
#### Preparation {#javascript-node2nix-preparation}

View file

@ -789,7 +789,7 @@ documentation source root.
```
The hook is also available to packages outside the python ecosystem by
-referencing it using `python3.pkgs.sphinxHook`.
+referencing it using `sphinxHook` from top-level.
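A minimal sketch of what that might look like for a hypothetical non-Python package:
```nix
{ stdenv, sphinxHook }:

stdenv.mkDerivation {
  pname = "example-docs";
  version = "1.0";
  src = ./.;
  # the hook runs sphinx-build over the documentation sources during the build
  nativeBuildInputs = [ sphinxHook ];
}
```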
### Develop local package {#develop-local-package}

View file

@ -15,7 +15,7 @@ For other versions such as daily builds (beta and nightly),
use either `rustup` from nixpkgs (which will manage the rust installation in your home directory),
or use a community maintained [Rust overlay](#using-community-rust-overlays).
-## Compiling Rust applications with Cargo {#compiling-rust-applications-with-cargo}
+## `buildRustPackage`: Compiling Rust applications with Cargo {#compiling-rust-applications-with-cargo}
Rust applications are packaged by using the `buildRustPackage` helper from `rustPlatform`:
@ -608,7 +608,7 @@ buildPythonPackage rec {
}
```
-## Compiling Rust crates using Nix instead of Cargo {#compiling-rust-crates-using-nix-instead-of-cargo}
+## `buildRustCrate`: Compiling Rust crates using Nix instead of Cargo {#compiling-rust-crates-using-nix-instead-of-cargo}
### Simple operation {#simple-operation}

View file

@ -125,7 +125,7 @@ If one of your favourite plugins isn't packaged, you can package it yourself:
{ config, pkgs, ... }:
let
-easygrep = pkgs.vimUtils.buildVimPlugin {
+easygrep = pkgs.vimUtils.buildVimPluginFrom2Nix {
name = "vim-easygrep";
src = pkgs.fetchFromGitHub {
owner = "dkprice";
@ -155,6 +155,8 @@ in
}
```
If your package requires building specific parts, use instead `pkgs.vimUtils.buildVimPlugin`.
### Specificities for some plugins
#### Treesitter

View file

@ -250,5 +250,5 @@ Thirdly, it is because everything target-mentioning only exists to accommodate c
:::
::: {.note}
-If one explores Nixpkgs, they will see derivations with names like `gccCross`. Such `*Cross` derivations is a holdover from before we properly distinguished between the host and target platforms—the derivation with “Cross” in the name covered the `build = host != target` case, while the other covered the `host = target`, with build platform the same or not based on whether one was using its `.nativeDrv` or `.crossDrv`. This ugliness will disappear soon.
+If one explores Nixpkgs, they will see derivations with names like `gccCross`. Such `*Cross` derivations is a holdover from before we properly distinguished between the host and target platforms—the derivation with “Cross” in the name covered the `build = host != target` case, while the other covered the `host = target`, with build platform the same or not based on whether one was using its `.__spliced.buildHost` or `.__spliced.hostTarget`.
:::

View file

@ -44,8 +44,8 @@ $ nix-env -qa hello --json
"mips32-linux",
"x86_64-darwin",
"i686-cygwin",
-"i686-freebsd",
+"i686-freebsd13",
-"x86_64-freebsd",
+"x86_64-freebsd13",
"i686-openbsd",
"x86_64-openbsd"
],

View file

@ -887,7 +887,7 @@ Packages may expect or require other utilities to be available at runtime.
Use `--prefix` to explicitly set dependencies in `PATH`.
-:::{note}
+::: {.note}
`--prefix` essentially hard-codes dependencies into the wrapper.
They cannot be overridden without rebuilding the package.
:::
@ -1140,6 +1140,13 @@ Here are some more packages that provide a setup hook. Since the list of hooks i
Many other packages provide hooks, that are not part of `stdenv`. You can find
these in the [Hooks Reference](#chap-hooks).
### Compiler and Linker wrapper hooks {#compiler-linker-wrapper-hooks}
If the file `${cc}/nix-support/cc-wrapper-hook` exists, it will be run at the end of the [compiler wrapper](#cc-wrapper).
If the file `${binutils}/nix-support/post-link-hook` exists, it will be run at the end of the linker wrapper.
These hooks allow a user to inject code into the wrappers.
As an example, these hooks can be used to extract `extraBefore`, `params` and `extraAfter` which store all the command line arguments passed to the compiler and linker respectively.
## Purity in Nixpkgs {#sec-purity-in-nixpkgs}
*Measures taken to prevent dependencies on packages outside the store, and what you can do to prevent them.*

View file

@ -11,8 +11,7 @@
lib = import ./lib;
-forAllSystems = f: lib.genAttrs lib.systems.flakeExposed (system: f system);
+forAllSystems = lib.genAttrs lib.systems.flakeExposed;
in
{
lib = lib.extend (final: prev: {
@ -57,7 +56,7 @@
legacyPackages = forAllSystems (system: import ./. { inherit system; });
nixosModules = {
-notDetected = import ./nixos/modules/installer/scan/not-detected.nix;
+notDetected = ./nixos/modules/installer/scan/not-detected.nix;
};
};
}

View file

@ -3,7 +3,7 @@
let
inherit (builtins) head tail length;
-inherit (lib.trivial) id;
+inherit (lib.trivial) flip id mergeAttrs pipe;
inherit (lib.strings) concatStringsSep concatMapStringsSep escapeNixIdentifier sanitizeDerivationName;
inherit (lib.lists) foldr foldl' concatMap concatLists elemAt all partition groupBy take foldl;
in
@ -77,6 +77,25 @@ rec {
let errorMsg = "cannot find attribute `" + concatStringsSep "." attrPath + "'";
in attrByPath attrPath (abort errorMsg);
/* Map each attribute in the given set and merge them into a new attribute set.
Type:
concatMapAttrs ::
(String -> a -> AttrSet)
-> AttrSet
-> AttrSet
Example:
concatMapAttrs
(name: value: {
${name} = value;
${name + value} = value;
})
{ x = "a"; y = "b"; }
=> { x = "a"; xa = "a"; y = "b"; yb = "b"; }
*/
concatMapAttrs = f: flip pipe [ (mapAttrs f) attrValues (foldl' mergeAttrs { }) ];
/* Update or set specific paths of an attribute set.
@ -606,7 +625,7 @@ rec {
getMan = getOutput "man";
/* Pick the outputs of packages to place in buildInputs */
-chooseDevOutputs = drvs: builtins.map getDev drvs;
+chooseDevOutputs = builtins.map getDev;
/* Make various Nix tools consider the contents of the resulting
attribute set when looking for what to build, find, etc.

View file

@ -38,12 +38,15 @@ rec {
//
(drv.passthru or {})
//
-(if (drv ? crossDrv && drv ? nativeDrv)
+# TODO(@Artturin): remove before release 23.05 and only have __spliced.
-then {
+(lib.optionalAttrs (drv ? crossDrv && drv ? nativeDrv) {
crossDrv = overrideDerivation drv.crossDrv f;
nativeDrv = overrideDerivation drv.nativeDrv f;
-}
+})
-else { }));
+//
lib.optionalAttrs (drv ? __spliced) {
__spliced = {} // (lib.mapAttrs (_: sDrv: overrideDerivation sDrv f) drv.__spliced);
});
/* `makeOverridable` takes a function from attribute set to attribute set and /* `makeOverridable` takes a function from attribute set to attribute set and

View file

@ -78,7 +78,7 @@ let
inherit (self.attrsets) attrByPath hasAttrByPath setAttrByPath
getAttrFromPath attrVals attrValues getAttrs catAttrs filterAttrs
filterAttrsRecursive foldAttrs collect nameValuePair mapAttrs
-mapAttrs' mapAttrsToList mapAttrsRecursive mapAttrsRecursiveCond
+mapAttrs' mapAttrsToList concatMapAttrs mapAttrsRecursive mapAttrsRecursiveCond
genAttrs isDerivation toDerivation optionalAttrs
zipAttrsWithNames zipAttrsWith zipAttrs recursiveUpdateUntil
recursiveUpdate matchAttrs overrideExisting showAttrPath getOutput getBin

View file

@ -154,6 +154,11 @@ in mkLicense lset) ({
fullName = "BSD-2-Clause Plus Patent License";
};
bsd2WithViews = {
spdxId = "BSD-2-Clause-Views";
fullName = "BSD 2-Clause with views sentence";
};
bsd3 = {
spdxId = "BSD-3-Clause";
fullName = ''BSD 3-clause "New" or "Revised" License'';
@ -990,21 +995,6 @@ in mkLicense lset) ({
fullName = "GNU Affero General Public License v3.0";
deprecated = true;
};
fdl11 = {
spdxId = "GFDL-1.1";
fullName = "GNU Free Documentation License v1.1";
deprecated = true;
};
fdl12 = {
spdxId = "GFDL-1.2";
fullName = "GNU Free Documentation License v1.2";
deprecated = true;
};
fdl13 = {
spdxId = "GFDL-1.3";
fullName = "GNU Free Documentation License v1.3";
deprecated = true;
};
gpl2 = {
spdxId = "GPL-2.0";
fullName = "GNU General Public License v2.0";

View file

@ -123,7 +123,7 @@ rec {
Example:
mkPackageOption pkgs "GHC" {
default = [ "ghc" ];
-example = "pkgs.haskell.packages.ghc924.ghc.withPackages (hkgs: [ hkgs.primes ])";
+example = "pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])";
}
=> { _type = "option"; default = «derivation /nix/store/jxx55cxsjrf8kyh3fp2ya17q99w7541r-ghc-8.10.7.drv»; defaultText = { ... }; description = "The GHC package to use."; example = { ... }; type = { ... }; }
*/

View file

@ -166,17 +166,27 @@ let
in type == "directory" || lib.any (ext: lib.hasSuffix ext base) exts;
in cleanSourceWith { inherit filter src; };
-pathIsGitRepo = path: (tryEval (commitIdFromGitRepo path)).success;
+pathIsGitRepo = path: (_commitIdFromGitRepoOrError path)?value;
/*
Get the commit id of a git repo.
Example: commitIdFromGitRepo <nixpkgs/.git>
*/
-commitIdFromGitRepo =
+commitIdFromGitRepo = path:
let commitIdOrError = _commitIdFromGitRepoOrError path;
in commitIdOrError.value or (throw commitIdOrError.error);
# Get the commit id of a git repo.
# Returns `{ value = commitHash }` or `{ error = "... message ..." }`.
# Example: commitIdFromGitRepo <nixpkgs/.git>
# not exported, used for commitIdFromGitRepo
_commitIdFromGitRepoOrError =
let readCommitFromFile = file: path:
-let fileName = toString path + "/" + file;
+let fileName = path + "/${file}";
-packedRefsName = toString path + "/packed-refs";
+packedRefsName = path + "/packed-refs";
absolutePath = base: path:
if lib.hasPrefix "/" path
then path
@ -186,7 +196,7 @@ let
then
let m = match "^gitdir: (.*)$" (lib.fileContents path);
in if m == null
-then throw ("File contains no gitdir reference: " + path)
+then { error = "File contains no gitdir reference: " + path; }
else
let gitDir = absolutePath (dirOf path) (lib.head m);
commonDir'' = if pathIsRegularFile "${gitDir}/commondir"
@ -204,7 +214,7 @@ let
let fileContent = lib.fileContents fileName;
matchRef = match "^ref: (.*)$" fileContent;
in if matchRef == null
-then fileContent
+then { value = fileContent; }
else readCommitFromFile (lib.head matchRef) path
else if pathIsRegularFile packedRefsName
@ -218,10 +228,10 @@ let
# https://github.com/NixOS/nix/issues/2147#issuecomment-659868795
refs = filter isRef (split "\n" fileContent);
in if refs == []
-then throw ("Could not find " + file + " in " + packedRefsName)
+then { error = "Could not find " + file + " in " + packedRefsName; }
-else lib.head (matchRef (lib.head refs))
+else { value = lib.head (matchRef (lib.head refs)); }
-else throw ("Not a .git directory: " + path);
+else { error = "Not a .git directory: " + toString path; };
in readCommitFromFile "HEAD";
pathHasContext = builtins.hasContext or (lib.hasPrefix storeDir);

View file

@ -47,9 +47,10 @@ rec {
else if final.isUClibc then "uclibc"
else if final.isAndroid then "bionic"
else if final.isLinux /* default */ then "glibc"
else if final.isFreeBSD then "fblibc"
else if final.isNetBSD then "nblibc"
else if final.isAvr then "avrlibc"
else if final.isNone then "newlib"
else if final.isNetBSD then "nblibc"
# TODO(@Ericson2314) think more about other operating systems
else "native/impure";
# Choose what linker we wish to use by default. Someday we might also # Choose what linker we wish to use by default. Someday we might also

View file

@ -13,7 +13,7 @@ let
"x86_64-darwin" "i686-darwin" "aarch64-darwin" "armv7a-darwin"
# FreeBSD
-"i686-freebsd" "x86_64-freebsd"
+"i686-freebsd13" "x86_64-freebsd13"
# Genode
"aarch64-genode" "i686-genode" "x86_64-genode"

View file

@ -303,15 +303,18 @@ rec {
# BSDs
x86_64-freebsd = {
config = "x86_64-unknown-freebsd13";
useLLVM = true;
};
x86_64-netbsd = {
config = "x86_64-unknown-netbsd";
libc = "nblibc";
};
# this is broken and never worked fully
x86_64-netbsd-llvm = {
config = "x86_64-unknown-netbsd";
libc = "nblibc";
useLLVM = true;
};

View file

@ -59,7 +59,7 @@ rec {
isiOS = { kernel = kernels.ios; };
isLinux = { kernel = kernels.linux; };
isSunOS = { kernel = kernels.solaris; };
-isFreeBSD = { kernel = kernels.freebsd; };
+isFreeBSD = { kernel = { name = "freebsd"; }; };
isNetBSD = { kernel = kernels.netbsd; };
isOpenBSD = { kernel = kernels.openbsd; };
isWindows = { kernel = kernels.windows; };

View file

@ -290,7 +290,11 @@ rec {
# the normalized name for macOS.
macos = { execFormat = macho; families = { inherit darwin; }; name = "darwin"; };
ios = { execFormat = macho; families = { inherit darwin; }; };
-freebsd = { execFormat = elf; families = { inherit bsd; }; };
+# A tricky thing about FreeBSD is that there is no stable ABI across
# versions. That means that putting in the version as part of the
# config string is paramount.
freebsd12 = { execFormat = elf; families = { inherit bsd; }; name = "freebsd"; version = 12; };
freebsd13 = { execFormat = elf; families = { inherit bsd; }; name = "freebsd"; version = 13; };
linux = { execFormat = elf; families = { }; };
netbsd = { execFormat = elf; families = { inherit bsd; }; };
none = { execFormat = unknown; families = { }; };
@ -431,6 +435,8 @@ rec {
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "redox"; }
else if (elemAt l 2 == "mmixware")
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = "mmixware"; }
else if hasPrefix "freebsd" (elemAt l 2)
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; }
else if hasPrefix "netbsd" (elemAt l 2)
then { cpu = elemAt l 0; vendor = elemAt l 1; kernel = elemAt l 2; }
else if (elem (elemAt l 2) ["eabi" "eabihf" "elf"])
@ -485,10 +491,13 @@ rec {
mkSystemFromString = s: mkSystemFromSkeleton (mkSkeletonFromList (lib.splitString "-" s));
kernelName = kernel:
kernel.name + toString (kernel.version or "");
doubleFromSystem = { cpu, kernel, abi, ... }:
/**/ if abi == abis.cygnus then "${cpu.name}-cygwin"
else if kernel.families ? darwin then "${cpu.name}-darwin"
-else "${cpu.name}-${kernel.name}";
+else "${cpu.name}-${kernelName kernel}";
tripleFromSystem = { cpu, vendor, kernel, abi, ... } @ sys: assert isSystem sys; let
optExecFormat =
@ -496,7 +505,7 @@ rec {
gnuNetBSDDefaultExecFormat cpu != kernel.execFormat)
kernel.execFormat.name;
optAbi = lib.optionalString (abi != abis.unknown) "-${abi.name}";
-in "${cpu.name}-${vendor.name}-${kernel.name}${optExecFormat}${optAbi}";
+in "${cpu.name}-${vendor.name}-${kernelName kernel}${optExecFormat}${optAbi}";
################################################################################

View file

@ -557,7 +557,7 @@ rec {
else if platform.isRiscV then riscv-multiplatform
-else if platform.parsed.cpu == lib.systems.parse.cpuTypes.mipsel then fuloong2f_n32
+else if platform.parsed.cpu == lib.systems.parse.cpuTypes.mipsel then (import ./examples.nix { inherit lib; }).mipsel-linux-gnu
else if platform.parsed.cpu == lib.systems.parse.cpuTypes.powerpc64le then powernv

View file

@ -478,6 +478,23 @@ runTests {
# ATTRSETS
testConcatMapAttrs = {
expr = concatMapAttrs
(name: value: {
${name} = value;
${name + value} = value;
})
{
foo = "bar";
foobar = "baz";
};
expected = {
foo = "bar";
foobar = "baz";
foobarbaz = "baz";
};
};
# code from the example
testRecursiveUpdateUntil = {
expr = recursiveUpdateUntil (path: l: r: path == ["foo"]) {

View file

@ -16,17 +16,17 @@ with lib.systems.doubles; lib.runTests {
testall = mseteq all (linux ++ darwin ++ freebsd ++ openbsd ++ netbsd ++ illumos ++ wasi ++ windows ++ embedded ++ mmix ++ js ++ genode ++ redox);
testarm = mseteq arm [ "armv5tel-linux" "armv6l-linux" "armv6l-netbsd" "armv6l-none" "armv7a-linux" "armv7a-netbsd" "armv7l-linux" "armv7l-netbsd" "arm-none" "armv7a-darwin" ];
-testi686 = mseteq i686 [ "i686-linux" "i686-freebsd" "i686-genode" "i686-netbsd" "i686-openbsd" "i686-cygwin" "i686-windows" "i686-none" "i686-darwin" ];
+testi686 = mseteq i686 [ "i686-linux" "i686-freebsd13" "i686-genode" "i686-netbsd" "i686-openbsd" "i686-cygwin" "i686-windows" "i686-none" "i686-darwin" ];
testmips = mseteq mips [ "mips64el-linux" "mipsel-linux" "mipsel-netbsd" ];
testmmix = mseteq mmix [ "mmix-mmixware" ];
testriscv = mseteq riscv [ "riscv32-linux" "riscv64-linux" "riscv32-netbsd" "riscv64-netbsd" "riscv32-none" "riscv64-none" ];
testriscv32 = mseteq riscv32 [ "riscv32-linux" "riscv32-netbsd" "riscv32-none" ];
testriscv64 = mseteq riscv64 [ "riscv64-linux" "riscv64-netbsd" "riscv64-none" ];
-testx86_64 = mseteq x86_64 [ "x86_64-linux" "x86_64-darwin" "x86_64-freebsd" "x86_64-genode" "x86_64-redox" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin" "x86_64-solaris" "x86_64-windows" "x86_64-none" ];
+testx86_64 = mseteq x86_64 [ "x86_64-linux" "x86_64-darwin" "x86_64-freebsd13" "x86_64-genode" "x86_64-redox" "x86_64-openbsd" "x86_64-netbsd" "x86_64-cygwin" "x86_64-solaris" "x86_64-windows" "x86_64-none" ];
testcygwin = mseteq cygwin [ "i686-cygwin" "x86_64-cygwin" ];
testdarwin = mseteq darwin [ "x86_64-darwin" "i686-darwin" "aarch64-darwin" "armv7a-darwin" ];
-testfreebsd = mseteq freebsd [ "i686-freebsd" "x86_64-freebsd" ];
+testfreebsd = mseteq freebsd [ "i686-freebsd13" "x86_64-freebsd13" ];
testgenode = mseteq genode [ "aarch64-genode" "i686-genode" "x86_64-genode" ];
testredox = mseteq redox [ "x86_64-redox" ];
testgnu = mseteq gnu (linux /* ++ kfreebsd ++ ... */);

View file

@ -213,8 +213,8 @@ rec {
# Default value to return if revision can not be determined
default:
let
-revisionFile = "${toString ./..}/.git-revision";
+revisionFile = ./.. + "/.git-revision";
-gitRepo = "${toString ./..}/.git";
+gitRepo = ./.. + "/.git";
in if lib.pathIsGitRepo gitRepo
then lib.commitIdFromGitRepo gitRepo
else if lib.pathExists revisionFile then lib.fileContents revisionFile
@ -514,6 +514,8 @@ rec {
in
[r] ++ go q;
in
assert (isInt base);
assert (isInt i);
assert (base >= 2);
assert (i >= 0);
lib.reverseList (go i);

View file

@ -478,6 +478,7 @@ rec {
path = mkOptionType {
name = "path";
descriptionClass = "noun";
check = x: isCoercibleToString x && builtins.substring 0 1 (toString x) == "/";
merge = mergeEqualOption;
};

View file

@ -69,6 +69,12 @@
fingerprint = "F466 A548 AD3F C1F1 8C88 4576 8702 7528 B006 D66D"; fingerprint = "F466 A548 AD3F C1F1 8C88 4576 8702 7528 B006 D66D";
}]; }];
}; };
_0xB10C = {
email = "nixpkgs@b10c.me";
name = "0xB10C";
github = "0xb10c";
githubId = 19157360;
};
_0xbe7a = { _0xbe7a = {
email = "nix@be7a.de"; email = "nix@be7a.de";
name = "Bela Stoyan"; name = "Bela Stoyan";
@ -820,6 +826,7 @@
}; };
AndersonTorres = { AndersonTorres = {
email = "torres.anderson.85@protonmail.com"; email = "torres.anderson.85@protonmail.com";
matrix = "@anderson_torres:matrix.org";
github = "AndersonTorres"; github = "AndersonTorres";
githubId = 5954806; githubId = 5954806;
name = "Anderson Torres"; name = "Anderson Torres";
@ -1278,6 +1285,15 @@
fingerprint = "DD52 6BC7 767D BA28 16C0 95E5 6840 89CE 67EB B691"; fingerprint = "DD52 6BC7 767D BA28 16C0 95E5 6840 89CE 67EB B691";
}]; }];
}; };
ataraxiasjel = {
email = "nix@ataraxiadev.com";
github = "AtaraxiaSjel";
githubId = 5314145;
name = "Dmitriy";
keys = [{
fingerprint = "922D A6E7 58A0 FE4C FAB4 E4B2 FD26 6B81 0DF4 8DF2";
}];
};
atemu = { atemu = {
name = "Atemu"; name = "Atemu";
email = "atemu.main+nixpkgs@gmail.com"; email = "atemu.main+nixpkgs@gmail.com";
@ -2259,6 +2275,12 @@
githubId = 5394722; githubId = 5394722;
name = "Spencer Baugh"; name = "Spencer Baugh";
}; };
catouc = {
email = "catouc@philipp.boeschen.me";
github = "catouc";
githubId = 25623213;
name = "Philipp Böschen";
};
caugner = { caugner = {
email = "nixos@caugner.de"; email = "nixos@caugner.de";
github = "caugner"; github = "caugner";
@ -2626,6 +2648,12 @@
githubId = 71959829; githubId = 71959829;
name = "Cleeyv"; name = "Cleeyv";
}; };
clerie = {
email = "nix@clerie.de";
github = "clerie";
githubId = 9381848;
name = "clerie";
};
cleverca22 = { cleverca22 = {
email = "cleverca22@gmail.com"; email = "cleverca22@gmail.com";
matrix = "@cleverca22:matrix.org"; matrix = "@cleverca22:matrix.org";
@ -2767,6 +2795,12 @@
githubId = 40290417; githubId = 40290417;
name = "Seb Blair"; name = "Seb Blair";
}; };
considerate = {
email = "viktor.kronvall@gmail.com";
github = "considerate";
githubId = 217918;
name = "Viktor Kronvall";
};
copumpkin = { copumpkin = {
email = "pumpkingod@gmail.com"; email = "pumpkingod@gmail.com";
github = "copumpkin"; github = "copumpkin";
@ -4133,6 +4167,15 @@
githubId = 147284; githubId = 147284;
name = "Jason Felice"; name = "Jason Felice";
}; };
ercao = {
email = "vip@ercao.cn";
github = "ercao";
githubId = 51725284;
name = "ercao";
keys = [{
fingerprint = "F3B0 36F7 B0CB 0964 3C12 D3C7 FFAB D125 7ECF 0889";
}];
};
erdnaxe = { erdnaxe = {
email = "erdnaxe@crans.org"; email = "erdnaxe@crans.org";
github = "erdnaxe"; github = "erdnaxe";
@ -4436,6 +4479,12 @@
githubId = 1276854; githubId = 1276854;
name = "Florian Peter"; name = "Florian Peter";
}; };
farnoy = {
email = "jakub@okonski.org";
github = "farnoy";
githubId = 345808;
name = "Jakub Okoński";
};
fbeffa = { fbeffa = {
email = "beffa@fbengineering.ch"; email = "beffa@fbengineering.ch";
github = "fedeinthemix"; github = "fedeinthemix";
@ -4592,12 +4641,6 @@
githubId = 66178592; githubId = 66178592;
name = "Pavel Zolotarevskiy"; name = "Pavel Zolotarevskiy";
}; };
flexw = {
email = "felix.weilbach@t-online.de";
github = "FlexW";
githubId = 19961516;
name = "Felix Weilbach";
};
fliegendewurst = { fliegendewurst = {
email = "arne.keller@posteo.de"; email = "arne.keller@posteo.de";
github = "FliegendeWurst"; github = "FliegendeWurst";
@ -4786,6 +4829,12 @@
githubId = 868283; githubId = 868283;
name = "Fatih Altinok"; name = "Fatih Altinok";
}; };
fstamour = {
email = "fr.st-amour@gmail.com";
github = "fstamour";
githubId = 2881922;
name = "Francis St-Amour";
};
ftrvxmtrx = { ftrvxmtrx = {
email = "ftrvxmtrx@gmail.com"; email = "ftrvxmtrx@gmail.com";
github = "ftrvxmtrx"; github = "ftrvxmtrx";
@ -4915,6 +4964,13 @@
githubId = 37017396; githubId = 37017396;
name = "gbtb"; name = "gbtb";
}; };
gdamjan = {
email = "gdamjan@gmail.com";
matrix = "@gdamjan:spodeli.org";
github = "gdamjan";
githubId = 81654;
name = "Damjan Georgievski";
};
gdinh = { gdinh = {
email = "nix@contact.dinh.ai"; email = "nix@contact.dinh.ai";
github = "gdinh"; github = "gdinh";
@ -5274,6 +5330,16 @@
github = "gytis-ivaskevicius"; github = "gytis-ivaskevicius";
githubId = 23264966; githubId = 23264966;
}; };
h7x4 = {
name = "h7x4";
email = "h7x4@nani.wtf";
matrix = "@h7x4:nani.wtf";
github = "h7x4";
githubId = 14929991;
keys = [{
fingerprint = "F7D3 7890 228A 9074 40E1 FD48 46B9 228E 814A 2AAC";
}];
};
hagl = { hagl = {
email = "harald@glie.be"; email = "harald@glie.be";
github = "hagl"; github = "hagl";
@ -7895,6 +7961,13 @@
githubId = 24509182; githubId = 24509182;
name = "Arnaud Pascal"; name = "Arnaud Pascal";
}; };
lightquantum = {
email = "self@lightquantum.me";
github = "PhotonQuantum";
githubId = 18749973;
name = "Yanning Chen";
matrix = "@self:lightquantum.me";
};
lihop = { lihop = {
email = "nixos@leroy.geek.nz"; email = "nixos@leroy.geek.nz";
github = "lihop"; github = "lihop";
@ -8634,6 +8707,15 @@
keys = [{ keys = [{
fingerprint = "1DE4 424D BF77 1192 5DC4 CF5E 9AED 8814 81D8 444E"; fingerprint = "1DE4 424D BF77 1192 5DC4 CF5E 9AED 8814 81D8 444E";
}]; }];
};
maxbrunet = {
email = "max@brnt.mx";
github = "maxbrunet";
githubId = 32458727;
name = "Maxime Brunet";
keys = [{
fingerprint = "E9A2 EE26 EAC6 B3ED 6C10 61F3 4379 62FF 87EC FE2B";
}];
}; };
maxdamantus = { maxdamantus = {
email = "maxdamantus@gmail.com"; email = "maxdamantus@gmail.com";
@ -8665,6 +8747,12 @@
githubId = 1472826; githubId = 1472826;
name = "Max Smolin"; name = "Max Smolin";
}; };
maxux = {
email = "root@maxux.net";
github = "maxux";
githubId = 4141584;
name = "Maxime Daniel";
};
maxxk = { maxxk = {
email = "maxim.krivchikov@gmail.com"; email = "maxim.krivchikov@gmail.com";
github = "maxxk"; github = "maxxk";
@ -9353,12 +9441,6 @@
githubId = 2072185; githubId = 2072185;
name = "Marc Scholten"; name = "Marc Scholten";
}; };
mpsyco = {
email = "fr.st-amour@gmail.com";
github = "fstamour";
githubId = 2881922;
name = "Francis St-Amour";
};
mtrsk = { mtrsk = {
email = "marcos.schonfinkel@protonmail.com"; email = "marcos.schonfinkel@protonmail.com";
github = "mtrsk"; github = "mtrsk";
@ -11264,6 +11346,13 @@
githubId = 35086; githubId = 35086;
name = "Jonathan Wright"; name = "Jonathan Wright";
}; };
quantenzitrone = {
email = "quantenzitrone@protonmail.com";
github = "Quantenzitrone";
githubId = 74491719;
matrix = "@quantenzitrone:matrix.org";
name = "quantenzitrone";
};
queezle = { queezle = {
email = "git@queezle.net"; email = "git@queezle.net";
github = "queezle42"; github = "queezle42";
@ -11756,6 +11845,12 @@
githubId = 12312980; githubId = 12312980;
name = "Robbin C."; name = "Robbin C.";
}; };
robbins = {
email = "nejrobbins@gmail.com";
github = "robbins";
githubId = 31457698;
name = "Nathanael Robbins";
};
roberth = { roberth = {
email = "nixpkgs@roberthensing.nl"; email = "nixpkgs@roberthensing.nl";
matrix = "@roberthensing:matrix.org"; matrix = "@roberthensing:matrix.org";
@ -13216,6 +13311,12 @@
githubId = 19905904; githubId = 19905904;
name = "Simon Weber"; name = "Simon Weber";
}; };
sweenu = {
name = "sweenu";
email = "contact@sweenu.xyz";
github = "sweenu";
githubId = 7051978;
};
swflint = { swflint = {
email = "swflint@flintfam.org"; email = "swflint@flintfam.org";
github = "swflint"; github = "swflint";
@ -13602,6 +13703,12 @@
githubId = 3105057; githubId = 3105057;
name = "Jan Beinke"; name = "Jan Beinke";
}; };
thenonameguy = {
email = "thenonameguy24@gmail.com";
name = "Krisztian Szabo";
github = "thenonameguy";
githubId = 2217181;
};
therealansh = { therealansh = {
email = "tyagiansh23@gmail.com"; email = "tyagiansh23@gmail.com";
github = "therealansh"; github = "therealansh";
@ -14198,6 +14305,12 @@
githubId = 928084; githubId = 928084;
name = "Utku Demir"; name = "Utku Demir";
}; };
uthar = {
email = "galkowskikasper@gmail.com";
github = "uthar";
githubId = 15697697;
name = "Kasper Gałkowski";
};
uvnikita = { uvnikita = {
email = "uv.nikita@gmail.com"; email = "uv.nikita@gmail.com";
github = "uvNikita"; github = "uvNikita";
@ -15796,4 +15909,10 @@
github = "wuyoli"; github = "wuyoli";
githubId = 104238274; githubId = 104238274;
}; };
jordanisaacs = {
name = "Jordan Isaacs";
email = "nix@jdisaacs.com";
github = "jordanisaacs";
githubId = 19742638;
};
} }

View file

@ -61,12 +61,12 @@ Readonly::Hash my %LICENSE_MAP => (
# GNU Free Documentation License, Version 1.2.
gfdl_1_2 => {
-licenses => [qw( fdl12 )]
+licenses => [qw( fdl12Plus )]
},
# GNU Free Documentation License, Version 1.3.
gfdl_1_3 => {
-licenses => [qw( fdl13 )]
+licenses => [qw( fdl13Plus )]
},
# GNU General Public License, Version 1.

View file

@ -342,6 +342,7 @@ class Editor:
self.default_out = default_out or root.joinpath("generated.nix")
self.deprecated = deprecated or root.joinpath("deprecated.json")
self.cache_file = cache_file or f"{name}-plugin-cache.json"
self.nixpkgs_repo = None
def get_current_plugins(self) -> List[Plugin]:
"""To fill the cache"""
@ -670,16 +671,15 @@ def update_plugins(editor: Editor, args):
autocommit = not args.no_commit
nixpkgs_repo = None
if autocommit:
-nixpkgs_repo = git.Repo(editor.root, search_parent_directories=True)
+editor.nixpkgs_repo = git.Repo(editor.root, search_parent_directories=True)
-commit(nixpkgs_repo, f"{editor.attr_path}: update", [args.outfile])
+commit(editor.nixpkgs_repo, f"{editor.attr_path}: update", [args.outfile])
if redirects:
update()
if autocommit:
commit(
-nixpkgs_repo,
+editor.nixpkgs_repo,
f"{editor.attr_path}: resolve github repository redirects",
[args.outfile, args.input_file, editor.deprecated],
)
@ -692,7 +692,7 @@ def update_plugins(editor: Editor, args):
plugin, _ = prefetch_plugin(pdesc, )
if autocommit:
commit(
-nixpkgs_repo,
+editor.nixpkgs_repo,
"{drv_name}: init at {version}".format(
drv_name=editor.get_drv_name(plugin.normalized_name),
version=plugin.version

View file

@ -631,6 +631,18 @@ with lib.maintainers; {
shortName = "Release";
};
rocm = {
members = [
Madouura
Flakebi
];
githubTeams = [
"rocm-maintainers"
];
scope = "Maintain ROCm and related packages.";
shortName = "ROCm";
};
ruby = {
members = [
marsam

View file

@ -15,5 +15,4 @@ NixOS configuration files.
<xi:include href="config-file.section.xml" />
<xi:include href="abstractions.section.xml" />
<xi:include href="modularity.section.xml" />
<xi:include href="summary.section.xml" />
```

View file

@ -1,46 +0,0 @@
# Syntax Summary {#sec-nix-syntax-summary}
Below is a summary of the most important syntactic constructs in the Nix
expression language. It's not complete. In particular, there are many
other built-in functions. See the [Nix
manual](https://nixos.org/nix/manual/#chap-writing-nix-expressions) for
the rest.
| Example | Description |
|-----------------------------------------------|--------------------------------------------------------------------------------------------------------------------|
| *Basic values* | |
| `"Hello world"` | A string |
| `"${pkgs.bash}/bin/sh"` | A string containing an expression (expands to `"/nix/store/hash-bash-version/bin/sh"`) |
| `true`, `false` | Booleans |
| `123` | An integer |
| `./foo.png` | A path (relative to the containing Nix expression) |
| *Compound values* | |
| `{ x = 1; y = 2; }` | A set with attributes named `x` and `y` |
| `{ foo.bar = 1; }` | A nested set, equivalent to `{ foo = { bar = 1; }; }` |
| `rec { x = "foo"; y = x + "bar"; }` | A recursive set, equivalent to `{ x = "foo"; y = "foobar"; }` |
| `[ "foo" "bar" ]` | A list with two elements |
| *Operators* | |
| `"foo" + "bar"` | String concatenation |
| `1 + 2` | Integer addition |
| `"foo" == "f" + "oo"` | Equality test (evaluates to `true`) |
| `"foo" != "bar"` | Inequality test (evaluates to `true`) |
| `!true` | Boolean negation |
| `{ x = 1; y = 2; }.x` | Attribute selection (evaluates to `1`) |
| `{ x = 1; y = 2; }.z or 3` | Attribute selection with default (evaluates to `3`) |
| `{ x = 1; y = 2; } // { z = 3; }` | Merge two sets (attributes in the right-hand set taking precedence) |
| *Control structures* | |
| `if 1 + 1 == 2 then "yes!" else "no!"` | Conditional expression |
| `assert 1 + 1 == 2; "yes!"` | Assertion check (evaluates to `"yes!"`). See [](#sec-assertions) for using assertions in modules |
| `let x = "foo"; y = "bar"; in x + y` | Variable definition |
| `with pkgs.lib; head [ 1 2 3 ]` | Add all attributes from the given set to the scope (evaluates to `1`) |
| *Functions (lambdas)* | |
| `x: x + 1` | A function that expects an integer and returns it increased by 1 |
| `(x: x + 1) 100` | A function call (evaluates to 101) |
| `let inc = x: x + 1; in inc (inc (inc 100))` | A function bound to a variable and subsequently called by name (evaluates to 103) |
| `{ x, y }: x + y` | A function that expects a set with required attributes `x` and `y` and concatenates them |
| `{ x, y ? "bar" }: x + y` | A function that expects a set with required attribute `x` and optional `y`, using `"bar"` as default value for `y` |
| `{ x, y, ... }: x + y` | A function that expects a set with required attributes `x` and `y` and ignores any other attributes |
| `{ x, y } @ args: x + y` | A function that expects a set with required attributes `x` and `y`, and binds the whole set to `args` |
| *Built-in functions* | |
| `import ./foo.nix` | Load and return Nix expression in given file |
| `map (x: x + x) [ 1 2 3 ]` | Apply a function to every element of a list (evaluates to `[ 2 4 6 ]`) |

View file

@ -32,8 +32,7 @@ account will cease to exist. Also, imperative commands for managing users and
groups, such as useradd, are no longer available. Passwords may still be
assigned by setting the user\'s
[hashedPassword](#opt-users.users._name_.hashedPassword) option. A
-hashed password can be generated using `mkpasswd -m
-sha-512`.
+hashed password can be generated using `mkpasswd`.
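For example (output shortened; the exact hash format depends on the mkpasswd version in use):
```console
$ mkpasswd
Password:
$y$j9T$WFoiErKnEn...
```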
A user ID (uid) is assigned automatically. You can also specify a uid
manually by adding

View file

@ -11,7 +11,7 @@ options = {
type = type specification;
default = default value;
example = example value;
-description = "Description for use in the NixOS manual.";
+description = lib.mdDoc "Description for use in the NixOS manual.";
};
};
```
@ -59,8 +59,9 @@ The function `mkOption` accepts the following arguments.
: A textual description of the option, in [Nixpkgs-flavored Markdown](
https://nixos.org/nixpkgs/manual/#sec-contributing-markup) format, that will be
included in the NixOS manual. During the migration process from DocBook
-to CommonMark the description may also be written in DocBook, but this is
+it is necessary to mark descriptions written in CommonMark with `lib.mdDoc`.
-discouraged.
+The description may still be written in DocBook (without any marker), but this
is discouraged and will be deprecated in the future.
## Utility functions for common option patterns {#sec-option-declarations-util}
@ -83,7 +84,7 @@ lib.mkOption {
type = lib.types.bool;
default = false;
example = true;
-description = "Whether to enable magic.";
+description = lib.mdDoc "Whether to enable magic.";
}
```
@ -116,7 +117,7 @@ lib.mkOption {
type = lib.types.package;
default = pkgs.hello;
defaultText = lib.literalExpression "pkgs.hello";
-description = "The hello package to use.";
+description = lib.mdDoc "The hello package to use.";
}
```
@ -132,7 +133,7 @@ lib.mkOption {
default = pkgs.ghc;
defaultText = lib.literalExpression "pkgs.ghc";
example = lib.literalExpression "pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])";
-description = "The GHC package to use.";
+description = lib.mdDoc "The GHC package to use.";
}
```

View file

@ -17,5 +17,4 @@
<xi:include href="config-file.section.xml" />
<xi:include href="abstractions.section.xml" />
<xi:include href="modularity.section.xml" />
<xi:include href="summary.section.xml" />
</chapter>

View file

@ -1,332 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-nix-syntax-summary">
<title>Syntax Summary</title>
<para>
Below is a summary of the most important syntactic constructs in the
Nix expression language. It's not complete. In particular, there are
many other built-in functions. See the
<link xlink:href="https://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
manual</link> for the rest.
</para>
<informaltable>
<tgroup cols="2">
<colspec align="left" />
<colspec align="left" />
<thead>
<row>
<entry>
Example
</entry>
<entry>
Description
</entry>
</row>
</thead>
<tbody>
<row>
<entry>
<emphasis>Basic values</emphasis>
</entry>
<entry>
</entry>
</row>
<row>
<entry>
<literal>&quot;Hello world&quot;</literal>
</entry>
<entry>
A string
</entry>
</row>
<row>
<entry>
<literal>&quot;${pkgs.bash}/bin/sh&quot;</literal>
</entry>
<entry>
A string containing an expression (expands to
<literal>&quot;/nix/store/hash-bash-version/bin/sh&quot;</literal>)
</entry>
</row>
<row>
<entry>
<literal>true</literal>, <literal>false</literal>
</entry>
<entry>
Booleans
</entry>
</row>
<row>
<entry>
<literal>123</literal>
</entry>
<entry>
An integer
</entry>
</row>
<row>
<entry>
<literal>./foo.png</literal>
</entry>
<entry>
A path (relative to the containing Nix expression)
</entry>
</row>
<row>
<entry>
<emphasis>Compound values</emphasis>
</entry>
<entry>
</entry>
</row>
<row>
<entry>
<literal>{ x = 1; y = 2; }</literal>
</entry>
<entry>
A set with attributes named <literal>x</literal> and
<literal>y</literal>
</entry>
</row>
<row>
<entry>
<literal>{ foo.bar = 1; }</literal>
</entry>
<entry>
A nested set, equivalent to
<literal>{ foo = { bar = 1; }; }</literal>
</entry>
</row>
<row>
<entry>
<literal>rec { x = &quot;foo&quot;; y = x + &quot;bar&quot;; }</literal>
</entry>
<entry>
A recursive set, equivalent to
<literal>{ x = &quot;foo&quot;; y = &quot;foobar&quot;; }</literal>
</entry>
</row>
<row>
<entry>
<literal>[ &quot;foo&quot; &quot;bar&quot; ]</literal>
</entry>
<entry>
A list with two elements
</entry>
</row>
<row>
<entry>
<emphasis>Operators</emphasis>
</entry>
<entry>
</entry>
</row>
<row>
<entry>
<literal>&quot;foo&quot; + &quot;bar&quot;</literal>
</entry>
<entry>
String concatenation
</entry>
</row>
<row>
<entry>
<literal>1 + 2</literal>
</entry>
<entry>
Integer addition
</entry>
</row>
<row>
<entry>
<literal>&quot;foo&quot; == &quot;f&quot; + &quot;oo&quot;</literal>
</entry>
<entry>
Equality test (evaluates to <literal>true</literal>)
</entry>
</row>
<row>
<entry>
<literal>&quot;foo&quot; != &quot;bar&quot;</literal>
</entry>
<entry>
Inequality test (evaluates to <literal>true</literal>)
</entry>
</row>
<row>
<entry>
<literal>!true</literal>
</entry>
<entry>
Boolean negation
</entry>
</row>
<row>
<entry>
<literal>{ x = 1; y = 2; }.x</literal>
</entry>
<entry>
Attribute selection (evaluates to <literal>1</literal>)
</entry>
</row>
<row>
<entry>
<literal>{ x = 1; y = 2; }.z or 3</literal>
</entry>
<entry>
Attribute selection with default (evaluates to
<literal>3</literal>)
</entry>
</row>
<row>
<entry>
<literal>{ x = 1; y = 2; } // { z = 3; }</literal>
</entry>
<entry>
Merge two sets (attributes in the right-hand set taking
precedence)
</entry>
</row>
<row>
<entry>
<emphasis>Control structures</emphasis>
</entry>
<entry>
</entry>
</row>
<row>
<entry>
<literal>if 1 + 1 == 2 then &quot;yes!&quot; else &quot;no!&quot;</literal>
</entry>
<entry>
Conditional expression
</entry>
</row>
<row>
<entry>
<literal>assert 1 + 1 == 2; &quot;yes!&quot;</literal>
</entry>
<entry>
Assertion check (evaluates to
<literal>&quot;yes!&quot;</literal>). See
<xref linkend="sec-assertions" /> for using assertions in
modules
</entry>
</row>
<row>
<entry>
<literal>let x = &quot;foo&quot;; y = &quot;bar&quot;; in x + y</literal>
</entry>
<entry>
Variable definition
</entry>
</row>
<row>
<entry>
<literal>with pkgs.lib; head [ 1 2 3 ]</literal>
</entry>
<entry>
Add all attributes from the given set to the scope
(evaluates to <literal>1</literal>)
</entry>
</row>
<row>
<entry>
<emphasis>Functions (lambdas)</emphasis>
</entry>
<entry>
</entry>
</row>
<row>
<entry>
<literal>x: x + 1</literal>
</entry>
<entry>
A function that expects an integer and returns it increased
by 1
</entry>
</row>
<row>
<entry>
<literal>(x: x + 1) 100</literal>
</entry>
<entry>
A function call (evaluates to 101)
</entry>
</row>
<row>
<entry>
<literal>let inc = x: x + 1; in inc (inc (inc 100))</literal>
</entry>
<entry>
A function bound to a variable and subsequently called by
name (evaluates to 103)
</entry>
</row>
<row>
<entry>
<literal>{ x, y }: x + y</literal>
</entry>
<entry>
A function that expects a set with required attributes
<literal>x</literal> and <literal>y</literal> and
concatenates them
</entry>
</row>
<row>
<entry>
<literal>{ x, y ? &quot;bar&quot; }: x + y</literal>
</entry>
<entry>
A function that expects a set with required attribute
<literal>x</literal> and optional <literal>y</literal>,
using <literal>&quot;bar&quot;</literal> as default value
for <literal>y</literal>
</entry>
</row>
<row>
<entry>
<literal>{ x, y, ... }: x + y</literal>
</entry>
<entry>
A function that expects a set with required attributes
<literal>x</literal> and <literal>y</literal> and ignores
any other attributes
</entry>
</row>
<row>
<entry>
<literal>{ x, y } @ args: x + y</literal>
</entry>
<entry>
A function that expects a set with required attributes
<literal>x</literal> and <literal>y</literal>, and binds the
whole set to <literal>args</literal>
</entry>
</row>
<row>
<entry>
<emphasis>Built-in functions</emphasis>
</entry>
<entry>
</entry>
</row>
<row>
<entry>
<literal>import ./foo.nix</literal>
</entry>
<entry>
Load and return Nix expression in given file
</entry>
</row>
<row>
<entry>
<literal>map (x: x + x) [ 1 2 3 ]</literal>
</entry>
<entry>
Apply a function to every element of a list (evaluates to
<literal>[ 2 4 6 ]</literal>)
</entry>
</row>
</tbody>
</tgroup>
</informaltable>
</section>

View file

@ -39,7 +39,7 @@ users.users.alice = {
Passwords may still be assigned by setting the user's Passwords may still be assigned by setting the user's
<link linkend="opt-users.users._name_.hashedPassword">hashedPassword</link> <link linkend="opt-users.users._name_.hashedPassword">hashedPassword</link>
option. A hashed password can be generated using option. A hashed password can be generated using
<literal>mkpasswd -m sha-512</literal>. <literal>mkpasswd</literal>.
</para> </para>
<para> <para>
A user ID (uid) is assigned automatically. You can also specify a A user ID (uid) is assigned automatically. You can also specify a

View file

@ -12,7 +12,7 @@ options = {
type = type specification; type = type specification;
default = default value; default = default value;
example = example value; example = example value;
description = &quot;Description for use in the NixOS manual.&quot;; description = lib.mdDoc &quot;Description for use in the NixOS manual.&quot;;
}; };
}; };
</programlisting> </programlisting>
@ -98,9 +98,11 @@ options = {
A textual description of the option, in A textual description of the option, in
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-contributing-markup">Nixpkgs-flavored <link xlink:href="https://nixos.org/nixpkgs/manual/#sec-contributing-markup">Nixpkgs-flavored
Markdown</link> format, that will be included in the NixOS Markdown</link> format, that will be included in the NixOS
manual. During the migration process from DocBook to manual. During the migration process from DocBook it is
CommonMark the description may also be written in DocBook, but necessary to mark descriptions written in CommonMark with
this is discouraged. <literal>lib.mdDoc</literal>. The description may still be
written in DocBook (without any marker), but this is
discouraged and will be deprecated in the future.
</para> </para>
</listitem> </listitem>
</varlistentry> </varlistentry>
@ -132,7 +134,7 @@ lib.mkOption {
type = lib.types.bool; type = lib.types.bool;
default = false; default = false;
example = true; example = true;
description = &quot;Whether to enable magic.&quot;; description = lib.mdDoc &quot;Whether to enable magic.&quot;;
} }
</programlisting> </programlisting>
<section xml:id="sec-option-declarations-util-mkPackageOption"> <section xml:id="sec-option-declarations-util-mkPackageOption">
@ -182,7 +184,7 @@ lib.mkOption {
type = lib.types.package; type = lib.types.package;
default = pkgs.hello; default = pkgs.hello;
defaultText = lib.literalExpression &quot;pkgs.hello&quot;; defaultText = lib.literalExpression &quot;pkgs.hello&quot;;
description = &quot;The hello package to use.&quot;; description = lib.mdDoc &quot;The hello package to use.&quot;;
} }
</programlisting> </programlisting>
<anchor xml:id="ex-options-declarations-util-mkPackageOption-ghc" /> <anchor xml:id="ex-options-declarations-util-mkPackageOption-ghc" />
@ -197,7 +199,7 @@ lib.mkOption {
default = pkgs.ghc; default = pkgs.ghc;
defaultText = lib.literalExpression &quot;pkgs.ghc&quot;; defaultText = lib.literalExpression &quot;pkgs.ghc&quot;;
example = lib.literalExpression &quot;pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])&quot;; example = lib.literalExpression &quot;pkgs.haskell.packages.ghc92.ghc.withPackages (hkgs: [ hkgs.primes ])&quot;;
description = &quot;The GHC package to use.&quot;; description = lib.mdDoc &quot;The GHC package to use.&quot;;
} }
</programlisting> </programlisting>
<section xml:id="sec-option-declarations-eot"> <section xml:id="sec-option-declarations-eot">

View file

@ -455,8 +455,8 @@ OK
<listitem> <listitem>
<para> <para>
Finally, add a <emphasis>swap</emphasis> partition. The Finally, add a <emphasis>swap</emphasis> partition. The
size required will vary according to needs, here an 8GiB size required will vary according to needs, here an 8GB one
one is created. is created.
</para> </para>
<programlisting> <programlisting>
# parted /dev/sda -- mkpart primary linux-swap -8GB 100% # parted /dev/sda -- mkpart primary linux-swap -8GB 100%
@ -814,8 +814,8 @@ $ passwd eelco
</para> </para>
<programlisting> <programlisting>
# parted /dev/sda -- mklabel msdos # parted /dev/sda -- mklabel msdos
# parted /dev/sda -- mkpart primary 1MiB -8GiB # parted /dev/sda -- mkpart primary 1MB -8GB
# parted /dev/sda -- mkpart primary linux-swap -8GiB 100% # parted /dev/sda -- mkpart primary linux-swap -8GB 100%
</programlisting> </programlisting>
<anchor xml:id="ex-partition-scheme-UEFI" /> <anchor xml:id="ex-partition-scheme-UEFI" />
<para> <para>
@ -824,9 +824,9 @@ $ passwd eelco
</para> </para>
<programlisting> <programlisting>
# parted /dev/sda -- mklabel gpt # parted /dev/sda -- mklabel gpt
# parted /dev/sda -- mkpart primary 512MiB -8GiB # parted /dev/sda -- mkpart primary 512MB -8GB
# parted /dev/sda -- mkpart primary linux-swap -8GiB 100% # parted /dev/sda -- mkpart primary linux-swap -8GB 100%
# parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB # parted /dev/sda -- mkpart ESP fat32 1MB 512MB
# parted /dev/sda -- set 3 esp on # parted /dev/sda -- set 3 esp on
</programlisting> </programlisting>
<anchor xml:id="ex-install-sequence" /> <anchor xml:id="ex-install-sequence" />

View file

@ -2106,7 +2106,7 @@ Superuser created successfully.
<literal>ghc810</literal>. Those attributes point to the same <literal>ghc810</literal>. Those attributes point to the same
compilers and packagesets but have the advantage that e.g. compilers and packagesets but have the advantage that e.g.
<literal>ghc92</literal> stays stable when we update from <literal>ghc92</literal> stays stable when we update from
<literal>ghc924</literal> to <literal>ghc925</literal>. <literal>ghc925</literal> to <literal>ghc926</literal>.
</para> </para>
</listitem> </listitem>
</itemizedlist> </itemizedlist>

View file

@ -130,6 +130,27 @@
PHP now defaults to PHP 8.1, updated from 8.0. PHP now defaults to PHP 8.1, updated from 8.0.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
PHP is now built <literal>NTS</literal> (Non-Thread Safe)
style by default; for Apache and <literal>mod_php</literal>
usage we still enable <literal>ZTS</literal> (Zend Thread
Safe). This has been a common practice for a long time in
other distributions.
</para>
</listitem>
<listitem>
<para>
PHP 8.2.0 RC 6 is available.
</para>
</listitem>
<listitem>
<para>
<literal>protonup</literal> has been aliased to and replaced
by <literal>protonup-ng</literal> due to upstream not
maintaining it.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
Perl has been updated to 5.36, and its core module Perl has been updated to 5.36, and its core module
@ -189,6 +210,14 @@
<link xlink:href="options.html#opt-virtualisation.appvm.enable">virtualisation.appvm</link>. <link xlink:href="options.html#opt-virtualisation.appvm.enable">virtualisation.appvm</link>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<link xlink:href="https://github.com/maxbrunet/automatic-timezoned">automatic-timezoned</link>.
a Linux daemon to automatically update the system timezone
based on location. Available as
<link linkend="opt-services.automatic-timezoned.enable">services.automatic-timezoned</link>.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
[xray] (https://github.com/XTLS/Xray-core), a fully compatible [xray] (https://github.com/XTLS/Xray-core), a fully compatible
@ -457,6 +486,14 @@
<link linkend="opt-services.uptime-kuma.enable">services.uptime-kuma</link>. <link linkend="opt-services.uptime-kuma.enable">services.uptime-kuma</link>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<link xlink:href="https://mepo.milesalan.com">Mepo</link>, a
fast, simple, hackable OSM map viewer for mobile and desktop
Linux. Available as
<link linkend="opt-programs.mepo.enable">programs.mepo.enable</link>.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
<section xml:id="sec-release-22.11-incompatibilities"> <section xml:id="sec-release-22.11-incompatibilities">
@ -592,6 +629,23 @@
binaries, use the <literal>p4d</literal> package instead. binaries, use the <literal>p4d</literal> package instead.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The <literal>openssl</literal>-extension for the PHP
interpreter used by Nextcloud is built against OpenSSL 1.1 if
<xref linkend="opt-system.stateVersion" /> is below
<literal>22.11</literal>. This is to make sure that people
using
<link xlink:href="https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/encryption_configuration.html">server-side
encryption</link> don't lose access to their files.
</para>
<para>
In any other case it's safe to use OpenSSL 3 for PHP's openssl
extension. This can be done by setting
<xref linkend="opt-services.nextcloud.enableBrokenCiphersForSSE" />
to <literal>false</literal>.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The <literal>coq</literal> package and versioned variants The <literal>coq</literal> package and versioned variants
@ -804,6 +858,28 @@
</listitem> </listitem>
</itemizedlist> </itemizedlist>
</listitem> </listitem>
<listitem>
<para>
<literal>arangodb</literal> versions 3.3, 3.4, and 3.5 have
been removed because they are at EOL upstream. The default is
now 3.10.0. Support for aarch64-linux has been removed since
the target cannot be built reproducibly. By default
<literal>arangodb</literal> is now built for the
<literal>haswell</literal> architecture. If you wish to build
for a different architecture, you may override the
<literal>targetArchitecture</literal> argument with a value
from
<link xlink:href="https://github.com/arangodb/arangodb/blob/207ec6937e41a46e10aea34953879341f0606841/cmake/OptimizeForArchitecture.cmake#L594">this
list supported upstream</link>. Some architecture specific
optimizations are also conditionally enabled. You may alter
this behavior by overriding the
<literal>asmOptimizations</literal> parameter. You may also
add additional architecture support by adding more
<literal>-DHAS_XYZ</literal> flags to
<literal>cmakeFlags</literal> via
<literal>overrideAttrs</literal>.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The <literal>meta.mainProgram</literal> attribute of packages The <literal>meta.mainProgram</literal> attribute of packages
@ -824,6 +900,12 @@
for <literal>termonad</literal> has been removed. for <literal>termonad</literal> has been removed.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Linux 4.9 has been removed because it will reach its end of
life within the lifespan of 22.11.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
(Neo)Vim can not be configured with (Neo)Vim can not be configured with
@ -852,6 +934,14 @@
support for 1.22 and older has been dropped. support for 1.22 and older has been dropped.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The <literal>zrepl</literal> package has been updated from
0.5.0 to 0.6.0. See the
<link xlink:href="https://zrepl.github.io/changelog.html">changelog</link>
for details.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
<literal>k3s</literal> no longer supports docker as runtime <literal>k3s</literal> no longer supports docker as runtime
@ -899,6 +989,30 @@
<literal>mariadb</literal> if possible. <literal>mariadb</literal> if possible.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<literal>obs-studio</literal> has been updated to version 28.
If you have packaged custom plugins, check if they are
compatible. <literal>obs-websocket</literal> has been
integrated into <literal>obs-studio</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>signald</literal> has been bumped to
<literal>0.23.0</literal>. For the upgrade, a migration
process is necessary. It can be done by running a command like
this before starting <literal>signald.service</literal>:
</para>
<programlisting>
signald -d /var/lib/signald/db \
--database sqlite:/var/lib/signald/db \
--migrate-data
</programlisting>
<para>
For further information, please read the upstream changelogs.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
<literal>stylua</literal> no longer accepts <literal>stylua</literal> no longer accepts
@ -908,6 +1022,12 @@
<literal>[ &quot;lua54&quot; &quot;luau&quot; ]</literal>. <literal>[ &quot;lua54&quot; &quot;luau&quot; ]</literal>.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<literal>ocamlPackages.ocaml_extlib</literal> has been renamed
to <literal>ocamlPackages.extlib</literal>.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
<literal>pkgs.fetchNextcloudApp</literal> has been rewritten <literal>pkgs.fetchNextcloudApp</literal> has been rewritten
@ -918,11 +1038,50 @@
longer accepted. longer accepted.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The Syncthing service now only allows absolute paths—starting
with <literal>/</literal> or <literal>~/</literal>—for
<literal>services.syncthing.folders.&lt;name&gt;.path</literal>.
In a future release other paths will be allowed again and
interpreted relative to
<literal>services.syncthing.dataDir</literal>.
</para>
</listitem>
<listitem>
<para>
<literal>services.github-runner</literal> and
<literal>services.github-runners.&lt;name&gt;</literal> gained
the option <literal>serviceOverrides</literal> which allows
overriding the systemd <literal>serviceConfig</literal>. If
you have been overriding the systemd service configuration
(i.e., by defining
<literal>systemd.services.github-runner.serviceConfig</literal>),
you have to use the <literal>serviceOverrides</literal> option
now. Example:
</para>
<programlisting>
services.github-runner.serviceOverrides.SupplementaryGroups = [
&quot;docker&quot;
];
</programlisting>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
<section xml:id="sec-release-22.11-notable-changes"> <section xml:id="sec-release-22.11-notable-changes">
<title>Other Notable Changes</title> <title>Other Notable Changes</title>
<itemizedlist> <itemizedlist>
<listitem>
<para>
<literal>firefox</literal>, <literal>thunderbird</literal> and
<literal>librewolf</literal> come with enabled Wayland support
by default. The <literal>firefox-wayland</literal>,
<literal>firefox-esr-wayland</literal>,
<literal>thunderbird-wayland</literal> and
<literal>librewolf-wayland</literal> attributes are obsolete
and have been aliased to their generic attribute.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The <literal>xplr</literal> package has been updated from The <literal>xplr</literal> package has been updated from
@ -931,6 +1090,13 @@
release notes</link> for more details. release notes</link> for more details.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
Configuring multiple GitHub runners is now possible through
<literal>services.github-runners.&lt;name&gt;</literal>. The
option <literal>services.github-runner</literal> remains.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
<literal>github-runner</literal> gained support for ephemeral <literal>github-runner</literal> gained support for ephemeral
@ -961,6 +1127,13 @@
configure this behaviour. configure this behaviour.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<literal>mastodon</literal> now automatically removes remote
media attachments older than 30 days. This is configurable
through <literal>services.mastodon.mediaAutoRemove</literal>.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The Redis module now disables RDB persistence when The Redis module now disables RDB persistence when
@ -1033,22 +1206,146 @@
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
The <literal>services.grafana</literal> options were converted The module <literal>services.grafana</literal> was refactored
to a to be compliant with
<link xlink:href="https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md">RFC <link xlink:href="https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md">RFC
0042</link> configuration. 0042</link>. To be precise, this means that the following
things have changed:
</para>
<itemizedlist>
<listitem>
<para>
The newly introduced option
<xref linkend="opt-services.grafana.settings" /> is an
attribute-set that will be converted into Grafana's INI
format. This means that the configuration from
<link xlink:href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/">Grafana's
configuration reference</link> can be directly written as
attribute-set in Nix within this option.
</para> </para>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
The <literal>services.grafana.provision.datasources</literal> The option
and <literal>services.grafana.provision.dashboards</literal> <literal>services.grafana.extraOptions</literal> has been
options were converted to a removed. This option was an association of environment
<link xlink:href="https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md">RFC variables for Grafana. If you had an expression like
0042</link> configuration. They also now support specifying
the provisioning YAML file with <literal>path</literal>
option.
</para> </para>
<programlisting language="bash">
{
services.grafana.extraOptions.SECURITY_ADMIN_USER = &quot;foobar&quot;;
}
</programlisting>
<para>
your Grafana instance was running with
<literal>GF_SECURITY_ADMIN_USER=foobar</literal> in its
environment.
</para>
<para>
For the migration, it is recommended to turn it into the
INI format, i.e. to declare
</para>
<programlisting language="bash">
{
services.grafana.settings.security.admin_user = &quot;foobar&quot;;
}
</programlisting>
<para>
instead.
</para>
<para>
The keys in
<literal>services.grafana.extraOptions</literal> have the
format
<literal>&lt;INI section name&gt;_&lt;Key Name&gt;</literal>.
Further details are outlined in the
<link xlink:href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#override-configuration-with-environment-variables">configuration
reference</link>.
</para>
<para>
Alternatively you can also set all your values from
<literal>extraOptions</literal> to
<literal>systemd.services.grafana.environment</literal>,
make sure you don't forget to add the
<literal>GF_</literal> prefix though!
</para>
</listitem>
<listitem>
<para>
Previously, the options
<xref linkend="opt-services.grafana.provision.datasources" />
and
<xref linkend="opt-services.grafana.provision.dashboards" />
expected lists of datasources or dashboards for the
<link xlink:href="https://grafana.com/docs/grafana/latest/administration/provisioning/">declarative
provisioning</link>.
</para>
<para>
To declare lists of
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
<emphasis role="strong">datasources</emphasis>, please
rename your declarations to
<xref linkend="opt-services.grafana.provision.datasources.settings.datasources" />.
</para>
</listitem>
<listitem>
<para>
<emphasis role="strong">dashboards</emphasis>, please
rename your declarations to
<xref linkend="opt-services.grafana.provision.dashboards.settings.providers" />.
</para>
</listitem>
</itemizedlist>
<para>
This change was made to support additional provisioning features:
</para>
<itemizedlist>
<listitem>
<para>
It's possible to declare the
<literal>apiVersion</literal> of your dashboards and
datasources by
<xref linkend="opt-services.grafana.provision.datasources.settings.apiVersion" />
(or
<xref linkend="opt-services.grafana.provision.dashboards.settings.apiVersion" />).
</para>
</listitem>
<listitem>
<para>
Instead of declaring datasources and dashboards in
pure Nix, it's also possible to specify configuration
files (or directories) with YAML instead using
<xref linkend="opt-services.grafana.provision.datasources.path" />
(or
<xref linkend="opt-services.grafana.provision.dashboards.path" />.
This is useful when having provisioning files from
non-NixOS Grafana instances that you also want to
deploy to NixOS.
</para>
<para>
<emphasis role="strong">Note:</emphasis> secrets from
these files will be leaked into the store unless you
use a
<link xlink:href="https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#file-provider"><emphasis role="strong">file</emphasis>-provider
or env-var</link> for secrets!
</para>
</listitem>
<listitem>
<para>
<xref linkend="opt-services.grafana.provision.notifiers" />
is not affected by this change because this feature is
deprecated by Grafana and will probably be removed in
Grafana 10. It's recommended to use
<literal>services.grafana.provision.alerting.contactPoints</literal>
instead.
</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
</listitem> </listitem>
<listitem> <listitem>
<para> <para>
@ -1121,6 +1418,13 @@
will be removed once the transition to CommonMark is complete. will be removed once the transition to CommonMark is complete.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The redis module now persists each instance's configuration
file in the state directory, in order to support some more
advanced use cases like sentinel.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The udisks2 service, available at The udisks2 service, available at
@ -1180,6 +1484,19 @@
Add udev rules for the Teensy family of microcontrollers. Add udev rules for the Teensy family of microcontrollers.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The Qt QML disk cache is now disabled by default. This fixes a
long-standing issue where updating Qt/KDE apps would sometimes
cause them to crash or behave strangely without explanation.
Those concerned about the small (~10%) performance hit to
application startup can re-enable the cache (and expose
themselves to gremlins) by setting the environment variable
<literal>QML_FORCE_DISK_CACHE</literal> to
<literal>1</literal> using e.g. the
<literal>environment.sessionVariables</literal> NixOS option.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
systemd-oomd is enabled by default. Depending on which systemd systemd-oomd is enabled by default. Depending on which systemd
@ -1266,6 +1583,16 @@
dbus service. dbus service.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
The Mastodon package got upgraded from the major version 3 to
4. See the
<link xlink:href="https://github.com/mastodon/mastodon/releases/tag/v4.0.0">v4.0.0
release notes</link> for a list of changes. On standard
setups, no manual migration steps are required. Nevertheless,
a database backup is recommended.
</para>
</listitem>
<listitem> <listitem>
<para> <para>
The <literal>nomad</literal> package now defaults to 1.3, The <literal>nomad</literal> package now defaults to 1.3,
@ -1284,6 +1611,42 @@
the npm install step prunes dev dependencies. the npm install step prunes dev dependencies.
</para> </para>
</listitem> </listitem>
<listitem>
<para>
<literal>boot.kernel.sysctl</literal> is defined as a
freeformType and adds a custom merge option for
<quote>net.core.rmem_max</quote> (taking the highest value
defined to avoid conflicts between 2 services trying to set
that value).
</para>
</listitem>
<listitem>
<para>
The <literal>mame</literal> package does not ship with its
tools anymore in the default output. They were moved to a
separate <literal>tools</literal> output instead. For
convenience, <literal>mame-tools</literal> package was added
for those who want to use it.
</para>
</listitem>
<listitem>
<para>
A NixOS module for Firefox has been added which allows
preferences and
<link xlink:href="https://github.com/mozilla/policy-templates/blob/master/README.md">policies</link>
to be set. This also allows extensions to be installed via the
<literal>ExtensionSettings</literal> policy. The new options
are under <literal>programs.firefox</literal>.
</para>
</listitem>
<listitem>
<para>
The option
<literal>services.picom.experimentalBackends</literal> was
removed since it is now the default and the option will cause
<literal>picom</literal> to quit instead.
</para>
</listitem>
</itemizedlist> </itemizedlist>
</section> </section>
</section> </section>

View file

@ -307,7 +307,7 @@ update /etc/fstab.
``` ```
4. Finally, add a *swap* partition. The size required will vary 4. Finally, add a *swap* partition. The size required will vary
according to needs, here an 8GiB one is created. according to needs, here an 8GB one is created.
```ShellSession ```ShellSession
# parted /dev/sda -- mkpart primary linux-swap -8GB 100% # parted /dev/sda -- mkpart primary linux-swap -8GB 100%
@ -543,8 +543,8 @@ corresponding configuration Nix expression.
::: :::
```ShellSession ```ShellSession
# parted /dev/sda -- mklabel msdos # parted /dev/sda -- mklabel msdos
# parted /dev/sda -- mkpart primary 1MiB -8GiB # parted /dev/sda -- mkpart primary 1MB -8GB
# parted /dev/sda -- mkpart primary linux-swap -8GiB 100% # parted /dev/sda -- mkpart primary linux-swap -8GB 100%
``` ```
::: :::
@ -554,9 +554,9 @@ corresponding configuration Nix expression.
::: :::
```ShellSession ```ShellSession
# parted /dev/sda -- mklabel gpt # parted /dev/sda -- mklabel gpt
# parted /dev/sda -- mkpart primary 512MiB -8GiB # parted /dev/sda -- mkpart primary 512MB -8GB
# parted /dev/sda -- mkpart primary linux-swap -8GiB 100% # parted /dev/sda -- mkpart primary linux-swap -8GB 100%
# parted /dev/sda -- mkpart ESP fat32 1MiB 512MiB # parted /dev/sda -- mkpart ESP fat32 1MB 512MB
# parted /dev/sda -- set 3 esp on # parted /dev/sda -- set 3 esp on
``` ```
::: :::

View file

@ -576,4 +576,4 @@ In addition to numerous new and upgraded packages, this release has the followin
- More jdk and jre versions are now exposed via `java-packages.compiler`. - More jdk and jre versions are now exposed via `java-packages.compiler`.
- The sets `haskell.packages` and `haskell.compiler` now contain for every ghc version an attribute with the minor version dropped. E.g. for `ghc8107` there also now exists `ghc810`. Those attributes point to the same compilers and packagesets but have the advantage that e.g. `ghc92` stays stable when we update from `ghc924` to `ghc925`. - The sets `haskell.packages` and `haskell.compiler` now contain for every ghc version an attribute with the minor version dropped. E.g. for `ghc8107` there also now exists `ghc810`. Those attributes point to the same compilers and packagesets but have the advantage that e.g. `ghc92` stays stable when we update from `ghc925` to `ghc926`.

View file

@ -53,6 +53,14 @@ In addition to numerous new and upgraded packages, this release has the followin
- PHP now defaults to PHP 8.1, updated from 8.0. - PHP now defaults to PHP 8.1, updated from 8.0.
- PHP is now built `NTS` (Non-Thread Safe) style by default; for Apache and
`mod_php` usage we still enable `ZTS` (Zend Thread Safe). This has been a
common practice for a long time in other distributions.
- PHP 8.2.0 RC 6 is available.
- `protonup` has been aliased to and replaced by `protonup-ng` due to upstream not maintaining it.
- Perl has been updated to 5.36, and its core module `HTTP::Tiny` was patched to verify SSL/TLS certificates by default. - Perl has been updated to 5.36, and its core module `HTTP::Tiny` was patched to verify SSL/TLS certificates by default.
- Improved performances of `lib.closePropagation` which was previously quadratic. This is used in e.g. `ghcWithPackages`. Please see backward incompatibilities notes below. - Improved performances of `lib.closePropagation` which was previously quadratic. This is used in e.g. `ghcWithPackages`. Please see backward incompatibilities notes below.
@ -72,6 +80,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [appvm](https://github.com/jollheef/appvm), Nix based app VMs. Available as [virtualisation.appvm](options.html#opt-virtualisation.appvm.enable). - [appvm](https://github.com/jollheef/appvm), Nix based app VMs. Available as [virtualisation.appvm](options.html#opt-virtualisation.appvm.enable).
- [automatic-timezoned](https://github.com/maxbrunet/automatic-timezoned), a Linux daemon to automatically update the system timezone based on location. Available as [services.automatic-timezoned](#opt-services.automatic-timezoned.enable).
- [xray] (https://github.com/XTLS/Xray-core), a fully compatible v2ray-core replacement. Features XTLS, which when enabled on server and client, brings UDP FullCone NAT to proxy setups. Available as [services.xray](options.html#opt-services.xray.enable). - [xray] (https://github.com/XTLS/Xray-core), a fully compatible v2ray-core replacement. Features XTLS, which when enabled on server and client, brings UDP FullCone NAT to proxy setups. Available as [services.xray](options.html#opt-services.xray.enable).
- [syncstorage-rs](https://github.com/mozilla-services/syncstorage-rs), a self-hostable sync server for Firefox. Available as [services.firefox-syncserver](options.html#opt-services.firefox-syncserver.enable). - [syncstorage-rs](https://github.com/mozilla-services/syncstorage-rs), a self-hostable sync server for Firefox. Available as [services.firefox-syncserver](options.html#opt-services.firefox-syncserver.enable).
@ -149,6 +159,8 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- [Uptime Kuma](https://uptime.kuma.pet/), a fancy self-hosted monitoring tool. Available as [services.uptime-kuma](#opt-services.uptime-kuma.enable). - [Uptime Kuma](https://uptime.kuma.pet/), a fancy self-hosted monitoring tool. Available as [services.uptime-kuma](#opt-services.uptime-kuma.enable).
- [Mepo](https://mepo.milesalan.com), a fast, simple, hackable OSM map viewer for mobile and desktop Linux. Available as [programs.mepo.enable](#opt-programs.mepo.enable).
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. --> <!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Backward Incompatibilities {#sec-release-22.11-incompatibilities} ## Backward Incompatibilities {#sec-release-22.11-incompatibilities}
@ -192,6 +204,13 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- The `p4` package now only includes the open-source Perforce Helix Core command-line client and APIs. It no longer installs the unfree Helix Core Server binaries `p4d`, `p4broker`, and `p4p`. To install the Helix Core Server binaries, use the `p4d` package instead. - The `p4` package now only includes the open-source Perforce Helix Core command-line client and APIs. It no longer installs the unfree Helix Core Server binaries `p4d`, `p4broker`, and `p4p`. To install the Helix Core Server binaries, use the `p4d` package instead.
- The `openssl`-extension for the PHP interpreter used by Nextcloud is built against OpenSSL 1.1 if
[](#opt-system.stateVersion) is below `22.11`. This is to make sure that people using [server-side encryption](https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/encryption_configuration.html)
don't lose access to their files.
In any other case it's safe to use OpenSSL 3 for PHP's openssl extension. This can be done by setting
[](#opt-services.nextcloud.enableBrokenCiphersForSSE) to `false`.
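  A minimal sketch of that opt-out (only do this if you do not rely on server-side encryption):

  ```nix
  # Use OpenSSL 3 for PHP's openssl extension even with stateVersion < 22.11.
  services.nextcloud.enableBrokenCiphersForSSE = false;
  ```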
- The `coq` package and versioned variants starting at `coq_8_14` no - The `coq` package and versioned variants starting at `coq_8_14` no
longer include CoqIDE, which is now available through longer include CoqIDE, which is now available through
`coqPackages.coqide`. It is still possible to get CoqIDE as part of `coqPackages.coqide`. It is still possible to get CoqIDE as part of
@ -253,12 +272,16 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
* `systemd.network.networks.<name>.dhcpV6Config` no longer accepts the `ForceDHCPv6PDOtherInformation=` setting. Please use the `WithoutRA=` and `UseDelegatedPrefix=` settings in your `systemd.network.networks.<name>.dhcpV6Config` and the `DHCPv6Client=` setting in your `systemd.network.networks.<name>.ipv6AcceptRAConfig` to control when the DHCPv6 client is started and how the delegated prefixes are handled by the DHCPv6 client. * `systemd.network.networks.<name>.dhcpV6Config` no longer accepts the `ForceDHCPv6PDOtherInformation=` setting. Please use the `WithoutRA=` and `UseDelegatedPrefix=` settings in your `systemd.network.networks.<name>.dhcpV6Config` and the `DHCPv6Client=` setting in your `systemd.network.networks.<name>.ipv6AcceptRAConfig` to control when the DHCPv6 client is started and how the delegated prefixes are handled by the DHCPv6 client.
* `systemd.network.networks.<name>.networkConfig` no longer accepts the `IPv6Token=` setting. Use the `Token=` setting in your `systemd.network.networks.<name>.ipv6AcceptRAConfig` instead. The `systemd.network.networks.<name>.ipv6Prefixes.*.ipv6PrefixConfig` now also accepts the `Token=` setting. * `systemd.network.networks.<name>.networkConfig` no longer accepts the `IPv6Token=` setting. Use the `Token=` setting in your `systemd.network.networks.<name>.ipv6AcceptRAConfig` instead. The `systemd.network.networks.<name>.ipv6Prefixes.*.ipv6PrefixConfig` now also accepts the `Token=` setting.
- `arangodb` versions 3.3, 3.4, and 3.5 have been removed because they are at EOL upstream. The default is now 3.10.0. Support for aarch64-linux has been removed since the target cannot be built reproducibly. By default `arangodb` is now built for the `haswell` architecture. If you wish to build for a different architecture, you may override the `targetArchitecture` argument with a value from [this list supported upstream](https://github.com/arangodb/arangodb/blob/207ec6937e41a46e10aea34953879341f0606841/cmake/OptimizeForArchitecture.cmake#L594). Some architecture specific optimizations are also conditionally enabled. You may alter this behavior by overriding the `asmOptimizations` parameter. You may also add additional architecture support by adding more `-DHAS_XYZ` flags to `cmakeFlags` via `overrideAttrs`.
- The `meta.mainProgram` attribute of packages in `wineWowPackages` now defaults to `"wine64"`. - The `meta.mainProgram` attribute of packages in `wineWowPackages` now defaults to `"wine64"`.
- The `paperless` module now defaults `PAPERLESS_TIME_ZONE` to your configured system timezone. - The `paperless` module now defaults `PAPERLESS_TIME_ZONE` to your configured system timezone.
- The top-level `termonad-with-packages` alias for `termonad` has been removed. - The top-level `termonad-with-packages` alias for `termonad` has been removed.
- Linux 4.9 has been removed because it will reach its end of life within the lifespan of 22.11.
- (Neo)Vim can not be configured with `configure.pathogen` anymore to reduce maintenance burden. - (Neo)Vim can not be configured with `configure.pathogen` anymore to reduce maintenance burden.
Use `configure.packages` instead. Use `configure.packages` instead.
- Neovim can not be configured with plug anymore (still works for vim). - Neovim can not be configured with plug anymore (still works for vim).
@ -267,6 +290,8 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- The default `kops` version is now 1.25.1 and support for 1.22 and older has been dropped. - The default `kops` version is now 1.25.1 and support for 1.22 and older has been dropped.
- The `zrepl` package has been updated from 0.5.0 to 0.6.0. See the [changelog](https://zrepl.github.io/changelog.html) for details.
- `k3s` no longer supports docker as runtime due to upstream dropping support. - `k3s` no longer supports docker as runtime due to upstream dropping support.
- `cassandra_2_1` and `cassandra_2_2` have been removed. Please update to `cassandra_3_11` or `cassandra_3_0`. See the [changelog](https://github.com/apache/cassandra/blob/cassandra-3.11.14/NEWS.txt) for more information about the upgrade process. - `cassandra_2_1` and `cassandra_2_2` have been removed. Please update to `cassandra_3_11` or `cassandra_3_0`. See the [changelog](https://github.com/apache/cassandra/blob/cassandra-3.11.14/NEWS.txt) for more information about the upgrade process.
@ -278,24 +303,58 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- `percona-server56` has been removed. Please migrate to `mysql` or `mariadb` if possible. - `percona-server56` has been removed. Please migrate to `mysql` or `mariadb` if possible.
- `obs-studio` has been updated to version 28. If you have packaged custom plugins, check if they are compatible. `obs-websocket` has been integrated into `obs-studio`.
- `signald` has been bumped to `0.23.0`. For the upgrade, a migration process is necessary. It can be
done by running a command like this before starting `signald.service`:
```
signald -d /var/lib/signald/db \
--database sqlite:/var/lib/signald/db \
--migrate-data
```
For further information, please read the upstream changelogs.
- `stylua` no longer accepts `lua52Support` and `luauSupport` overrides, use `features` instead, which defaults to `[ "lua54" "luau" ]`. - `stylua` no longer accepts `lua52Support` and `luauSupport` overrides, use `features` instead, which defaults to `[ "lua54" "luau" ]`.
- `ocamlPackages.ocaml_extlib` has been renamed to `ocamlPackages.extlib`.
- `pkgs.fetchNextcloudApp` has been rewritten to circumvent impurities in e.g. tarballs from GitHub and to make it easier to - `pkgs.fetchNextcloudApp` has been rewritten to circumvent impurities in e.g. tarballs from GitHub and to make it easier to
apply patches. This means that your hashes are out-of-date and the (previously required) attributes `name` and `version` apply patches. This means that your hashes are out-of-date and the (previously required) attributes `name` and `version`
are no longer accepted. are no longer accepted.
- The Syncthing service now only allows absolute paths---starting with `/` or
`~/`---for `services.syncthing.folders.<name>.path`.
In a future release other paths will be allowed again and interpreted
relative to `services.syncthing.dataDir`.
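  A minimal sketch of a now-valid folder declaration (folder name and path illustrative):

  ```nix
  # Paths must be absolute (or start with ~/) in this release.
  services.syncthing.folders."music".path = "/home/alice/Music";
  ```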
- `services.github-runner` and `services.github-runners.<name>` gained the option `serviceOverrides` which allows overriding the systemd `serviceConfig`. If you have been overriding the systemd service configuration (i.e., by defining `systemd.services.github-runner.serviceConfig`), you have to use the `serviceOverrides` option now. Example:
```
services.github-runner.serviceOverrides.SupplementaryGroups = [
"docker"
];
```
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. --> <!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Other Notable Changes {#sec-release-22.11-notable-changes} ## Other Notable Changes {#sec-release-22.11-notable-changes}
- `firefox`, `thunderbird` and `librewolf` come with enabled Wayland support by default. The `firefox-wayland`, `firefox-esr-wayland`, `thunderbird-wayland` and `librewolf-wayland` attributes are obsolete and have been aliased to their generic attribute.
- The `xplr` package has been updated from 0.18.0 to 0.19.0, which brings some breaking changes. See the [upstream release notes](https://github.com/sayanarijit/xplr/releases/tag/v0.19.0) for more details. - The `xplr` package has been updated from 0.18.0 to 0.19.0, which brings some breaking changes. See the [upstream release notes](https://github.com/sayanarijit/xplr/releases/tag/v0.19.0) for more details.
- Configuring multiple GitHub runners is now possible through `services.github-runners.<name>`. The option `services.github-runner` remains.
- `github-runner` gained support for ephemeral runners and registrations using a personal access token (PAT) instead of a registration token. See `services.github-runner.ephemeral` and `services.github-runner.tokenFile` for details. - `github-runner` gained support for ephemeral runners and registrations using a personal access token (PAT) instead of a registration token. See `services.github-runner.ephemeral` and `services.github-runner.tokenFile` for details.
- A new module was added for the Saleae Logic device family, providing the options `hardware.saleae-logic.enable` and `hardware.saleae-logic.package`. - A new module was added for the Saleae Logic device family, providing the options `hardware.saleae-logic.enable` and `hardware.saleae-logic.package`.
- ZFS module will not allow hibernation by default, this is a safety measure to prevent data loss cases like the ones described at [OpenZFS/260](https://github.com/openzfs/zfs/issues/260) and [OpenZFS/12842](https://github.com/openzfs/zfs/issues/12842). Use the `boot.zfs.allowHibernation` option to configure this behaviour. - ZFS module will not allow hibernation by default, this is a safety measure to prevent data loss cases like the ones described at [OpenZFS/260](https://github.com/openzfs/zfs/issues/260) and [OpenZFS/12842](https://github.com/openzfs/zfs/issues/12842). Use the `boot.zfs.allowHibernation` option to configure this behaviour.
- `mastodon` now automatically removes remote media attachments older than 30 days. This is configurable through `services.mastodon.mediaAutoRemove`.
- The Redis module now disables RDB persistence when `services.redis.servers.<name>.save = []` instead of using the Redis default. - The Redis module now disables RDB persistence when `services.redis.servers.<name>.save = []` instead of using the Redis default.
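  For example, a minimal sketch that turns off RDB snapshots for one instance (instance name illustrative):

  ```nix
  # An empty `save` list now disables RDB persistence entirely.
  services.redis.servers."cache".save = [ ];
  ```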
- Neo4j was updated from version 3 to version 4. See this [migration guide](https://neo4j.com/docs/upgrade-migration-guide/current/) on how to migrate your Neo4j instance. - Neo4j was updated from version 3 to version 4. See this [migration guide](https://neo4j.com/docs/upgrade-migration-guide/current/) on how to migrate your Neo4j instance.
@ -320,9 +379,66 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- The `services.matrix-synapse` systemd unit has been hardened. - The `services.matrix-synapse` systemd unit has been hardened.
- The `services.grafana` options were converted to a [RFC 0042](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md) configuration. - The module `services.grafana` was refactored to be compliant with [RFC 0042](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md). To be precise, this means that the following things have changed:
- The newly introduced option [](#opt-services.grafana.settings) is an attribute-set that
will be converted into Grafana's INI format. This means that the configuration from
[Grafana's configuration reference](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/)
can be directly written as attribute-set in Nix within this option.
- The option `services.grafana.extraOptions` has been removed. This option was an association
of environment variables for Grafana. If you had an expression like
- The `services.grafana.provision.datasources` and `services.grafana.provision.dashboards` options were converted to a [RFC 0042](https://github.com/NixOS/rfcs/blob/master/rfcs/0042-config-option.md) configuration. They also now support specifying the provisioning YAML file with `path` option. ```nix
{
services.grafana.extraOptions.SECURITY_ADMIN_USER = "foobar";
}
```
your Grafana instance was running with `GF_SECURITY_ADMIN_USER=foobar` in its environment.
For the migration, it is recommended to turn it into the INI format, i.e.
to declare
```nix
{
services.grafana.settings.security.admin_user = "foobar";
}
```
instead.
The keys in `services.grafana.extraOptions` have the format `<INI section name>_<Key Name>`.
Further details are outlined in the [configuration reference](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#override-configuration-with-environment-variables).
Alternatively you can also set all your values from `extraOptions` to
`systemd.services.grafana.environment`, make sure you don't forget to add
the `GF_` prefix though!
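    A minimal sketch of that alternative, reusing the example value from above:

    ```nix
    # Same effect as the old extraOptions entry, but via the systemd unit environment.
    systemd.services.grafana.environment.GF_SECURITY_ADMIN_USER = "foobar";
    ```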
- Previously, the options [](#opt-services.grafana.provision.datasources) and
[](#opt-services.grafana.provision.dashboards) expected lists of datasources
or dashboards for the [declarative provisioning](https://grafana.com/docs/grafana/latest/administration/provisioning/).
To declare lists of
- **datasources**, please rename your declarations to [](#opt-services.grafana.provision.datasources.settings.datasources).
- **dashboards**, please rename your declarations to [](#opt-services.grafana.provision.dashboards.settings.providers).
    This change was made to support additional provisioning features:
- It's possible to declare the `apiVersion` of your dashboards and datasources
by [](#opt-services.grafana.provision.datasources.settings.apiVersion) (or
[](#opt-services.grafana.provision.dashboards.settings.apiVersion)).
- Instead of declaring datasources and dashboards in pure Nix, it's also possible
to specify configuration files (or directories) with YAML instead using
[](#opt-services.grafana.provision.datasources.path) (or
      [](#opt-services.grafana.provision.dashboards.path)). This is useful when having
provisioning files from non-NixOS Grafana instances that you also want to
deploy to NixOS.
__Note:__ secrets from these files will be leaked into the store unless you use a
[**file**-provider or env-var](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#file-provider) for secrets!
- [](#opt-services.grafana.provision.notifiers) is not affected by this change because
      this feature is deprecated by Grafana and will probably be removed in Grafana 10.
It's recommended to use `services.grafana.provision.alerting.contactPoints` instead.
- The `services.grafana.provision.alerting` option was added. It includes suboptions for every alerting-related objects (with the exception of `notifiers`), which means it's now possible to configure modern Grafana alerting declaratively. - The `services.grafana.provision.alerting` option was added. It includes suboptions for every alerting-related objects (with the exception of `notifiers`), which means it's now possible to configure modern Grafana alerting declaratively.
@ -341,6 +457,8 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- The `documentation.nixos.options.allowDocBook` option was added to ease the transition to CommonMark option documentation. Setting this option to `false` causes an error for every option included in the manual that uses DocBook documentation; it defaults to `true` to preserve the previous behavior and will be removed once the transition to CommonMark is complete. - The `documentation.nixos.options.allowDocBook` option was added to ease the transition to CommonMark option documentation. Setting this option to `false` causes an error for every option included in the manual that uses DocBook documentation; it defaults to `true` to preserve the previous behavior and will be removed once the transition to CommonMark is complete.
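  A minimal sketch of opting in to the stricter behaviour described above:

  ```nix
  # Fail the manual build if any included option still documents itself in DocBook.
  documentation.nixos.options.allowDocBook = false;
  ```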
- The redis module now persists each instance's configuration file in the state directory, in order to support some more advanced use cases like sentinel.
- The udisks2 service, available at `services.udisks2.enable`, is now disabled by default. It will automatically be enabled through services and desktop environments as needed. - The udisks2 service, available at `services.udisks2.enable`, is now disabled by default. It will automatically be enabled through services and desktop environments as needed.
This also means that polkit will now actually be disabled by default. The default for `security.polkit.enable` was already flipped in the previous release, but udisks2 being enabled by default re-enabled it. This also means that polkit will now actually be disabled by default. The default for `security.polkit.enable` was already flipped in the previous release, but udisks2 being enabled by default re-enabled it.
@ -356,6 +474,14 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- Add udev rules for the Teensy family of microcontrollers. - Add udev rules for the Teensy family of microcontrollers.
- The Qt QML disk cache is now disabled by default. This fixes a
long-standing issue where updating Qt/KDE apps would sometimes cause
them to crash or behave strangely without explanation. Those concerned
about the small (~10%) performance hit to application startup can
re-enable the cache (and expose themselves to gremlins) by setting the
  environment variable `QML_FORCE_DISK_CACHE` to `1` using e.g. the
`environment.sessionVariables` NixOS option.
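  A minimal sketch of re-enabling the cache as described above:

  ```nix
  # Trade crash-proofing for ~10% faster startup of QML applications.
  environment.sessionVariables.QML_FORCE_DISK_CACHE = "1";
  ```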
- systemd-oomd is enabled by default. Depending on which systemd units have - systemd-oomd is enabled by default. Depending on which systemd units have
`ManagedOOMSwap=kill` or `ManagedOOMMemoryPressure=kill`, systemd-oomd will `ManagedOOMSwap=kill` or `ManagedOOMMemoryPressure=kill`, systemd-oomd will
SIGKILL all the processes under the appropriate descendant cgroups when the SIGKILL all the processes under the appropriate descendant cgroups when the
@ -381,8 +507,18 @@ Available as [services.patroni](options.html#opt-services.patroni.enable).
- There is a new module for the `xfconf` program (the Xfce configuration storage system), which has a dbus service. - There is a new module for the `xfconf` program (the Xfce configuration storage system), which has a dbus service.
- The Mastodon package got upgraded from the major version 3 to 4. See the [v4.0.0 release notes](https://github.com/mastodon/mastodon/releases/tag/v4.0.0) for a list of changes. On standard setups, no manual migration steps are required. Nevertheless, a database backup is recommended.
- The `nomad` package now defaults to 1.3, which no longer has a downgrade path to releases 1.2 or older. - The `nomad` package now defaults to 1.3, which no longer has a downgrade path to releases 1.2 or older.
- The `nodePackages` package set now defaults to the LTS release in the `nodejs` package again, instead of being pinned to `nodejs-14_x`. Several updates to node2nix have been made for compatibility with newer Node.js and npm versions and a new `postRebuild` hook has been added for packages to perform extra build steps before the npm install step prunes dev dependencies. - The `nodePackages` package set now defaults to the LTS release in the `nodejs` package again, instead of being pinned to `nodejs-14_x`. Several updates to node2nix have been made for compatibility with newer Node.js and npm versions and a new `postRebuild` hook has been added for packages to perform extra build steps before the npm install step prunes dev dependencies.
- `boot.kernel.sysctl` is defined as a freeformType and adds a custom merge option for "net.core.rmem_max" (taking the highest value defined to avoid conflicts between 2 services trying to set that value).
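  For illustration, two modules can now both raise this limit and the highest declared value wins (value illustrative):

  ```nix
  # Merged with any other declarations of net.core.rmem_max; the maximum is applied.
  boot.kernel.sysctl."net.core.rmem_max" = 8388608;
  ```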
- The `mame` package does not ship with its tools anymore in the default output. They were moved to a separate `tools` output instead. For convenience, the `mame-tools` package was added for those who want to use it.
- A NixOS module for Firefox has been added which allows preferences and [policies](https://github.com/mozilla/policy-templates/blob/master/README.md) to be set. This also allows extensions to be installed via the `ExtensionSettings` policy. The new options are under `programs.firefox`.
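  A minimal sketch under the new `programs.firefox` options (the policy name follows Mozilla's policy templates; treat the exact attribute layout as illustrative):

  ```nix
  programs.firefox = {
    enable = true;
    # Enterprise policy applied via the policies mechanism linked above.
    policies.DisableTelemetry = true;
  };
  ```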
- The option `services.picom.experimentalBackends` was removed since it is now the default and the option will cause `picom` to quit instead.
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. --> <!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->

View file

@ -40,6 +40,8 @@
# `false`, and a different renderer may be used with different bugs and performance # `false`, and a different renderer may be used with different bugs and performance
# characteristics but (hopefully) indistinguishable output. # characteristics but (hopefully) indistinguishable output.
, allowDocBook ? true , allowDocBook ? true
# whether lib.mdDoc is required for descriptions to be read as markdown.
, markdownByDefault ? false
}: }:
let let
@ -152,6 +154,7 @@ in rec {
python ${./mergeJSON.py} \ python ${./mergeJSON.py} \
${lib.optionalString warningsAreErrors "--warnings-are-errors"} \ ${lib.optionalString warningsAreErrors "--warnings-are-errors"} \
${lib.optionalString (! allowDocBook) "--error-on-docbook"} \ ${lib.optionalString (! allowDocBook) "--error-on-docbook"} \
${lib.optionalString markdownByDefault "--markdown-by-default"} \
$baseJSON $options \ $baseJSON $options \
> $dst/options.json > $dst/options.json

View file

@ -201,19 +201,27 @@ def convertMD(options: Dict[str, Any]) -> str:
return option[key]['_type'] == typ return option[key]['_type'] == typ
for (name, option) in options.items(): for (name, option) in options.items():
try:
if optionIs(option, 'description', 'mdDoc'): if optionIs(option, 'description', 'mdDoc'):
option['description'] = convertString(name, option['description']['text']) option['description'] = convertString(name, option['description']['text'])
elif markdownByDefault:
option['description'] = convertString(name, option['description'])
if optionIs(option, 'example', 'literalMD'): if optionIs(option, 'example', 'literalMD'):
docbook = convertString(name, option['example']['text']) docbook = convertString(name, option['example']['text'])
option['example'] = { '_type': 'literalDocBook', 'text': docbook } option['example'] = { '_type': 'literalDocBook', 'text': docbook }
if optionIs(option, 'default', 'literalMD'): if optionIs(option, 'default', 'literalMD'):
docbook = convertString(name, option['default']['text']) docbook = convertString(name, option['default']['text'])
option['default'] = { '_type': 'literalDocBook', 'text': docbook } option['default'] = { '_type': 'literalDocBook', 'text': docbook }
except Exception as e:
raise Exception(f"Failed to render option {name}: {str(e)}")
return options return options
warningsAreErrors = False warningsAreErrors = False
errorOnDocbook = False errorOnDocbook = False
markdownByDefault = False
optOffset = 0 optOffset = 0
for arg in sys.argv[1:]: for arg in sys.argv[1:]:
if arg == "--warnings-are-errors": if arg == "--warnings-are-errors":
@ -222,6 +230,9 @@ for arg in sys.argv[1:]:
if arg == "--error-on-docbook": if arg == "--error-on-docbook":
optOffset += 1 optOffset += 1
errorOnDocbook = True errorOnDocbook = True
if arg == "--markdown-by-default":
optOffset += 1
markdownByDefault = True
options = pivot(json.load(open(sys.argv[1 + optOffset], 'r'))) options = pivot(json.load(open(sys.argv[1 + optOffset], 'r')))
overrides = pivot(json.load(open(sys.argv[2 + optOffset], 'r'))) overrides = pivot(json.load(open(sys.argv[2 + optOffset], 'r')))

View file

@ -684,10 +684,10 @@ class Machine:
with self.nested("waiting for {} to appear on tty {}".format(regexp, tty)): with self.nested("waiting for {} to appear on tty {}".format(regexp, tty)):
retry(tty_matches) retry(tty_matches)
def send_chars(self, chars: str) -> None: def send_chars(self, chars: str, delay: Optional[float] = 0.01) -> None:
with self.nested("sending keys {}".format(chars)): with self.nested("sending keys {}".format(chars)):
for char in chars: for char in chars:
self.send_key(char) self.send_key(char, delay)
def wait_for_file(self, filename: str) -> None: def wait_for_file(self, filename: str) -> None:
"""Waits until the file exists in machine's file system.""" """Waits until the file exists in machine's file system."""
@ -860,10 +860,11 @@ class Machine:
if matches is not None: if matches is not None:
return return
def send_key(self, key: str) -> None: def send_key(self, key: str, delay: Optional[float] = 0.01) -> None:
key = CHAR_TO_KEY.get(key, key) key = CHAR_TO_KEY.get(key, key)
self.send_monitor_command("sendkey {}".format(key)) self.send_monitor_command("sendkey {}".format(key))
time.sleep(0.01) if delay is not None:
time.sleep(delay)
def send_console(self, chars: str) -> None: def send_console(self, chars: str) -> None:
assert self.process assert self.process
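As a sketch of how the new `delay` parameter could be used from a NixOS VM test (the test name and the empty machine configuration are placeholders, not part of this diff):

```nix
{
  name = "send-chars-delay";
  nodes.machine = { ... }: { };
  testScript = ''
    machine.wait_for_unit("multi-user.target")
    # Type slowly (50 ms per key), then send Enter without any extra delay.
    machine.send_chars("root\n", delay=0.05)
    machine.send_key("ret", delay=None)
  '';
}
```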

View file

@ -9,9 +9,6 @@
# Modules to add to each VM # Modules to add to each VM
, extraConfigurations ? [ ] , extraConfigurations ? [ ]
}: }:
with pkgs;
let let
nixos-lib = import ./default.nix { inherit (pkgs) lib; }; nixos-lib = import ./default.nix { inherit (pkgs) lib; };
in in

View file

@ -34,14 +34,16 @@ let
"/share/unimaps" "/share/unimaps"
]; ];
}; };
setVconsole = !config.boot.isContainer;
in in
{ {
###### interface ###### interface
options.console = { options.console = {
enable = mkEnableOption (lib.mdDoc "virtual console") // {
default = true;
};
font = mkOption { font = mkOption {
type = with types; either str path; type = with types; either str path;
default = "Lat2-Terminus16"; default = "Lat2-Terminus16";
@ -125,11 +127,17 @@ in
''); '');
} }
(mkIf (!setVconsole) { (mkIf (!cfg.enable) {
systemd.services.systemd-vconsole-setup.enable = false; systemd.services = {
"serial-getty@ttyS0".enable = false;
"serial-getty@hvc0".enable = false;
"getty@tty1".enable = false;
"autovt@".enable = false;
systemd-vconsole-setup.enable = false;
};
}) })
(mkIf setVconsole (mkMerge [ (mkIf cfg.enable (mkMerge [
{ environment.systemPackages = [ pkgs.kbd ]; { environment.systemPackages = [ pkgs.kbd ];
# Let systemd-vconsole-setup.service do the work of setting up the # Let systemd-vconsole-setup.service do the work of setting up the

View file

@ -52,10 +52,8 @@ with lib;
environment.extraSetup = '' environment.extraSetup = ''
# For each icon theme directory ... # For each icon theme directory ...
find $out/share/icons -exec test -d {} ';' -mindepth 1 -maxdepth 1 -print0 | while read -d $'\0' themedir
find $out/share/icons -mindepth 1 -maxdepth 1 -print0 | while read -d $'\0' themedir
do do
# In order to build the cache, the theme dir should be # In order to build the cache, the theme dir should be
# writable. When the theme dir is a symbolic link to somewhere # writable. When the theme dir is a symbolic link to somewhere
# in the nix store it is not writable and it means that only # in the nix store it is not writable and it means that only

View file

@ -94,7 +94,7 @@ in
after = [ "suspend.target" "hibernate.target" "hybrid-sleep.target" "suspend-then-hibernate.target" ]; after = [ "suspend.target" "hibernate.target" "hybrid-sleep.target" "suspend-then-hibernate.target" ];
script = script =
'' ''
/run/current-system/systemd/bin/systemctl try-restart post-resume.target /run/current-system/systemd/bin/systemctl try-restart --no-block post-resume.target
${cfg.resumeCommands} ${cfg.resumeCommands}
${cfg.powerUpCommands} ${cfg.powerUpCommands}
''; '';

View file

@ -21,11 +21,24 @@ in
options = { options = {
boot.kernel.sysctl = mkOption { boot.kernel.sysctl = mkOption {
type = types.submodule {
freeformType = types.attrsOf sysctlOption;
options."net.core.rmem_max" = mkOption {
type = types.nullOr types.ints.unsigned // {
merge = loc: defs:
foldl
(a: b: if b.value == null then null else lib.max a b.value)
0
(filterOverrides defs);
};
default = null;
description = lib.mdDoc "The maximum socket receive buffer size. In case of conflicting values, the highest will be used.";
};
};
default = {}; default = {};
example = literalExpression '' example = literalExpression ''
{ "net.ipv4.tcp_syncookies" = false; "vm.swappiness" = 60; } { "net.ipv4.tcp_syncookies" = false; "vm.swappiness" = 60; }
''; '';
type = types.attrsOf sysctlOption;
description = lib.mdDoc '' description = lib.mdDoc ''
Runtime parameters of the Linux kernel, as set by Runtime parameters of the Linux kernel, as set by
{manpage}`sysctl(8)`. Note that sysctl {manpage}`sysctl(8)`. Note that sysctl
@ -35,6 +48,7 @@ in
parameter may be a string, integer, boolean, or null parameter may be a string, integer, boolean, or null
(signifying the option will not appear at all). (signifying the option will not appear at all).
''; '';
}; };
}; };
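To illustrate the new merge behaviour, here is a hypothetical configuration (not part of this diff) where two modules disagree on `net.core.rmem_max`; the highest value wins, while other keys are merged as before:

```nix
{
  imports = [
    { boot.kernel.sysctl."net.core.rmem_max" = 4194304; }
    { boot.kernel.sysctl."net.core.rmem_max" = 8388608; } # the highest value is kept
  ];

  # All other parameters remain freeform.
  boot.kernel.sysctl."vm.swappiness" = 10;
}
```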

View file

@ -35,7 +35,7 @@ let
''; '';
hashedPasswordDescription = '' hashedPasswordDescription = ''
To generate a hashed password run `mkpasswd -m sha-512`. To generate a hashed password run `mkpasswd`.
If set to an empty string (`""`), this user will If set to an empty string (`""`), this user will
be able to log in without being asked for a password (but not via remote be able to log in without being asked for a password (but not via remote
@ -592,6 +592,26 @@ in {
''; '';
}; };
# Warn about user accounts with deprecated password hashing schemes
system.activationScripts.hashes = {
deps = [ "users" ];
text = ''
users=()
while IFS=: read -r user hash tail; do
if [[ "$hash" = "$"* && ! "$hash" =~ ^\$(y|gy|7|2b|2y|2a|6)\$ ]]; then
users+=("$user")
fi
done </etc/shadow
if (( "''${#users[@]}" )); then
echo "
WARNING: The following user accounts rely on password hashes that will
be removed in NixOS 23.05. They should be renewed as soon as possible."
printf ' - %s\n' "''${users[@]}"
fi
'';
};
# for backwards compatibility # for backwards compatibility
system.activationScripts.groups = stringAfter [ "users" ] ""; system.activationScripts.groups = stringAfter [ "users" ] "";
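A sketch of a user definition that avoids the warning above; the user name is a placeholder and the hash must be replaced by real `mkpasswd` output (a current `mkpasswd` is assumed to default to yescrypt, i.e. a `$y$...` hash):

```nix
{
  users.users.alice = {
    isNormalUser = true;
    # Paste the output of `mkpasswd` here; yescrypt ("$y$...") is one of the
    # schemes the activation-time check accepts.
    hashedPassword = "<output of mkpasswd>";
  };
}
```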

View file

@ -8,13 +8,12 @@ in
options = { options = {
hardware.brillo = { hardware.brillo = {
enable = mkEnableOption (lib.mdDoc '' enable = mkEnableOption (lib.mdDoc ''
Enable brillo in userspace. brillo in userspace.
This will allow brightness control from users in the video group. This will allow brightness control from users in the video group
''); '');
}; };
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
services.udev.packages = [ pkgs.brillo ]; services.udev.packages = [ pkgs.brillo ];
environment.systemPackages = [ pkgs.brillo ]; environment.systemPackages = [ pkgs.brillo ];

View file

@ -10,7 +10,7 @@ let
}; };
in { in {
options.hardware.ubertooth = { options.hardware.ubertooth = {
enable = mkEnableOption (lib.mdDoc "Enable the Ubertooth software and its udev rules."); enable = mkEnableOption (lib.mdDoc "Ubertooth software and its udev rules");
group = mkOption { group = mkOption {
type = types.str; type = types.str;

View file

@ -3,7 +3,7 @@
with lib; with lib;
{ {
options.hardware.wooting.enable = options.hardware.wooting.enable =
mkEnableOption (lib.mdDoc "Enable support for Wooting keyboards"); mkEnableOption (lib.mdDoc "support for Wooting keyboards");
config = mkIf config.hardware.wooting.enable { config = mkIf config.hardware.wooting.enable {
environment.systemPackages = [ pkgs.wootility ]; environment.systemPackages = [ pkgs.wootility ];

View file

@ -14,7 +14,10 @@ in
calamares-nixos calamares-nixos
calamares-nixos-autostart calamares-nixos-autostart
calamares-nixos-extensions calamares-nixos-extensions
# Needed for calamares QML module packagechooserq # Get list of locales
libsForQt5.full glibcLocales
]; ];
# Support choosing from any locale
i18n.supportedLocales = [ "all" ];
} }

View file

@ -355,6 +355,7 @@ in
pipewire = 323; pipewire = 323;
rstudio-server = 324; rstudio-server = 324;
localtimed = 325; localtimed = 325;
automatic-timezoned = 326;
# When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399! # When adding a uid, make sure it doesn't match an existing gid. And don't use uids above 399!
@ -664,6 +665,7 @@ in
pipewire = 323; pipewire = 323;
rstudio-server = 324; rstudio-server = 324;
localtimed = 325; localtimed = 325;
automatic-timezoned = 326;
# When adding a gid, make sure it doesn't match an existing # When adding a gid, make sure it doesn't match an existing
# uid. Users and groups with the same name should have equal # uid. Users and groups with the same name should have equal

View file

@ -52,7 +52,7 @@ in
environment.systemPackages = [ cfg.package ]; environment.systemPackages = [ cfg.package ];
environment.etc."man_db.conf".text = environment.etc."man_db.conf".text =
let let
manualCache = pkgs.runCommandLocal "man-cache" { } '' manualCache = pkgs.runCommand "man-cache" { } ''
echo "MANDB_MAP ${cfg.manualPages}/share/man $out" > man.conf echo "MANDB_MAP ${cfg.manualPages}/share/man $out" > man.conf
${cfg.package}/bin/mandb -C man.conf -psc >/dev/null 2>&1 ${cfg.package}/bin/mandb -C man.conf -psc >/dev/null 2>&1
''; '';

View file

@ -307,7 +307,7 @@ in
'' ''
else else
throw '' throw ''
Neither ${opt.hostPlatform} nor or the legacy option ${opt.system} has been set. Neither ${opt.hostPlatform} nor the legacy option ${opt.system} has been set.
You can set ${opt.hostPlatform} in hardware-configuration.nix by re-running You can set ${opt.hostPlatform} in hardware-configuration.nix by re-running
a recent version of nixos-generate-config. a recent version of nixos-generate-config.
The option ${opt.system} is still fully supported for NixOS 22.05 interoperability, The option ${opt.system} is still fully supported for NixOS 22.05 interoperability,

View file

@ -157,6 +157,7 @@
./programs/extra-container.nix ./programs/extra-container.nix
./programs/feedbackd.nix ./programs/feedbackd.nix
./programs/file-roller.nix ./programs/file-roller.nix
./programs/firefox.nix
./programs/firejail.nix ./programs/firejail.nix
./programs/fish.nix ./programs/fish.nix
./programs/flashrom.nix ./programs/flashrom.nix
@ -186,6 +187,8 @@
./programs/less.nix ./programs/less.nix
./programs/liboping.nix ./programs/liboping.nix
./programs/light.nix ./programs/light.nix
./programs/mdevctl.nix
./programs/mepo.nix
./programs/mosh.nix ./programs/mosh.nix
./programs/mininet.nix ./programs/mininet.nix
./programs/msmtp.nix ./programs/msmtp.nix
@ -320,6 +323,7 @@
./services/backup/znapzend.nix ./services/backup/znapzend.nix
./services/blockchain/ethereum/geth.nix ./services/blockchain/ethereum/geth.nix
./services/blockchain/ethereum/erigon.nix ./services/blockchain/ethereum/erigon.nix
./services/blockchain/ethereum/lighthouse.nix
./services/backup/zrepl.nix ./services/backup/zrepl.nix
./services/cluster/corosync/default.nix ./services/cluster/corosync/default.nix
./services/cluster/hadoop/default.nix ./services/cluster/hadoop/default.nix
@ -378,6 +382,7 @@
./services/databases/pgmanage.nix ./services/databases/pgmanage.nix
./services/databases/postgresql.nix ./services/databases/postgresql.nix
./services/databases/redis.nix ./services/databases/redis.nix
./services/databases/surrealdb.nix
./services/databases/victoriametrics.nix ./services/databases/victoriametrics.nix
./services/desktops/accountsservice.nix ./services/desktops/accountsservice.nix
./services/desktops/bamf.nix ./services/desktops/bamf.nix
@ -572,7 +577,6 @@
./services/misc/etcd.nix ./services/misc/etcd.nix
./services/misc/etebase-server.nix ./services/misc/etebase-server.nix
./services/misc/etesync-dav.nix ./services/misc/etesync-dav.nix
./services/misc/ethminer.nix
./services/misc/exhibitor.nix ./services/misc/exhibitor.nix
./services/misc/felix.nix ./services/misc/felix.nix
./services/misc/freeswitch.nix ./services/misc/freeswitch.nix
@ -715,6 +719,7 @@
./services/monitoring/teamviewer.nix ./services/monitoring/teamviewer.nix
./services/monitoring/telegraf.nix ./services/monitoring/telegraf.nix
./services/monitoring/thanos.nix ./services/monitoring/thanos.nix
./services/monitoring/tremor-rs.nix
./services/monitoring/tuptime.nix ./services/monitoring/tuptime.nix
./services/monitoring/unifi-poller.nix ./services/monitoring/unifi-poller.nix
./services/monitoring/ups.nix ./services/monitoring/ups.nix
@ -771,6 +776,7 @@
./services/networking/blockbook-frontend.nix ./services/networking/blockbook-frontend.nix
./services/networking/blocky.nix ./services/networking/blocky.nix
./services/networking/charybdis.nix ./services/networking/charybdis.nix
./services/networking/chisel-server.nix
./services/networking/cjdns.nix ./services/networking/cjdns.nix
./services/networking/cloudflare-dyndns.nix ./services/networking/cloudflare-dyndns.nix
./services/networking/cntlm.nix ./services/networking/cntlm.nix
@ -1045,6 +1051,7 @@
./services/security/vault.nix ./services/security/vault.nix
./services/security/vaultwarden/default.nix ./services/security/vaultwarden/default.nix
./services/security/yubikey-agent.nix ./services/security/yubikey-agent.nix
./services/system/automatic-timezoned.nix
./services/system/cachix-agent/default.nix ./services/system/cachix-agent/default.nix
./services/system/cachix-watch-store.nix ./services/system/cachix-watch-store.nix
./services/system/cloud-init.nix ./services/system/cloud-init.nix
@ -1221,6 +1228,7 @@
./services/x11/xfs.nix ./services/x11/xfs.nix
./services/x11/xserver.nix ./services/x11/xserver.nix
./system/activation/activation-script.nix ./system/activation/activation-script.nix
./system/activation/specialisation.nix
./system/activation/top-level.nix ./system/activation/top-level.nix
./system/boot/binfmt.nix ./system/boot/binfmt.nix
./system/boot/emergency-mode.nix ./system/boot/emergency-mode.nix

View file

@ -12,7 +12,7 @@ let
cfg = config.programs.bash; cfg = config.programs.bash;
bashAliases = concatStringsSep "\n" ( bashAliases = concatStringsSep "\n" (
mapAttrsFlatten (k: v: "alias ${k}=${escapeShellArg v}") mapAttrsFlatten (k: v: "alias -- ${k}=${escapeShellArg v}")
(filterAttrs (k: v: v != null) cfg.shellAliases) (filterAttrs (k: v: v != null) cfg.shellAliases)
); );

View file

@ -0,0 +1,91 @@
{ pkgs, config, lib, ... }:
with lib;
let
cfg = config.programs.firefox;
policyFormat = pkgs.formats.json { };
organisationInfo = ''
When this option is in use, Firefox will inform you that "your browser
is managed by your organisation". That message appears because NixOS
installs what you have declared here such that it cannot be overridden
through the user interface. It does not mean that someone else has been
given control of your browser, unless of course they also control your
NixOS configuration.
'';
in {
options.programs.firefox = {
enable = mkEnableOption (mdDoc "the Firefox web browser");
package = mkOption {
description = mdDoc "Firefox package to use.";
type = types.package;
default = pkgs.firefox;
defaultText = literalExpression "pkgs.firefox";
relatedPackages = [
"firefox"
"firefox-beta-bin"
"firefox-bin"
"firefox-devedition-bin"
"firefox-esr"
"firefox-esr-wayland"
"firefox-wayland"
];
};
policies = mkOption {
description = mdDoc ''
Group policies to install.
See [Mozilla's documentation](https://github.com/mozilla/policy-templates/blob/master/README.md)
for a list of available options.
This can be used to install extensions declaratively! Check out the
documentation of the `ExtensionSettings` policy for details.
${organisationInfo}
'';
type = policyFormat.type;
default = {};
};
preferences = mkOption {
description = mdDoc ''
Preferences to set from `about:config`.
Some of these can be configured more ergonomically using policies.
${organisationInfo}
'';
type = with types; attrsOf (oneOf [ bool int str ]);
default = {};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
environment.etc."firefox/policies/policies.json".source =
let policiesJSON =
policyFormat.generate
"firefox-policies.json"
{ inherit (cfg) policies; };
in mkIf (cfg.policies != {}) "${policiesJSON}";
# Preferences are converted into a policy
programs.firefox.policies =
mkIf (cfg.preferences != {})
{
Preferences = (mapAttrs (name: value: {
Value = value;
Status = "locked";
}) cfg.preferences);
};
};
meta.maintainers = with maintainers; [ danth ];
}
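A minimal sketch of how the new module could be used; the policy names come from Mozilla's policy templates, while the extension ID and download URL are placeholders, not real values:

```nix
{
  programs.firefox = {
    enable = true;
    policies = {
      DisablePocket = true;
      # Declaratively install an extension; ID and URL are placeholders.
      ExtensionSettings."extension-id@example.com" = {
        installation_mode = "force_installed";
        install_url = "https://example.org/extension.xpi";
      };
    };
    preferences."browser.aboutConfig.showWarning" = false;
  };
}
```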

View file

@ -4,7 +4,7 @@ let
cfg = config.programs.kclock; cfg = config.programs.kclock;
kclockPkg = pkgs.libsForQt5.kclock; kclockPkg = pkgs.libsForQt5.kclock;
in { in {
options.programs.kclock = { enable = mkEnableOption (lib.mdDoc "Enable KClock"); }; options.programs.kclock = { enable = mkEnableOption (lib.mdDoc "KClock"); };
config = mkIf cfg.enable { config = mkIf cfg.enable {
services.dbus.packages = [ kclockPkg ]; services.dbus.packages = [ kclockPkg ];

View file

@ -103,7 +103,8 @@ in
type = types.nullOr types.str; type = types.nullOr types.str;
default = null; default = null;
description = lib.mdDoc '' description = lib.mdDoc ''
When less closes a file opened in such a way, it will call another program, called the input postprocessor, which may perform any desired clean-up action (such as deleting the replacement file created by LESSOPEN). When less closes a file opened in such a way, it will call another program, called the input postprocessor,
which may perform any desired clean-up action (such as deleting the replacement file created by LESSOPEN).
''; '';
}; };
}; };

View file

@ -0,0 +1,18 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.programs.mdevctl;
in {
options.programs.mdevctl = {
enable = mkEnableOption (lib.mdDoc "Mediated Device Management");
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ mdevctl ];
environment.etc."mdevctl.d/scripts.d/notifiers/.keep".text = "";
environment.etc."mdevctl.d/scripts.d/callouts/.keep".text = "";
};
}

View file

@ -0,0 +1,46 @@
{ pkgs, config, lib, ...}:
with lib;
let
cfg = config.programs.mepo;
in
{
options.programs.mepo = {
enable = mkEnableOption (mdDoc "Mepo");
locationBackends = {
gpsd = mkOption {
type = types.bool;
default = false;
description = mdDoc ''
Whether to enable location detection via gpsd.
This may require additional configuration of gpsd, see [here](#opt-services.gpsd.enable)
'';
};
geoclue = mkOption {
type = types.bool;
default = true;
description = mdDoc "Whether to enable location detection via geoclue";
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [
mepo
] ++ lib.optional cfg.locationBackends.geoclue geoclue2-with-demo-agent
++ lib.optional cfg.locationBackends.gpsd gpsd;
services.geoclue2 = mkIf cfg.locationBackends.geoclue {
enable = true;
appConfig.where-am-i = {
isAllowed = true;
isSystem = false;
};
};
services.gpsd.enable = cfg.locationBackends.gpsd;
};
meta.maintainers = with maintainers; [ laalsaas ];
}
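A minimal sketch of enabling the module with the gpsd backend (hypothetical host configuration, not part of this diff):

```nix
{
  programs.mepo = {
    enable = true;
    locationBackends = {
      geoclue = false;
      gpsd = true; # also pulls in gpsd and sets services.gpsd.enable
    };
  };
}
```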

View file

@ -4,16 +4,30 @@ with lib;
let let
cfg = config.programs.steam; cfg = config.programs.steam;
in {
options.programs.steam = {
enable = mkEnableOption (lib.mdDoc "steam");
steam = pkgs.steam.override { package = mkOption {
type = types.package;
default = pkgs.steam.override {
extraLibraries = pkgs: with config.hardware.opengl; extraLibraries = pkgs: with config.hardware.opengl;
if pkgs.hostPlatform.is64bit if pkgs.hostPlatform.is64bit
then [ package ] ++ extraPackages then [ package ] ++ extraPackages
else [ package32 ] ++ extraPackages32; else [ package32 ] ++ extraPackages32;
}; };
in { defaultText = literalExpression ''
options.programs.steam = { pkgs.steam.override {
enable = mkEnableOption (lib.mdDoc "steam"); extraLibraries = pkgs: with config.hardware.opengl;
if pkgs.hostPlatform.is64bit
then [ package ] ++ extraPackages
else [ package32 ] ++ extraPackages32;
}
'';
description = lib.mdDoc ''
steam package to use.
'';
};
remotePlay.openFirewall = mkOption { remotePlay.openFirewall = mkOption {
type = types.bool; type = types.bool;
@ -44,7 +58,10 @@ in {
hardware.steam-hardware.enable = true; hardware.steam-hardware.enable = true;
environment.systemPackages = [ steam steam.run ]; environment.systemPackages = [
cfg.package
cfg.package.run
];
networking.firewall = lib.mkMerge [ networking.firewall = lib.mkMerge [
(mkIf cfg.remotePlay.openFirewall { (mkIf cfg.remotePlay.openFirewall {

View file

@ -178,6 +178,16 @@ in {
description = lib.mdDoc "List of plugins to install."; description = lib.mdDoc "List of plugins to install.";
example = lib.literalExpression "[ pkgs.tmuxPlugins.nord ]"; example = lib.literalExpression "[ pkgs.tmuxPlugins.nord ]";
}; };
withUtempter = mkOption {
description = lib.mdDoc ''
Whether to enable libutempter for tmux.
This is required so that tmux can write to /var/run/utmp (which can be queried with `who` to display currently connected user sessions).
Note that this will add a setgid wrapper for the group utmp!
'';
default = true;
type = types.bool;
};
}; };
}; };
@ -193,6 +203,15 @@ in {
TMUX_TMPDIR = lib.optional cfg.secureSocket ''''${XDG_RUNTIME_DIR:-"/run/user/$(id -u)"}''; TMUX_TMPDIR = lib.optional cfg.secureSocket ''''${XDG_RUNTIME_DIR:-"/run/user/$(id -u)"}'';
}; };
}; };
security.wrappers = mkIf cfg.withUtempter {
utempter = {
source = "${pkgs.libutempter}/lib/utempter/utempter";
owner = "root";
group = "utmp";
setuid = false;
setgid = true;
};
};
}; };
imports = [ imports = [

View file

@ -12,7 +12,7 @@ let
opt = options.programs.zsh; opt = options.programs.zsh;
zshAliases = concatStringsSep "\n" ( zshAliases = concatStringsSep "\n" (
mapAttrsFlatten (k: v: "alias ${k}=${escapeShellArg v}") mapAttrsFlatten (k: v: "alias -- ${k}=${escapeShellArg v}")
(filterAttrs (k: v: v != null) cfg.shellAliases) (filterAttrs (k: v: v != null) cfg.shellAliases)
); );
@ -173,10 +173,10 @@ in
# This file is read for all shells. # This file is read for all shells.
# Only execute this file once per shell. # Only execute this file once per shell.
if [ -n "$__ETC_ZSHENV_SOURCED" ]; then return; fi if [ -n "''${__ETC_ZSHENV_SOURCED-}" ]; then return; fi
__ETC_ZSHENV_SOURCED=1 __ETC_ZSHENV_SOURCED=1
if [ -z "$__NIXOS_SET_ENVIRONMENT_DONE" ]; then if [ -z "''${__NIXOS_SET_ENVIRONMENT_DONE-}" ]; then
. ${config.system.build.setEnvironment} . ${config.system.build.setEnvironment}
fi fi
@ -206,7 +206,7 @@ in
${zshStartupNotes} ${zshStartupNotes}
# Only execute this file once per shell. # Only execute this file once per shell.
if [ -n "$__ETC_ZPROFILE_SOURCED" ]; then return; fi if [ -n "''${__ETC_ZPROFILE_SOURCED-}" ]; then return; fi
__ETC_ZPROFILE_SOURCED=1 __ETC_ZPROFILE_SOURCED=1
# Setup custom login shell init stuff. # Setup custom login shell init stuff.

View file

@ -392,6 +392,24 @@ let
''; '';
}; };
failDelay = {
enable = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
If enabled, this will replace the `FAIL_DELAY` setting from `login.defs`,
allowing the delay on failure to be set per-application.
'';
};
delay = mkOption {
default = 3000000;
type = types.int;
example = 1000000;
description = lib.mdDoc "The delay time (in microseconds) on failure.";
};
};
gnupg = { gnupg = {
enable = mkOption { enable = mkOption {
type = types.bool; type = types.bool;
@ -526,11 +544,13 @@ let
# We use try_first_pass the second time to avoid prompting password twice # We use try_first_pass the second time to avoid prompting password twice
(optionalString (cfg.unixAuth && (optionalString (cfg.unixAuth &&
(config.security.pam.enableEcryptfs (config.security.pam.enableEcryptfs
|| config.security.pam.enableFscrypt
|| cfg.pamMount || cfg.pamMount
|| cfg.enableKwallet || cfg.enableKwallet
|| cfg.enableGnomeKeyring || cfg.enableGnomeKeyring
|| cfg.googleAuthenticator.enable || cfg.googleAuthenticator.enable
|| cfg.gnupg.enable || cfg.gnupg.enable
|| cfg.failDelay.enable
|| cfg.duoSecurity.enable)) || cfg.duoSecurity.enable))
( (
'' ''
@ -539,6 +559,9 @@ let
optionalString config.security.pam.enableEcryptfs '' optionalString config.security.pam.enableEcryptfs ''
auth optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so unwrap auth optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so unwrap
'' + '' +
optionalString config.security.pam.enableFscrypt ''
auth optional ${pkgs.fscrypt-experimental}/lib/security/pam_fscrypt.so
'' +
optionalString cfg.pamMount '' optionalString cfg.pamMount ''
auth optional ${pkgs.pam_mount}/lib/security/pam_mount.so disable_interactive auth optional ${pkgs.pam_mount}/lib/security/pam_mount.so disable_interactive
'' + '' +
@ -551,6 +574,9 @@ let
optionalString cfg.gnupg.enable '' optionalString cfg.gnupg.enable ''
auth optional ${pkgs.pam_gnupg}/lib/security/pam_gnupg.so ${optionalString cfg.gnupg.storeOnly " store-only"} auth optional ${pkgs.pam_gnupg}/lib/security/pam_gnupg.so ${optionalString cfg.gnupg.storeOnly " store-only"}
'' + '' +
optionalString cfg.failDelay.enable ''
auth optional ${pkgs.pam}/lib/security/pam_faildelay.so delay=${toString cfg.failDelay.delay}
'' +
optionalString cfg.googleAuthenticator.enable '' optionalString cfg.googleAuthenticator.enable ''
auth required ${pkgs.google-authenticator}/lib/security/pam_google_authenticator.so no_increment_hotp auth required ${pkgs.google-authenticator}/lib/security/pam_google_authenticator.so no_increment_hotp
'' + '' +
@ -584,6 +610,9 @@ let
optionalString config.security.pam.enableEcryptfs '' optionalString config.security.pam.enableEcryptfs ''
password optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so password optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so
'' + '' +
optionalString config.security.pam.enableFscrypt ''
password optional ${pkgs.fscrypt-experimental}/lib/security/pam_fscrypt.so
'' +
optionalString cfg.pamMount '' optionalString cfg.pamMount ''
password optional ${pkgs.pam_mount}/lib/security/pam_mount.so password optional ${pkgs.pam_mount}/lib/security/pam_mount.so
'' + '' +
@ -630,6 +659,14 @@ let
optionalString config.security.pam.enableEcryptfs '' optionalString config.security.pam.enableEcryptfs ''
session optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so session optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so
'' + '' +
optionalString config.security.pam.enableFscrypt ''
# Work around https://github.com/systemd/systemd/issues/8598
# Skips the pam_fscrypt module for systemd-user sessions which do not have a password
# anyways.
# See also https://github.com/google/fscrypt/issues/95
session [success=1 default=ignore] pam_succeed_if.so service = systemd-user
session optional ${pkgs.fscrypt-experimental}/lib/security/pam_fscrypt.so
'' +
optionalString cfg.pamMount '' optionalString cfg.pamMount ''
session optional ${pkgs.pam_mount}/lib/security/pam_mount.so disable_interactive session optional ${pkgs.pam_mount}/lib/security/pam_mount.so disable_interactive
'' + '' +
@ -1146,6 +1183,14 @@ in
}; };
security.pam.enableEcryptfs = mkEnableOption (lib.mdDoc "eCryptfs PAM module (mounting ecryptfs home directory on login)"); security.pam.enableEcryptfs = mkEnableOption (lib.mdDoc "eCryptfs PAM module (mounting ecryptfs home directory on login)");
security.pam.enableFscrypt = mkEnableOption (lib.mdDoc ''
Enables fscrypt to automatically unlock directories with the user's login password.
This also enables a service at security.pam.services.fscrypt which is used by
fscrypt to verify the user's password when setting up a new protector. If you
use something other than pam_unix to verify user passwords, please remember to
adjust this PAM service.
'');
users.motd = mkOption { users.motd = mkOption {
default = null; default = null;
@ -1170,6 +1215,7 @@ in
++ optionals config.security.pam.enableOTPW [ pkgs.otpw ] ++ optionals config.security.pam.enableOTPW [ pkgs.otpw ]
++ optionals config.security.pam.oath.enable [ pkgs.oath-toolkit ] ++ optionals config.security.pam.oath.enable [ pkgs.oath-toolkit ]
++ optionals config.security.pam.p11.enable [ pkgs.pam_p11 ] ++ optionals config.security.pam.p11.enable [ pkgs.pam_p11 ]
++ optionals config.security.pam.enableFscrypt [ pkgs.fscrypt-experimental ]
++ optionals config.security.pam.u2f.enable [ pkgs.pam_u2f ]; ++ optionals config.security.pam.u2f.enable [ pkgs.pam_u2f ];
boot.supportedFilesystems = optionals config.security.pam.enableEcryptfs [ "ecryptfs" ]; boot.supportedFilesystems = optionals config.security.pam.enableEcryptfs [ "ecryptfs" ];
@ -1211,6 +1257,9 @@ in
it complains "Cannot create session: Already running in a it complains "Cannot create session: Already running in a
session". */ session". */
runuser-l = { rootOK = true; unixAuth = false; }; runuser-l = { rootOK = true; unixAuth = false; };
} // optionalAttrs (config.security.pam.enableFscrypt) {
# Allow fscrypt to verify login passphrase
fscrypt = {};
}; };
security.apparmor.includes."abstractions/pam" = let security.apparmor.includes."abstractions/pam" = let
@ -1275,6 +1324,9 @@ in
optionalString config.security.pam.enableEcryptfs '' optionalString config.security.pam.enableEcryptfs ''
mr ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so, mr ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so,
'' + '' +
optionalString config.security.pam.enableFscrypt ''
mr ${pkgs.fscrypt-experimental}/lib/security/pam_fscrypt.so,
'' +
optionalString (isEnabled (cfg: cfg.pamMount)) '' optionalString (isEnabled (cfg: cfg.pamMount)) ''
mr ${pkgs.pam_mount}/lib/security/pam_mount.so, mr ${pkgs.pam_mount}/lib/security/pam_mount.so,
'' + '' +
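A sketch of how the new options could be used together; `login` is just an example PAM service, and the half-second delay is an arbitrary value:

```nix
{
  # Unlock fscrypt-protected directories with the login password.
  security.pam.enableFscrypt = true;

  # Make failed console logins wait half a second before the next attempt.
  security.pam.services.login.failDelay = {
    enable = true;
    delay = 500000; # microseconds
  };
}
```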

View file

@ -29,7 +29,7 @@ in {
}; };
port = mkOption { port = mkOption {
type = types.int; type = types.port;
default = config.services.mpd.network.port; default = config.services.mpd.network.port;
defaultText = literalExpression "config.services.mpd.network.port"; defaultText = literalExpression "config.services.mpd.network.port";
description = lib.mdDoc "The port where MPD is listening."; description = lib.mdDoc "The port where MPD is listening.";

View file

@ -314,7 +314,7 @@ in {
port = mkOption { port = mkOption {
default = 9102; default = 9102;
type = types.int; type = types.port;
description = lib.mdDoc '' description = lib.mdDoc ''
This specifies the port number on which the Client listens for This specifies the port number on which the Client listens for
Director connections. It must agree with the FDPort specified in Director connections. It must agree with the FDPort specified in
@ -374,7 +374,7 @@ in {
port = mkOption { port = mkOption {
default = 9103; default = 9103;
type = types.int; type = types.port;
description = lib.mdDoc '' description = lib.mdDoc ''
Specifies port number on which the Storage daemon listens for Specifies port number on which the Storage daemon listens for
Director connections. Director connections.
@ -451,7 +451,7 @@ in {
port = mkOption { port = mkOption {
default = 9101; default = 9101;
type = types.int; type = types.port;
description = lib.mdDoc '' description = lib.mdDoc ''
Specify the port (a positive integer) on which the Director daemon Specify the port (a positive integer) on which the Director daemon
will listen for Bacula Console connections. This same port number will listen for Bacula Console connections. This same port number

View file

@ -12,7 +12,7 @@ in
port = mkOption { port = mkOption {
default = 8200; default = 8200;
type = types.int; type = types.port;
description = lib.mdDoc '' description = lib.mdDoc ''
Port serving the web interface Port serving the web interface
''; '';

View file

@ -13,6 +13,15 @@ in {
services.erigon = { services.erigon = {
enable = mkEnableOption (lib.mdDoc "Ethereum implementation on the efficiency frontier"); enable = mkEnableOption (lib.mdDoc "Ethereum implementation on the efficiency frontier");
secretJwtPath = mkOption {
type = types.path;
description = lib.mdDoc ''
Path to the JWT secret used for HTTP API authentication.
'';
default = "";
example = "config.age.secrets.ERIGON_JWT.path";
};
settings = mkOption { settings = mkOption {
description = lib.mdDoc '' description = lib.mdDoc ''
Configuration for Erigon Configuration for Erigon
@ -76,11 +85,12 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
serviceConfig = { serviceConfig = {
ExecStart = "${pkgs.erigon}/bin/erigon --config ${configFile}"; LoadCredential = "ERIGON_JWT:${cfg.secretJwtPath}";
ExecStart = "${pkgs.erigon}/bin/erigon --config ${configFile} --authrpc.jwtsecret=%d/ERIGON_JWT";
DynamicUser = true;
Restart = "on-failure"; Restart = "on-failure";
StateDirectory = "erigon"; StateDirectory = "erigon";
CapabilityBoundingSet = ""; CapabilityBoundingSet = "";
DynamicUser = true;
NoNewPrivileges = true; NoNewPrivileges = true;
PrivateTmp = true; PrivateTmp = true;
ProtectHome = true; ProtectHome = true;
@ -97,7 +107,6 @@ in {
RestrictNamespaces = true; RestrictNamespaces = true;
LockPersonality = true; LockPersonality = true;
RemoveIPC = true; RemoveIPC = true;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
SystemCallFilter = [ "@system-service" "~@privileged" ]; SystemCallFilter = [ "@system-service" "~@privileged" ];
}; };
}; };
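A minimal sketch of the new credential handling; the JWT path is a placeholder, and a real deployment would likely also set `services.erigon.settings`:

```nix
{
  services.erigon = {
    enable = true;
    # Must match the JWT secret configured on the consensus client.
    secretJwtPath = "/var/lib/secrets/erigon-jwt.hex";
  };
}
```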

View file

@ -0,0 +1,313 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.lighthouse;
in {
options = {
services.lighthouse = {
beacon = mkOption {
description = lib.mdDoc "Beacon node";
default = {};
type = types.submodule {
options = {
enable = lib.mkEnableOption (lib.mdDoc "Lightouse Beacon node");
dataDir = mkOption {
type = types.str;
default = "/var/lib/lighthouse-beacon";
description = lib.mdDoc ''
Directory where data will be stored. Each chain will be stored under its own specific subdirectory.
'';
};
address = mkOption {
type = types.str;
default = "0.0.0.0";
description = lib.mdDoc ''
Listen address of Beacon node.
'';
};
port = mkOption {
type = types.port;
default = 9000;
description = lib.mdDoc ''
Port number the Beacon node will be listening on.
'';
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Open the port in the firewall
'';
};
disableDepositContractSync = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Explicitly disables syncing of deposit logs from the execution node.
This overrides any previous option that depends on it.
Useful if you intend to run a non-validating beacon node.
'';
};
execution = {
address = mkOption {
type = types.str;
default = "127.0.0.1";
description = lib.mdDoc ''
Listen address for the execution layer.
'';
};
port = mkOption {
type = types.port;
default = 8551;
description = lib.mdDoc ''
Port number the Beacon node will be listening on for the execution layer.
'';
};
jwtPath = mkOption {
type = types.str;
default = "";
description = lib.mdDoc ''
Path for the jwt secret required to connect to the execution layer.
'';
};
};
http = {
enable = lib.mkEnableOption (lib.mdDoc "Beacon node http api");
port = mkOption {
type = types.port;
default = 5052;
description = lib.mdDoc ''
Port number of Beacon node RPC service.
'';
};
address = mkOption {
type = types.str;
default = "127.0.0.1";
description = lib.mdDoc ''
Listen address of Beacon node RPC service.
'';
};
};
metrics = {
enable = lib.mkEnableOption (lib.mdDoc "Beacon node prometheus metrics");
address = mkOption {
type = types.str;
default = "127.0.0.1";
description = lib.mdDoc ''
Listen address of Beacon node metrics service.
'';
};
port = mkOption {
type = types.port;
default = 5054;
description = lib.mdDoc ''
Port number of Beacon node metrics service.
'';
};
};
extraArgs = mkOption {
type = types.str;
description = lib.mdDoc ''
Additional arguments passed to the lighthouse beacon command.
'';
default = "";
example = "";
};
};
};
};
validator = mkOption {
description = lib.mdDoc "Validator node";
default = {};
type = types.submodule {
options = {
enable = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Enable Lightouse Validator node.";
};
dataDir = mkOption {
type = types.str;
default = "/var/lib/lighthouse-validator";
description = lib.mdDoc ''
Directory where data will be stored. Each chain will be stored under its own specific subdirectory.
'';
};
beaconNodes = mkOption {
type = types.listOf types.str;
default = ["http://localhost:5052"];
description = lib.mdDoc ''
Beacon nodes to connect to.
'';
};
metrics = {
enable = lib.mkEnableOption (lib.mdDoc "Validator node prometheus metrics");
address = mkOption {
type = types.str;
default = "127.0.0.1";
description = lib.mdDoc ''
Listen address of Validator node metrics service.
'';
};
port = mkOption {
type = types.port;
default = 5056;
description = lib.mdDoc ''
Port number of Validator node metrics service.
'';
};
};
extraArgs = mkOption {
type = types.str;
description = lib.mdDoc ''
Additional arguments passed to the lighthouse validator command.
'';
default = "";
example = "";
};
};
};
};
network = mkOption {
type = types.enum [ "mainnet" "prater" "goerli" "gnosis" "kiln" "ropsten" "sepolia" ];
default = "mainnet";
description = lib.mdDoc ''
The network to connect to. Mainnet is the default Ethereum network.
'';
};
extraArgs = mkOption {
type = types.str;
description = lib.mdDoc ''
Additional arguments passed to every lighthouse command.
'';
default = "";
example = "";
};
};
};
config = mkIf (cfg.beacon.enable || cfg.validator.enable) {
environment.systemPackages = [ pkgs.lighthouse ] ;
networking.firewall = mkIf cfg.beacon.enable {
allowedTCPPorts = mkIf cfg.beacon.openFirewall [ cfg.beacon.port ];
allowedUDPPorts = mkIf cfg.beacon.openFirewall [ cfg.beacon.port ];
};
systemd.services.lighthouse-beacon = mkIf cfg.beacon.enable {
description = "Lighthouse beacon node (connect to P2P nodes and verify blocks)";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
script = ''
# make sure the chain data directory is created on first run
mkdir -p ${cfg.beacon.dataDir}/${cfg.network}
${pkgs.lighthouse}/bin/lighthouse beacon_node \
--disable-upnp \
${lib.optionalString cfg.beacon.disableDepositContractSync "--disable-deposit-contract-sync"} \
--port ${toString cfg.beacon.port} \
--listen-address ${cfg.beacon.address} \
--network ${cfg.network} \
--datadir ${cfg.beacon.dataDir}/${cfg.network} \
--execution-endpoint http://${cfg.beacon.execution.address}:${toString cfg.beacon.execution.port} \
--execution-jwt ''${CREDENTIALS_DIRECTORY}/LIGHTHOUSE_JWT \
${lib.optionalString cfg.beacon.http.enable '' --http --http-address ${cfg.beacon.http.address} --http-port ${toString cfg.beacon.http.port}''} \
${lib.optionalString cfg.beacon.metrics.enable '' --metrics --metrics-address ${cfg.beacon.metrics.address} --metrics-port ${toString cfg.beacon.metrics.port}''} \
${cfg.extraArgs} ${cfg.beacon.extraArgs}
'';
serviceConfig = {
LoadCredential = "LIGHTHOUSE_JWT:${cfg.beacon.execution.jwtPath}";
DynamicUser = true;
Restart = "on-failure";
StateDirectory = "lighthouse-beacon";
NoNewPrivileges = true;
PrivateTmp = true;
ProtectHome = true;
ProtectClock = true;
ProtectProc = "noaccess";
ProcSubset = "pid";
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
ProtectHostname = true;
RestrictSUIDSGID = true;
RestrictRealtime = true;
RestrictNamespaces = true;
LockPersonality = true;
RemoveIPC = true;
SystemCallFilter = [ "@system-service" "~@privileged" ];
};
};
systemd.services.lighthouse-validator = mkIf cfg.validator.enable {
description = "Lighthouse validtor node (manages validators, using data obtained from the beacon node via a HTTP API)";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
script = ''
# make sure the chain data directory is created on first run
mkdir -p ${cfg.validator.dataDir}/${cfg.network}
${pkgs.lighthouse}/bin/lighthouse validator_client \
--network ${cfg.network} \
--beacon-nodes ${lib.concatStringsSep "," cfg.validator.beaconNodes} \
--datadir ${cfg.validator.dataDir}/${cfg.network} \
${optionalString cfg.validator.metrics.enable ''--metrics --metrics-address ${cfg.validator.metrics.address} --metrics-port ${toString cfg.validator.metrics.port}''} \
${cfg.extraArgs} ${cfg.validator.extraArgs}
'';
serviceConfig = {
Restart = "on-failure";
StateDirectory = "lighthouse-validator";
CapabilityBoundingSet = "";
DynamicUser = true;
NoNewPrivileges = true;
PrivateTmp = true;
ProtectHome = true;
ProtectClock = true;
ProtectProc = "noaccess";
ProcSubset = "pid";
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
ProtectHostname = true;
RestrictSUIDSGID = true;
RestrictRealtime = true;
RestrictNamespaces = true;
LockPersonality = true;
RemoveIPC = true;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
SystemCallFilter = [ "@system-service" "~@privileged" ];
};
};
};
}
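A minimal sketch of a combined beacon/validator setup using the options above; the JWT path is a placeholder, and validator keys are assumed to have been imported separately:

```nix
{
  services.lighthouse = {
    network = "mainnet";
    beacon = {
      enable = true;
      # Shared secret with the execution client; the path is a placeholder.
      execution.jwtPath = "/var/lib/secrets/jwt.hex";
      http.enable = true;
    };
    validator = {
      enable = true;
      beaconNodes = [ "http://localhost:5052" ];
    };
  };
}
```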

View file

@ -12,7 +12,7 @@ in
{ {
###### interface ###### interface
options.services.kubernetes.flannel = { options.services.kubernetes.flannel = {
enable = mkEnableOption (lib.mdDoc "enable flannel networking"); enable = mkEnableOption (lib.mdDoc "flannel networking");
}; };
###### implementation ###### implementation

View file

@ -177,8 +177,7 @@ in
hostname = mkOption { hostname = mkOption {
description = lib.mdDoc "Kubernetes kubelet hostname override."; description = lib.mdDoc "Kubernetes kubelet hostname override.";
default = config.networking.hostName; defaultText = literalExpression "config.networking.fqdnOrHostName";
defaultText = literalExpression "config.networking.hostName";
type = str; type = str;
}; };
@ -349,8 +348,8 @@ in
boot.kernelModules = ["br_netfilter" "overlay"]; boot.kernelModules = ["br_netfilter" "overlay"];
services.kubernetes.kubelet.hostname = with config.networking; services.kubernetes.kubelet.hostname =
mkDefault (hostName + optionalString (domain != null) ".${domain}"); mkDefault config.networking.fqdnOrHostName;
services.kubernetes.pki.certs = with top.lib; { services.kubernetes.pki.certs = with top.lib; {
kubelet = mkCert { kubelet = mkCert {

View file

@ -18,7 +18,7 @@ in
'') '')
]; ];
options.services.foldingathome = { options.services.foldingathome = {
enable = mkEnableOption (lib.mdDoc "Enable the Folding@home client"); enable = mkEnableOption (lib.mdDoc "Folding@home client");
package = mkOption { package = mkOption {
type = types.package; type = types.package;

View file

@ -170,6 +170,9 @@ with lib;
# If running in ephemeral mode, restart the service on-exit (i.e., successful de-registration of the runner) # If running in ephemeral mode, restart the service on-exit (i.e., successful de-registration of the runner)
# to trigger a fresh registration. # to trigger a fresh registration.
Restart = if cfg.ephemeral then "on-success" else "no"; Restart = if cfg.ephemeral then "on-success" else "no";
# If the runner exits with `ReturnCode.RetryableError = 2`, always restart the service:
# https://github.com/actions/runner/blob/40ed7f8/src/Runner.Common/Constants.cs#L146
RestartForceExitStatus = [ 2 ];
# Contains _diag # Contains _diag
LogsDirectory = [ systemdDir ]; LogsDirectory = [ systemdDir ];

View file

@ -34,13 +34,7 @@ in {
services.couchdb = { services.couchdb = {
enable = mkOption { enable = mkEnableOption (lib.mdDoc "CouchDB Server");
type = types.bool;
default = false;
description = lib.mdDoc ''
Whether to run CouchDB Server.
'';
};
package = mkOption { package = mkOption {
type = types.package; type = types.package;

View file

@ -15,13 +15,7 @@ in {
services.opentsdb = { services.opentsdb = {
enable = mkOption { enable = mkEnableOption (lib.mdDoc "OpenTSDB");
type = types.bool;
default = false;
description = lib.mdDoc ''
Whether to run OpenTSDB.
'';
};
package = mkOption { package = mkOption {
type = types.package; type = types.package;
@ -49,7 +43,7 @@ in {
}; };
port = mkOption { port = mkOption {
type = types.int; type = types.port;
default = 4242; default = 4242;
description = lib.mdDoc '' description = lib.mdDoc ''
Which port OpenTSDB listens on. Which port OpenTSDB listens on.

View file

@ -85,7 +85,7 @@ in {
}; };
port = mkOption { port = mkOption {
type = types.int; type = types.port;
default = 8080; default = 8080;
description = lib.mdDoc '' description = lib.mdDoc ''
This tells pgmanage what port to listen on for browser requests. This tells pgmanage what port to listen on for browser requests.

View file

@ -105,6 +105,13 @@ in {
''; '';
}; };
extraParams = mkOption {
type = with types; listOf str;
default = [];
description = lib.mdDoc "Extra parameters to append to redis-server invocation";
example = [ "--sentinel" ];
};
bind = mkOption { bind = mkOption {
type = with types; nullOr str; type = with types; nullOr str;
default = "127.0.0.1"; default = "127.0.0.1";
@ -340,16 +347,24 @@ in {
after = [ "network.target" ]; after = [ "network.target" ];
serviceConfig = { serviceConfig = {
ExecStart = "${cfg.package}/bin/redis-server /run/${redisName name}/redis.conf"; ExecStart = "${cfg.package}/bin/redis-server /var/lib/${redisName name}/redis.conf ${escapeShellArgs conf.extraParams}";
ExecStartPre = [("+"+pkgs.writeShellScript "${redisName name}-credentials" ('' ExecStartPre = "+"+pkgs.writeShellScript "${redisName name}-prep-conf" (let
install -o '${conf.user}' -m 600 ${redisConfig conf.settings} /run/${redisName name}/redis.conf redisConfVar = "/var/lib/${redisName name}/redis.conf";
'' + optionalString (conf.requirePassFile != null) '' redisConfRun = "/run/${redisName name}/nixos.conf";
{ redisConfStore = redisConfig conf.settings;
printf requirePass' ' in ''
cat ${escapeShellArg conf.requirePassFile} touch "${redisConfVar}" "${redisConfRun}"
} >>/run/${redisName name}/redis.conf chown '${conf.user}' "${redisConfVar}" "${redisConfRun}"
'') chmod 0600 "${redisConfVar}" "${redisConfRun}"
)]; if [ ! -s ${redisConfVar} ]; then
echo 'include "${redisConfRun}"' > "${redisConfVar}"
fi
echo 'include "${redisConfStore}"' > "${redisConfRun}"
${optionalString (conf.requirePassFile != null) ''
{ echo -n "requirepass "
  cat ${escapeShellArg conf.requirePassFile}; } >> "${redisConfRun}"
''}
'');
Type = "notify"; Type = "notify";
# User and group # User and group
User = conf.user; User = conf.user;
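A sketch of the new `extraParams` option, assuming the per-instance layout (`services.redis.servers.<name>`) this module uses elsewhere; the instance name and port are placeholders:

```nix
{
  services.redis.servers.sentinel = {
    enable = true;
    port = 26379;
    extraParams = [ "--sentinel" ];
  };
}
```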

View file

@ -0,0 +1,79 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.surrealdb;
in {
options = {
services.surrealdb = {
enable = mkEnableOption (lib.mdDoc "A scalable, distributed, collaborative, document-graph database, for the realtime web ");
dbPath = mkOption {
type = types.str;
description = lib.mdDoc ''
The path that surrealdb will write data to. Use null for in-memory.
Can be one of "memory", "file://:path", "tikv://:addr".
'';
default = "file:///var/lib/surrealdb/";
example = "memory";
};
host = mkOption {
type = types.str;
description = lib.mdDoc ''
The host that surrealdb will connect to.
'';
default = "127.0.0.1";
example = "127.0.0.1";
};
port = mkOption {
type = types.port;
description = lib.mdDoc ''
The port that surrealdb will listen on.
'';
default = 8000;
example = 8000;
};
};
};
config = mkIf cfg.enable {
# Used to connect to the running service
environment.systemPackages = [ pkgs.surrealdb ] ;
systemd.services.surrealdb = {
description = "A scalable, distributed, collaborative, document-graph database, for the realtime web ";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = {
ExecStart = "${pkgs.surrealdb}/bin/surreal start --bind ${cfg.host}:${toString cfg.port} ${optionalString (cfg.dbPath != null) "-- ${cfg.dbPath}"}";
DynamicUser = true;
Restart = "on-failure";
StateDirectory = "surrealdb";
CapabilityBoundingSet = "";
NoNewPrivileges = true;
PrivateTmp = true;
ProtectHome = true;
ProtectClock = true;
ProtectProc = "noaccess";
ProcSubset = "pid";
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
ProtectHostname = true;
RestrictSUIDSGID = true;
RestrictRealtime = true;
RestrictNamespaces = true;
LockPersonality = true;
RemoveIPC = true;
SystemCallFilter = [ "@system-service" "~@privileged" ];
};
};
};
}
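A minimal sketch of enabling the new service with an in-memory database (hypothetical configuration, not part of this diff):

```nix
{
  services.surrealdb = {
    enable = true;
    dbPath = "memory"; # or keep the default file:///var/lib/surrealdb/
    host = "127.0.0.1";
    port = 8000;
  };
}
```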

View file

@ -200,6 +200,7 @@ in
}; };
systemd.services.geoclue = { systemd.services.geoclue = {
after = lib.optionals cfg.enableWifi [ "network-online.target" ];
# restart geoclue service when the configuration changes # restart geoclue service when the configuration changes
restartTriggers = [ restartTriggers = [
config.environment.etc."geoclue/geoclue.conf".source config.environment.etc."geoclue/geoclue.conf".source

View file

@ -51,7 +51,10 @@ with lib;
}) })
(mkIf (!config.services.gnome.at-spi2-core.enable) { (mkIf (!config.services.gnome.at-spi2-core.enable) {
environment.variables.NO_AT_BRIDGE = "1"; environment.variables = {
NO_AT_BRIDGE = "1";
GTK_A11Y = "none";
};
}) })
]; ];
} }

View file

@ -70,7 +70,7 @@ in
}; };
port = mkOption { port = mkOption {
type = types.int; type = types.port;
default = 8303; default = 8303;
description = lib.mdDoc '' description = lib.mdDoc ''
Port the server will listen on. Port the server will listen on.

View file

@ -28,8 +28,8 @@ let
}; };
env = { env = {
SANE_CONFIG_DIR = config.hardware.sane.configDir; SANE_CONFIG_DIR = "/etc/sane.d";
LD_LIBRARY_PATH = [ "${saneConfig}/lib/sane" ]; LD_LIBRARY_PATH = [ "/etc/sane-libs" ];
}; };
backends = [ pkg netConf ] ++ optional config.services.saned.enable sanedConf ++ config.hardware.sane.extraBackends; backends = [ pkg netConf ] ++ optional config.services.saned.enable sanedConf ++ config.hardware.sane.extraBackends;
@ -158,6 +158,8 @@ in
environment.systemPackages = backends; environment.systemPackages = backends;
environment.sessionVariables = env; environment.sessionVariables = env;
environment.etc."sane.d".source = config.hardware.sane.configDir;
environment.etc."sane-libs".source = "${saneConfig}/lib/sane";
services.udev.packages = backends; services.udev.packages = backends;
users.groups.scanner.gid = config.ids.gids.scanner; users.groups.scanner.gid = config.ids.gids.scanner;

View file

@ -46,6 +46,11 @@ let
SUBSYSTEM=="input", KERNEL=="mice", TAG+="systemd" SUBSYSTEM=="input", KERNEL=="mice", TAG+="systemd"
''; '';
nixosInitrdRules = ''
# Mark dm devices as db_persist so that they are kept active after switching root
SUBSYSTEM=="block", KERNEL=="dm-[0-9]*", ACTION=="add|change", OPTIONS+="db_persist"
'';
# Perform substitutions in all udev rules files. # Perform substitutions in all udev rules files.
udevRulesFor = { name, udevPackages, udevPath, udev, systemd, binPackages, initrdBin ? null }: pkgs.runCommand name udevRulesFor = { name, udevPackages, udevPath, udev, systemd, binPackages, initrdBin ? null }: pkgs.runCommand name
{ preferLocalBuild = true; { preferLocalBuild = true;
@ -364,8 +369,10 @@ in
EOF EOF
''; '';
boot.initrd.services.udev.rules = nixosInitrdRules;
boot.initrd.systemd.additionalUpstreamUnits = [ boot.initrd.systemd.additionalUpstreamUnits = [
# TODO: "initrd-udevadm-cleanup-db.service" is commented out because of https://github.com/systemd/systemd/issues/12953 "initrd-udevadm-cleanup-db.service"
"systemd-udevd-control.socket" "systemd-udevd-control.socket"
"systemd-udevd-kernel.socket" "systemd-udevd-kernel.socket"
"systemd-udevd.service" "systemd-udevd.service"

View file

@ -62,7 +62,12 @@ in
environment.systemPackages = [ pkgs.udisks2 ]; environment.systemPackages = [ pkgs.udisks2 ];
environment.etc = mapAttrs' (name: value: nameValuePair "udisks2/${name}" { source = value; } ) configFiles; environment.etc = (mapAttrs' (name: value: nameValuePair "udisks2/${name}" { source = value; } ) configFiles) // {
# We need to make sure /etc/libblockdev/conf.d is populated to avoid
# warnings
"libblockdev/conf.d/00-default.cfg".source = "${pkgs.libblockdev}/etc/libblockdev/conf.d/00-default.cfg";
"libblockdev/conf.d/10-lvm-dbus.cfg".source = "${pkgs.libblockdev}/etc/libblockdev/conf.d/10-lvm-dbus.cfg";
};
security.polkit.enable = true; security.polkit.enable = true;

View file

@ -18,7 +18,7 @@ in
]; ];
options.services.zigbee2mqtt = { options.services.zigbee2mqtt = {
enable = mkEnableOption (lib.mdDoc "enable zigbee2mqtt service"); enable = mkEnableOption (lib.mdDoc "zigbee2mqtt service");
package = mkOption { package = mkOption {
description = lib.mdDoc "Zigbee2mqtt package to use"; description = lib.mdDoc "Zigbee2mqtt package to use";

View file

@ -12,11 +12,7 @@ in {
options = { options = {
services.fluentd = { services.fluentd = {
enable = mkOption { enable = mkEnableOption (lib.mdDoc "fluentd");
type = types.bool;
default = false;
description = lib.mdDoc "Whether to enable fluentd.";
};
config = mkOption { config = mkOption {
type = types.lines; type = types.lines;

View file

@ -109,13 +109,7 @@ in
{ {
options = { options = {
services.logcheck = { services.logcheck = {
enable = mkOption { enable = mkEnableOption (lib.mdDoc "logcheck cron job");
default = false;
type = types.bool;
description = lib.mdDoc ''
Enable the logcheck cron job.
'';
};
user = mkOption { user = mkOption {
default = "logcheck"; default = "logcheck";

Some files were not shown because too many files have changed in this diff