Project import generated by Copybara.

GitOrigin-RevId: 9b19f5e77dd906cb52dade0b7bd280339d2a1f3d
This commit is contained in:
Default email 2024-01-13 09:15:51 +01:00
parent 240c3a72f2
commit e7ec2969af
3790 changed files with 109247 additions and 51718 deletions

View file

@ -66,6 +66,10 @@
/doc/build-helpers/images/makediskimage.section.md @raitobezarius
/nixos/lib/make-disk-image.nix @raitobezarius
# Nix, the package manager
pkgs/tools/package-management/nix/ @raitobezarius
nixos/modules/installer/tools/nix-fallback-paths.nix @raitobezarius
# Nixpkgs documentation
/maintainers/scripts/db-to-md.sh @jtojnar @ryantm
/maintainers/scripts/doc @jtojnar @ryantm
@ -167,6 +171,8 @@
# Browsers
/pkgs/applications/networking/browsers/firefox @mweinelt
/pkgs/applications/networking/browsers/chromium @emilylange
/nixos/tests/chromium.nix @emilylange
# Certificate Authorities
pkgs/data/misc/cacert/ @ajs124 @lukegb @mweinelt
@ -336,3 +342,8 @@ nixos/tests/zfs.nix @raitobezarius
# Linux Kernel
pkgs/os-specific/linux/kernel/manual-config.nix @amjoseph-nixpkgs
# Buildbot
nixos/modules/services/continuous-integration/buildbot @Mic92 @zowoq
nixos/tests/buildbot.nix @Mic92 @zowoq
pkgs/development/tools/continuous-integration/buildbot @Mic92 @zowoq

View file

@ -106,6 +106,19 @@ The following are supported:
- [`note`](https://tdg.docbook.org/tdg/5.0/note.html)
- [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
- [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)
- [`example`](https://tdg.docbook.org/tdg/5.0/example.html)
Example admonitions require a title to work.
If you don't provide one, the manual won't be built.
```markdown
::: {.example #ex-showing-an-example}
# Title for this example
Text for the example.
:::
```
#### [Definition lists](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md)
@ -139,3 +152,54 @@ watermelon
Closes #216321.
- If the commit contains more than just documentation changes, follow the commit message format relevant for the rest of the changes.
## Documentation conventions
In an effort to keep the Nixpkgs manual in a consistent style, please follow the conventions below, unless they prevent you from properly documenting something.
In that case, please open an issue about the particular documentation convention and tag it with a "needs: documentation" label.
- Put each sentence in its own line.
This makes reviewing documentation much easier, since GitHub's review system is based on lines.
- Use the admonitions syntax for any callouts and examples (see [section above](#admonitions)).
- If you provide an example involving Nix code, make your example into a fully-working package (something that can be passed to `pkgs.callPackage`).
This will help others quickly test that the example works, and will also make it easier if we start automatically testing all example code to make sure it works.
For example, instead of providing something like:
```
pkgs.dockerTools.buildLayeredImage {
name = "hello";
contents = [ pkgs.hello ];
}
```
Provide something like:
```
{ dockerTools, hello }:
dockerTools.buildLayeredImage {
name = "hello";
contents = [ hello ];
}
```
- Use [definition lists](#definition-lists) to document function arguments, and the attributes of such arguments. For example:
```markdown
# pkgs.coolFunction
Description of what `coolFunction` does.
`coolFunction` expects a single argument which should be an attribute set, with the following possible attributes:
`name`
: The name of the resulting image.
`tag` _optional_
: Tag of the generated image.
_Default value:_ the output path's hash.
```

View file

@ -2,35 +2,38 @@
`pkgs.checkpointBuildTools` provides a way to build derivations incrementally. It consists of two functions to make checkpoint builds using Nix possible.
For hermeticity, Nix derivations do not allow any state to carry over between builds, making a transparent incremental build within a derivation impossible.
For hermeticity, Nix derivations do not allow any state to be carried over between builds, making a transparent incremental build within a derivation impossible.
However, we can tell Nix explicitly what the previous build state was, by representing that previous state as a derivation output. This allows the passed build state to be used for an incremental build.
To change a normal derivation to a checkpoint-based build, these steps must be taken:
- apply `prepareCheckpointBuild` on the desired derivation
e.g.:
- apply `prepareCheckpointBuild` on the desired derivation, e.g.
```nix
checkpointArtifacts = (pkgs.checkpointBuildTools.prepareCheckpointBuild pkgs.virtualbox);
```
- change something you want in the sources of the package. (e.g. using a source override)
- change something you want in the sources of the package, e.g. use a source override:
```nix
changedVBox = pkgs.virtualbox.overrideAttrs (old: {
src = path/to/vbox/sources;
}
});
```
- use `mkCheckpointedBuild changedVBox buildOutput`
- use `mkCheckpointBuild changedVBox checkpointArtifacts`
- enjoy shorter build times
## Example {#sec-checkpoint-build-example}
```nix
{ pkgs ? import <nixpkgs> {} }: with (pkgs) checkpointBuildTools;
{ pkgs ? import <nixpkgs> {} }:
let
helloCheckpoint = checkpointBuildTools.prepareCheckpointBuild pkgs.hello;
inherit (pkgs.checkpointBuildTools)
prepareCheckpointBuild
mkCheckpointBuild
;
helloCheckpoint = prepareCheckpointBuild pkgs.hello;
changedHello = pkgs.hello.overrideAttrs (_: {
doCheck = false;
patchPhase = ''
sed -i 's/Hello, world!/Hello, Nix!/g' src/hello.c
'';
});
in checkpointBuildTools.mkCheckpointBuild changedHello helloCheckpoint
in mkCheckpointBuild changedHello helloCheckpoint
```

View file

@ -4,22 +4,33 @@
The function `buildDartApplication` builds Dart applications managed with pub.
It fetches its Dart dependencies automatically through `fetchDartDeps`, and (through a series of hooks) builds and installs the executables specified in the pubspec file. The hooks can be used in other derivations, if needed. The phases can also be overridden to do something different from installing binaries.
It fetches its Dart dependencies automatically through `pub2nix`, and (through a series of hooks) builds and installs the executables specified in the pubspec file. The hooks can be used in other derivations, if needed. The phases can also be overridden to do something different from installing binaries.
If you are packaging a Flutter desktop application, use [`buildFlutterApplication`](#ssec-dart-flutter) instead.
`vendorHash` is the hash of the output of the dependency fetcher derivation. To obtain it, set it to `lib.fakeHash` (or omit it) and run the build ([more details here](#sec-source-hashes)).
`pubspecLock` is the parsed pubspec.lock file. pub2nix uses this to download required packages.
This can be converted to JSON from YAML with something like `yq . pubspec.lock`, and then read by Nix.
If the upstream source is missing a `pubspec.lock` file, you'll have to vendor one and specify it using `pubspecLockFile`. If one is needed, it will be generated for you and printed when you attempt to build the derivation.
Alternatively, `autoPubspecLock` can be set to the path of a regular `pubspec.lock` file. This relies on import-from-derivation, and is not permitted in Nixpkgs, but can be useful at other times.
The `depsListFile` must always be provided when packaging in Nixpkgs. It will be generated and printed if you attempt to build the derivation without one. Alternatively, `autoDepsList` may be set to `true` outside of Nixpkgs, as it relies on import-from-derivation.
::: {.warning}
When using `autoPubspecLock` with a local source directory, make sure to use a
concatenation operator (e.g. `autoPubspecLock = src + "/pubspec.lock";`), and
not string interpolation.
String interpolation will copy your entire source directory to the Nix store and
use its store path, meaning that unrelated changes to your source tree will
cause the generated `pubspec.lock` derivation to rebuild!
:::
If the package has Git package dependencies, the hashes must be provided in the `gitHashes` set. If a hash is missing, an error message prompting you to add it will be shown.
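For instance, a sketch with a placeholder package name (using `lib.fakeHash`, as described above, until the real hash is known):
```nix
{
  # ...
  gitHashes = {
    # Placeholder: replace with the dependency's package name and its real hash
    some_git_package = lib.fakeHash;
  };
}
```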
The `dart` commands that are run can be overridden through `pubGetScript` and `dartCompileCommand`; you can also add flags using `dartCompileFlags` or `dartJitFlags`.
Dart supports multiple [output types](https://dart.dev/tools/dart-compile#types-of-output); you can choose between them using `dartOutputType` (defaults to `exe`). If you want to override the binary paths or the source paths they come from, you can use `dartEntryPoints`. Outputs that require a runtime will automatically be wrapped with the relevant runtime (`dartaotruntime` for `aot-snapshot`, `dart run` for `jit-snapshot` and `kernel`, `node` for `js`); this can be overridden through `dartRuntimeCommand`.
```nix
{ buildDartApplication, fetchFromGitHub }:
{ lib, buildDartApplication, fetchFromGitHub }:
buildDartApplication rec {
pname = "dart-sass";
@ -32,12 +43,53 @@ buildDartApplication rec {
hash = "sha256-U6enz8yJcc4Wf8m54eYIAnVg/jsGi247Wy8lp1r1wg4=";
};
pubspecLockFile = ./pubspec.lock;
depsListFile = ./deps.json;
vendorHash = "sha256-Atm7zfnDambN/BmmUf4BG0yUz/y6xWzf0reDw3Ad41s=";
pubspecLock = lib.importJSON ./pubspec.lock.json;
}
```
### Patching dependencies {#ssec-dart-applications-patching-dependencies}
Some Dart packages require patches or build environment changes. Package derivations can be customised with the `customSourceBuilders` argument.
A collection of such customisations can be found in Nixpkgs, in the `development/compilers/dart/package-source-builders` directory.
This allows fixes for packages to be shared between all applications that use them. It is strongly recommended to add to this collection instead of including fixes in your application derivation itself.
### Running executables from dev_dependencies {#ssec-dart-applications-build-tools}
Many Dart applications require executables from the `dev_dependencies` section in `pubspec.yaml` to be run before building them.
This can be done in `preBuild`, in one of two ways:
1. Packaging the tool with `buildDartApplication`, adding it to Nixpkgs, and running it like any other application
2. Running the tool from the package cache
Of these methods, the first is recommended when using a tool that does not need
to be of a specific version.
For the second method, the `packageRun` function from the `dartConfigHook` can be used.
This is an alternative to `dart run` that does not rely on Pub.
For example, for `build_runner`:
```bash
packageRun build_runner build
```
Do _not_ use `dart run <package_name>`, as this will attempt to download dependencies with Pub.
### Usage with nix-shell {#ssec-dart-applications-nix-shell}
As `buildDartApplication` provides dependencies instead of `pub get`, Dart needs to be explicitly told where to find them.
Run the following commands in the source directory to configure Dart appropriately.
Do not use `pub` after doing so; it will download the dependencies itself and overwrite these changes.
```bash
cp --no-preserve=all "$pubspecLockFilePath" pubspec.lock
mkdir -p .dart_tool && cp --no-preserve=all "$packageConfig" .dart_tool/package_config.json
```
## Flutter applications {#ssec-dart-flutter}
The function `buildFlutterApplication` builds Flutter applications.
@ -59,8 +111,10 @@ flutter.buildFlutterApplication {
fetchSubmodules = true;
};
pubspecLockFile = ./pubspec.lock;
depsListFile = ./deps.json;
vendorHash = "sha256-cdMO+tr6kYiN5xKXa+uTMAcFf2C75F3wVPrn21G4QPQ=";
pubspecLock = lib.importJSON ./pubspec.lock.json;
}
```
### Usage with nix-shell {#ssec-dart-flutter-nix-shell}
See the [Dart documentation](#ssec-dart-applications-nix-shell) for nix-shell instructions.

View file

@ -2,13 +2,13 @@
## Using Ruby {#using-ruby}
Several versions of Ruby interpreters are available on Nix, as well as over 250 gems and many applications written in Ruby. The attribute `ruby` refers to the default Ruby interpreter, which is currently MRI 2.6. It's also possible to refer to specific versions, e.g. `ruby_2_y`, `jruby`, or `mruby`.
Several versions of Ruby interpreters are available on Nix, as well as over 250 gems and many applications written in Ruby. The attribute `ruby` refers to the default Ruby interpreter, which is currently MRI 3.1. It's also possible to refer to specific versions, e.g. `ruby_3_y`, `jruby`, or `mruby`.
In the Nixpkgs tree, Ruby packages can be found throughout, depending on what they do, and are called from the main package set. Ruby gems, however, are separate sets, and there's one default set for each interpreter (currently MRI only).
There are two main approaches for using Ruby with gems. One is to use a specifically locked `Gemfile` for an application that has very strict dependencies. The other is to depend on the common gems, which we'll explain further down, and rely on them being updated regularly.
The interpreters have common attributes, namely `gems`, and `withPackages`. So you can refer to `ruby.gems.nokogiri`, or `ruby_2_7.gems.nokogiri` to get the Nokogiri gem already compiled and ready to use.
The interpreters have common attributes, namely `gems`, and `withPackages`. So you can refer to `ruby.gems.nokogiri`, or `ruby_3_2.gems.nokogiri` to get the Nokogiri gem already compiled and ready to use.
Since not all gems have executables like `nokogiri`, it's usually more convenient to use the `withPackages` function like this: `ruby.withPackages (p: with p; [ nokogiri ])`. This will also make sure that the Ruby in your environment will be able to find the gem and it can be used in your Ruby code (for example via `ruby` or `irb` executables) via `require "nokogiri"` as usual.
@ -33,7 +33,7 @@ Again, it's possible to launch the interpreter from the shell. The Ruby interpre
#### Load Ruby environment from `.nix` expression {#load-ruby-environment-from-.nix-expression}
As explained [in the `nix-shell` section](https://nixos.org/manual/nix/stable/command-ref/nix-shell) of the Nix manual, `nix-shell` can also load an expression from a `.nix` file.
Say we want to have Ruby 2.6, `nokogori`, and `pry`. Consider a `shell.nix` file with:
Say we want to have Ruby, `nokogiri`, and `pry`. Consider a `shell.nix` file with:
```nix
with import <nixpkgs> {};
@ -114,7 +114,7 @@ With this file in your directory, you can run `nix-shell` to build and use the g
The `bundlerEnv` is a wrapper over all the gems in your gemset. This means that all the `/lib` and `/bin` directories will be available, and the executables of all gems (even of indirect dependencies) will end up in your `$PATH`. The `wrappedRuby` provides you with all executables that come with Ruby itself, but wrapped so they can easily find the gems in your gemset.
One common issue that you might have is that you have Ruby 2.6, but also `bundler` in your gemset. That leads to a conflict for `/bin/bundle` and `/bin/bundler`. You can resolve this by wrapping either your Ruby or your gems in a `lowPrio` call. So in order to give the `bundler` from your gemset priority, it would be used like this:
One common issue that you might have is that you have Ruby, but also `bundler` in your gemset. That leads to a conflict for `/bin/bundle` and `/bin/bundler`. You can resolve this by wrapping either your Ruby or your gems in a `lowPrio` call. So in order to give the `bundler` from your gemset priority, it would be used like this:
```nix
# ...

View file

@ -208,3 +208,23 @@ EOF
cp test.pdf $out
''
```
## LuaLaTeX font cache {#sec-language-texlive-lualatex-font-cache}
The font cache for LuaLaTeX is written to `$HOME`.
Therefore, it is necessary to set `$HOME` to a writable path, e.g. [before using LuaLaTeX in nix derivations](https://github.com/NixOS/nixpkgs/issues/180639):
```nix
runCommandNoCC "lualatex-hello-world" {
buildInputs = [ texliveFull ];
} ''
mkdir $out
echo '\documentclass{article} \begin{document} Hello world \end{document}' > main.tex
env HOME=$(mktemp -d) lualatex -interaction=nonstopmode -output-format=pdf -output-directory=$out ./main.tex
''
```
Additionally, [the cache of a user can diverge from the nix store](https://github.com/NixOS/nixpkgs/issues/278718).
To resolve font issues that might follow, the cache can be removed by the user:
```ShellSession
luaotfload-tool --cache=erase --flush-lookups --force
```

View file

@ -203,7 +203,11 @@ rec {
in if missingArgs == {}
then makeOverridable f allArgs
else throw "lib.customisation.callPackageWith: ${error}";
# This needs to be an abort so it can't be caught with `builtins.tryEval`,
# which is used by nix-env and ofborg to filter out packages that don't evaluate.
# This way we're forced to fix such errors in Nixpkgs,
# which is especially relevant with allowAliases = false
else abort "lib.customisation.callPackageWith: ${error}";
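# For illustration: in `nix repl`,
#   builtins.tryEval (throw "example")  =>  { success = false; value = false; }
# whereas `builtins.tryEval (abort "example")` stops evaluation entirely,
# so such an error cannot be silently filtered out.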
/* Like callPackage, but for a function that returns an attribute

View file

@ -103,42 +103,155 @@ rec {
else converge f x';
/*
Modify the contents of an explicitly recursive attribute set in a way that
honors `self`-references. This is accomplished with a function
Extend a function using an overlay.
Overlays allow modifying and extending fixed-point functions, specifically ones returning attribute sets.
A fixed-point function is a function which is intended to be evaluated by passing the result of itself as the argument.
This is possible due to Nix's lazy evaluation.
A fixed-point function returning an attribute set has the form
```nix
g = self: super: { foo = super.foo + " + "; }
final: { /* attributes */ }
```
that has access to the unmodified input (`super`) as well as the final
non-recursive representation of the attribute set (`self`). `extends`
differs from the native `//` operator insofar as that it's applied *before*
references to `self` are resolved:
where `final` refers to the lazily evaluated attribute set returned by the fixed-point function.
```
nix-repl> fix (extends g f)
{ bar = "bar"; foo = "foo + "; foobar = "foo + bar"; }
An overlay to such a fixed-point function has the form
```nix
final: prev: { /* attributes */ }
```
The name of the function is inspired by object-oriented inheritance, i.e.
think of it as an infix operator `g extends f` that mimics the syntax from
Java. It may seem counter-intuitive to have the "base class" as the second
argument, but it's nice this way if several uses of `extends` are cascaded.
where `prev` refers to the result of applying the original function to `final`, and `final` is the result of the composition of the overlay and the original function.
To get a better understanding how `extends` turns a function with a fix
point (the package set we start with) into a new function with a different fix
point (the desired packages set) lets just see, how `extends g f`
unfolds with `g` and `f` defined above:
Applying an overlay is done with `extends`:
```nix
let
f = final: { /* attributes */ };
overlay = final: prev: { /* attributes */ };
in extends overlay f
```
extends g f = self: let super = f self; in super // g self super;
= self: let super = { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; }; in super // g self super
= self: { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; } // g self { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; }
= self: { foo = "foo"; bar = "bar"; foobar = self.foo + self.bar; } // { foo = "foo" + " + "; }
= self: { foo = "foo + "; bar = "bar"; foobar = self.foo + self.bar; }
To get the value of `final`, use `lib.fix`:
```nix
let
f = final: { /* attributes */ };
overlay = final: prev: { /* attributes */ };
g = extends overlay f;
in fix g
```
:::{.example}
# Extend a fixed-point function with an overlay
Define a fixed-point function `f` that expects its own output as the argument `final`:
```nix-repl
f = final: {
# Constant value a
a = 1;
# b depends on the final value of a, available as final.a
b = final.a + 2;
}
```
Evaluate this using [`lib.fix`](#function-library-lib.fixedPoints.fix) to get the final result:
```nix-repl
fix f
=> { a = 1; b = 3; }
```
An overlay represents a modification or extension of such a fixed-point function.
Here's an example of an overlay:
```nix-repl
overlay = final: prev: {
# Modify the previous value of a, available as prev.a
a = prev.a + 10;
# Extend the attribute set with c, letting it depend on the final values of a and b
c = final.a + final.b;
}
```
Use `extends overlay f` to apply the overlay to the fixed-point function `f`.
This produces a new fixed-point function `g` with the combined behavior of `f` and `overlay`:
```nix-repl
g = extends overlay f
```
The result is a function, so we can't print it directly, but it's the same as:
```nix-repl
g' = final: {
# The constant from f, but changed with the overlay
a = 1 + 10;
# Unchanged from f
b = final.a + 2;
# Extended in the overlay
c = final.a + final.b;
}
```
Evaluate this using [`lib.fix`](#function-library-lib.fixedPoints.fix) again to get the final result:
```nix-repl
fix g
=> { a = 11; b = 13; c = 24; }
```
:::
Type:
extends :: (Attrs -> Attrs -> Attrs) # The overlay to apply to the fixed-point function
-> (Attrs -> Attrs) # A fixed-point function
-> (Attrs -> Attrs) # The resulting fixed-point function
Example:
f = final: { a = 1; b = final.a + 2; }
fix f
=> { a = 1; b = 3; }
fix (extends (final: prev: { a = prev.a + 10; }) f)
=> { a = 11; b = 13; }
fix (extends (final: prev: { b = final.a + 5; }) f)
=> { a = 1; b = 6; }
fix (extends (final: prev: { c = final.a + final.b; }) f)
=> { a = 1; b = 3; c = 4; }
:::{.note}
The argument to the given fixed-point function after applying an overlay will *not* refer to its own return value, but rather to the value after evaluating the overlay function.
The given fixed-point function is called with a different argument than it would be if it was evaluated with `lib.fix` directly: the new argument is the final value of the composition produced by `extends`, which already includes the overlay's changes.
:::
*/
extends = f: rattrs: self: let super = rattrs self; in super // f self super;
extends =
# The overlay to apply to the fixed-point function
overlay:
# The fixed-point function
f:
# Wrap with parentheses to prevent nixdoc from rendering the `final` argument in the documentation
# The result should be thought of as a function; the argument of that function is not an argument to `extends` itself
(
final:
let
prev = f final;
in
prev // overlay final prev
);
/*
Compose two extending functions of the type expected by 'extends'

View file

@ -98,6 +98,9 @@ rec {
{ cpu = { family = "riscv"; }; }
{ cpu = { family = "x86"; }; }
];
isElf = { kernel.execFormat = execFormats.elf; };
isMacho = { kernel.execFormat = execFormats.macho; };
};
# given two patterns, return a pattern which is their logical AND.

View file

@ -534,7 +534,7 @@
name = "James Alexander Feldman-Crough";
};
afontain = {
email = "antoine.fontaine@epfl.ch";
email = "afontain@posteo.net";
github = "necessarily-equal";
githubId = 59283660;
name = "Antoine Fontaine";
@ -563,6 +563,12 @@
githubId = 732652;
name = "Andreas Herrmann";
};
ahirner = {
email = "a.hirner+nixpkgs@gmail.com";
github = "ahirner";
githubId = 6055037;
name = "Alexander Hirner";
};
ahoneybun = {
email = "aaron@system76.com";
github = "ahoneybun";
@ -911,12 +917,15 @@
name = "Alma Cemerlic";
};
Alper-Celik = {
email = "dev.alpercelik@gmail.com";
email = "alper@alper-celik.dev";
name = "Alper Çelik";
github = "Alper-Celik";
githubId = 110625473;
keys = [{
fingerprint = "6B69 19DD CEE0 FAF3 5C9F 2984 FA90 C0AB 738A B873";
}
{
fingerprint = "DF68 C500 4024 23CC F9C5 E6CA 3D17 C832 4696 FE70";
}];
};
alternateved = {
@ -2502,6 +2511,12 @@
githubId = 5700358;
name = "Thomas Blank";
};
blinry = {
name = "blinry";
email = "mail@blinry.org";
github = "blinry";
githubId = 81277;
};
blitz = {
email = "js@alien8.de";
matrix = "@js:ukvly.org";
@ -3858,6 +3873,12 @@
githubId = 6821729;
github = "criyle";
};
crschnick = {
email = "crschnick@xpipe.io";
name = "Christopher Schnick";
github = "crschnick";
githubId = 72509152;
};
CRTified = {
email = "carl.schneider+nixos@rub.de";
matrix = "@schnecfk:ruhr-uni-bochum.de";
@ -6706,7 +6727,7 @@
};
getpsyched = {
name = "Priyanshu Tripathi";
email = "priyanshutr@proton.me";
email = "priyanshu@getpsyched.dev";
matrix = "@getpsyched:matrix.org";
github = "getpsyched";
githubId = 43472218;
@ -7525,6 +7546,16 @@
githubId = 362833;
name = "Hongchang Wu";
};
honnip = {
name = "Jung seungwoo";
email = "me@honnip.page";
matrix = "@honnip:matrix.org";
github = "honnip";
githubId = 108175486;
keys = [{
fingerprint = "E4DD 51F7 FA3F DCF1 BAF6 A72C 576E 43EF 8482 E415";
}];
};
hoppla20 = {
email = "privat@vincentcui.de";
github = "hoppla20";
@ -7629,6 +7660,12 @@
githubId = 51334444;
name = "Akshat Agarwal";
};
hummeltech = {
email = "hummeltech2024@gmail.com";
github = "hummeltech";
githubId = 6109326;
name = "David Hummel";
};
huyngo = {
email = "huyngo@disroot.org";
github = "Huy-Ngo";
@ -9292,6 +9329,12 @@
githubId = 5124422;
name = "Julien Urraca";
};
justanotherariel = {
email = "ariel@ebersberger.io";
github = "justanotherariel";
githubId = 31776703;
name = "Ariel Ebersberger";
};
justinas = {
email = "justinas@justinas.org";
github = "justinas";
@ -11394,6 +11437,12 @@
githubId = 458783;
name = "Martin Gammelsæter";
};
martinjlowm = {
email = "martin@martinjlowm.dk";
github = "martinjlowm";
githubId = 110860;
name = "Martin Jesper Low Madsen";
};
martinramm = {
email = "martin-ramm@gmx.de";
github = "MartinRamm";
@ -11868,6 +11917,12 @@
github = "Mephistophiles";
githubId = 4850908;
};
mevatron = {
email = "mevatron@gmail.com";
name = "mevatron";
github = "mevatron";
githubId = 714585;
};
mfossen = {
email = "msfossen@gmail.com";
github = "mfossen";
@ -13008,6 +13063,12 @@
githubId = 77314501;
name = "Maurice Zhou";
};
Nebucatnetzer = {
email = "andreas+nixpkgs@zweili.ch";
github = "Nebucatnetzer";
githubId = 2287221;
name = "Andreas Zweili";
};
Necior = {
email = "adrian@sadlocha.eu";
github = "Necior";
@ -14614,6 +14675,12 @@
githubId = 610615;
name = "Chih-Mao Chen";
};
pkosel = {
name = "pkosel";
email = "philipp.kosel@gmail.com";
github = "pkosel";
githubId = 170943;
};
pks = {
email = "ps@pks.im";
github = "pks-t";
@ -14948,6 +15015,12 @@
githubId = 18549627;
name = "Proglodyte";
};
proglottis = {
email = "proglottis@gmail.com";
github = "proglottis";
githubId = 74465;
name = "James Fargher";
};
progval = {
email = "progval+nix@progval.net";
github = "progval";
@ -15739,6 +15812,12 @@
githubId = 7221768;
name = "Andika Demas Riyandi";
};
rjpcasalino = {
email = "ryan@rjpc.net";
github = "rjpcasalino";
githubId = 12821230;
name = "Ryan J.P. Casalino";
};
rkitover = {
email = "rkitover@gmail.com";
github = "rkitover";
@ -16990,6 +17069,12 @@
fingerprint = "ADF4 C13D 0E36 1240 BD01 9B51 D1DE 6D7F 6936 63A5";
}];
};
Silver-Golden = {
name = "Brendan Golden";
email = "github+nixpkgs@brendan.ie";
github = "Silver-Golden";
githubId = 7858375;
};
simarra = {
name = "simarra";
email = "loic.martel@protonmail.com";
@ -18132,6 +18217,12 @@
githubId = 2389333;
name = "Andy Tockman";
};
teatwig = {
email = "nix@teatwig.net";
name = "tea";
github = "teatwig";
githubId = 18734648;
};
techknowlogick = {
email = "techknowlogick@gitea.com";
github = "techknowlogick";
@ -19080,6 +19171,12 @@
github = "uakci";
githubId = 6961268;
};
uartman = {
name = "Anton Gusev";
email = "uartman@mail.ru";
github = "UARTman";
githubId = 21099202;
};
udono = {
email = "udono@virtual-things.biz";
github = "udono";
@ -19572,7 +19669,15 @@
githubId = 13259982;
name = "Vanessa McHale";
};
vncsb = {
email = "viniciusbernardino1@hotmail.com";
github = "vncsb";
githubId = 19562240;
name = "Vinicius Bernardino";
keys = [{
fingerprint = "F0D3 920C 722A 541F 0CCD 66E3 A7BA BA05 3D78 E7CA";
}];
};
voidless = {
email = "julius.schmitt@yahoo.de";
github = "voidIess";
@ -20147,7 +20252,7 @@
xfix = {
email = "kamila@borowska.pw";
matrix = "@xfix:matrix.org";
github = "xfix";
github = "KamilaBorowska";
githubId = 1297598;
name = "Kamila Borowska";
};

View file

@ -7,8 +7,11 @@ set -eu -o pipefail
# Stackage solver to use, LTS or Nightly
# (should be capitalized like the display name)
SOLVER=LTS
# Stackage solver version, if any. Use latest if empty
VERSION=21
TMP_TEMPLATE=update-stackage.XXXXXXX
readonly SOLVER
readonly VERSION
readonly TMP_TEMPLATE
toLower() {
@ -23,7 +26,7 @@ stackage_config="pkgs/development/haskell-modules/configuration-hackage2nix/stac
trap 'rm "${tmpfile}" "${tmpfile_new}"' 0
touch "$tmpfile" "$tmpfile_new" # Creating files here so that trap creates no errors.
curl -L -s "https://stackage.org/$(toLower "$SOLVER")/cabal.config" >"$tmpfile"
curl -L -s "https://stackage.org/$(toLower "$SOLVER")${VERSION:+-$VERSION}/cabal.config" >"$tmpfile"
old_version=$(grep '^# Stackage' $stackage_config | sed -e 's/.\+ \([A-Za-z]\+ [0-9.-]\+\)$/\1/g')
version="$SOLVER $(sed -rn "s/^--.*http:..(www.)?stackage.org.snapshot.$(toLower "$SOLVER")-//p" "$tmpfile")"

View file

@ -7,7 +7,7 @@ binaryheap,,,,,,vcunat
busted,,,,,,
cassowary,,,,,,marsam alerque
cldr,,,,,,alerque
compat53,,,,0.7-1,,vcunat
compat53,,,,,,vcunat
cosmo,,,,,,marsam
coxpcall,,,,1.17.0-1,,
cqueues,,,,,,vcunat
@ -15,6 +15,7 @@ cyan,,,,,,
digestif,https://github.com/astoff/digestif.git,,,,5.3,
dkjson,,,,,,
fennel,,,,,,misterio77
fidget.nvim,,,,,,mrcjkb
fifo,,,,,,
fluent,,,,,,alerque
fzy,,,,,,mrcjkb
@ -55,7 +56,7 @@ lua-subprocess,https://github.com/0x0ade/lua-subprocess,,,,5.1,scoder12
lua-term,,,,,,
lua-toml,,,,,,
lua-zlib,,,,,,koral
lua_cliargs,https://github.com/amireh/lua_cliargs.git,,,,,
lua_cliargs,,,,,,
luabitop,https://github.com/teto/luabitop.git,,,,,
luacheck,,,,,,
luacov,,,,,,
@ -86,7 +87,7 @@ luautf8,,,,,,pstn
luazip,,,,,,
lua-yajl,,,,,,pstn
lua-iconv,,,,7.0.0,,
luuid,,,,,,
luuid,,,,20120509-2,,
luv,,,,1.44.2-1,,
lush.nvim,https://github.com/rktjmp/lush.nvim,,,,,teto
lyaml,,,,,,lblasc


View file

@ -17,6 +17,7 @@ import http
import json
import logging
import os
import re
import subprocess
import sys
import time
@ -192,6 +193,11 @@ class RepoGitHub(Repo):
with urllib.request.urlopen(commit_req, timeout=10) as req:
self._check_for_redirect(commit_url, req)
xml = req.read()
# Filter out illegal XML characters
illegal_xml_regex = re.compile(b"[\x00-\x08\x0B-\x0C\x0E-\x1F\x7F]")
xml = illegal_xml_regex.sub(b"", xml)
root = ET.fromstring(xml)
latest_entry = root.find(ATOM_ENTRY)
assert latest_entry is not None, f"No commits found in repository {self}"

View file

@ -96,6 +96,16 @@ with lib.maintainers; {
shortName = "Blockchains";
};
buildbot = {
members = [
lopsided98
mic92
zowoq
];
scope = "Maintain Buildbot CI framework";
shortName = "Buildbot";
};
c = {
members = [
matthewbauer

View file

@ -65,12 +65,10 @@ hardware.opengl.extraPackages = [
[Intel Gen8 and later
GPUs](https://en.wikipedia.org/wiki/List_of_Intel_graphics_processing_units#Gen8)
are supported by the Intel NEO OpenCL runtime that is provided by the
intel-compute-runtime package. For Gen7 GPUs, the deprecated Beignet
runtime can be used, which is provided by the beignet package. The
proprietary Intel OpenCL runtime, in the intel-ocl package, is an
alternative for Gen7 GPUs.
intel-compute-runtime package. The proprietary Intel OpenCL runtime, in
the intel-ocl package, is an alternative for Gen7 GPUs.
The intel-compute-runtime, beignet, or intel-ocl package can be added to
The intel-compute-runtime or intel-ocl package can be added to
[](#opt-hardware.opengl.extraPackages)
to enable OpenCL support. For example, for Gen8 and later GPUs, the following
configuration can be used:
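A sketch of that configuration, using the package name given above:
```nix
hardware.opengl.extraPackages = [
  intel-compute-runtime
];
```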

View file

@ -10,6 +10,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- `screen`'s module has been cleaned, and will now require you to set `programs.screen.enable` in order to populate `screenrc` and add the program to the environment.
- `linuxPackages_testing_bcachefs` is now fully deprecated by `linuxPackages_testing`, and is therefore no longer available.
- NixOS now installs a stub ELF loader that prints an informative error message when users attempt to run binaries not made for NixOS.
- This can be disabled through the `environment.stub-ld.enable` option.
- If you use `programs.nix-ld.enable`, no changes are needed. The stub will be disabled automatically.
@ -24,6 +26,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [maubot](https://github.com/maubot/maubot), a plugin-based Matrix bot framework. Available as [services.maubot](#opt-services.maubot.enable).
- systemd's gateway, upload, and remote services, which provides ways of sending journals across the network. Enable using [services.journald.gateway](#opt-services.journald.gateway.enable), [services.journald.upload](#opt-services.journald.upload.enable), and [services.journald.remote](#opt-services.journald.remote.enable).
- [GNS3](https://www.gns3.com/), a network software emulator. Available as [services.gns3-server](#opt-services.gns3-server.enable).
- [rspamd-trainer](https://gitlab.com/onlime/rspamd-trainer), script triggered by a helper which reads mails from a specific mail inbox and feeds them into rspamd for spam/ham training.
@ -43,11 +47,14 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
- `himalaya` was updated to v1.0.0-beta, which introduces breaking changes. Check out the [release note](https://github.com/soywod/himalaya/releases/tag/v1.0.0-beta) for details.
- The `power.ups` module now generates `upsd.conf`, `upsd.users` and `upsmon.conf` automatically from a set of new configuration options. This breaks compatibility with existing `power.ups` setups where these files were created manually. Back up these files before upgrading NixOS.
- `k9s` was updated to v0.30. There have been various breaking changes in the config file format,
check out the changelog of [v0.29](https://github.com/derailed/k9s/releases/tag/v0.29.0) and
[v0.30](https://github.com/derailed/k9s/releases/tag/v0.30.0) for details. It is recommended
- `k9s` was updated to v0.31. There have been various breaking changes in the config file format,
check out the changelog of [v0.29](https://github.com/derailed/k9s/releases/tag/v0.29.0),
[v0.30](https://github.com/derailed/k9s/releases/tag/v0.30.0) and
[v0.31](https://github.com/derailed/k9s/releases/tag/v0.31.0) for details. It is recommended
to back up your current configuration and let k9s recreate the new base configuration.
- `idris2` was updated to v0.7.0. This version introduces breaking changes. Check out the [changelog](https://github.com/idris-lang/Idris2/blob/v0.7.0/CHANGELOG.md#v070) for details.
@ -56,9 +63,23 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- Invidious has changed its default database username from `kemal` to `invidious`. Setups involving an externally provisioned database (i.e. `services.invidious.database.createLocally == false`) should adjust their configuration accordingly. The old `kemal` user will not be removed automatically even when the database is provisioned automatically. ([#265857](https://github.com/NixOS/nixpkgs/pull/265857))
- `paperless`' `services.paperless.extraConfig` setting has been removed and replaced by the freeform option `services.paperless.settings`.
- `mkosi` was updated to v19. Parts of the user interface have changed. Consult the
[release notes](https://github.com/systemd/mkosi/releases/tag/v19) for a list of changes.
- `services.nginx` will no longer advertise HTTP/3 availability automatically. This must now be manually added, preferably to each location block.
Example:
```nix
locations."/".extraConfig = ''
add_header Alt-Svc 'h3=":$server_port"; ma=86400';
'';
locations."^~ /assets/".extraConfig = ''
add_header Alt-Svc 'h3=":$server_port"; ma=86400';
'';
```
- The `kanata` package has been updated to v1.5.0, which includes [breaking changes](https://github.com/jtroo/kanata/releases/tag/v1.5.0).
- The latest available version of Nextcloud is v28 (available as `pkgs.nextcloud28`). The installation logic is as follows:
@ -78,6 +99,8 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
`CONFIG_FILE_NAME` includes `bpf_pinning`, `ematch_map`, `group`, `nl_protos`, `rt_dsfield`, `rt_protos`, `rt_realms`, `rt_scopes`, and `rt_tables`.
- The executable file names for `firefox-devedition`, `firefox-beta`, and `firefox-esr` now match their package names, which is consistent with the `firefox-*-bin` packages. The desktop entries are also updated so that you can have multiple editions of Firefox in your app launcher.
- The `systemd.oomd` module behavior has changed as follows:
- Raise ManagedOOMMemoryPressureLimit from 50% to 80%. This should make systemd-oomd kill things less often, and fix issues like [this](https://pagure.io/fedora-workstation/issue/358).
@ -89,6 +112,9 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- `systemd.oomd.enableUserServices` is renamed to `systemd.oomd.enableUserSlices`.
- `security.pam.enableSSHAgentAuth` now requires `services.openssh.authorizedKeysFiles` to be non-empty,
which is the case when `services.openssh.enable` is true. Previously, `pam_ssh_agent_auth` silently failed to work.
## Other Notable Changes {#sec-release-24.05-notable-changes}
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
@ -104,13 +130,15 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
The `nimPackages` and `nim2Packages` sets have been removed.
See https://nixos.org/manual/nixpkgs/unstable#nim for more information.
- [Portunus](https://github.com/majewsky/portunus) has been updated to 2.0.
- [Portunus](https://github.com/majewsky/portunus) has been updated to major version 2.
This version of Portunus supports strong password hashes, but the legacy hash SHA-256 is also still supported to ensure a smooth migration of existing user accounts.
After upgrading, follow the instructions on the [upstream release notes](https://github.com/majewsky/portunus/releases/tag/v2.0.0) to upgrade all user accounts to strong password hashes.
Support for weak password hashes will be removed in NixOS 24.11.
- `libass` now uses the native CoreText backend on Darwin, which may fix subtitle rendering issues with `mpv`, `ffmpeg`, etc.
- [Lilypond](https://lilypond.org/index.html) and [Denemo](https://www.denemo.org) are now compiled with Guile 3.0.
- The following options of the Nextcloud module were moved into [`services.nextcloud.extraOptions`](#opt-services.nextcloud.extraOptions) and renamed to match the name from Nextcloud's `config.php`:
- `logLevel` -> [`loglevel`](#opt-services.nextcloud.extraOptions.loglevel),
- `logType` -> [`log_type`](#opt-services.nextcloud.extraOptions.log_type),
@ -121,6 +149,9 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- `extraTrustedDomains` -> [`trusted_domains`](#opt-services.nextcloud.extraOptions.trusted_domains) and
- `trustedProxies` -> [`trusted_proxies`](#opt-services.nextcloud.extraOptions.trusted_proxies).
- The option [`services.nextcloud.config.dbport`] of the Nextcloud module was removed to match upstream.
The port can be specified in [`services.nextcloud.config.dbhost`](#opt-services.nextcloud.config.dbhost).
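  For example (a sketch; the `host:port` form shown here is an assumption, adjust it to your database setup):
```nix
services.nextcloud.config.dbhost = "127.0.0.1:5432";
```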
- The Yama LSM is now enabled by default in the kernel, which prevents ptracing
non-child processes. This means you will not be able to attach gdb to an
existing process, but will need to start that process from gdb (so it is a
@ -132,11 +163,17 @@ The pre-existing [services.ankisyncd](#opt-services.ankisyncd.enable) has been m
- The source of the `mockgen` package has changed to the [go.uber.org/mock](https://github.com/uber-go/mock) fork because [the original repository is no longer maintained](https://github.com/golang/mock#gomock).
- `security.pam.enableSSHAgentAuth` was renamed to `security.pam.sshAgentAuth.enable` and an `authorizedKeysFiles`
option was added, to control which `authorized_keys` files are trusted. It defaults to the previous behaviour,
**which is insecure**: see [#31611](https://github.com/NixOS/nixpkgs/issues/31611).
- [](#opt-boot.kernel.sysctl._net.core.wmem_max_) changed from a string to an integer because of the addition of a custom merge option (taking the highest value defined to avoid conflicts between 2 services trying to set that value), just as [](#opt-boot.kernel.sysctl._net.core.rmem_max_) since 22.11.
- `services.zfs.zed.enableMail` now uses the global `sendmail` wrapper defined by an email module
(such as msmtp or Postfix). It no longer requires using a special ZFS build with email support.
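  For example, a minimal sketch (picking msmtp as the sendmail provider is an assumption; any module providing the wrapper works):
```nix
programs.msmtp.enable = true;        # provides the global sendmail wrapper
services.zfs.zed.enableMail = true;
```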
- The `krb5` module has been rewritten and moved to `security.krb5`, moving all options but `security.krb5.enable` and `security.krb5.package` into `security.krb5.settings`.
- Gitea 1.21 upgrade has several breaking changes, including:
- Custom themes and other assets that were previously stored in `custom/public/*` now belong in `custom/public/assets/*`
- New instances of Gitea using MySQL now ignore the `[database].CHARSET` config option and always use the `utf8mb4` charset, existing instances should migrate via the `gitea doctor convert` CLI command.

View file

@ -120,7 +120,7 @@ in rec {
{ meta.description = "List of NixOS options in JSON format";
nativeBuildInputs = [
pkgs.brotli
pkgs.python3Minimal
pkgs.python3
];
options = builtins.toFile "options.json"
(builtins.unsafeDiscardStringContext (builtins.toJSON optionsNix));

View file

@ -21,6 +21,9 @@
, # size of the FAT partition, in megabytes.
bootSize ? 1024
, # memory allocated for virtualized build instance
memSize ? 1024
, # The size of the root partition, in megabytes.
rootSize ? 2048
@ -230,7 +233,7 @@ let
).runInLinuxVM (
pkgs.runCommand name
{
memSize = 1024;
inherit memSize;
QEMU_OPTS = "-drive file=$rootDiskImage,if=virtio,cache=unsafe,werror=report";
preVM = ''
PATH=$PATH:${pkgs.qemu_kvm}/bin

View file

@ -18,7 +18,7 @@ python3Packages.buildPythonApplication {
pname = "nixos-test-driver";
version = "1.1";
src = ./.;
format = "pyproject";
pyproject = true;
propagatedBuildInputs = [
coreutils
@ -32,6 +32,10 @@ python3Packages.buildPythonApplication {
++ (lib.optionals enableOCR [ imagemagick_light tesseract4 ])
++ extraPythonPackages python3Packages;
nativeBuildInputs = [
python3Packages.setuptools
];
passthru.tests = {
inherit (nixosTests.nixos-test-driver) driver-timeout;
};

View file

@ -20,6 +20,12 @@ in
default = "nixos-openstack-image-${config.system.nixos.label}-${pkgs.stdenv.hostPlatform.system}";
};
ramMB = mkOption {
type = types.int;
default = 1024;
description = lib.mdDoc "RAM allocation for build VM";
};
sizeMB = mkOption {
type = types.int;
default = 8192;
@ -64,7 +70,7 @@ in
includeChannel = copyChannel;
bootSize = 1000;
memSize = cfg.ramMB;
rootSize = cfg.sizeMB;
rootPoolProperties = {
ashift = 12;

View file

@ -1,369 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.krb5;
# This is to provide support for old configuration options (as much as is
# reasonable). This can be removed after 18.03 was released.
defaultConfig = {
libdefaults = optionalAttrs (cfg.defaultRealm != null)
{ default_realm = cfg.defaultRealm; };
realms = optionalAttrs (lib.all (value: value != null) [
cfg.defaultRealm cfg.kdc cfg.kerberosAdminServer
]) {
${cfg.defaultRealm} = {
kdc = cfg.kdc;
admin_server = cfg.kerberosAdminServer;
};
};
domain_realm = optionalAttrs (lib.all (value: value != null) [
cfg.domainRealm cfg.defaultRealm
]) {
".${cfg.domainRealm}" = cfg.defaultRealm;
${cfg.domainRealm} = cfg.defaultRealm;
};
};
mergedConfig = (recursiveUpdate defaultConfig {
inherit (config.krb5)
kerberos libdefaults realms domain_realm capaths appdefaults plugins
extraConfig config;
});
filterEmbeddedMetadata = value: if isAttrs value then
(filterAttrs
(attrName: attrValue: attrName != "_module" && attrValue != null)
value)
else value;
indent = " ";
mkRelation = name: value:
if (isList value) then
concatMapStringsSep "\n" (mkRelation name) value
else "${name} = ${mkVal value}";
mkVal = value:
if (value == true) then "true"
else if (value == false) then "false"
else if (isInt value) then (toString value)
else if (isAttrs value) then
let configLines = concatLists
(map (splitString "\n")
(mapAttrsToList mkRelation value));
in
(concatStringsSep "\n${indent}"
([ "{" ] ++ configLines))
+ "\n}"
else value;
mkMappedAttrsOrString = value: concatMapStringsSep "\n"
(line: if builtins.stringLength line > 0
then "${indent}${line}"
else line)
(splitString "\n"
(if isAttrs value then
concatStringsSep "\n"
(mapAttrsToList mkRelation value)
else value));
in {
###### interface
options = {
krb5 = {
enable = mkEnableOption (lib.mdDoc "building krb5.conf, configuration file for Kerberos V");
kerberos = mkOption {
type = types.package;
default = pkgs.krb5;
defaultText = literalExpression "pkgs.krb5";
example = literalExpression "pkgs.heimdal";
description = lib.mdDoc ''
The Kerberos implementation that will be present in
`environment.systemPackages` after enabling this
service.
'';
};
libdefaults = mkOption {
type = with types; either attrs lines;
default = {};
apply = attrs: filterEmbeddedMetadata attrs;
example = literalExpression ''
{
default_realm = "ATHENA.MIT.EDU";
};
'';
description = lib.mdDoc ''
Settings used by the Kerberos V5 library.
'';
};
realms = mkOption {
type = with types; either attrs lines;
default = {};
example = literalExpression ''
{
"ATHENA.MIT.EDU" = {
admin_server = "athena.mit.edu";
kdc = [
"athena01.mit.edu"
"athena02.mit.edu"
];
};
};
'';
apply = attrs: filterEmbeddedMetadata attrs;
description = lib.mdDoc "Realm-specific contact information and settings.";
};
domain_realm = mkOption {
type = with types; either attrs lines;
default = {};
example = literalExpression ''
{
"example.com" = "EXAMPLE.COM";
".example.com" = "EXAMPLE.COM";
};
'';
apply = attrs: filterEmbeddedMetadata attrs;
description = lib.mdDoc ''
Map of server hostnames to Kerberos realms.
'';
};
capaths = mkOption {
type = with types; either attrs lines;
default = {};
example = literalExpression ''
{
"ATHENA.MIT.EDU" = {
"EXAMPLE.COM" = ".";
};
"EXAMPLE.COM" = {
"ATHENA.MIT.EDU" = ".";
};
};
'';
apply = attrs: filterEmbeddedMetadata attrs;
description = lib.mdDoc ''
Authentication paths for non-hierarchical cross-realm authentication.
'';
};
appdefaults = mkOption {
type = with types; either attrs lines;
default = {};
example = literalExpression ''
{
pam = {
debug = false;
ticket_lifetime = 36000;
renew_lifetime = 36000;
max_timeout = 30;
timeout_shift = 2;
initial_timeout = 1;
};
};
'';
apply = attrs: filterEmbeddedMetadata attrs;
description = lib.mdDoc ''
Settings used by some Kerberos V5 applications.
'';
};
plugins = mkOption {
type = with types; either attrs lines;
default = {};
example = literalExpression ''
{
ccselect = {
disable = "k5identity";
};
};
'';
apply = attrs: filterEmbeddedMetadata attrs;
description = lib.mdDoc ''
Controls plugin module registration.
'';
};
extraConfig = mkOption {
type = with types; nullOr lines;
default = null;
example = ''
[logging]
kdc = SYSLOG:NOTICE
admin_server = SYSLOG:NOTICE
default = SYSLOG:NOTICE
'';
description = lib.mdDoc ''
These lines go to the end of `krb5.conf` verbatim.
`krb5.conf` may include any of the relations that are
valid for `kdc.conf` (see `man kdc.conf`),
but it is not a recommended practice.
'';
};
config = mkOption {
type = with types; nullOr lines;
default = null;
example = ''
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
admin_server = kerberos.example.com
kdc = kerberos.example.com
default_principal_flags = +preauth
}
[domain_realm]
example.com = EXAMPLE.COM
.example.com = EXAMPLE.COM
[logging]
kdc = SYSLOG:NOTICE
admin_server = SYSLOG:NOTICE
default = SYSLOG:NOTICE
'';
description = lib.mdDoc ''
Verbatim `krb5.conf` configuration. Note that this
is mutually exclusive with configuration via
`libdefaults`, `realms`,
`domain_realm`, `capaths`,
`appdefaults`, `plugins` and
`extraConfig` configuration options. Consult
`man krb5.conf` for documentation.
'';
};
defaultRealm = mkOption {
type = with types; nullOr str;
default = null;
example = "ATHENA.MIT.EDU";
description = lib.mdDoc ''
DEPRECATED, please use
`krb5.libdefaults.default_realm`.
'';
};
domainRealm = mkOption {
type = with types; nullOr str;
default = null;
example = "athena.mit.edu";
description = lib.mdDoc ''
DEPRECATED, please create a map of server hostnames to Kerberos realms
in `krb5.domain_realm`.
'';
};
kdc = mkOption {
type = with types; nullOr str;
default = null;
example = "kerberos.mit.edu";
description = lib.mdDoc ''
DEPRECATED, please pass a `kdc` attribute to a realm
in `krb5.realms`.
'';
};
kerberosAdminServer = mkOption {
type = with types; nullOr str;
default = null;
example = "kerberos.mit.edu";
description = lib.mdDoc ''
DEPRECATED, please pass an `admin_server` attribute
to a realm in `krb5.realms`.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.kerberos ];
environment.etc."krb5.conf".text = if isString cfg.config
then cfg.config
else (''
[libdefaults]
${mkMappedAttrsOrString mergedConfig.libdefaults}
[realms]
${mkMappedAttrsOrString mergedConfig.realms}
[domain_realm]
${mkMappedAttrsOrString mergedConfig.domain_realm}
[capaths]
${mkMappedAttrsOrString mergedConfig.capaths}
[appdefaults]
${mkMappedAttrsOrString mergedConfig.appdefaults}
[plugins]
${mkMappedAttrsOrString mergedConfig.plugins}
'' + optionalString (mergedConfig.extraConfig != null)
("\n" + mergedConfig.extraConfig));
warnings = flatten [
(optional (cfg.defaultRealm != null) ''
The option krb5.defaultRealm is deprecated, please use
krb5.libdefaults.default_realm.
'')
(optional (cfg.domainRealm != null) ''
The option krb5.domainRealm is deprecated, please use krb5.domain_realm.
'')
(optional (cfg.kdc != null) ''
The option krb5.kdc is deprecated, please pass a kdc attribute to a
realm in krb5.realms.
'')
(optional (cfg.kerberosAdminServer != null) ''
The option krb5.kerberosAdminServer is deprecated, please pass an
admin_server attribute to a realm in krb5.realms.
'')
];
assertions = [
{ assertion = !((builtins.any (value: value != null) [
cfg.defaultRealm cfg.domainRealm cfg.kdc cfg.kerberosAdminServer
]) && ((builtins.any (value: value != {}) [
cfg.libdefaults cfg.realms cfg.domain_realm cfg.capaths
cfg.appdefaults cfg.plugins
]) || (builtins.any (value: value != null) [
cfg.config cfg.extraConfig
])));
message = ''
Configuration of krb5.conf by deprecated options is mutually exclusive
with configuration by section. Please migrate your config using the
attributes suggested in the warnings.
'';
}
{ assertion = !(cfg.config != null
&& ((builtins.any (value: value != {}) [
cfg.libdefaults cfg.realms cfg.domain_realm cfg.capaths
cfg.appdefaults cfg.plugins
]) || (builtins.any (value: value != null) [
cfg.extraConfig cfg.defaultRealm cfg.domainRealm cfg.kdc
cfg.kerberosAdminServer
])));
message = ''
Configuration of krb5.conf using krb.config is mutually exclusive with
configuration by section. If you want to mix the two, you can pass
lines to any configuration section or lines to krb5.extraConfig.
'';
}
];
};
}

View file

@ -35,6 +35,7 @@ with lib;
# dep of graphviz, libXpm is optional for Xpm support
gd = super.gd.override { withXorg = false; };
ghostscript = super.ghostscript.override { cupsSupport = false; x11Support = false; };
gjs = super.gjs.overrideAttrs { doCheck = false; installTests = false; }; # avoid test dependency on gtk3
gobject-introspection = super.gobject-introspection.override { x11Support = false; };
gpsd = super.gpsd.override { guiSupport = false; };
graphviz = super.graphviz-nox;

View file

@ -14,7 +14,7 @@ with lib;
config = mkIf config.hardware.usbStorage.manageStartStop {
services.udev.extraRules = ''
ACTION=="add|change", SUBSYSTEM=="scsi_disk", DRIVERS=="usb-storage", ATTR{manage_start_stop}="1"
ACTION=="add|change", SUBSYSTEM=="scsi_disk", DRIVERS=="usb-storage", ATTR{manage_system_start_stop}="1"
'';
};
}

View file

@ -13,11 +13,12 @@ in
enable = mkEnableOption (lib.mdDoc "support for Intel IPU6/MIPI cameras");
platform = mkOption {
type = types.enum [ "ipu6" "ipu6ep" ];
type = types.enum [ "ipu6" "ipu6ep" "ipu6epmtl" ];
description = lib.mdDoc ''
Choose the version for your hardware platform.
Use `ipu6` for Tiger Lake and `ipu6ep` for Alder Lake respectively.
Use `ipu6` for Tiger Lake, `ipu6ep` for Alder Lake or Raptor Lake,
and `ipu6epmtl` for Meteor Lake.
'';
};
@ -29,9 +30,7 @@ in
ipu6-drivers
];
hardware.firmware = with pkgs; [ ]
++ optional (cfg.platform == "ipu6") ipu6-camera-bin
++ optional (cfg.platform == "ipu6ep") ipu6ep-camera-bin;
hardware.firmware = [ pkgs.ipu6-camera-bins ];
services.udev.extraRules = ''
SUBSYSTEM=="intel-ipu6-psys", MODE="0660", GROUP="video"
@ -44,14 +43,13 @@ in
extraPackages = with pkgs.gst_all_1; [ ]
++ optional (cfg.platform == "ipu6") icamerasrc-ipu6
++ optional (cfg.platform == "ipu6ep") icamerasrc-ipu6ep;
++ optional (cfg.platform == "ipu6ep") icamerasrc-ipu6ep
++ optional (cfg.platform == "ipu6epmtl") icamerasrc-ipu6epmtl;
input = {
pipeline = "icamerasrc";
format = mkIf (cfg.platform == "ipu6ep") (mkDefault "NV12");
format = mkIf (cfg.platform != "ipu6") (mkDefault "NV12");
};
};
};
}

View file

@ -19,6 +19,14 @@ in
Enabled Fcitx5 addons.
'';
};
waylandFrontend = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc ''
Use the Wayland input method frontend.
See [Using Fcitx 5 on Wayland](https://fcitx-im.org/wiki/Using_Fcitx_5_on_Wayland).
'';
};
quickPhrase = mkOption {
type = with types; attrsOf str;
default = { };
@ -118,10 +126,11 @@ in
];
environment.variables = {
GTK_IM_MODULE = "fcitx";
QT_IM_MODULE = "fcitx";
XMODIFIERS = "@im=fcitx";
QT_PLUGIN_PATH = [ "${fcitx5Package}/${pkgs.qt6.qtbase.qtPluginPrefix}" ];
} // lib.optionalAttrs (!cfg.waylandFrontend) {
GTK_IM_MODULE = "fcitx";
QT_IM_MODULE = "fcitx";
} // lib.optionalAttrs cfg.ignoreUserConfig {
SKIP_FCITX_USER_PATH = "1";
};
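# Hypothetical usage sketch: a host configuration might set
#   i18n.inputMethod.enabled = "fcitx5";
#   i18n.inputMethod.fcitx5.waylandFrontend = true;
# so that GTK_IM_MODULE and QT_IM_MODULE stay unset and the Wayland
# input method frontend is used instead.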

View file

@ -10,7 +10,6 @@
./config/gtk/gtk-icon-cache.nix
./config/i18n.nix
./config/iproute2.nix
./config/krb5/default.nix
./config/ldap.nix
./config/ldso.nix
./config/locale.nix
@ -273,6 +272,7 @@
./programs/virt-manager.nix
./programs/wavemon.nix
./programs/wayland/cardboard.nix
./programs/wayland/labwc.nix
./programs/wayland/river.nix
./programs/wayland/sway.nix
./programs/wayland/waybar.nix
@ -308,6 +308,7 @@
./security/duosec.nix
./security/google_oslogin.nix
./security/ipa.nix
./security/krb5
./security/lock-kernel-modules.nix
./security/misc.nix
./security/oath.nix
@ -497,6 +498,7 @@
./services/development/jupyterhub/default.nix
./services/development/livebook.nix
./services/development/lorri.nix
./services/development/nixseparatedebuginfod.nix
./services/development/rstudio-server/default.nix
./services/development/zammad.nix
./services/display-managers/greetd.nix
@ -832,6 +834,7 @@
./services/monitoring/riemann.nix
./services/monitoring/scollector.nix
./services/monitoring/smartd.nix
./services/monitoring/snmpd.nix
./services/monitoring/statsd.nix
./services/monitoring/sysstat.nix
./services/monitoring/teamviewer.nix
@ -1175,6 +1178,7 @@
./services/search/typesense.nix
./services/security/aesmd.nix
./services/security/authelia.nix
./services/security/bitwarden-directory-connector-cli.nix
./services/security/certmgr.nix
./services/security/cfssl.nix
./services/security/clamav.nix
@ -1472,6 +1476,9 @@
./system/boot/systemd/initrd-secrets.nix
./system/boot/systemd/initrd.nix
./system/boot/systemd/journald.nix
./system/boot/systemd/journald-gateway.nix
./system/boot/systemd/journald-remote.nix
./system/boot/systemd/journald-upload.nix
./system/boot/systemd/logind.nix
./system/boot/systemd/nspawn.nix
./system/boot/systemd/oomd.nix

View file

@ -284,6 +284,7 @@ in
# Preferences are converted into a policy
programs.firefox.policies = {
DisableAppUpdate = true;
Preferences = (mapAttrs
(_: value: { Value = value; Status = cfg.preferencesStatus; })
cfg.preferences);

View file

@ -14,6 +14,6 @@ with lib;
config = mkIf config.programs.partition-manager.enable {
services.dbus.packages = [ pkgs.libsForQt5.kpmcore ];
# `kpmcore` need to be installed to pull in polkit actions.
environment.systemPackages = [ pkgs.libsForQt5.kpmcore pkgs.partition-manager ];
environment.systemPackages = [ pkgs.libsForQt5.kpmcore pkgs.libsForQt5.partitionmanager ];
};
}

View file

@ -61,7 +61,12 @@ in
};
enableSuid = mkOption {
type = types.bool;
default = true;
# SingularityCE requires SETUID for most things. Apptainer prefers user
# namespaces, e.g. `apptainer exec --nv` would fail if built
# `--with-suid`:
# > `FATAL: nvidia-container-cli not allowed in setuid mode`
default = cfg.package.projectName != "apptainer";
defaultText = literalExpression ''config.services.singularity.package.projectName != "apptainer"'';
example = false;
description = mdDoc ''
Whether to enable the SUID support of Singularity/Apptainer.

View file

@ -0,0 +1,25 @@
{ config, lib, pkgs, ... }:
let
cfg = config.programs.labwc;
in
{
meta.maintainers = with lib.maintainers; [ AndersonTorres ];
options.programs.labwc = {
enable = lib.mkEnableOption (lib.mdDoc "labwc");
package = lib.mkPackageOption pkgs "labwc" { };
};
config = lib.mkIf cfg.enable (lib.mkMerge [
{
environment.systemPackages = [ cfg.package ];
xdg.portal.config.wlroots.default = lib.mkDefault [ "wlr" "gtk" ];
# To make a labwc session available for certain DMs like SDDM
services.xserver.displayManager.sessionPackages = [ cfg.package ];
}
(import ./wayland-session.nix { inherit lib pkgs; })
]);
}
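For reference, a minimal sketch of enabling the new module from a system configuration:

```nix
{
  programs.labwc.enable = true;
  # programs.labwc.package defaults to pkgs.labwc via mkPackageOption.
}
```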

View file

@ -117,8 +117,8 @@ in {
config = mkIf cfg.enable {
assertions = [
{
assertion = !config.krb5.enable;
message = "krb5 must be disabled through `krb5.enable` for FreeIPA integration to work.";
assertion = !config.security.krb5.enable;
message = "krb5 must be disabled through `security.krb5.enable` for FreeIPA integration to work.";
}
{
assertion = !config.users.ldap.enable;

View file

@ -0,0 +1,90 @@
{ config, lib, pkgs, ... }:
let
inherit (lib) mdDoc mkIf mkOption mkPackageOption mkRemovedOptionModule;
inherit (lib.types) bool;
mkRemovedOptionModule' = name: reason: mkRemovedOptionModule ["krb5" name] reason;
mkRemovedOptionModuleCfg = name: mkRemovedOptionModule' name ''
The option `krb5.${name}' has been removed. Use
`security.krb5.settings.${name}' for structured configuration.
'';
cfg = config.security.krb5;
format = import ./krb5-conf-format.nix { inherit pkgs lib; } { };
in {
imports = [
(mkRemovedOptionModuleCfg "libdefaults")
(mkRemovedOptionModuleCfg "realms")
(mkRemovedOptionModuleCfg "domain_realm")
(mkRemovedOptionModuleCfg "capaths")
(mkRemovedOptionModuleCfg "appdefaults")
(mkRemovedOptionModuleCfg "plugins")
(mkRemovedOptionModuleCfg "config")
(mkRemovedOptionModuleCfg "extraConfig")
(mkRemovedOptionModule' "kerberos" ''
The option `krb5.kerberos' has been moved to `security.krb5.package'.
'')
];
options = {
security.krb5 = {
enable = mkOption {
default = false;
description = mdDoc "Enable and configure Kerberos utilities";
type = bool;
};
package = mkPackageOption pkgs "krb5" {
example = "heimdal";
};
settings = mkOption {
default = { };
type = format.type;
description = mdDoc ''
Structured contents of the {file}`krb5.conf` file. See
{manpage}`krb5.conf(5)` for details about configuration.
'';
example = {
include = [ "/run/secrets/secret-krb5.conf" ];
includedir = [ "/run/secrets/secret-krb5.conf.d" ];
libdefaults = {
default_realm = "ATHENA.MIT.EDU";
};
realms = {
"ATHENA.MIT.EDU" = {
admin_server = "athena.mit.edu";
kdc = [
"athena01.mit.edu"
"athena02.mit.edu"
];
};
};
domain_realm = {
"mit.edu" = "ATHENA.MIT.EDU";
};
logging = {
kdc = "SYSLOG:NOTICE";
admin_server = "SYSLOG:NOTICE";
default = "SYSLOG:NOTICE";
};
};
};
};
};
config = mkIf cfg.enable {
environment = {
systemPackages = [ cfg.package ];
etc."krb5.conf".source = format.generate "krb5.conf" cfg.settings;
};
};
meta.maintainers = builtins.attrValues {
inherit (lib.maintainers) dblsaiko h7x4;
};
}

View file

@ -0,0 +1,88 @@
{ pkgs, lib, ... }:
# Based on
# - https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html
# - https://manpages.debian.org/unstable/heimdal-docs/krb5.conf.5heimdal.en.html
let
inherit (lib) boolToString concatMapStringsSep concatStringsSep filter
isAttrs isBool isList mapAttrsToList mdDoc mkOption singleton splitString;
inherit (lib.types) attrsOf bool coercedTo either int listOf oneOf path
str submodule;
in
{ }: {
type = let
section = attrsOf relation;
relation = either (attrsOf value) value;
value = either (listOf atom) atom;
atom = oneOf [int str bool];
in submodule {
freeformType = attrsOf section;
options = {
include = mkOption {
default = [ ];
description = mdDoc ''
Files to include in the Kerberos configuration.
'';
type = coercedTo path singleton (listOf path);
};
includedir = mkOption {
default = [ ];
description = mdDoc ''
Directories containing files to include in the Kerberos configuration.
'';
type = coercedTo path singleton (listOf path);
};
module = mkOption {
default = [ ];
description = mdDoc ''
Modules to obtain Kerberos configuration from.
'';
type = coercedTo path singleton (listOf path);
};
};
};
generate = let
indent = str: concatMapStringsSep "\n" (line: " " + line) (splitString "\n" str);
formatToplevel = args @ {
include ? [ ],
includedir ? [ ],
module ? [ ],
...
}: let
sections = removeAttrs args [ "include" "includedir" "module" ];
in concatStringsSep "\n" (filter (x: x != "") [
(concatStringsSep "\n" (mapAttrsToList formatSection sections))
(concatMapStringsSep "\n" (m: "module ${m}") module)
(concatMapStringsSep "\n" (i: "include ${i}") include)
(concatMapStringsSep "\n" (i: "includedir ${i}") includedir)
]);
formatSection = name: section: ''
[${name}]
${indent (concatStringsSep "\n" (mapAttrsToList formatRelation section))}
'';
formatRelation = name: relation:
if isAttrs relation
then ''
${name} = {
${indent (concatStringsSep "\n" (mapAttrsToList formatValue relation))}
}''
else formatValue name relation;
formatValue = name: value:
if isList value
then concatMapStringsSep "\n" (formatAtom name) value
else formatAtom name value;
formatAtom = name: atom: let
v = if isBool atom then boolToString atom else toString atom;
in "${name} = ${v}";
in
name: value: pkgs.writeText name ''
${formatToplevel value}
'';
}
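For reference, a hedged sketch of what this generator emits for a subset of the example settings shown in the module above (libdefaults, one realm, and an include; traced by hand from formatSection/formatRelation rather than taken from tests):

```
[libdefaults]
  default_realm = ATHENA.MIT.EDU

[realms]
  ATHENA.MIT.EDU = {
    admin_server = athena.mit.edu
    kdc = athena01.mit.edu
    kdc = athena02.mit.edu
  }

include /run/secrets/secret-krb5.conf
```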

View file

@ -654,8 +654,8 @@ let
{ name = "mysql"; enable = cfg.mysqlAuth; control = "sufficient"; modulePath = "${pkgs.pam_mysql}/lib/security/pam_mysql.so"; settings = {
config_file = "/etc/security/pam_mysql.conf";
}; }
{ name = "ssh_agent_auth"; enable = config.security.pam.enableSSHAgentAuth && cfg.sshAgentAuth; control = "sufficient"; modulePath = "${pkgs.pam_ssh_agent_auth}/libexec/pam_ssh_agent_auth.so"; settings = {
file = lib.concatStringsSep ":" config.services.openssh.authorizedKeysFiles;
{ name = "ssh_agent_auth"; enable = config.security.pam.sshAgentAuth.enable && cfg.sshAgentAuth; control = "sufficient"; modulePath = "${pkgs.pam_ssh_agent_auth}/libexec/pam_ssh_agent_auth.so"; settings = {
file = lib.concatStringsSep ":" config.security.pam.sshAgentAuth.authorizedKeysFiles;
}; }
(let p11 = config.security.pam.p11; in { name = "p11"; enable = cfg.p11Auth; control = p11.control; modulePath = "${pkgs.pam_p11}/lib/security/pam_p11.so"; args = [
"${pkgs.opensc}/lib/opensc-pkcs11.so"
@ -943,7 +943,7 @@ let
value.source = pkgs.writeText "${name}.pam" service.text;
};
optionalSudoConfigForSSHAgentAuth = optionalString config.security.pam.enableSSHAgentAuth ''
optionalSudoConfigForSSHAgentAuth = optionalString config.security.pam.sshAgentAuth.enable ''
# Keep SSH_AUTH_SOCK so that pam_ssh_agent_auth.so can do its magic.
Defaults env_keep+=SSH_AUTH_SOCK
'';
@ -956,6 +956,7 @@ in
imports = [
(mkRenamedOptionModule [ "security" "pam" "enableU2F" ] [ "security" "pam" "u2f" "enable" ])
(mkRenamedOptionModule [ "security" "pam" "enableSSHAgentAuth" ] [ "security" "pam" "sshAgentAuth" "enable" ])
];
###### interface
@ -1025,16 +1026,34 @@ in
'';
};
security.pam.enableSSHAgentAuth = mkOption {
type = types.bool;
default = false;
description =
lib.mdDoc ''
Enable sudo logins if the user's SSH agent provides a key
present in {file}`~/.ssh/authorized_keys`.
This allows machines to exclusively use SSH keys instead of
passwords.
security.pam.sshAgentAuth = {
enable = mkEnableOption ''
authenticating using a signature performed by the ssh-agent.
This allows using SSH keys exclusively, instead of passwords, for instance on remote machines
'';
authorizedKeysFiles = mkOption {
type = with types; listOf str;
description = ''
A list of paths to files in OpenSSH's `authorized_keys` format, containing
the keys that will be trusted by the `pam_ssh_agent_auth` module.
The following patterns are expanded when interpreting the path:
- `%f` and `%H` respectively expand to the fully-qualified and short hostname;
- `%u` expands to the username;
- `~` or `%h` expands to the user's home directory.
::: {.note}
Specifying user-writeable files here results in an insecure configuration: a malicious process
can then edit such an authorized_keys file and bypass the ssh-agent-based authentication.
See [issue #31611](https://github.com/NixOS/nixpkgs/issues/31611)
:::
'';
example = [ "/etc/ssh/authorized_keys.d/%u" ];
default = config.services.openssh.authorizedKeysFiles;
defaultText = literalExpression "config.services.openssh.authorizedKeysFiles";
};
};
security.pam.enableOTPW = mkEnableOption (lib.mdDoc "the OTPW (one-time password) PAM module");
@ -1067,8 +1086,8 @@ in
security.pam.krb5 = {
enable = mkOption {
default = config.krb5.enable;
defaultText = literalExpression "config.krb5.enable";
default = config.security.krb5.enable;
defaultText = literalExpression "config.security.krb5.enable";
type = types.bool;
description = lib.mdDoc ''
Enables Kerberos PAM modules (`pam-krb5`,
@ -1076,7 +1095,7 @@ in
If set, users can authenticate with their Kerberos password.
This requires a valid Kerberos configuration
(`config.krb5.enable` should be set to
(`config.security.krb5.enable` should be set to
`true`).
Note that the Kerberos PAM modules are not necessary when using SSS
@ -1456,8 +1475,25 @@ in
`security.pam.zfs.enable` requires enabling ZFS (`boot.zfs.enabled` or `boot.zfs.enableUnstable`).
'';
}
{
assertion = with config.security.pam.sshAgentAuth; enable -> authorizedKeysFiles != [];
message = ''
`security.pam.sshAgentAuth` requires `services.openssh.authorizedKeysFiles` to be a non-empty list.
Did you forget to set `services.openssh.enable`?
'';
}
];
warnings = optional
(with lib; with config.security.pam.sshAgentAuth;
enable && any (s: hasPrefix "%h" s || hasPrefix "~" s) authorizedKeysFiles)
''config.security.pam.sshAgentAuth.authorizedKeysFiles contains files in the user's home directory.
Specifying user-writeable files there results in an insecure configuration:
a malicious process can then edit such an authorized_keys file and bypass the ssh-agent-based authentication.
See https://github.com/NixOS/nixpkgs/issues/31611
'';
environment.systemPackages =
# Include the PAM modules in the system path mostly for the manpages.
[ pkgs.pam ]
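A hedged sketch of the renamed options in use, with the non-user-writeable location from the example above:

```nix
{
  security.pam.sshAgentAuth = {
    # Allow e.g. sudo to authenticate via a key held by a (forwarded) ssh-agent.
    enable = true;
    # Defaults to config.services.openssh.authorizedKeysFiles; avoid user-writeable
    # paths such as ~/.ssh/authorized_keys (see the warning added above).
    authorizedKeysFiles = [ "/etc/ssh/authorized_keys.d/%u" ];
  };
}
```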

View file

@ -6,8 +6,6 @@ let
cfg = config.security.sudo;
inherit (config.security.pam) enableSSHAgentAuth;
toUserString = user: if (isInt user) then "#${toString user}" else "${user}";
toGroupString = group: if (isInt group) then "%#${toString group}" else "%${group}";

View file

@ -44,12 +44,19 @@ in
initialPasswordFile = mkOption {
description = lib.mdDoc ''
Initial password file for the pgAdmin account.
Initial password file for the pgAdmin account. The default minimum length is 6.
Please see `services.pgadmin.minimumPasswordLength`.
NOTE: Should be a string, not a store path, to prevent the password from being world-readable.
'';
type = types.path;
};
minimumPasswordLength = mkOption {
description = lib.mdDoc "Minimum length of the password";
type = types.int;
default = 6;
};
emailServer = {
enable = mkOption {
description = lib.mdDoc ''
@ -116,7 +123,9 @@ in
services.pgadmin.settings = {
DEFAULT_SERVER_PORT = cfg.port;
PASSWORD_LENGTH_MIN = cfg.minimumPasswordLength;
SERVER_MODE = true;
UPGRADE_CHECK_ENABLED = false;
} // (optionalAttrs cfg.openFirewall {
DEFAULT_SERVER = mkDefault "::";
}) // (optionalAttrs cfg.emailServer.enable {
@ -140,6 +149,14 @@ in
preStart = ''
# NOTE: this is idempotent (aka running it twice has no effect)
# Check here for password length to prevent pgadmin from starting
# and presenting a hard-to-find error message
# see https://github.com/NixOS/nixpkgs/issues/270624
PW_LENGTH=$(wc -m < ${escapeShellArg cfg.initialPasswordFile})
if [ $PW_LENGTH -lt ${toString cfg.minimumPasswordLength} ]; then
echo "Password must be at least ${toString cfg.minimumPasswordLength} characters long"
exit 1
fi
(
# Email address:
echo ${escapeShellArg cfg.initialEmail}

View file

@ -305,5 +305,5 @@ in {
'')
];
meta.maintainers = with lib.maintainers; [ mic92 lopsided98 ];
meta.maintainers = lib.teams.buildbot.members;
}

View file

@ -188,6 +188,6 @@ in {
};
};
meta.maintainers = with lib.maintainers; [ ];
meta.maintainers = lib.teams.buildbot.members;
}

View file

@ -0,0 +1,105 @@
{ pkgs, lib, config, ... }:
let
cfg = config.services.nixseparatedebuginfod;
url = "127.0.0.1:${toString cfg.port}";
in
{
options = {
services.nixseparatedebuginfod = {
enable = lib.mkEnableOption "separatedebuginfod, a debuginfod server providing source and debuginfo for nix packages";
port = lib.mkOption {
description = "port to listen";
default = 1949;
type = lib.types.port;
};
nixPackage = lib.mkOption {
type = lib.types.package;
default = pkgs.nix;
defaultText = lib.literalExpression "pkgs.nix";
description = ''
The version of nix that nixseparatedebuginfod should use as a client for the nix daemon. It is strongly advised to use nix version >= 2.18, otherwise some debug info may go missing.
'';
};
allowOldNix = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Do not fail evaluation when {option}`services.nixseparatedebuginfod.nixPackage` is older than nix 2.18.
'';
};
};
};
config = lib.mkIf cfg.enable {
assertions = [ {
assertion = cfg.allowOldNix || (lib.versionAtLeast cfg.nixPackage.version "2.18");
message = "nixseparatedebuginfod works better when `services.nixseparatedebuginfod.nixPackage` is set to nix >= 2.18 (instead of ${cfg.nixPackage.name}). Set `services.nixseparatedebuginfod.allowOldNix` to bypass.";
} ];
systemd.services.nixseparatedebuginfod = {
wantedBy = [ "multi-user.target" ];
wants = [ "nix-daemon.service" ];
after = [ "nix-daemon.service" ];
path = [ cfg.nixPackage ];
serviceConfig = {
ExecStart = [ "${pkgs.nixseparatedebuginfod}/bin/nixseparatedebuginfod -l ${url}" ];
Restart = "on-failure";
CacheDirectory = "nixseparatedebuginfod";
# nix does not like DynamicUsers in allowed-users
User = "nixseparatedebuginfod";
Group = "nixseparatedebuginfod";
# hardening
# Filesystem stuff
ProtectSystem = "strict"; # Prevent writing to most of /
ProtectHome = true; # Prevent accessing /home and /root
PrivateTmp = true; # Give an own directory under /tmp
PrivateDevices = true; # Deny access to most of /dev
ProtectKernelTunables = true; # Protect some parts of /sys
ProtectControlGroups = true; # Remount cgroups read-only
RestrictSUIDSGID = true; # Prevent creating SETUID/SETGID files
PrivateMounts = true; # Give an own mount namespace
RemoveIPC = true;
UMask = "0077";
# Capabilities
CapabilityBoundingSet = ""; # Allow no capabilities at all
NoNewPrivileges = true; # Disallow getting more capabilities. This is also implied by other options.
# Kernel stuff
ProtectKernelModules = true; # Prevent loading of kernel modules
SystemCallArchitectures = "native"; # Usually no need to disable this
ProtectKernelLogs = true; # Prevent access to kernel logs
ProtectClock = true; # Prevent setting the RTC
# Networking
RestrictAddressFamilies = "AF_UNIX AF_INET AF_INET6";
# Misc
LockPersonality = true; # Prevent change of the personality
ProtectHostname = true; # Give an own UTS namespace
RestrictRealtime = true; # Prevent switching to RT scheduling
MemoryDenyWriteExecute = true; # Maybe disable this for interpreters like python
RestrictNamespaces = true;
};
};
users.users.nixseparatedebuginfod = {
isSystemUser = true;
group = "nixseparatedebuginfod";
};
users.groups.nixseparatedebuginfod = { };
nix.settings.extra-allowed-users = [ "nixseparatedebuginfod" ];
environment.variables.DEBUGINFOD_URLS = "http://${url}";
environment.systemPackages = [
# valgrind support requires debuginfod-find on PATH
(lib.getBin pkgs.elfutils)
];
environment.etc."gdb/gdbinit.d/nixseparatedebuginfod.gdb".text = "set debuginfod enabled on";
};
}
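A minimal usage sketch; the gdbinit snippet and DEBUGINFOD_URLS set above make gdb and debuginfod-find use the local server automatically:

```nix
{
  services.nixseparatedebuginfod.enable = true;
  # The default port is 1949, so DEBUGINFOD_URLS points at http://127.0.0.1:1949.
}
```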

View file

@ -1,18 +1,15 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.vdr;
libDir = "/var/lib/vdr";
in {
###### interface
inherit (lib)
mkEnableOption mkPackageOption mkOption types mkIf optional mdDoc;
in
{
options = {
services.vdr = {
enable = mkEnableOption (lib.mdDoc "VDR. Please put config into ${libDir}");
enable = mkEnableOption (mdDoc "Start VDR");
package = mkPackageOption pkgs "vdr" {
example = "wrapVdr.override { plugins = with pkgs.vdrPlugins; [ hello ]; }";
@ -21,59 +18,84 @@ in {
videoDir = mkOption {
type = types.path;
default = "/srv/vdr/video";
description = lib.mdDoc "Recording directory";
description = mdDoc "Recording directory";
};
extraArguments = mkOption {
type = types.listOf types.str;
default = [ ];
description = lib.mdDoc "Additional command line arguments to pass to VDR.";
description = mdDoc "Additional command line arguments to pass to VDR.";
};
enableLirc = mkEnableOption (lib.mdDoc "LIRC");
enableLirc = mkEnableOption (mdDoc "LIRC");
user = mkOption {
type = types.str;
default = "vdr";
description = mdDoc ''
User under which the VDR service runs.
'';
};
group = mkOption {
type = types.str;
default = "vdr";
description = mdDoc ''
Group under which the VDR service runs.
'';
};
};
###### implementation
};
config = mkIf cfg.enable {
config = mkIf cfg.enable (mkMerge [{
systemd.tmpfiles.rules = [
"d ${cfg.videoDir} 0755 vdr vdr -"
"Z ${cfg.videoDir} - vdr vdr -"
"d ${cfg.videoDir} 0755 ${cfg.user} ${cfg.group} -"
"Z ${cfg.videoDir} - ${cfg.user} ${cfg.group} -"
];
systemd.services.vdr = {
description = "VDR";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
wants = optional cfg.enableLirc "lircd.service";
after = [ "network.target" ]
++ optional cfg.enableLirc "lircd.service";
serviceConfig = {
ExecStart = ''
${cfg.package}/bin/vdr \
--video="${cfg.videoDir}" \
--config="${libDir}" \
${escapeShellArgs cfg.extraArguments}
'';
User = "vdr";
ExecStart =
let
args = [
"--video=${cfg.videoDir}"
]
++ optional cfg.enableLirc "--lirc=${config.passthru.lirc.socket}"
++ cfg.extraArguments;
in
"${cfg.package}/bin/vdr ${lib.escapeShellArgs args}";
User = cfg.user;
Group = cfg.group;
CacheDirectory = "vdr";
StateDirectory = "vdr";
RuntimeDirectory = "vdr";
Restart = "on-failure";
};
};
users.users.vdr = {
group = "vdr";
home = libDir;
environment.systemPackages = [ cfg.package ];
users.users = mkIf (cfg.user == "vdr") {
vdr = {
inherit (cfg) group;
home = "/run/vdr";
isSystemUser = true;
extraGroups = [
"video"
"audio"
]
++ optional cfg.enableLirc "lirc";
};
};
users.groups.vdr = {};
}
users.groups = mkIf (cfg.group == "vdr") { vdr = { }; };
(mkIf cfg.enableLirc {
services.lirc.enable = true;
users.users.vdr.extraGroups = [ "lirc" ];
services.vdr.extraArguments = [
"--lirc=${config.passthru.lirc.socket}"
];
})]);
};
}

View file

@ -1,8 +1,11 @@
{ options, config, lib, pkgs, ... }:
with lib;
let
inherit (lib) any attrValues concatMapStringsSep concatStrings
concatStringsSep flatten imap1 isList literalExpression mapAttrsToList
mkEnableOption mkIf mkOption mkRemovedOptionModule optional optionalAttrs
optionalString singleton types;
cfg = config.services.dovecot2;
dovecotPkg = pkgs.dovecot;
@ -113,6 +116,36 @@ let
''
)
''
plugin {
sieve_plugins = ${concatStringsSep " " cfg.sieve.plugins}
sieve_extensions = ${concatStringsSep " " (map (el: "+${el}") cfg.sieve.extensions)}
sieve_global_extensions = ${concatStringsSep " " (map (el: "+${el}") cfg.sieve.globalExtensions)}
''
(optionalString (cfg.imapsieve.mailbox != []) ''
${
concatStringsSep "\n" (flatten (imap1 (
idx: el:
singleton "imapsieve_mailbox${toString idx}_name = ${el.name}"
++ optional (el.from != null) "imapsieve_mailbox${toString idx}_from = ${el.from}"
++ optional (el.causes != null) "imapsieve_mailbox${toString idx}_causes = ${el.causes}"
++ optional (el.before != null) "imapsieve_mailbox${toString idx}_before = file:${stateDir}/imapsieve/before/${baseNameOf el.before}"
++ optional (el.after != null) "imapsieve_mailbox${toString idx}_after = file:${stateDir}/imapsieve/after/${baseNameOf el.after}"
)
cfg.imapsieve.mailbox))
}
'')
(optionalString (cfg.sieve.pipeBins != []) ''
sieve_pipe_bin_dir = ${pkgs.linkFarm "sieve-pipe-bins" (map (el: {
name = builtins.unsafeDiscardStringContext (baseNameOf el);
path = el;
})
cfg.sieve.pipeBins)}
'')
''
}
''
cfg.extraConfig
];
@ -343,6 +376,104 @@ in
description = lib.mdDoc "Quota limit for the user in bytes. Supports suffixes b, k, M, G, T and %.";
};
imapsieve.mailbox = mkOption {
default = [];
description = "Configure Sieve filtering rules on IMAP actions";
type = types.listOf (types.submodule ({ config, ... }: {
options = {
name = mkOption {
description = ''
This setting configures the name of a mailbox for which administrator scripts are configured.
The settings defined hereafter with matching sequence numbers apply to the mailbox named by this setting.
This setting supports wildcards with a syntax compatible with the IMAP LIST command, meaning that this setting can apply to multiple or even all ("*") mailboxes.
'';
example = "Junk";
type = types.str;
};
from = mkOption {
default = null;
description = ''
Only execute the administrator Sieve scripts for the mailbox configured with services.dovecot2.imapsieve.mailbox.<name>.name when the message originates from the indicated mailbox.
This setting supports wildcards with a syntax compatible with the IMAP LIST command, meaning that this setting can apply to multiple or even all ("*") mailboxes.
'';
example = "*";
type = types.nullOr types.str;
};
causes = mkOption {
default = null;
description = ''
Only execute the administrator Sieve scripts for the mailbox configured with services.dovecot2.imapsieve.mailbox.<name>.name when one of the listed IMAPSIEVE causes apply.
This has no effect on the user script, which is always executed no matter the cause.
'';
example = "COPY";
type = types.nullOr (types.enum [ "APPEND" "COPY" "FLAG" ]);
};
before = mkOption {
default = null;
description = ''
When an IMAP event of interest occurs, this Sieve script is executed before any user script.
This setting specifies the location of a single Sieve script. Its semantics are similar to sieve_before: the specified scripts form a sequence together with the user script, in which the next script is only executed when an (implicit) keep action is executed.
'';
example = literalExpression "./report-spam.sieve";
type = types.nullOr types.path;
};
after = mkOption {
default = null;
description = ''
When an IMAP event of interest occurs, this Sieve script is executed after any user script.
This setting specifies the location of a single Sieve script. Its semantics are similar to sieve_after: the specified scripts form a sequence together with the user script, in which the next script is only executed when an (implicit) keep action is executed.
'';
example = literalExpression "./report-spam.sieve";
type = types.nullOr types.path;
};
};
}));
};
sieve = {
plugins = mkOption {
default = [];
example = [ "sieve_extprograms" ];
description = "Sieve plugins to load";
type = types.listOf types.str;
};
extensions = mkOption {
default = [];
description = "Sieve extensions for use in user scripts";
example = [ "notify" "imapflags" "vnd.dovecot.filter" ];
type = types.listOf types.str;
};
globalExtensions = mkOption {
default = [];
example = [ "vnd.dovecot.environment" ];
description = "Sieve extensions for use in global scripts";
type = types.listOf types.str;
};
pipeBins = mkOption {
default = [];
example = literalExpression ''
map lib.getExe [
(pkgs.writeShellScriptBin "learn-ham.sh" "exec ''${pkgs.rspamd}/bin/rspamc learn_ham")
(pkgs.writeShellScriptBin "learn-spam.sh" "exec ''${pkgs.rspamd}/bin/rspamc learn_spam")
]
'';
description = "Programs available for use by the vnd.dovecot.pipe extension";
type = types.listOf types.path;
};
};
};
@ -353,16 +484,25 @@ in
enable = true;
params.dovecot2 = {};
};
services.dovecot2.protocols =
services.dovecot2 = {
protocols =
optional cfg.enableImap "imap"
++ optional cfg.enablePop3 "pop3"
++ optional cfg.enableLmtp "lmtp";
services.dovecot2.mailPlugins = mkIf cfg.enableQuota {
mailPlugins = mkIf cfg.enableQuota {
globally.enable = [ "quota" ];
perProtocol.imap.enable = [ "imap_quota" ];
};
sieve.plugins =
optional (cfg.imapsieve.mailbox != []) "sieve_imapsieve"
++ optional (cfg.sieve.pipeBins != []) "sieve_extprograms";
sieve.globalExtensions = optional (cfg.sieve.pipeBins != []) "vnd.dovecot.pipe";
};
users.users = {
dovenull =
{
@ -415,7 +555,7 @@ in
# (should be 0) so that the compiled sieve script is newer than
# the source file and Dovecot won't try to compile it.
preStart = ''
rm -rf ${stateDir}/sieve
rm -rf ${stateDir}/sieve ${stateDir}/imapsieve
'' + optionalString (cfg.sieveScripts != {}) ''
mkdir -p ${stateDir}/sieve
${concatStringsSep "\n" (
@ -432,6 +572,29 @@ in
) cfg.sieveScripts
)}
chown -R '${cfg.mailUser}:${cfg.mailGroup}' '${stateDir}/sieve'
''
+ optionalString (cfg.imapsieve.mailbox != []) ''
mkdir -p ${stateDir}/imapsieve/{before,after}
${
concatMapStringsSep "\n"
(el:
optionalString (el.before != null) ''
cp -p ${el.before} ${stateDir}/imapsieve/before/${baseNameOf el.before}
${pkgs.dovecot_pigeonhole}/bin/sievec '${stateDir}/imapsieve/before/${baseNameOf el.before}'
''
+ optionalString (el.after != null) ''
cp -p ${el.after} ${stateDir}/imapsieve/after/${baseNameOf el.after}
${pkgs.dovecot_pigeonhole}/bin/sievec '${stateDir}/imapsieve/after/${baseNameOf el.after}'
''
)
cfg.imapsieve.mailbox
}
${
optionalString (cfg.mailUser != null && cfg.mailGroup != null)
"chown -R '${cfg.mailUser}:${cfg.mailGroup}' '${stateDir}/imapsieve'"
}
'';
};
@ -459,4 +622,5 @@ in
};
meta.maintainers = [ lib.maintainers.dblsaiko ];
}
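For illustration, a sketch combining the new options, reusing the examples given above (`./report-spam.sieve` is a hypothetical local script):

```nix
{ pkgs, lib, ... }:
{
  services.dovecot2 = {
    enable = true;
    # Run an administrator script whenever a message is copied or moved into Junk.
    imapsieve.mailbox = [
      {
        name = "Junk";
        causes = "COPY";
        before = ./report-spam.sieve;
      }
    ];
    # Make an rspamd learn script callable from Sieve via vnd.dovecot.pipe.
    sieve.pipeBins = [
      (lib.getExe (pkgs.writeShellScriptBin "learn-spam.sh" "exec ${pkgs.rspamd}/bin/rspamc learn_spam"))
    ];
  };
}
```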

View file

@ -22,11 +22,19 @@ let
})
(builtins.genList guixBuildUser numberOfUsers));
# A set of Guix user profiles to be linked at activation.
# A set of Guix user profiles to be linked at activation. All of these should
# be default profiles managed by the Guix CLI; the profiles are located in
# `${cfg.stateDir}/profiles/per-user/$USER/$PROFILE`.
guixUserProfiles = {
# The current Guix profile that is created through `guix pull`.
# The default Guix profile managed by `guix pull`. Note that this should be
# the profile with the highest precedence in `PATH`, so that users get their
# updated version of the `guix` CLI.
"current-guix" = "\${XDG_CONFIG_HOME}/guix/current";
# The default Guix home profile. This profile contains more than just exports,
# for example an activation script at `$GUIX_HOME_PROFILE/activate`.
"guix-home" = "$HOME/.guix-home/profile";
# The default Guix profile similar to $HOME/.nix-profile from Nix.
"guix-profile" = "$HOME/.guix-profile";
};
@ -256,20 +264,31 @@ in
# ephemeral setups where only certain part of the filesystem is
# persistent (e.g., "Erase my darlings"-type of setup).
system.userActivationScripts.guix-activate-user-profiles.text = let
linkProfileToPath = acc: profile: location: let
guixProfile = "${cfg.stateDir}/guix/profiles/per-user/\${USER}/${profile}";
in acc + ''
[ -d "${guixProfile}" ] && [ -L "${location}" ] || ln -sf "${guixProfile}" "${location}"
guixProfile = profile: "${cfg.stateDir}/guix/profiles/per-user/\${USER}/${profile}";
linkProfile = profile: location: let
userProfile = guixProfile profile;
in ''
[ -d "${userProfile}" ] && ln -sfn "${userProfile}" "${location}"
'';
linkProfileToPath = acc: profile: location: let
in acc + (linkProfile profile location);
activationScript = lib.foldlAttrs linkProfileToPath "" guixUserProfiles;
# This should contain export-only Guix user profiles. The rest of it is
# handled manually in the activation script.
guixUserProfiles' = lib.attrsets.removeAttrs guixUserProfiles [ "guix-home" ];
linkExportsScript = lib.foldlAttrs linkProfileToPath "" guixUserProfiles';
in ''
# Don't export this please! It is only expected to be used for this
# activation script and nothing else.
XDG_CONFIG_HOME=''${XDG_CONFIG_HOME:-$HOME/.config}
# Linking the usual Guix profiles into the home directory.
${activationScript}
${linkExportsScript}
# Activate all of the default Guix non-exports profiles manually.
${linkProfile "guix-home" "$HOME/.guix-home"}
[ -L "$HOME/.guix-home" ] && "$HOME/.guix-home/activate"
'';
# GUIX_LOCPATH is basically LOCPATH but for Guix libc which in turn used by

View file

@ -0,0 +1,111 @@
{ config, lib, pkgs, utils, ... }:
let
cfg = config.services.llama-cpp;
in {
options = {
services.llama-cpp = {
enable = lib.mkEnableOption "LLaMA C++ server";
package = lib.mkPackageOption pkgs "llama-cpp" { };
model = lib.mkOption {
type = lib.types.path;
example = "/models/mistral-instruct-7b/ggml-model-q4_0.gguf";
description = "Model path.";
};
extraFlags = lib.mkOption {
type = lib.types.listOf lib.types.str;
description = "Extra flags passed to llama-cpp-server.";
example = ["-c" "4096" "-ngl" "32" "--numa"];
default = [];
};
host = lib.mkOption {
type = lib.types.str;
default = "127.0.0.1";
example = "0.0.0.0";
description = "IP address the LLaMA C++ server listens on.";
};
port = lib.mkOption {
type = lib.types.port;
default = 8080;
description = "Listen port for LLaMA C++ server.";
};
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = "Open ports in the firewall for LLaMA C++ server.";
};
};
};
config = lib.mkIf cfg.enable {
systemd.services.llama-cpp = {
description = "LLaMA C++ server";
after = ["network.target"];
wantedBy = ["multi-user.target"];
serviceConfig = {
Type = "idle";
KillSignal = "SIGINT";
ExecStart = "${cfg.package}/bin/llama-cpp-server --log-disable --host ${cfg.host} --port ${builtins.toString cfg.port} -m ${cfg.model} ${utils.escapeSystemdExecArgs cfg.extraFlags}";
Restart = "on-failure";
RestartSec = 300;
# for GPU acceleration
PrivateDevices = false;
# hardening
DynamicUser = true;
CapabilityBoundingSet = "";
RestrictAddressFamilies = [
"AF_INET"
"AF_INET6"
"AF_UNIX"
];
NoNewPrivileges = true;
PrivateMounts = true;
PrivateTmp = true;
PrivateUsers = true;
ProtectClock = true;
ProtectControlGroups = true;
ProtectHome = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectSystem = "strict";
MemoryDenyWriteExecute = true;
LockPersonality = true;
RemoveIPC = true;
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallArchitectures = "native";
SystemCallFilter = [
"@system-service"
"~@privileged"
"~@resources"
];
SystemCallErrorNumber = "EPERM";
ProtectProc = "invisible";
ProtectHostname = true;
ProcSubset = "pid";
};
};
networking.firewall = lib.mkIf cfg.openFirewall {
allowedTCPPorts = [ cfg.port ];
};
};
meta.maintainers = with lib.maintainers; [ newam ];
}
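A minimal usage sketch, reusing the example values above (the model file must exist on the host):

```nix
{
  services.llama-cpp = {
    enable = true;
    model = "/models/mistral-instruct-7b/ggml-model-q4_0.gguf";
    host = "0.0.0.0";
    openFirewall = true;
    extraFlags = [ "-c" "4096" "-ngl" "32" ];
  };
}
```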

View file

@ -79,12 +79,6 @@ in
cache-file = mkDefault "/var/lib/ntfy-sh/cache-file.db";
};
systemd.tmpfiles.rules = [
"f ${cfg.settings.auth-file} 0600 ${cfg.user} ${cfg.group} - -"
"d ${cfg.settings.attachment-cache-dir} 0700 ${cfg.user} ${cfg.group} - -"
"f ${cfg.settings.cache-file} 0600 ${cfg.user} ${cfg.group} - -"
];
systemd.services.ntfy-sh = {
description = "Push notifications server";

View file

@ -10,7 +10,7 @@ let
defaultFont = "${pkgs.liberation_ttf}/share/fonts/truetype/LiberationSerif-Regular.ttf";
# Don't start a redis instance if the user sets a custom redis connection
enableRedis = !hasAttr "PAPERLESS_REDIS" cfg.extraConfig;
enableRedis = !(cfg.settings ? PAPERLESS_REDIS);
redisServer = config.services.redis.servers.paperless;
env = {
@ -24,9 +24,11 @@ let
PAPERLESS_TIME_ZONE = config.time.timeZone;
} // optionalAttrs enableRedis {
PAPERLESS_REDIS = "unix://${redisServer.unixSocket}";
} // (
lib.mapAttrs (_: toString) cfg.extraConfig
);
} // (lib.mapAttrs (_: s:
if (lib.isAttrs s || lib.isList s) then builtins.toJSON s
else if lib.isBool s then lib.boolToString s
else toString s
) cfg.settings);
manage = pkgs.writeShellScript "manage" ''
set -o allexport # Export the following env vars
@ -82,6 +84,7 @@ in
imports = [
(mkRenamedOptionModule [ "services" "paperless-ng" ] [ "services" "paperless" ])
(mkRenamedOptionModule [ "services" "paperless" "extraConfig" ] [ "services" "paperless" "settings" ])
];
options.services.paperless = {
@ -160,32 +163,30 @@ in
description = lib.mdDoc "Web interface port.";
};
# FIXME this should become an RFC42-style settings attr
extraConfig = mkOption {
type = types.attrs;
settings = mkOption {
type = lib.types.submodule {
freeformType = with lib.types; attrsOf (let
typeList = [ bool float int str path package ];
in oneOf (typeList ++ [ (listOf (oneOf typeList)) (attrsOf (oneOf typeList)) ]));
};
default = { };
description = lib.mdDoc ''
Extra paperless config options.
See [the documentation](https://docs.paperless-ngx.com/configuration/)
for available options.
See [the documentation](https://docs.paperless-ngx.com/configuration/) for available options.
Note that some options such as `PAPERLESS_CONSUMER_IGNORE_PATTERN` expect JSON values. Use `builtins.toJSON` to ensure proper quoting.
Note that some settings such as `PAPERLESS_CONSUMER_IGNORE_PATTERN` expect JSON values.
Settings declared as lists or attrsets will automatically be serialised into JSON strings for your convenience.
'';
example = literalExpression ''
{
example = {
PAPERLESS_OCR_LANGUAGE = "deu+eng";
PAPERLESS_DBHOST = "/run/postgresql";
PAPERLESS_CONSUMER_IGNORE_PATTERN = builtins.toJSON [ ".DS_STORE/*" "desktop.ini" ];
PAPERLESS_OCR_USER_ARGS = builtins.toJSON {
PAPERLESS_CONSUMER_IGNORE_PATTERN = [ ".DS_STORE/*" "desktop.ini" ];
PAPERLESS_OCR_USER_ARGS = {
optimize = 1;
pdfa_image_compression = "lossless";
};
};
'';
};
user = mkOption {

View file

@ -249,6 +249,7 @@ in
acmeDirectory = config.security.acme.certs."${cfg.domain}".directory;
in
{
PORTUNUS_SERVER_HTTP_SECURE = "true";
PORTUNUS_SLAPD_TLS_CA_CERTIFICATE = "/etc/ssl/certs/ca-certificates.crt";
PORTUNUS_SLAPD_TLS_CERTIFICATE = "${acmeDirectory}/cert.pem";
PORTUNUS_SLAPD_TLS_DOMAIN_NAME = cfg.domain;

View file

@ -53,7 +53,7 @@ in
enable = mkEnableOption (lib.mdDoc "Redmine");
package = mkPackageOption pkgs "redmine" {
example = "redmine.override { ruby = pkgs.ruby_2_7; }";
example = "redmine.override { ruby = pkgs.ruby_3_2; }";
};
user = mkOption {

View file

@ -206,7 +206,15 @@ in {
description = "Real time performance monitoring";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
path = (with pkgs; [ curl gawk iproute2 which procps bash ])
path = (with pkgs; [
curl
gawk
iproute2
which
procps
bash
util-linux # provides logger command; required for syslog health alarms
])
++ lib.optional cfg.python.enable (pkgs.python3.withPackages cfg.python.extraPackages)
++ lib.optional config.virtualisation.libvirtd.enable (config.virtualisation.libvirtd.package);
environment = {

View file

@ -0,0 +1,83 @@
{ pkgs, config, lib, ... }:
let
cfg = config.services.snmpd;
configFile = if cfg.configText != "" then
pkgs.writeText "snmpd.cfg" ''
${cfg.configText}
'' else null;
in {
options.services.snmpd = {
enable = lib.mkEnableOption "snmpd";
package = lib.mkPackageOption pkgs "net-snmp" {};
listenAddress = lib.mkOption {
type = lib.types.str;
default = "0.0.0.0";
description = lib.mdDoc ''
The address to listen on for SNMP and AgentX messages.
'';
example = "127.0.0.1";
};
port = lib.mkOption {
type = lib.types.port;
default = 161;
description = lib.mdDoc ''
The port to listen on for SNMP and AgentX messages.
'';
};
openFirewall = lib.mkOption {
type = lib.types.bool;
default = false;
description = lib.mdDoc ''
Open port in firewall for snmpd.
'';
};
configText = lib.mkOption {
type = lib.types.lines;
default = "";
description = lib.mdDoc ''
The contents of the snmpd.conf. If the {option}`configFile` option
is set, this value will be ignored.
Note that the contents of this option will be added to the Nix
store as world-readable plain text; {option}`configFile` can be used together
with a secret management tool to protect sensitive data.
'';
};
configFile = lib.mkOption {
type = lib.types.path;
default = configFile;
defaultText = lib.literalMD "The value of {option}`configText`.";
description = lib.mdDoc ''
Path to the snmpd.conf file. By default, if {option}`configText` is set,
a config file will be automatically generated.
'';
};
};
config = lib.mkIf cfg.enable {
systemd.services."snmpd" = {
description = "Simple Network Management Protocol (SNMP) daemon.";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "simple";
ExecStart = "${lib.getExe' cfg.package "snmpd"} -f -Lo -c ${cfg.configFile} ${cfg.listenAddress}:${toString cfg.port}";
};
};
networking.firewall.allowedUDPPorts = lib.mkIf cfg.openFirewall [
cfg.port
];
};
meta.maintainers = [ lib.maintainers.eliandoran ];
}
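A minimal sketch of the new module; the community string is illustrative only and, as the option description notes, ends up world-readable in the Nix store:

```nix
{
  services.snmpd = {
    enable = true;
    listenAddress = "127.0.0.1";
    configText = ''
      rocommunity public 127.0.0.1
    '';
  };
}
```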

View file

@ -3,6 +3,7 @@
let
cfg = config.services.eris-server;
stateDirectoryPath = "\${STATE_DIRECTORY}";
nullOrStr = with lib.types; nullOr str;
in {
options.services.eris-server = {
@ -26,7 +27,7 @@ in {
};
listenCoap = lib.mkOption {
type = lib.types.str;
type = nullOrStr;
default = ":5683";
example = "[::1]:5683";
description = ''
@ -39,8 +40,8 @@ in {
};
listenHttp = lib.mkOption {
type = lib.types.str;
default = "";
type = nullOrStr;
default = null;
example = "[::1]:8080";
description = "Server HTTP listen address. Do not listen by default.";
};
@ -58,8 +59,8 @@ in {
};
mountpoint = lib.mkOption {
type = lib.types.str;
default = "";
type = nullOrStr;
default = null;
example = "/eris";
description = ''
Mountpoint for FUSE namespace that exposes "urn:eris:" files.
@ -69,29 +70,40 @@ in {
};
config = lib.mkIf cfg.enable {
assertions = [{
assertion = lib.strings.versionAtLeast cfg.package.version "20231219";
message =
"Version of `config.services.eris-server.package` is incompatible with this module";
}];
systemd.services.eris-server = let
cmd =
"${cfg.package}/bin/eris-go server --coap '${cfg.listenCoap}' --http '${cfg.listenHttp}' ${
lib.optionalString cfg.decode "--decode "
}${
lib.optionalString (cfg.mountpoint != "")
''--mountpoint "${cfg.mountpoint}" ''
}${lib.strings.escapeShellArgs cfg.backends}";
cmd = "${cfg.package}/bin/eris-go server"
+ (lib.optionalString (cfg.listenCoap != null)
" --coap '${cfg.listenCoap}'")
+ (lib.optionalString (cfg.listenHttp != null)
" --http '${cfg.listenHttp}'")
+ (lib.optionalString cfg.decode " --decode")
+ (lib.optionalString (cfg.mountpoint != null)
" --mountpoint '${cfg.mountpoint}'");
in {
description = "ERIS block server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
script = lib.mkIf (cfg.mountpoint != "") ''
environment.ERIS_STORE_URL = toString cfg.backends;
script = lib.mkIf (cfg.mountpoint != null) ''
export PATH=${config.security.wrapperDir}:$PATH
${cmd}
'';
serviceConfig = let
umounter = lib.mkIf (cfg.mountpoint != "")
umounter = lib.mkIf (cfg.mountpoint != null)
"-${config.security.wrapperDir}/fusermount -uz ${cfg.mountpoint}";
in {
in if (cfg.mountpoint == null) then {
ExecStart = cmd;
} else
{
ExecStartPre = umounter;
ExecStart = lib.mkIf (cfg.mountpoint == "") cmd;
ExecStopPost = umounter;
} // {
Restart = "always";
RestartSec = 20;
AmbientCapabilities = "CAP_NET_BIND_SERVICE";

View file

@ -282,8 +282,9 @@ in
environment.systemPackages = [ cfg.package ];
environment.variables.IPFS_PATH = fakeKuboRepo;
# https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size
# https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes
boot.kernel.sysctl."net.core.rmem_max" = mkDefault 2500000;
boot.kernel.sysctl."net.core.wmem_max" = mkDefault 2500000;
programs.fuse = mkIf cfg.autoMount {
userAllowOther = true;

View file

@ -95,7 +95,6 @@ in
ipv6 = mkOption {
type = types.bool;
default = false;
defaultText = literalExpression "config.networking.enableIPv6";
description = lib.mdDoc "Whether to use IPv6.";
};
@ -274,17 +273,17 @@ in
system.nssModules = optional (cfg.nssmdns4 || cfg.nssmdns6) pkgs.nssmdns;
system.nssDatabases.hosts = let
mdnsMinimal = if (cfg.nssmdns4 && cfg.nssmdns6) then
"mdns_minimal"
mdns = if (cfg.nssmdns4 && cfg.nssmdns6) then
"mdns"
else if (!cfg.nssmdns4 && cfg.nssmdns6) then
"mdns6_minimal"
"mdns6"
else if (cfg.nssmdns4 && !cfg.nssmdns6) then
"mdns4_minimal"
"mdns4"
else
"";
in optionals (cfg.nssmdns4 || cfg.nssmdns6) (mkMerge [
(mkBefore [ "${mdnsMinimal} [NOTFOUND=return]" ]) # before resolve
(mkAfter [ "mdns" ]) # after dns
(mkBefore [ "${mdns}_minimal [NOTFOUND=return]" ]) # before resolve
(mkAfter [ "${mdns}" ]) # after dns
]);
environment.systemPackages = [ cfg.package ];

View file

@ -217,7 +217,7 @@ with lib;
inherit RuntimeDirectory;
inherit StateDirectory;
Type = "oneshot";
ExecStartPre = "!${pkgs.writeShellScript "ddclient-prestart" preStart}";
ExecStartPre = [ "!${pkgs.writeShellScript "ddclient-prestart" preStart}" ];
ExecStart = "${lib.getExe cfg.package} -file /run/${RuntimeDirectory}/ddclient.conf";
};
};

View file

@ -0,0 +1,68 @@
# Dnsmasq {#module-services-networking-dnsmasq}
Dnsmasq is an integrated DNS, DHCP and TFTP server for small networks.
## Configuration {#module-services-networking-dnsmasq-configuration}
### An authoritative DHCP and DNS server on a home network {#module-services-networking-dnsmasq-configuration-home}
On a home network, you can use Dnsmasq as a DHCP and DNS server. New devices on
your network will be configured by Dnsmasq, and instructed to use it as the DNS
server by default. This allows you to rely on your own server to perform DNS
queries and caching, with DNSSEC enabled.
The following example assumes that
- you have disabled your router's integrated DHCP server, if it has one
- your router's address is set in [](#opt-networking.defaultGateway.address)
- your system's Ethernet interface is `eth0`
- you have configured the address(es) to forward DNS queries in [](#opt-networking.nameservers)
```nix
{
services.dnsmasq = {
enable = true;
settings = {
interface = "eth0";
bind-interfaces = true; # Only bind to the specified interface
dhcp-authoritative = true; # Should be set when dnsmasq is definitely the only DHCP server on a network
server = config.networking.nameservers; # Upstream dns servers to which requests should be forwarded
dhcp-host = [
# Give the current system a fixed address of 192.168.0.254
"dc:a6:32:0b:ea:b9,192.168.0.254,${config.networking.hostName},infinite"
];
dhcp-option = [
# Address of the gateway, i.e. your router
"option:router,${config.networking.defaultGateway.address}"
];
dhcp-range = [
# Range of IPv4 addresses to give out
# <range start>,<range end>,<lease time>
"192.168.0.10,192.168.0.253,24h"
# Enable stateless IPv6 allocation
"::f,::ff,constructor:eth0,ra-stateless"
];
dhcp-rapid-commit = true; # Faster DHCP negotiation for IPv6
local-service = true; # Accept DNS queries only from hosts whose address is on a local subnet
log-queries = true; # Log results of all DNS queries
bogus-priv = true; # Don't forward requests for the local address ranges (192.168.x.x etc) to upstream nameservers
domain-needed = true; # Don't forward requests without dots or domain parts to upstream nameservers
dnssec = true; # Enable DNSSEC
# DNSSEC trust anchor. Source: https://data.iana.org/root-anchors/root-anchors.xml
trust-anchor = ".,20326,8,2,E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D";
};
};
}
```
## References {#module-services-networking-dnsmasq-references}
- Upstream website: <https://dnsmasq.org>
- Manpage: <https://dnsmasq.org/docs/dnsmasq-man.html>
- FAQ: <https://dnsmasq.org/docs/FAQ>

View file

@ -181,4 +181,6 @@ in
restartTriggers = [ config.environment.etc.hosts.source ];
};
};
meta.doc = ./dnsmasq.md;
}

View file

@ -13,8 +13,17 @@ let
listening_ip=${range}
'') cfg.internalIPs}
${lib.optionalString (firewall == "nftables") ''
upnp_table_name=miniupnpd
upnp_nat_table_name=miniupnpd
''}
${cfg.appendConfig}
'';
firewall = if config.networking.nftables.enable then "nftables" else "iptables";
miniupnpd = pkgs.miniupnpd.override { inherit firewall; };
firewallScripts = lib.optionals (firewall == "iptables")
([ "iptables"] ++ lib.optional (config.networking.enableIPv6) "ip6tables");
in
{
options = {
@ -57,20 +66,50 @@ in
};
config = mkIf cfg.enable {
networking.firewall.extraCommands = ''
${pkgs.bash}/bin/bash -x ${pkgs.miniupnpd}/etc/miniupnpd/iptables_init.sh -i ${cfg.externalInterface}
'';
networking.firewall.extraCommands = lib.mkIf (firewallScripts != []) (builtins.concatStringsSep "\n" (map (fw: ''
EXTIF=${cfg.externalInterface} ${pkgs.bash}/bin/bash -x ${miniupnpd}/etc/miniupnpd/${fw}_init.sh
'') firewallScripts));
networking.firewall.extraStopCommands = ''
${pkgs.bash}/bin/bash -x ${pkgs.miniupnpd}/etc/miniupnpd/iptables_removeall.sh -i ${cfg.externalInterface}
networking.firewall.extraStopCommands = lib.mkIf (firewallScripts != []) (builtins.concatStringsSep "\n" (map (fw: ''
EXTIF=${cfg.externalInterface} ${pkgs.bash}/bin/bash -x ${miniupnpd}/etc/miniupnpd/${fw}_removeall.sh
'') firewallScripts));
networking.nftables = lib.mkIf (firewall == "nftables") {
# see nft_init in ${miniupnpd-nftables}/etc/miniupnpd
tables.miniupnpd = {
family = "inet";
# The following is omitted because the firewall is expected to be responsible for it.
#
# chain forward {
# type filter hook forward priority filter; policy drop;
# jump miniupnpd
# }
#
# Otherwise, it quickly gets ugly with (potentially) two forward chains with "policy drop".
# This means the chain "miniupnpd" never actually gets triggered and is simply there to satisfy
# miniupnpd. If you're doing it yourself (without networking.firewall), the easiest way to get
# it to work is adding a rule "ct status dnat accept" - this is what networking.firewall does.
# If you don't want to simply accept forwarding for all "ct status dnat" packets, override
# upnp_table_name with whatever your table is, create a chain "miniupnpd" in your table and
# jump into it from your forward chain.
content = ''
chain miniupnpd {}
chain prerouting_miniupnpd {
type nat hook prerouting priority dstnat; policy accept;
}
chain postrouting_miniupnpd {
type nat hook postrouting priority srcnat; policy accept;
}
'';
};
};
systemd.services.miniupnpd = {
description = "MiniUPnP daemon";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${pkgs.miniupnpd}/bin/miniupnpd -f ${configFile}";
ExecStart = "${miniupnpd}/bin/miniupnpd -f ${configFile}";
PIDFile = "/run/miniupnpd.pid";
Type = "forking";
};
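As the comment in the nftables table above explains, a configuration that manages its own forward chain (without `networking.firewall`) needs to accept DNAT-ed traffic itself. A hedged sketch (the table name `filter` is hypothetical):

```nix
{
  networking.nftables.tables.filter = {
    family = "inet";
    content = ''
      chain forward {
        type filter hook forward priority filter; policy drop;
        # Accept forwarding for connections miniupnpd has already DNAT-ed,
        # mirroring what networking.firewall does.
        ct status dnat accept
      }
    '';
  };
}
```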

View file

@ -674,7 +674,11 @@ in
(lport: "sshd -G -T -C lport=${toString lport} -f ${sshconf} > /dev/null")
cfg.ports}
${concatMapStringsSep "\n"
(la: "sshd -G -T -C ${escapeShellArg "laddr=${la.addr},lport=${toString la.port}"} -f ${sshconf} > /dev/null")
(la:
concatMapStringsSep "\n"
(port: "sshd -G -T -C ${escapeShellArg "laddr=${la.addr},lport=${toString port}"} -f ${sshconf} > /dev/null")
(if la.port != null then [ la.port ] else cfg.ports)
)
cfg.listenAddresses}
touch $out
'')

View file

@ -4,9 +4,10 @@ with lib;
let
inherit (pkgs) cups cups-pk-helper cups-filters xdg-utils;
inherit (pkgs) cups-pk-helper cups-filters xdg-utils;
cfg = config.services.printing;
cups = cfg.package;
avahiEnabled = config.services.avahi.enable;
polkitEnabled = config.security.polkit.enable;
@ -140,6 +141,8 @@ in
'';
};
package = lib.mkPackageOption pkgs "cups" {};
stateless = mkOption {
type = types.bool;
default = false;

View file

@ -0,0 +1,323 @@
{
config,
lib,
pkgs,
...
}:
with lib; let
cfg = config.services.bitwarden-directory-connector-cli;
in {
options.services.bitwarden-directory-connector-cli = {
enable = mkEnableOption "Bitwarden Directory Connector";
package = mkPackageOption pkgs "bitwarden-directory-connector-cli" {};
domain = mkOption {
type = types.str;
description = lib.mdDoc "The domain the Bitwarden/Vaultwarden is accessible on.";
example = "https://vaultwarden.example.com";
};
user = mkOption {
type = types.str;
description = lib.mdDoc "User to run the program.";
default = "bwdc";
};
interval = mkOption {
type = types.str;
default = "*:0,15,30,45";
description = lib.mdDoc "The interval when to run the connector. This uses systemd's OnCalendar syntax.";
};
ldap = mkOption {
description = lib.mdDoc ''
Options to configure the LDAP connection.
If you used the desktop application to test the configuration, you can find the settings by searching for `ldap` in `~/.config/Bitwarden\ Directory\ Connector/data.json`.
'';
default = {};
type = types.submodule ({
config,
options,
...
}: {
freeformType = types.attrsOf (pkgs.formats.json {}).type;
config.finalJSON = builtins.toJSON (removeAttrs config (filter (x: x == "finalJSON" || ! options.${x}.isDefined or false) (attrNames options)));
options = {
finalJSON = mkOption {
type = (pkgs.formats.json {}).type;
internal = true;
readOnly = true;
visible = false;
};
ssl = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Whether to use TLS.";
};
startTls = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Whether to use STARTTLS.";
};
hostname = mkOption {
type = types.str;
description = lib.mdDoc "The host the LDAP is accessible on.";
example = "ldap.example.com";
};
port = mkOption {
type = types.port;
default = 389;
description = lib.mdDoc "Port LDAP is accessible on.";
};
ad = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Whether the LDAP Server is an Active Directory.";
};
pagedSearch = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Whether the LDAP server paginates search results.";
};
rootPath = mkOption {
type = types.str;
description = lib.mdDoc "Root path for LDAP.";
example = "dc=example,dc=com";
};
username = mkOption {
type = types.str;
description = lib.mdDoc "The user to authenticate as.";
example = "cn=admin,dc=example,dc=com";
};
};
});
};
sync = mkOption {
description = lib.mdDoc ''
Options to configure what gets synced.
If you used the desktop application to test the configuration, you can find the settings by searching for `sync` in `~/.config/Bitwarden\ Directory\ Connector/data.json`.
'';
default = {};
type = types.submodule ({
config,
options,
...
}: {
freeformType = types.attrsOf (pkgs.formats.json {}).type;
config.finalJSON = builtins.toJSON (removeAttrs config (filter (x: x == "finalJSON" || ! options.${x}.isDefined or false) (attrNames options)));
options = {
finalJSON = mkOption {
type = (pkgs.formats.json {}).type;
internal = true;
readOnly = true;
visible = false;
};
removeDisabled = mkOption {
type = types.bool;
default = true;
description = lib.mdDoc "Remove users from bitwarden groups if no longer in the ldap group.";
};
overwriteExisting = mkOption {
type = types.bool;
default = false;
description =
lib.mdDoc "Remove and re-add users/groups, See https://bitwarden.com/help/user-group-filters/#overwriting-syncs for more details.";
};
largeImport = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Enable if you are syncing more than 2000 users/groups.";
};
memberAttribute = mkOption {
type = types.str;
description = lib.mdDoc "Attribute that lists members in a LDAP group.";
example = "uniqueMember";
};
creationDateAttribute = mkOption {
type = types.str;
description = lib.mdDoc "Attribute that lists a user's creation date.";
example = "whenCreated";
};
useEmailPrefixSuffix = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "If a user has no email address, combine a username prefix with a suffix value to form an email.";
};
emailPrefixAttribute = mkOption {
type = types.str;
description = lib.mdDoc "The attribute that contains the users username.";
example = "accountName";
};
emailSuffix = mkOption {
type = types.str;
description = lib.mdDoc "Suffix for the email, normally @example.com.";
example = "@example.com";
};
users = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Sync users.";
};
userPath = mkOption {
type = types.str;
description = lib.mdDoc "User directory, relative to root.";
default = "ou=users";
};
userObjectClass = mkOption {
type = types.str;
description = lib.mdDoc "Class that users must have.";
default = "inetOrgPerson";
};
userEmailAttribute = mkOption {
type = types.str;
description = lib.mdDoc "Attribute for a users email.";
default = "mail";
};
userFilter = mkOption {
type = types.str;
description = lib.mdDoc "LDAP filter for users.";
example = "(memberOf=cn=sales,ou=groups,dc=example,dc=com)";
default = "";
};
groups = mkOption {
type = types.bool;
default = false;
description = lib.mdDoc "Whether to sync ldap groups into BitWarden.";
};
groupPath = mkOption {
type = types.str;
description = lib.mdDoc "Group directory, relative to root.";
default = "ou=groups";
};
groupObjectClass = mkOption {
type = types.str;
description = lib.mdDoc "A class that groups will have.";
default = "groupOfNames";
};
groupNameAttribute = mkOption {
type = types.str;
description = lib.mdDoc "Attribute for a name of group.";
default = "cn";
};
groupFilter = mkOption {
type = types.str;
description = lib.mdDoc "LDAP filter for groups.";
example = "(cn=sales)";
default = "";
};
};
});
};
secrets = {
ldap = mkOption {
type = types.str;
description = "Path to file that contains LDAP password for user in {option}`ldap.username";
};
bitwarden = {
client_path_id = mkOption {
type = types.str;
description = "Path to file that contains Client ID.";
};
client_path_secret = mkOption {
type = types.str;
description = "Path to file that contains Client Secret.";
};
};
};
};
config = mkIf cfg.enable {
users.groups."${cfg.user}" = {};
users.users."${cfg.user}" = {
isSystemUser = true;
group = cfg.user;
};
systemd = {
timers.bitwarden-directory-connector-cli = {
description = "Sync timer for Bitwarden Directory Connector";
wantedBy = ["timers.target"];
after = ["network-online.target"];
timerConfig = {
OnCalendar = cfg.interval;
Unit = "bitwarden-directory-connector-cli.service";
Persistent = true;
};
};
services.bitwarden-directory-connector-cli = {
description = "Main process for Bitwarden Directory Connector";
path = [pkgs.jq];
environment = {
BITWARDENCLI_CONNECTOR_APPDATA_DIR = "/tmp";
BITWARDENCLI_CONNECTOR_PLAINTEXT_SECRETS = "true";
};
serviceConfig = {
Type = "oneshot";
User = "${cfg.user}";
PrivateTmp = true;
preStart = ''
set -eo pipefail
# create the config file
${lib.getExe cfg.package} data-file
touch /tmp/data.json.tmp
chmod 600 /tmp/data.json{,.tmp}
${lib.getExe cfg.package} config server ${cfg.domain}
# now login to set credentials
export BW_CLIENTID="$(< ${escapeShellArg cfg.secrets.bitwarden.client_path_id})"
export BW_CLIENTSECRET="$(< ${escapeShellArg cfg.secrets.bitwarden.client_path_secret})"
${lib.getExe cfg.package} login
jq '.authenticatedAccounts[0] as $account
| .[$account].directoryConfigurations.ldap |= $ldap_data
| .[$account].directorySettings.organizationId |= $orgID
| .[$account].directorySettings.sync |= $sync_data' \
--argjson ldap_data ${escapeShellArg cfg.ldap.finalJSON} \
--arg orgID "''${BW_CLIENTID//organization.}" \
--argjson sync_data ${escapeShellArg cfg.sync.finalJSON} \
/tmp/data.json \
> /tmp/data.json.tmp
mv -f /tmp/data.json.tmp /tmp/data.json
# final config
${lib.getExe cfg.package} config directory 0
${lib.getExe cfg.package} config ldap.password --secretfile ${cfg.secrets.ldap}
'';
ExecStart = "${lib.getExe cfg.package} sync";
};
};
};
};
meta.maintainers = with maintainers; [Silver-Golden];
}
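For illustration, a sketch wiring the module together; the secret paths are hypothetical and the LDAP/sync values reuse the option examples and defaults above:

```nix
{
  services.bitwarden-directory-connector-cli = {
    enable = true;
    domain = "https://vaultwarden.example.com";
    ldap = {
      hostname = "ldap.example.com";
      rootPath = "dc=example,dc=com";
      username = "cn=admin,dc=example,dc=com";
    };
    sync = {
      users = true;
      memberAttribute = "uniqueMember";
      creationDateAttribute = "whenCreated";
    };
    secrets = {
      # Hypothetical paths, e.g. provisioned by a secret management tool.
      ldap = "/run/secrets/bwdc-ldap-password";
      bitwarden = {
        client_path_id = "/run/secrets/bwdc-client-id";
        client_path_secret = "/run/secrets/bwdc-client-secret";
      };
    };
  };
}
```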

View file

@ -1,8 +1,8 @@
#!/usr/bin/env bash
# Based on: https://github.com/dani-garcia/vaultwarden/wiki/Backing-up-your-vault
if ! mkdir -p "$BACKUP_FOLDER"; then
echo "Could not create backup folder '$BACKUP_FOLDER'" >&2
if [ ! -d "$BACKUP_FOLDER" ]; then
echo "Backup folder '$BACKUP_FOLDER' does not exist" >&2
exit 1
fi

View file

@ -55,6 +55,7 @@ in {
description = lib.mdDoc ''
The directory under which vaultwarden will backup its persistent data.
'';
example = "/var/backup/vaultwarden";
};
config = mkOption {
@ -230,6 +231,13 @@ in {
};
wantedBy = [ "multi-user.target" ];
};
systemd.tmpfiles.settings = mkIf (cfg.backupDir != null) {
"10-vaultwarden".${cfg.backupDir}.d = {
inherit user group;
mode = "0770";
};
};
};
# uses attributes of the linked package

View file

@ -3,7 +3,7 @@
let
inherit (lib) mkOption mkIf types length attrNames;
cfg = config.services.kerberos_server;
kerberos = config.krb5.kerberos;
kerberos = config.security.krb5.package;
aclEntry = {
options = {

View file

@ -4,7 +4,7 @@ let
inherit (lib) mkIf concatStringsSep concatMapStrings toList mapAttrs
mapAttrsToList;
cfg = config.services.kerberos_server;
kerberos = config.krb5.kerberos;
kerberos = config.security.krb5.package;
stateDir = "/var/heimdal";
aclFiles = mapAttrs
(name: {acl, ...}: pkgs.writeText "${name}.acl" (concatMapStrings ((

View file

@ -4,7 +4,7 @@ let
inherit (lib) mkIf concatStrings concatStringsSep concatMapStrings toList
mapAttrs mapAttrsToList;
cfg = config.services.kerberos_server;
kerberos = config.krb5.kerberos;
kerberos = config.security.krb5.package;
stateDir = "/var/lib/krb5kdc";
PIDFile = "/run/kdc.pid";
aclMap = {

View file

@ -27,7 +27,7 @@ in
config = lib.mkIf cfg.enable {
system.requiredKernelConfig = with config.lib.kernelConfig; [
(isModule "ZRAM")
(isEnabled "ZRAM")
];
systemd.packages = [ cfg.package ];

View file

@ -294,7 +294,7 @@ in
requires = optional apparmor.enable "apparmor.service";
wantedBy = [ "multi-user.target" ];
environment.CURL_CA_BUNDLE = etc."ssl/certs/ca-certificates.crt".source;
environment.TRANSMISSION_WEB_HOME = lib.optionalString (cfg.webHome != null) cfg.webHome;
environment.TRANSMISSION_WEB_HOME = lib.mkIf (cfg.webHome != null) cfg.webHome;
serviceConfig = {
# Use "+" because credentialsFile may not be accessible to User= or Group=.

View file

@ -122,62 +122,8 @@ let
};
};
# The current implementations of `doRename`, `mkRenamedOptionModule` do not provide the full options path when used with submodules.
# They would only show `settings.useacl' instead of `services.dokuwiki.sites."site1.local".settings.useacl'
# The partial re-implementation of these functions is done to help users in debugging by showing the full path.
mkRenamed = from: to: { config, options, name, ... }: let
pathPrefix = [ "services" "dokuwiki" "sites" name ];
fromPath = pathPrefix ++ from;
fromOpt = getAttrFromPath from options;
toOp = getAttrsFromPath to config;
toPath = pathPrefix ++ to;
in {
options = setAttrByPath from (mkOption {
visible = false;
description = lib.mdDoc "Alias of {option}${showOption toPath}";
apply = x: builtins.trace "Obsolete option `${showOption fromPath}' is used. It was renamed to ${showOption toPath}" toOp;
});
config = mkMerge [
{
warnings = optional fromOpt.isDefined
"The option `${showOption fromPath}' defined in ${showFiles fromOpt.files} has been renamed to `${showOption toPath}'.";
}
(lib.modules.mkAliasAndWrapDefsWithPriority (setAttrByPath to) fromOpt)
];
};
siteOpts = { options, config, lib, name, ... }:
{
imports = [
(mkRenamed [ "aclUse" ] [ "settings" "useacl" ])
(mkRenamed [ "superUser" ] [ "settings" "superuser" ])
(mkRenamed [ "disableActions" ] [ "settings" "disableactions" ])
({ config, options, ... }: let
showPath = suffix: lib.options.showOption ([ "services" "dokuwiki" "sites" name ] ++ suffix);
replaceExtraConfig = "Please use `${showPath ["settings"]}' to pass structured settings instead.";
ecOpt = options.extraConfig;
ecPath = showPath [ "extraConfig" ];
in {
options.extraConfig = mkOption {
visible = false;
apply = x: throw "The option ${ecPath} can no longer be used since it's been removed.\n${replaceExtraConfig}";
};
config.assertions = [
{
assertion = !ecOpt.isDefined;
message = "The option definition `${ecPath}' in ${showFiles ecOpt.files} no longer has any effect; please remove it.\n${replaceExtraConfig}";
}
{
assertion = config.mergedConfig.useacl -> (config.acl != null || config.aclFile != null);
message = "Either ${showPath [ "acl" ]} or ${showPath [ "aclFile" ]} is mandatory if ${showPath [ "settings" "useacl" ]} is true";
}
{
assertion = config.usersFile != null -> config.mergedConfig.useacl != false;
message = "${showPath [ "settings" "useacl" ]} is required when ${showPath [ "usersFile" ]} is set (Currently defined as `${config.usersFile}' in ${showFiles options.usersFile.files}).";
}
];
})
];
options = {
enable = mkEnableOption (lib.mdDoc "DokuWiki web application");
@ -392,21 +338,6 @@ let
'';
};
# Required for the mkRenamedOptionModule
# TODO: Remove me once https://github.com/NixOS/nixpkgs/issues/96006 is fixed
# or we don't have any more notes about the removal of extraConfig, ...
warnings = mkOption {
type = types.listOf types.unspecified;
default = [ ];
visible = false;
internal = true;
};
assertions = mkOption {
type = types.listOf types.unspecified;
default = [ ];
visible = false;
internal = true;
};
};
};
in
@ -440,10 +371,6 @@ in
# implementation
config = mkIf (eachSite != {}) (mkMerge [{
warnings = flatten (mapAttrsToList (_: cfg: cfg.warnings) eachSite);
assertions = flatten (mapAttrsToList (_: cfg: cfg.assertions) eachSite);
services.phpfpm.pools = mapAttrs' (hostName: cfg: (
nameValuePair "dokuwiki-${hostName}" {
inherit user;

View file

@ -155,8 +155,9 @@ let
to work, the username used to connect to PostgreSQL must match the database name, that is
services.invidious.settings.db.user must match services.invidious.settings.db.dbname.
This is the default since NixOS 24.05. For older systems, it is normally safe to manually set
services.invidious.database.user to "invidious" as the new user will be created with permissions
for the existing database. `REASSIGN OWNED BY kemal TO invidious;` may also be needed.
the user to "invidious" as the new user will be created with permissions
for the existing database. `REASSIGN OWNED BY kemal TO invidious;` may also be needed; it can be
run as `sudo -u postgres env psql --user=postgres --dbname=invidious -c 'reassign OWNED BY kemal to invidious;'`.
'';
}
];
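
A hedged sketch of the migration hinted at in the warning above, for an older system whose database is still owned by `kemal` (the option path is taken from the removed wording and should be treated as an assumption):

```
{
  # Keep connecting as the dedicated "invidious" user; the REASSIGN OWNED BY
  # command quoted in the warning is then run once, outside of Nix.
  services.invidious.database.user = "invidious";
}
```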

View file

@ -9,6 +9,7 @@ let
jsonFormat = pkgs.formats.json {};
defaultPHPSettings = {
output_buffering = "0";
short_open_tag = "Off";
expose_php = "Off";
error_reporting = "E_ALL & ~E_DEPRECATED & ~E_STRICT";
@ -131,6 +132,9 @@ in {
(mkRemovedOptionModule [ "services" "nextcloud" "disableImagemagick" ] ''
Use services.nextcloud.enableImagemagick instead.
'')
(mkRemovedOptionModule [ "services" "nextcloud" "config" "dbport" ] ''
Add port to services.nextcloud.config.dbhost instead.
'')
(mkRenamedOptionModule
[ "services" "nextcloud" "logLevel" ] [ "services" "nextcloud" "extraOptions" "loglevel" ])
(mkRenamedOptionModule
@ -142,7 +146,7 @@ in {
(mkRenamedOptionModule
[ "services" "nextcloud" "skeletonDirectory" ] [ "services" "nextcloud" "extraOptions" "skeletondirectory" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "config" "globalProfiles" ] [ "services" "nextcloud" "extraOptions" "profile.enabled" ])
[ "services" "nextcloud" "globalProfiles" ] [ "services" "nextcloud" "extraOptions" "profile.enabled" ])
(mkRenamedOptionModule
[ "services" "nextcloud" "config" "extraTrustedDomains" ] [ "services" "nextcloud" "extraOptions" "trusted_domains" ])
(mkRenamedOptionModule
@ -362,18 +366,14 @@ in {
else if mysqlLocal then "localhost:/run/mysqld/mysqld.sock"
else "localhost";
defaultText = "localhost";
example = "localhost:5000";
description = lib.mdDoc ''
Database host or socket path.
Database host (+port) or socket path.
If [](#opt-services.nextcloud.database.createLocally) is true and
[](#opt-services.nextcloud.config.dbtype) is either `pgsql` or `mysql`,
defaults to the correct Unix socket instead.
'';
};
dbport = mkOption {
type = with types; nullOr (either int str);
default = null;
description = lib.mdDoc "Database port.";
};
dbtableprefix = mkOption {
type = types.nullOr types.str;
default = null;
@ -885,7 +885,6 @@ in {
${optionalString cfg.caching.apcu "'memcache.local' => '\\OC\\Memcache\\APCu',"}
${optionalString (c.dbname != null) "'dbname' => '${c.dbname}',"}
${optionalString (c.dbhost != null) "'dbhost' => '${c.dbhost}',"}
${optionalString (c.dbport != null) "'dbport' => '${toString c.dbport}',"}
${optionalString (c.dbuser != null) "'dbuser' => '${c.dbuser}',"}
${optionalString (c.dbtableprefix != null) "'dbtableprefix' => '${toString c.dbtableprefix}',"}
${optionalString (c.dbpassFile != null) ''
@ -930,7 +929,6 @@ in {
# will be omitted.
${if c.dbname != null then "--database-name" else null} = ''"${c.dbname}"'';
${if c.dbhost != null then "--database-host" else null} = ''"${c.dbhost}"'';
${if c.dbport != null then "--database-port" else null} = ''"${toString c.dbport}"'';
${if c.dbuser != null then "--database-user" else null} = ''"${c.dbuser}"'';
"--database-pass" = "\"\$${dbpass.arg}\"";
"--admin-user" = ''"${c.adminuser}"'';

View file

@ -334,8 +334,8 @@ let
+ optionalString vhost.default "default_server "
+ optionalString vhost.reuseport "reuseport "
+ optionalString (extraParameters != []) (concatStringsSep " "
(let inCompatibleParameters = [ "ssl" "proxy_protocol" "http2" ];
isCompatibleParameter = param: !(any (p: p == param) inCompatibleParameters);
(let inCompatibleParameters = [ "accept_filter" "backlog" "deferred" "fastopen" "http2" "proxy_protocol" "so_keepalive" "ssl" ];
isCompatibleParameter = param: !(any (p: lib.hasPrefix p param) inCompatibleParameters);
in filter isCompatibleParameter extraParameters))
+ ";"))
+ "
@ -408,12 +408,6 @@ let
ssl_conf_command Options KTLS;
''}
${optionalString (hasSSL && vhost.quic && vhost.http3)
# Advertise that HTTP/3 is available
''
add_header Alt-Svc 'h3=":$server_port"; ma=86400';
''}
${mkBasicAuth vhostName vhost}
${optionalString (vhost.root != null) "root ${vhost.root};"}
@ -475,7 +469,7 @@ let
mkCertOwnershipAssertion = import ../../../security/acme/mk-cert-ownership-assertion.nix;
oldHTTP2 = versionOlder cfg.package.version "1.25.1";
oldHTTP2 = (versionOlder cfg.package.version "1.25.1" && !(cfg.package.pname == "angie" || cfg.package.pname == "angieQuic"));
in
{

View file

@ -235,9 +235,9 @@ with lib;
which can be achieved by setting `services.nginx.package = pkgs.nginxQuic;`
and activate the QUIC transport protocol
`services.nginx.virtualHosts.<name>.quic = true;`.
Note that HTTP/3 support is experimental and
*not* yet recommended for production.
Note that HTTP/3 support is experimental and *not* yet recommended for production.
Read more at https://quic.nginx.org/
HTTP/3 availability must be manually advertised, preferably in each location block.
'';
};
@ -250,8 +250,7 @@ with lib;
which can be achieved by setting `services.nginx.package = pkgs.nginxQuic;`
and activate the QUIC transport protocol
`services.nginx.virtualHosts.<name>.quic = true;`.
Note that special application protocol support is experimental and
*not* yet recommended for production.
Note that special application protocol support is experimental and *not* yet recommended for production.
Read more at https://quic.nginx.org/
'';
};
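
Because the automatic `Alt-Svc` header was removed (see the deleted block above), HTTP/3 now has to be advertised explicitly. A hedged sketch of doing so per location, reusing the removed header line; the host name and location are placeholders:

```
{
  services.nginx.virtualHosts."example.com" = {
    quic = true;
    http3 = true;
    locations."/".extraConfig = ''
      add_header Alt-Svc 'h3=":$server_port"; ma=86400';
    '';
  };
}
```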

View file

@ -449,7 +449,6 @@ in
gnome-color-manager
gnome-control-center
gnome-shell-extensions
gnome-themes-extra
pkgs.gnome-tour # GNOME Shell detects the .desktop file on first log-in.
pkgs.gnome-user-docs
pkgs.orca

View file

@ -7,7 +7,7 @@ let
cfg = dmcfg.sddm;
xEnv = config.systemd.services.display-manager.environment;
sddm = pkgs.libsForQt5.sddm;
sddm = cfg.package;
iniFmt = pkgs.formats.ini { };
@ -108,6 +108,8 @@ in
'';
};
package = mkPackageOption pkgs [ "plasma5Packages" "sddm" ] {};
enableHidpi = mkOption {
type = types.bool;
default = true;

View file

@ -130,9 +130,9 @@ let cfg = config.services.xserver.libinput;
default = true;
description =
lib.mdDoc ''
Disables horizontal scrolling. When disabled, this driver will discard any horizontal scroll
events from libinput. Note that this does not disable horizontal scrolling, it merely
discards the horizontal axis from any scroll events.
Enables or disables horizontal scrolling. When disabled, this driver will discard any
horizontal scroll events from libinput. This does not disable horizontal scroll events
from libinput; it merely discards the horizontal axis from any scroll events.
'';
};

View file

@ -11,6 +11,7 @@
let
cfg = config.boot.bootspec;
children = lib.mapAttrs (childName: childConfig: childConfig.configuration.system.build.toplevel) config.specialisation;
hasAtLeastOneInitrdSecret = lib.length (lib.attrNames config.boot.initrd.secrets) > 0;
schemas = {
v1 = rec {
filename = "boot.json";
@ -27,6 +28,7 @@ let
label = "${config.system.nixos.distroName} ${config.system.nixos.codeName} ${config.system.nixos.label} (Linux ${config.boot.kernelPackages.kernel.modDirVersion})";
} // lib.optionalAttrs config.boot.initrd.enable {
initrd = "${config.system.build.initialRamdisk}/${config.system.boot.loader.initrdFile}";
} // lib.optionalAttrs hasAtLeastOneInitrdSecret {
initrdSecrets = "${config.system.build.initialRamdiskSecretAppender}/bin/append-initrd-secrets";
};
}));

View file

@ -20,13 +20,13 @@ from dataclasses import dataclass
class BootSpec:
init: str
initrd: str
initrdSecrets: str
kernel: str
kernelParams: List[str]
label: str
system: str
toplevel: str
specialisations: Dict[str, "BootSpec"]
initrdSecrets: str | None = None
@ -131,9 +131,8 @@ def write_entry(profile: str | None, generation: int, specialisation: str | None
specialisation=" (%s)" % specialisation if specialisation else "")
try:
if bootspec.initrdSecrets is not None:
subprocess.check_call([bootspec.initrdSecrets, "@efiSysMountPoint@%s" % (initrd)])
except FileNotFoundError:
pass
except subprocess.CalledProcessError:
if current:
print("failed to create initrd secrets!", file=sys.stderr)

View file

@ -396,8 +396,7 @@ in {
ManagerEnvironment=${lib.concatStringsSep " " (lib.mapAttrsToList (n: v: "${n}=${lib.escapeShellArg v}") cfg.managerEnvironment)}
'';
"/lib/modules".source = "${modulesClosure}/lib/modules";
"/lib/firmware".source = "${modulesClosure}/lib/firmware";
"/lib".source = "${modulesClosure}/lib";
"/etc/modules-load.d/nixos.conf".text = concatStringsSep "\n" config.boot.initrd.kernelModules;

View file

@ -0,0 +1,135 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.journald.gateway;
cliArgs = lib.cli.toGNUCommandLineShell { } {
# If either of these are null / false, they are not passed in the command-line
inherit (cfg) cert key trust system user merge;
};
in
{
meta.maintainers = [ lib.maintainers.raitobezarius ];
options.services.journald.gateway = {
enable = lib.mkEnableOption "the HTTP gateway to the journal";
port = lib.mkOption {
default = 19531;
type = lib.types.port;
description = ''
The port to listen on.
'';
};
cert = lib.mkOption {
default = null;
type = with lib.types; nullOr str;
description = lib.mdDoc ''
The path to a file or `AF_UNIX` stream socket to read the server
certificate from.
The certificate must be in PEM format. This option switches
`systemd-journal-gatewayd` into HTTPS mode and must be used together
with {option}`services.journald.gateway.key`.
'';
};
key = lib.mkOption {
default = null;
type = with lib.types; nullOr str;
description = lib.mdDoc ''
Specify the path to a file or `AF_UNIX` stream socket to read the
secret server key corresponding to the certificate specified with
{option}`services.journald.gateway.cert` from.
The key must be in PEM format.
This key should not be world-readable, and must be readable by the
`systemd-journal-gateway` user.
'';
};
trust = lib.mkOption {
default = null;
type = with lib.types; nullOr str;
description = lib.mdDoc ''
Specify the path to a file or `AF_UNIX` stream socket to read a CA
certificate from.
The certificate must be in PEM format.
Setting this option enforces client certificate checking.
'';
};
system = lib.mkOption {
default = true;
type = lib.types.bool;
description = lib.mdDoc ''
Serve entries from system services and the kernel.
This has the same meaning as `--system` for {manpage}`journalctl(1)`.
'';
};
user = lib.mkOption {
default = true;
type = lib.types.bool;
description = lib.mdDoc ''
Serve entries from services for the current user.
This has the same meaning as `--user` for {manpage}`journalctl(1)`.
'';
};
merge = lib.mkOption {
default = false;
type = lib.types.bool;
description = lib.mdDoc ''
Serve entries interleaved from all available journals, including other
machines.
This has the same meaning as the `--merge` option for
{manpage}`journalctl(1)`.
'';
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
# This prevents the weird case where disabling "system" and "user"
# actually enables both because the cli flags are not present.
assertion = cfg.system || cfg.user;
message = ''
systemd-journal-gatewayd must serve at least one of the "system" or
"user" journals.
'';
}
];
systemd.additionalUpstreamSystemUnits = [
"systemd-journal-gatewayd.socket"
"systemd-journal-gatewayd.service"
];
users.users.systemd-journal-gateway.uid = config.ids.uids.systemd-journal-gateway;
users.users.systemd-journal-gateway.group = "systemd-journal-gateway";
users.groups.systemd-journal-gateway.gid = config.ids.gids.systemd-journal-gateway;
systemd.services.systemd-journal-gatewayd.serviceConfig.ExecStart = [
# Clear the default command line
""
"${pkgs.systemd}/lib/systemd/systemd-journal-gatewayd ${cliArgs}"
];
systemd.sockets.systemd-journal-gatewayd = {
wantedBy = [ "sockets.target" ];
listenStreams = [
# Clear the default port
""
(toString cfg.port)
];
};
};
}
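
A minimal sketch of enabling the new gateway module, using the options declared above; the certificate paths are placeholders and only needed for HTTPS mode:

```
{
  services.journald.gateway = {
    enable = true;
    port = 19531;
    # Optional: switch to HTTPS by providing both a certificate and a key.
    # cert = "/var/lib/journal-gateway/cert.pem";
    # key = "/var/lib/journal-gateway/key.pem";
  };
}
```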

View file

@ -0,0 +1,163 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.journald.remote;
format = pkgs.formats.systemd;
cliArgs = lib.cli.toGNUCommandLineShell { } {
inherit (cfg) output;
# "-3" specifies the file descriptor from the .socket unit.
"listen-${cfg.listen}" = "-3";
};
in
{
meta.maintainers = [ lib.maintainers.raitobezarius ];
options.services.journald.remote = {
enable = lib.mkEnableOption "receiving systemd journals from the network";
listen = lib.mkOption {
default = "https";
type = lib.types.enum [ "https" "http" ];
description = lib.mdDoc ''
Which protocol to listen to.
'';
};
output = lib.mkOption {
default = "/var/log/journal/remote/";
type = lib.types.str;
description = lib.mdDoc ''
The location of the output journal.
If the output is a directory rather than a single journal file, journal
files will be created underneath it. They will be called
{file}`remote-hostname.journal`, where the `hostname` part is the
escaped hostname of the source endpoint of the connection, or the
numerical address if the hostname cannot be determined.
'';
};
port = lib.mkOption {
default = 19532;
type = lib.types.port;
description = ''
The port to listen on.
Note that this option is used only if
{option}`services.journald.remote.listen` is configured to be either
"https" or "http".
'';
};
settings = lib.mkOption {
default = { };
description = lib.mdDoc ''
Configuration in the journal-remote configuration file. See
{manpage}`journal-remote.conf(5)` for available options.
'';
type = lib.types.submodule {
freeformType = format.type;
options.Remote = {
Seal = lib.mkOption {
default = false;
example = true;
type = lib.types.bool;
description = ''
Periodically sign the data in the journal using Forward Secure
Sealing.
'';
};
SplitMode = lib.mkOption {
default = "host";
example = "none";
type = lib.types.enum [ "host" "none" ];
description = lib.mdDoc ''
With "host", a separate output file is used, based on the
hostname of the other endpoint of a connection. With "none", only
one output journal file is used.
'';
};
ServerKeyFile = lib.mkOption {
default = "/etc/ssl/private/journal-remote.pem";
type = lib.types.str;
description = lib.mdDoc ''
A path to an SSL secret key file in PEM format.
Note that due to security reasons, `systemd-journal-remote` will
refuse files from the world-readable `/nix/store`. This file
should be readable by the `systemd-journal-remote` user.
This option can be used with `listen = "https"`. If the path
refers to an `AF_UNIX` stream socket in the file system, a
connection is made to it and the key is read from it.
'';
};
ServerCertificateFile = lib.mkOption {
default = "/etc/ssl/certs/journal-remote.pem";
type = lib.types.str;
description = lib.mdDoc ''
A path to an SSL certificate file in PEM format.
This option can be used with `listen = "https"`. If the path
refers to an `AF_UNIX` stream socket in the file system, a
connection is made to it and the certificate is read from it.
'';
};
TrustedCertificateFile = lib.mkOption {
default = "/etc/ssl/ca/trusted.pem";
type = lib.types.str;
description = lib.mdDoc ''
A path to an SSL CA certificate file in PEM format, or `all`.
If `all` is set, client certificate checking is disabled.
This option can be used with `listen = "https"`. If the path
refers to an `AF_UNIX` stream socket in the file system, a
connection is made to it and the certificate is read from it.
'';
};
};
};
};
};
config = lib.mkIf cfg.enable {
systemd.additionalUpstreamSystemUnits = [
"systemd-journal-remote.service"
"systemd-journal-remote.socket"
];
systemd.services.systemd-journal-remote.serviceConfig.ExecStart = [
# Clear the default command line
""
"${pkgs.systemd}/lib/systemd/systemd-journal-remote ${cliArgs}"
];
systemd.sockets.systemd-journal-remote = {
wantedBy = [ "sockets.target" ];
listenStreams = [
# Clear the default port
""
(toString cfg.port)
];
};
# User and group used by systemd-journal-remote.service
users.groups.systemd-journal-remote = { };
users.users.systemd-journal-remote = {
isSystemUser = true;
group = "systemd-journal-remote";
};
environment.etc."systemd/journal-remote.conf".source =
format.generate "journal-remote.conf" cfg.settings;
};
}
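
A corresponding sketch for the receiving side, using the options declared above; the certificate paths shown are the module defaults and are listed only for illustration:

```
{
  services.journald.remote = {
    enable = true;
    listen = "https";
    output = "/var/log/journal/remote/";
    settings.Remote = {
      ServerKeyFile = "/etc/ssl/private/journal-remote.pem";
      ServerCertificateFile = "/etc/ssl/certs/journal-remote.pem";
      TrustedCertificateFile = "/etc/ssl/ca/trusted.pem";
    };
  };
}
```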

View file

@ -0,0 +1,111 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.journald.upload;
format = pkgs.formats.systemd;
in
{
meta.maintainers = [ lib.maintainers.raitobezarius ];
options.services.journald.upload = {
enable = lib.mkEnableOption "uploading the systemd journal to a remote server";
settings = lib.mkOption {
default = { };
description = lib.mdDoc ''
Configuration for journal-upload. See {manpage}`journal-upload.conf(5)`
for available options.
'';
type = lib.types.submodule {
freeformType = format.type;
options.Upload = {
URL = lib.mkOption {
type = lib.types.str;
example = "https://192.168.1.1";
description = ''
The URL to upload the journal entries to.
See the description of `--url=` option in
{manpage}`systemd-journal-upload(8)` for the description of
possible values.
'';
};
ServerKeyFile = lib.mkOption {
type = with lib.types; nullOr str;
example = lib.literalExpression "./server-key.pem";
# Since systemd-journal-upload uses a DynamicUser, permissions must
# be done using groups
description = ''
SSL key in PEM format.
Contrary to what the name suggests, this option configures the
client private key sent to the remote journal server.
This key should not be world-readable, and must be readable by
the `systemd-journal` group.
'';
default = null;
};
ServerCertificateFile = lib.mkOption {
type = with lib.types; nullOr str;
example = lib.literalExpression "./server-ca.pem";
description = ''
SSL CA certificate in PEM format.
Contrary to what the name suggests, this option configures the
client certificate sent to the remote journal server.
'';
default = null;
};
TrustedCertificateFile = lib.mkOption {
type = with lib.types; nullOr str;
example = lib.literalExpression "./ca";
description = ''
SSL CA certificate.
This certificate will be used to check the remote journal HTTPS
server certificate.
'';
default = null;
};
NetworkTimeoutSec = lib.mkOption {
type = with lib.types; nullOr str;
example = "1s";
description = ''
When network connectivity to the server is lost, this option
configures the time to wait for connectivity to be restored.
If the server is not reachable over the network for the
configured time, `systemd-journal-upload` exits. Takes a value in
seconds (or in other time units if suffixed with "ms", "min",
"h", etc). For details, see {manpage}`systemd.time(5)`.
'';
default = null;
};
};
};
};
};
config = lib.mkIf cfg.enable {
systemd.additionalUpstreamSystemUnits = [ "systemd-journal-upload.service" ];
systemd.services."systemd-journal-upload" = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Restart = "always";
# To prevent flooding the server in case the server is struggling
RestartSec = "3sec";
};
};
environment.etc."systemd/journal-upload.conf".source =
format.generate "journal-upload.conf" cfg.settings;
};
}
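
Finally, a sketch of the upload side pointing at such a receiver; the URL and certificate paths are placeholders:

```
{
  services.journald.upload = {
    enable = true;
    settings.Upload = {
      URL = "https://192.168.1.1:19532";
      # Client certificate and key presented to the remote journal server.
      ServerCertificateFile = "/run/secrets/journal-upload-cert.pem";
      ServerKeyFile = "/run/secrets/journal-upload-key.pem";
      TrustedCertificateFile = "/etc/ssl/ca/trusted.pem";
    };
  };
}
```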

View file

@ -5,6 +5,10 @@ with lib;
let
cfg = config.services.journald;
in {
imports = [
(mkRenamedOptionModule [ "services" "journald" "enableHttpGateway" ] [ "services" "journald" "gateway" "enable" ])
];
options = {
services.journald.console = mkOption {
default = "";
@ -71,14 +75,6 @@ in {
'';
};
services.journald.enableHttpGateway = mkOption {
default = false;
type = types.bool;
description = lib.mdDoc ''
Whether to enable the HTTP gateway to the journal.
'';
};
services.journald.forwardToSyslog = mkOption {
default = config.services.rsyslogd.enable || config.services.syslog-ng.enable;
defaultText = literalExpression "services.rsyslogd.enable || services.syslog-ng.enable";
@ -101,9 +97,6 @@ in {
] ++ (optional (!config.boot.isContainer) "systemd-journald-audit.socket") ++ [
"systemd-journald-dev-log.socket"
"syslog.socket"
] ++ optionals cfg.enableHttpGateway [
"systemd-journal-gatewayd.socket"
"systemd-journal-gatewayd.service"
];
environment.etc = {
@ -124,12 +117,6 @@ in {
};
users.groups.systemd-journal.gid = config.ids.gids.systemd-journal;
users.users.systemd-journal-gateway.uid = config.ids.uids.systemd-journal-gateway;
users.users.systemd-journal-gateway.group = "systemd-journal-gateway";
users.groups.systemd-journal-gateway.gid = config.ids.gids.systemd-journal-gateway;
systemd.sockets.systemd-journal-gatewayd.wantedBy =
optional cfg.enableHttpGateway "sockets.target";
systemd.services.systemd-journal-flush.restartIfChanged = false;
systemd.services.systemd-journald.restartTriggers = [ config.environment.etc."systemd/journald.conf".source ];

View file

@ -4,7 +4,7 @@
in {
imports = [
(lib.mkRemovedOptionModule [ "systemd" "oomd" "enableUserServices" ] "Use systemd.oomd.enableUserSlices instead.")
(lib.mkRenamedOptionModule [ "systemd" "oomd" "enableUserServices" ] [ "systemd" "oomd" "enableUserSlices" ])
];
options.systemd.oomd = {
@ -61,6 +61,7 @@ in {
};
systemd.user.units."slice" = lib.mkIf cfg.enableUserSlices {
text = ''
[Slice]
ManagedOOMMemoryPressure=kill
ManagedOOMMemoryPressureLimit=80%
'';

View file

@ -46,6 +46,13 @@ with lib;
wantedBy = [ "sysinit.target" ];
aliases = [ "dbus-org.freedesktop.timesync1.service" ];
restartTriggers = [ config.environment.etc."systemd/timesyncd.conf".source ];
# systemd-timesyncd disables DNSSEC validation in the nss-resolve module by setting SYSTEMD_NSS_RESOLVE_VALIDATE to 0 in the unit file.
# This is required in order to solve the chicken-and-egg problem when DNSSEC validation needs the correct time to work, but to set the
# correct time, we need to connect to an NTP server, which usually requires resolving its hostname.
# In order for nss-resolve to be able to read this environment variable we patch systemd-timesyncd to disable NSCD and use NSS modules directly.
# This means that systemd-timesyncd needs to have NSS modules path in LD_LIBRARY_PATH. When systemd-resolved is disabled we still need to set
# NSS module path so that systemd-timesyncd keeps using other NSS modules that are configured in the system.
environment.LD_LIBRARY_PATH = config.system.nssModules.path;
preStart = (
# Ensure that we have some stored time to prevent

View file

@ -123,15 +123,8 @@ in
inherit assertions;
# needed for systemd-remount-fs
system.fsPackages = [ pkgs.bcachefs-tools ];
# FIXME: Replace this with `linuxPackages_testing` after NixOS 23.11 is released
# FIXME: Replace this with `linuxPackages_latest` when 6.7 is released, remove this line when the LTS version is at least 6.7
boot.kernelPackages = lib.mkDefault (
# FIXME: Remove warning after NixOS 23.11 is released
lib.warn "Please upgrade to Linux 6.7-rc1 or later: 'linuxPackages_testing_bcachefs' is deprecated. Use 'boot.kernelPackages = pkgs.linuxPackages_testing;' to silence this warning"
pkgs.linuxPackages_testing_bcachefs
);
# FIXME: Remove this line when the default kernel has bcachefs
boot.kernelPackages = lib.mkDefault pkgs.linuxPackages_latest;
systemd.services = lib.mapAttrs' (mkUnits "") (lib.filterAttrs (n: fs: (fs.fsType == "bcachefs") && (!utils.fsNeededForBoot fs)) config.fileSystems);
}

View file

@ -71,7 +71,7 @@ let
done
poolReady() {
pool="$1"
state="$("${zpoolCmd}" import 2>/dev/null | "${awkCmd}" "/pool: $pool/ { found = 1 }; /state:/ { if (found == 1) { print \$2; exit } }; END { if (found == 0) { print \"MISSING\" } }")"
state="$("${zpoolCmd}" import -d "${cfgZfs.devNodes}" 2>/dev/null | "${awkCmd}" "/pool: $pool/ { found = 1 }; /state:/ { if (found == 1) { print \$2; exit } }; END { if (found == 0) { print \"MISSING\" } }")"
if [[ "$state" = "ONLINE" ]]; then
return 0
else

View file

@ -116,6 +116,15 @@ let
QEMU's swtpm options.
'';
};
vhostUserPackages = mkOption {
type = types.listOf types.package;
default = [ ];
example = lib.literalExpression "[ pkgs.virtiofsd ]";
description = lib.mdDoc ''
Packages containing out-of-tree vhost-user drivers.
'';
};
};
};
@ -502,6 +511,14 @@ in
# https://libvirt.org/daemons.html#monolithic-systemd-integration
systemd.sockets.libvirtd.wantedBy = [ "sockets.target" ];
systemd.tmpfiles.rules = let
vhostUserCollection = pkgs.buildEnv {
name = "vhost-user";
paths = cfg.qemu.vhostUserPackages;
pathsToLink = [ "/share/qemu/vhost-user" ];
};
in [ "L+ /var/lib/qemu/vhost-user - - - - ${vhostUserCollection}/share/qemu/vhost-user" ];
security.polkit = {
enable = true;
extraConfig = ''

View file

@ -33,21 +33,11 @@ in {
'';
};
package = lib.mkOption {
type = lib.types.package;
default = pkgs.lxd;
defaultText = lib.literalExpression "pkgs.lxd";
description = lib.mdDoc ''
The LXD package to use.
'';
};
package = lib.mkPackageOption pkgs "lxd" { };
lxcPackage = lib.mkOption {
type = lib.types.package;
default = pkgs.lxc;
defaultText = lib.literalExpression "pkgs.lxc";
description = lib.mdDoc ''
The LXC package to use with LXD (required for AppArmor profiles).
lxcPackage = lib.mkPackageOption pkgs "lxc" {
extraDescription = ''
Required for AppArmor profiles.
'';
};
@ -149,7 +139,7 @@ in {
ui = {
enable = lib.mkEnableOption (lib.mdDoc "(experimental) LXD UI");
package = lib.mkPackageOption pkgs.lxd-unwrapped "ui" { };
package = lib.mkPackageOption pkgs [ "lxd-unwrapped" "ui" ] { };
};
};
};

View file

@ -5,7 +5,7 @@
{ nixpkgs ? { outPath = (import ../lib).cleanSource ./..; revCount = 56789; shortRev = "gfedcba"; }
, stableBranch ? false
, supportedSystems ? [ "aarch64-linux" "x86_64-linux" ]
, limitedSupportedSystems ? [ "i686-linux" ]
, limitedSupportedSystems ? [ ]
}:
let
@ -168,6 +168,7 @@ in rec {
(onFullSupported "nixos.tests.xfce")
(onFullSupported "nixpkgs.emacs")
(onFullSupported "nixpkgs.jdk")
(onSystems ["x86_64-linux"] "nixpkgs.mesa_i686") # i686 sanity check + useful
["nixpkgs.tarball"]
# Ensure that nixpkgs-check-by-name is available in all release channels and nixos-unstable,

View file

@ -605,6 +605,7 @@ in {
nixos-rebuild-install-bootloader = handleTestOn ["x86_64-linux"] ./nixos-rebuild-install-bootloader.nix {};
nixos-rebuild-specialisations = handleTestOn ["x86_64-linux"] ./nixos-rebuild-specialisations.nix {};
nixpkgs = pkgs.callPackage ../modules/misc/nixpkgs/test.nix { inherit evalMinimalConfig; };
nixseparatedebuginfod = handleTest ./nixseparatedebuginfod.nix {};
node-red = handleTest ./node-red.nix {};
nomad = handleTest ./nomad.nix {};
non-default-filesystems = handleTest ./non-default-filesystems.nix {};
@ -616,6 +617,7 @@ in {
nscd = handleTest ./nscd.nix {};
nsd = handleTest ./nsd.nix {};
ntfy-sh = handleTest ./ntfy-sh.nix {};
ntfy-sh-migration = handleTest ./ntfy-sh-migration.nix {};
nzbget = handleTest ./nzbget.nix {};
nzbhydra2 = handleTest ./nzbhydra2.nix {};
oh-my-zsh = handleTest ./oh-my-zsh.nix {};
@ -772,6 +774,7 @@ in {
sing-box = handleTest ./sing-box.nix {};
slimserver = handleTest ./slimserver.nix {};
slurm = handleTest ./slurm.nix {};
snmpd = handleTest ./snmpd.nix {};
smokeping = handleTest ./smokeping.nix {};
snapcast = handleTest ./snapcast.nix {};
snapper = handleTest ./snapper.nix {};
@ -841,6 +844,8 @@ in {
systemd-initrd-networkd-openvpn = handleTestOn [ "x86_64-linux" "i686-linux" ] ./initrd-network-openvpn { systemdStage1 = true; };
systemd-initrd-vlan = handleTest ./systemd-initrd-vlan.nix {};
systemd-journal = handleTest ./systemd-journal.nix {};
systemd-journal-gateway = handleTest ./systemd-journal-gateway.nix {};
systemd-journal-upload = handleTest ./systemd-journal-upload.nix {};
systemd-machinectl = handleTest ./systemd-machinectl.nix {};
systemd-networkd = handleTest ./systemd-networkd.nix {};
systemd-networkd-dhcpserver = handleTest ./systemd-networkd-dhcpserver.nix {};
@ -856,10 +861,12 @@ in {
systemd-shutdown = handleTest ./systemd-shutdown.nix {};
systemd-sysupdate = runTest ./systemd-sysupdate.nix;
systemd-timesyncd = handleTest ./systemd-timesyncd.nix {};
systemd-timesyncd-nscd-dnssec = handleTest ./systemd-timesyncd-nscd-dnssec.nix {};
systemd-user-tmpfiles-rules = handleTest ./systemd-user-tmpfiles-rules.nix {};
systemd-misc = handleTest ./systemd-misc.nix {};
systemd-userdbd = handleTest ./systemd-userdbd.nix {};
systemd-homed = handleTest ./systemd-homed.nix {};
systemtap = handleTest ./systemtap.nix {};
tandoor-recipes = handleTest ./tandoor-recipes.nix {};
tang = handleTest ./tang.nix {};
taskserver = handleTest ./taskserver.nix {};
@ -904,7 +911,8 @@ in {
unbound = handleTest ./unbound.nix {};
unifi = handleTest ./unifi.nix {};
unit-php = handleTest ./web-servers/unit-php.nix {};
upnp = handleTest ./upnp.nix {};
upnp.iptables = handleTest ./upnp.nix { useNftables = false; };
upnp.nftables = handleTest ./upnp.nix { useNftables = true; };
uptermd = handleTest ./uptermd.nix {};
uptime-kuma = handleTest ./uptime-kuma.nix {};
usbguard = handleTest ./usbguard.nix {};

View file

@ -112,10 +112,39 @@ in
bootspec = json.loads(machine.succeed("jq -r '.\"org.nixos.bootspec.v1\"' /run/current-system/boot.json"))
assert all(key in bootspec for key in ('initrd', 'initrdSecrets')), "Bootspec should contain initrd or initrdSecrets field when initrd is enabled"
assert 'initrd' in bootspec, "Bootspec should contain initrd field when initrd is enabled"
assert 'initrdSecrets' not in bootspec, "Bootspec should not contain initrdSecrets when there's no initrdSecrets"
'';
};
# Check that initrd secrets create corresponding entries in bootspec.
initrd-secrets = makeTest {
name = "bootspec-with-initrd-secrets";
meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];
nodes.machine = {
imports = [ standard ];
environment.systemPackages = [ pkgs.jq ];
# It's probably the case, but we want to make it explicit here.
boot.initrd.enable = true;
boot.initrd.secrets."/some/example" = pkgs.writeText "example-secret" "test";
};
testScript = ''
import json
machine.start()
machine.wait_for_unit("multi-user.target")
machine.succeed("test -e /run/current-system/boot.json")
bootspec = json.loads(machine.succeed("jq -r '.\"org.nixos.bootspec.v1\"' /run/current-system/boot.json"))
assert 'initrdSecrets' in bootspec, "Bootspec should contain an 'initrdSecrets' field given there's an initrd secret"
'';
};
# Check that specialisations create corresponding entries in bootspec.
specialisation = makeTest {
name = "bootspec-with-specialisation";

View file

@ -104,5 +104,5 @@ import ./make-test-python.nix ({ pkgs, ... }: {
bbworker.fail("nc -z bbmaster 8011")
'';
meta.maintainers = with pkgs.lib.maintainers; [ ];
meta.maintainers = pkgs.lib.teams.buildbot.members;
})

View file

@ -29,7 +29,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
name = "frr";
meta = with pkgs.lib.maintainers; {
maintainers = [ hexa ];
maintainers = [ ];
};
nodes = {

View file

@ -510,14 +510,8 @@ let
ntp
perlPackages.ListCompare
perlPackages.XMLLibXML
python3Minimal
# make-options-doc/default.nix
(let
self = (pkgs.python3Minimal.override {
inherit self;
includeSiteCustomize = true;
});
in self.withPackages (p: [ p.mistune ]))
(python3.withPackages (p: [ p.mistune ]))
shared-mime-info
sudo
texinfo
@ -1266,68 +1260,6 @@ in {
'';
};
bcachefsLinuxTesting = makeInstallerTest "bcachefs-linux-testing" {
extraInstallerConfig = {
imports = [ no-zfs-module ];
boot = {
supportedFilesystems = [ "bcachefs" ];
kernelPackages = pkgs.linuxPackages_testing;
};
};
extraConfig = ''
boot.kernelPackages = pkgs.linuxPackages_testing;
'';
createPartitions = ''
machine.succeed(
"flock /dev/vda parted --script /dev/vda -- mklabel msdos"
+ " mkpart primary ext2 1M 100MB" # /boot
+ " mkpart primary linux-swap 100M 1024M" # swap
+ " mkpart primary 1024M -1s", # /
"udevadm settle",
"mkswap /dev/vda2 -L swap",
"swapon -L swap",
"mkfs.bcachefs -L root /dev/vda3",
"mount -t bcachefs /dev/vda3 /mnt",
"mkfs.ext3 -L boot /dev/vda1",
"mkdir -p /mnt/boot",
"mount /dev/vda1 /mnt/boot",
)
'';
};
bcachefsUpgradeToLinuxTesting = makeInstallerTest "bcachefs-upgrade-to-linux-testing" {
extraInstallerConfig = {
imports = [ no-zfs-module ];
boot.supportedFilesystems = [ "bcachefs" ];
# We don't have network access in the VM, we need this for `nixos-install`
system.extraDependencies = [ pkgs.linux_testing ];
};
extraConfig = ''
boot.kernelPackages = pkgs.linuxPackages_testing;
'';
createPartitions = ''
machine.succeed(
"flock /dev/vda parted --script /dev/vda -- mklabel msdos"
+ " mkpart primary ext2 1M 100MB" # /boot
+ " mkpart primary linux-swap 100M 1024M" # swap
+ " mkpart primary 1024M -1s", # /
"udevadm settle",
"mkswap /dev/vda2 -L swap",
"swapon -L swap",
"mkfs.bcachefs -L root /dev/vda3",
"mount -t bcachefs /dev/vda3 /mnt",
"mkfs.ext3 -L boot /dev/vda1",
"mkdir -p /mnt/boot",
"mount /dev/vda1 /mnt/boot",
)
'';
};
# Test using labels to identify volumes in grub
simpleLabels = makeInstallerTest "simpleLabels" {
createPartitions = ''

View file

@ -1,5 +1,6 @@
import ../make-test-python.nix ({pkgs, ...}: {
name = "kerberos_server-heimdal";
nodes.machine = { config, libs, pkgs, ...}:
{ services.kerberos_server =
{ enable = true;
@ -7,9 +8,10 @@ import ../make-test-python.nix ({pkgs, ...}: {
"FOO.BAR".acl = [{principal = "admin"; access = ["add" "cpw"];}];
};
};
krb5 = {
security.krb5 = {
enable = true;
kerberos = pkgs.heimdal;
package = pkgs.heimdal;
settings = {
libdefaults = {
default_realm = "FOO.BAR";
};
@ -21,6 +23,7 @@ import ../make-test-python.nix ({pkgs, ...}: {
};
};
};
};
testScript = ''
machine.succeed(
@ -39,4 +42,6 @@ import ../make-test-python.nix ({pkgs, ...}: {
"kinit -kt alice.keytab alice",
)
'';
meta.maintainers = [ pkgs.lib.maintainers.dblsaiko ];
})

View file

@ -1,5 +1,6 @@
import ../make-test-python.nix ({pkgs, ...}: {
name = "kerberos_server-mit";
nodes.machine = { config, libs, pkgs, ...}:
{ services.kerberos_server =
{ enable = true;
@ -7,9 +8,10 @@ import ../make-test-python.nix ({pkgs, ...}: {
"FOO.BAR".acl = [{principal = "admin"; access = ["add" "cpw"];}];
};
};
krb5 = {
security.krb5 = {
enable = true;
kerberos = pkgs.krb5;
package = pkgs.krb5;
settings = {
libdefaults = {
default_realm = "FOO.BAR";
};
@ -20,6 +22,7 @@ import ../make-test-python.nix ({pkgs, ...}: {
};
};
};
};
users.extraUsers.alice = { isNormalUser = true; };
};
@ -38,4 +41,6 @@ import ../make-test-python.nix ({pkgs, ...}: {
"echo alice_pw | sudo -u alice kinit",
)
'';
meta.maintainers = [ pkgs.lib.maintainers.dblsaiko ];
})

View file

@ -1,5 +1,4 @@
{ system ? builtins.currentSystem }:
{
example-config = import ./example-config.nix { inherit system; };
deprecated-config = import ./deprecated-config.nix { inherit system; };
}

View file

@ -1,50 +0,0 @@
# Verifies that the configuration suggested in deprecated example values
# will result in the expected output.
import ../make-test-python.nix ({ pkgs, ...} : {
name = "krb5-with-deprecated-config";
meta = with pkgs.lib.maintainers; {
maintainers = [ eqyiel ];
};
nodes.machine =
{ ... }: {
krb5 = {
enable = true;
defaultRealm = "ATHENA.MIT.EDU";
domainRealm = "athena.mit.edu";
kdc = "kerberos.mit.edu";
kerberosAdminServer = "kerberos.mit.edu";
};
};
testScript =
let snapshot = pkgs.writeText "krb5-with-deprecated-config.conf" ''
[libdefaults]
default_realm = ATHENA.MIT.EDU
[realms]
ATHENA.MIT.EDU = {
admin_server = kerberos.mit.edu
kdc = kerberos.mit.edu
}
[domain_realm]
.athena.mit.edu = ATHENA.MIT.EDU
athena.mit.edu = ATHENA.MIT.EDU
[capaths]
[appdefaults]
[plugins]
'';
in ''
machine.succeed(
"diff /etc/krb5.conf ${snapshot}"
)
'';
})

View file

@ -4,14 +4,21 @@
import ../make-test-python.nix ({ pkgs, ...} : {
name = "krb5-with-example-config";
meta = with pkgs.lib.maintainers; {
maintainers = [ eqyiel ];
maintainers = [ eqyiel dblsaiko ];
};
nodes.machine =
{ pkgs, ... }: {
krb5 = {
security.krb5 = {
enable = true;
kerberos = pkgs.krb5;
package = pkgs.krb5;
settings = {
includedir = [
"/etc/krb5.conf.d"
];
include = [
"/etc/krb5-extra.conf"
];
libdefaults = {
default_realm = "ATHENA.MIT.EDU";
};
@ -46,44 +53,18 @@ import ../make-test-python.nix ({ pkgs, ...} : {
initial_timeout = 1;
};
};
plugins = {
ccselect = {
disable = "k5identity";
plugins.ccselect.disable = "k5identity";
logging = {
kdc = "SYSLOG:NOTICE";
admin_server = "SYSLOG:NOTICE";
default = "SYSLOG:NOTICE";
};
};
extraConfig = ''
[logging]
kdc = SYSLOG:NOTICE
admin_server = SYSLOG:NOTICE
default = SYSLOG:NOTICE
'';
};
};
testScript =
let snapshot = pkgs.writeText "krb5-with-example-config.conf" ''
[libdefaults]
default_realm = ATHENA.MIT.EDU
[realms]
ATHENA.MIT.EDU = {
admin_server = athena.mit.edu
kdc = athena01.mit.edu
kdc = athena02.mit.edu
}
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
[capaths]
ATHENA.MIT.EDU = {
EXAMPLE.COM = .
}
EXAMPLE.COM = {
ATHENA.MIT.EDU = .
}
[appdefaults]
pam = {
debug = false
@ -94,15 +75,40 @@ import ../make-test-python.nix ({ pkgs, ...} : {
timeout_shift = 2
}
[capaths]
ATHENA.MIT.EDU = {
EXAMPLE.COM = .
}
EXAMPLE.COM = {
ATHENA.MIT.EDU = .
}
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
[libdefaults]
default_realm = ATHENA.MIT.EDU
[logging]
admin_server = SYSLOG:NOTICE
default = SYSLOG:NOTICE
kdc = SYSLOG:NOTICE
[plugins]
ccselect = {
disable = k5identity
}
[logging]
kdc = SYSLOG:NOTICE
admin_server = SYSLOG:NOTICE
default = SYSLOG:NOTICE
[realms]
ATHENA.MIT.EDU = {
admin_server = athena.mit.edu
kdc = athena01.mit.edu
kdc = athena02.mit.edu
}
include /etc/krb5-extra.conf
includedir /etc/krb5.conf.d
'';
in ''
machine.succeed(

View file

@ -1,15 +1,17 @@
import ../make-test-python.nix ({ pkgs, lib, ... }:
let
krb5 =
{ enable = true;
security.krb5 = {
enable = true;
settings = {
domain_realm."nfs.test" = "NFS.TEST";
libdefaults.default_realm = "NFS.TEST";
realms."NFS.TEST" =
{ admin_server = "server.nfs.test";
realms."NFS.TEST" = {
admin_server = "server.nfs.test";
kdc = "server.nfs.test";
};
};
};
hosts =
''
@ -32,7 +34,7 @@ in
nodes = {
client = { lib, ... }:
{ inherit krb5 users;
{ inherit security users;
networking.extraHosts = hosts;
networking.domain = "nfs.test";
@ -48,7 +50,7 @@ in
};
server = { lib, ...}:
{ inherit krb5 users;
{ inherit security users;
networking.extraHosts = hosts;
networking.domain = "nfs.test";
@ -128,4 +130,6 @@ in
expected = ["alice", "users"]
assert ids == expected, f"ids incorrect: got {ids} expected {expected}"
'';
meta.maintainers = [ lib.maintainers.dblsaiko ];
})

View file

@ -0,0 +1,80 @@
import ./make-test-python.nix ({ pkgs, lib, ... }:
let
secret-key = "key-name:/COlMSRbehSh6YSruJWjL+R0JXQUKuPEn96fIb+pLokEJUjcK/2Gv8Ai96D7JGay5gDeUTx5wdpPgNvum9YtwA==";
public-key = "key-name:BCVI3Cv9hr/AIveg+yRmsuYA3lE8ecHaT4Db7pvWLcA=";
in
{
name = "nixseparatedebuginfod";
/* A binary cache with debug info and source for nix */
nodes.cache = { pkgs, ... }: {
services.nix-serve = {
enable = true;
secretKeyFile = builtins.toFile "secret-key" secret-key;
openFirewall = true;
};
system.extraDependencies = [
pkgs.nix.debug
pkgs.nix.src
pkgs.sl
];
};
/* the machine where we need the debuginfo */
nodes.machine = {
imports = [
../modules/installer/cd-dvd/channel.nix
];
services.nixseparatedebuginfod.enable = true;
nix.settings = {
substituters = lib.mkForce [ "http://cache:5000" ];
trusted-public-keys = [ public-key ];
};
environment.systemPackages = [
pkgs.valgrind
pkgs.gdb
(pkgs.writeShellScriptBin "wait_for_indexation" ''
set -x
while debuginfod-find debuginfo /run/current-system/sw/bin/nix |& grep 'File too large'; do
sleep 1;
done
'')
];
};
testScript = ''
start_all()
cache.wait_for_unit("nix-serve.service")
cache.wait_for_open_port(5000)
machine.wait_for_unit("nixseparatedebuginfod.service")
machine.wait_for_open_port(1949)
with subtest("show the config to debug the test"):
machine.succeed("nix --extra-experimental-features nix-command show-config |& logger")
machine.succeed("cat /etc/nix/nix.conf |& logger")
with subtest("check that the binary cache works"):
machine.succeed("nix-store -r ${pkgs.sl}")
# nixseparatedebuginfod needs .drv to associate executable -> source
# on regular systems this would be provided by nixos-rebuild
machine.succeed("nix-instantiate '<nixpkgs>' -A nix")
machine.succeed("timeout 600 wait_for_indexation")
# test debuginfod-find
machine.succeed("debuginfod-find debuginfo /run/current-system/sw/bin/nix")
# test that gdb can fetch source
out = machine.succeed("gdb /run/current-system/sw/bin/nix --batch -x ${builtins.toFile "commands" ''
start
l
''}")
print(out)
assert 'int main(' in out
# test that valgrind can display location information
# this relies on the fact that valgrind complains about nix
# libgc helps in this regard, and we also ask valgrind to show leak kinds
# which are usually false positives.
out = machine.succeed("valgrind --leak-check=full --show-leak-kinds=all nix-env --version 2>&1")
print(out)
assert 'main.cc' in out
'';
})

Some files were not shown because too many files have changed in this diff.