Project import generated by Copybara.

GitOrigin-RevId: f2537a505d45c31fe5d9c27ea9829b6f4c4e6ac5
Default email 2022-06-26 12:26:21 +02:00
parent d75eec319c
commit 889482aab3
1739 changed files with 32135 additions and 22812 deletions


@ -5,13 +5,13 @@
.idea/
.vscode/
outputs/
result
result-*
source/
/doc/NEWS.html
/doc/NEWS.txt
/doc/manual.html
/doc/manual.pdf
/result
/source/
.version-suffix
.DS_Store


@ -27,6 +27,10 @@ function Code(elem)
content = '<refentrytitle>' .. title .. '</refentrytitle>' .. (volnum ~= nil and ('<manvolnum>' .. volnum .. '</manvolnum>') or '')
elseif elem.attributes['role'] == 'file' then
tag = 'filename'
elseif elem.attributes['role'] == 'command' then
tag = 'command'
elseif elem.attributes['role'] == 'option' then
tag = 'option'
end
if tag ~= nil then


@ -302,7 +302,7 @@ buildImage {
runAsRoot = ''
#!${pkgs.runtimeShell}
${shadowSetup}
${pkgs.dockerTools.shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data


@ -27,7 +27,7 @@ If the build succeeds, the manual will be in `./result/share/doc/nixpkgs/manual.
As per [RFC 0072](https://github.com/NixOS/rfcs/pull/72), all new documentation content should be written in [CommonMark](https://commonmark.org/) Markdown dialect.
Additionally, the following syntax extensions are currently used:
Additional syntax extensions are available, though not all extensions can be used in NixOS option documentation. The following extensions are currently used:
- []{#ssec-contributing-markup-anchors}
Explicitly defined **anchors** on headings, to allow linking to sections. These should always be used, to ensure the anchors can be linked even when the heading text changes, and to prevent conflicts between [automatically assigned identifiers](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/auto_identifiers.md).
@ -53,12 +53,22 @@ Additionally, the following syntax extensions are currently used:
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).
- []{#ssec-contributing-markup-inline-roles}
If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``, which will turn into {manpage}`nix.conf(5)`.
If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``, which will turn into {manpage}`nix.conf(5)`. The references will turn into links when a mapping exists in {file}`doc/build-aux/pandoc-filters/link-unix-man-references.lua`.
The references will turn into links when a mapping exists in {file}`doc/build-aux/pandoc-filters/link-unix-man-references.lua`.
A few markups for other kinds of literals are also available:
- `` {command}`rm -rfi` `` turns into {command}`rm -rfi`
- `` {option}`networking.useDHCP` `` turns into {option}`networking.useDHCP`
- `` {file}`/etc/passwd` `` turns into {file}`/etc/passwd`
These literal kinds are used mostly in NixOS option documentation.
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point), though the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.
::: {.note}
Inline roles are available for option documentation.
:::
- []{#ssec-contributing-markup-admonitions}
**Admonitions**, set off from the text to bring attention to something.
@ -84,6 +94,10 @@ Additionally, the following syntax extensions are currently used:
- [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
- [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)
::: {.note}
Admonitions are available for option documentation.
:::
- []{#ssec-contributing-markup-definition-lists}
[**Definition lists**](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md), for defining a group of terms:


@ -185,6 +185,111 @@ Sample template for a new module review is provided below.
##### Comments
```
## Individual maintainer list {#reviewing-contributions-indvidual-maintainer-list}
When adding users to `maintainers/maintainer-list.nix`, the following
checks should be performed:
- If the user has specified a GPG key, verify that the commit is
signed by their key.
First, validate that the commit adding the maintainer is signed by
the key the maintainer listed. Check out the pull request and
compare its signing key with the listed key in the commit.
If the commit is not signed or it is signed by a different user, ask
them to either recommit using that key or to remove their key
information.
Given a maintainer entry like this:
``` nix
{
  example = {
    email = "user@example.com";
    name = "Example User";
    keys = [{
      fingerprint = "0000 0000 2A70 6423 0AED 3C11 F04F 7A19 AAA6 3AFE";
    }];
  };
}
```
First receive their key from a keyserver:
$ gpg --recv-keys 0xF04F7A19AAA63AFE
gpg: key 0xF04F7A19AAA63AFE: public key "Example <user@example.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
Then check the commit is signed by that key:
$ git log --show-signature
commit b87862a4f7d32319b1de428adb6cdbdd3a960153
gpg: Signature made Wed Mar 12 13:32:24 2003 +0000
gpg: using RSA key 000000002A7064230AED3C11F04F7A19AAA63AFE
gpg: Good signature from "Example User <user@example.com>"
Author: Example User <user@example.com>
Date: Wed Mar 12 13:32:24 2003 +0000
maintainers: adding example
and validate that there is a `Good signature` and the printed key
matches the user's submitted key.
Note: GitHub's "Verified" label does not display the user's full key
fingerprint, and should not be used for validating the key matches.
- If the user has specified a `github` account name, ensure they have
also specified a `githubId` and verify the two match.
Maintainer entries that include a `github` field must also include
their `githubId`. People can and do change their GitHub name
frequently, and the ID is used as the official and stable identity
of the maintainer.
Given a maintainer entry like this:
``` nix
{
  example = {
    email = "user@example.com";
    name = "Example User";
    github = "ghost";
    githubId = 10137;
  };
}
```
First, make sure that the listed GitHub handle matches the author of
the commit.
Then, visit the URL `https://api.github.com/users/ghost` and
validate that the `id` field matches the provided `githubId`.
## Maintainer teams {#reviewing-contributions-maintainer-teams}
Feel free to create a new maintainer team in `maintainers/team-list.nix`
when a group is collectively responsible for a collection of packages.
Use taste and personal judgement when deciding if a team is warranted.
Teams are allowed to define their own rules about membership.
For example, some teams will represent a business or other group which
wants to carefully track its members. Other teams may be very open about
who can join, and allow anybody to participate.
When reviewing changes to a team, read the team's scope and the context
around the member list for indications about the team's membership
policy.
In any case, request reviews from the existing team members. If the team
lists no specific membership policy, feel free to merge changes to the
team after giving the existing members a few days to respond.
*Important:* If a team says it is a closed group, do not merge additions
to the team without an approval by at least one existing member.
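For orientation, a team entry in `maintainers/team-list.nix` has roughly the following shape. This is a hypothetical sketch (the team handle, members and scope are made up); the real file is a function over `lib` and opens `lib.maintainers` so that members can be listed by handle:
```nix
{ lib }:
with lib.maintainers; {
  example-team = {
    # Maintainer handles from maintainer-list.nix go here.
    members = [ ];
    scope = "Maintain the example collection of packages.";
    shortName = "Example";
    # Whether the team wants to be pinged for the feature freeze.
    enableFeatureFreezePing = true;
  };
}
```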
## Other submissions {#reviewing-contributions-other-submissions}
Other types of submissions require different reviewing steps.


@ -1,5 +1,8 @@
{ pkgs ? (import ../.. {}), nixpkgs ? { }}:
let
inherit (pkgs) lib;
inherit (lib) hasPrefix removePrefix;
locationsXml = import ./lib-function-locations.nix { inherit pkgs nixpkgs; };
functionDocs = import ./lib-function-docs.nix { inherit locationsXml pkgs; };
version = pkgs.lib.version;
@ -29,6 +32,18 @@ let
optionsDoc = pkgs.nixosOptionsDoc {
inherit (pkgs.lib.evalModules { modules = [ ../../pkgs/top-level/config.nix ]; }) options;
documentType = "none";
transformOptions = opt:
opt // {
declarations =
map
(decl:
if hasPrefix (toString ../..) (toString decl)
then
let subpath = removePrefix "/" (removePrefix (toString ../..) (toString decl));
in { url = "https://github.com/NixOS/nixpkgs/blob/master/${subpath}"; name = subpath; }
else decl)
opt.declarations;
};
};
in pkgs.runCommand "doc-support" {}


@ -41,7 +41,7 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
* `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true; the presence of this attribute overrides the behavior of the previous one.
* `opam-name` (optional, defaults to concatenating with a dash separator the components of `namePrefix` and `pname`), name of the Dune package to build.
* `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
* `extraInstallFlags` (optional), allows to extend `installFlags` which initializes the variables `DESTDIR` and `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
* `extraInstallFlags` (optional), allows to extend `installFlags` which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
* `setCOQBIN` (optional, defaults to `true`), by default, the environment variable `$COQBIN` is set to the current Coq's binary, but one can disable this behavior by setting it to `false`,
* `useMelquiondRemake` (optional, default to `null`) is an attribute set, which, if given, overloads the `preConfigurePhases`, `configureFlags`, `buildPhase`, and `installPhase` attributes of the derivation for a specific use in libraries using `remake` as set up by Guillaume Melquiond for `flocq`, `gappalib`, `interval`, and `coquelicot` (see the corresponding derivation for concrete examples of use of this option). For backward compatibility, the attribute `useMelquiondRemake.logpath` must be set to the logical root of the library (otherwise, one can pass `useMelquiondRemake = {}` to activate this without backward compatibility).
* `dropAttrs`, `keepAttrs`, `dropDerivationAttrs` are all optional and allow to tune which attribute is added or removed from the final call to `mkDerivation`.
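To make these attributes concrete, here is a hedged sketch of a library expression using the `mkCoqDerivation` helper from `coqPackages` (the helper name is an assumption here, and the package name and version are hypothetical); only a few of the attributes from this list are exercised:
```nix
{ mkCoqDerivation }:
mkCoqDerivation {
  pname = "example-lib";   # hypothetical library name
  version = "1.0.0";       # hypothetical version
  useDune2 = false;                    # build with coq_makefile rather than Dune2
  enableParallelBuilding = true;       # already the default; set to false to disable
  extraInstallFlags = [ "VERBOSE=1" ]; # extra make flags appended to installFlags
  setCOQBIN = true;                    # keep $COQBIN pointing at the current Coq
}
```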


@ -72,7 +72,7 @@ The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given
To package Dotnet applications, you can use `buildDotnetModule`. This has similar arguments to `stdenv.mkDerivation`, with the following additions:
* `projectFile` has to be used for specifying the dotnet project file relative to the source root. These usually have `.sln` or `.csproj` file extensions. This can be an array of multiple projects as well.
* `nugetDeps` has to be used to specify the NuGet dependency file. Unfortunately, these cannot be deterministically fetched without a lockfile. A script to fetch these is available as `passthru.fetch-deps`. This file can also be generated manually using `nuget-to-nix` tool, which is available in nixpkgs.
* `nugetDeps` takes either a path to a `deps.nix` file, or a derivation. The `deps.nix` file can be generated using the script attached to `passthru.fetch-deps`. This file can also be generated manually using the `nuget-to-nix` tool, which is available in nixpkgs. If the argument is a derivation, it will be used directly and is assumed to have the same output as `mkNugetDeps`.
* `packNupkg` is used to pack project as a `nupkg`, and installs it to `$out/share`. If set to `true`, the derivation can be used as a dependency for another dotnet project by adding it to `projectReferences`.
* `projectReferences` can be used to resolve `ProjectReference` project items. Referenced projects can be packed with `buildDotnetModule` by setting the `packNupkg = true` attribute and passing a list of derivations to `projectReferences`. Since we are sharing referenced projects as NuGets they must be added to csproj/fsproj files as `PackageReference` as well.
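Putting the main arguments together, a package built with `buildDotnetModule` might start from a minimal sketch like this (the name, version, source and project path are hypothetical placeholders):
```nix
{ buildDotnetModule }:
buildDotnetModule rec {
  pname = "example-app";
  version = "0.1.0";
  src = ./.;
  # Project file relative to the source root; a list of projects also works.
  projectFile = "src/ExampleApp.csproj";
  # NuGet dependency file generated with passthru.fetch-deps or nuget-to-nix;
  # a derivation with the same output as mkNugetDeps is also accepted.
  nugetDeps = ./deps.nix;
}
```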
For example, your project has a local dependency:


@ -8,9 +8,9 @@
Several versions of the Python interpreter are available on Nix, as well as a
high amount of packages. The attribute `python3` refers to the default
interpreter, which is currently CPython 3.9. The attribute `python` refers to
interpreter, which is currently CPython 3.10. The attribute `python` refers to
CPython 2.7 for backwards-compatibility. It is also possible to refer to
specific versions, e.g. `python38` refers to CPython 3.8, and `pypy` refers to
specific versions, e.g. `python39` refers to CPython 3.9, and `pypy` refers to
the default PyPy interpreter.
Python is used a lot, and in different ways. This affects also how it is
@ -26,10 +26,10 @@ however, are in separate sets, with one set per interpreter version.
The interpreters have several common attributes. One of these attributes is
`pkgs`, which is a package set of Python libraries for this specific
interpreter. E.g., the `toolz` package corresponding to the default interpreter
is `python.pkgs.toolz`, and the CPython 3.8 version is `python38.pkgs.toolz`.
is `python.pkgs.toolz`, and the CPython 3.9 version is `python39.pkgs.toolz`.
The main package set contains aliases to these package sets, e.g.
`pythonPackages` refers to `python.pkgs` and `python38Packages` to
`python38.pkgs`.
`pythonPackages` refers to `python.pkgs` and `python39Packages` to
`python39.pkgs`.
#### Installing Python and packages {#installing-python-and-packages}
@ -54,7 +54,7 @@ with `python.buildEnv` or `python.withPackages` where the interpreter and other
executables are wrapped to be able to find each other and all of the modules.
In the following examples we will start by creating a simple, ad-hoc environment
with a nix-shell that has `numpy` and `toolz` in Python 3.8; then we will create
with a nix-shell that has `numpy` and `toolz` in Python 3.9; then we will create
a re-usable environment in a single-file Python script; then we will create a
full Python environment for development with this same environment.
@ -70,10 +70,10 @@ temporary shell session with a Python and a *precise* list of packages (plus
their runtime dependencies), with no other Python packages in the Python
interpreter's scope.
To create a Python 3.8 session with `numpy` and `toolz` available, run:
To create a Python 3.9 session with `numpy` and `toolz` available, run:
```sh
$ nix-shell -p 'python38.withPackages(ps: with ps; [ numpy toolz ])'
$ nix-shell -p 'python39.withPackages(ps: with ps; [ numpy toolz ])'
```
By default `nix-shell` will start a `bash` session with this interpreter in our
@ -81,8 +81,8 @@ By default `nix-shell` will start a `bash` session with this interpreter in our
```Python console
[nix-shell:~/src/nixpkgs]$ python3
Python 3.8.1 (default, Dec 18 2019, 19:06:26)
[GCC 9.2.0] on linux
Python 3.9.12 (main, Mar 23 2022, 21:36:19)
[GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy; import toolz
```
@ -102,13 +102,16 @@ will still get 1 wrapped Python interpreter. We can start the interpreter
directly like so:
```sh
$ nix-shell -p 'python38.withPackages(ps: with ps; [ numpy toolz requests ])' --run python3
these derivations will be built:
/nix/store/xbdsrqrsfa1yva5s7pzsra8k08gxlbz1-python3-3.8.1-env.drv
building '/nix/store/xbdsrqrsfa1yva5s7pzsra8k08gxlbz1-python3-3.8.1-env.drv'...
created 277 symlinks in user environment
Python 3.8.1 (default, Dec 18 2019, 19:06:26)
[GCC 9.2.0] on linux
$ nix-shell -p "python39.withPackages (ps: with ps; [ numpy toolz requests ])" --run python3
this derivation will be built:
/nix/store/mpn7k6bkjl41fm51342rafaqfsl10qs4-python3-3.9.12-env.drv
this path will be fetched (0.09 MiB download, 0.41 MiB unpacked):
/nix/store/5gaiacnzi096b6prc6aa1pwrhncmhc8b-python3.9-toolz-0.11.2
copying path '/nix/store/5gaiacnzi096b6prc6aa1pwrhncmhc8b-python3.9-toolz-0.11.2' from 'https://cache.nixos.org'...
building '/nix/store/mpn7k6bkjl41fm51342rafaqfsl10qs4-python3-3.9.12-env.drv'...
created 279 symlinks in user environment
Python 3.9.12 (main, Mar 23 2022, 21:36:19)
[GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>>
@ -147,7 +150,7 @@ Executing this script requires a `python3` that has `numpy`. Using what we learn
in the previous section, we could startup a shell and just run it like so:
```ShellSession
$ nix-shell -p 'python38.withPackages(ps: with ps; [ numpy ])' --run 'python3 foo.py'
$ nix-shell -p 'python39.withPackages(ps: with ps; [ numpy ])' --run 'python3 foo.py'
The dot product of [1 2] and [3 4] is: 11
```
@ -210,12 +213,12 @@ create a single script with Python dependencies, but in the course of normal
development we're usually working in an entire package repository.
As explained in the Nix manual, `nix-shell` can also load an expression from a
`.nix` file. Say we want to have Python 3.8, `numpy` and `toolz`, like before,
`.nix` file. Say we want to have Python 3.9, `numpy` and `toolz`, like before,
in an environment. We can add a `shell.nix` file describing our dependencies:
```nix
with import <nixpkgs> {};
(python38.withPackages (ps: [ps.numpy ps.toolz])).env
(python39.withPackages (ps: [ps.numpy ps.toolz])).env
```
And then at the command line, just typing `nix-shell` produces the same
@ -229,7 +232,7 @@ What's happening here?
imports the `<nixpkgs>` function, `{}` calls it and the `with` statement
brings all attributes of `nixpkgs` in the local scope. These attributes form
the main package set.
2. Then we create a Python 3.8 environment with the `withPackages` function, as before.
2. Then we create a Python 3.9 environment with the `withPackages` function, as before.
3. The `withPackages` function expects us to provide a function as an argument
that takes the set of all Python packages and returns a list of packages to
include in the environment. Here, we select the packages `numpy` and `toolz`
@ -240,7 +243,7 @@ To combine this with `mkShell` you can:
```nix
with import <nixpkgs> {};
let
pythonEnv = python38.withPackages (ps: [
pythonEnv = python39.withPackages (ps: [
ps.numpy
ps.toolz
]);
@ -378,8 +381,8 @@ information. The output of the function is a derivation.
An expression for `toolz` can be found in the Nixpkgs repository. As explained
in the introduction of this Python section, a derivation of `toolz` is available
for each interpreter version, e.g. `python38.pkgs.toolz` refers to the `toolz`
derivation corresponding to the CPython 3.8 interpreter.
for each interpreter version, e.g. `python39.pkgs.toolz` refers to the `toolz`
derivation corresponding to the CPython 3.9 interpreter.
The above example works when you're directly working on
`pkgs/top-level/python-packages.nix` in the Nixpkgs repository. Often though,
@ -392,11 +395,11 @@ and adds it along with a `numpy` package to a Python environment.
with import <nixpkgs> {};
( let
my_toolz = python38.pkgs.buildPythonPackage rec {
my_toolz = python39.pkgs.buildPythonPackage rec {
pname = "toolz";
version = "0.10.0";
src = python38.pkgs.fetchPypi {
src = python39.pkgs.fetchPypi {
inherit pname version;
sha256 = "08fdd5ef7c96480ad11c12d472de21acd32359996f69a5259299b540feba4560";
};
@ -414,7 +417,7 @@ with import <nixpkgs> {};
```
Executing `nix-shell` will result in an environment in which you can use
Python 3.8 and the `toolz` package. As you can see we had to explicitly mention
Python 3.9 and the `toolz` package. As you can see we had to explicitly mention
for which Python version we want to build a package.
So, what did we do here? Well, we took the Nix expression that we used earlier
@ -742,7 +745,7 @@ If we create a `shell.nix` file which calls `buildPythonPackage`, and if `src`
is a local source, and if the local source has a `setup.py`, then development
mode is activated.
In the following example we create a simple environment that has a Python 3.8
In the following example we create a simple environment that has a Python 3.9
version of our package in it, as well as its dependencies and other packages we
like to have in the environment, all specified with `propagatedBuildInputs`.
Indeed, we can just add any package we like to have in our environment to
@ -750,7 +753,7 @@ Indeed, we can just add any package we like to have in our environment to
```nix
with import <nixpkgs> {};
with python38Packages;
with python39Packages;
buildPythonPackage rec {
name = "mypackage";
@ -828,9 +831,9 @@ and in this case the `python38` interpreter is automatically used.
### Interpreters {#interpreters}
Versions 2.7, 3.7, 3.8 and 3.9 of the CPython interpreter are available as
respectively `python27`, `python37`, `python38` and `python39`. The
aliases `python2` and `python3` correspond to respectively `python27` and
Versions 2.7, 3.7, 3.8, 3.9 and 3.10 of the CPython interpreter are available
as respectively `python27`, `python37`, `python38`, `python39` and `python310`.
The aliases `python2` and `python3` correspond to respectively `python27` and
`python39`. The attribute `python` maps to `python2`. The PyPy interpreters
compatible with Python 2.7 and 3 are available as `pypy27` and `pypy3`, with
aliases `pypy2` mapping to `pypy27` and `pypy` mapping to `pypy2`. The Nix


@ -131,7 +131,9 @@ let
getValues getFiles
optionAttrSetToDocList optionAttrSetToDocList'
scrubOptionValue literalExpression literalExample literalDocBook
showOption showFiles unknownModule mkOption mkPackageOption;
showOption showOptionWithDefLocs showFiles
unknownModule mkOption mkPackageOption
mdDoc literalMD;
inherit (self.types) isType setType defaultTypeMerge defaultFunctor
isOptionType mkOptionType;
inherit (self.asserts)


@ -7,6 +7,7 @@ let
collect
concatLists
concatMap
concatMapStringsSep
elemAt
filter
foldl'
@ -280,6 +281,21 @@ rec {
if ! isString text then throw "literalDocBook expects a string."
else { _type = "literalDocBook"; inherit text; };
/* Transition marker for documentation that's already migrated to markdown
syntax.
*/
mdDoc = text:
if ! isString text then throw "mdDoc expects a string."
else { _type = "mdDoc"; inherit text; };
/* For use in the `defaultText` and `example` option attributes. Causes the
given MD text to be inserted verbatim in the documentation, for when
a `literalExpression` would be too hard to read.
*/
literalMD = text:
if ! isString text then throw "literalMD expects a string."
else { _type = "literalMD"; inherit text; };
# Helper functions.
/* Convert an option, described as a list of the option parts in to a
@ -325,6 +341,11 @@ rec {
in "\n- In `${def.file}'${result}"
) defs;
showOptionWithDefLocs = opt: ''
${showOption opt.loc}, with values defined in:
${concatMapStringsSep "\n" (defFile: " - ${defFile}") opt.files}
'';
unknownModule = "<unknown-file>";
}


@ -0,0 +1,31 @@
{ lib, ... }:
let
inherit (lib) types;
in {
options = {
name = lib.mkOption {
type = types.str;
};
email = lib.mkOption {
type = types.str;
};
matrix = lib.mkOption {
type = types.nullOr types.str;
default = null;
};
github = lib.mkOption {
type = types.nullOr types.str;
default = null;
};
githubId = lib.mkOption {
type = types.nullOr types.ints.unsigned;
default = null;
};
keys = lib.mkOption {
type = types.listOf (types.submodule {
options.fingerprint = lib.mkOption { type = types.str; };
});
default = [];
};
};
}


@ -1,50 +1,19 @@
# to run these tests (and the others)
# nix-build nixpkgs/lib/tests/release.nix
{ # The pkgs used for dependencies for the testing itself
pkgs
, lib
pkgs ? import ../.. {}
, lib ? pkgs.lib
}:
let
inherit (lib) types;
maintainerModule = { config, ... }: {
options = {
name = lib.mkOption {
type = types.str;
};
email = lib.mkOption {
type = types.str;
};
matrix = lib.mkOption {
type = types.nullOr types.str;
default = null;
};
github = lib.mkOption {
type = types.nullOr types.str;
default = null;
};
githubId = lib.mkOption {
type = types.nullOr types.ints.unsigned;
default = null;
};
keys = lib.mkOption {
type = types.listOf (types.submodule {
options.longkeyid = lib.mkOption { type = types.str; };
options.fingerprint = lib.mkOption { type = types.str; };
});
default = [];
};
};
};
checkMaintainer = handle: uncheckedAttrs:
let
prefix = [ "lib" "maintainers" handle ];
checkedAttrs = (lib.modules.evalModules {
inherit prefix;
modules = [
maintainerModule
./maintainer-module.nix
{
_file = toString ../../maintainers/maintainer-list.nix;
config = uncheckedAttrs;


@ -11,6 +11,10 @@ pkgs.runCommand "nixpkgs-lib-tests" {
inherit pkgs;
lib = import ../.;
})
(import ./teams.nix {
inherit pkgs;
lib = import ../.;
})
];
} ''
datadir="${pkgs.nix}/share"

third_party/nixpkgs/lib/tests/teams.nix (new file, 50 lines)

@ -0,0 +1,50 @@
# to run these tests:
# nix-build nixpkgs/lib/tests/teams.nix
# If it builds, all tests passed
{ pkgs ? import ../.. {}, lib ? pkgs.lib }:
let
inherit (lib) types;
teamModule = { config, ... }: {
options = {
shortName = lib.mkOption {
type = types.str;
};
scope = lib.mkOption {
type = types.str;
};
enableFeatureFreezePing = lib.mkOption {
type = types.bool;
default = false;
};
members = lib.mkOption {
type = types.listOf (types.submodule
(import ./maintainer-module.nix { inherit lib; })
);
default = [];
};
githubTeams = lib.mkOption {
type = types.listOf types.str;
default = [];
};
};
};
checkTeam = team: uncheckedAttrs:
let
prefix = [ "lib" "maintainer-team" team ];
checkedAttrs = (lib.modules.evalModules {
inherit prefix;
modules = [
teamModule
{
_file = toString ../../maintainers/team-list.nix;
config = uncheckedAttrs;
}
];
}).config;
in checkedAttrs;
checkedTeams = lib.mapAttrs checkTeam lib.teams;
in pkgs.writeTextDir "maintainer-teams.json" (builtins.toJSON checkedTeams)

File diff suppressed because it is too large.


@ -86,6 +86,7 @@ plenary.nvim,https://github.com/nvim-lua/plenary.nvim.git,,,,lua5_1,
rapidjson,https://github.com/xpol/lua-rapidjson.git,,,,,
readline,,,,,,
say,https://github.com/Olivine-Labs/say.git,,,,,
sqlite,,,,,,
std._debug,https://github.com/lua-stdlib/_debug.git,,,,,
std.normalize,https://github.com/lua-stdlib/normalize.git,,,,,
stdlib,,,,41.2.2,,vyp



@ -19,7 +19,10 @@
More fields may be added in the future.
Please keep the list alphabetically sorted.
When editing this file:
* keep the list alphabetically sorted
* test the validity of the format with:
nix-build lib/tests/teams.nix
*/
{ lib }:
@ -91,6 +94,16 @@ with lib.maintainers; {
enableFeatureFreezePing = true;
};
c3d2 = {
members = [
astro
SuperSandro2000
];
scope = "Maintain packages used in the C3D2 hackspace";
shortName = "c3d2";
enableFeatureFreezePing = true;
};
cinnamon = {
members = [
mkg20001
@ -139,6 +152,7 @@ with lib.maintainers; {
tomberek
];
scope = "Maintain the Cosmopolitan LibC and related programs.";
shortName = "Cosmopolitan";
};
deshaw = {


@ -56,7 +56,14 @@ The function `mkOption` accepts the following arguments.
`description`
: A textual description of the option, in DocBook format, that will be
included in the NixOS manual.
included in the NixOS manual. During the migration process from DocBook
to CommonMark the description may also be written in CommonMark, but has
to be wrapped in `lib.mdDoc` to differentiate it from DocBook. See
the nixpkgs manual for [the list of CommonMark extensions](
https://nixos.org/nixpkgs/manual/#sec-contributing-markup)
supported by NixOS documentation.
New documentation should preferably be written as CommonMark.
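As a sketch of what this looks like in practice (the option name and wording below are illustrative, not taken from an actual module):
```nix
{ lib, ... }:
{
  options.services.example.enable = lib.mkOption {
    type = lib.types.bool;
    default = false;
    # CommonMark text, marked as such by wrapping it in lib.mdDoc.
    description = lib.mdDoc ''
      Whether to enable the example service. See {manpage}`example(8)`
      and the configuration file {file}`/etc/example.conf`.
    '';
  };
}
```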
## Utility functions for common option patterns {#sec-option-declarations-util}


@ -94,7 +94,17 @@ options = {
<listitem>
<para>
A textual description of the option, in DocBook format, that
will be included in the NixOS manual.
will be included in the NixOS manual. During the migration
process from DocBook to CommonMark the description may also be
written in CommonMark, but has to be wrapped in
<literal>lib.mdDoc</literal> to differentiate it from DocBook.
See the nixpkgs manual for
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-contributing-markup">the
list of CommonMark extensions</link> supported by NixOS
documentation.
</para>
<para>
New documentation should preferably be written as CommonMark.
</para>
</listitem>
</varlistentry>


@ -146,6 +146,14 @@
a kernel module for mounting the Apple File System (APFS).
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://gitlab.com/DarkElvenAngel/argononed">argonone</link>,
a replacement daemon for the Raspberry Pi Argon One power
button and cooler. Available at
<link xlink:href="options.html#opt-services.hardware.argonone.enable">services.hardware.argonone</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/JustArchiNET/ArchiSteamFarm">ArchiSteamFarm</link>,


@ -31,11 +31,82 @@
<literal>stdenv.buildPlatform.canExecute stdenv.hostPlatform</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>nixpkgs.hostPlatform</literal> and
<literal>nixpkgs.buildPlatform</literal> options have been
added. These cover and override the
<literal>nixpkgs.{system,localSystem,crossSystem}</literal>
options.
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
<literal>hostPlatform</literal> is the platform or
<quote><literal>system</literal></quote> string of the
NixOS system described by the configuration.
</para>
</listitem>
<listitem>
<para>
<literal>buildPlatform</literal> is the platform that is
responsible for building the NixOS configuration. It
defaults to the <literal>hostPlatform</literal>, for a
non-cross build configuration. To cross compile, set
<literal>buildPlatform</literal> to a different value.
</para>
</listitem>
</itemizedlist>
<para>
The new options convey the same information, but with fewer
options, and following the Nixpkgs terminology.
</para>
<para>
The existing options
<literal>nixpkgs.{system,localSystem,crossSystem}</literal>
have not been formally deprecated, to allow for evaluation of
the change and to allow for a transition period so that in
time the ecosystem can switch without breaking compatibility
with any supported NixOS release.
</para>
</listitem>
<listitem>
<para>
<literal>nixos-generate-config</literal> now generates
configurations that can be built in pure mode. This is
achieved by setting the new
<literal>nixpkgs.hostPlatform</literal> option.
</para>
<para>
You may have to unset the <literal>system</literal> parameter
in <literal>lib.nixosSystem</literal>, or similarly remove
definitions of the
<literal>nixpkgs.{system,localSystem,crossSystem}</literal>
options.
</para>
<para>
Alternatively, you can remove the
<literal>hostPlatform</literal> line and use NixOS like you
would in NixOS 22.05 and earlier.
</para>
</listitem>
<listitem>
<para>
PHP now defaults to PHP 8.1, updated from 8.0.
</para>
</listitem>
<listitem>
<para>
<literal>hardware.nvidia</literal> has a new option
<literal>open</literal> that can be used to opt in the
opensource version of NVIDIA kernel driver. Note that the
drivers support for GeForce and Workstation GPUs is still
alpha quality, see
<link xlink:href="https://developer.nvidia.com/blog/nvidia-releases-open-source-gpu-kernel-modules/">NVIDIA
Releases Open-Source GPU Kernel Modules</link> for the
official announcement.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-22.11-new-services">
@ -71,6 +142,13 @@
<link linkend="opt-services.persistent-evdev.enable">services.persistent-evdev</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://schleuder.org/">schleuder</link>, a
mailing list manager with PGP support. Enable using
<link linkend="opt-services.schleuder.enable">services.schleuder</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://www.expressvpn.com">expressvpn</link>,
@ -111,6 +189,16 @@
changed and support for single hyphen arguments was dropped.
</para>
</listitem>
<listitem>
<para>
<literal>i18n.supportedLocales</literal> is now by default
only generated with the default locale set in
<literal>i18n.defaultLocale</literal>. This got copied over
from the minimal profile and reduces the final system size by
200MB. If you require all locales installed set the option to
<literal>[ &quot;all&quot; ]</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>isPowerPC</literal> predicate, found on
@ -126,12 +214,32 @@
<literal>(with foo; isPower &amp;&amp; is32bit &amp;&amp; isBigEndian)</literal>.
</para>
</listitem>
<listitem>
<para>
The Barco ClickShare driver/client package
<literal>pkgs.clickshare-csc1</literal> and the option
<literal>programs.clickshare-csc1.enable</literal> have been
removed, as it requires <literal>qt4</literal>, which reached
its end-of-life 2015 and will no longer be supported by
nixpkgs.
<link xlink:href="https://www.barco.com/de/support/knowledge-base/4380-can-i-use-linux-os-with-clickshare-base-units">According
to Barco</link> many of their base unit models can be used
with Google Chrome and the Google Cast extension.
</para>
</listitem>
<listitem>
<para>
PHP 7.4 is no longer supported due to upstream not supporting
this version for the entire lifecycle of the 22.11 release.
</para>
</listitem>
<listitem>
<para>
riak package removed along with
<literal>services.riak</literal> module, due to lack of
maintainer to update the package.
</para>
</listitem>
<listitem>
<para>
(Neo)Vim can not be configured with
@ -140,11 +248,25 @@
instead.
</para>
</listitem>
<listitem>
<para>
<literal>k3s</literal> no longer supports docker as runtime
due to upstream dropping support.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-22.11-notable-changes">
<title>Other Notable Changes</title>
<itemizedlist>
<listitem>
<para>
The <literal>xplr</literal> package has been updated from
0.18.0 to 0.19.0, which brings some breaking changes. See the
<link xlink:href="https://github.com/sayanarijit/xplr/releases/tag/v0.19.0">upstream
release notes</link> for more details.
</para>
</listitem>
<listitem>
<para>
A new module was added for the Saleae Logic device family,
@ -164,6 +286,12 @@
and require manual remediation.
</para>
</listitem>
<listitem>
<para>
<literal>zfs</literal> was updated from 2.1.4 to 2.1.5,
enabling it to be used with Linux kernel 5.18.
</para>
</listitem>
<listitem>
<para>
memtest86+ was updated from 5.00-coreboot-002 to 6.00-beta2.


@ -61,6 +61,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [apfs](https://github.com/linux-apfs/linux-apfs-rw), a kernel module for mounting the Apple File System (APFS).
- [argonone](https://gitlab.com/DarkElvenAngel/argononed), a replacement daemon for the Raspberry Pi Argon One power button and cooler. Available at [services.hardware.argonone](options.html#opt-services.hardware.argonone.enable).
- [ArchiSteamFarm](https://github.com/JustArchiNET/ArchiSteamFarm), a C# application with primary purpose of idling Steam cards from multiple accounts simultaneously. Available as [services.archisteamfarm](#opt-services.archisteamfarm.enable).
- [BaGet](https://loic-sharma.github.io/BaGet/), a lightweight NuGet and symbol server. Available at [services.baget](#opt-services.baget.enable).


@ -17,8 +17,37 @@ In addition to numerous new and upgraded packages, this release has the followin
built for `stdenv.hostPlatform` (i.e. produced by `stdenv.cc`) by evaluating
`stdenv.buildPlatform.canExecute stdenv.hostPlatform`.
- The `nixpkgs.hostPlatform` and `nixpkgs.buildPlatform` options have been added.
These cover and override the `nixpkgs.{system,localSystem,crossSystem}` options.
- `hostPlatform` is the platform or "`system`" string of the NixOS system
described by the configuration.
- `buildPlatform` is the platform that is responsible for building the NixOS
configuration. It defaults to the `hostPlatform`, for a non-cross
build configuration. To cross compile, set `buildPlatform` to a different
value.
The new options convey the same information, but with fewer options, and
following the Nixpkgs terminology.
The existing options `nixpkgs.{system,localSystem,crossSystem}` have not
been formally deprecated, to allow for evaluation of the change and to allow
for a transition period so that in time the ecosystem can switch without
breaking compatibility with any supported NixOS release.
- `nixos-generate-config` now generates configurations that can be built in pure
mode. This is achieved by setting the new `nixpkgs.hostPlatform` option.
You may have to unset the `system` parameter in `lib.nixosSystem`, or similarly
remove definitions of the `nixpkgs.{system,localSystem,crossSystem}` options.
Alternatively, you can remove the `hostPlatform` line and use NixOS like you
would in NixOS 22.05 and earlier.
- PHP now defaults to PHP 8.1, updated from 8.0.
- `hardware.nvidia` has a new option `open` that can be used to opt in the opensource version of NVIDIA kernel driver. Note that the driver's support for GeForce and Workstation GPUs is still alpha quality, see [NVIDIA Releases Open-Source GPU Kernel Modules](https://developer.nvidia.com/blog/nvidia-releases-open-source-gpu-kernel-modules/) for the official announcement.
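As a minimal sketch of the `nixpkgs.hostPlatform` and `nixpkgs.buildPlatform` options described above (the platform strings are only examples):
```nix
{
  # Platform the resulting NixOS system runs on.
  nixpkgs.hostPlatform = "aarch64-linux";
  # Platform doing the build; only needed when cross compiling,
  # otherwise it defaults to the host platform.
  nixpkgs.buildPlatform = "x86_64-linux";
}
```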
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## New Services {#sec-release-22.11-new-services}
@ -31,6 +60,8 @@ In addition to numerous new and upgraded packages, this release has the followin
Available as [services.infnoise](options.html#opt-services.infnoise.enable).
- [persistent-evdev](https://github.com/aiberia/persistent-evdev), a daemon to add virtual proxy devices that mirror a physical input device but persist even if the underlying hardware is hot-plugged. Available as [services.persistent-evdev](#opt-services.persistent-evdev.enable).
- [schleuder](https://schleuder.org/), a mailing list manager with PGP support. Enable using [services.schleuder](#opt-services.schleuder.enable).
- [expressvpn](https://www.expressvpn.com), the CLI client for ExpressVPN. Available as [services.expressvpn](#opt-services.expressvpn.enable).
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
@ -49,22 +80,38 @@ In addition to numerous new and upgraded packages, this release has the followin
and [changelog](https://ngrok.com/docs/ngrok-agent/changelog). Notably, breaking changes are that the config file format has
changed and support for single hyphen arguments was dropped.
- `i18n.supportedLocales` is now by default only generated with the default locale set in `i18n.defaultLocale`.
This got copied over from the minimal profile and reduces the final system size by 200MB.
If you require all locales installed set the option to ``[ "all" ]``.
- The `isPowerPC` predicate, found on `platform` attrsets (`hostPlatform`, `buildPlatform`, `targetPlatform`, etc) has been removed in order to reduce confusion. The predicate was defined such that it matches only the 32-bit big-endian members of the POWER/PowerPC family, despite having a name which would imply a broader set of systems. If you were using this predicate, you can replace `foo.isPowerPC` with `(with foo; isPower && is32bit && isBigEndian)`.
- The Barco ClickShare driver/client package `pkgs.clickshare-csc1` and the option `programs.clickshare-csc1.enable` have been removed,
as it requires `qt4`, which reached its end-of-life 2015 and will no longer be supported by nixpkgs.
[According to Barco](https://www.barco.com/de/support/knowledge-base/4380-can-i-use-linux-os-with-clickshare-base-units) many of their base unit models can be used with Google Chrome and the Google Cast extension.
- PHP 7.4 is no longer supported due to upstream not supporting this
version for the entire lifecycle of the 22.11 release.
- riak package removed along with `services.riak` module, due to lack of maintainer to update the package.
- (Neo)Vim can not be configured with `configure.pathogen` anymore to reduce maintenance burden.
Use `configure.packages` instead.
- `k3s` no longer supports docker as runtime due to upstream dropping support.
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Other Notable Changes {#sec-release-22.11-notable-changes}
- The `xplr` package has been updated from 0.18.0 to 0.19.0, which brings some breaking changes. See the [upstream release notes](https://github.com/sayanarijit/xplr/releases/tag/v0.19.0) for more details.
- A new module was added for the Saleae Logic device family, providing the options `hardware.saleae-logic.enable` and `hardware.saleae-logic.package`.
- Matrix Synapse now requires entries in the `state_group_edges` table to be unique, in order to prevent accidentally introducing duplicate information (for example, because a database backup was restored multiple times). If your Synapse database already has duplicate rows in this table, this could fail with an error and require manual remediation.
- `zfs` was updated from 2.1.4 to 2.1.5, enabling it to be used with Linux kernel 5.18.
- memtest86+ was updated from 5.00-coreboot-002 to 6.00-beta2. It is now the upstream version from https://www.memtest.org/, as coreboot's fork is no longer available.
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->


@ -78,6 +78,15 @@ pkgs.stdenv.mkDerivation {
# get rid of the unnecessary slack here--but see
# https://github.com/NixOS/nixpkgs/issues/125121 for caveats.
# shrink to fit
resize2fs -M $img
# Add 16 MebiByte to the current_size
new_size=$(dumpe2fs -h $img | awk -F: \
'/Block count/{count=$2} /Block size/{size=$2} END{print (count*size+16*2**20)/size}')
resize2fs $img $new_size
if [ ${builtins.toString compressImage} ]; then
echo "Compressing image"
zstd -v --no-progress ./$img -o $out


@ -112,7 +112,15 @@ in rec {
optionsJSON = pkgs.runCommand "options.json"
{ meta.description = "List of NixOS options in JSON format";
buildInputs = [ pkgs.brotli ];
buildInputs = [
pkgs.brotli
(let
self = (pkgs.python3Minimal.override {
inherit self;
includeSiteCustomize = true;
});
in self.withPackages (p: [ p.mistune_2_0 ]))
];
options = builtins.toFile "options.json"
(builtins.unsafeDiscardStringContext (builtins.toJSON optionsNix));
}
@ -123,9 +131,13 @@ in rec {
${
if baseOptionsJSON == null
then "cp $options $dst/options.json"
then ''
# `cp $options $dst/options.json`, but with temporary
# markdown processing
python ${./mergeJSON.py} $options <(echo '{}') > $dst/options.json
''
else ''
${pkgs.python3Minimal}/bin/python ${./mergeJSON.py} \
python ${./mergeJSON.py} \
${lib.optionalString warningsAreErrors "--warnings-are-errors"} \
${baseOptionsJSON} $options \
> $dst/options.json


@ -41,6 +41,150 @@ def unpivot(options: Dict[Key, Option]) -> Dict[str, JSON]:
result[opt.name] = opt.value
return result
# converts in-place!
def convertMD(options: Dict[str, Any]) -> Dict[str, Any]:
import mistune
import re
from xml.sax.saxutils import escape, quoteattr
admonitions = {
'.warning': 'warning',
'.important': 'important',
'.note': 'note'
}
class Renderer(mistune.renderers.BaseRenderer):
def _get_method(self, name):
try:
return super(Renderer, self)._get_method(name)
except AttributeError:
def not_supported(children, **kwargs):
raise NotImplementedError("md node not supported yet", name, children, **kwargs)
return not_supported
def text(self, text):
return escape(text)
def paragraph(self, text):
return text + "\n\n"
def codespan(self, text):
return f"<literal>{text}</literal>"
def block_code(self, text, info=None):
info = f" language={quoteattr(info)}" if info is not None else ""
return f"<programlisting{info}>\n{text}</programlisting>"
def link(self, link, text=None, title=None):
if link[0:1] == '#':
attr = "linkend"
link = quoteattr(link[1:])
else:
# try to faithfully reproduce links that were of the form <link href="..."/>
# in docbook format
if text == link:
text = ""
attr = "xlink:href"
link = quoteattr(link)
return f"<link {attr}={link}>{text}</link>"
def list(self, text, ordered, level, start=None):
if ordered:
raise NotImplementedError("ordered lists not supported yet")
return f"<itemizedlist>\n{text}\n</itemizedlist>"
def list_item(self, text, level):
return f"<listitem><para>{text}</para></listitem>\n"
def block_text(self, text):
return text
def emphasis(self, text):
return f"<emphasis>{text}</emphasis>"
def strong(self, text):
return f"<emphasis role=\"strong\">{text}</emphasis>"
def admonition(self, text, kind):
if kind not in admonitions:
raise NotImplementedError(f"admonition {kind} not supported yet")
tag = admonitions[kind]
# we don't keep whitespace here because usually we'll contain only
# a single paragraph and the original docbook string is no longer
# available to restore the trailer.
return f"<{tag}><para>{text.rstrip()}</para></{tag}>"
def command(self, text):
return f"<command>{escape(text)}</command>"
def option(self, text):
return f"<option>{escape(text)}</option>"
def file(self, text):
return f"<filename>{escape(text)}</filename>"
def manpage(self, page, section):
title = f"<refentrytitle>{escape(page)}</refentrytitle>"
vol = f"<manvolnum>{escape(section)}</manvolnum>"
return f"<citerefentry>{title}{vol}</citerefentry>"
def finalize(self, data):
return "".join(data)
plugins = []
COMMAND_PATTERN = r'\{command\}`(.*?)`'
def command(md):
def parse(self, m, state):
return ('command', m.group(1))
md.inline.register_rule('command', COMMAND_PATTERN, parse)
md.inline.rules.append('command')
plugins.append(command)
FILE_PATTERN = r'\{file\}`(.*?)`'
def file(md):
def parse(self, m, state):
return ('file', m.group(1))
md.inline.register_rule('file', FILE_PATTERN, parse)
md.inline.rules.append('file')
plugins.append(file)
OPTION_PATTERN = r'\{option\}`(.*?)`'
def option(md):
def parse(self, m, state):
return ('option', m.group(1))
md.inline.register_rule('option', OPTION_PATTERN, parse)
md.inline.rules.append('option')
plugins.append(option)
MANPAGE_PATTERN = r'\{manpage\}`(.*?)\((.+?)\)`'
def manpage(md):
def parse(self, m, state):
return ('manpage', m.group(1), m.group(2))
md.inline.register_rule('manpage', MANPAGE_PATTERN, parse)
md.inline.rules.append('manpage')
plugins.append(manpage)
ADMONITION_PATTERN = re.compile(r'^::: \{([^\n]*?)\}\n(.*?)^:::\n', flags=re.MULTILINE|re.DOTALL)
def admonition(md):
def parse(self, m, state):
return {
'type': 'admonition',
'children': self.parse(m.group(2), state),
'params': [ m.group(1) ],
}
md.block.register_rule('admonition', ADMONITION_PATTERN, parse)
md.block.rules.append('admonition')
plugins.append(admonition)
def convertString(text: str) -> str:
rendered = mistune.markdown(text, renderer=Renderer(), plugins=plugins)
# keep trailing spaces so we can diff the generated XML to check for conversion bugs.
return rendered.rstrip() + text[len(text.rstrip()):]
def optionIs(option: Dict[str, Any], key: str, typ: str) -> bool:
if key not in option: return False
if type(option[key]) != dict: return False
if '_type' not in option[key]: return False
return option[key]['_type'] == typ
for (name, option) in options.items():
if optionIs(option, 'description', 'mdDoc'):
option['description'] = convertString(option['description']['text'])
if optionIs(option, 'example', 'literalMD'):
docbook = convertString(option['example']['text'])
option['example'] = { '_type': 'literalDocBook', 'text': docbook }
if optionIs(option, 'default', 'literalMD'):
docbook = convertString(option['default']['text'])
option['default'] = { '_type': 'literalDocBook', 'text': docbook }
return options
warningsAreErrors = sys.argv[1] == "--warnings-are-errors"
optOffset = 1 if warningsAreErrors else 0
options = pivot(json.load(open(sys.argv[1 + optOffset], 'r')))
@ -92,4 +236,4 @@ if hasWarnings and warningsAreErrors:
file=sys.stderr)
sys.exit(1)
json.dump(unpivot(options), fp=sys.stdout)
json.dump(convertMD(unpivot(options)), fp=sys.stdout)


@ -213,6 +213,23 @@
<xsl:template match="attr[@name = 'declarations' or @name = 'definitions']">
<simplelist>
<!--
Example:
opt.declarations = [ { name = "foo/bar.nix"; url = "https://github.com/....."; } ];
-->
<xsl:for-each select="list/attrs[attr[@name = 'name']]">
<member><filename>
<xsl:if test="attr[@name = 'url']">
<xsl:attribute name="xlink:href"><xsl:value-of select="attr[@name = 'url']/string/@value"/></xsl:attribute>
</xsl:if>
<xsl:value-of select="attr[@name = 'name']/string/@value"/>
</filename></member>
</xsl:for-each>
<!--
When the declarations/definitions are raw strings,
fall back to hardcoded location logic, specific to Nixpkgs.
-->
<xsl:for-each select="list/string">
<member><filename>
<!-- Hyperlink the filename either to the NixOS Subversion


@ -20,10 +20,15 @@ in rec {
merge = loc: defs:
let
defs' = filterOverrides defs;
defs'' = getValues defs';
in
if isList (head defs'')
then concatLists defs''
if isList (head defs').value
then concatMap (def:
if builtins.typeOf def.value == "list"
then def.value
else
throw "The definitions for systemd unit options should be either all lists, representing repeatable options, or all non-lists, but for the option ${showOption loc}, the definitions are a mix of list and non-list ${lib.options.showDefs defs'}"
) defs'
else mergeEqualOption loc defs';
};


@ -4,6 +4,7 @@ setup(
name="nixos-test-driver",
version='1.1',
packages=find_packages(),
package_data={"test_driver": ["py.typed"]},
entry_points={
"console_scripts": [
"nixos-test-driver=test_driver:main",


@ -119,6 +119,7 @@ rec {
{
inherit testName;
nativeBuildInputs = [ makeWrapper mypy ];
buildInputs = [ testDriver ];
testScript = testScript';
preferLocalBuild = true;
passthru = passthru // {
@ -138,13 +139,10 @@ rec {
echo "${builtins.toString vlanNames}" >> testScriptWithTypes
echo -n "$testScript" >> testScriptWithTypes
# set pythonpath so mypy knows where to find the imports. this requires the py.typed file.
export PYTHONPATH='${./test-driver}'
mypy --no-implicit-optional \
--pretty \
--no-color-output \
testScriptWithTypes
unset PYTHONPATH
''}
echo -n "$testScript" >> $out/test-script


@ -46,9 +46,9 @@ in
type = with types; either str path;
default = "Lat2-Terminus16";
example = "LatArCyrHeb-16";
description = ''
description = mdDoc ''
The font used for the virtual consoles. Leave empty to use
whatever the <command>setfont</command> program considers the
whatever the {command}`setfont` program considers the
default font.
Can be either a font name or a path to a PSF font file.
'';


@ -9,21 +9,20 @@ with lib;
environment.enableDebugInfo = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Some NixOS packages provide debug symbols. However, these are
not included in the system closure by default to save disk
space. Enabling this option causes the debug symbols to appear
in <filename>/run/current-system/sw/lib/debug/.build-id</filename>,
where tools such as <command>gdb</command> can find them.
in {file}`/run/current-system/sw/lib/debug/.build-id`,
where tools such as {command}`gdb` can find them.
If you need debug symbols for a package that doesn't
provide them by default, you can enable them as follows:
<programlisting>
nixpkgs.config.packageOverrides = pkgs: {
hello = pkgs.hello.overrideAttrs (oldAttrs: {
separateDebugInfo = true;
});
};
</programlisting>
'';
};


@ -53,7 +53,8 @@ with lib;
supportedLocales = mkOption {
type = types.listOf types.str;
default = ["all"];
default = [ (config.i18n.defaultLocale + "/UTF-8") ];
defaultText = literalExpression "[ (config.i18n.defaultLocale + \"/UTF-8\") ]";
example = ["en_US.UTF-8/UTF-8" "nl_NL.UTF-8/UTF-8" "nl_NL/ISO-8859-1"];
description = ''
List of locales that the system should support. The value


@ -36,16 +36,13 @@ let
/plugin/;
/ {
compatible = "raspberrypi";
fragment@0 {
target-path = "/soc";
__overlay__ {
};
&{/soc} {
pps {
compatible = "pps-gpio";
status = "okay";
};
};
};
};
'';
};
@ -88,13 +85,14 @@ let
# Compile single Device Tree overlay source
# file (.dts) into its compiled variant (.dtbo)
compileDTS = name: f: pkgs.callPackage({ dtc }: pkgs.stdenv.mkDerivation {
compileDTS = name: f: pkgs.callPackage({ stdenv, dtc }: stdenv.mkDerivation {
name = "${name}-dtbo";
nativeBuildInputs = [ dtc ];
buildCommand = ''
dtc -I dts ${f} -O dtb -@ -o $out
$CC -E -nostdinc -I${getDev cfg.kernelPackage}/lib/modules/${cfg.kernelPackage.modDirVersion}/source/scripts/dtc/include-prefixes -undef -D__DTS__ -x assembler-with-cpp ${f} | \
dtc -I dts -O dtb -@ -o $out
'';
}) {};


@ -183,6 +183,14 @@ in
'';
example = literalExpression "config.boot.kernelPackages.nvidiaPackages.legacy_340";
};
hardware.nvidia.open = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Whether to use the open source kernel module
'';
};
};
config = let
@ -231,6 +239,11 @@ in
);
message = "Required files for driver based power management don't exist.";
}
{
assertion = cfg.open -> (cfg.package ? open && cfg.package ? firmware);
message = "This version of NVIDIA driver does not provide a corresponding opensource kernel driver";
}
];
# If Optimus/PRIME is enabled, we:
@ -364,7 +377,8 @@ in
++ optional (nvidia_x11.persistenced != null && config.virtualisation.docker.enableNvidia)
"L+ /run/nvidia-docker/extras/bin/nvidia-persistenced - - - - ${nvidia_x11.persistenced}/origBin/nvidia-persistenced";
boot.extraModulePackages = [ nvidia_x11.bin ];
boot.extraModulePackages = if cfg.open then [ nvidia_x11.open ] else [ nvidia_x11.bin ];
hardware.firmware = lib.optional cfg.open nvidia_x11.firmware;
# nvidia-uvm is required by CUDA applications.
boot.kernelModules = [ "nvidia-uvm" ] ++
@ -372,7 +386,8 @@ in
# If requested enable modesetting via kernel parameter.
boot.kernelParams = optional (offloadCfg.enable || cfg.modesetting.enable) "nvidia-drm.modeset=1"
++ optional cfg.powerManagement.enable "nvidia.NVreg_PreserveVideoMemoryAllocations=1";
++ optional cfg.powerManagement.enable "nvidia.NVreg_PreserveVideoMemoryAllocations=1"
++ optional cfg.open "nvidia.NVreg_OpenRmEnableUnsupportedGpus=1";
services.udev.extraRules =
''


@ -18,7 +18,8 @@
extraGSettingsOverrides = ''
[org.gnome.shell]
welcome-dialog-last-shown-version='9999999999'
[org.gnome.desktop.session]
idle-delay=0
[org.gnome.settings-daemon.plugins.power]
sleep-inactive-ac-type='nothing'
sleep-inactive-battery-type='nothing'


@ -18,7 +18,7 @@ with lib;
let
rootfsImage = pkgs.callPackage ../../../lib/make-ext4-fs.nix ({
inherit (config.sdImage) storePaths;
compressImage = true;
compressImage = config.sdImage.compressImage;
populateImageCommands = config.sdImage.populateRootCommands;
volumeLabel = "NIXOS_SD";
} // optionalAttrs (config.sdImage.rootPartitionUUID != null) {
@ -174,7 +174,8 @@ in
mtools, libfaketime, util-linux, zstd }: stdenv.mkDerivation {
name = config.sdImage.imageName;
nativeBuildInputs = [ dosfstools e2fsprogs mtools libfaketime util-linux zstd ];
nativeBuildInputs = [ dosfstools e2fsprogs libfaketime mtools util-linux ]
++ lib.optional config.sdImage.compressImage zstd;
inherit (config.sdImage) imageName compressImage;
@ -189,14 +190,18 @@ in
echo "file sd-image $img" >> $out/nix-support/hydra-build-products
fi
root_fs=${rootfsImage}
${lib.optionalString config.sdImage.compressImage ''
root_fs=./root-fs.img
echo "Decompressing rootfs image"
zstd -d --no-progress "${rootfsImage}" -o ./root-fs.img
zstd -d --no-progress "${rootfsImage}" -o $root_fs
''}
# Gap in front of the first partition, in MiB
gap=${toString config.sdImage.firmwarePartitionOffset}
# Create the image file sized to fit /boot/firmware and /, plus slack for the gap.
rootSizeBlocks=$(du -B 512 --apparent-size ./root-fs.img | awk '{ print $1 }')
rootSizeBlocks=$(du -B 512 --apparent-size $root_fs | awk '{ print $1 }')
firmwareSizeBlocks=$((${toString config.sdImage.firmwareSize} * 1024 * 1024 / 512))
imageSize=$((rootSizeBlocks * 512 + firmwareSizeBlocks * 512 + gap * 1024 * 1024))
truncate -s $imageSize $img
@ -214,7 +219,7 @@ in
# Copy the rootfs into the SD image
eval $(partx $img -o START,SECTORS --nr 2 --pairs)
dd conv=notrunc if=./root-fs.img of=$img seek=$START count=$SECTORS
dd conv=notrunc if=$root_fs of=$img seek=$START count=$SECTORS
# Create a FAT32 /boot/firmware partition of suitable size into firmware_part.img
eval $(partx $img -o START,SECTORS --nr 1 --pairs)


@ -84,6 +84,15 @@ sub debug {
}
# nixpkgs.system
my ($status, @systemLines) = runCommand("nix-instantiate --impure --eval --expr builtins.currentSystem");
if ($status != 0 || join("", @systemLines) =~ /error/) {
die "Failed to retrieve current system type from nix.\n";
}
chomp(my $system = $systemLines[0]);
push @attrs, "nixpkgs.hostPlatform = lib.mkDefault $system;";
my $cpuinfo = read_file "/proc/cpuinfo";

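With this change, the generated hardware-configuration.nix gains a host platform line roughly like the one below; the exact value depends on the machine running the tool.

    # excerpt of a generated hardware-configuration.nix (illustrative value)
    nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";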
View file

@ -178,19 +178,12 @@ in
man.generateCaches = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Whether to generate the manual page index caches.
This allows searching for a page or
keyword using utilities like
<citerefentry>
<refentrytitle>apropos</refentrytitle>
<manvolnum>1</manvolnum>
</citerefentry>
and the <literal>-k</literal> option of
<citerefentry>
<refentrytitle>man</refentrytitle>
<manvolnum>1</manvolnum>
</citerefentry>.
keyword using utilities like {manpage}`apropos(1)`
and the `-k` option of
{manpage}`man(1)`.
'';
};
@ -216,16 +209,14 @@ in
dev.enable = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Whether to install documentation targeted at developers.
<itemizedlist>
<listitem><para>This includes man pages targeted at developers if <option>documentation.man.enable</option> is
set (this also includes "devman" outputs).</para></listitem>
<listitem><para>This includes info pages targeted at developers if <option>documentation.info.enable</option>
is set (this also includes "devinfo" outputs).</para></listitem>
<listitem><para>This includes other pages targeted at developers if <option>documentation.doc.enable</option>
is set (this also includes "devdoc" outputs).</para></listitem>
</itemizedlist>
* This includes man pages targeted at developers if {option}`documentation.man.enable` is
set (this also includes "devman" outputs).
* This includes info pages targeted at developers if {option}`documentation.info.enable`
is set (this also includes "devinfo" outputs).
* This includes other pages targeted at developers if {option}`documentation.doc.enable`
is set (this also includes "devdoc" outputs).
'';
};

View file

@ -236,7 +236,7 @@ in
gitit = 202;
riemanntools = 203;
subsonic = 204;
riak = 205;
# riak = 205; # unused, remove 2022-07-22
#shout = 206; # dynamically allocated as of 2021-09-18
gateone = 207;
namecoin = 208;
@ -553,7 +553,7 @@ in
gitit = 202;
riemanntools = 203;
subsonic = 204;
riak = 205;
# riak = 205; # unused, removed 2022-06-22
#shout = 206; #unused
gateone = 207;
namecoin = 208;

View file

@ -23,11 +23,11 @@ in
++ lib.optionals config.documentation.dev.enable [ "devman" ];
ignoreCollisions = true;
};
defaultText = lib.literalDocBook "all man pages in <option>config.environment.systemPackages</option>";
description = ''
The manual pages to generate caches for if <option>documentation.man.generateCaches</option>
defaultText = lib.literalMD "all man pages in {option}`config.environment.systemPackages`";
description = lib.mdDoc ''
The manual pages to generate caches for if {option}`documentation.man.generateCaches`
is enabled. Must be a path to a directory with man pages under
<literal>/share/man</literal>; see the source for an example.
`/share/man`; see the source for an example.
Advanced users can make this a content-addressed derivation to save a few rebuilds.
'';
};

View file

@ -55,7 +55,44 @@ let
check = builtins.isAttrs;
};
defaultPkgs = import ../../.. {
hasBuildPlatform = opt.buildPlatform.highestPrio < (mkOptionDefault {}).priority;
hasHostPlatform = opt.hostPlatform.isDefined;
hasPlatform = hasHostPlatform || hasBuildPlatform;
# Context for messages
hostPlatformLine = optionalString hasHostPlatform "${showOptionWithDefLocs opt.hostPlatform}";
buildPlatformLine = optionalString hasBuildPlatform "${showOptionWithDefLocs opt.buildPlatform}";
platformLines = optionalString hasPlatform ''
Your system configuration configures nixpkgs with platform parameters:
${hostPlatformLine
}${buildPlatformLine
}'';
legacyOptionsDefined =
optional (opt.localSystem.highestPrio < (mkDefault {}).priority) opt.system
++ optional (opt.localSystem.highestPrio < (mkOptionDefault {}).priority) opt.localSystem
++ optional (opt.crossSystem.highestPrio < (mkOptionDefault {}).priority) opt.crossSystem
;
defaultPkgs =
if opt.hostPlatform.isDefined
then
let isCross = cfg.buildPlatform != cfg.hostPlatform;
systemArgs =
if isCross
then {
localSystem = cfg.buildPlatform;
crossSystem = cfg.hostPlatform;
}
else {
localSystem = cfg.hostPlatform;
};
in
import ../../.. ({
inherit (cfg) config overlays;
} // systemArgs)
else
import ../../.. {
inherit (cfg) config overlays localSystem crossSystem;
};
@ -157,6 +194,46 @@ in
'';
};
hostPlatform = mkOption {
type = types.either types.str types.attrs; # TODO utilize lib.systems.parsedPlatform
example = { system = "aarch64-linux"; config = "aarch64-unknown-linux-gnu"; };
# Make sure that the final value has all fields for sake of other modules
# referring to this. TODO make `lib.systems` itself use the module system.
apply = lib.systems.elaborate;
defaultText = literalExpression
''(import "''${nixos}/../lib").lib.systems.examples.aarch64-multiplatform'';
description = ''
Specifies the platform where the NixOS configuration will run.
To cross-compile, set also <code>nixpkgs.buildPlatform</code>.
Ignored when <code>nixpkgs.pkgs</code> is set.
'';
};
buildPlatform = mkOption {
type = types.either types.str types.attrs; # TODO utilize lib.systems.parsedPlatform
default = cfg.hostPlatform;
example = { system = "x86_64-linux"; config = "x86_64-unknown-linux-gnu"; };
# Make sure that the final value has all fields for sake of other modules
# referring to this.
apply = lib.systems.elaborate;
defaultText = literalExpression
''config.nixpkgs.hostPlatform'';
description = ''
Specifies the platform on which NixOS should be built.
By default, NixOS is built on the system where it runs, but you can
change where it's built. Setting this option will cause NixOS to be
cross-compiled.
For instance, if you're doing distributed multi-platform deployment,
or if you're building machines, you can set this to match your
development system and/or build farm.
Ignored when <code>nixpkgs.pkgs</code> is set.
'';
};
localSystem = mkOption {
type = types.attrs; # TODO utilize lib.systems.parsedPlatform
default = { inherit (cfg) system; };
@ -176,10 +253,13 @@ in
deployment, or when building virtual machines. See its
description in the Nixpkgs manual for more details.
Ignored when <code>nixpkgs.pkgs</code> is set.
Ignored when <code>nixpkgs.pkgs</code> or <code>hostPlatform</code> is set.
'';
};
# TODO deprecate. "crossSystem" is a nonsense identifier, because "cross"
# is a relation between at least 2 systems in the context of a
# specific build step, not a single system.
crossSystem = mkOption {
type = types.nullOr types.attrs; # TODO utilize lib.systems.parsedPlatform
default = null;
@ -193,7 +273,7 @@ in
should be set as null, the default. See its description in the
Nixpkgs manual for more details.
Ignored when <code>nixpkgs.pkgs</code> is set.
Ignored when <code>nixpkgs.pkgs</code> or <code>hostPlatform</code> is set.
'';
};
@ -216,8 +296,7 @@ in
</programlisting>
See <code>nixpkgs.localSystem</code> for more information.
Ignored when <code>nixpkgs.localSystem</code> is set.
Ignored when <code>nixpkgs.pkgs</code> is set.
Ignored when <code>nixpkgs.pkgs</code>, <code>nixpkgs.localSystem</code> or <code>nixpkgs.hostPlatform</code> is set.
'';
};
};
@ -240,10 +319,23 @@ in
else "nixpkgs.localSystem";
pkgsSystem = finalPkgs.stdenv.targetPlatform.system;
in {
assertion = nixosExpectedSystem == pkgsSystem;
assertion = !hasPlatform -> nixosExpectedSystem == pkgsSystem;
message = "The NixOS nixpkgs.pkgs option was set to a Nixpkgs invocation that compiles to target system ${pkgsSystem} but NixOS was configured for system ${nixosExpectedSystem} via NixOS option ${nixosOption}. The NixOS system settings must match the Nixpkgs target system.";
}
)
{
assertion = hasPlatform -> legacyOptionsDefined == [];
message = ''
Your system configures nixpkgs with the platform parameter${optionalString hasBuildPlatform "s"}:
${hostPlatformLine
}${buildPlatformLine
}
However, it also defines the legacy options:
${concatMapStrings showOptionWithDefLocs legacyOptionsDefined}
For a future-proof system configuration, we recommend removing
the legacy definitions.
'';
}
];
};

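Taken together, these options supersede the `system`/`localSystem`/`crossSystem` trio. A sketch of the intended usage, under the assumption that the option paths land exactly as defined above:

    {
      # native configuration: one platform is enough
      nixpkgs.hostPlatform = "aarch64-linux";

      # cross compilation: additionally pin the platform the build runs on
      # nixpkgs.buildPlatform = "x86_64-linux";
    }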
View file

@ -1,8 +1,63 @@
{ evalMinimalConfig, pkgs, lib, stdenv }:
let
eval = mod: evalMinimalConfig {
imports = [ ../nixpkgs.nix mod ];
};
withHost = eval {
nixpkgs.hostPlatform = "aarch64-linux";
};
withHostAndBuild = eval {
nixpkgs.hostPlatform = "aarch64-linux";
nixpkgs.buildPlatform = "aarch64-darwin";
};
ambiguous = {
_file = "ambiguous.nix";
nixpkgs.hostPlatform = "aarch64-linux";
nixpkgs.buildPlatform = "aarch64-darwin";
nixpkgs.system = "x86_64-linux";
nixpkgs.localSystem.system = "x86_64-darwin";
nixpkgs.crossSystem.system = "i686-linux";
imports = [
{ _file = "repeat.nix";
nixpkgs.hostPlatform = "aarch64-linux";
}
];
};
getErrors = module:
let
uncheckedEval = lib.evalModules { modules = [ ../nixpkgs.nix module ]; };
in map (ass: ass.message) (lib.filter (ass: !ass.assertion) uncheckedEval.config.assertions);
in
lib.recurseIntoAttrs {
invokeNixpkgsSimple =
(evalMinimalConfig ({ config, modulesPath, ... }: {
imports = [ (modulesPath + "/misc/nixpkgs.nix") ];
(eval {
nixpkgs.system = stdenv.hostPlatform.system;
}))._module.args.pkgs.hello;
})._module.args.pkgs.hello;
assertions =
assert withHost._module.args.pkgs.stdenv.hostPlatform.system == "aarch64-linux";
assert withHost._module.args.pkgs.stdenv.buildPlatform.system == "aarch64-linux";
assert withHostAndBuild._module.args.pkgs.stdenv.hostPlatform.system == "aarch64-linux";
assert withHostAndBuild._module.args.pkgs.stdenv.buildPlatform.system == "aarch64-darwin";
assert builtins.trace (lib.head (getErrors ambiguous))
getErrors ambiguous ==
[''
Your system configures nixpkgs with the platform parameters:
nixpkgs.hostPlatform, with values defined in:
- repeat.nix
- ambiguous.nix
nixpkgs.buildPlatform, with values defined in:
- ambiguous.nix
However, it also defines the legacy options:
nixpkgs.system, with values defined in:
- ambiguous.nix
nixpkgs.localSystem, with values defined in:
- ambiguous.nix
nixpkgs.crossSystem, with values defined in:
- ambiguous.nix
For a future-proof system configuration, we recommend removing
the legacy definitions.
''];
pkgs.emptyFile;
}

View file

@ -141,7 +141,6 @@
./programs/cdemu.nix
./programs/cfs-zen-tweaks.nix
./programs/chromium.nix
./programs/clickshare.nix
./programs/cnping.nix
./programs/command-not-found/command-not-found.nix
./programs/criu.nix
@ -366,7 +365,6 @@
./services/databases/pgmanage.nix
./services/databases/postgresql.nix
./services/databases/redis.nix
./services/databases/riak.nix
./services/databases/victoriametrics.nix
./services/desktops/accountsservice.nix
./services/desktops/bamf.nix
@ -431,6 +429,7 @@
./services/games/terraria.nix
./services/hardware/acpid.nix
./services/hardware/actkbd.nix
./services/hardware/argonone.nix
./services/hardware/auto-cpufreq.nix
./services/hardware/bluetooth.nix
./services/hardware/bolt.nix
@ -516,6 +515,7 @@
./services/mail/rspamd.nix
./services/mail/rss2email.nix
./services/mail/roundcube.nix
./services/mail/schleuder.nix
./services/mail/sympa.nix
./services/mail/nullmailer.nix
./services/matrix/appservice-discord.nix
@ -894,6 +894,7 @@
./services/networking/redsocks.nix
./services/networking/resilio.nix
./services/networking/robustirc-bridge.nix
./services/networking/routedns.nix
./services/networking/rpcbind.nix
./services/networking/rxe.nix
./services/networking/sabnzbd.nix

View file

@ -8,9 +8,6 @@ with lib;
{
environment.noXlibs = mkDefault true;
# This isn't perfect, but let's assume the user specifies a UTF-8 defaultLocale
i18n.supportedLocales = [ (config.i18n.defaultLocale + "/UTF-8") ];
documentation.enable = mkDefault false;
documentation.nixos.enable = mkDefault false;

View file

@ -1,21 +0,0 @@
{ config, lib, pkgs, ... }:
{
options.programs.clickshare-csc1.enable =
lib.options.mkEnableOption ''
Barco ClickShare CSC-1 driver/client.
This allows users in the <literal>clickshare</literal>
group to access and use a ClickShare USB dongle
that is connected to the machine
'';
config = lib.modules.mkIf config.programs.clickshare-csc1.enable {
environment.systemPackages = [ pkgs.clickshare-csc1 ];
services.udev.packages = [ pkgs.clickshare-csc1 ];
users.groups.clickshare = {};
};
meta.maintainers = [ lib.maintainers.yarny ];
}

View file

@ -97,6 +97,7 @@ with lib;
(mkRemovedOptionModule [ "services" "gogoclient" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "virtuoso" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "openfire" ] "The corresponding package was removed from nixpkgs.")
(mkRemovedOptionModule [ "services" "riak" ] "The corresponding package was removed from nixpkgs.")
# Do NOT add any option renames here, see top of the file
];

View file

@ -22,16 +22,17 @@ in {
options.confinement.fullUnit = lib.mkOption {
type = types.bool;
default = false;
description = ''
description = lib.mdDoc ''
Whether to include the full closure of the systemd unit file into the
chroot, instead of just the dependencies for the executables.
<warning><para>While it may be tempting to just enable this option to
::: {.warning}
While it may be tempting to just enable this option to
make things work quickly, please be aware that this might add paths
to the closure of the chroot that you didn't anticipate. It's better
to use <option>confinement.packages</option> to <emphasis
role="strong">explicitly</emphasis> add additional store paths to the
chroot.</para></warning>
to use {option}`confinement.packages` to **explicitly** add additional store paths to the
chroot.
:::
'';
};

View file

@ -45,6 +45,8 @@ in {
RootDirectory = "/run/navidrome";
ReadWritePaths = "";
BindReadOnlyPaths = [
# navidrome uses online services to download additional album metadata / covers
"${config.environment.etc."ssl/certs/ca-certificates.crt".source}:/etc/ssl/certs/ca-certificates.crt"
builtins.storeDir
] ++ lib.optional (cfg.settings ? MusicFolder) cfg.settings.MusicFolder;
CapabilityBoundingSet = "";

View file

@ -3,8 +3,14 @@
with lib;
let
cfg = config.services.k3s;
removeOption = config: instruction:
lib.mkRemovedOptionModule ([ "services" "k3s" ] ++ config) instruction;
in
{
imports = [
(removeOption [ "docker" ] "k3s docker option is no longer supported.")
];
# interface
options.services.k3s = {
enable = mkEnableOption "k3s";
@ -48,12 +54,6 @@ in
default = null;
};
docker = mkOption {
type = types.bool;
default = false;
description = "Use docker to run containers rather than the built-in containerd.";
};
extraFlags = mkOption {
description = "Extra flags to pass to the k3s command.";
type = types.str;
@ -88,14 +88,11 @@ in
}
];
virtualisation.docker = mkIf cfg.docker {
enable = mkDefault true;
};
environment.systemPackages = [ config.services.k3s.package ];
systemd.services.k3s = {
description = "k3s service";
after = [ "network.service" "firewall.service" ] ++ (optional cfg.docker "docker.service");
after = [ "network.service" "firewall.service" ];
wants = [ "network.service" "firewall.service" ];
wantedBy = [ "multi-user.target" ];
path = optional config.boot.zfs.enabled config.boot.zfs.package;
@ -113,8 +110,8 @@ in
ExecStart = concatStringsSep " \\\n " (
[
"${cfg.package}/bin/k3s ${cfg.role}"
] ++ (optional cfg.docker "--docker")
++ (optional (cfg.docker && config.systemd.enableUnifiedCgroupHierarchy) "--kubelet-arg=cgroup-driver=systemd")
]
++ (optional (config.systemd.enableUnifiedCgroupHierarchy) "--kubelet-arg=cgroup-driver=systemd")
++ (optional cfg.disableAgent "--disable-agent")
++ (optional (cfg.serverAddr != "") "--server ${cfg.serverAddr}")
++ (optional (cfg.token != "") "--token ${cfg.token}")

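A configuration that previously relied on the removed flag now simply drops it; the sketch below assumes the existing role option keeps its shape.

    {
      services.k3s.enable = true;
      services.k3s.role = "server";
      # services.k3s.docker = true;   # removed option; containerd is always used now, and
      #                               # setting it triggers the mkRemovedOptionModule message above
    }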
View file

@ -4,11 +4,12 @@ let
inherit (lib)
concatStringsSep
flip
literalDocBook
literalMD
literalExpression
optionalAttrs
optionals
recursiveUpdate
mdDoc
mkEnableOption
mkIf
mkOption
@ -107,7 +108,7 @@ in
clusterName = mkOption {
type = types.str;
default = "Test Cluster";
description = ''
description = mdDoc ''
The name of the cluster.
This setting prevents nodes in one logical cluster from joining
another. All nodes in a cluster must have the same value.
@ -117,19 +118,19 @@ in
user = mkOption {
type = types.str;
default = defaultUser;
description = "Run Apache Cassandra under this user.";
description = mdDoc "Run Apache Cassandra under this user.";
};
group = mkOption {
type = types.str;
default = defaultUser;
description = "Run Apache Cassandra under this group.";
description = mdDoc "Run Apache Cassandra under this group.";
};
homeDir = mkOption {
type = types.path;
default = "/var/lib/cassandra";
description = ''
description = mdDoc ''
Home directory for Apache Cassandra.
'';
};
@ -139,7 +140,7 @@ in
default = pkgs.cassandra;
defaultText = literalExpression "pkgs.cassandra";
example = literalExpression "pkgs.cassandra_3_11";
description = ''
description = mdDoc ''
The Apache Cassandra package to use.
'';
};
@ -147,8 +148,8 @@ in
jvmOpts = mkOption {
type = types.listOf types.str;
default = [ ];
description = ''
Populate the JVM_OPT environment variable.
description = mdDoc ''
Populate the `JVM_OPT` environment variable.
'';
};
@ -156,20 +157,20 @@ in
type = types.nullOr types.str;
default = "127.0.0.1";
example = null;
description = ''
description = mdDoc ''
Address or interface to bind to and tell other Cassandra nodes
to connect to. You _must_ change this if you want multiple
nodes to be able to communicate!
Set listenAddress OR listenInterface, not both.
Set {option}`listenAddress` OR {option}`listenInterface`, not both.
Leaving it blank leaves it up to
InetAddress.getLocalHost(). This will always do the Right
Thing _if_ the node is properly configured (hostname, name
`InetAddress.getLocalHost()`. This will always do the "Right
Thing" _if_ the node is properly configured (hostname, name
resolution, etc), and the Right Thing is to use the address
associated with the hostname (it might not be).
Setting listen_address to 0.0.0.0 is always wrong.
Setting {option}`listenAddress` to `0.0.0.0` is always wrong.
'';
};
@ -177,8 +178,8 @@ in
type = types.nullOr types.str;
default = null;
example = "eth1";
description = ''
Set listenAddress OR listenInterface, not both. Interfaces
description = mdDoc ''
Set `listenAddress` OR `listenInterface`, not both. Interfaces
must correspond to a single address, IP aliasing is not
supported.
'';
@ -188,18 +189,18 @@ in
type = types.nullOr types.str;
default = "127.0.0.1";
example = null;
description = ''
description = mdDoc ''
The address or interface to bind the native transport server to.
Set rpcAddress OR rpcInterface, not both.
Set {option}`rpcAddress` OR {option}`rpcInterface`, not both.
Leaving rpcAddress blank has the same effect as on
listenAddress (i.e. it will be based on the configured hostname
Leaving {option}`rpcAddress` blank has the same effect as on
{option}`listenAddress` (i.e. it will be based on the configured hostname
of the node).
Note that unlike listenAddress, you can specify 0.0.0.0, but you
must also set extraConfig.broadcast_rpc_address to a value other
than 0.0.0.0.
Note that unlike {option}`listenAddress`, you can specify `"0.0.0.0"`, but you
must also set `extraConfig.broadcast_rpc_address` to a value other
than `"0.0.0.0"`.
For security reasons, you should not expose this port to the
internet. Firewall it if needed.
@ -210,8 +211,8 @@ in
type = types.nullOr types.str;
default = null;
example = "eth1";
description = ''
Set rpcAddress OR rpcInterface, not both. Interfaces must
description = mdDoc ''
Set {option}`rpcAddress` OR {option}`rpcInterface`, not both. Interfaces must
correspond to a single address, IP aliasing is not supported.
'';
};
@ -233,7 +234,7 @@ in
<logger name="com.thinkaurelius.thrift" level="ERROR"/>
</configuration>
'';
description = ''
description = mdDoc ''
XML logback configuration for cassandra
'';
};
@ -241,24 +242,24 @@ in
seedAddresses = mkOption {
type = types.listOf types.str;
default = [ "127.0.0.1" ];
description = ''
description = mdDoc ''
The addresses of hosts designated as contact points in the cluster. A
joining node contacts one of the nodes in the seeds list to learn the
topology of the ring.
Set to 127.0.0.1 for a single node cluster.
Set to `[ "127.0.0.1" ]` for a single node cluster.
'';
};
allowClients = mkOption {
type = types.bool;
default = true;
description = ''
description = mdDoc ''
Enables or disables the native transport server (CQL binary protocol).
This server uses the same address as the <literal>rpcAddress</literal>,
but the port it uses is not <literal>rpc_port</literal> but
<literal>native_transport_port</literal>. See the official Cassandra
This server uses the same address as the {option}`rpcAddress`,
but the port it uses is not `rpc_port` but
`native_transport_port`. See the official Cassandra
docs for more information on these variables and set them using
<literal>extraConfig</literal>.
{option}`extraConfig`.
'';
};
@ -269,8 +270,8 @@ in
{
commitlog_sync_batch_window_in_ms = 3;
};
description = ''
Extra options to be merged into cassandra.yaml as nix attribute set.
description = mdDoc ''
Extra options to be merged into {file}`cassandra.yaml` as nix attribute set.
'';
};
@ -278,8 +279,8 @@ in
type = types.lines;
default = "";
example = literalExpression ''"CLASSPATH=$CLASSPATH:''${extraJar}"'';
description = ''
Extra shell lines to be appended onto cassandra-env.sh.
description = mdDoc ''
Extra shell lines to be appended onto {file}`cassandra-env.sh`.
'';
};
@ -287,13 +288,13 @@ in
type = types.nullOr types.str;
default = "3w";
example = null;
description = ''
description = mdDoc ''
Set the interval how often full repairs are run, i.e.
<literal>nodetool repair --full</literal> is executed. See
https://cassandra.apache.org/doc/latest/operating/repair.html
{command}`nodetool repair --full` is executed. See
<https://cassandra.apache.org/doc/latest/operating/repair.html>
for more information.
Set to <literal>null</literal> to disable full repairs.
Set to `null` to disable full repairs.
'';
};
@ -301,7 +302,7 @@ in
type = types.listOf types.str;
default = [ ];
example = [ "--partitioner-range" ];
description = ''
description = mdDoc ''
Options passed through to the full repair command.
'';
};
@ -310,13 +311,13 @@ in
type = types.nullOr types.str;
default = "3d";
example = null;
description = ''
description = mdDoc ''
Set the interval how often incremental repairs are run, i.e.
<literal>nodetool repair</literal> is executed. See
https://cassandra.apache.org/doc/latest/operating/repair.html
{command}`nodetool repair` is executed. See
<https://cassandra.apache.org/doc/latest/operating/repair.html>
for more information.
Set to <literal>null</literal> to disable incremental repairs.
Set to `null` to disable incremental repairs.
'';
};
@ -324,7 +325,7 @@ in
type = types.listOf types.str;
default = [ ];
example = [ "--partitioner-range" ];
description = ''
description = mdDoc ''
Options passed through to the incremental repair command.
'';
};
@ -333,15 +334,15 @@ in
type = types.nullOr types.str;
default = null;
example = "4G";
description = ''
Must be left blank or set together with heapNewSize.
description = mdDoc ''
Must be left blank or set together with {option}`heapNewSize`.
If left blank a sensible value for the available amount of RAM and CPU
cores is calculated.
Override to set the amount of memory to allocate to the JVM at
start-up. For production use you may wish to adjust this for your
environment. MAX_HEAP_SIZE is the total amount of memory dedicated
to the Java heap. HEAP_NEWSIZE refers to the size of the young
environment. `MAX_HEAP_SIZE` is the total amount of memory dedicated
to the Java heap. `HEAP_NEWSIZE` refers to the size of the young
generation.
The main trade-off for the young generation is that the larger it
@ -354,21 +355,21 @@ in
type = types.nullOr types.str;
default = null;
example = "800M";
description = ''
Must be left blank or set together with heapNewSize.
description = mdDoc ''
Must be left blank or set together with {option}`maxHeapSize`.
If left blank a sensible value for the available amount of RAM and CPU
cores is calculated.
Override to set the amount of memory to allocate to the JVM at
start-up. For production use you may wish to adjust this for your
environment. HEAP_NEWSIZE refers to the size of the young
environment. `HEAP_NEWSIZE` refers to the size of the young
generation.
The main trade-off for the young generation is that the larger it
is, the longer GC pause times will be. The shorter it is, the more
expensive GC will be (usually).
The example HEAP_NEWSIZE assumes a modern 8-core+ machine for decent pause
The example `HEAP_NEWSIZE` assumes a modern 8-core+ machine for decent pause
times. If in doubt, and if you do not particularly want to tweak, go with
100 MB per physical CPU core.
'';
@ -378,7 +379,7 @@ in
type = types.nullOr types.int;
default = null;
example = 4;
description = ''
description = mdDoc ''
Set this to control the amount of arenas per-thread in glibc.
'';
};
@ -386,19 +387,19 @@ in
remoteJmx = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Cassandra ships with JMX accessible *only* from localhost.
To enable remote JMX connections, set this to true.
Be sure to also enable authentication and/or TLS.
See: https://wiki.apache.org/cassandra/JmxSecurity
See: <https://wiki.apache.org/cassandra/JmxSecurity>
'';
};
jmxPort = mkOption {
type = types.int;
default = 7199;
description = ''
description = mdDoc ''
Specifies the default port over which Cassandra will be available for
JMX connections.
For security reasons, you should not expose this port to the internet.
@ -408,11 +409,11 @@ in
jmxRoles = mkOption {
default = [ ];
description = ''
Roles that are allowed to access the JMX (e.g. nodetool)
BEWARE: The passwords will be stored world readable in the nix-store.
description = mdDoc ''
Roles that are allowed to access the JMX (e.g. {command}`nodetool`).
BEWARE: The passwords will be stored world-readable in the nix store.
It's recommended to use your own protected file using
<literal>jmxRolesFile</literal>
{option}`jmxRolesFile`
Doesn't work in versions older than 3.11 because they don't like that
it's world readable.
@ -437,7 +438,7 @@ in
if versionAtLeast cfg.package.version "3.11"
then pkgs.writeText "jmx-roles-file" defaultJmxRolesFile
else null;
defaultText = literalDocBook ''generated configuration file if version is at least 3.11, otherwise <literal>null</literal>'';
defaultText = literalMD ''generated configuration file if version is at least 3.11, otherwise `null`'';
example = "/var/lib/cassandra/jmx.password";
description = ''
Specify your own jmx roles file.

View file

@ -1,162 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.riak;
in
{
###### interface
options = {
services.riak = {
enable = mkEnableOption "riak";
package = mkOption {
type = types.package;
default = pkgs.riak;
defaultText = literalExpression "pkgs.riak";
description = ''
Riak package to use.
'';
};
nodeName = mkOption {
type = types.str;
default = "riak@127.0.0.1";
description = ''
Name of the Erlang node.
'';
};
distributedCookie = mkOption {
type = types.str;
default = "riak";
description = ''
Cookie for distributed node communication. All nodes in the
same cluster should use the same cookie or they will not be able to
communicate.
'';
};
dataDir = mkOption {
type = types.path;
default = "/var/db/riak";
description = ''
Data directory for Riak.
'';
};
logDir = mkOption {
type = types.path;
default = "/var/log/riak";
description = ''
Log directory for Riak.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Additional text to be appended to <filename>riak.conf</filename>.
'';
};
extraAdvancedConfig = mkOption {
type = types.lines;
default = "";
description = ''
Additional text to be appended to <filename>advanced.config</filename>.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
environment.etc."riak/riak.conf".text = ''
nodename = ${cfg.nodeName}
distributed_cookie = ${cfg.distributedCookie}
platform_log_dir = ${cfg.logDir}
platform_etc_dir = /etc/riak
platform_data_dir = ${cfg.dataDir}
${cfg.extraConfig}
'';
environment.etc."riak/advanced.config".text = ''
${cfg.extraAdvancedConfig}
'';
users.users.riak = {
name = "riak";
uid = config.ids.uids.riak;
group = "riak";
description = "Riak server user";
};
users.groups.riak.gid = config.ids.gids.riak;
systemd.services.riak = {
description = "Riak Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [
pkgs.util-linux # for `logger`
pkgs.bash
];
environment.HOME = "${cfg.dataDir}";
environment.RIAK_DATA_DIR = "${cfg.dataDir}";
environment.RIAK_LOG_DIR = "${cfg.logDir}";
environment.RIAK_ETC_DIR = "/etc/riak";
preStart = ''
if ! test -e ${cfg.logDir}; then
mkdir -m 0755 -p ${cfg.logDir}
chown -R riak ${cfg.logDir}
fi
if ! test -e ${cfg.dataDir}; then
mkdir -m 0700 -p ${cfg.dataDir}
chown -R riak ${cfg.dataDir}
fi
'';
serviceConfig = {
ExecStart = "${cfg.package}/bin/riak console";
ExecStop = "${cfg.package}/bin/riak stop";
StandardInput = "tty";
User = "riak";
Group = "riak";
PermissionsStartOnly = true;
# Give Riak a decent amount of time to clean up.
TimeoutStopSec = 120;
LimitNOFILE = 65536;
};
unitConfig.RequiresMountsFor = [
"${cfg.dataDir}"
"${cfg.logDir}"
"/etc/riak"
];
};
};
}

View file

@ -42,6 +42,14 @@ in
alsa_monitor.enable = function() end
'';
};
environment.etc."wireplumber/main.lua.d/80-systemwide.lua" = lib.mkIf config.services.pipewire.systemWide {
text = ''
-- When running system-wide, these settings need to be disabled (they
-- use functions that aren't available on the system dbus).
alsa_monitor.properties["alsa.reserve"] = false
default_access.properties["enable-flatpak-portal"] = false
'';
};
systemd.packages = [ cfg.package ];
@ -50,5 +58,10 @@ in
systemd.services.wireplumber.wantedBy = [ "pipewire.service" ];
systemd.user.services.wireplumber.wantedBy = [ "pipewire.service" ];
systemd.services.wireplumber.environment = lib.mkIf config.services.pipewire.systemWide {
# Force wireplumber to use system dbus.
DBUS_SESSION_BUS_ADDRESS = "unix:path=/run/dbus/system_bus_socket";
};
};
}

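The overrides above only take effect for a system-wide PipeWire setup; a minimal sketch of such a configuration, assuming the usual services.pipewire option names:

    {
      services.pipewire.enable = true;
      services.pipewire.systemWide = true;            # activates the 80-systemwide.lua overrides above
      services.pipewire.wireplumber.enable = true;    # session manager; option name assumed
    }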
View file

@ -0,0 +1,58 @@
{ config, lib, pkgs, ... }:
let
cfg = config.services.hardware.argonone;
in
{
options.services.hardware.argonone = {
enable = lib.mkEnableOption "the driver for Argon One Raspberry Pi case fan and power button";
package = lib.mkOption {
type = lib.types.package;
default = pkgs.argononed;
defaultText = "pkgs.argononed";
description = ''
The package implementing the Argon One driver
'';
};
};
config = lib.mkIf cfg.enable {
hardware.i2c.enable = true;
hardware.deviceTree.overlays = [
{
name = "argononed";
dtboFile = "${cfg.package}/boot/overlays/argonone.dtbo";
}
{
name = "i2c1-okay-overlay";
dtsText = ''
/dts-v1/;
/plugin/;
/ {
compatible = "brcm,bcm2711";
fragment@0 {
target = <&i2c1>;
__overlay__ {
status = "okay";
};
};
};
'';
}
];
environment.systemPackages = [ cfg.package ];
systemd.services.argononed = {
description = "Argon One Raspberry Pi case Daemon Service";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "forking";
ExecStart = "${cfg.package}/bin/argononed";
PIDFile = "/run/argononed.pid";
Restart = "on-failure";
};
};
};
meta.maintainers = with lib.maintainers; [ misterio77 ];
}

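Enabling the new module on a Raspberry Pi 4 host is a one-liner; the device tree overlays, i2c support and the systemd unit all follow from the config block above.

    {
      services.hardware.argonone.enable = true;
      # services.hardware.argonone.package = pkgs.argononed;   # default, shown for completeness
    }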
View file

@ -369,6 +369,17 @@ in {
networking.firewall.allowedTCPPorts = mkIf cfg.openFirewall [ cfg.config.http.server_port ];
# symlink the configuration to /etc/home-assistant
environment.etc = lib.mkMerge [
(lib.mkIf (cfg.config != null && !cfg.configWritable) {
"home-assistant/configuration.yaml".source = configFile;
})
(lib.mkIf (cfg.lovelaceConfig != null && !cfg.lovelaceConfigWritable) {
"home-assistant/ui-lovelace.yaml".source = lovelaceConfigFile;
})
];
systemd.services.home-assistant = {
description = "Home Assistant";
after = [
@ -378,18 +389,22 @@ in {
"mysql.service"
"postgresql.service"
];
reloadTriggers = [
configFile
lovelaceConfigFile
];
preStart = let
copyConfig = if cfg.configWritable then ''
cp --no-preserve=mode ${configFile} "${cfg.configDir}/configuration.yaml"
'' else ''
rm -f "${cfg.configDir}/configuration.yaml"
ln -s ${configFile} "${cfg.configDir}/configuration.yaml"
ln -s /etc/home-assistant/configuration.yaml "${cfg.configDir}/configuration.yaml"
'';
copyLovelaceConfig = if cfg.lovelaceConfigWritable then ''
cp --no-preserve=mode ${lovelaceConfigFile} "${cfg.configDir}/ui-lovelace.yaml"
'' else ''
rm -f "${cfg.configDir}/ui-lovelace.yaml"
ln -s ${lovelaceConfigFile} "${cfg.configDir}/ui-lovelace.yaml"
ln -s /etc/home-assistant/ui-lovelace.yaml "${cfg.configDir}/ui-lovelace.yaml"
'';
in
(optionalString (cfg.config != null) copyConfig) +

View file

@ -0,0 +1,162 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.schleuder;
settingsFormat = pkgs.formats.yaml { };
postfixMap = entries: lib.concatStringsSep "\n" (lib.mapAttrsToList (name: value: "${name} ${value}") entries);
writePostfixMap = name: entries: pkgs.writeText name (postfixMap entries);
configScript = pkgs.writeScript "schleuder-cfg" ''
#!${pkgs.runtimeShell}
set -exuo pipefail
umask 0077
${pkgs.yq}/bin/yq \
--slurpfile overrides <(${pkgs.yq}/bin/yq . <${lib.escapeShellArg cfg.extraSettingsFile}) \
< ${settingsFormat.generate "schleuder.yml" cfg.settings} \
'. * $overrides[0]' \
> /etc/schleuder/schleuder.yml
chown schleuder: /etc/schleuder/schleuder.yml
'';
in
{
options.services.schleuder = {
enable = lib.mkEnableOption "Schleuder secure remailer";
enablePostfix = lib.mkEnableOption "automatic postfix integration" // { default = true; };
lists = lib.mkOption {
description = ''
List of list addresses that should be handled by Schleuder.
Note that this is only handled by the postfix integration, and
the setup of the lists, their members and their keys has to be
performed separately via schleuder's API, using a tool such as
schleuder-cli.
'';
type = lib.types.listOf lib.types.str;
default = [ ];
example = [ "widget-team@example.com" "security@example.com" ];
};
/* maybe one day....
domains = lib.mkOption {
description = "Domains for which all mail should be handled by Schleuder.";
type = lib.types.listOf lib.types.str;
default = [];
example = ["securelists.example.com"];
};
*/
settings = lib.mkOption {
description = ''
Settings for schleuder.yml.
Check the <link xlink:href="https://0xacab.org/schleuder/schleuder/blob/master/etc/schleuder.yml">example configuration</link> for possible values.
'';
type = lib.types.submodule {
freeformType = settingsFormat.type;
options.keyserver = lib.mkOption {
type = lib.types.str;
description = ''
Key server from which to fetch and update keys.
Note that NixOS uses a different default from upstream, since the upstream default sks-keyservers.net is deprecated.
'';
default = "keys.openpgp.org";
};
};
default = { };
};
extraSettingsFile = lib.mkOption {
description = "YAML file to merge into the schleuder config at runtime. This can be used for secrets such as API keys.";
type = lib.types.nullOr lib.types.path;
default = null;
};
listDefaults = lib.mkOption {
description = ''
Default settings for lists (list-defaults.yml).
Check the <link xlink:href="https://0xacab.org/schleuder/schleuder/-/blob/master/etc/list-defaults.yml">example configuration</link> for possible values.
'';
type = settingsFormat.type;
default = { };
};
};
config = lib.mkIf cfg.enable {
assertions = [
{
assertion = !(cfg.settings.api ? valid_api_keys);
message = ''
services.schleuder.settings.api.valid_api_keys is set. Defining API keys via NixOS config results in them being copied to the world-readable Nix store. Please use the extraSettingsFile option to store API keys in a non-public location.
'';
}
{
assertion = !(lib.any (db: db ? password) (lib.attrValues cfg.settings.database or {}));
message = ''
A password is defined for at least one database in services.schleuder.settings.database. Defining passwords via NixOS config results in them being copied to the world-readable Nix store. Please use the extraSettingsFile option to store database passwords in a non-public location.
'';
}
];
users.users.schleuder.isSystemUser = true;
users.users.schleuder.group = "schleuder";
users.groups.schleuder = {};
environment.systemPackages = [
pkgs.schleuder-cli
];
services.postfix = lib.mkIf cfg.enablePostfix {
extraMasterConf = ''
schleuder unix - n n - - pipe
flags=DRhu user=schleuder argv=/${pkgs.schleuder}/bin/schleuder work ''${recipient}
'';
transport = lib.mkIf (cfg.lists != [ ]) (postfixMap (lib.genAttrs cfg.lists (_: "schleuder:")));
extraConfig = ''
schleuder_destination_recipient_limit = 1
'';
# review: does this make sense?
localRecipients = lib.mkIf (cfg.lists != [ ]) cfg.lists;
};
systemd.services = let commonServiceConfig = {
# We would have liked to use DynamicUser, but since the default
# database is SQLite and lives in StateDirectory, and that same
# database needs to be readable from the postfix service, this
# isn't trivial to do.
User = "schleuder";
StateDirectory = "schleuder";
StateDirectoryMode = "0700";
}; in
{
schleuder-init = {
serviceConfig = commonServiceConfig // {
ExecStartPre = lib.mkIf (cfg.extraSettingsFile != null) [
"+${configScript}"
];
ExecStart = [ "${pkgs.schleuder}/bin/schleuder install" ];
Type = "oneshot";
};
};
schleuder-api-daemon = {
after = [ "local-fs.target" "network.target" "schleuder-init.service" ];
wantedBy = [ "multi-user.target" ];
requires = [ "schleuder-init.service" ];
serviceConfig = commonServiceConfig // {
ExecStart = [ "${pkgs.schleuder}/bin/schleuder-api-daemon" ];
};
};
schleuder-weekly-key-maintenance = {
after = [ "local-fs.target" "network.target" ];
startAt = "weekly";
serviceConfig = commonServiceConfig // {
ExecStart = [
"${pkgs.schleuder}/bin/schleuder refresh_keys"
"${pkgs.schleuder}/bin/schleuder check_keys"
];
};
};
};
environment.etc."schleuder/schleuder.yml" = lib.mkIf (cfg.extraSettingsFile == null) {
source = settingsFormat.generate "schleuder.yml" cfg.settings;
};
environment.etc."schleuder/list-defaults.yml".source = settingsFormat.generate "list-defaults.yml" cfg.listDefaults;
services.schleuder = {
#lists_dir = "/var/lib/schleuder.lists";
settings.filters_dir = lib.mkDefault "/var/lib/schleuder/filters";
settings.keyword_handlers_dir = lib.mkDefault "/var/lib/schleuder/keyword_handlers";
};
};
}

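A minimal deployment of the new module could look like the following sketch; the list address and secrets file path are placeholders, and the keyserver default comes from the option definition above.

    {
      services.schleuder.enable = true;
      services.schleuder.lists = [ "security@example.com" ];
      # keep API keys and database passwords out of the world-readable store,
      # as the assertions above require
      services.schleuder.extraSettingsFile = "/var/lib/secrets/schleuder.yml";
    }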
View file

@ -153,6 +153,9 @@ in {
systemd.services.matrix-appservice-irc = {
description = "Matrix-IRC bridge";
before = [ "matrix-synapse.service" ]; # So the registration can be used by Synapse
after = lib.optionals (cfg.settings.database.engine == "postgres") [
"postgresql.service"
];
wantedBy = [ "multi-user.target" ];
preStart = ''

View file

@ -191,12 +191,12 @@ in {
settings = mkOption {
default = {};
description = ''
description = mdDoc ''
The primary synapse configuration. See the
<link xlink:href="https://github.com/matrix-org/synapse/blob/v${cfg.package.version}/docs/sample_config.yaml">sample configuration</link>
[sample configuration](https://github.com/matrix-org/synapse/blob/v${cfg.package.version}/docs/sample_config.yaml)
for possible values.
Secrets should be passed in by using the <literal>extraConfigFiles</literal> option.
Secrets should be passed in by using the `extraConfigFiles` option.
'';
type = with types; submodule {
freeformType = format.type;
@ -230,23 +230,23 @@ in {
registration_shared_secret = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
description = mdDoc ''
If set, allows registration by anyone who also has the shared
secret, even if registration is otherwise disabled.
Secrets should be passed in via <literal>extraConfigFiles</literal>!
Secrets should be passed in via `extraConfigFiles`!
'';
};
macaroon_secret_key = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
description = mdDoc ''
Secret key for authentication tokens. If none is specified,
the registration_shared_secret is used, if one is given; otherwise,
a secret key is derived from the signing key.
Secrets should be passed in via <literal>extraConfigFiles</literal>!
Secrets should be passed in via `extraConfigFiles`!
'';
};
@ -620,10 +620,10 @@ in {
example = literalExpression ''
config.services.coturn.static-auth-secret
'';
description = ''
description = mdDoc ''
The shared secret used to compute passwords for the TURN server.
Secrets should be passed in via <literal>extraConfigFiles</literal>!
Secrets should be passed in via `extraConfigFiles`!
'';
};

View file

@ -13,6 +13,22 @@ let
else
pkgs.postgresql_12;
# Git 2.36.1 seemingly contains a commit-graph related bug which is
# easily triggered through GitLab, so we downgrade it to 2.35.x
# until this issue is solved. See
# https://gitlab.com/gitlab-org/gitlab/-/issues/360783#note_992870101.
gitPackage =
let
version = "2.35.3";
in
pkgs.git.overrideAttrs (oldAttrs: rec {
inherit version;
src = pkgs.fetchurl {
url = "https://www.kernel.org/pub/software/scm/git/git-${version}.tar.xz";
sha256 = "sha256-FenbT5vy7Z//MMtioAxcfAkBAV9asEjNtOiwTd7gD6I=";
};
});
gitlabSocket = "${cfg.statePath}/tmp/sockets/gitlab.socket";
gitalySocket = "${cfg.statePath}/tmp/sockets/gitaly.socket";
pathUrlQuote = url: replaceStrings ["/"] ["%2F"] url;
@ -41,7 +57,7 @@ let
prometheus_listen_addr = "localhost:9236"
[git]
bin_path = "${pkgs.git}/bin/git"
bin_path = "${gitPackage}/bin/git"
[gitaly-ruby]
dir = "${cfg.packages.gitaly.ruby}"
@ -137,7 +153,7 @@ let
};
workhorse.secret_file = "${cfg.statePath}/.gitlab_workhorse_secret";
gitlab_kas.secret_file = "${cfg.statePath}/.gitlab_kas_secret";
git.bin_path = "git";
git.bin_path = "${gitPackage}/bin/git";
monitoring = {
ip_whitelist = [ "127.0.0.0/8" "::1/128" ];
sidekiq_exporter = {
@ -1275,7 +1291,7 @@ in {
});
path = with pkgs; [
postgresqlPackage
git
gitPackage
ruby
openssh
nodejs
@ -1306,7 +1322,7 @@ in {
path = with pkgs; [
openssh
procps # See https://gitlab.com/gitlab-org/gitaly/issues/1562
git
gitPackage
cfg.packages.gitaly.rubyEnv
cfg.packages.gitaly.rubyEnv.wrappedRuby
gzip
@ -1351,7 +1367,7 @@ in {
partOf = [ "gitlab.target" ];
path = with pkgs; [
exiftool
git
gitPackage
gnutar
gzip
openssh
@ -1412,7 +1428,7 @@ in {
environment = gitlabEnv;
path = with pkgs; [
postgresqlPackage
git
gitPackage
openssh
nodejs
procps

View file

@ -47,7 +47,7 @@ in
user-icons = mkOption {
type = types.nullOr (types.enum [ "gravatar" "identicon" ]);
default = null;
description = "User icons for history view";
description = "Enable specific user icons for history view";
};
emoji = mkOption {
@ -68,6 +68,12 @@ in
description = "Disable editing pages";
};
local-time = mkOption {
type = types.bool;
default = false;
description = "Use the browser's local timezone instead of the server's for displaying dates.";
};
branch = mkOption {
type = types.str;
default = "master";
@ -123,6 +129,7 @@ in
${optionalString cfg.emoji "--emoji"} \
${optionalString cfg.h1-title "--h1-title"} \
${optionalString cfg.no-edit "--no-edit"} \
${optionalString cfg.local-time "--local-time"} \
${optionalString (cfg.allowUploads != null) "--allow-uploads ${cfg.allowUploads}"} \
${optionalString (cfg.user-icons != null) "--user-icons ${cfg.user-icons}"} \
${cfg.stateDir}

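Using the new flag from a system configuration, assuming the existing services.gollum.enable switch:

    {
      services.gollum.enable = true;
      services.gollum.local-time = true;   # passes --local-time, as added above
    }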
View file

@ -7,7 +7,7 @@ let
in
{
meta = {
maintainers = with maintainers; [ zimbatm ];
maintainers = with maintainers; [ flokli zimbatm ];
};
options.services.grafana-agent = {
@ -49,14 +49,7 @@ in
};
default = {
server = {
# Don't bind on 0.0.0.0
grpc_listen_address = "127.0.0.1";
http_listen_address = "127.0.0.1";
# Don't bind on the default port 80
http_listen_port = 9090;
};
prometheus = {
metrics = {
wal_directory = "\${STATE_DIRECTORY}";
global.scrape_interval = "5s";
};
@ -69,7 +62,12 @@ in
};
example = {
loki.configs = [{
metrics.global.remote_write = [{
url = "\${METRICS_REMOTE_WRITE_URL}";
basic_auth.username = "\${METRICS_REMOTE_WRITE_USERNAME}";
basic_auth.password_file = "\${CREDENTIALS_DIRECTORY}/metrics_remote_write_password";
}];
logs.configs = [{
name = "default";
scrape_configs = [
{
@ -101,13 +99,6 @@ in
basic_auth.password_file = "\${CREDENTIALS_DIRECTORY}/logs_remote_write_password";
}];
}];
integrations = {
prometheus_remote_write = [{
url = "\${METRICS_REMOTE_WRITE_URL}";
basic_auth.username = "\${METRICS_REMOTE_WRITE_USERNAME}";
basic_auth.password_file = "\${CREDENTIALS_DIRECTORY}/metrics_remote_write_password";
}];
};
};
};
};

View file

@ -74,11 +74,13 @@ in
};
};
serviceOpts = {
after = mkIf cfg.systemd.enable [ cfg.systemd.unit ];
serviceConfig = {
DynamicUser = false;
# By default, each prometheus exporter only gets AF_INET & AF_INET6,
# but AF_UNIX is needed to read from the `showq`-socket.
RestrictAddressFamilies = [ "AF_UNIX" ];
SupplementaryGroups = mkIf cfg.systemd.enable [ "systemd-journal" ];
ExecStart = ''
${pkgs.prometheus-postfix-exporter}/bin/postfix_exporter \
--web.listen-address ${cfg.listenAddress}:${toString cfg.port} \

View file

@ -257,7 +257,7 @@ in
'' + optionalString cfg.autoMigrate ''
${pkgs.ipfs-migrator}/bin/fs-repo-migrations -to '${cfg.package.repoVersion}' -y
'' + ''
ipfs --offline config profile apply ${profile}
ipfs --offline config profile apply ${profile} >/dev/null
fi
'' + optionalString cfg.autoMount ''
ipfs --offline config Mounts.FuseAllowOther --json true

View file

@ -174,6 +174,7 @@ in
serviceConfig = {
DynamicUser = true;
StateDirectory = "bitlbee";
ReadWritePaths = [ cfg.configDir ];
ExecStart = "${bitlbeePkg}/sbin/bitlbee -F -n -c ${bitlbeeConfig}";
};
};

View file

@ -54,10 +54,10 @@ let
hashedPassword = mkOption {
type = uniq (nullOr str);
default = null;
description = ''
description = mdDoc ''
Specifies the hashed password for the MQTT User.
To generate hashed password install <literal>mosquitto</literal>
package and use <literal>mosquitto_passwd</literal>.
To generate a hashed password, install the `mosquitto`
package and use `mosquitto_passwd`.
'';
};
@ -65,11 +65,11 @@ let
type = uniq (nullOr types.path);
example = "/path/to/file";
default = null;
description = ''
description = mdDoc ''
Specifies the path to a file containing the
hashed password for the MQTT user.
To generate hashed password install <literal>mosquitto</literal>
package and use <literal>mosquitto_passwd</literal>.
To generate a hashed password, install the `mosquitto`
package and use `mosquitto_passwd`.
'';
};
@ -155,24 +155,24 @@ let
options = {
plugin = mkOption {
type = path;
description = ''
Plugin path to load, should be a <literal>.so</literal> file.
description = mdDoc ''
Plugin path to load, should be a `.so` file.
'';
};
denySpecialChars = mkOption {
type = bool;
description = ''
Automatically disallow all clients using <literal>#</literal>
or <literal>+</literal> in their name/id.
description = mdDoc ''
Automatically disallow all clients using `#`
or `+` in their name/id.
'';
default = true;
};
options = mkOption {
type = attrsOf optionType;
description = ''
Options for the auth plugin. Each key turns into a <literal>auth_opt_*</literal>
description = mdDoc ''
Options for the auth plugin. Each key turns into a `auth_opt_*`
line in the config.
'';
default = {};
@ -239,8 +239,8 @@ let
address = mkOption {
type = nullOr str;
description = ''
Address to listen on. Listen on <literal>0.0.0.0</literal>/<literal>::</literal>
description = mdDoc ''
Address to listen on. Listen on `0.0.0.0`/`::`
when unset.
'';
default = null;
@ -248,10 +248,10 @@ let
authPlugins = mkOption {
type = listOf authPluginOptions;
description = ''
description = mdDoc ''
Authentication plugin to attach to this listener.
Refer to the <link xlink:href="https://mosquitto.org/man/mosquitto-conf-5.html">
mosquitto.conf documentation</link> for details on authentication plugins.
Refer to the [mosquitto.conf documentation](https://mosquitto.org/man/mosquitto-conf-5.html)
for details on authentication plugins.
'';
default = [];
};
@ -472,10 +472,10 @@ let
includeDirs = mkOption {
type = listOf path;
description = ''
description = mdDoc ''
Directories to be scanned for further config files to include.
Directories will be processed in the order given,
<literal>*.conf</literal> files in the directory will be
`*.conf` files in the directory will be
read in case-sensitive alphabetical order.
'';
default = [];

View file

@ -0,0 +1,84 @@
{ config
, lib
, pkgs
, ...
}:
with lib;
let
cfg = config.services.routedns;
settingsFormat = pkgs.formats.toml { };
in
{
options.services.routedns = {
enable = mkEnableOption "RouteDNS - DNS stub resolver, proxy and router";
settings = mkOption {
type = settingsFormat.type;
example = literalExpression ''
{
resolvers.cloudflare-dot = {
address = "1.1.1.1:853";
protocol = "dot";
};
groups.cloudflare-cached = {
type = "cache";
resolvers = ["cloudflare-dot"];
};
listeners.local-udp = {
address = "127.0.0.1:53";
protocol = "udp";
resolver = "cloudflare-cached";
};
listeners.local-tcp = {
address = "127.0.0.1:53";
protocol = "tcp";
resolver = "cloudflare-cached";
};
}
'';
description = ''
Configuration for RouteDNS, see <link xlink:href="https://github.com/folbricht/routedns/blob/master/doc/configuration.md"/>
for more information.
'';
};
configFile = mkOption {
default = settingsFormat.generate "routedns.toml" cfg.settings;
defaultText = "A RouteDNS configuration file automatically generated by values from services.routedns.*";
type = types.path;
example = literalExpression ''"''${pkgs.routedns}/cmd/routedns/example-config/use-case-1.toml"'';
description = "Path to RouteDNS TOML configuration file.";
};
package = mkOption {
default = pkgs.routedns;
defaultText = literalExpression "pkgs.routedns";
type = types.package;
description = "RouteDNS package to use.";
};
};
config = mkIf cfg.enable {
systemd.services.routedns = {
description = "RouteDNS - DNS stub resolver, proxy and router";
after = [ "network.target" ]; # in case a bootstrap resolver is used, this might fail a few times until the respective server is actually reachable
wantedBy = [ "multi-user.target" ];
wants = [ "network.target" ];
startLimitIntervalSec = 30;
startLimitBurst = 5;
serviceConfig = {
Restart = "on-failure";
RestartSec = "5s";
LimitNPROC = 512;
LimitNOFILE = 1048576;
DynamicUser = true;
AmbientCapabilities = "CAP_NET_BIND_SERVICE";
NoNewPrivileges = true;
ExecStart = "${getBin cfg.package}/bin/routedns -l 4 ${cfg.configFile}";
};
};
};
meta.maintainers = with maintainers; [ jsimonetti ];
}

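A trimmed-down variant of the module's own example, showing how a host might enable the service; the resolver and listener names are arbitrary labels.

    {
      services.routedns.enable = true;
      services.routedns.settings = {
        resolvers.cloudflare-dot = { address = "1.1.1.1:853"; protocol = "dot"; };
        listeners.local-udp = {
          address = "127.0.0.1:53";
          protocol = "udp";
          resolver = "cloudflare-dot";
        };
      };
    }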
View file

@ -72,39 +72,39 @@ in {
cert = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Path to the <literal>cert.pem</literal> file, which will be copied into Syncthing's
<link linkend="opt-services.syncthing.configDir">configDir</link>.
description = mdDoc ''
Path to the `cert.pem` file, which will be copied into Syncthing's
[configDir](#opt-services.syncthing.configDir).
'';
};
key = mkOption {
type = types.nullOr types.str;
default = null;
description = ''
Path to the <literal>key.pem</literal> file, which will be copied into Syncthing's
<link linkend="opt-services.syncthing.configDir">configDir</link>.
description = mdDoc ''
Path to the `key.pem` file, which will be copied into Syncthing's
[configDir](#opt-services.syncthing.configDir).
'';
};
overrideDevices = mkOption {
type = types.bool;
default = true;
description = ''
description = mdDoc ''
Whether to delete the devices which are not configured via the
<link linkend="opt-services.syncthing.devices">devices</link> option.
If set to <literal>false</literal>, devices added via the web
[devices](#opt-services.syncthing.devices) option.
If set to `false`, devices added via the web
interface will persist and will have to be deleted manually.
'';
};
devices = mkOption {
default = {};
description = ''
description = mdDoc ''
Peers/devices which Syncthing should communicate with.
Note that you can still add devices manually, but those changes
will be reverted on restart if <link linkend="opt-services.syncthing.overrideDevices">overrideDevices</link>
will be reverted on restart if [overrideDevices](#opt-services.syncthing.overrideDevices)
is enabled.
'';
example = {
@ -135,27 +135,27 @@ in {
id = mkOption {
type = types.str;
description = ''
The device ID. See <link xlink:href="https://docs.syncthing.net/dev/device-ids.html"/>.
description = mdDoc ''
The device ID. See <https://docs.syncthing.net/dev/device-ids.html>.
'';
};
introducer = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Whether the device should act as an introducer and be allowed
to add folders on this computer.
See <link xlink:href="https://docs.syncthing.net/users/introducer.html"/>.
See <https://docs.syncthing.net/users/introducer.html>.
'';
};
autoAcceptFolders = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Automatically create or share folders that this device advertises at the default path.
See <link xlink:href="https://docs.syncthing.net/users/config.html?highlight=autoaccept#config-file-format"/>.
See <https://docs.syncthing.net/users/config.html?highlight=autoaccept#config-file-format>.
'';
};
@ -166,21 +166,21 @@ in {
overrideFolders = mkOption {
type = types.bool;
default = true;
description = ''
description = mdDoc ''
Whether to delete the folders which are not configured via the
<link linkend="opt-services.syncthing.folders">folders</link> option.
If set to <literal>false</literal>, folders added via the web
[folders](#opt-services.syncthing.folders) option.
If set to `false`, folders added via the web
interface will persist and will have to be deleted manually.
'';
};
folders = mkOption {
default = {};
description = ''
description = mdDoc ''
Folders which should be shared by Syncthing.
Note that you can still add folders manually, but those changes
will be reverted on restart if <link linkend="opt-services.syncthing.overrideDevices">overrideDevices</link>
will be reverted on restart if [overrideFolders](#opt-services.syncthing.overrideFolders)
is enabled.
'';
example = literalExpression ''
@ -231,18 +231,18 @@ in {
devices = mkOption {
type = types.listOf types.str;
default = [];
description = ''
description = mdDoc ''
The devices this folder should be shared with. Each device must
be defined in the <link linkend="opt-services.syncthing.devices">devices</link> option.
be defined in the [devices](#opt-services.syncthing.devices) option.
'';
};
versioning = mkOption {
default = null;
description = ''
description = mdDoc ''
How to keep changed/deleted files with Syncthing.
There are 4 different types of versioning with different parameters.
See <link xlink:href="https://docs.syncthing.net/users/versioning.html"/>.
See <https://docs.syncthing.net/users/versioning.html>.
'';
example = literalExpression ''
[
@ -284,17 +284,17 @@ in {
options = {
type = mkOption {
type = enum [ "external" "simple" "staggered" "trashcan" ];
description = ''
description = mdDoc ''
The type of versioning.
See <link xlink:href="https://docs.syncthing.net/users/versioning.html"/>.
See <https://docs.syncthing.net/users/versioning.html>.
'';
};
params = mkOption {
type = attrsOf (either str path);
description = ''
description = mdDoc ''
The parameters for versioning. Structure depends on
<link linkend="opt-services.syncthing.folders._name_.versioning.type">versioning.type</link>.
See <link xlink:href="https://docs.syncthing.net/users/versioning.html"/>.
[versioning.type](#opt-services.syncthing.folders._name_.versioning.type).
See <https://docs.syncthing.net/users/versioning.html>.
'';
};
};
@ -345,9 +345,9 @@ in {
ignoreDelete = mkOption {
type = types.bool;
default = false;
description = ''
description = mdDoc ''
Whether to skip deleting files that are deleted by peers.
See <link xlink:href="https://docs.syncthing.net/advanced/folder-ignoredelete.html"/>.
See <https://docs.syncthing.net/advanced/folder-ignoredelete.html>.
'';
};
};
@ -357,9 +357,9 @@ in {
extraOptions = mkOption {
type = types.addCheck (pkgs.formats.json {}).type isAttrs;
default = {};
description = ''
description = mdDoc ''
Extra configuration options for Syncthing.
See <link xlink:href="https://docs.syncthing.net/users/config.html"/>.
See <https://docs.syncthing.net/users/config.html>.
'';
example = {
options.localAnnounceEnabled = false;
@ -387,9 +387,9 @@ in {
type = types.str;
default = defaultUser;
example = "yourUser";
description = ''
description = mdDoc ''
The user to run Syncthing as.
By default, a user named <literal>${defaultUser}</literal> will be created.
By default, a user named `${defaultUser}` will be created.
'';
};
@ -397,9 +397,9 @@ in {
type = types.str;
default = defaultGroup;
example = "yourGroup";
description = ''
description = mdDoc ''
The group to run Syncthing under.
By default, a group named <literal>${defaultGroup}</literal> will be created.
By default, a group named `${defaultGroup}` will be created.
'';
};
@@ -407,11 +407,11 @@ in {
type = with types; nullOr str;
default = null;
example = "socks5://address.com:1234";
description = ''
description = mdDoc ''
Overwrites the all_proxy environment variable for the Syncthing process to
the given value. This is normally used to let Syncthing connect
through a SOCKS5 proxy server.
See <link xlink:href="https://docs.syncthing.net/users/proxying.html"/>.
See <https://docs.syncthing.net/users/proxying.html>.
'';
};
@@ -432,25 +432,13 @@ in {
The path where the settings and keys will exist.
'';
default = cfg.dataDir + optionalString cond "/.config/syncthing";
defaultText = literalDocBook ''
<variablelist>
<varlistentry>
<term><literal>stateVersion >= 19.03</literal></term>
<listitem>
<programlisting>
defaultText = literalMD ''
* if `stateVersion >= 19.03`:
config.${opt.dataDir} + "/.config/syncthing"
</programlisting>
</listitem>
</varlistentry>
<varlistentry>
<term>otherwise</term>
<listitem>
<programlisting>
* otherwise:
config.${opt.dataDir}
</programlisting>
</listitem>
</varlistentry>
</variablelist>
'';
};
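Because the default shown above depends on `stateVersion`, a hedged sketch of pinning both directories explicitly instead (paths are illustrative):

    { ... }:
    {
      services.syncthing = {
        enable = true;
        dataDir = "/var/lib/syncthing";
        # Pinning configDir avoids the stateVersion-dependent default documented above.
        configDir = "/var/lib/syncthing/.config/syncthing";
      };
    }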

View file

@@ -6,6 +6,7 @@ let
cfg = config.services.tailscale;
firewallOn = config.networking.firewall.enable;
rpfMode = config.networking.firewall.checkReversePath;
isNetworkd = config.networking.useNetworkd;
rpfIsStrict = rpfMode == true || rpfMode == "strict";
in {
meta.maintainers = with maintainers; [ danderson mbaillie twitchyliquid64 ];
@@ -69,5 +70,17 @@ in {
# linux distros.
stopIfChanged = false;
};
networking.dhcpcd.denyInterfaces = [ cfg.interfaceName ];
systemd.network.networks."50-tailscale" = mkIf isNetworkd {
matchConfig = {
Name = cfg.interfaceName;
};
linkConfig = {
Unmanaged = true;
ActivationPolicy = "manual";
};
};
};
}
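The new `50-tailscale` network unit is only generated when networkd manages the system; a hedged sketch of a configuration that exercises it (the interface name is left at its default):

    { ... }:
    {
      networking.useNetworkd = true;     # makes isNetworkd true, so the unit above is generated
      services.tailscale.enable = true;  # tailscale0 is then left unmanaged by networkd
    }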

View file

@@ -6,6 +6,9 @@ let
cfg = config.services.trickster;
in
{
imports = [
(mkRenamedOptionModule [ "services" "trickster" "origin" ] [ "services" "trickster" "origin-url" ])
];
options = {
services.trickster = {
@@ -58,11 +61,19 @@ in
'';
};
origin = mkOption {
origin-type = mkOption {
type = types.enum [ "prometheus" "influxdb" ];
default = "prometheus";
description = ''
Type of origin (prometheus, influxdb)
'';
};
origin-url = mkOption {
type = types.str;
default = "http://prometheus:9090";
description = ''
URL to the Prometheus Origin. Enter it like you would in grafana, e.g., http://prometheus:9090 (default http://prometheus:9090).
URL of the origin. Enter it as you would in Grafana, e.g., http://prometheus:9090 (default: http://prometheus:9090).
'';
};
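With the rename handled by `mkRenamedOptionModule` above, a hedged example of the new option names (the URL is simply the module default and purely illustrative):

    { ... }:
    {
      services.trickster = {
        enable = true;
        origin-type = "prometheus";              # or "influxdb"
        origin-url = "http://prometheus:9090";   # the old `origin` name is still accepted via the rename above
      };
    }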
@@ -87,7 +98,7 @@ in
config = mkIf cfg.enable {
systemd.services.trickster = {
description = "Dashboard Accelerator for Prometheus";
description = "Reverse proxy cache and time series dashboard accelerator";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = {
@@ -96,7 +107,8 @@ in
${cfg.package}/bin/trickster \
-log-level ${cfg.log-level} \
-metrics-port ${toString cfg.metrics-port} \
-origin ${cfg.origin} \
-origin-type ${cfg.origin-type} \
-origin-url ${cfg.origin-url} \
-proxy-port ${toString cfg.proxy-port} \
${optionalString (cfg.configFile != null) "-config ${cfg.configFile}"} \
${optionalString (cfg.profiler-port != null) "-profiler-port ${cfg.profiler-port}"} \

View file

@@ -534,6 +534,7 @@ let
services.phpfpm.pools = mkIf (cfg.pool == "${poolName}") {
${poolName} = {
inherit (cfg) user;
phpPackage = pkgs.php80;
settings = mapAttrs (name: mkDefault) {
"listen.owner" = "nginx";
"listen.group" = "nginx";

View file

@@ -360,7 +360,7 @@ let
${optionalString (config.alias != null) "alias ${config.alias};"}
${optionalString (config.return != null) "return ${config.return};"}
${config.extraConfig}
${optionalString (config.proxyPass != null && cfg.recommendedProxySettings) "include ${recommendedProxyConfig};"}
${optionalString (config.proxyPass != null && config.recommendedProxySettings) "include ${recommendedProxyConfig};"}
${mkBasicAuth "sublocation" config}
}
'') (sortProperties (mapAttrsToList (k: v: v // { location = k; }) locations)));
@@ -423,7 +423,7 @@ in
default = false;
type = types.bool;
description = "
Enable recommended proxy settings.
Whether to enable recommended proxy settings if a vhost does not specify the option manually.
";
};

View file

@@ -3,7 +3,7 @@
# has additional options that affect the web server as a whole, like
# the user/group to run under.)
{ lib }:
{ lib, config }:
with lib;
@@ -128,5 +128,14 @@ with lib;
a greater priority.
'';
};
recommendedProxySettings = mkOption {
type = types.bool;
default = config.services.nginx.recommendedProxySettings;
defaultText = literalExpression "config.services.nginx.recommendedProxySettings";
description = ''
Enable recommended proxy settings.
'';
};
};
}
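Because the new per-location option defaults to the global `services.nginx.recommendedProxySettings`, it only needs to be set where a location should deviate; a hedged sketch (host name and upstream are hypothetical):

    { ... }:
    {
      services.nginx = {
        enable = true;
        recommendedProxySettings = true;               # global default, inherited by locations
        virtualHosts."example.org".locations."/api" = {
          proxyPass = "http://127.0.0.1:8080";
          recommendedProxySettings = false;            # opt this one location out
        };
      };
    }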

View file

@@ -281,7 +281,7 @@ with lib;
locations = mkOption {
type = types.attrsOf (types.submodule (import ./location-options.nix {
inherit lib;
inherit lib config;
}));
default = {};
example = literalExpression ''

View file

@@ -26,6 +26,13 @@ in
description = "Bind xpra to TCP";
};
desktop = mkOption {
type = types.nullOr types.str;
default = null;
example = "gnome-shell";
description = "Start a desktop environment instead of seamless mode";
};
auth = mkOption {
type = types.str;
default = "pam";
@@ -222,7 +229,7 @@ in
services.xserver.displayManager.job.execCmd = ''
${optionalString (cfg.pulseaudio)
"export PULSE_COOKIE=/run/pulse/.config/pulse/cookie"}
exec ${pkgs.xpra}/bin/xpra start \
exec ${pkgs.xpra}/bin/xpra ${if cfg.desktop == null then "start" else "start-desktop --start=${cfg.desktop}"} \
--daemon=off \
--log-dir=/var/log \
--log-file=xpra.log \

View file

@@ -90,7 +90,7 @@ let
bindsTo = [ "network-setup.service" ];
};
networkSetup =
networkSetup = lib.mkIf (config.networking.resolvconf.enable || cfg.defaultGateway != null || cfg.defaultGateway6 != null)
{ description = "Networking Setup";
after = [ "network-pre.target" "systemd-udevd.service" "systemd-sysctl.service" ];

View file

@@ -59,15 +59,14 @@ in
genericNetwork = override:
let gateway = optional (cfg.defaultGateway != null && (cfg.defaultGateway.address or "") != "") cfg.defaultGateway.address
++ optional (cfg.defaultGateway6 != null && (cfg.defaultGateway6.address or "") != "") cfg.defaultGateway6.address;
in optionalAttrs (gateway != [ ]) {
routes = override [
{
makeGateway = gateway: {
routeConfig = {
Gateway = gateway;
GatewayOnLink = false;
};
}
];
};
in optionalAttrs (gateway != [ ]) {
routes = override (map makeGateway gateway);
} // optionalAttrs (domains != [ ]) {
domains = override domains;
};
@@ -89,20 +88,22 @@ in
# more likely to result in interfaces being configured to
# use DHCP when they shouldn't.
# We set RequiredForOnline to false, because it's fairly
# common for such devices to have multiple interfaces and
# only one of them to be connected (e.g. a laptop with
# ethernet and WiFi interfaces). Maybe one day networkd will
# support "any"-style RequiredForOnline...
# When wait-online.anyInterface is enabled, RequiredForOnline really
# means "sufficient for online", so we can enable it.
# Otherwise, don't block the network coming online because of default networks.
matchConfig.Name = ["en*" "eth*"];
DHCP = "yes";
linkConfig.RequiredForOnline = lib.mkDefault false;
linkConfig.RequiredForOnline =
lib.mkDefault config.systemd.network.wait-online.anyInterface;
networkConfig.IPv6PrivacyExtensions = "kernel";
};
networks."99-wireless-client-dhcp" = lib.mkIf cfg.useDHCP {
# Like above, but this is much more likely to be correct.
matchConfig.WLANInterfaceType = "station";
DHCP = "yes";
linkConfig.RequiredForOnline = lib.mkDefault false;
linkConfig.RequiredForOnline =
lib.mkDefault config.systemd.network.wait-online.anyInterface;
networkConfig.IPv6PrivacyExtensions = "kernel";
# We also set the route metric to one more than the default
# of 1024, so that Ethernet is preferred if both are
# available.
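As the updated comments explain, `RequiredForOnline` now follows `systemd.network.wait-online.anyInterface`; a hedged sketch of the multi-interface laptop case this is meant to cover:

    { ... }:
    {
      networking.useNetworkd = true;
      networking.useDHCP = true;
      # With anyInterface, wait-online succeeds as soon as either the wired or the
      # wireless interface comes up, so RequiredForOnline can safely follow it.
      systemd.network.wait-online.anyInterface = true;
    }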

View file

@@ -63,18 +63,18 @@ in
default = {};
example = literalExpression ''
{
# create /etc/hostname on container creation
# create /etc/hostname on container creation. Also requires networking.hostName = "" to be set
"hostname" = {
enable = true;
target = "/etc/hostname";
template = builtins.writeFile "hostname.tpl" "{{ container.name }}";
template = builtins.toFile "hostname.tpl" "{{ container.name }}";
when = [ "create" ];
};
# create /etc/nixos/hostname.nix with a configuration for keeping the hostname applied
"hostname-nix" = {
enable = true;
target = "/etc/nixos/hostname.nix";
template = builtins.writeFile "hostname-nix.tpl" "{ ... }: { networking.hostName = "{{ container.name }}"; }";
template = builtins.toFile "hostname-nix.tpl" "{ ... }: { networking.hostName = \"{{ container.name }}\"; }";
# copy keeps the file updated when the container is changed
when = [ "create" "copy" ];
};
@@ -82,7 +82,7 @@ in
"configuration-nix" = {
enable = true;
target = "/etc/nixos/configuration.nix";
template = builtins.writeFile "configuration-nix" "{{ config_get(\"user.user-data\", properties.default) }}";
template = builtins.toFile "configuration-nix" "{{ config_get(\"user.user-data\", properties.default) }}";
when = [ "create" ];
};
};
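The corrected examples use `builtins.toFile`, which puts a string into the store at evaluation time and returns its path; `builtins.writeFile` does not exist. A minimal sketch of the two working alternatives (template names are illustrative):

    # Evaluation-time file, no derivation involved:
    template = builtins.toFile "hostname.tpl" "{{ container.name }}";

    # Equivalent via a derivation, if pkgs is in scope:
    # template = pkgs.writeText "hostname.tpl" "{{ container.name }}";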

View file

@@ -23,12 +23,12 @@ in
default = false;
type = types.bool;
description =
''
mdDoc ''
Setting this option enables the Xen hypervisor, a
virtualisation technology that allows multiple virtual
machines, known as <emphasis>domains</emphasis>, to run
machines, known as *domains*, to run
concurrently on the physical machine. NixOS runs as the
privileged <emphasis>Domain 0</emphasis>. This option
privileged *Domain 0*. This option
requires a reboot to take effect.
'';
};

View file

@@ -578,7 +578,7 @@ in {
webserver.wait_for_unit(f"acme-finished-{test_domain}.target")
wait_for_server()
check_connection(client, test_domain)
rc, _ = client.execute(
rc, _s = client.execute(
f"openssl s_client -CAfile /tmp/ca.crt -connect {test_alias}:443"
" </dev/null 2>/dev/null | openssl x509 -noout -text"
f" | grep DNS: | grep {test_alias}"

View file

@@ -254,7 +254,6 @@ in {
jirafeau = handleTest ./jirafeau.nix {};
jitsi-meet = handleTest ./jitsi-meet.nix {};
k3s-single-node = handleTest ./k3s-single-node.nix {};
k3s-single-node-docker = handleTest ./k3s-single-node-docker.nix {};
kafka = handleTest ./kafka.nix {};
kanidm = handleTest ./kanidm.nix {};
kbd-setfont-decompress = handleTest ./kbd-setfont-decompress.nix {};
@@ -473,7 +472,6 @@ in {
restartByActivationScript = handleTest ./restart-by-activation-script.nix {};
restic = handleTest ./restic.nix {};
retroarch = handleTest ./retroarch.nix {};
riak = handleTest ./riak.nix {};
robustirc-bridge = handleTest ./robustirc-bridge.nix {};
roundcube = handleTest ./roundcube.nix {};
rspamd = handleTest ./rspamd.nix {};
@@ -486,6 +484,7 @@ in {
samba = handleTest ./samba.nix {};
samba-wsdd = handleTest ./samba-wsdd.nix {};
sanoid = handleTest ./sanoid.nix {};
schleuder = handleTest ./schleuder.nix {};
sddm = handleTest ./sddm.nix {};
seafile = handleTest ./seafile.nix {};
searx = handleTest ./searx.nix {};
@@ -614,6 +613,7 @@ in {
yabar = handleTest ./yabar.nix {};
yggdrasil = handleTest ./yggdrasil.nix {};
zammad = handleTest ./zammad.nix {};
zeronet-conservancy = handleTest ./zeronet-conservancy.nix {};
zfs = handleTest ./zfs.nix {};
zigbee2mqtt = handleTest ./zigbee2mqtt.nix {};
zoneminder = handleTest ./zoneminder.nix {};

View file

@@ -23,7 +23,7 @@ in
testScript = ''
machine.wait_for_unit("convos")
machine.wait_for_open_port(port)
machine.wait_for_open_port(${toString port})
machine.succeed("journalctl -u convos | grep -q 'Listening at.*${toString port}'")
machine.succeed("curl -f http://localhost:${toString port}/")
'';
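The convos fix shows the usual pattern for splicing a Nix value into a Python test script: the bare name `port` is not defined on the Python side, and an integer cannot be interpolated into a Nix string without `toString`. A hedged sketch with a hypothetical port value:

    let
      port = 3000;  # hypothetical
    in ''
      machine.wait_for_open_port(${toString port})   # becomes 3000 after interpolation
      machine.succeed("curl -f http://localhost:${toString port}/")
    ''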

View file

@@ -419,5 +419,10 @@ import ./make-test-python.nix ({ pkgs, ... }: {
"docker rmi layered-image-with-path",
)
with subtest("etc"):
docker.succeed("${examples.etc} | docker load")
docker.succeed("docker run --rm etc | grep localhost")
docker.succeed("docker image rm etc:latest")
'';
})
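The new subtest pipes the `etc` example into `docker load`, the idiom for stream-producing image builders. A hedged sketch of how such an image could be defined (name, tag and command are hypothetical, not the actual `examples.etc` definition, and `pkgs` is assumed to be in scope):

    pkgs.dockerTools.streamLayeredImage {
      name = "etc";
      tag = "latest";
      # List /etc so the test can grep for an expected entry such as "localhost".
      config.Cmd = [ "${pkgs.coreutils}/bin/ls" "/etc" ];
    }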

View file

@@ -5,6 +5,7 @@ import ../make-test-python.nix (
# copy_from_host works only for store paths
rec {
name = "fcitx";
meta.broken = true; # takes hours to time out since October 2021
nodes.machine =
{
pkgs,

View file

@@ -23,9 +23,9 @@ import ./make-test-python.nix ({ lib, pkgs, ... }:
with subtest("Grafana-agent is running"):
machine.wait_for_unit("grafana-agent.service")
machine.wait_for_open_port(9090)
machine.wait_for_open_port(12345)
machine.succeed(
"curl -sSfN http://127.0.0.1:9090/-/healthy"
"curl -sSfN http://127.0.0.1:12345/-/healthy"
)
machine.shutdown()
'';

View file

@@ -98,9 +98,26 @@ in {
};
lovelaceConfigWritable = true;
};
# Cause a configuration change inside `configuration.yml` and verify that the process is being reloaded.
specialisation.differentName = {
inheritParentConfig = true;
configuration.services.home-assistant.config.homeassistant.name = lib.mkForce "Test Home";
};
testScript = ''
# Cause a configuration change that requires a service restart as we added a new runtime dependency
specialisation.newFeature = {
inheritParentConfig = true;
configuration.services.home-assistant.config.device_tracker = [
{ platform = "bluetooth_tracker"; }
];
};
};
testScript = { nodes, ... }: let
system = nodes.hass.config.system.build.toplevel;
in
''
import re
start_all()
@@ -142,12 +159,21 @@ in {
with subtest("Check extra components are considered in systemd unit hardening"):
hass.succeed("systemctl show -p DeviceAllow home-assistant.service | grep -q char-ttyUSB")
with subtest("Print log to ease debugging"):
output_log = hass.succeed("cat ${configDir}/home-assistant.log")
print("\n### home-assistant.log ###\n")
print(output_log + "\n")
with subtest("Check service reloads when configuration changes"):
# store the old pid of the process
pid = hass.succeed("systemctl show --property=MainPID home-assistant.service")
hass.succeed("${system}/specialisation/differentName/bin/switch-to-configuration test")
new_pid = hass.succeed("systemctl show --property=MainPID home-assistant.service")
assert pid == new_pid, "The PID of the process should not change between process reloads"
with subtest("check service restarts when package changes"):
pid = new_pid
hass.succeed("${system}/specialisation/newFeature/bin/switch-to-configuration test")
new_pid = hass.succeed("systemctl show --property=MainPID home-assistant.service")
assert pid != new_pid, "The PID of the process should change when the HA binary changes"
with subtest("Check that no errors were logged"):
output_log = hass.succeed("cat ${configDir}/home-assistant.log")
assert "ERROR" not in output_log
with subtest("Check systemd unit hardening"):

View file

@@ -1,84 +0,0 @@
import ./make-test-python.nix ({ pkgs, ... }:
let
imageEnv = pkgs.buildEnv {
name = "k3s-pause-image-env";
paths = with pkgs; [ tini (hiPrio coreutils) busybox ];
};
pauseImage = pkgs.dockerTools.streamLayeredImage {
name = "test.local/pause";
tag = "local";
contents = imageEnv;
config.Entrypoint = [ "/bin/tini" "--" "/bin/sleep" "inf" ];
};
# Don't use the default service account because there's a race where it may
# not be created yet; make our own instead.
testPodYaml = pkgs.writeText "test.yml" ''
apiVersion: v1
kind: ServiceAccount
metadata:
name: test
---
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
serviceAccountName: test
containers:
- name: test
image: test.local/pause:local
imagePullPolicy: Never
command: ["sh", "-c", "sleep inf"]
'';
in
{
name = "k3s";
meta = with pkgs.lib.maintainers; {
maintainers = [ euank ];
};
nodes.machine = { pkgs, ... }: {
environment.systemPackages = with pkgs; [ k3s gzip ];
# k3s uses enough resources the default vm fails.
virtualisation.memorySize = 1536;
virtualisation.diskSize = 4096;
services.k3s = {
enable = true;
role = "server";
docker = true;
# Slightly reduce resource usage
extraFlags = "--no-deploy coredns,servicelb,traefik,local-storage,metrics-server --pause-image test.local/pause:local";
};
users.users = {
noprivs = {
isNormalUser = true;
description = "Can't access k3s by default";
password = "*";
};
};
};
testScript = ''
start_all()
machine.wait_for_unit("k3s")
machine.succeed("k3s kubectl cluster-info")
machine.fail("sudo -u noprivs k3s kubectl cluster-info")
# FIXME: this fails with the current nixos kernel config; once it passes, we should uncomment it
# machine.succeed("k3s check-config")
machine.succeed(
"${pauseImage} | docker load"
)
machine.succeed("k3s kubectl apply -f ${testPodYaml}")
machine.succeed("k3s kubectl wait --for 'condition=Ready' pod/test")
machine.succeed("k3s kubectl delete -f ${testPodYaml}")
machine.shutdown()
'';
})

View file

@@ -30,7 +30,6 @@ let
linux_5_4_hardened
linux_5_10_hardened
linux_5_15_hardened
linux_5_17_hardened
linux_5_18_hardened
linux_testing;

View file

@@ -193,6 +193,7 @@ import ../make-test-python.nix ({ pkgs, ... }:
testScript = ''
import pathlib
import os
start_all()
@@ -206,7 +207,7 @@ import ../make-test-python.nix ({ pkgs, ... }:
with subtest("copy the registration file"):
appservice.copy_from_vm("/var/lib/matrix-appservice-irc/registration.yml")
homeserver.copy_from_host(
pathlib.Path(os.environ.get("out", os.getcwd())) / "registration.yml", "/"
str(pathlib.Path(os.environ.get("out", os.getcwd())) / "registration.yml"), "/"
)
homeserver.succeed("chmod 444 /registration.yml")

View file

@@ -98,6 +98,7 @@ let
useNetworkd = networkd;
useDHCP = false;
defaultGateway = "192.168.1.1";
defaultGateway6 = "fd00:1234:5678:1::1";
interfaces.eth1.ipv4.addresses = mkOverride 0 [
{ address = "192.168.1.2"; prefixLength = 24; }
{ address = "192.168.1.3"; prefixLength = 32; }
@@ -139,6 +140,8 @@ let
with subtest("Test default gateway"):
router.wait_until_succeeds("ping -c 1 192.168.3.1")
client.wait_until_succeeds("ping -c 1 192.168.3.1")
router.wait_until_succeeds("ping -c 1 fd00:1234:5678:3::1")
client.wait_until_succeeds("ping -c 1 fd00:1234:5678:3::1")
'';
};
routeType = {

View file

@@ -1,18 +0,0 @@
import ./make-test-python.nix ({ lib, pkgs, ... }: {
name = "riak";
meta = with lib.maintainers; {
maintainers = [ Br1ght0ne ];
};
nodes.machine = {
services.riak.enable = true;
services.riak.package = pkgs.riak;
};
testScript = ''
machine.start()
machine.wait_for_unit("riak")
machine.wait_until_succeeds("riak ping 2>&1")
'';
})

View file

@@ -0,0 +1,128 @@
let
certs = import ./common/acme/server/snakeoil-certs.nix;
domain = certs.domain;
in
import ./make-test-python.nix {
name = "schleuder";
nodes.machine = { pkgs, ... }: {
imports = [ ./common/user-account.nix ];
services.postfix = {
enable = true;
enableSubmission = true;
tlsTrustedAuthorities = "${certs.ca.cert}";
sslCert = "${certs.${domain}.cert}";
sslKey = "${certs.${domain}.key}";
inherit domain;
destination = [ domain ];
localRecipients = [ "root" "alice" "bob" ];
};
services.schleuder = {
enable = true;
# Don't do it like this in production! The point of this setting
# is to allow loading secrets from _outside_ the world-readable
# Nix store.
extraSettingsFile = pkgs.writeText "schleuder-api-keys.yml" ''
api:
valid_api_keys:
- fnord
'';
lists = [ "security@${domain}" ];
settings.api = {
tls_cert_file = "${certs.${domain}.cert}";
tls_key_file = "${certs.${domain}.key}";
};
};
environment.systemPackages = [
pkgs.gnupg
pkgs.msmtp
(pkgs.writeScriptBin "do-test" ''
#!${pkgs.runtimeShell}
set -exuo pipefail
# Generate a GPG key with no passphrase and export it
sudo -u alice gpg --passphrase-fd 0 --batch --yes --quick-generate-key 'alice@${domain}' rsa4096 sign,encr < <(echo)
sudo -u alice gpg --armor --export alice@${domain} > alice.asc
# Create a new mailing list with alice as the owner, and alice's key
schleuder-cli list new security@${domain} alice@${domain} alice.asc
# Send an email from a non-member of the list. Use --auto-from so we don't have to specify who it's from twice.
msmtp --auto-from security@${domain} --host=${domain} --port=25 --tls --tls-starttls <<EOF
Subject: really big security issue!!
From: root@${domain}
I found a big security problem!
EOF
# Wait for delivery
(set +o pipefail; journalctl -f -n 1000 -u postfix | grep -m 1 'delivered to maildir')
# There should be exactly one email
mail=(/var/spool/mail/alice/new/*)
[[ "''${#mail[@]}" = 1 ]]
# Find the fingerprint of the mailing list key
read list_key_fp address < <(schleuder-cli keys list security@${domain} | grep security@)
schleuder-cli keys export security@${domain} $list_key_fp > list.asc
# Import the key into alice's keyring, so we can verify it as well as decrypting
sudo -u alice gpg --import <list.asc
# And perform the decryption.
sudo -u alice gpg -d $mail >decrypted
# And check that the text matches.
grep "big security problem" decrypted
'')
# For debugging:
# pkgs.vim pkgs.openssl pkgs.sqliteinteractive
];
security.pki.certificateFiles = [ certs.ca.cert ];
# Since we don't have internet here, use dnsmasq to provide MX records from /etc/hosts
services.dnsmasq = {
enable = true;
extraConfig = ''
selfmx
'';
};
networking.extraHosts = ''
127.0.0.1 ${domain}
'';
# schleuder-cli's config is not quite optimal in several ways:
# - A fingerprint _must_ be pinned, it doesn't even have an option
# to trust the PKI
# - It compares certificate fingerprints rather than key
# fingerprints, so renewals break the pin (though that's not
# relevant for this test)
# - It compares them as strings, which means we need to match the
# expected format exactly. This means removing the :s and
# lowercasing it.
# Refs:
# https://0xacab.org/schleuder/schleuder-cli/-/issues/16
# https://0xacab.org/schleuder/schleuder-cli/-/blob/f8895b9f47083d8c7b99a2797c93f170f3c6a3c0/lib/schleuder-cli/helper.rb#L230-238
systemd.tmpfiles.rules = let cliconfig = pkgs.runCommand "schleuder-cli.yml"
{
nativeBuildInputs = [ pkgs.jq pkgs.openssl ];
} ''
fp=$(openssl x509 -in ${certs.${domain}.cert} -noout -fingerprint -sha256 | cut -d = -f 2 | tr -d : | tr 'A-Z' 'a-z')
cat > $out <<EOF
host: localhost
port: 4443
tls_fingerprint: "$fp"
api_key: fnord
EOF
''; in
[
"L+ /root/.schleuder-cli/schleuder-cli.yml - - - - ${cliconfig}"
];
};
testScript = ''
machine.wait_for_unit("multi-user.target")
machine.wait_until_succeeds("nc -z localhost 4443")
machine.succeed("do-test")
'';
}
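The inline comment above stresses that the test's world-readable `writeText` secret is test-only; in a real deployment `extraSettingsFile` would point at a path provisioned outside the Nix store. A hedged sketch (the path, list address and deployment method are placeholders):

    { ... }:
    {
      services.schleuder = {
        enable = true;
        lists = [ "security@example.org" ];
        # Deployed by hand or by a secrets tool; never put real API keys in the store.
        extraSettingsFile = "/run/keys/schleuder-api-keys.yml";
      };
    }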

View file

@@ -138,18 +138,18 @@ in {
};
testScript = ''
def compare_tables(expected, actual):
assert (
expected == actual
), """
Routing tables don't match!
Expected:
{}
Actual:
{}
""".format(
expected, actual
)
import json
def compare(raw_json, to_compare):
data = json.loads(raw_json)
assert len(raw_json) >= len(to_compare)
for i, row in enumerate(to_compare):
actual = data[i]
assert len(row.keys()) > 0
for key, value in row.items():
assert value == actual[key], f"""
In entry {i}, value {key}: got: {actual[key]}, expected {value}
"""
start_all()
@@ -159,23 +159,18 @@ in {
node2.wait_for_unit("network.target")
node3.wait_for_unit("network.target")
# NOTE: please keep in mind that the trailing whitespaces in the following strings
# are intentional as the output is compared against the raw `iproute2`-output.
# editorconfig-checker-disable
client_ipv4_table = """
192.168.1.2 dev vrf1 proto static metric 100
192.168.1.2 dev vrf1 proto static metric 100\x20
192.168.2.3 dev vrf2 proto static metric 100
""".strip()
vrf1_table = """
broadcast 192.168.1.0 dev eth1 proto kernel scope link src 192.168.1.1
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1
local 192.168.1.1 dev eth1 proto kernel scope host src 192.168.1.1
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.1\x20
local 192.168.1.1 dev eth1 proto kernel scope host src 192.168.1.1\x20
broadcast 192.168.1.255 dev eth1 proto kernel scope link src 192.168.1.1
""".strip()
vrf2_table = """
broadcast 192.168.2.0 dev eth2 proto kernel scope link src 192.168.2.1
192.168.2.0/24 dev eth2 proto kernel scope link src 192.168.2.1
local 192.168.2.1 dev eth2 proto kernel scope host src 192.168.2.1
192.168.2.0/24 dev eth2 proto kernel scope link src 192.168.2.1\x20
local 192.168.2.1 dev eth2 proto kernel scope host src 192.168.2.1\x20
broadcast 192.168.2.255 dev eth2 proto kernel scope link src 192.168.2.1
""".strip()
# editorconfig-checker-enable
@@ -183,14 +178,28 @@ in {
# Check that networkd properly configures the main routing table
# and the routing tables for the VRF.
with subtest("check vrf routing tables"):
compare_tables(
client_ipv4_table, client.succeed("ip -4 route list | head -n2").strip()
compare(
client.succeed("ip --json -4 route list"),
[
{"dst": "192.168.1.2", "dev": "vrf1", "metric": 100},
{"dst": "192.168.2.3", "dev": "vrf2", "metric": 100}
]
)
compare_tables(
vrf1_table, client.succeed("ip -4 route list table 23 | head -n4").strip()
compare(
client.succeed("ip --json -4 route list table 23"),
[
{"dst": "192.168.1.0/24", "dev": "eth1", "prefsrc": "192.168.1.1"},
{"type": "local", "dst": "192.168.1.1", "dev": "eth1", "prefsrc": "192.168.1.1"},
{"type": "broadcast", "dev": "eth1", "prefsrc": "192.168.1.1", "dst": "192.168.1.255"}
]
)
compare_tables(
vrf2_table, client.succeed("ip -4 route list table 42 | head -n4").strip()
compare(
client.succeed("ip --json -4 route list table 42"),
[
{"dst": "192.168.2.0/24", "dev": "eth2", "prefsrc": "192.168.2.1"},
{"type": "local", "dst": "192.168.2.1", "dev": "eth2", "prefsrc": "192.168.2.1"},
{"type": "broadcast", "dev": "eth2", "prefsrc": "192.168.2.1", "dst": "192.168.2.255"}
]
)
# Ensure that other nodes are reachable via ICMP through the VRF.

View file

@@ -11,15 +11,21 @@ import ./make-test-python.nix ({ pkgs, ... }: {
environment.systemPackages = [ pkgs.curl ];
};
traefik = { config, pkgs, ... }: {
virtualisation.oci-containers.containers.nginx = {
virtualisation.oci-containers = {
backend = "docker";
containers.nginx = {
extraOptions = [
"-l" "traefik.enable=true"
"-l" "traefik.http.routers.nginx.entrypoints=web"
"-l" "traefik.http.routers.nginx.rule=Host(`nginx.traefik.test`)"
"-l"
"traefik.enable=true"
"-l"
"traefik.http.routers.nginx.entrypoints=web"
"-l"
"traefik.http.routers.nginx.rule=Host(`nginx.traefik.test`)"
];
image = "nginx-container";
imageFile = pkgs.dockerTools.examples.nginx;
};
};
networking.firewall.allowedTCPPorts = [ 80 ];

View file

@@ -23,7 +23,7 @@ import ./make-test-python.nix ({ pkgs, ... }: {
# OCR on voxedit's window is very expensive, so we avoid wasting a try
# by letting the window load fully first
machine.sleep(15)
machine.wait_for_text("Palette")
machine.wait_for_text("Solid")
machine.screenshot("screen")
'';
})

View file

@@ -0,0 +1,25 @@
let
port = 43110;
in
import ./make-test-python.nix ({ pkgs, ... }: {
name = "zeronet-conservancy";
meta = with pkgs.lib.maintainers; {
maintainers = [ fgaz ];
};
nodes.machine = { config, pkgs, ... }: {
services.zeronet = {
enable = true;
package = pkgs.zeronet-conservancy;
inherit port;
};
};
testScript = ''
machine.wait_for_unit("zeronet.service")
machine.wait_for_open_port(${toString port})
machine.succeed("curl --fail -H 'Accept: text/html, application/xml, */*' localhost:${toString port}/Stats")
'';
})

View file

@@ -19,20 +19,20 @@
stdenv.mkDerivation rec {
pname = "amberol";
version = "0.7.0";
version = "0.8.0";
src = fetchFromGitLab {
domain = "gitlab.gnome.org";
owner = "World";
repo = pname;
rev = version;
hash = "sha256-cBHFyPqhgcFOeYqMhF1aX3XCGAtqEZpI7Mj7b78Etmo=";
hash = "sha256-spVZOFqnY4cNbIY1ED3Zki6yPMoFDNG5BtuD456hPs4=";
};
cargoDeps = rustPlatform.fetchCargoTarball {
inherit src;
name = "${pname}-${version}";
hash = "sha256-GaMJsIrTbhI1tmahEMlI1v5hmjw+tFEv9Wdne/kiYIA=";
hash = "sha256-8PEAyQ8JW45d/Lut3hUSKCzV7JjFTpvKGra5dj3byo4=";
};
postPatch = ''

View file

@@ -63,6 +63,7 @@ stdenv.mkDerivation rec {
# See http://www.baudline.com/faq.html#licensing_terms.
# (Do NOT (re)distribute on hydra.)
license = licenses.unfree;
sourceProvenance = with sourceTypes; [ binaryNativeCode ];
platforms = [ "x86_64-linux" "i686-linux" ];
maintainers = [ maintainers.bjornfor ];
};

View file

@@ -35,13 +35,13 @@
stdenv.mkDerivation rec {
pname = "easyeffects";
version = "6.2.5";
version = "6.2.6";
src = fetchFromGitHub {
owner = "wwmm";
repo = "easyeffects";
rev = "v${version}";
sha256 = "sha256-LvTvNBo3aUGUD4vA04YtINFBjTplhmkxj3FlbTZDTA0=";
sha256 = "sha256-1kXYh2Qk0Wj0LgHTcRVAKro7LAPV/UM5i9VmHjmxTx0=";
};
nativeBuildInputs = [

View file

@@ -1,20 +1,20 @@
{ stdenv, lib, fetchFromGitHub, cmake, libuchardet, pkg-config, shntool, flac
, opusTools, vorbis-tools, mp3gain, lame, wavpack, vorbisgain, gtk3, qtbase
, opusTools, vorbis-tools, mp3gain, lame, taglib, wavpack, vorbisgain, gtk3, qtbase
, qttools, wrapQtAppsHook }:
stdenv.mkDerivation rec {
pname = "flacon";
version = "7.0.1";
version = "9.0.0";
src = fetchFromGitHub {
owner = "flacon";
repo = "flacon";
rev = "v${version}";
sha256 = "sha256-35tARJkyhC8EisIyDCwuT/UUruzLjJRUuZysuqeNssM=";
sha256 = "sha256-x27tp8NnAae8y8n9Z1JMobFrgPVRADVZj2cRyul7+cM=";
};
nativeBuildInputs = [ cmake pkg-config wrapQtAppsHook ];
buildInputs = [ qtbase qttools libuchardet ];
buildInputs = [ qtbase qttools libuchardet taglib ];
bin_path = lib.makeBinPath [
shntool

Some files were not shown because too many files have changed in this diff.