Project import generated by Copybara.

GitOrigin-RevId: 253aecf69ed7595aaefabde779aa6449195bebb7
This commit is contained in:
Default email 2021-08-18 15:19:15 +02:00
parent f8235d0468
commit b6f2ab0a42
946 changed files with 11934 additions and 14406 deletions


@@ -545,7 +545,26 @@ The following types of tests exist:
Here in the nixpkgs manual we describe mostly _package tests_; for _module tests_ head over to the corresponding [section in the NixOS manual](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
### Writing inline package tests {#ssec-inline-package-tests-writing}
For very simple tests, they can be written inline:
```nix
{ …, runCommand, yq-go }:

buildGoModule rec {
  passthru.tests = {
    simple = runCommand "${pname}-test" {} ''
      echo "test: 1" | ${yq-go}/bin/yq eval -j > $out
      [ "$(cat $out | tr -d $'\n ')" = '{"test":1}' ]
    '';
  };
}
```
### Writing larger package tests {#ssec-package-tests-writing}
This is an example using the `phoronix-test-suite` package with the current best practices.


@@ -33,6 +33,7 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For finer-grained control, the `coq.ocamlPackages` attribute can be used in `extraBuildInputs` to depend on the same package set Coq was built against.
* `useDune2ifVersion` (optional, defaults to `(x: false)`) uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2ifVersion = versions.isGe "1.1"` will use Dune if the version of the package is greater than or equal to `"1.1"`,
* `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true; the presence of this attribute overrides the behavior of the previous one.
* `opam-name` (optional, defaults to `coq-` followed by the value of `pname`), name of the Dune package to build.
* `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
* `extraInstallFlags` (optional), allows extending `installFlags`, which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed, Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
* `setCOQBIN` (optional, defaults to `true`), by default, the environment variable `$COQBIN` is set to the current Coq's binary, but one can disable this behavior by setting it to `false`,
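Taken together, a minimal `mkCoqDerivation` call using a few of the options above might look like the following sketch (package name, owner, and version are hypothetical):

```nix
# Hypothetical sketch of a Coq library derivation using the options above.
{ lib, mkCoqDerivation, coq }:

mkCoqDerivation {
  pname = "my-lib";                # hypothetical package name
  owner = "my-github-org";         # hypothetical repository owner
  version = "1.1.0";
  useDune2 = true;                 # build with Dune2 regardless of version
  opam-name = "coq-my-lib";        # name of the Dune package to build
  enableParallelBuilding = false;  # opt out of the default
}
```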


@@ -134,7 +134,28 @@ Attribute Set `lib.platforms` defines [various common lists](https://github.com/
This attribute is special in that it is not actually under the `meta` attribute set but rather under the `passthru` attribute set. This is due to how `meta` attributes work, and the fact that they are supposed to contain only metadata, not derivations.
:::
An attribute set with tests as values. A test is a derivation that builds when the test passes and fails to build otherwise.
You can run these tests with:
```ShellSession
$ cd path/to/nixpkgs
$ nix-build -A your-package.tests
```
#### Package tests
Tests that are part of the source package are often executed in the `installCheckPhase`.
Prefer `passthru.tests` for tests that are introduced in nixpkgs because:
* `passthru.tests` tests the 'real' package, independently from the environment in which it was built
* we can run `passthru.tests` independently
* `installCheckPhase` adds overhead to each build
For more on how to write and run package tests, see <xref linkend="sec-package-tests"/>.
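For instance, a simple version check can live in `passthru.tests` (a sketch; the package and test names are illustrative):

```nix
# Sketch: a passthru test that exercises the real, installed binary.
{ runCommand, hello }:
{
  version = runCommand "hello-version-test" {} ''
    ${hello}/bin/hello --version | grep -q "${hello.version}"
    touch $out
  '';
}
```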
#### NixOS tests
The NixOS tests are available as `nixosTests` in parameters of derivations. For instance, the OpenSMTPD derivation includes lines similar to:
@@ -148,6 +169,8 @@ The NixOS tests are available as `nixosTests` in parameters of derivations. For
}
```
NixOS tests run in a VM, so they are slower than regular package tests. For more information see [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
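The referenced pattern can be sketched as follows (the test attribute name is illustrative; check the actual derivation for the exact test set):

```nix
# Sketch: a package exposing a NixOS (VM) test via passthru,
# as the OpenSMTPD derivation does (attribute names assumed).
{ stdenv, nixosTests, ... }:

stdenv.mkDerivation {
  # ...
  passthru.tests = {
    basic-functionality-and-dovecot-interaction = nixosTests.opensmtpd;
  };
}
```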
### `timeout` {#var-meta-timeout}
A timeout (in seconds) for building the derivation. If the derivation takes longer than this time to build, it can fail due to breaking the timeout. However, not all computers have the same computing power, hence some builders may decide to apply a multiplicative factor to this value. When filling this value in, try to keep it approximately consistent with other values already present in `nixpkgs`.


@@ -714,6 +714,8 @@ to `~/.gdbinit`. GDB will then be able to find debug information installed via `
The installCheck phase checks whether the package was installed correctly by running its test suite against the installed directories. The default `installCheck` calls `make installcheck`.
It is often better to add tests that are not part of the source distribution to `passthru.tests` (see <xref linkend="var-meta-tests"/>). This avoids adding overhead to every build and enables us to run them independently.
#### Variables controlling the installCheck phase {#variables-controlling-the-installcheck-phase}
##### `doInstallCheck` {#var-stdenv-doInstallCheck}


@@ -115,7 +115,7 @@ let
    mergeModules' mergeOptionDecls evalOptionValue mergeDefinitions
    pushDownProperties dischargeProperties filterOverrides
    sortProperties fixupOptionType mkIf mkAssert mkMerge mkOverride
    mkOptionDefault mkDefault mkImageMediaOverride mkForce mkVMOverride
    mkFixStrictness mkOrder mkBefore mkAfter mkAliasDefinitions
    mkAliasAndWrapDefinitions fixMergeModules mkRemovedOptionModule
    mkRenamedOptionModule mkMergedOptionModule mkChangedOptionModule


@@ -1,5 +1,5 @@
{
  description = "Library of low-level helper functions for nix expressions.";
  outputs = { self }: { lib = import ./.; };
}


@@ -248,7 +248,7 @@ rec {
        then v.__pretty v.val
        else if v == {} then "{ }"
        else if v ? type && v.type == "derivation" then
          "<derivation ${v.drvPath or "???"}>"
        else "{" + introSpace
          + libStr.concatStringsSep introSpace (libAttr.mapAttrsToList
            (name: value:


@@ -710,6 +710,7 @@ rec {
  mkOptionDefault = mkOverride 1500; # priority of option defaults
  mkDefault = mkOverride 1000; # used in config sections of non-user modules to set a default
  mkImageMediaOverride = mkOverride 60; # image media profiles can be derived by inclusion into host config, hence needing to override host config, but do allow user to mkForce
  mkForce = mkOverride 50;
  mkVMOverride = mkOverride 10; # used by nixos-rebuild build-vm
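As a reminder of how these priorities interact (lower numbers win), a hypothetical module might combine them like this:

```nix
# Sketch: lower mkOverride priorities win. If all three definitions below
# were active, mkForce (50) would beat mkImageMediaOverride (60), which
# would beat mkDefault (1000). Option name is illustrative.
{ lib, ... }:
{
  networking.hostName = lib.mkDefault "fallback";          # priority 1000
  # networking.hostName = lib.mkImageMediaOverride "live"; # priority 60
  # networking.hostName = lib.mkForce "pinned";            # priority 50
}
```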


@@ -61,9 +61,9 @@ let
  missingGithubIds = lib.concatLists (lib.mapAttrsToList checkMaintainer lib.maintainers);
  success = pkgs.runCommand "checked-maintainers-success" {} ">$out";
  failure = pkgs.runCommand "checked-maintainers-failure" {
    nativeBuildInputs = [ pkgs.curl pkgs.jq ];
    outputHash = "sha256:${lib.fakeSha256}";
    outputHashAlgo = "sha256";


@@ -3,7 +3,7 @@
  pkgs ? import ../.. {} // { lib = throw "pkgs.lib accessed, but the lib tests should use nixpkgs' lib path directly!"; }
}:
pkgs.runCommand "nixpkgs-lib-tests" {
  buildInputs = [
    pkgs.nix
    (import ./check-eval.nix)


@@ -4755,6 +4755,12 @@
githubId = 40566146;
name = "Jonas Braun";
};
j-hui = {
email = "j-hui@cs.columbia.edu";
github = "j-hui";
githubId = 11800204;
name = "John Hui";
};
j-keck = {
email = "jhyphenkeck@gmail.com";
github = "j-keck";
@@ -8768,6 +8774,12 @@
githubId = 5636;
name = "Steve Purcell";
};
putchar = {
email = "slim.cadoux@gmail.com";
github = "putchar";
githubId = 8208767;
name = "Slim Cadoux";
};
puzzlewolf = {
email = "nixos@nora.pink";
github = "puzzlewolf";
@@ -10006,6 +10018,12 @@
fingerprint = "6F8A 18AE 4101 103F 3C54 24B9 6AA2 3A11 93B7 064B";
}];
};
smancill = {
email = "smancill@smancill.dev";
github = "smancill";
githubId = 238528;
name = "Sebastián Mancilla";
};
smaret = {
email = "sebastien.maret@icloud.com";
github = "smaret";
@@ -11277,7 +11295,7 @@
};
vel = {
email = "llathasa@outlook.com";
github = "q60";
githubId = 61933599;
name = "vel";
};


@@ -86,7 +86,7 @@ class PluginDesc:
    owner: str
    repo: str
    branch: str
    alias: Optional[str]

class Repo:
@@ -317,12 +317,10 @@ def get_current_plugins(editor: Editor) -> List[Plugin]:
def prefetch_plugin(
    p: PluginDesc,
    cache: "Optional[Cache]" = None,
) -> Tuple[Plugin, Dict[str, str]]:
    user, repo_name, branch, alias = p.owner, p.repo, p.branch, p.alias
    log.info(f"Fetching last commit for plugin {user}/{repo_name}@{branch}")
    repo = Repo(user, repo_name, branch, alias)
    commit, date = repo.latest_commit()
@@ -347,7 +345,7 @@ def prefetch_plugin(
def fetch_plugin_from_pluginline(plugin_line: str) -> Plugin:
    plugin, _ = prefetch_plugin(parse_plugin_line(plugin_line))
    return plugin
@@ -466,11 +464,11 @@ class Cache:
def prefetch(
    pluginDesc: PluginDesc, cache: Cache
) -> Tuple[str, str, Union[Exception, Plugin], dict]:
    owner, repo = pluginDesc.owner, pluginDesc.repo
    try:
        plugin, redirect = prefetch_plugin(pluginDesc, cache)
        cache[plugin.commit] = plugin
        return (owner, repo, plugin, redirect)
    except Exception as e:
@@ -576,8 +574,9 @@ def update_plugins(editor: Editor, args):
    if autocommit:
        commit(
            nixpkgs_repo,
            "{drv_name}: init at {version}".format(
                drv_name=editor.get_drv_name(plugin.normalized_name),
                version=plugin.version
            ),
            [args.outfile, args.input_file],
        )


@@ -110,7 +110,6 @@ class LuaEditor(Editor):
        return "luaPackages"

    def get_update(self, input_file: str, outfile: str, proc: int):
        _prefetch = generate_pkg_nix

        def update() -> dict:


@@ -132,6 +132,17 @@ with lib.maintainers; {
scope = "Maintain the Home Assistant ecosystem";
};
iog = {
members = [
cleverca22
disassembler
jonringer
maveru
nrdxp
];
scope = "Input-Output Global employees, who maintain critical software";
};
jitsi = {
members = [
petabyteboy


@@ -1,7 +1,22 @@
# Building a NixOS (Live) ISO {#sec-building-image}

Default live installer configurations are available inside `nixos/modules/installer/cd-dvd`.
For building other system images, [nixos-generators] is a good place to start looking at.
You have two options:
- Use any of those default configurations as is
- Combine them with (any of) your host config(s)
System images, such as the live installer ones, know how to enforce configuration settings
on which they immediately depend in order to work correctly.
However, if you are confident, you can opt to override those
enforced values with `mkForce`.
[nixos-generators]: https://github.com/nix-community/nixos-generators
## Practical Instructions {#sec-building-image-instructions}
```ShellSession
$ git clone https://github.com/NixOS/nixpkgs.git
@@ -9,10 +24,23 @@ $ cd nixpkgs/nixos
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix default.nix
```

To check the content of an ISO image, mount it like so:

```ShellSession
# mount -o loop -t iso9660 ./result/iso/cd.iso /mnt/iso
```

## Technical Notes {#sec-building-image-tech-notes}
The config value enforcement is implemented via `mkImageMediaOverride = mkOverride 60;`
and therefore takes precedence over simple value assignments, but also yields to `mkForce`.

This property allows image designers to implement in semantically correct ways those
configuration values upon which the correct functioning of the image depends.

For example, the iso base image overrides those file systems which it needs at a minimum
for correct functioning, while the installer base image overrides the entire file system
layout because there can't be any other guarantees on a live medium than those given
by the live medium itself. The latter is especially true before formatting the target
block device(s). On the other hand, the netboot iso only overrides its minimum dependencies
since netboot images are always made-to-target.
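As a sketch (the option and values are hypothetical), an image module can pin a setting with `mkImageMediaOverride` so that a plain host-config assignment cannot change it, while `mkForce` still can:

```nix
# Hypothetical image module: pin the root file system for the live medium.
# A plain assignment in a host config (priority 100) does not override this
# definition (priority 60); only mkForce (priority 50) does.
{ lib, ... }:
{
  fileSystems."/" = lib.mkImageMediaOverride {
    fsType = "tmpfs";
    options = [ "mode=0755" ];
  };
}
```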


@@ -1,33 +1,72 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-building-image">
<title>Building a NixOS (Live) ISO</title>
<para>
Default live installer configurations are available inside
<literal>nixos/modules/installer/cd-dvd</literal>. For building
other system images,
<link xlink:href="https://github.com/nix-community/nixos-generators">nixos-generators</link>
is a good place to start looking at.
</para>
<para>
You have two options:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
Use any of those default configurations as is
</para>
</listitem>
<listitem>
<para>
Combine them with (any of) your host config(s)
</para>
</listitem>
</itemizedlist>
<para>
System images, such as the live installer ones, know how to enforce
configuration settings on which they immediately depend in order to
work correctly.
</para>
<para>
However, if you are confident, you can opt to override those
enforced values with <literal>mkForce</literal>.
</para>
<section xml:id="sec-building-image-instructions">
<title>Practical Instructions</title>
<programlisting>
$ git clone https://github.com/NixOS/nixpkgs.git
$ cd nixpkgs/nixos
$ nix-build -A config.system.build.isoImage -I nixos-config=modules/installer/cd-dvd/installation-cd-minimal.nix default.nix
</programlisting>
<para>
To check the content of an ISO image, mount it like so:
</para>
<programlisting>
# mount -o loop -t iso9660 ./result/iso/cd.iso /mnt/iso
</programlisting>
</section>
<section xml:id="sec-building-image-tech-notes">
<title>Technical Notes</title>
<para>
The config value enforcement is implemented via
<literal>mkImageMediaOverride = mkOverride 60;</literal> and
therefore takes precedence over simple value assignments, but also
yields to <literal>mkForce</literal>.
</para>
<para>
This property allows image designers to implement in semantically
correct ways those configuration values upon which the correct
functioning of the image depends.
</para>
<para>
For example, the iso base image overrides those file systems which
it needs at a minimum for correct functioning, while the installer
base image overrides the entire file system layout because there
can't be any other guarantees on a live medium than those given by
the live medium itself. The latter is especially true before
formatting the target block device(s). On the other hand, the
netboot iso only overrides its minimum dependencies since netboot
images are always made-to-target.
</para>
</section>
</chapter>


@@ -84,7 +84,7 @@
</listitem>
<listitem>
<para>
The linux_latest kernel was updated to the 5.13 series. It
currently is not officially supported for use with the zfs
filesystem. If you use zfs, you should use a different kernel
version (either the LTS kernel, or track a specific one).


@@ -172,10 +172,104 @@
</para>
</listitem>
</itemizedlist>
<itemizedlist spacing="compact">
<listitem>
<para>
<link xlink:href="https://www.navidrome.org/">navidrome</link>,
a personal music streaming server with Subsonic-compatible
API. Available as
<link linkend="opt-services.navidrome.enable">navidrome</link>.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-21.11-incompatibilities">
<title>Backward Incompatibilities</title>
<itemizedlist>
<listitem>
<para>
The <literal>paperless</literal> module and package have been
removed. All users should migrate to the successor
<literal>paperless-ng</literal> instead. The Paperless project
<link xlink:href="https://github.com/the-paperless-project/paperless/commit/9b0063c9731f7c5f65b1852cb8caff97f5e40ba4">has
been archived</link> and advises all users to use
<literal>paperless-ng</literal> instead.
</para>
<para>
Users can use the <literal>services.paperless-ng</literal>
module as a replacement while noting the following
incompatibilities:
</para>
<itemizedlist spacing="compact">
<listitem>
<para>
<literal>services.paperless.ocrLanguages</literal> has no
replacement. Users should migrate to
<link xlink:href="options.html#opt-services.paperless-ng.extraConfig"><literal>services.paperless-ng.extraConfig</literal></link>
instead:
</para>
</listitem>
</itemizedlist>
<programlisting language="bash">
{
services.paperless-ng.extraConfig = {
# Provide languages as ISO 639-2 codes
# separated by a plus (+) sign.
# https://en.wikipedia.org/wiki/List_of_ISO_639-2_codes
PAPERLESS_OCR_LANGUAGE = &quot;deu+eng+jpn&quot;; # German &amp; English &amp; Japanese
};
}
</programlisting>
<itemizedlist>
<listitem>
<para>
If you previously specified
<literal>PAPERLESS_CONSUME_MAIL_*</literal> settings in
<literal>services.paperless.extraConfig</literal> you
should remove those options now. You now
<emphasis>must</emphasis> define those settings in the
admin interface of paperless-ng.
</para>
</listitem>
<listitem>
<para>
Option <literal>services.paperless.manage</literal> no
longer exists. Use the script at
<literal>${services.paperless-ng.dataDir}/paperless-ng-manage</literal>
instead. Note that this script only exists after the
<literal>paperless-ng</literal> service has been started
at least once.
</para>
</listitem>
<listitem>
<para>
After switching to the new system configuration you should
run the Django management command to reindex your
documents and optionally create a user, if you don't have
one already.
</para>
<para>
To do so, enter the data directory (the value of
<literal>services.paperless-ng.dataDir</literal>,
<literal>/var/lib/paperless</literal> by default), switch
to the paperless user and execute the management command
like below:
</para>
<programlisting>
$ cd /var/lib/paperless
$ su paperless -s /bin/sh
$ ./paperless-ng-manage document_index reindex
# if not already done create a user account, paperless-ng requires a login
$ ./paperless-ng-manage createsuperuser
Username (leave blank to use 'paperless'): my-user-name
Email address: me@example.com
Password: **********
Password (again): **********
Superuser created successfully.
</programlisting>
</listitem>
</itemizedlist>
</listitem>
<listitem> <listitem>
<para> <para>
The <literal>staticjinja</literal> package has been upgraded The <literal>staticjinja</literal> package has been upgraded
@@ -703,6 +797,36 @@
web UI this port needs to be opened in the firewall.
</para>
</listitem>
<listitem>
<para>
The <literal>varnish</literal> package was upgraded from 6.3.x
to 6.5.x. <literal>varnish60</literal> for the last LTS
release is also still available.
</para>
</listitem>
<listitem>
<para>
The <literal>kubernetes</literal> package was upgraded to
1.22. The <literal>kubernetes.apiserver.kubeletHttps</literal>
option was removed and HTTPS is always used.
</para>
</listitem>
<listitem>
<para>
The attribute <literal>linuxPackages_latest_hardened</literal>
was dropped because the hardened patches lag behind the
upstream kernel which made version bumps harder. If you want
to use a hardened kernel, please pin it explicitly with a
versioned attribute such as
<literal>linuxPackages_5_10_hardened</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>nomad</literal> package now defaults to a 1.1.x
release instead of 1.0.x
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-21.11-notable-changes">


@@ -64,14 +64,51 @@
</para>
<para>
On the graphical installer, you can configure the network, wifi included,
through NetworkManager. Using the <command>nmtui</command> program, you
can do so even in a non-graphical session. If you prefer to configure the
network manually, disable NetworkManager with
<command>systemctl stop NetworkManager</command>.
</para>
<para>
On the minimal installer, NetworkManager is not available, so configuration
must be performed manually. To configure the wifi, first start wpa_supplicant
with <command>sudo systemctl start wpa_supplicant</command>, then run
<command>wpa_cli</command>. For most home networks, you need to type
in the following commands:
<programlisting>
<prompt>&gt; </prompt>add_network
0
<prompt>&gt; </prompt>set_network 0 ssid "myhomenetwork"
OK
<prompt>&gt; </prompt>set_network 0 psk "mypassword"
OK
<prompt>&gt; </prompt>set_network 0 key_mgmt WPA-PSK
OK
<prompt>&gt; </prompt>enable_network 0
OK
</programlisting>
For enterprise networks, for example <emphasis>eduroam</emphasis>, instead do:
<programlisting>
<prompt>&gt; </prompt>add_network
0
<prompt>&gt; </prompt>set_network 0 ssid "eduroam"
OK
<prompt>&gt; </prompt>set_network 0 identity "myname@example.com"
OK
<prompt>&gt; </prompt>set_network 0 password "mypassword"
OK
<prompt>&gt; </prompt>set_network 0 key_mgmt WPA-EAP
OK
<prompt>&gt; </prompt>enable_network 0
OK
</programlisting>
When successfully connected, you should see a line such as this one
<programlisting>
&lt;3&gt;CTRL-EVENT-CONNECTED - Connection to 32:85:ab:ef:24:5c completed [id=0 id_str=]
</programlisting>
you can now leave <command>wpa_cli</command> by typing <command>quit</command>.
</para>
<para>


@@ -30,7 +30,7 @@ In addition to numerous new and upgraded packages, this release has the following
- Python optimizations were disabled again. Builds with optimizations enabled are not reproducible. Optimizations can now be enabled with an option.
- The linux_latest kernel was updated to the 5.13 series. It currently is not officially supported for use with the zfs filesystem. If you use zfs, you should use a different kernel version (either the LTS kernel, or track a specific one).
## New Services {#sec-release-21.05-new-services}


@@ -53,8 +53,58 @@ pt-services.clipcat.enable).
- [isso](https://posativ.org/isso/), a commenting server similar to Disqus.
  Available as [isso](#opt-services.isso.enable)
* [navidrome](https://www.navidrome.org/), a personal music streaming server with
  a Subsonic-compatible API. Available as [navidrome](#opt-services.navidrome.enable).
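A minimal configuration for the new module might look as follows (the `MusicFolder` path is illustrative):

```nix
{
  services.navidrome = {
    enable = true;
    settings = {
      Address = "0.0.0.0";
      Port = 4533;
      MusicFolder = "/mnt/music";
    };
  };
}
```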
## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
- The `paperless` module and package have been removed. All users should migrate to the
successor `paperless-ng` instead. The Paperless project [has been
archived](https://github.com/the-paperless-project/paperless/commit/9b0063c9731f7c5f65b1852cb8caff97f5e40ba4)
and advises all users to use `paperless-ng` instead.
Users can use the `services.paperless-ng` module as a replacement while noting the following incompatibilities:
- `services.paperless.ocrLanguages` has no replacement. Users should migrate to [`services.paperless-ng.extraConfig`](options.html#opt-services.paperless-ng.extraConfig) instead:
```nix
{
  services.paperless-ng.extraConfig = {
    # Provide languages as ISO 639-2 codes
    # separated by a plus (+) sign.
    # https://en.wikipedia.org/wiki/List_of_ISO_639-2_codes
    PAPERLESS_OCR_LANGUAGE = "deu+eng+jpn"; # German & English & Japanese
  };
}
```
- If you previously specified `PAPERLESS_CONSUME_MAIL_*` settings in
`services.paperless.extraConfig` you should remove those options now. You
now *must* define those settings in the admin interface of paperless-ng.
- Option `services.paperless.manage` no longer exists.
Use the script at `${services.paperless-ng.dataDir}/paperless-ng-manage` instead.
Note that this script only exists after the `paperless-ng` service has been
started at least once.
- After switching to the new system configuration you should run the Django
management command to reindex your documents and optionally create a user,
if you don't have one already.
To do so, enter the data directory (the value of
`services.paperless-ng.dataDir`, `/var/lib/paperless` by default), switch
to the paperless user and execute the management command as shown below:
```
$ cd /var/lib/paperless
$ su paperless -s /bin/sh
$ ./paperless-ng-manage document_index reindex
# if not already done create a user account, paperless-ng requires a login
$ ./paperless-ng-manage createsuperuser
Username (leave blank to use 'paperless'): my-user-name
Email address: me@example.com
Password: **********
Password (again): **********
Superuser created successfully.
```
- The `staticjinja` package has been upgraded from 1.0.4 to 3.0.1

- The `erigon` ethereum node has moved to a new database format in `2021-05-04`, and requires a full resync
@@ -179,6 +229,17 @@ pt-services.clipcat.enable).

  configures the address and port the web UI listens on; it defaults to `:9001`.
  To be able to access the web UI, this port needs to be opened in the firewall.
- The `varnish` package was upgraded from 6.3.x to 6.5.x. `varnish60` for the last LTS release is also still available.
- The `kubernetes` package was upgraded to 1.22. The `kubernetes.apiserver.kubeletHttps` option was removed and HTTPS is always used.
- The attribute `linuxPackages_latest_hardened` was dropped because the hardened patches
lag behind the upstream kernel which made version bumps harder. If you want to use
a hardened kernel, please pin it explicitly with a versioned attribute such as
`linuxPackages_5_10_hardened`.
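For example, a sketch of pinning a hardened kernel in `configuration.nix`:

```nix
{ pkgs, ... }: {
  # Pin an explicit hardened kernel series instead of the removed
  # linuxPackages_latest_hardened alias.
  boot.kernelPackages = pkgs.linuxPackages_5_10_hardened;
}
```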
- The `nomad` package now defaults to a 1.1.x release instead of 1.0.x
## Other Notable Changes {#sec-release-21.11-notable-changes}
- The setting [`services.openssh.logLevel`](options.html#opt-services.openssh.logLevel) was changed from `"VERBOSE"` to `"INFO"`. This brings NixOS in line with upstream and other Linux distributions, and reduces log spam on servers due to bruteforcing botnets.
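Hosts that rely on verbose sshd logging (for example for log-analysis tooling) can restore the previous behaviour; a minimal sketch:

```nix
{
  services.openssh.logLevel = "VERBOSE";
}
```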


@@ -1029,10 +1029,11 @@ if __name__ == "__main__":
    args = arg_parser.parse_args()

    global test_script
+   testscript = pathlib.Path(args.testscript).read_text()

    def test_script() -> None:
        with log.nested("running the VM test script"):
-           exec(pathlib.Path(args.testscript).read_text(), globals())
+           exec(testscript, globals())

    log = Logger()

@@ -1061,7 +1062,8 @@ if __name__ == "__main__":
            process.terminate()
        log.close()

+   interactive = args.interactive or (not bool(testscript))
    tic = time.time()
-   run_tests(args.interactive)
+   run_tests(interactive)
    toc = time.time()
    print("test script finished in {:.2f}s".format(toc - tic))
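The change above reads the test script exactly once and derives the interactive flag from it. A minimal standalone sketch of that pattern (names mirror the driver; the temporary script is illustrative):

```python
import pathlib
import tempfile

# Write an illustrative "test script" to disk, as the driver would receive one.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("result = 6 * 7\n")
    script_path = f.name

# Read the script once up front ...
testscript = pathlib.Path(script_path).read_text()

def test_script() -> None:
    # ... and exec() it against the module globals, so names it defines
    # (here: `result`) remain visible after the call.
    exec(testscript, globals())

# An empty script means there is nothing to run non-interactively.
interactive = not bool(testscript)

test_script()
print(result, interactive)
```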


@@ -186,6 +186,14 @@ rec {
        --set startScripts "''${vmStartScripts[*]}" \
        --set testScript "$out/test-script" \
        --set vlans '${toString vlans}'

+     ${lib.optionalString (testScript == "") ''
+       ln -s ${testDriver}/bin/nixos-test-driver $out/bin/nixos-run-vms
+       wrapProgram $out/bin/nixos-run-vms \
+         --set startScripts "''${vmStartScripts[*]}" \
+         --set testScript "${pkgs.writeText "start-all" "start_all(); join_all();"}" \
+         --set vlans '${toString vlans}'
+     ''}
    '');

  # Make a full-blown test
# Make a full-blown test # Make a full-blown test


@@ -3,7 +3,7 @@ pkgs: with pkgs.lib;

rec {
  # Copy configuration files to avoid having the entire sources in the system closure
-  copyFile = filePath: pkgs.runCommandNoCC (builtins.unsafeDiscardStringContext (builtins.baseNameOf filePath)) {} ''
+  copyFile = filePath: pkgs.runCommand (builtins.unsafeDiscardStringContext (builtins.baseNameOf filePath)) {} ''
    cp ${filePath} $out
  '';


@@ -42,7 +42,7 @@ let
  # nslcd normally reads configuration from /etc/nslcd.conf.
  # this file might contain secrets. We append those at runtime,
  # so redirect its location to something more temporary.
-  nslcdWrapped = runCommandNoCC "nslcd-wrapped" { nativeBuildInputs = [ makeWrapper ]; } ''
+  nslcdWrapped = runCommand "nslcd-wrapped" { nativeBuildInputs = [ makeWrapper ]; } ''
    mkdir -p $out/bin
    makeWrapper ${nss_pam_ldapd}/sbin/nslcd $out/bin/nslcd \
      --set LD_PRELOAD "${pkgs.libredirect}/lib/libredirect.so" \


@@ -190,7 +190,7 @@ in
    protocols.source = pkgs.iana-etc + "/etc/protocols";

    # /etc/hosts: Hostname-to-IP mappings.
-    hosts.source = pkgs.runCommandNoCC "hosts" {} ''
+    hosts.source = pkgs.runCommand "hosts" {} ''
      cat ${escapeShellArgs cfg.hostFiles} > $out
    '';


@@ -12,5 +12,6 @@ with lib;
    boot.loader.systemd-boot.consoleMode = mkDefault "1";

    # TODO Find reasonable defaults X11 & wayland
+    services.xserver.dpi = lib.mkDefault 192;
  };
}


@@ -30,6 +30,11 @@ with lib;
  # Add Memtest86+ to the CD.
  boot.loader.grub.memtest86.enable = true;

+  # Installation media cannot tolerate a host-config-defined file
+  # system layout on a fresh machine, before it has been formatted.
+  swapDevices = mkImageMediaOverride [ ];
+  fileSystems = mkImageMediaOverride config.lib.isoFileSystems;

  boot.postBootCommands = ''
    for o in $(</proc/cmdline); do
      case "$o" in


@@ -615,6 +615,55 @@ in
  };

+  # store them in lib so we can mkImageMediaOverride the
+  # entire file system layout in installation media (only)
+  config.lib.isoFileSystems = {
+    "/" = mkImageMediaOverride
+      {
+        fsType = "tmpfs";
+        options = [ "mode=0755" ];
+      };
+
+    # Note that /dev/root is a symlink to the actual root device
+    # specified on the kernel command line, created in the stage 1
+    # init script.
+    "/iso" = mkImageMediaOverride
+      { device = "/dev/root";
+        neededForBoot = true;
+        noCheck = true;
+      };
+
+    # In stage 1, mount a tmpfs on top of /nix/store (the squashfs
+    # image) to make this a live CD.
+    "/nix/.ro-store" = mkImageMediaOverride
+      { fsType = "squashfs";
+        device = "/iso/nix-store.squashfs";
+        options = [ "loop" ];
+        neededForBoot = true;
+      };
+
+    "/nix/.rw-store" = mkImageMediaOverride
+      { fsType = "tmpfs";
+        options = [ "mode=0755" ];
+        neededForBoot = true;
+      };
+
+    "/nix/store" = mkImageMediaOverride
+      { fsType = "overlay";
+        device = "overlay";
+        options = [
+          "lowerdir=/nix/.ro-store"
+          "upperdir=/nix/.rw-store/store"
+          "workdir=/nix/.rw-store/work"
+        ];
+        depends = [
+          "/nix/.ro-store"
+          "/nix/.rw-store/store"
+          "/nix/.rw-store/work"
+        ];
+      };
+  };

  config = {
    assertions = [
      {

@@ -653,54 +702,7 @@
        "boot.shell_on_fail"
      ];

-    fileSystems."/" =
-      # This module is often over-layed onto an existing host config
-      # that defines `/`. We use mkOverride 60 to override standard
-      # values, but at the same time leave room for mkForce values
-      # targeted at the image build.
-      { fsType = mkOverride 60 "tmpfs";
-        options = [ "mode=0755" ];
-      };
-
-    # Note that /dev/root is a symlink to the actual root device
-    # specified on the kernel command line, created in the stage 1
-    # init script.
-    fileSystems."/iso" =
-      { device = "/dev/root";
-        neededForBoot = true;
-        noCheck = true;
-      };
-
-    # In stage 1, mount a tmpfs on top of /nix/store (the squashfs
-    # image) to make this a live CD.
-    fileSystems."/nix/.ro-store" =
-      { fsType = "squashfs";
-        device = "/iso/nix-store.squashfs";
-        options = [ "loop" ];
-        neededForBoot = true;
-      };
-
-    fileSystems."/nix/.rw-store" =
-      { fsType = "tmpfs";
-        options = [ "mode=0755" ];
-        neededForBoot = true;
-      };
-
-    fileSystems."/nix/store" =
-      { fsType = "overlay";
-        device = "overlay";
-        options = [
-          "lowerdir=/nix/.ro-store"
-          "upperdir=/nix/.rw-store/store"
-          "workdir=/nix/.rw-store/work"
-        ];
-        depends = [
-          "/nix/.ro-store"
-          "/nix/.rw-store/store"
-          "/nix/.rw-store/work"
-        ];
-      };
+    fileSystems = config.lib.isoFileSystems;

    boot.initrd.availableKernelModules = [ "squashfs" "iso9660" "uas" "overlay" ];


@@ -29,31 +29,27 @@ with lib;
    then []
    else [ pkgs.grub2 pkgs.syslinux ]);

-  fileSystems."/" =
-    # This module is often over-layed onto an existing host config
-    # that defines `/`. We use mkOverride 60 to override standard
-    # values, but at the same time leave room for mkForce values
-    # targeted at the image build.
-    { fsType = mkOverride 60 "tmpfs";
+  fileSystems."/" = mkImageMediaOverride
+    { fsType = "tmpfs";
      options = [ "mode=0755" ];
    };

  # In stage 1, mount a tmpfs on top of /nix/store (the squashfs
  # image) to make this a live CD.
-  fileSystems."/nix/.ro-store" =
+  fileSystems."/nix/.ro-store" = mkImageMediaOverride
    { fsType = "squashfs";
      device = "../nix-store.squashfs";
      options = [ "loop" ];
      neededForBoot = true;
    };

-  fileSystems."/nix/.rw-store" =
+  fileSystems."/nix/.rw-store" = mkImageMediaOverride
    { fsType = "tmpfs";
      options = [ "mode=0755" ];
      neededForBoot = true;
    };

-  fileSystems."/nix/store" =
+  fileSystems."/nix/store" = mkImageMediaOverride
    { fsType = "overlay";
      device = "overlay";
      options = [


@@ -254,6 +254,7 @@
  ./services/audio/mopidy.nix
  ./services/audio/networkaudiod.nix
  ./services/audio/roon-bridge.nix
+  ./services/audio/navidrome.nix
  ./services/audio/roon-server.nix
  ./services/audio/slimserver.nix
  ./services/audio/snapserver.nix

@@ -550,7 +551,7 @@
  ./services/misc/ombi.nix
  ./services/misc/osrm.nix
  ./services/misc/packagekit.nix
-  ./services/misc/paperless.nix
+  ./services/misc/paperless-ng.nix
  ./services/misc/parsoid.nix
  ./services/misc/plex.nix
  ./services/misc/plikd.nix


@@ -54,7 +54,12 @@ with lib;
        An ssh daemon is running. You then must set a password
        for either "root" or "nixos" with `passwd` or add an ssh key
        to /home/nixos/.ssh/authorized_keys to be able to login.
+
+        If you need a wireless connection, type
+        `sudo systemctl start wpa_supplicant` and configure a
+        network using `wpa_cli`. See the NixOS manual for details.
      '' + optionalString config.services.xserver.enable ''
        Type `sudo systemctl start display-manager' to
        start the graphical user interface.
      '';

@@ -71,6 +76,7 @@ with lib;
    # Enable wpa_supplicant, but don't start it by default.
    networking.wireless.enable = mkDefault true;
+    networking.wireless.userControlled.enable = true;
    systemd.services.wpa_supplicant.wantedBy = mkOverride 50 [];

    # Tell the Nix evaluator to garbage collect more aggressively.


@@ -828,7 +828,7 @@ in
    };

    challengeResponsePath = mkOption {
      default = null;
-      type = types.path;
+      type = types.nullOr types.path;
      description = ''
        If not null, set the path used by yubico pam module where the challenge expected response is stored.


@@ -14,17 +14,6 @@ in
  services.hqplayerd = {
    enable = mkEnableOption "HQPlayer Embedded";

-    licenseFile = mkOption {
-      type = types.nullOr types.path;
-      default = null;
-      description = ''
-        Path to the HQPlayer license key file.
-
-        Without this, the service will run in trial mode and restart every 30
-        minutes.
-      '';
-    };
-
    auth = {
      username = mkOption {
        type = types.nullOr types.str;

@@ -49,11 +38,32 @@ in
      };
    };

+    licenseFile = mkOption {
+      type = types.nullOr types.path;
+      default = null;
+      description = ''
+        Path to the HQPlayer license key file.
+
+        Without this, the service will run in trial mode and restart every 30
+        minutes.
+      '';
+    };
+
    openFirewall = mkOption {
      type = types.bool;
      default = false;
      description = ''
-        Open TCP port 8088 in the firewall for the server.
+        Opens ports needed for the WebUI and controller API.
+      '';
+    };
+
+    config = mkOption {
+      type = types.nullOr types.lines;
+      default = null;
+      description = ''
+        HQplayer daemon configuration, written to /etc/hqplayer/hqplayerd.xml.
+
+        Refer to ${pkg}/share/doc/hqplayerd/readme.txt for possible values.
      '';
    };
  };

@@ -70,6 +80,7 @@ in
    environment = {
      etc = {
+        "hqplayer/hqplayerd.xml" = mkIf (cfg.config != null) { source = pkgs.writeText "hqplayerd.xml" cfg.config; };
        "hqplayer/hqplayerd4-key.xml" = mkIf (cfg.licenseFile != null) { source = cfg.licenseFile; };
        "modules-load.d/taudio2.conf".source = "${pkg}/etc/modules-load.d/taudio2.conf";
      };

@@ -77,7 +88,7 @@
    };

    networking.firewall = mkIf cfg.openFirewall {
-      allowedTCPPorts = [ 8088 ];
+      allowedTCPPorts = [ 8088 4321 ];
    };

    services.udev.packages = [ pkg ];

@@ -99,6 +110,8 @@ in
    unitConfig.ConditionPathExists = [ configDir stateDir ];

+    restartTriggers = optionals (cfg.config != null) [ config.environment.etc."hqplayer/hqplayerd.xml".source ];
+
    preStart = ''
      cp -r "${pkg}/var/lib/hqplayer/web" "${stateDir}"
      chmod -R u+wX "${stateDir}/web"


@@ -0,0 +1,71 @@
{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.navidrome;
  settingsFormat = pkgs.formats.json {};
in {
  options = {
    services.navidrome = {
      enable = mkEnableOption pkgs.navidrome.meta.description;

      settings = mkOption rec {
        type = settingsFormat.type;
        apply = recursiveUpdate default;
        default = {
          Address = "127.0.0.1";
          Port = 4533;
        };
        example = {
          MusicFolder = "/mnt/music";
        };
        description = ''
          Configuration for Navidrome, see <link xlink:href="https://www.navidrome.org/docs/usage/configuration-options/"/> for supported values.
        '';
      };
    };
  };

  config = mkIf cfg.enable {
    systemd.services.navidrome = {
      description = "Navidrome Media Server";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        ExecStart = ''
          ${pkgs.navidrome}/bin/navidrome --configfile ${settingsFormat.generate "navidrome.json" cfg.settings}
        '';
        DynamicUser = true;
        StateDirectory = "navidrome";
        WorkingDirectory = "/var/lib/navidrome";
        RuntimeDirectory = "navidrome";
        RootDirectory = "/run/navidrome";
        ReadWritePaths = "";
        BindReadOnlyPaths = [
          builtins.storeDir
        ] ++ lib.optional (cfg.settings ? MusicFolder) cfg.settings.MusicFolder;
        CapabilityBoundingSet = "";
        RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
        RestrictNamespaces = true;
        PrivateDevices = true;
        PrivateUsers = true;
        ProtectClock = true;
        ProtectControlGroups = true;
        ProtectHome = true;
        ProtectKernelLogs = true;
        ProtectKernelModules = true;
        ProtectKernelTunables = true;
        SystemCallArchitectures = "native";
        SystemCallFilter = [ "@system-service" "~@privileged" "~@resources" ];
        RestrictRealtime = true;
        LockPersonality = true;
        MemoryDenyWriteExecute = true;
        UMask = "0066";
        ProtectHostname = true;
      };
    };
  };
}


@@ -102,7 +102,7 @@ let
  mkWrapperDrv = {
    original, name, set ? {}
  }:
-    pkgs.runCommandNoCC "${name}-wrapper" {
+    pkgs.runCommand "${name}-wrapper" {
      buildInputs = [ pkgs.makeWrapper ];
    } (with lib; ''
      makeWrapper "${original}" "$out/bin/${name}" \


@@ -79,6 +79,33 @@ in
      '';
    };
localSourceAllow = mkOption {
type = types.listOf types.str;
# Permissions snapshot and destroy are in case --no-sync-snap is not used
default = [ "bookmark" "hold" "send" "snapshot" "destroy" ];
description = ''
Permissions granted for the <option>services.syncoid.user</option> user
for local source datasets. See
<link xlink:href="https://openzfs.github.io/openzfs-docs/man/8/zfs-allow.8.html"/>
for available permissions.
'';
};
localTargetAllow = mkOption {
type = types.listOf types.str;
default = [ "change-key" "compression" "create" "mount" "mountpoint" "receive" "rollback" ];
example = [ "create" "mount" "receive" "rollback" ];
description = ''
Permissions granted for the <option>services.syncoid.user</option> user
for local target datasets. See
<link xlink:href="https://openzfs.github.io/openzfs-docs/man/8/zfs-allow.8.html"/>
for available permissions.
Make sure to include the <literal>change-key</literal> permission if you send raw encrypted datasets,
the <literal>compression</literal> permission if you send raw compressed datasets, and so on.
For remote target datasets you'll have to set your remote user permissions by yourself.
'';
};
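Put together, a sketch of a local replication job using the new options (dataset names are illustrative):

```nix
{
  services.syncoid = {
    enable = true;
    # Narrow the delegated permissions if you do not use --no-sync-snap:
    localSourceAllow = [ "bookmark" "hold" "send" "snapshot" ];
    localTargetAllow = [ "create" "mount" "receive" "rollback" ];
    commands."pool/data".target = "backup/data";
  };
}
```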
    commonArgs = mkOption {
      type = types.listOf types.str;
      default = [ ];

@@ -133,6 +160,30 @@ in
      '';
    };
localSourceAllow = mkOption {
type = types.listOf types.str;
description = ''
Permissions granted for the <option>services.syncoid.user</option> user
for local source datasets. See
<link xlink:href="https://openzfs.github.io/openzfs-docs/man/8/zfs-allow.8.html"/>
for available permissions.
Defaults to <option>services.syncoid.localSourceAllow</option> option.
'';
};
localTargetAllow = mkOption {
type = types.listOf types.str;
description = ''
Permissions granted for the <option>services.syncoid.user</option> user
for local target datasets. See
<link xlink:href="https://openzfs.github.io/openzfs-docs/man/8/zfs-allow.8.html"/>
for available permissions.
Make sure to include the <literal>change-key</literal> permission if you send raw encrypted datasets,
the <literal>compression</literal> permission if you send raw compressed datasets, and so on.
For remote target datasets you'll have to set your remote user permissions by yourself.
'';
};
        sendOptions = mkOption {
          type = types.separatedString " ";
          default = "";

@@ -179,6 +230,8 @@ in
        config = {
          source = mkDefault name;
          sshKey = mkDefault cfg.sshKey;
          localSourceAllow = mkDefault cfg.localSourceAllow;
          localTargetAllow = mkDefault cfg.localTargetAllow;
        };
      }));
      default = { };
@@ -221,13 +274,11 @@ in
        path = [ "/run/booted-system/sw/bin/" ];
        serviceConfig = {
          ExecStartPre =
-           # Permissions snapshot and destroy are in case --no-sync-snap is not used
-           (map (buildAllowCommand "allow" [ "bookmark" "hold" "send" "snapshot" "destroy" ]) (localDatasetName c.source)) ++
-           (map (buildAllowCommand "allow" [ "create" "mount" "receive" "rollback" ]) (localDatasetName c.target));
+           (map (buildAllowCommand "allow" c.localSourceAllow) (localDatasetName c.source)) ++
+           (map (buildAllowCommand "allow" c.localTargetAllow) (localDatasetName c.target));
          ExecStopPost =
-           # Permissions snapshot and destroy are in case --no-sync-snap is not used
-           (map (buildAllowCommand "unallow" [ "bookmark" "hold" "send" "snapshot" "destroy" ]) (localDatasetName c.source)) ++
-           (map (buildAllowCommand "unallow" [ "create" "mount" "receive" "rollback" ]) (localDatasetName c.target));
+           (map (buildAllowCommand "unallow" c.localSourceAllow) (localDatasetName c.source)) ++
+           (map (buildAllowCommand "unallow" c.localTargetAllow) (localDatasetName c.target));
          ExecStart = lib.escapeShellArgs ([ "${pkgs.sanoid}/bin/syncoid" ]
            ++ optionals c.useCommonArgs cfg.commonArgs
            ++ optional c.recursive "-r"


@@ -83,8 +83,8 @@ let
      };

      syncmode = mkOption {
-        type = types.enum [ "fast" "full" "light" ];
-        default = "fast";
+        type = types.enum [ "snap" "fast" "full" "light" ];
+        default = "snap";
        description = "Blockchain sync mode.";
      };


@@ -190,12 +190,6 @@ in
        type = nullOr path;
      };

-      kubeletHttps = mkOption {
-        description = "Whether to use https for connections to kubelet.";
-        default = true;
-        type = bool;
-      };
-
      preferredAddressTypes = mkOption {
        description = "List of the preferred NodeAddressTypes to use for kubelet connections.";
        type = nullOr str;

@@ -365,7 +359,6 @@ in
            "--feature-gates=${concatMapStringsSep "," (feature: "${feature}=true") cfg.featureGates}"} \
          ${optionalString (cfg.basicAuthFile != null)
            "--basic-auth-file=${cfg.basicAuthFile}"} \
-          --kubelet-https=${boolToString cfg.kubeletHttps} \
          ${optionalString (cfg.kubeletClientCaFile != null)
            "--kubelet-certificate-authority=${cfg.kubeletClientCaFile}"} \
          ${optionalString (cfg.kubeletClientCertFile != null)


@@ -58,7 +58,7 @@ in
    services.kubernetes.addonManager.bootstrapAddons = mkIf ((storageBackend == "kubernetes") && (elem "RBAC" top.apiserver.authorizationMode)) {
      flannel-cr = {
-        apiVersion = "rbac.authorization.k8s.io/v1beta1";
+        apiVersion = "rbac.authorization.k8s.io/v1";
        kind = "ClusterRole";
        metadata = { name = "flannel"; };
        rules = [{

@@ -79,7 +79,7 @@
      };

      flannel-crb = {
-        apiVersion = "rbac.authorization.k8s.io/v1beta1";
+        apiVersion = "rbac.authorization.k8s.io/v1";
        kind = "ClusterRoleBinding";
        metadata = { name = "flannel"; };
        roleRef = {


@@ -10,7 +10,7 @@ let
  jsonType = (pkgs.formats.json {}).type;

-  configFile = pkgs.runCommandNoCC "matrix-appservice-irc.yml" {
+  configFile = pkgs.runCommand "matrix-appservice-irc.yml" {
    # Because this program will be run at build time, we need `nativeBuildInputs`
    nativeBuildInputs = [ (pkgs.python3.withPackages (ps: [ ps.pyyaml ps.jsonschema ])) ];
    preferLocalBuild = true;


@@ -0,0 +1,304 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.paperless-ng;
defaultUser = "paperless";
env = {
PAPERLESS_DATA_DIR = cfg.dataDir;
PAPERLESS_MEDIA_ROOT = cfg.mediaDir;
PAPERLESS_CONSUMPTION_DIR = cfg.consumptionDir;
GUNICORN_CMD_ARGS = "--bind=${cfg.address}:${toString cfg.port}";
} // lib.mapAttrs (_: toString) cfg.extraConfig;
manage = let
setupEnv = lib.concatStringsSep "\n" (mapAttrsToList (name: val: "export ${name}=\"${val}\"") env);
in pkgs.writeShellScript "manage" ''
${setupEnv}
exec ${cfg.package}/bin/paperless-ng "$@"
'';
# Secure the services
defaultServiceConfig = {
TemporaryFileSystem = "/:ro";
BindReadOnlyPaths = [
"/nix/store"
"-/etc/resolv.conf"
"-/etc/nsswitch.conf"
"-/etc/hosts"
"-/etc/localtime"
];
BindPaths = [
cfg.consumptionDir
cfg.dataDir
cfg.mediaDir
];
CapabilityBoundingSet = "";
# ProtectClock adds DeviceAllow=char-rtc r
DeviceAllow = "";
LockPersonality = true;
MemoryDenyWriteExecute = true;
NoNewPrivileges = true;
PrivateDevices = true;
PrivateMounts = true;
# Needs to connect to redis
# PrivateNetwork = true;
PrivateTmp = true;
PrivateUsers = true;
ProcSubset = "pid";
ProtectClock = true;
# Breaks if the home dir of the user is in /home
# Also does not add much value in combination with the TemporaryFileSystem.
# ProtectHome = true;
ProtectHostname = true;
# Would re-mount paths ignored by temporary root
#ProtectSystem = "strict";
ProtectControlGroups = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectProc = "invisible";
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallArchitectures = "native";
SystemCallFilter = [ "@system-service" "~@privileged @resources @setuid @keyring" ];
# Does not work well with the temporary root
#UMask = "0066";
};
in
{
meta.maintainers = with maintainers; [ earvstedt Flakebi ];
imports = [
(mkRemovedOptionModule [ "services" "paperless"] ''
The paperless module has been removed as the upstream project died.
Users should migrate to the paperless-ng module (services.paperless-ng).
More information can be found in the NixOS 21.11 release notes.
'')
];
options.services.paperless-ng = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable Paperless-ng.
When started, the Paperless database is automatically created if it doesn't
exist and updated if the Paperless package has changed.
Both tasks are achieved by running a Django migration.
A script to manage the Paperless instance (by wrapping Django's manage.py) is linked to
<literal>''${dataDir}/paperless-ng-manage</literal>.
'';
};
dataDir = mkOption {
type = types.str;
default = "/var/lib/paperless";
description = "Directory to store the Paperless data.";
};
mediaDir = mkOption {
type = types.str;
default = "${cfg.dataDir}/media";
defaultText = "\${dataDir}/media";
description = "Directory to store the Paperless documents.";
};
consumptionDir = mkOption {
type = types.str;
default = "${cfg.dataDir}/consume";
defaultText = "\${dataDir}/consume";
description = "Directory from which new documents are imported.";
};
consumptionDirIsPublic = mkOption {
type = types.bool;
default = false;
description = "Whether all users can write to the consumption dir.";
};
passwordFile = mkOption {
type = types.nullOr types.path;
default = null;
example = "/run/keys/paperless-ng-password";
description = ''
A file containing the superuser password.
A superuser is required to access the web interface.
If unset, you can create a superuser manually by running
<literal>''${dataDir}/paperless-ng-manage createsuperuser</literal>.
The default superuser name is <literal>admin</literal>. To change it, set
option <option>extraConfig.PAPERLESS_ADMIN_USER</option>.
WARNING: When changing the superuser name after the initial setup, the old superuser
will continue to exist.
To disable login for the web interface, set the following:
<literal>extraConfig.PAPERLESS_AUTO_LOGIN_USERNAME = "admin";</literal>.
WARNING: Only use this on a trusted system without internet access to Paperless.
'';
};
address = mkOption {
type = types.str;
default = "localhost";
description = "Web interface address.";
};
port = mkOption {
type = types.port;
default = 28981;
description = "Web interface port.";
};
extraConfig = mkOption {
type = types.attrs;
default = {};
description = ''
Extra paperless-ng config options.
See <link xlink:href="https://paperless-ng.readthedocs.io/en/latest/configuration.html">the documentation</link>
for available options.
'';
example = literalExample ''
{
PAPERLESS_OCR_LANGUAGE = "deu+eng";
}
'';
};
user = mkOption {
type = types.str;
default = defaultUser;
description = "User under which Paperless runs.";
};
package = mkOption {
type = types.package;
default = pkgs.paperless-ng;
defaultText = "pkgs.paperless-ng";
description = "The Paperless package to use.";
};
};
config = mkIf cfg.enable {
# Enable redis if no special url is set
services.redis.enable = mkIf (!hasAttr "PAPERLESS_REDIS" env) true;
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' - ${cfg.user} ${config.users.users.${cfg.user}.group} - -"
"d '${cfg.mediaDir}' - ${cfg.user} ${config.users.users.${cfg.user}.group} - -"
(if cfg.consumptionDirIsPublic then
"d '${cfg.consumptionDir}' 777 - - - -"
else
"d '${cfg.consumptionDir}' - ${cfg.user} ${config.users.users.${cfg.user}.group} - -"
)
];
systemd.services.paperless-ng-server = {
description = "Paperless document server";
serviceConfig = defaultServiceConfig // {
User = cfg.user;
ExecStart = "${cfg.package}/bin/paperless-ng qcluster";
Restart = "on-failure";
};
environment = env;
wantedBy = [ "multi-user.target" ];
wants = [ "paperless-ng-consumer.service" "paperless-ng-web.service" ];
preStart = ''
ln -sf ${manage} ${cfg.dataDir}/paperless-ng-manage
# Auto-migrate on first run or if the package has changed
versionFile="${cfg.dataDir}/src-version"
if [[ $(cat "$versionFile" 2>/dev/null) != ${cfg.package} ]]; then
${cfg.package}/bin/paperless-ng migrate
echo ${cfg.package} > "$versionFile"
fi
''
+ optionalString (cfg.passwordFile != null) ''
export PAPERLESS_ADMIN_USER="''${PAPERLESS_ADMIN_USER:-admin}"
export PAPERLESS_ADMIN_PASSWORD=$(cat "${cfg.dataDir}/superuser-password")
superuserState="$PAPERLESS_ADMIN_USER:$PAPERLESS_ADMIN_PASSWORD"
superuserStateFile="${cfg.dataDir}/superuser-state"
if [[ $(cat "$superuserStateFile" 2>/dev/null) != $superuserState ]]; then
${cfg.package}/bin/paperless-ng manage_superuser
echo "$superuserState" > "$superuserStateFile"
fi
'';
};
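The preStart guard above re-runs Django migrations only when the package's store path differs from the one recorded in `src-version`. A rough stand-alone sketch of that logic, with an invented store path and a temporary data directory standing in for the module's real values:

```shell
#!/bin/sh
# Illustrative sketch of the version-file migration guard; "$pkg" stands in
# for the ${cfg.package} store path and "$dataDir" for cfg.dataDir.
pkg="/nix/store/example-paperless-ng-1.4.5"
dataDir="$(mktemp -d)"
versionFile="$dataDir/src-version"

maybe_migrate() {
  # Only migrate when the recorded path differs from the current package
  if [ "$(cat "$versionFile" 2>/dev/null)" != "$pkg" ]; then
    echo "migrating to $pkg"
    echo "$pkg" > "$versionFile"
  else
    echo "up to date"
  fi
}

maybe_migrate   # first run migrates and records the path
maybe_migrate   # second run is a no-op
```

The same recorded-state trick is reused below for the superuser password, via `superuser-state`.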
# Password copying can't be implemented as a privileged preStart script
# in 'paperless-ng-server' because 'defaultServiceConfig' limits the filesystem
# paths accessible by the service.
systemd.services.paperless-ng-copy-password = mkIf (cfg.passwordFile != null) {
requiredBy = [ "paperless-ng-server.service" ];
before = [ "paperless-ng-server.service" ];
serviceConfig = {
ExecStart = ''
${pkgs.coreutils}/bin/install --mode 600 --owner '${cfg.user}' --compare \
'${cfg.passwordFile}' '${cfg.dataDir}/superuser-password'
'';
Type = "oneshot";
};
};
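The copy service relies on GNU coreutils `install --compare`, which rewrites the destination only when content, mode, or ownership differ, so repeated runs are cheap no-ops. A minimal illustration with temporary paths (the `--owner` flag is omitted here, since it would require root):

```shell
# Minimal demonstration of `install --mode 600 --compare`; the paths are
# temporary stand-ins for cfg.passwordFile and ${dataDir}/superuser-password.
src="$(mktemp)"
dst="$(mktemp -d)/superuser-password"
echo "s3cret" > "$src"

install --mode 600 --compare "$src" "$dst"   # first call copies the file
install --mode 600 --compare "$src" "$dst"   # second call detects no change
```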
systemd.services.paperless-ng-consumer = {
description = "Paperless document consumer";
serviceConfig = defaultServiceConfig // {
User = cfg.user;
ExecStart = "${cfg.package}/bin/paperless-ng document_consumer";
Restart = "on-failure";
};
environment = env;
# Bind to `paperless-ng-server` so that the consumer never runs
# during migrations
bindsTo = [ "paperless-ng-server.service" ];
after = [ "paperless-ng-server.service" ];
};
systemd.services.paperless-ng-web = {
description = "Paperless web server";
serviceConfig = defaultServiceConfig // {
User = cfg.user;
ExecStart = ''
${pkgs.python3Packages.gunicorn}/bin/gunicorn \
-c ${cfg.package}/lib/paperless-ng/gunicorn.conf.py paperless.asgi:application
'';
Restart = "on-failure";
AmbientCapabilities = "CAP_NET_BIND_SERVICE";
CapabilityBoundingSet = "CAP_NET_BIND_SERVICE";
# gunicorn needs setuid
SystemCallFilter = defaultServiceConfig.SystemCallFilter ++ [ "@setuid" ];
};
environment = env // {
PATH = mkForce cfg.package.path;
PYTHONPATH = "${cfg.package.pythonPath}:${cfg.package}/lib/paperless-ng/src";
};
# Bind to `paperless-ng-server` so that the web server never runs
# during migrations
bindsTo = [ "paperless-ng-server.service" ];
after = [ "paperless-ng-server.service" ];
};
users = optionalAttrs (cfg.user == defaultUser) {
users.${defaultUser} = {
group = defaultUser;
uid = config.ids.uids.paperless;
home = cfg.dataDir;
};
groups.${defaultUser} = {
gid = config.ids.gids.paperless;
};
};
};
}


@ -1,183 +0,0 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.paperless;
defaultUser = "paperless";
manage = cfg.package.withConfig {
config = {
PAPERLESS_CONSUMPTION_DIR = cfg.consumptionDir;
PAPERLESS_INLINE_DOC = "true";
PAPERLESS_DISABLE_LOGIN = "true";
} // cfg.extraConfig;
inherit (cfg) dataDir ocrLanguages;
paperlessPkg = cfg.package;
};
in
{
options.services.paperless = {
enable = mkOption {
type = lib.types.bool;
default = false;
description = ''
Enable Paperless.
When started, the Paperless database is automatically created if it doesn't
exist and updated if the Paperless package has changed.
Both tasks are achieved by running a Django migration.
'';
};
dataDir = mkOption {
type = types.str;
default = "/var/lib/paperless";
description = "Directory to store the Paperless data.";
};
consumptionDir = mkOption {
type = types.str;
default = "${cfg.dataDir}/consume";
defaultText = "\${dataDir}/consume";
description = "Directory from which new documents are imported.";
};
consumptionDirIsPublic = mkOption {
type = types.bool;
default = false;
description = "Whether all users can write to the consumption dir.";
};
ocrLanguages = mkOption {
type = with types; nullOr (listOf str);
default = null;
description = ''
Languages available for OCR via Tesseract, specified as
<literal>ISO 639-2/T</literal> language codes.
If unset, defaults to all available languages.
'';
example = [ "eng" "spa" "jpn" ];
};
address = mkOption {
type = types.str;
default = "localhost";
description = "Server listening address.";
};
port = mkOption {
type = types.port;
default = 28981;
description = "Server port to listen on.";
};
extraConfig = mkOption {
type = types.attrs;
default = {};
description = ''
Extra paperless config options.
The config values are evaluated as double-quoted Bash string literals.
See <literal>paperless-src/paperless.conf.example</literal> for available options.
To enable user authentication, set <literal>PAPERLESS_DISABLE_LOGIN = "false"</literal>
and run the shell command <literal>$dataDir/paperless-manage createsuperuser</literal>.
To define secret options without storing them in /nix/store, use the following pattern:
<literal>PAPERLESS_PASSPHRASE = "$(&lt; /etc/my_passphrase_file)"</literal>
'';
example = literalExample ''
{
PAPERLESS_OCR_LANGUAGE = "deu";
}
'';
};
user = mkOption {
type = types.str;
default = defaultUser;
description = "User under which Paperless runs.";
};
package = mkOption {
type = types.package;
default = pkgs.paperless;
defaultText = "pkgs.paperless";
description = "The Paperless package to use.";
};
manage = mkOption {
type = types.package;
readOnly = true;
default = manage;
description = ''
A script to manage the Paperless instance.
It wraps Django's manage.py and is also available at
<literal>$dataDir/manage-paperless</literal>
'';
};
};
config = mkIf cfg.enable {
systemd.tmpfiles.rules = [
"d '${cfg.dataDir}' - ${cfg.user} ${config.users.users.${cfg.user}.group} - -"
] ++ (optional cfg.consumptionDirIsPublic
"d '${cfg.consumptionDir}' 777 - - - -"
# If the consumption dir is not created here, it's automatically created by
# 'manage' with the default permissions.
);
systemd.services.paperless-consumer = {
description = "Paperless document consumer";
serviceConfig = {
User = cfg.user;
ExecStart = "${manage} document_consumer";
Restart = "always";
};
after = [ "systemd-tmpfiles-setup.service" ];
wantedBy = [ "multi-user.target" ];
preStart = ''
if [[ $(readlink ${cfg.dataDir}/paperless-manage) != ${manage} ]]; then
ln -sf ${manage} ${cfg.dataDir}/paperless-manage
fi
${manage.setupEnv}
# Auto-migrate on first run or if the package has changed
versionFile="$PAPERLESS_DBDIR/src-version"
if [[ $(cat "$versionFile" 2>/dev/null) != ${cfg.package} ]]; then
python $paperlessSrc/manage.py migrate
echo ${cfg.package} > "$versionFile"
fi
'';
};
systemd.services.paperless-server = {
description = "Paperless document server";
serviceConfig = {
User = cfg.user;
ExecStart = "${manage} runserver --noreload ${cfg.address}:${toString cfg.port}";
Restart = "always";
};
# Bind to `paperless-consumer` so that the server never runs
# during migrations
bindsTo = [ "paperless-consumer.service" ];
after = [ "paperless-consumer.service" ];
wantedBy = [ "multi-user.target" ];
};
users = optionalAttrs (cfg.user == defaultUser) {
users.${defaultUser} = {
group = defaultUser;
uid = config.ids.uids.paperless;
home = cfg.dataDir;
};
groups.${defaultUser} = {
gid = config.ids.gids.paperless;
};
};
};
}


@ -84,7 +84,7 @@ in
      (rev: archs:
        lib.attrsets.mapAttrsToList
          (arch: image:
-           pkgs.runCommandNoCC "buildsrht-images" { } ''
+           pkgs.runCommand "buildsrht-images" { } ''
              mkdir -p $out/${distro}/${rev}/${arch}
              ln -s ${image}/*.qcow2 $out/${distro}/${rev}/${arch}/root.img.qcow2
            '')
@ -97,7 +97,7 @@ in
        "${pkgs.sourcehut.buildsrht}/lib/images"
      ];
    };
-   image_dir = pkgs.runCommandNoCC "builds.sr.ht-worker-images" { } ''
+   image_dir = pkgs.runCommand "builds.sr.ht-worker-images" { } ''
      mkdir -p $out/images
      cp -Lr ${image_dir_pre}/* $out/images
    '';


@ -10,7 +10,7 @@ let
  # a wrapper that verifies that the configuration is valid
  promtoolCheck = what: name: file:
    if cfg.checkConfig then
-     pkgs.runCommandNoCCLocal
+     pkgs.runCommandLocal
        "${name}-${replaceStrings [" "] [""] what}-checked"
        { buildInputs = [ cfg.package ]; } ''
        ln -s ${file} $out
@ -19,7 +19,7 @@ let
  # Pretty-print JSON to a file
  writePrettyJSON = name: x:
-   pkgs.runCommandNoCCLocal name {} ''
+   pkgs.runCommandLocal name {} ''
      echo '${builtins.toJSON x}' | ${pkgs.jq}/bin/jq . > $out
    '';


@ -63,7 +63,7 @@ let
      };
    };
- toYAML = name: attrs: pkgs.runCommandNoCC name {
+ toYAML = name: attrs: pkgs.runCommand name {
    preferLocalBuild = true;
    json = builtins.toFile "${name}.json" (builtins.toJSON attrs);
    nativeBuildInputs = [ pkgs.remarshal ];


@ -39,7 +39,7 @@ let
  };

  # Additional /etc/hosts entries for peers with an associated hostname
- cjdnsExtraHosts = pkgs.runCommandNoCC "cjdns-hosts" {} ''
+ cjdnsExtraHosts = pkgs.runCommand "cjdns-hosts" {} ''
    exec >$out
    ${concatStringsSep "\n" (mapAttrsToList (k: v:
      optionalString (v.hostname != "")


@ -150,6 +150,7 @@ in {
    useDHCP = false;
    wireless = {
      enable = mkIf (!enableIwd) true;
+     dbusControlled = true;
      iwd = mkIf enableIwd {
        enable = true;
      };


@ -549,11 +549,7 @@ in
        LogLevel ${cfg.logLevel}

-       ${if cfg.useDns then ''
-         UseDNS yes
-       '' else ''
-         UseDNS no
-       ''}
+       UseDNS ${if cfg.useDns then "yes" else "no"}
      '';


@ -8,28 +8,108 @@ let
    else pkgs.wpa_supplicant;

  cfg = config.networking.wireless;

- configFile = if cfg.networks != {} || cfg.extraConfig != "" || cfg.userControlled.enable then pkgs.writeText "wpa_supplicant.conf" ''
-   ${optionalString cfg.userControlled.enable ''
-     ctrl_interface=DIR=/run/wpa_supplicant GROUP=${cfg.userControlled.group}
-     update_config=1''}
-   ${cfg.extraConfig}
-   ${concatStringsSep "\n" (mapAttrsToList (ssid: config: with config; let
-     key = if psk != null
-       then ''"${psk}"''
-       else pskRaw;
-     baseAuth = if key != null
-       then "psk=${key}"
-       else "key_mgmt=NONE";
-   in ''
-     network={
-       ssid="${ssid}"
-       ${optionalString (priority != null) ''priority=${toString priority}''}
-       ${optionalString hidden "scan_ssid=1"}
-       ${if (auth != null) then auth else baseAuth}
-       ${extraConfig}
-     }
-   '') cfg.networks)}
- '' else "/etc/wpa_supplicant.conf";
+ # Content of wpa_supplicant.conf
+ generatedConfig = concatStringsSep "\n" (
+   (mapAttrsToList mkNetwork cfg.networks)
+   ++ optional cfg.userControlled.enable (concatStringsSep "\n"
+     [ "ctrl_interface=/run/wpa_supplicant"
+       "ctrl_interface_group=${cfg.userControlled.group}"
+       "update_config=1"
+     ])
+   ++ optional cfg.scanOnLowSignal ''bgscan="simple:30:-70:3600"''
+   ++ optional (cfg.extraConfig != "") cfg.extraConfig);
+
+ configFile =
+   if cfg.networks != {} || cfg.extraConfig != "" || cfg.userControlled.enable
+   then pkgs.writeText "wpa_supplicant.conf" generatedConfig
+   else "/etc/wpa_supplicant.conf";
+
+ # Creates a network block for wpa_supplicant.conf
+ mkNetwork = ssid: opts:
+   let
+     quote = x: ''"${x}"'';
+     indent = x: "  " + x;
pskString = if opts.psk != null
then quote opts.psk
else opts.pskRaw;
options = [
"ssid=${quote ssid}"
(if pskString != null || opts.auth != null
then "key_mgmt=${concatStringsSep " " opts.authProtocols}"
else "key_mgmt=NONE")
] ++ optional opts.hidden "scan_ssid=1"
++ optional (pskString != null) "psk=${pskString}"
++ optionals (opts.auth != null) (filter (x: x != "") (splitString "\n" opts.auth))
++ optional (opts.priority != null) "priority=${toString opts.priority}"
++ optional (opts.extraConfig != "") opts.extraConfig;
in ''
network={
${concatMapStringsSep "\n" indent options}
}
'';
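For intuition, a hypothetical shell analogue of the mkNetwork function above: it renders one `wpa_supplicant.conf` network block, choosing between a PSK with the default authProtocols list and an open network. SSID and passphrase values are samples, and the real function handles more options (hidden, priority, auth, extraConfig):

```shell
# Hypothetical shell analogue of mkNetwork: renders one wpa_supplicant.conf
# network block. A non-empty PSK selects the default key_mgmt list; an
# empty one produces an open network (key_mgmt=NONE).
mk_network() {
  ssid=$1
  psk=$2
  printf 'network={\n'
  printf '  ssid="%s"\n' "$ssid"
  if [ -n "$psk" ]; then
    printf '  key_mgmt=WPA-PSK WPA-EAP SAE FT-PSK FT-EAP FT-SAE\n'
    printf '  psk="%s"\n' "$psk"
  else
    printf '  key_mgmt=NONE\n'
  fi
  printf '}\n'
}

mk_network "homenet" "correct horse"
```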
# Creates a systemd unit for wpa_supplicant bound to a given (or any) interface
mkUnit = iface:
let
deviceUnit = optional (iface != null) "sys-subsystem-net-devices-${utils.escapeSystemdPath iface}.device";
configStr = if cfg.allowAuxiliaryImperativeNetworks
then "-c /etc/wpa_supplicant.conf -I ${configFile}"
else "-c ${configFile}";
in {
description = "WPA Supplicant instance" + optionalString (iface != null) " for interface ${iface}";
after = deviceUnit;
before = [ "network.target" ];
wants = [ "network.target" ];
requires = deviceUnit;
wantedBy = [ "multi-user.target" ];
stopIfChanged = false;
path = [ package ];
script =
''
if [ -f /etc/wpa_supplicant.conf -a "/etc/wpa_supplicant.conf" != "${configFile}" ]; then
echo >&2 "<3>/etc/wpa_supplicant.conf present but ignored. Generated ${configFile} is used instead."
fi
iface_args="-s ${optionalString cfg.dbusControlled "-u"} -D${cfg.driver} ${configStr}"
${if iface == null then ''
# detect interfaces automatically
# check if there are no wireless interfaces
if ! find -H /sys/class/net/* -name wireless | grep -q .; then
# if so, wait until one appears
echo "Waiting for wireless interfaces"
grep -q '^ACTION=add' < <(stdbuf -oL -- udevadm monitor -s net/wlan -pu)
# Note: the above line has been carefully written:
# 1. The process substitution avoids udevadm hanging (after grep has quit)
# until it tries to write to the pipe again. Not even pipefail works here.
# 2. stdbuf is needed because udevadm output is buffered by default and grep
# may hang until more udev events enter the pipe.
fi
# add any interface found to the daemon arguments
for name in $(find -H /sys/class/net/* -name wireless | cut -d/ -f 5); do
echo "Adding interface $name"
args+="''${args:+ -N} -i$name $iface_args"
done
'' else ''
# add known interface to the daemon arguments
args="-i${iface} $iface_args"
''}
# finally start daemon
exec wpa_supplicant $args
'';
};
systemctl = "/run/current-system/systemd/bin/systemctl";
in {
  options = {
    networking.wireless = {
@ -42,6 +122,10 @@ in {
        description = ''
          The interfaces <command>wpa_supplicant</command> will use. If empty, it will
          automatically use all wireless interfaces.
+         <note><para>
+           A separate wpa_supplicant instance will be started for each interface.
+         </para></note>
        '';
      };
@ -61,6 +145,16 @@ in {
        '';
      };
scanOnLowSignal = mkOption {
type = types.bool;
default = true;
description = ''
Whether to periodically scan for (better) networks when the signal of
the current one is low. This will make roaming between access points
faster, but will consume more power.
'';
};
      networks = mkOption {
        type = types.attrsOf (types.submodule {
          options = {
@ -89,11 +183,52 @@ in {
            '';
          };
authProtocols = mkOption {
default = [
# WPA2 and WPA3
"WPA-PSK" "WPA-EAP" "SAE"
# 802.11r variants of the above
"FT-PSK" "FT-EAP" "FT-SAE"
];
# The list can be obtained by running this command
# awk '
# /^# key_mgmt: /{ run=1 }
# /^#$/{ run=0 }
# /^# [A-Z0-9-]{2,}/{ if(run){printf("\"%s\"\n", $2)} }
# ' /run/current-system/sw/share/doc/wpa_supplicant/wpa_supplicant.conf.example
type = types.listOf (types.enum [
"WPA-PSK"
"WPA-EAP"
"IEEE8021X"
"NONE"
"WPA-NONE"
"FT-PSK"
"FT-EAP"
"FT-EAP-SHA384"
"WPA-PSK-SHA256"
"WPA-EAP-SHA256"
"SAE"
"FT-SAE"
"WPA-EAP-SUITE-B"
"WPA-EAP-SUITE-B-192"
"OSEN"
"FILS-SHA256"
"FILS-SHA384"
"FT-FILS-SHA256"
"FT-FILS-SHA384"
"OWE"
"DPP"
]);
description = ''
The list of authentication protocols accepted by this network.
This corresponds to the <literal>key_mgmt</literal> option in wpa_supplicant.
'';
};
            auth = mkOption {
              type = types.nullOr types.str;
              default = null;
              example = ''
-               key_mgmt=WPA-EAP
                eap=PEAP
                identity="user@example.com"
                password="secret"
@ -200,6 +335,16 @@ in {
          description = "Members of this group can control wpa_supplicant.";
        };
      };
dbusControlled = mkOption {
type = types.bool;
default = lib.length cfg.interfaces < 2;
description = ''
Whether to enable the DBus control interface.
This is only needed when using NetworkManager or connman.
'';
};
      extraConfig = mkOption {
        type = types.str;
        default = "";
@ -223,80 +368,47 @@ in {
    assertions = flip mapAttrsToList cfg.networks (name: cfg: {
      assertion = with cfg; count (x: x != null) [ psk pskRaw auth ] <= 1;
      message = ''options networking.wireless."${name}".{psk,pskRaw,auth} are mutually exclusive'';
-   });
+   }) ++ [
+     {
+       assertion = length cfg.interfaces > 1 -> !cfg.dbusControlled;
+       message =
+         let daemon = if config.networking.networkmanager.enable then "NetworkManager" else
+                      if config.services.connman.enable then "connman" else null;
+             n = toString (length cfg.interfaces);
+         in ''
+           It's not possible to run multiple wpa_supplicant instances with DBus support.
+           Note: you're seeing this error because `networking.wireless.interfaces` has
+           ${n} entries, implying an equal number of wpa_supplicant instances.
+         '' + optionalString (daemon != null) ''
+           You don't need to change `networking.wireless.interfaces` when using ${daemon}:
+           in this case the interfaces will be configured automatically for you.
+         '';
+     }
+   ];

-   environment.systemPackages = [ package ];
-
-   services.dbus.packages = [ package ];

    hardware.wirelessRegulatoryDatabase = true;

+   environment.systemPackages = [ package ];
+   services.dbus.packages = optional cfg.dbusControlled package;

-   # FIXME: start a separate wpa_supplicant instance per interface.
-   systemd.services.wpa_supplicant = let
-     ifaces = cfg.interfaces;
-     deviceUnit = interface: [ "sys-subsystem-net-devices-${utils.escapeSystemdPath interface}.device" ];
-   in {
-     description = "WPA Supplicant";
-     after = lib.concatMap deviceUnit ifaces;
-     before = [ "network.target" ];
-     wants = [ "network.target" ];
-     requires = lib.concatMap deviceUnit ifaces;
-     wantedBy = [ "multi-user.target" ];
-     stopIfChanged = false;
-     path = [ package pkgs.udev ];
-     script = let
-       configStr = if cfg.allowAuxiliaryImperativeNetworks
-         then "-c /etc/wpa_supplicant.conf -I ${configFile}"
-         else "-c ${configFile}";
-     in ''
-       if [ -f /etc/wpa_supplicant.conf -a "/etc/wpa_supplicant.conf" != "${configFile}" ]; then
-         echo >&2 "<3>/etc/wpa_supplicant.conf present but ignored. Generated ${configFile} is used instead."
-       fi
-       iface_args="-s -u -D${cfg.driver} ${configStr}"
-       ${if ifaces == [] then ''
-         # detect interfaces automatically
-         # check if there are no wireless interface
-         if ! find -H /sys/class/net/* -name wireless | grep -q .; then
-           # if so, wait until one appears
-           echo "Waiting for wireless interfaces"
-           grep -q '^ACTION=add' < <(stdbuf -oL -- udevadm monitor -s net/wlan -pu)
-           # Note: the above line has been carefully written:
-           # 1. The process substitution avoids udevadm hanging (after grep has quit)
-           #    until it tries to write to the pipe again. Not even pipefail works here.
-           # 2. stdbuf is needed because udevadm output is buffered by default and grep
-           #    may hang until more udev events enter the pipe.
-         fi
-         # add any interface found to the daemon arguments
-         for name in $(find -H /sys/class/net/* -name wireless | cut -d/ -f 5); do
-           echo "Adding interface $name"
-           args+="''${args:+ -N} -i$name $iface_args"
-         done
-       '' else ''
-         # add known interfaces to the daemon arguments
-         args="${concatMapStringsSep " -N " (i: "-i${i} $iface_args") ifaces}"
-       ''}
-       # finally start daemon
-       exec wpa_supplicant $args
-     '';
-   };
+   systemd.services =
+     if cfg.interfaces == []
+     then { wpa_supplicant = mkUnit null; }
+     else listToAttrs (map (i: nameValuePair "wpa_supplicant-${i}" (mkUnit i)) cfg.interfaces);

-   powerManagement.resumeCommands = ''
-     /run/current-system/systemd/bin/systemctl try-restart wpa_supplicant
-   '';
+   # Restart wpa_supplicant after resuming from sleep
+   powerManagement.resumeCommands = concatStringsSep "\n" (
+     optional (cfg.interfaces == []) "${systemctl} try-restart wpa_supplicant"
+     ++ map (i: "${systemctl} try-restart wpa_supplicant-${i}") cfg.interfaces
+   );

-   # Restart wpa_supplicant when a wlan device appears or disappears.
-   services.udev.extraRules = ''
-     ACTION=="add|remove", SUBSYSTEM=="net", ENV{DEVTYPE}=="wlan", RUN+="/run/current-system/systemd/bin/systemctl try-restart wpa_supplicant.service"
-   '';
+   # Restart wpa_supplicant when a wlan device appears or disappears. This is
+   # only needed when an interface hasn't been specified by the user.
+   services.udev.extraRules = optionalString (cfg.interfaces == []) ''
+     ACTION=="add|remove", SUBSYSTEM=="net", ENV{DEVTYPE}=="wlan", \
+     RUN+="${systemctl} try-restart wpa_supplicant.service"
+   '';
  };

- meta.maintainers = with lib.maintainers; [ globin ];
+ meta.maintainers = with lib.maintainers; [ globin rnhmjoj ];
}


@ -771,7 +771,6 @@ in
        "tmp"
        "assets/javascripts/plugins"
        "public"
-       "plugins"
        "sockets"
      ];
      RuntimeDirectoryMode = 0750;


@ -284,11 +284,22 @@ services.discourse = {
        Ruby dependencies are listed in its
        <filename>plugin.rb</filename> file as function calls to
        <literal>gem</literal>. To construct the corresponding
-       <filename>Gemfile</filename>, run <command>bundle
+       <filename>Gemfile</filename> manually, run <command>bundle
        init</command>, then add the <literal>gem</literal> lines to it
        verbatim.
      </para>
+     <para>
+       Much of the packaging can be done automatically by the
+       <filename>nixpkgs/pkgs/servers/web-apps/discourse/update.py</filename>
+       script - just add the plugin to the <literal>plugins</literal>
+       list in the <function>update_plugins</function> function and run
+       the script:
+       <programlisting language="bash">
+./update.py update-plugins
+       </programlisting>.
+     </para>
      <para>
        Some plugins provide <link
        linkend="module-services-discourse-site-settings">site


@ -281,7 +281,7 @@ in
  createLocalPostgreSQL = databaseActuallyCreateLocally && cfg.database.type == "postgresql";
  createLocalMySQL = databaseActuallyCreateLocally && cfg.database.type == "mysql";

- mySqlCaKeystore = pkgs.runCommandNoCC "mysql-ca-keystore" {} ''
+ mySqlCaKeystore = pkgs.runCommand "mysql-ca-keystore" {} ''
    ${pkgs.jre}/bin/keytool -importcert -trustcacerts -alias MySQLCACert -file ${cfg.database.caCert} -keystore $out -storepass notsosecretpassword -noprompt
  '';
@ -553,7 +553,7 @@ in
  jbossCliScript = pkgs.writeText "jboss-cli-script" (mkJbossScript keycloakConfig');

- keycloakConfig = pkgs.runCommandNoCC "keycloak-config" {
+ keycloakConfig = pkgs.runCommand "keycloak-config" {
    nativeBuildInputs = [ cfg.package ];
  } ''
    export JBOSS_BASE_DIR="$(pwd -P)";


@ -6,7 +6,7 @@ let
  cfg = config.services.node-red;
  defaultUser = "node-red";
  finalPackage = if cfg.withNpmAndGcc then node-red_withNpmAndGcc else cfg.package;
- node-red_withNpmAndGcc = pkgs.runCommandNoCC "node-red" {
+ node-red_withNpmAndGcc = pkgs.runCommand "node-red" {
    nativeBuildInputs = [ pkgs.makeWrapper ];
  }
  ''


@ -8,10 +8,10 @@ let
  tlsConfig = {
    apps.tls.automation.policies = [{
-     issuer = {
+     issuers = [{
        inherit (cfg) ca email;
        module = "acme";
-     };
+     }];
    }];
  };
@ -23,23 +23,28 @@ let
  # merge the TLS config options we expose with the ones originating in the Caddyfile
  configJSON =
-   let tlsConfigMerge = ''
-     {"apps":
-     {"tls":
-     {"automation":
-     {"policies":
-     (if .[0].apps.tls.automation.policies == .[1]?.apps.tls.automation.policies
-      then .[0].apps.tls.automation.policies
-      else (.[0].apps.tls.automation.policies + .[1]?.apps.tls.automation.policies)
-      end)
-     }
-     }
-     }
-     }'';
-   in pkgs.runCommand "caddy-config.json" { } ''
-     ${pkgs.jq}/bin/jq -s '.[0] * ${tlsConfigMerge}' ${adaptedConfig} ${tlsJSON} > $out
-   '';
-in {
+   if cfg.ca != null then
+     let tlsConfigMerge = ''
+       {"apps":
+       {"tls":
+       {"automation":
+       {"policies":
+       (if .[0].apps.tls.automation.policies == .[1]?.apps.tls.automation.policies
+        then .[0].apps.tls.automation.policies
+        else (.[0].apps.tls.automation.policies + .[1]?.apps.tls.automation.policies)
+        end)
+       }
+       }
+       }
+       }'';
+     in
+       pkgs.runCommand "caddy-config.json" { } ''
+         ${pkgs.jq}/bin/jq -s '.[0] * ${tlsConfigMerge}' ${adaptedConfig} ${tlsJSON} > $out
+       ''
+   else
+     adaptedConfig;
+in
+{
  imports = [
    (mkRemovedOptionModule [ "services" "caddy" "agree" ] "this option is no longer necessary for Caddy 2")
  ];
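The merge step above relies on jq's slurp flag (`-s`) and the recursive object-merge operator (`*`) to combine the adapted Caddyfile JSON with the generated TLS fragment. A toy demonstration with invented file contents, guarded in case jq is not installed:

```shell
# Toy demonstration of the jq slurp-merge used for caddy-config.json.
# The JSON payloads are made up; only the jq invocation mirrors the module.
cd "$(mktemp -d)"
cat > adapted.json <<'EOF'
{"apps": {"http": {"servers": {}}}}
EOF
cat > tls.json <<'EOF'
{"apps": {"tls": {"automation": {"policies": [{"issuers": [{"module": "acme"}]}]}}}}
EOF
if command -v jq >/dev/null 2>&1; then
  # -s reads both files into one array; * merges objects recursively,
  # so both the "http" and "tls" apps survive in the result
  jq -s '.[0] * .[1]' adapted.json tls.json > merged.json
  cat merged.json
fi
```

With both inputs present, the output object contains the `http` app from the first file and the `tls` app from the second.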
@ -85,11 +90,24 @@ in {
      '';
    };
resume = mkOption {
default = false;
type = types.bool;
description = ''
Use saved config, if any (and prefer over configuration passed with <option>services.caddy.config</option>).
'';
};
    ca = mkOption {
      default = "https://acme-v02.api.letsencrypt.org/directory";
      example = "https://acme-staging-v02.api.letsencrypt.org/directory";
-     type = types.str;
-     description = "Certificate authority ACME server. The default (Let's Encrypt production server) should be fine for most people.";
+     type = types.nullOr types.str;
+     description = ''
+       Certificate authority ACME server. The default (Let's Encrypt
+       production server) should be fine for most people. Set it to null if
+       you don't want to include any authority (or if you want to write a more
+       fine-grained configuration manually)
+     '';
    };

    email = mkOption {
@ -132,7 +150,7 @@ in {
      startLimitIntervalSec = 14400;
      startLimitBurst = 10;
      serviceConfig = {
-       ExecStart = "${cfg.package}/bin/caddy run --config ${configJSON}";
+       ExecStart = "${cfg.package}/bin/caddy run ${optionalString cfg.resume "--resume"} --config ${configJSON}";
        ExecReload = "${cfg.package}/bin/caddy reload --config ${configJSON}";
        Type = "simple";
        User = cfg.user;


@ -171,6 +171,14 @@ let
        map_hash_max_size ${toString cfg.mapHashMaxSize};
      ''}
${optionalString (cfg.serverNamesHashBucketSize != null) ''
server_names_hash_bucket_size ${toString cfg.serverNamesHashBucketSize};
''}
${optionalString (cfg.serverNamesHashMaxSize != null) ''
server_names_hash_max_size ${toString cfg.serverNamesHashMaxSize};
''}
      # $connection_upgrade is used for websocket proxying
      map $http_upgrade $connection_upgrade {
          default upgrade;
@ -233,7 +241,7 @@ let
      defaultListen =
        if vhost.listen != [] then vhost.listen
        else
-         let addrs = if vhost.listenAddresses != [] then vhost.listenAddreses else (
+         let addrs = if vhost.listenAddresses != [] then vhost.listenAddresses else (
            [ "0.0.0.0" ] ++ optional enableIPv6 "[::0]"
          );
        in
@ -643,6 +651,23 @@ in
        '';
      };
serverNamesHashBucketSize = mkOption {
type = types.nullOr types.ints.positive;
default = null;
description = ''
Sets the bucket size for the server names hash tables. Default
value depends on the processors cache line size.
'';
};
serverNamesHashMaxSize = mkOption {
type = types.nullOr types.ints.positive;
default = null;
description = ''
Sets the maximum size of the server names hash tables.
'';
};
      resolver = mkOption {
        type = types.submodule {
          options = {


@ -553,6 +553,8 @@ in
        apply = toString;
        description = ''
          Index of the default menu item to be booted.
+         Can also be set to "saved", which will make GRUB select
+         the menu item that was used at the last boot.
        '';
      };


@ -85,6 +85,7 @@ my $bootloaderId = get("bootloaderId");
my $forceInstall = get("forceInstall");
my $font = get("font");
my $theme = get("theme");
+my $saveDefault = $defaultEntry eq "saved";
$ENV{'PATH'} = get("path");

die "unsupported GRUB version\n" if $grubVersion != 1 && $grubVersion != 2;
@ -250,6 +251,8 @@ if ($copyKernels == 0) {
my $conf .= "# Automatically generated. DO NOT EDIT THIS FILE!\n";

if ($grubVersion == 1) {
+   # $defaultEntry might be "saved", indicating that we want to use the last selected configuration as default.
+   # Incidentally this is already the correct value for the grub 1 config to achieve this behaviour.
    $conf .= "
        default $defaultEntry
        timeout $timeout
@ -305,6 +308,10 @@ else {
        " . $grubStore->search;
    }
    # FIXME: should use grub-mkconfig.
+   my $defaultEntryText = $defaultEntry;
+   if ($saveDefault) {
+       $defaultEntryText = "\"\${saved_entry}\"";
+   }
    $conf .= "
        " . $grubBoot->search . "
        if [ -s \$prefix/grubenv ]; then
@ -318,11 +325,19 @@ else {
          set next_entry=
          save_env next_entry
          set timeout=1
+         set boot_once=true
        else
-         set default=$defaultEntry
+         set default=$defaultEntryText
          set timeout=$timeout
        fi

+       function savedefault {
+         if [ -z \"\${boot_once}\"]; then
+           saved_entry=\"\${chosen}\"
+           save_env saved_entry
+         fi
+       }

        # Setup the graphics stack for bios and efi systems
        if [ \"\${grub_platform}\" = \"efi\" ]; then
          insmod efi_gop
@ -468,9 +483,16 @@ sub addEntry {
$conf .= " $extraPerEntryConfig\n" if $extraPerEntryConfig; $conf .= " $extraPerEntryConfig\n" if $extraPerEntryConfig;
$conf .= " kernel $xen $xenParams\n" if $xen; $conf .= " kernel $xen $xenParams\n" if $xen;
$conf .= " " . ($xen ? "module" : "kernel") . " $kernel $kernelParams\n"; $conf .= " " . ($xen ? "module" : "kernel") . " $kernel $kernelParams\n";
$conf .= " " . ($xen ? "module" : "initrd") . " $initrd\n\n"; $conf .= " " . ($xen ? "module" : "initrd") . " $initrd\n";
if ($saveDefault) {
$conf .= " savedefault\n";
}
$conf .= "\n";
} else { } else {
$conf .= "menuentry \"$name\" " . ($options||"") . " {\n"; $conf .= "menuentry \"$name\" " . ($options||"") . " {\n";
if ($saveDefault) {
$conf .= " savedefault\n";
}
$conf .= $grubBoot->search . "\n"; $conf .= $grubBoot->search . "\n";
if ($copyKernels == 0) { if ($copyKernels == 0) {
$conf .= $grubStore->search . "\n"; $conf .= $grubStore->search . "\n";
@ -605,6 +627,11 @@ my $efiTarget = getEfiTarget();
# Append entries detected by os-prober # Append entries detected by os-prober
if (get("useOSProber") eq "true") { if (get("useOSProber") eq "true") {
if ($saveDefault) {
# os-prober will read this to determine if "savedefault" should be added to generated entries
$ENV{'GRUB_SAVEDEFAULT'} = "true";
}
my $targetpackage = ($efiTarget eq "no") ? $grub : $grubEfi; my $targetpackage = ($efiTarget eq "no") ? $grub : $grubEfi;
system(get("shell"), "-c", "pkgdatadir=$targetpackage/share/grub $targetpackage/etc/grub.d/30_os-prober >> $tmpFile"); system(get("shell"), "-c", "pkgdatadir=$targetpackage/share/grub $targetpackage/etc/grub.d/30_os-prober >> $tmpFile");
} }


@@ -375,7 +375,7 @@ let
       }
       trap cleanup EXIT
-      tmp=$(mktemp -d initrd-secrets.XXXXXXXXXX)
+      tmp=$(mktemp -d ''${TMPDIR:-/tmp}/initrd-secrets.XXXXXXXXXX)
       ${lib.concatStringsSep "\n" (mapAttrsToList (dest: source:
         let source' = if source == null then dest else toString source; in
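The hunk above stops hard-coding the temporary directory and honors `$TMPDIR`, falling back to `/tmp` when it is unset. A stand-alone shell illustration of the same pattern (the directory name template is just an example):

```shell
# Create the temp dir under $TMPDIR when set, otherwise under /tmp,
# mirroring the updated initrd-secrets snippet.
tmp=$(mktemp -d "${TMPDIR:-/tmp}/initrd-secrets.XXXXXXXXXX")
echo "created $tmp"
rmdir "$tmp"
```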


@@ -32,9 +32,6 @@ in
     assertions = [ {
       assertion = cfg.defaultGatewayWindowSize == null;
       message = "networking.defaultGatewayWindowSize is not supported by networkd.";
-    } {
-      assertion = cfg.vswitches == {};
-      message = "networking.vswitches are not supported by networkd.";
     } {
       assertion = cfg.defaultGateway == null || cfg.defaultGateway.interface == null;
       message = "networking.defaultGateway.interface is not supported by networkd.";


@@ -36,6 +36,14 @@ in
         `<nixpkgs/nixos/modules/virtualisation/google-compute-image.nix>`.
       '';
     };
+
+    virtualisation.googleComputeImage.compressionLevel = mkOption {
+      type = types.int;
+      default = 6;
+      description = ''
+        GZIP compression level of the resulting disk image (1-9).
+      '';
+    };
   };

   #### implementation
@@ -47,7 +55,8 @@ in
       PATH=$PATH:${with pkgs; lib.makeBinPath [ gnutar gzip ]}
       pushd $out
       mv $diskImage disk.raw
-      tar -Szcf nixos-image-${config.system.nixos.label}-${pkgs.stdenv.hostPlatform.system}.raw.tar.gz disk.raw
+      tar -Sc disk.raw | gzip -${toString cfg.compressionLevel} > \
+        nixos-image-${config.system.nixos.label}-${pkgs.stdenv.hostPlatform.system}.raw.tar.gz
       rm $out/disk.raw
       popd
     '';
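The replacement pipeline keeps `tar`'s sparse handling (`-S`) but routes the output through `gzip` so the compression level becomes selectable. A self-contained sketch of that pipeline with a throwaway sparse file (file names are hypothetical, not the module's real output name):

```shell
# Build a sparse dummy disk image, then pack it sparsely and compress at level 6,
# the module's default compressionLevel.
truncate -s 1M disk.raw
tar -Sc disk.raw | gzip -6 > nixos-image.raw.tar.gz
gzip -t nixos-image.raw.tar.gz && echo "archive OK"
```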


@@ -9,7 +9,7 @@ let
   podmanPackage = (pkgs.podman.override { inherit (cfg) extraPackages; });
   # Provides a fake "docker" binary mapping to podman
-  dockerCompat = pkgs.runCommandNoCC "${podmanPackage.pname}-docker-compat-${podmanPackage.version}" {
+  dockerCompat = pkgs.runCommand "${podmanPackage.pname}-docker-compat-${podmanPackage.version}" {
     outputs = [ "out" "man" ];
     inherit (podmanPackage) meta;
   } ''
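The docker-compat derivation above boils down to exposing podman under the name `docker`. The core idea in plain shell (the paths here are hypothetical, not the actual build script):

```shell
# Expose an existing binary under a compatibility name via a symlink,
# the same trick the dockerCompat derivation uses for its $out/bin.
mkdir -p compat/bin
ln -s /run/current-system/sw/bin/podman compat/bin/docker
readlink compat/bin/docker
```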


@@ -259,6 +259,7 @@ in
   miniflux = handleTest ./miniflux.nix {};
   minio = handleTest ./minio.nix {};
   misc = handleTest ./misc.nix {};
+  mod_perl = handleTest ./mod_perl.nix {};
   moinmoin = handleTest ./moinmoin.nix {};
   mongodb = handleTest ./mongodb.nix {};
   moodle = handleTest ./moodle.nix {};
@@ -282,6 +283,7 @@ in
   nat.firewall = handleTest ./nat.nix { withFirewall = true; };
   nat.firewall-conntrack = handleTest ./nat.nix { withFirewall = true; withConntrackHelpers = true; };
   nat.standalone = handleTest ./nat.nix { withFirewall = false; };
+  navidrome = handleTest ./navidrome.nix {};
   ncdns = handleTest ./ncdns.nix {};
   ndppd = handleTest ./ndppd.nix {};
   nebula = handleTest ./nebula.nix {};
@@ -334,7 +336,7 @@ in
   pam-oath-login = handleTest ./pam-oath-login.nix {};
   pam-u2f = handleTest ./pam-u2f.nix {};
   pantheon = handleTest ./pantheon.nix {};
-  paperless = handleTest ./paperless.nix {};
+  paperless-ng = handleTest ./paperless-ng.nix {};
   pdns-recursor = handleTest ./pdns-recursor.nix {};
   peerflix = handleTest ./peerflix.nix {};
   pgjwt = handleTest ./pgjwt.nix {};


@@ -4,7 +4,7 @@
 # 3. replying to that message via email.
 import ./make-test-python.nix (
-  { pkgs, lib, ... }:
+  { pkgs, lib, package ? pkgs.discourse, ... }:
   let
     certs = import ./common/acme/server/snakeoil-certs.nix;
     clientDomain = "client.fake.domain";
@@ -55,7 +55,7 @@ import ./make-test-python.nix (
         services.discourse = {
           enable = true;
-          inherit admin;
+          inherit admin package;
           hostname = discourseDomain;
           sslCertificate = "${certs.${discourseDomain}.cert}";
           sslCertificateKey = "${certs.${discourseDomain}.key}";


@@ -78,6 +78,13 @@ import ./make-test-python.nix (
             'su - test7 -c "SSH_AUTH_SOCK=HOLEY doas env"'
         ):
             raise Exception("failed to exclude SSH_AUTH_SOCK")
+
+    # Test that the doas setuid wrapper precedes the unwrapped version in PATH after
+    # calling doas.
+    # The PATH set by doas is defined in
+    # ../../pkgs/tools/security/doas/0001-add-NixOS-specific-dirs-to-safe-PATH.patch
+    with subtest("recursive calls to doas from subprocesses should succeed"):
+        machine.succeed('doas -u test0 sh -c "doas -u test0 true"')
   '';
 }
)


@@ -33,18 +33,7 @@ import ./make-test-python.nix ({ pkgs, latestKernel ? false, ... } : {
   testScript =
     let
-      hardened-malloc-tests = pkgs.stdenv.mkDerivation {
-        name = "hardened-malloc-tests-${pkgs.graphene-hardened-malloc.version}";
-        src = pkgs.graphene-hardened-malloc.src;
-        buildPhase = ''
-          cd test/simple-memory-corruption
-          make -j4
-        '';
-        installPhase = ''
-          find . -type f -executable -exec install -Dt $out/bin '{}' +
-        '';
-      };
+      hardened-malloc-tests = pkgs.graphene-hardened-malloc.ld-preload-tests;
     in
     ''
       machine.wait_for_unit("multi-user.target")
@@ -107,20 +96,7 @@ import ./make-test-python.nix ({ pkgs, latestKernel ? false, ... } : {
           machine.fail("systemctl kexec")
-      # Test hardened memory allocator
-      def runMallocTestProg(prog_name, error_text):
-          text = "fatal allocator error: " + error_text
-          if not text in machine.fail(
-              "${hardened-malloc-tests}/bin/"
-              + prog_name
-              + " 2>&1"
-          ):
-              raise Exception("Hardened malloc does not work for {}".format(error_text))
       with subtest("The hardened memory allocator works"):
-          runMallocTestProg("double_free_large", "invalid free")
-          runMallocTestProg("unaligned_free_small", "invalid unaligned free")
-          runMallocTestProg("write_after_free_small", "detected write after free")
+          machine.succeed("${hardened-malloc-tests}/bin/run-tests")
     '';
})


@@ -1,6 +1,6 @@
 import ./make-test-python.nix ({ lib, pkgs, ... }:
 let
-  gpgKeyring = (pkgs.runCommandNoCC "gpg-keyring" { buildInputs = [ pkgs.gnupg ]; } ''
+  gpgKeyring = (pkgs.runCommand "gpg-keyring" { buildInputs = [ pkgs.gnupg ]; } ''
     mkdir -p $out
     export GNUPGHOME=$out
     cat > foo <<EOF


@@ -31,8 +31,12 @@ with pkgs; {
   linux_4_19 = makeKernelTest "4.19" linuxPackages_4_19;
   linux_5_4 = makeKernelTest "5.4" linuxPackages_5_4;
   linux_5_10 = makeKernelTest "5.10" linuxPackages_5_10;
-  linux_5_12 = makeKernelTest "5.12" linuxPackages_5_12;
   linux_5_13 = makeKernelTest "5.13" linuxPackages_5_13;
+
+  linux_hardened_4_14 = makeKernelTest "4.14" linuxPackages_4_14_hardened;
+  linux_hardened_4_19 = makeKernelTest "4.19" linuxPackages_4_19_hardened;
+  linux_hardened_5_4 = makeKernelTest "5.4" linuxPackages_5_4_hardened;
+  linux_hardened_5_10 = makeKernelTest "5.10" linuxPackages_5_10_hardened;
   linux_testing = makeKernelTest "testing" linuxPackages_testing;
 }


@@ -0,0 +1,53 @@
import ./make-test-python.nix ({ pkgs, lib, ... }: {
name = "mod_perl";
meta = with pkgs.lib.maintainers; {
maintainers = [ sgo ];
};
machine = { config, lib, pkgs, ... }: {
services.httpd = {
enable = true;
adminAddr = "admin@localhost";
virtualHosts."modperl" =
let
inc = pkgs.writeTextDir "ModPerlTest.pm" ''
package ModPerlTest;
use strict;
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK);
sub handler {
my $r = shift;
$r->content_type('text/plain');
print "Hello mod_perl!\n";
return Apache2::Const::OK;
}
1;
'';
startup = pkgs.writeScript "startup.pl" ''
use lib "${inc}",
split ":","${with pkgs.perl.pkgs; makeFullPerlPath ([ mod_perl2 ])}";
1;
'';
in
{
extraConfig = ''
PerlRequire ${startup}
'';
locations."/modperl" = {
extraConfig = ''
SetHandler perl-script
PerlResponseHandler ModPerlTest
'';
};
};
enablePerl = true;
};
};
testScript = { ... }: ''
machine.wait_for_unit("httpd.service")
response = machine.succeed("curl -fvvv -s http://127.0.0.1:80/modperl")
assert "Hello mod_perl!" in response, "/modperl handler did not respond"
'';
})


@@ -0,0 +1,12 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "navidrome";
machine = { ... }: {
services.navidrome.enable = true;
};
testScript = ''
machine.wait_for_unit("navidrome")
machine.wait_for_open_port("4533")
'';
})


@@ -0,0 +1,36 @@
import ./make-test-python.nix ({ lib, ... }: {
name = "paperless-ng";
meta.maintainers = with lib.maintainers; [ earvstedt Flakebi ];
nodes.machine = { pkgs, ... }: {
environment.systemPackages = with pkgs; [ imagemagick jq ];
services.paperless-ng = {
enable = true;
passwordFile = builtins.toFile "password" "admin";
};
virtualisation.memorySize = 1024;
};
testScript = ''
machine.wait_for_unit("paperless-ng-consumer.service")
with subtest("Create test doc"):
machine.succeed(
"convert -size 400x40 xc:white -font 'DejaVu-Sans' -pointsize 20 -fill black "
"-annotate +5+20 'hello world 16-10-2005' /var/lib/paperless/consume/doc.png"
)
with subtest("Web interface gets ready"):
machine.wait_for_unit("paperless-ng-web.service")
# Wait until server accepts connections
machine.wait_until_succeeds("curl -fs localhost:28981")
with subtest("Document is consumed"):
machine.wait_until_succeeds(
"(($(curl -u admin:admin -fs localhost:28981/api/documents/ | jq .count) == 1))"
)
assert "2005-10-16" in machine.succeed(
"curl -u admin:admin -fs localhost:28981/api/documents/ | jq '.results | .[0] | .created'"
)
'';
})


@@ -1,36 +0,0 @@
import ./make-test-python.nix ({ lib, ... } : {
name = "paperless";
meta = with lib.maintainers; {
maintainers = [ earvstedt ];
};
machine = { pkgs, ... }: {
environment.systemPackages = with pkgs; [ imagemagick jq ];
services.paperless = {
enable = true;
ocrLanguages = [ "eng" ];
};
};
testScript = ''
machine.wait_for_unit("paperless-consumer.service")
# Create test doc
machine.succeed(
"convert -size 400x40 xc:white -font 'DejaVu-Sans' -pointsize 20 -fill black -annotate +5+20 'hello world 16-10-2005' /var/lib/paperless/consume/doc.png"
)
with subtest("Service gets ready"):
machine.wait_for_unit("paperless-server.service")
# Wait until server accepts connections
machine.wait_until_succeeds("curl -fs localhost:28981")
with subtest("Test document is consumed"):
machine.wait_until_succeeds(
"(($(curl -fs localhost:28981/api/documents/ | jq .count) == 1))"
)
assert "2005-10-16" in machine.succeed(
"curl -fs localhost:28981/api/documents/ | jq '.results | .[0] | .created'"
)
'';
})


@@ -162,7 +162,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
     pleroma_ctl user new jamy jamy@nixos.test --password 'jamy-password' --moderator --admin -y
   '';
-  tls-cert = pkgs.runCommandNoCC "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
+  tls-cert = pkgs.runCommand "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
     openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes -subj '/CN=pleroma.nixos.test' -days 36500
     mkdir -p $out
     cp key.pem cert.pem $out


@@ -43,7 +43,7 @@ let
       return EXIT_SUCCESS;
     }
   '';
-  in pkgs.runCommandNoCC "mpitest" {} ''
+  in pkgs.runCommand "mpitest" {} ''
     mkdir -p $out/bin
     ${pkgs.openmpi}/bin/mpicc ${mpitestC} -o $out/bin/mpitest
   '';


@@ -33,7 +33,7 @@ import ./make-test-python.nix ({ pkgs, lib, ... }:
     };
   };
-  cert = pkgs.runCommandNoCC "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
+  cert = pkgs.runCommand "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
     openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes -subj '/CN=dns.example.local'
     mkdir -p $out
     cp key.pem cert.pem $out


@@ -1,5 +1,5 @@
 let
-  cert = pkgs: pkgs.runCommandNoCC "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
+  cert = pkgs: pkgs.runCommand "selfSignedCerts" { buildInputs = [ pkgs.openssl ]; } ''
     openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -nodes -subj '/CN=example.com/CN=uploads.example.com/CN=conference.example.com' -days 36500
     mkdir -p $out
     cp key.pem cert.pem $out


@@ -4,13 +4,13 @@ let
   py = python3Packages;
 in py.buildPythonApplication rec {
   pname = "friture";
-  version = "unstable-2020-02-16";
+  version = "0.47";
   src = fetchFromGitHub {
     owner = "tlecomte";
     repo = pname;
-    rev = "4460b4e72a9c55310d6438f294424b5be74fc0aa";
-    sha256 = "1pmxzq78ibifby3gbir1ah30mgsqv0y7zladf5qf3sl5r1as0yym";
+    rev = "v${version}";
+    sha256 = "1qcsvmgdz9hhv5gaa918147wvng6manc4iq8ci6yr761ljqrgwjx";
   };
   nativeBuildInputs = (with py; [ numpy cython scipy ]) ++


@@ -1,13 +0,0 @@
diff --git a/friture/filter_design.py b/friture/filter_design.py
index 9876c43..1cc749a 100644
--- a/friture/filter_design.py
+++ b/friture/filter_design.py
@@ -2,7 +2,7 @@
from numpy import pi, exp, arange, cos, sin, sqrt, zeros, ones, log, arange, set_printoptions
# the three following lines are a workaround for a bug with scipy and py2exe
# together. See http://www.pyinstaller.org/ticket/83 for reference.
-from scipy.misc import factorial
+from scipy.special import factorial
import scipy
scipy.factorial = factorial


@@ -1,34 +1,34 @@
 diff --git a/setup.py b/setup.py
-index f31eeec..ac0927b 100644
+index 4092388..6cb7dac 100644
 --- a/setup.py
 +++ b/setup.py
 @@ -50,19 +50,19 @@ ext_modules = [LateIncludeExtension("friture_extensions.exp_smoothing_conv",
  # these will be installed when calling 'pip install friture'
  # they are also retrieved by 'requirements.txt'
  install_requires = [
--    "sounddevice==0.3.14",
+-    "sounddevice==0.4.2",
--    "rtmixer==0.1.0",
+-    "rtmixer==0.1.3",
--    "PyOpenGL==3.1.4",
+-    "PyOpenGL==3.1.5",
--    "PyOpenGL-accelerate==3.1.4",
+-    "PyOpenGL-accelerate==3.1.5",
--    "docutils==0.15.2",
+-    "docutils==0.17.1",
--    "numpy==1.17.4",
+-    "numpy==1.21.1",
--    "PyQt5==5.13.2",
+-    "PyQt5==5.15.4",
--    "appdirs==1.4.3",
+-    "appdirs==1.4.4",
 -    "pyrr==0.10.3",
-+    "sounddevice>=0.3.14",
++    "sounddevice>=0.4.1",
-+    "rtmixer>=0.1.0",
++    "rtmixer>=0.1.1",
 +    "PyOpenGL>=3.1.4",
-+    "PyOpenGL-accelerate>=3.1.4",
++    "PyOpenGL-accelerate>=3.1.5",
-+    "docutils>=0.15.2",
++    "docutils>=0.17.1",
-+    "numpy>=1.17.4",
++    "numpy>=1.20.3",
-+    "PyQt5>=5.13.2",
++    "PyQt5>=5.15.4",
-+    "appdirs>=1.4.3",
++    "appdirs>=1.4.4",
 +    "pyrr>=0.10.3",
  ]
  # Cython and numpy are needed when running setup.py, to build extensions
--setup_requires=["numpy==1.17.4", "Cython==0.29.14"]
+-setup_requires=["numpy==1.21.1", "Cython==0.29.24"]
-+setup_requires=["numpy>=1.17.4", "Cython>=0.29.14"]
++setup_requires=["numpy>=1.20.3", "Cython>=0.29.22"]
 with open(join(dirname(__file__), 'README.rst')) as f:
     long_description = f.read()


@@ -18,11 +18,11 @@
 mkDerivation rec {
   pname = "hqplayer-desktop";
-  version = "4.12.2-36";
+  version = "4.13.1-38";
   src = fetchurl {
     url = "https://www.signalyst.eu/bins/hqplayer/fc34/hqplayer4desktop-${version}.fc34.x86_64.rpm";
-    sha256 = "sha256-ng0Tkx6CSnzTxuunStaBhUYjxUmzx31ZaOY2gBWnH6Q=";
+    sha256 = "sha256-DEZWEGk5SfhcNQddehCBVbfeTH8KfVCdaxQ+F3MrRe8=";
   };
   unpackPhase = ''


@@ -28,11 +28,11 @@
 stdenv.mkDerivation rec {
   pname = "kid3";
-  version = "3.8.6";
+  version = "3.8.7";
   src = fetchurl {
     url = "https://download.kde.org/stable/${pname}/${version}/${pname}-${version}.tar.xz";
-    hash = "sha256-R4gAWlCw8RezhYbw1XDo+wdp797IbLoM3wqHwr+ul6k=";
+    sha256 = "sha256-Dr+NLh5ajG42jRKt1Swq6mccPfuAXRvhhoTNuO8lnI0=";
   };
   nativeBuildInputs = [


@@ -2,11 +2,11 @@
 python3Packages.buildPythonApplication rec {
   pname = "Mopidy-Iris";
-  version = "3.54.0";
+  version = "3.58.0";
   src = python3Packages.fetchPypi {
     inherit pname version;
-    sha256 = "0qnshn77dv7fl6smwnpnbq67mbc1vic9gf85skiqnqy8v8w5829f";
+    sha256 = "1bsmc4p7b6v4mm8fi9zsy0knzdccnz1dc6ckrdr18kw2ji0hiyx2";
   };
   propagatedBuildInputs = [


@@ -2,11 +2,11 @@
 pythonPackages.buildPythonApplication rec {
   pname = "mopidy-spotify";
-  version = "4.0.1";
+  version = "4.1.1";
   src = fetchurl {
     url = "https://github.com/mopidy/mopidy-spotify/archive/v${version}.tar.gz";
-    sha256 = "1ac8r8050i5r3ag1hlblbcyskqjqz7wgamndbzsmw52qi6hxk44f";
+    sha256 = "0054gqvnx3brpfxr06dcby0z0dirwv9ydi6gj5iz0cxn0fbi6gv2";
   };
   propagatedBuildInputs = [ mopidy pythonPackages.pyspotify ];


@@ -6,13 +6,13 @@
 python3Packages.buildPythonApplication rec {
   pname = "mpdevil";
-  version = "1.1.1";
+  version = "1.3.0";
   src = fetchFromGitHub {
     owner = "SoongNoonien";
     repo = pname;
     rev = "v${version}";
-    sha256 = "0l7mqv7ys05al2hds4icb32hf14fqi3n7b0f5v1yx54cbl9cqfap";
+    sha256 = "1wa5wkkv8kvzlxrhqmmhjmrzcm5v2dij516dk4vlpv9sazc6gzkm";
   };
   nativeBuildInputs = [


@@ -13,13 +13,13 @@
 stdenv.mkDerivation rec {
   pname = "mympd";
-  version = "7.0.2";
+  version = "8.0.3";
   src = fetchFromGitHub {
     owner = "jcorporation";
     repo = "myMPD";
     rev = "v${version}";
-    sha256 = "sha256-2V3LbgnJfTIO71quZ+hfLnw/lNLYxXt19jw2Od6BVvM=";
+    sha256 = "sha256-J37PH+yRSsPeNCdY2mslrjMoBwutm5xTSIt+TWyf21M=";
   };
   nativeBuildInputs = [ pkg-config cmake ];


@@ -8,13 +8,13 @@
 stdenv.mkDerivation rec {
   pname = "pt2-clone";
-  version = "1.31";
+  version = "1.32";
   src = fetchFromGitHub {
     owner = "8bitbubsy";
     repo = "pt2-clone";
     rev = "v${version}";
-    sha256 = "sha256-hIm9HWKBTFmxU9jI41PfScZIHpZOZpjvV2jgaMX/KSg=";
+    sha256 = "sha256-U1q4xCOzV7n31WgCTGlEXvZaUT/TP797cOAHkecQaLo=";
   };
   nativeBuildInputs = [ cmake ];


@@ -6,13 +6,13 @@
 stdenv.mkDerivation rec {
   pname = "scream";
-  version = "3.7";
+  version = "3.8";
   src = fetchFromGitHub {
     owner = "duncanthrax";
     repo = pname;
     rev = version;
-    sha256 = "0d9abrw62cd08lcg4il415b7ap89iggbljvbl5jqv2y23il0pvyz";
+    sha256 = "sha256-7UzwEoZujTN8i056Wf+0QtjyU+/UZlqcSompiAGHT54=";
   };
   buildInputs = lib.optional pulseSupport libpulseaudio


@@ -3,12 +3,12 @@
 , libGLU, lv2, gtk2, cairo, pango, fftwFloat, zita-convolver }:
 stdenv.mkDerivation rec {
-  version = "20210114";
+  version = "20210714";
   pname = "x42-plugins";
   src = fetchurl {
     url = "https://gareus.org/misc/x42-plugins/${pname}-${version}.tar.xz";
-    sha256 = "sha256-xUiA/k5ZbI/SkY8a20FsyRwqPxxMteiFdEhFF/8e2OA=";
+    sha256 = "sha256-X389bA+cf3N5eJpAlpDn/CJQ6xM4qzrBQ47fYPIyIHk=";
   };
   nativeBuildInputs = [ pkg-config ];


@@ -1,12 +1,12 @@
 { lib, stdenv, fetchurl, libjack2, zita-resampler }:
 stdenv.mkDerivation rec {
-  version = "0.4.4";
+  version = "0.4.8";
   pname = "zita-njbridge";
   src = fetchurl {
     url = "https://kokkinizita.linuxaudio.org/linuxaudio/downloads/${pname}-${version}.tar.bz2";
-    sha256 = "1l8rszdjhp0gq7mr54sdgfs6y6cmw11ssmqb1v9yrkrz5rmwzg8j";
+    sha256 = "sha256-EBF2oL1AfKt7/9Mm6NaIbBtlshK8M/LvuXsD+SbEeQc=";
   };
   buildInputs = [ libjack2 zita-resampler ];


@@ -15,13 +15,13 @@ in
 stdenv.mkDerivation rec {
   pname = "btcpayserver";
-  version = "1.1.2";
+  version = "1.2.0";
   src = fetchFromGitHub {
     owner = pname;
     repo = pname;
     rev = "v${version}";
-    sha256 = "sha256-A9XIKCw1dL4vUQYSu6WdmpR82dAbtKVTyjllquyRGgs=";
+    sha256 = "sha256-pRc0oud8k6ulC6tVXv6Mr7IEC2a/+FhkMDyxz1zFKTE=";
   };
   nativeBuildInputs = [ dotnetSdk dotnetPackages.Nuget makeWrapper ];


@@ -26,53 +26,48 @@
   })
   (fetchNuGet {
     name = "BTCPayServer.Hwi";
-    version = "1.1.3";
-    sha256 = "1c8hfnrjh2ad8qh75d63gsl170q8czf3j1hk8sv8fnbgnxdnkm7a";
+    version = "2.0.1";
+    sha256 = "18pp3f0z10c0q1bbllxi2j6ix8f0x58d0dndi5faf9p3hb58ly9k";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.All";
-    version = "1.2.7";
-    sha256 = "0jzmzvlpf6iba2fsc6cyi69vlaim9slqm2sapknmd7drl3gcn2zj";
+    version = "1.2.10";
+    sha256 = "0c3bi5r7sckzml44bqy0j1cd6l3xc29cdyf6rib52b5gmgrvcam2";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.Charge";
-    version = "1.2.3";
-    sha256 = "1rdrwmijx0v4z0xsq4acyvdcj7hv6arfh3hwjy89rqnkkznrzgwv";
+    version = "1.2.5";
+    sha256 = "02mf7yhr9lfy5368c5mn1wgxxka52f0s5vx31w97sdkpc5pivng5";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.CLightning";
-    version = "1.2.3";
-    sha256 = "02197rh03q8d0mv40zf67wp1rd2gbxi5l8krd2rzj84n267bcfvc";
+    version = "1.2.6";
+    sha256 = "1p4bzbrd2d0izjd9q06mnagl31q50hpz5jla9gfja1bhn3xqvwsy";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.Common";
-    version = "1.2.0";
-    sha256 = "17di8ndkw8z0ci0zk15mcrqpmganwkz9ys2snr2rqpw5mrlhpwa0";
-  })
-  (fetchNuGet {
-    name = "BTCPayServer.Lightning.Common";
-    version = "1.2.2";
-    sha256 = "07xb7fsqvfjmcawxylriw60i73h0cvfb765aznhp9ffyrmjaql7z";
+    version = "1.2.4";
+    sha256 = "1bdj1cdf6sirwm19hq1k2fmh2jiqkcyzrqms6q9d0wqba9xggwyn";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.Eclair";
-    version = "1.2.2";
-    sha256 = "03dymhwxb5s28kb187g5h4aysnz2xzml89p47nmwz9lkg2h4s73h";
+    version = "1.2.4";
+    sha256 = "1l68sc9g4ffsi1bbgrbbx8zmqw811hjq17761q1han9gsykl5rr1";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.LND";
-    version = "1.2.4";
-    sha256 = "0qnj5rsp6hnybsr58zny9dfbsxksg1674q0z9944jwkzm7pcqyg4";
+    version = "1.2.6";
+    sha256 = "16wipkzzfrcjhi3whqxdfjq7qxnwjzf4gckpf1qjgdxbzggh6l3d";
   })
   (fetchNuGet {
     name = "BTCPayServer.Lightning.Ptarmigan";
-    version = "1.2.2";
-    sha256 = "17yl85vqfp7l12bv3f3w1b861hm41i7cfhs78gaq04s4drvcnj6k";
+    version = "1.2.4";
+    sha256 = "1j80m4pb3nn4dnqmxda13lp87pgviwxai456pki097rmc0vmqj83";
   })
   (fetchNuGet {
     name = "BuildBundlerMinifier";
-    version = "3.2.435";
-    sha256 = "0y1p226dbvs7q2ngm9w4mpkhfrhw2y122plv1yff7lx5m84ia02l";
+    version = "3.2.449";
+    sha256 = "1dcjlfl5w2vfppx2hq3jj6xy24id2x3hcajwylhphlz9jw2bnhsv";
   })
   (fetchNuGet {
     name = "BundlerMinifier.Core";
@@ -761,18 +756,8 @@
   })
   (fetchNuGet {
     name = "NBitcoin.Altcoins";
-    version = "2.0.31";
-    sha256 = "13gcfsxpfq8slmsvgzf6iv581x7n535zq0p9c88bqs5p88r6lygm";
-  })
-  (fetchNuGet {
-    name = "NBitcoin";
-    version = "5.0.33";
-    sha256 = "030q609b9lhapq4wfl1w3impjw5m40kz2rg1s9jn3bn8yjfmsi4a";
-  })
-  (fetchNuGet {
-    name = "NBitcoin";
-    version = "5.0.4";
-    sha256 = "04iafda61izzxb691brk72qs01m5dadqb4970nw5ayck6275s71i";
+    version = "3.0.3";
+    sha256 = "0129mgnyyb55haz68d8z694g1q2rlc0qylx08d5qnfpq1r03cdqd";
   })
   (fetchNuGet {
     name = "NBitcoin";
@@ -786,13 +771,18 @@
   })
   (fetchNuGet {
     name = "NBitcoin";
-    version = "5.0.73";
-    sha256 = "0vqgcb0ws5fnkrdzqfkyh78041c6q4l22b93rr0006dd4bmqrmg1";
+    version = "5.0.81";
+    sha256 = "1fba94kc8yzykb1m5lvpx1hm63mpycpww9cz5zfp85phs1spdn8x";
   })
   (fetchNuGet {
     name = "NBitcoin";
-    version = "5.0.77";
-    sha256 = "0ykz4ii6lh6gdlz6z264wnib5pfnmq9q617qqbg0f04mq654jygb";
+    version = "6.0.3";
+    sha256 = "1kfq1q86844ssp8myy5vmvg33h3x0p9gqrlc99fl9gm1vzjc723f";
+  })
+  (fetchNuGet {
+    name = "NBitcoin";
+    version = "6.0.7";
+    sha256 = "0mk8n8isrrww0240x63rx3zx12nz5v08i3w62qp1n18mmdw3rdy6";
   })
   (fetchNuGet {
     name = "NBitpayClient";
@@ -801,8 +791,8 @@
   })
   (fetchNuGet {
     name = "NBXplorer.Client";
-    version = "3.0.21";
-    sha256 = "1asri2wsjq3ljf2p4r4x52ba9cirh8ccc5ysxpnv4cvladkdazbi";
+    version = "4.0.3";
+    sha256 = "0x9iggc5cyv06gnwnwrk3riv2j3g0833imdf3jx8ghmrxvim88b3";
   })
   (fetchNuGet {
     name = "Nethereum.ABI";
@@ -1116,8 +1106,8 @@
   })
   (fetchNuGet {
     name = "Selenium.WebDriver.ChromeDriver";
-    version = "88.0.4324.9600";
-    sha256 = "0jm8dpfp329xsrg69lzq2m6x9yin1m43qgrhs15cz2qx9f02pdx9";
+    version = "90.0.4430.2400";
+    sha256 = "18gjm92nzzvxf0hk7c0nnabs0vmh6yyzq3m4si7p21m6xa3bqiga";
   })
   (fetchNuGet {
     name = "Selenium.WebDriver";


@@ -2,16 +2,16 @@
 buildGoModule rec {
   pname = "erigon";
-  version = "2021.08.01";
+  version = "2021.08.02";
   src = fetchFromGitHub {
     owner = "ledgerwatch";
     repo = pname;
     rev = "v${version}";
-    sha256 = "sha256-fjMkCCeQa/IHB4yXlL7Qi8J9wtZm90l3xIA72LeoW8M=";
+    sha256 = "sha256-pyqvzpsDk24UEtSx4qmDew9zRK45pD5i4Qv1uJ03tmk=";
   };
-  vendorSha256 = "1vsgd19an592dblm9afasmh8cd0x2frw5pvnxkxd2fikhy2mibbs";
+  vendorSha256 = "sha256-FwKlQH8vEtWNDql1pmHzKneIwmJ7cg5LYkETVswO6pc=";
   runVend = true;
   # Build errors in mdbx when format hardening is enabled:

View file

@@ -9,17 +9,17 @@ let
 in buildGoModule rec {
   pname = "go-ethereum";
-  version = "1.10.6";
+  version = "1.10.7";

   src = fetchFromGitHub {
     owner = "ethereum";
     repo = pname;
     rev = "v${version}";
-    sha256 = "sha256-4lapkoxSKdXlD6rmUxnlSKrfH+DeV6/wV05CqJjuzjA=";
+    sha256 = "sha256-P0+XPSpvVsjia21F3FIg7KO6Qe2ZbY90tM/dRwBBuBk=";
   };

   runVend = true;
-  vendorSha256 = "sha256-5qi01y0SIEI0WRYu2I2RN94QFS8rrlioFvnRqqp6wtk=";
+  vendorSha256 = "sha256-51jt5oBb/3avZnDRfo/NKAtZAU6QBFkzNdVxFnJ+erM=";

   doCheck = false;

View file

@@ -5,16 +5,16 @@
 buildGoModule rec {
   pname = "lightning-loop";
-  version = "0.14.2-beta";
+  version = "0.15.0-beta";

   src = fetchFromGitHub {
     owner = "lightninglabs";
     repo = "loop";
     rev = "v${version}";
-    sha256 = "02ndln0n5k2pin9pngjcmn3wak761ml923111fyqb379zcfscrfv";
+    sha256 = "1yjc04jiam3836w7vn3b1jqj1dq1k8wwfnccir0vh29cn6v0cf63";
   };

-  vendorSha256 = "1izdd9i4bqzmwagq0ilz2s37jajvzf1xwx3hmmbd1k3ss7mjm72r";
+  vendorSha256 = "0c3ly0s438sr9iql2ps4biaswphp7dfxshddyw5fcm0ajqzvhrmw";

   subPackages = [ "cmd/loop" "cmd/loopd" ];

View file

@@ -2,13 +2,13 @@
 python3Packages.buildPythonApplication rec {
   pname = "lndmanage";
-  version = "0.12.0";
+  version = "0.13.0";

   src = fetchFromGitHub {
     owner = "bitromortac";
     repo = pname;
     rev = "v${version}";
-    sha256 = "1p73wdxv3fca2ga4nqpjk5lig7bj2v230lh8niw490p5y7hhnggl";
+    sha256 = "1vnv03k2d11rw6mry6fmspiy3hqsza8y3daxnn4lp038gw1y0f4z";
   };

   propagatedBuildInputs = with python3Packages; [

View file

@@ -3,14 +3,14 @@
 with lib;

 stdenv.mkDerivation rec {
-  version = "nc0.20.1";
+  version = "nc0.21.1";
   name = "namecoin" + toString (optional (!withGui) "d") + "-" + version;

   src = fetchFromGitHub {
     owner = "namecoin";
     repo = "namecoin-core";
     rev = version;
-    sha256 = "1wpfp9y95lmfg2nk1xqzchwck1wk6gwkya1rj07mf5in9jngxk9z";
+    sha256 = "sha256-dA4BGhxHm0EdvqMq27zzWp2vOPyKbCgV1i1jt17TVxU=";
   };

   nativeBuildInputs = [

View file

@@ -15,13 +15,13 @@ in
 stdenv.mkDerivation rec {
   pname = "nbxplorer";
-  version = "2.1.52";
+  version = "2.1.58";

   src = fetchFromGitHub {
     owner = "dgarage";
     repo = "NBXplorer";
     rev = "v${version}";
-    sha256 = "sha256-+BP71TQ8BTGZ/SbS7CrI4D7hcQaVLt+hCpInbOdU5GY=";
+    sha256 = "sha256-rhD0owLEx7WxZnGPNaq4QpZopMsFQDOTnA0fs539Wxg=";
   };

   nativeBuildInputs = [ dotnetSdk dotnetPackages.Nuget makeWrapper ];

View file

@@ -181,23 +181,23 @@
   })
   (fetchNuGet {
     name = "NBitcoin.Altcoins";
-    version = "2.0.33";
-    sha256 = "12r4w89247xzrl2g01iv13kg1wl7gzfz1zikimx6dyhr4iipbmgf";
+    version = "3.0.3";
+    sha256 = "0129mgnyyb55haz68d8z694g1q2rlc0qylx08d5qnfpq1r03cdqd";
   })
   (fetchNuGet {
     name = "NBitcoin.TestFramework";
-    version = "2.0.23";
-    sha256 = "03jw3gay7brm7s7jwn4zbk1n1sq7gck523cx3ckx87v3wi2062lx";
+    version = "3.0.3";
+    sha256 = "1j3ajj4jrwqzlhzhkg7vicwab0aq2y50x53rindd8cq09jxvzk62";
   })
   (fetchNuGet {
     name = "NBitcoin";
-    version = "5.0.78";
-    sha256 = "1mfn045l489bm2xgjhvddhfy4xxcy42q6jhq4nyd6fnxg4scxyg9";
+    version = "6.0.6";
+    sha256 = "1kf2rjrnh97zlh00affsv95f94bwgr2h7b00njqac4qgv9cac7sa";
   })
   (fetchNuGet {
     name = "NBitcoin";
-    version = "5.0.81";
-    sha256 = "1fba94kc8yzykb1m5lvpx1hm63mpycpww9cz5zfp85phs1spdn8x";
+    version = "6.0.8";
+    sha256 = "1f90zyrd35fzx0vgvd83jhd6hczd4037h2k198xiyxj04l4m3wm5";
   })
   (fetchNuGet {
     name = "NETStandard.Library";

View file

@@ -19,7 +19,7 @@ in stdenv.mkDerivation rec {
   yarnCache = stdenv.mkDerivation {
     name = "${pname}-${version}-${system}-yarn-cache";
     inherit src;
-    phases = [ "unpackPhase" "buildPhase" ];
+    dontInstall = true;
     nativeBuildInputs = [ yarn ];
     buildPhase = ''
       export HOME=$NIX_BUILD_ROOT

View file

@@ -1,5 +1,5 @@
 { lib, stdenv, makeDesktopItem, freetype, fontconfig, libX11, libXrender
-, zlib, jdk, glib, gtk, libXtst, gsettings-desktop-schemas, webkitgtk
+, zlib, jdk, glib, gtk, libXtst, libsecret, gsettings-desktop-schemas, webkitgtk
 , makeWrapper, perl, ... }:

 { name, src ? builtins.getAttr stdenv.hostPlatform.system sources, sources ? null, description }:
@@ -19,7 +19,7 @@ stdenv.mkDerivation rec {
   buildInputs = [
     fontconfig freetype glib gsettings-desktop-schemas gtk jdk libX11
-    libXrender libXtst makeWrapper zlib
+    libXrender libXtst libsecret makeWrapper zlib
   ] ++ lib.optional (webkitgtk != null) webkitgtk;

   buildCommand = ''
@@ -41,7 +41,7 @@ stdenv.mkDerivation rec {
     makeWrapper $out/eclipse/eclipse $out/bin/eclipse \
       --prefix PATH : ${jdk}/bin \
-      --prefix LD_LIBRARY_PATH : ${lib.makeLibraryPath ([ glib gtk libXtst ] ++ lib.optional (webkitgtk != null) webkitgtk)} \
+      --prefix LD_LIBRARY_PATH : ${lib.makeLibraryPath ([ glib gtk libXtst libsecret ] ++ lib.optional (webkitgtk != null) webkitgtk)} \
       --prefix XDG_DATA_DIRS : "$GSETTINGS_SCHEMAS_PATH" \
       --add-flags "-configuration \$HOME/.eclipse/''${productId}_$productVersion/configuration"

Some files were not shown because too many files have changed in this diff.