Project import generated by Copybara.
GitOrigin-RevId: 32b8ed738096bafb4cdb7f70347a0f63f9f40151
This commit is contained in: parent 11064ea484, commit 66604c92c1
1769 changed files with 35786 additions and 20755 deletions
7
third_party/nixpkgs/.github/CODEOWNERS
vendored
|
@ -55,6 +55,13 @@
|
|||
# NixOS integration test driver
|
||||
/nixos/lib/test-driver @tfc
|
||||
|
||||
# Updaters
|
||||
## update.nix
|
||||
/maintainers/scripts/update.nix @jtojnar
|
||||
/maintainers/scripts/update.py @jtojnar
|
||||
## common-updater-scripts
|
||||
/pkgs/common-updater/scripts/update-source-version @jtojnar
|
||||
|
||||
# Python-related code and docs
|
||||
/maintainers/scripts/update-python-libraries @FRidh
|
||||
/pkgs/top-level/python-packages.nix @FRidh @jonringer
|
||||
|
|
|
@ -15,7 +15,7 @@ Reviewing guidelines: https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/do
|
|||
|
||||
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
|
||||
|
||||
- [ ] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
|
||||
- [ ] Tested using sandboxing ([nix.useSandbox](https://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](https://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
|
||||
- Built on platform(s)
|
||||
- [ ] NixOS
|
||||
- [ ] macOS
|
||||
|
|
|
@ -111,7 +111,7 @@
|
|||
</para>
|
||||
<para>
|
||||
The exact syntax and semantics of the Nix expression language, including the built-in functions, are described in the Nix manual in the <link
|
||||
xlink:href="http://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter on writing Nix expressions</link>.
|
||||
xlink:href="https://hydra.nixos.org/job/nix/trunk/tarball/latest/download-by-type/doc/manual/#chap-writing-nix-expressions">chapter on writing Nix expressions</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
|
|
96
third_party/nixpkgs/doc/languages-frameworks/agda.section.md
vendored
Normal file
|
@ -0,0 +1,96 @@
|
|||
---
|
||||
title: Agda
|
||||
author: Alex Rice (alexarice)
|
||||
date: 2020-01-06
|
||||
---
|
||||
# Agda
|
||||
|
||||
## How to use Agda
|
||||
|
||||
Agda can be installed from `agda`:
|
||||
```
|
||||
$ nix-env -iA agda
|
||||
```
|
||||
|
||||
To use agda with libraries, the `agda.withPackages` function can be used. This function either takes:
|
||||
+ A list of packages,
|
||||
+ or a function which returns a list of packages when given the `agdaPackages` attribute set,
|
||||
+ or an attribute set containing a list of packages and a GHC derivation for compilation (see below).
|
||||
|
||||
For example, suppose we wanted a version of agda which has access to the standard library. This can be obtained with the expressions:
|
||||
|
||||
```
|
||||
agda.withPackages [ agdaPackages.standard-library ]
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```
|
||||
agda.withPackages (p: [ p.standard-library ])
|
||||
```
|
||||
|
||||
or can be called as in the [Compiling Agda](#compiling-agda) section.
|
||||
|
||||
If you want to use a library in your home directory (for instance if it is a development version) then typecheck it manually (using `agda.withPackages` if necessary) and then override the `src` attribute of the package to point to your local repository.
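For instance, a minimal sketch of such an override might look like the following (the local path, and the choice of the standard library as the package being developed, are assumptions for illustration only):

```
agda.withPackages (p: [
  (p.standard-library.overrideAttrs (oldAttrs: {
    # Assumed local development checkout of the library.
    src = /home/alice/agda-stdlib;
  }))
])
```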
|
||||
|
||||
Agda will not by default use these libraries. To tell agda to use the library we have some options:
|
||||
- Call `agda` with the library flag:
|
||||
```
|
||||
$ agda -l standard-library -i . MyFile.agda
|
||||
```
|
||||
- Write a `my-library.agda-lib` file for the project you are working on which may look like:
|
||||
```
|
||||
name: my-library
|
||||
include: .
|
||||
depends: standard-library
|
||||
```
|
||||
- Create the file `~/.agda/defaults` and add any libraries you want to use by default.
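A minimal sketch of such a `defaults` file, assuming you only want the standard library enabled by default, is simply a list of library names, one per line:
```
standard-library
```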
|
||||
|
||||
More information can be found in the [official Agda documentation on library management](https://agda.readthedocs.io/en/v2.6.1/tools/package-system.html).
|
||||
|
||||
## Compiling Agda
|
||||
Agda modules can be compiled with the `--compile` flag. A version of `ghc` with `ieee` is made available to the Agda program via the `--with-compiler` flag.
|
||||
This can be overridden by a different version of `ghc` as follows:
|
||||
|
||||
```
|
||||
agda.withPackages {
|
||||
pkgs = [ ... ];
|
||||
ghc = haskell.compiler.ghcHEAD;
|
||||
}
|
||||
```
|
||||
|
||||
## Writing Agda packages
|
||||
To write a nix derivation for an agda library, first check that the library has a `*.agda-lib` file.
|
||||
|
||||
A derivation can then be written using `agdaPackages.mkDerivation`. This has similar arguments to `stdenv.mkDerivation` with the following additions:
|
||||
+ `everythingFile` can be used to specify the location of the `Everything.agda` file, defaulting to `./Everything.agda`. If this file does not exist then either it should be patched in or the `buildPhase` should be overridden (see below).
|
||||
+ `libraryName` should be the name that appears in the `*.agda-lib` file, defaulting to `pname`.
|
||||
+ `libraryFile` should be the file name of the `*.agda-lib` file, defaulting to `${libraryName}.agda-lib`.
|
||||
|
||||
The build phase for `agdaPackages.mkDerivation` simply runs `agda` on the `Everything.agda` file. If something else is needed to build the package (e.g. `make`) then the `buildPhase` should be overridden (or a `preBuild` or `configurePhase` can be used if there are steps that need to be done prior to checking the `Everything.agda` file). `agda` and the Agda libraries contained in `buildInputs` are made available during the build phase. The install phase simply copies all `.agda`, `.agdai` and `.agda-lib` files to the output directory. Again, this can be overridden.
|
||||
|
||||
To add an agda package to `nixpkgs`, the derivation should be written to `pkgs/development/libraries/agda/${library-name}/` and an entry should be added to `pkgs/top-level/agda-packages.nix`. Here it is called in a scope with access to all other agda libraries, so the top line of the `default.nix` can look like:
|
||||
```
|
||||
{ mkDerivation, standard-library, fetchFromGitHub }:
|
||||
```
|
||||
and `mkDerivation` should be called instead of `agdaPackages.mkDerivation`. Here is an example skeleton derivation for iowa-stdlib:
|
||||
|
||||
```
|
||||
mkDerivation {
|
||||
version = "1.5.0";
|
||||
pname = "iowa-stdlib";
|
||||
|
||||
src = ...
|
||||
|
||||
libraryFile = "";
|
||||
libraryName = "IAL-1.3";
|
||||
|
||||
buildPhase = ''
|
||||
patchShebangs find-deps.sh
|
||||
make
|
||||
'';
|
||||
}
|
||||
```
|
||||
This library has a file called `.agda-lib`, and so we give an empty string to `libraryFile` as nothing precedes `.agda-lib` in the filename. This file contains `name: IAL-1.3`, and so we let `libraryName = "IAL-1.3"`. This library does not use an `Everything.agda` file and instead has a Makefile, so there is no need to set `everythingFile` and we set a custom `buildPhase`.
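Once an entry for the library exists in `pkgs/top-level/agda-packages.nix`, it can be consumed like any other library; for example (assuming the attribute is named `iowa-stdlib`):
```
agda.withPackages (p: [ p.iowa-stdlib ])
```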
|
||||
|
||||
When writing an agda package it is essential to make sure that no `.agda-lib` file gets added to the store as a single file (for example by using `writeText`). This causes agda to think that the nix store is an agda library and it will attempt to write to it whenever it typechecks something. See [https://github.com/agda/agda/issues/4613](https://github.com/agda/agda/issues/4613).
|
|
@ -167,7 +167,7 @@ parameters that the SDK composition function (the function shown in the
|
|||
previous section) supports.
|
||||
|
||||
This build function is particularly useful when it is desired to use
|
||||
[Hydra](http://nixos.org/hydra): the Nix-based continuous integration solution
|
||||
[Hydra](https://nixos.org/hydra): the Nix-based continuous integration solution
|
||||
to build Android apps. An Android APK gets exposed as a build product and can be
|
||||
installed on any Android device with a web browser by navigating to the build
|
||||
result page.
|
||||
|
|
|
@ -40,6 +40,23 @@
|
|||
</para>
|
||||
</section>
|
||||
|
||||
<section xml:id="ssec-icon-theme-packaging">
|
||||
<title>Packaging icon themes</title>
|
||||
|
||||
<para>
|
||||
Icon themes may inherit from other icon themes. The inheritance is specified using the <literal>Inherits</literal> key in the <filename>index.theme</filename> file distributed with the icon theme. According to the <link xlink:href="https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html">icon theme specification</link>, icons not provided by the theme are looked for in its parent icon themes. Therefore the parent themes should be installed as dependencies for a more complete experience regarding the icon sets used.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The package <package>hicolor-icon-theme</package> provides a setup hook which makes symbolic links for the parent themes into the directory <filename>share/icons</filename> of the current theme directory in the nix store, making sure they can be found at runtime. For that to work the packages providing parent icon themes should be listed as propagated build dependencies, together with <package>hicolor-icon-theme</package>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Also make sure that <filename>icon-theme.cache</filename> is installed for each theme provided by the package, and set <code>dontDropIconThemeCache</code> to <code>true</code> so that the cache file is not removed by the <package>gtk3</package> setup hook.
|
||||
</para>
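<para>
 As a rough sketch (the exact attributes vary per theme and are shown here only as an illustration), this typically means adding <package>gtk3</package> to <varname>nativeBuildInputs</varname> to obtain <command>gtk-update-icon-cache</command>, and generating the caches in <varname>postInstall</varname>:
</para>
<programlisting>
  nativeBuildInputs = [ gtk3 ];
  propagatedBuildInputs = [ hicolor-icon-theme ];
  dontDropIconThemeCache = true;
  postInstall = ''
    # Generate icon-theme.cache for every theme installed by this package.
    for theme in $out/share/icons/*; do
      gtk-update-icon-cache "$theme"
    done
  '';
</programlisting>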
|
||||
|
||||
</section>
|
||||
|
||||
<section xml:id="ssec-gnome-themes">
|
||||
<title>GTK Themes</title>
|
||||
|
||||
|
|
|
@ -36,7 +36,7 @@ pet = buildGoModule rec {
|
|||
sha256 = "0m2fzpqxk7hrbxsgqplkg7h2p7gv6s1miymv3gvw0cz039skag0s";
|
||||
};
|
||||
|
||||
modSha256 = "1879j77k96684wi554rkjxydrj8g3hpp0kvxz03sd8dmwr3lh83j"; <co xml:id='ex-buildGoModule-1' />
|
||||
vendorSha256 = "1879j77k96684wi554rkjxydrj8g3hpp0kvxz03sd8dmwr3lh83j"; <co xml:id='ex-buildGoModule-1' />
|
||||
|
||||
subPackages = [ "." ]; <co xml:id='ex-buildGoModule-2' />
|
||||
|
||||
|
@ -56,7 +56,7 @@ pet = buildGoModule rec {
|
|||
<calloutlist>
|
||||
<callout arearefs='ex-buildGoModule-1'>
|
||||
<para>
|
||||
<varname>modSha256</varname> is the hash of the output of the intermediate fetcher derivation.
|
||||
<varname>vendorSha256</varname> is the hash of the output of the intermediate fetcher derivation.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs='ex-buildGoModule-2'>
|
||||
|
@ -68,12 +68,12 @@ pet = buildGoModule rec {
|
|||
</para>
|
||||
|
||||
<para>
|
||||
<varname>modSha256</varname> can also take <varname>null</varname> as an input.
|
||||
<varname>vendorSha256</varname> can also take <varname>null</varname> as an input.
|
||||
|
||||
When `null` is used as a value, the derivation won't be a
|
||||
fixed-output derivation but disable the build sandbox instead. This can be useful outside
|
||||
of nixpkgs where re-generating the modSha256 on each mod.sum changes is cumbersome,
|
||||
but will fail to build by Hydra, as builds with a disabled sandbox are discouraged.
|
||||
When `null` is used as a value, rather than fetching the dependencies
|
||||
and vendoring them, we use the vendoring included within the source repo.
|
||||
If you'd like to not have to update this field on dependency changes,
|
||||
run `go mod vendor` in your source repo and set 'vendorSha256 = null;'
|
||||
</para>
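<para>
 As a sketch, reusing the <varname>pet</varname> example from above and assuming the upstream repository commits its <filename>vendor/</filename> directory (an assumption made only for illustration), the derivation would simply replace the hash:
</para>
<programlisting>
pet = buildGoModule rec {
  pname = "pet";
  version = "0.3.4";

  src = fetchFromGitHub {
    owner = "knqyf263";
    repo = "pet";
    rev = "v${version}";
    sha256 = "0m2fzpqxk7hrbxsgqplkg7h2p7gv6s1miymv3gvw0cz039skag0s";
  };

  # Rather than building a fixed-output fetcher derivation, use the
  # vendored sources committed in the repository.
  vendorSha256 = null;

  subPackages = [ "." ];
};
</programlisting>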
|
||||
</section>
|
||||
|
||||
|
@ -191,18 +191,6 @@ deis = buildGoPackage rec {
|
|||
To extract dependency information from a Go package in an automated way, use <link xlink:href="https://github.com/kamilchm/go2nix">go2nix</link>. It can produce a complete derivation and a <varname>goDeps</varname> file for Go programs.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
<varname>buildGoPackage</varname> produces <xref linkend='chap-multiple-output' xrefstyle="select: title" /> where <varname>bin</varname> includes program binaries. You can test build a Go binary as follows:
|
||||
<screen>
|
||||
<prompt>$ </prompt>nix-build -A deis.bin
|
||||
</screen>
|
||||
or build all outputs with:
|
||||
<screen>
|
||||
<prompt>$ </prompt>nix-build -A deis.all
|
||||
</screen>
|
||||
<varname>bin</varname> output will be installed by default with <varname>nix-env -i</varname> or <varname>systemPackages</varname>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
You may use Go packages installed into the active Nix profiles by adding the following to your ~/.bashrc:
|
||||
<screen>
|
||||
|
|
|
@ -103,8 +103,8 @@ command displays the complete list of available compilers:
|
|||
$ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler
|
||||
haskell.compiler.ghc8101 ghc-8.10.1
|
||||
haskell.compiler.integer-simple.ghc8101 ghc-8.10.1
|
||||
haskell.compiler.ghcHEAD ghc-8.11.20200403
|
||||
haskell.compiler.integer-simple.ghcHEAD ghc-8.11.20200403
|
||||
haskell.compiler.ghcHEAD ghc-8.11.20200505
|
||||
haskell.compiler.integer-simple.ghcHEAD ghc-8.11.20200505
|
||||
haskell.compiler.ghc822Binary ghc-8.2.2-binary
|
||||
haskell.compiler.ghc844 ghc-8.4.4
|
||||
haskell.compiler.ghc863Binary ghc-8.6.3-binary
|
||||
|
|
|
@ -5,6 +5,7 @@
|
|||
<para>
|
||||
The <link linkend="chap-stdenv">standard build environment</link> makes it easy to build typical Autotools-based packages with very little code. Any other kind of package can be accommodated by overriding the appropriate phases of <literal>stdenv</literal>. However, there are specialised functions in Nixpkgs to easily build packages for other programming languages, such as Perl or Haskell. These are described in this chapter.
|
||||
</para>
|
||||
<xi:include href="agda.section.xml" />
|
||||
<xi:include href="android.section.xml" />
|
||||
<xi:include href="beam.xml" />
|
||||
<xi:include href="bower.xml" />
|
||||
|
|
|
@ -18,7 +18,7 @@ The primary objective of this project is to use the Nix expression language to
|
|||
specify how iOS apps can be built from source code, and to automatically spawn
|
||||
iOS simulator instances for testing.
|
||||
|
||||
This component also makes it possible to use [Hydra](http://nixos.org/hydra),
|
||||
This component also makes it possible to use [Hydra](https://nixos.org/hydra),
|
||||
the Nix-based continuous integration server to regularly build iOS apps and to
|
||||
do wireless ad-hoc installations of enterprise IPAs on iOS devices through
|
||||
Hydra.
|
||||
|
|
|
@ -9,7 +9,7 @@
|
|||
Several versions of the Python interpreter are available on Nix, as well as a
|
||||
high amount of packages. The attribute `python` refers to the default
|
||||
interpreter, which is currently CPython 2.7. It is also possible to refer to
|
||||
specific versions, e.g. `python35` refers to CPython 3.5, and `pypy` refers to
|
||||
specific versions, e.g. `python38` refers to CPython 3.8, and `pypy` refers to
|
||||
the default PyPy interpreter.
|
||||
|
||||
Python is used a lot, and in different ways. This affects also how it is
|
||||
|
@ -25,10 +25,10 @@ however, are in separate sets, with one set per interpreter version.
|
|||
The interpreters have several common attributes. One of these attributes is
|
||||
`pkgs`, which is a package set of Python libraries for this specific
|
||||
interpreter. E.g., the `toolz` package corresponding to the default interpreter
|
||||
is `python.pkgs.toolz`, and the CPython 3.5 version is `python35.pkgs.toolz`.
|
||||
is `python.pkgs.toolz`, and the CPython 3.8 version is `python38.pkgs.toolz`.
|
||||
The main package set contains aliases to these package sets, e.g.
|
||||
`pythonPackages` refers to `python.pkgs` and `python35Packages` to
|
||||
`python35.pkgs`.
|
||||
`pythonPackages` refers to `python.pkgs` and `python38Packages` to
|
||||
`python38.pkgs`.
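For instance (a sketch you can evaluate, assuming a `<nixpkgs>` channel is available), the following two attribute paths refer to the same derivation:

```nix
with import <nixpkgs> {};

# The alias python38Packages points at python38.pkgs.
[ python38.pkgs.toolz python38Packages.toolz ]
```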
|
||||
|
||||
#### Installing Python and packages
|
||||
|
||||
|
@ -36,121 +36,191 @@ The Nix and NixOS manuals explain how packages are generally installed. In the
|
|||
case of Python and Nix, it is important to make a distinction between whether the
|
||||
package is considered an application or a library.
|
||||
|
||||
Applications on Nix are typically installed into your user
|
||||
profile imperatively using `nix-env -i`, and on NixOS declaratively by adding the
|
||||
package name to `environment.systemPackages` in `/etc/nixos/configuration.nix`.
|
||||
Dependencies such as libraries are automatically installed and should not be
|
||||
installed explicitly.
|
||||
Applications on Nix are typically installed into your user profile imperatively
|
||||
using `nix-env -i`, and on NixOS declaratively by adding the package name to
|
||||
`environment.systemPackages` in `/etc/nixos/configuration.nix`. Dependencies
|
||||
such as libraries are automatically installed and should not be installed
|
||||
explicitly.
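For example (a sketch of the declarative route; `mercurial` is used only as an illustration of a Python application), the NixOS configuration fragment would be:

```nix
{ # ...

  environment.systemPackages = with pkgs; [
    # A Python application; its library dependencies are pulled in automatically.
    mercurial
  ];
}
```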
|
||||
|
||||
The same goes for Python applications and libraries. Python applications can be
|
||||
installed in your profile. But Python libraries you would like to use for
|
||||
development cannot be installed, at least not individually, because they won't
|
||||
be able to find each other resulting in import errors. Instead, it is possible
|
||||
to create an environment with `python.buildEnv` or `python.withPackages` where
|
||||
the interpreter and other executables are able to find each other and all of the
|
||||
modules.
|
||||
The same goes for Python applications. Python applications can be installed in
|
||||
your profile, and will be wrapped to find their exact library dependencies,
|
||||
without impacting other applications or polluting your user environment.
|
||||
|
||||
In the following examples we create an environment with Python 3.5, `numpy` and
|
||||
`toolz`. As you may imagine, there is one limitation here, and that's that
|
||||
you can install only one environment at a time. You will notice the complaints
|
||||
about collisions when you try to install a second environment.
|
||||
But Python libraries you would like to use for development cannot be installed,
|
||||
at least not individually, because they won't be able to find each other
|
||||
resulting in import errors. Instead, it is possible to create an environment
|
||||
with `python.buildEnv` or `python.withPackages` where the interpreter and other
|
||||
executables are wrapped to be able to find each other and all of the modules.
|
||||
|
||||
##### Environment defined in separate `.nix` file
|
||||
In the following examples we will start by creating a simple, ad-hoc environment
|
||||
with a nix-shell that has `numpy` and `toolz` in Python 3.8; then we will create
|
||||
a re-usable environment in a single-file Python script; then we will create a
|
||||
full Python environment for development with this same environment.
|
||||
|
||||
Create a file, e.g. `build.nix`, with the following expression
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
Philosophically, this should be familiar to users who are used to a `venv` style
|
||||
of development: individual projects create their own Python environments without
|
||||
impacting the global environment or each other.
|
||||
|
||||
python35.withPackages (ps: with ps; [ numpy toolz ])
|
||||
```
|
||||
and install it in your profile with
|
||||
```shell
|
||||
nix-env -if build.nix
|
||||
```
|
||||
Now you can use the Python interpreter, as well as the extra packages (`numpy`,
|
||||
`toolz`) that you added to the environment.
|
||||
#### Ad-hoc temporary Python environment with `nix-shell`
|
||||
|
||||
##### Environment defined in `~/.config/nixpkgs/config.nix`
|
||||
The simplest way to start playing with the way nix wraps and sets up Python
|
||||
environments is with `nix-shell` at the cmdline. These environments create a
|
||||
temporary shell session with a Python and a *precise* list of packages (plus
|
||||
their runtime dependencies), with no other Python packages in the Python
|
||||
interpreter's scope.
|
||||
|
||||
If you prefer you could also add the environment as a package override to the
|
||||
Nixpkgs set, e.g. using `config.nix`,
|
||||
|
||||
```nix
|
||||
{ # ...
|
||||
|
||||
packageOverrides = pkgs: with pkgs; {
|
||||
myEnv = python35.withPackages (ps: with ps; [ numpy toolz ]);
|
||||
};
|
||||
}
|
||||
```
|
||||
and install it in your profile with
|
||||
|
||||
```shell
|
||||
nix-env -iA nixpkgs.myEnv
|
||||
```
|
||||
|
||||
The environment is installed by referring to the attribute, assuming
the `nixpkgs` channel was used.
|
||||
|
||||
##### Environment defined in `/etc/nixos/configuration.nix`
|
||||
|
||||
For the sake of completeness, here's another example how to install the
|
||||
environment system-wide.
|
||||
|
||||
```nix
|
||||
{ # ...
|
||||
|
||||
environment.systemPackages = with pkgs; [
|
||||
(python35.withPackages(ps: with ps; [ numpy toolz ]))
|
||||
];
|
||||
}
|
||||
```
|
||||
|
||||
#### Temporary Python environment with `nix-shell`
|
||||
|
||||
The examples in the previous section showed how to install a Python environment
|
||||
into a profile. For development you may need to use multiple environments.
|
||||
`nix-shell` gives the possibility to temporarily load another environment, akin
|
||||
to `virtualenv`.
|
||||
|
||||
There are two methods for loading a shell with Python packages. The first and
|
||||
recommended method is to create an environment with `python.buildEnv` or
|
||||
`python.withPackages` and load that. E.g.
|
||||
To create a Python 3.8 session with `numpy` and `toolz` available, run:
|
||||
|
||||
```sh
|
||||
$ nix-shell -p 'python35.withPackages(ps: with ps; [ numpy toolz ])'
|
||||
$ nix-shell -p 'python38.withPackages(ps: with ps; [ numpy toolz ])'
|
||||
```
|
||||
|
||||
opens a shell from which you can launch the interpreter
|
||||
By default `nix-shell` will start a `bash` session with this interpreter in our
|
||||
`PATH`, so if we then run:
|
||||
|
||||
```
|
||||
[nix-shell:~/src/nixpkgs]$ python3
|
||||
Python 3.8.1 (default, Dec 18 2019, 19:06:26)
|
||||
[GCC 9.2.0] on linux
|
||||
Type "help", "copyright", "credits" or "license" for more information.
|
||||
>>> import numpy; import toolz
|
||||
```
|
||||
|
||||
Note that no other modules are in scope, even if they were imperatively
|
||||
installed into our user environment as a dependency of a Python application:
|
||||
|
||||
```
|
||||
>>> import requests
|
||||
Traceback (most recent call last):
|
||||
File "<stdin>", line 1, in <module>
|
||||
ModuleNotFoundError: No module named 'requests'
|
||||
```
|
||||
|
||||
We can add as many additional modules onto the `nix-shell` as we need, and we
|
||||
will still get 1 wrapped Python interpreter. We can start the interpreter
|
||||
directly like so:
|
||||
|
||||
```sh
|
||||
[nix-shell:~] python3
|
||||
$ nix-shell -p 'python38.withPackages(ps: with ps; [ numpy toolz requests ])' --run python3
|
||||
these derivations will be built:
|
||||
/nix/store/xbdsrqrsfa1yva5s7pzsra8k08gxlbz1-python3-3.8.1-env.drv
|
||||
building '/nix/store/xbdsrqrsfa1yva5s7pzsra8k08gxlbz1-python3-3.8.1-env.drv'...
|
||||
created 277 symlinks in user environment
|
||||
Python 3.8.1 (default, Dec 18 2019, 19:06:26)
|
||||
[GCC 9.2.0] on linux
|
||||
Type "help", "copyright", "credits" or "license" for more information.
|
||||
>>> import requests
|
||||
>>>
|
||||
```
|
||||
|
||||
The other method, which is not recommended, does not create an environment and
|
||||
requires you to list the packages directly,
|
||||
Notice that this time it built a new Python environment, which now includes
|
||||
`requests`. Building an environment just creates wrapper scripts that expose the
|
||||
selected dependencies to the interpreter while re-using the actual modules. This
|
||||
means if any other env has installed `requests` or `numpy` in a different
|
||||
context, we don't need to recompile them -- we just recompile the wrapper script
|
||||
that sets up an interpreter pointing to them. This matters much more for "big"
|
||||
modules like `pytorch` or `tensorflow`.
|
||||
|
||||
Module names usually match their names on [pypi.org](https://pypi.org/), but
|
||||
you can use the [Nixpkgs search website](https://nixos.org/nixos/packages.html)
|
||||
to find them as well (along with non-python packages).
|
||||
|
||||
At this point we can create throwaway experimental Python environments with
|
||||
arbitrary dependencies. This is a good way to get a feel for how the Python
|
||||
interpreter and dependencies work in Nix and NixOS, but to do some actual
|
||||
development, we'll want to make it a bit more persistent.
|
||||
|
||||
##### Running Python scripts and using `nix-shell` as shebang
|
||||
|
||||
Sometimes, we have a script whose header looks like this:
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
import numpy as np
|
||||
a = np.array([1,2])
|
||||
b = np.array([3,4])
|
||||
print(f"The dot product of {a} and {b} is: {np.dot(a, b)}")
|
||||
```
|
||||
|
||||
Executing this script requires a `python3` that has `numpy`. Using what we learned
|
||||
in the previous section, we could start up a shell and just run it like so:
|
||||
|
||||
```
|
||||
nix-shell -p 'python38.withPackages(ps: with ps; [ numpy ])' --run 'python3 foo.py'
|
||||
The dot product of [1 2] and [3 4] is: 11
|
||||
```
|
||||
|
||||
But if we maintain the script ourselves, and if there are more dependencies, it
|
||||
may be nice to encode those dependencies in source to make the script re-usable
|
||||
without that bit of knowledge. That can be done by using `nix-shell` as a
|
||||
[shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)), like so:
|
||||
|
||||
```python
|
||||
#!/usr/bin/env nix-shell
|
||||
#!nix-shell -i python3 -p "python3.withPackages(ps: [ ps.numpy ])"
|
||||
import numpy as np
|
||||
a = np.array([1,2])
|
||||
b = np.array([3,4])
|
||||
print(f"The dot product of {a} and {b} is: {np.dot(a, b)}")
|
||||
```
|
||||
|
||||
Then we simply execute it, without requiring any environment setup at all!
|
||||
|
||||
```sh
|
||||
$ nix-shell -p python35.pkgs.numpy python35.pkgs.toolz
|
||||
$ ./foo.py
|
||||
The dot product of [1 2] and [3 4] is: 11
|
||||
```
|
||||
|
||||
Again, it is possible to launch the interpreter from the shell. The Python
|
||||
interpreter has the attribute `pkgs` which contains all Python libraries for
|
||||
that specific interpreter.
|
||||
If the dependencies are not available on the host where `foo.py` is executed, it
|
||||
will build or download them from a Nix binary cache prior to starting up,
provided it is executed on a machine with a multi-user nix installation.
|
||||
|
||||
This provides a way to ship a self-bootstrapping Python script, akin to a
|
||||
statically linked binary, where it can be run on any machine (provided nix is
|
||||
installed) without having to assume that `numpy` is installed globally on the
|
||||
system.
|
||||
|
||||
By default it pulls the imported Nixpkgs checkout from our nix channel, which
is nice because it aligns with the cache of our other package builds, but we
can make it fully reproducible by pinning the `nixpkgs` import:
|
||||
|
||||
```python
|
||||
#!/usr/bin/env nix-shell
|
||||
#!nix-shell -i python3 -p "python3.withPackages(ps: [ ps.numpy ])"
|
||||
#!nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/d373d80b1207d52621961b16aa4a3438e4f98167.tar.gz
|
||||
import numpy as np
|
||||
a = np.array([1,2])
|
||||
b = np.array([3,4])
|
||||
print(f"The dot product of {a} and {b} is: {np.dot(a, b)}")
|
||||
```
|
||||
|
||||
This will execute with the exact same versions of Python 3.8, numpy, and system
|
||||
dependencies a year from now as it does today, because it will always use
|
||||
exactly git commit `d373d80b1207d52621961b16aa4a3438e4f98167` of Nixpkgs for all
|
||||
of the package versions.
|
||||
|
||||
This is also a great way to ensure the script executes identically on different
|
||||
servers.
|
||||
|
||||
##### Load environment from `.nix` expression
|
||||
As explained in the Nix manual, `nix-shell` can also load an
|
||||
expression from a `.nix` file. Say we want to have Python 3.5, `numpy`
|
||||
and `toolz`, like before, in an environment. Consider a `shell.nix` file
|
||||
with
|
||||
|
||||
We've now seen how to create an ad-hoc temporary shell session, and how to
|
||||
create a single script with Python dependencies, but in the course of normal
|
||||
development we're usually working in an entire package repository.
|
||||
|
||||
As explained in the Nix manual, `nix-shell` can also load an expression from a
|
||||
`.nix` file. Say we want to have Python 3.8, `numpy` and `toolz`, like before,
|
||||
in an environment. We can add a `shell.nix` file describing our dependencies:
|
||||
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
|
||||
(python35.withPackages (ps: [ps.numpy ps.toolz])).env
|
||||
(python38.withPackages (ps: [ps.numpy ps.toolz])).env
|
||||
```
|
||||
|
||||
Executing `nix-shell` gives you again a Nix shell from which you can run Python.
|
||||
And then at the command line, just typing `nix-shell` produces the same
|
||||
environment as before. In a normal project, we'll likely have many more
|
||||
dependencies; this can provide a way for developers to share the environments
|
||||
with each other and with CI builders.
|
||||
|
||||
What's happening here?
|
||||
|
||||
|
@ -158,9 +228,9 @@ What's happening here?
|
|||
imports the `<nixpkgs>` function, `{}` calls it and the `with` statement
|
||||
brings all attributes of `nixpkgs` in the local scope. These attributes form
|
||||
the main package set.
|
||||
2. Then we create a Python 3.5 environment with the `withPackages` function.
|
||||
2. Then we create a Python 3.8 environment with the `withPackages` function, as before.
|
||||
3. The `withPackages` function expects us to provide a function as an argument
|
||||
that takes the set of all python packages and returns a list of packages to
|
||||
that takes the set of all Python packages and returns a list of packages to
|
||||
include in the environment. Here, we select the packages `numpy` and `toolz`
|
||||
from the package set.
|
||||
|
||||
|
@ -168,59 +238,106 @@ To combine this with `mkShell` you can:
|
|||
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
|
||||
let
|
||||
pythonEnv = python35.withPackages (ps: [
|
||||
pythonEnv = python38.withPackages (ps: [
|
||||
ps.numpy
|
||||
ps.toolz
|
||||
]);
|
||||
in mkShell {
|
||||
buildInputs = [
|
||||
pythonEnv
|
||||
hello
|
||||
|
||||
black
|
||||
mypy
|
||||
|
||||
libffi
|
||||
openssl
|
||||
];
|
||||
}
|
||||
```
|
||||
|
||||
##### Execute command with `--run`
|
||||
A convenient option with `nix-shell` is the `--run`
|
||||
option, with which you can execute a command in the `nix-shell`. We can
|
||||
e.g. directly open a Python shell
|
||||
This will create a unified environment that has not just our Python interpreter
|
||||
and its Python dependencies, but also tools like `black` or `mypy` and libraries
|
||||
like `libffi` and `openssl` in scope. This is generic and can span any number of
|
||||
tools or languages across the Nixpkgs ecosystem.
|
||||
|
||||
```sh
|
||||
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3"
|
||||
##### Installing environments globally on the system
|
||||
|
||||
Up to now, we've been creating environments scoped to an ad-hoc shell session,
|
||||
or a single script, or a single project. This is generally advisable, as it
|
||||
avoids pollution across contexts.
|
||||
|
||||
However, sometimes we know we will often want a Python with some basic packages,
|
||||
and want this available without having to enter into a shell or build context.
|
||||
This can be useful to have things like vim/emacs editors and plugins or shell
|
||||
tools "just work" without having to set them up, or when running other software
|
||||
that expects packages to be installed globally.
|
||||
|
||||
To create your own custom environment, create a file in `~/.config/nixpkgs/overlays/`
|
||||
that looks like this:
|
||||
|
||||
```nix
|
||||
# ~/.config/nixpkgs/overlays/myEnv.nix
|
||||
self: super: {
|
||||
myEnv = super.buildEnv {
|
||||
name = "myEnv";
|
||||
paths = [
|
||||
# A Python 3 interpreter with some packages
|
||||
(self.python3.withPackages (
|
||||
ps: with ps; [
|
||||
pyflakes
|
||||
pytest
|
||||
python-language-server
|
||||
]
|
||||
))
|
||||
|
||||
# Some other packages we'd like as part of this env
|
||||
self.mypy
|
||||
self.black
|
||||
self.ripgrep
|
||||
self.tmux
|
||||
];
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
or run a script
|
||||
You can then build and install this to your profile with:
|
||||
|
||||
```sh
|
||||
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3 myscript.py"
|
||||
nix-env -iA myEnv
|
||||
```
|
||||
|
||||
##### `nix-shell` as shebang
|
||||
In fact, for the second use case, there is a more convenient method. You can add
|
||||
a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) to your script
|
||||
specifying which dependencies `nix-shell` needs. With the following shebang, you
|
||||
can just execute `./myscript.py`, and it will make available all dependencies
|
||||
and run the script in the `python3` shell.
|
||||
One limitation of this is that you can only have 1 Python env installed
|
||||
globally, since they conflict on the `python` to load out of your `PATH`.
|
||||
|
||||
```py
|
||||
#! /usr/bin/env nix-shell
|
||||
#! nix-shell -i python3 -p "python3.withPackages(ps: [ps.numpy])"
|
||||
If you get a conflict or prefer to keep the setup clean, you can have `nix-env`
|
||||
atomically *uninstall* all other imperatively installed packages and replace
|
||||
your profile with just `myEnv` by using the `--replace` flag.
|
||||
|
||||
import numpy
|
||||
##### Environment defined in `/etc/nixos/configuration.nix`
|
||||
|
||||
print(numpy.__version__)
|
||||
For the sake of completeness, here's how to install the environment system-wide
|
||||
on NixOS.
|
||||
|
||||
```nix
|
||||
{ # ...
|
||||
|
||||
environment.systemPackages = with pkgs; [
|
||||
(python38.withPackages(ps: with ps; [ numpy toolz ]))
|
||||
];
|
||||
}
|
||||
```
|
||||
|
||||
### Developing with Python
|
||||
|
||||
Now that you know how to get a working Python environment with Nix, it is time
|
||||
to go forward and start actually developing with Python. We will first have a
|
||||
look at how Python packages are packaged on Nix. Then, we will look at how you
|
||||
can use development mode with your code.
|
||||
Above, we were mostly just focused on use cases and what to do to get started
|
||||
creating working Python environments in nix.
|
||||
|
||||
#### Packaging a library
|
||||
Now that you know the basics to be up and running, it is time to take a step
|
||||
back and take a deeper look at how Python packages are packaged on Nix. Then,
|
||||
we will look at how you can use development mode with your code.
|
||||
|
||||
#### Python library packages in Nixpkgs
|
||||
|
||||
With Nix all packages are built by functions. The main function in Nix for
|
||||
building Python libraries is `buildPythonPackage`. Let's see how we can build the
|
||||
|
@ -231,11 +348,11 @@ building Python libraries is `buildPythonPackage`. Let's see how we can build th
|
|||
|
||||
buildPythonPackage rec {
|
||||
pname = "toolz";
|
||||
version = "0.7.4";
|
||||
version = "0.10.0";
|
||||
|
||||
src = fetchPypi {
|
||||
inherit pname version;
|
||||
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
|
||||
sha256 = "08fdd5ef7c96480ad11c12d472de21acd32359996f69a5259299b540feba4560";
|
||||
};
|
||||
|
||||
doCheck = false;
|
||||
|
@ -260,8 +377,9 @@ information. The output of the function is a derivation.
|
|||
|
||||
An expression for `toolz` can be found in the Nixpkgs repository. As explained
|
||||
in the introduction of this Python section, a derivation of `toolz` is available
|
||||
for each interpreter version, e.g. `python35.pkgs.toolz` refers to the `toolz`
|
||||
derivation corresponding to the CPython 3.5 interpreter.
|
||||
for each interpreter version, e.g. `python38.pkgs.toolz` refers to the `toolz`
|
||||
derivation corresponding to the CPython 3.8 interpreter.
|
||||
|
||||
The above example works when you're directly working on
|
||||
`pkgs/top-level/python-packages.nix` in the Nixpkgs repository. Often though,
|
||||
you will want to test a Nix expression outside of the Nixpkgs tree.
|
||||
|
@ -273,13 +391,13 @@ and adds it along with a `numpy` package to a Python environment.
|
|||
with import <nixpkgs> {};
|
||||
|
||||
( let
|
||||
my_toolz = python35.pkgs.buildPythonPackage rec {
|
||||
my_toolz = python38.pkgs.buildPythonPackage rec {
|
||||
pname = "toolz";
|
||||
version = "0.7.4";
|
||||
version = "0.10.0";
|
||||
|
||||
src = python35.pkgs.fetchPypi {
|
||||
src = python38.pkgs.fetchPypi {
|
||||
inherit pname version;
|
||||
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
|
||||
sha256 = "08fdd5ef7c96480ad11c12d472de21acd32359996f69a5259299b540feba4560";
|
||||
};
|
||||
|
||||
doCheck = false;
|
||||
|
@ -290,12 +408,12 @@ with import <nixpkgs> {};
|
|||
};
|
||||
};
|
||||
|
||||
in python35.withPackages (ps: [ps.numpy my_toolz])
|
||||
in python38.withPackages (ps: [ps.numpy my_toolz])
|
||||
).env
|
||||
```
|
||||
|
||||
Executing `nix-shell` will result in an environment in which you can use
|
||||
Python 3.5 and the `toolz` package. As you can see we had to explicitly mention
|
||||
Python 3.8 and the `toolz` package. As you can see we had to explicitly mention
|
||||
for which Python version we want to build a package.
|
||||
|
||||
So, what did we do here? Well, we took the Nix expression that we used earlier
|
||||
|
@ -312,7 +430,7 @@ Our example, `toolz`, does not have any dependencies on other Python packages or
|
|||
system libraries. According to the manual, `buildPythonPackage` uses the
|
||||
arguments `buildInputs` and `propagatedBuildInputs` to specify dependencies. If
|
||||
something is exclusively a build-time dependency, then the dependency should be
|
||||
included as a `buildInput`, but if it is (also) a runtime dependency, then it
|
||||
included in `buildInputs`, but if it is (also) a runtime dependency, then it
|
||||
should be added to `propagatedBuildInputs`. Test dependencies are considered
|
||||
build-time dependencies and passed to `checkInputs`.
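As a sketch (the package and its dependency choices are illustrative, not a real
Nixpkgs expression, and it assumes the expression is written inside
`python-packages.nix` where these attribute names are in scope):

```nix
buildPythonPackage rec {
  pname = "example";   # hypothetical package
  version = "1.0.0";

  src = fetchPypi {
    inherit pname version;
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000";  # placeholder
  };

  buildInputs = [ cython ];            # exclusively needed at build time
  propagatedBuildInputs = [ numpy ];   # (also) needed at runtime
  checkInputs = [ pytest ];            # only needed to run the test-suite

  checkPhase = ''
    py.test
  '';
}
```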
|
||||
|
||||
|
@ -423,10 +541,11 @@ Note also the line `doCheck = false;`, we explicitly disabled running the test-s
|
|||
|
||||
#### Develop local package
|
||||
|
||||
As a Python developer you're likely aware of [development mode](http://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode) (`python setup.py develop`);
|
||||
instead of installing the package this command creates a special link to the project code.
|
||||
That way, you can run updated code without having to reinstall after each and every change you make.
|
||||
Development mode is also available. Let's see how you can use it.
|
||||
As a Python developer you're likely aware of [development mode](http://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode)
|
||||
(`python setup.py develop`); instead of installing the package this command
|
||||
creates a special link to the project code. That way, you can run updated code
|
||||
without having to reinstall after each and every change you make. Development
|
||||
mode is also available. Let's see how you can use it.
|
||||
|
||||
In the previous Nix expression the source was fetched from a URL. We can also
|
||||
refer to a local source instead using `src = ./path/to/source/tree;`
|
||||
|
@ -435,7 +554,7 @@ If we create a `shell.nix` file which calls `buildPythonPackage`, and if `src`
|
|||
is a local source, and if the local source has a `setup.py`, then development
|
||||
mode is activated.
|
||||
|
||||
In the following example we create a simple environment that has a Python 3.5
|
||||
In the following example we create a simple environment that has a Python 3.8
|
||||
version of our package in it, as well as its dependencies and other packages we
|
||||
like to have in the environment, all specified with `propagatedBuildInputs`.
|
||||
Indeed, we can just add any package we like to have in our environment to
|
||||
|
@ -443,7 +562,7 @@ Indeed, we can just add any package we like to have in our environment to
|
|||
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
with python35Packages;
|
||||
with python38Packages;
|
||||
|
||||
buildPythonPackage rec {
|
||||
name = "mypackage";
|
||||
|
@ -455,7 +574,6 @@ buildPythonPackage rec {
|
|||
It is important to note that due to how development mode is implemented on Nix
|
||||
it is not possible to have multiple packages simultaneously in development mode.
|
||||
|
||||
|
||||
### Organising your packages
|
||||
|
||||
So far we discussed how you can use Python on Nix, and how you can develop with
|
||||
|
@ -481,11 +599,11 @@ We first create a function that builds `toolz` in `~/path/to/toolz/release.nix`
|
|||
|
||||
buildPythonPackage rec {
|
||||
pname = "toolz";
|
||||
version = "0.7.4";
|
||||
version = "0.10.0";
|
||||
|
||||
src = fetchPypi {
|
||||
inherit pname version;
|
||||
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
|
||||
sha256 = "08fdd5ef7c96480ad11c12d472de21acd32359996f69a5259299b540feba4560";
|
||||
};
|
||||
|
||||
meta = with lib; {
|
||||
|
@ -497,17 +615,17 @@ buildPythonPackage rec {
|
|||
}
|
||||
```
|
||||
|
||||
It takes an argument `buildPythonPackage`.
|
||||
We now call this function using `callPackage` in the definition of our environment
|
||||
It takes an argument `buildPythonPackage`. We now call this function using
|
||||
`callPackage` in the definition of our environment
|
||||
|
||||
```nix
|
||||
with import <nixpkgs> {};
|
||||
|
||||
( let
|
||||
toolz = callPackage /path/to/toolz/release.nix {
|
||||
buildPythonPackage = python35Packages.buildPythonPackage;
|
||||
buildPythonPackage = python38Packages.buildPythonPackage;
|
||||
};
|
||||
in python35.withPackages (ps: [ ps.numpy toolz ])
|
||||
in python38.withPackages (ps: [ ps.numpy toolz ])
|
||||
).env
|
||||
```
|
||||
|
||||
|
@ -515,8 +633,8 @@ Important to remember is that the Python version for which the package is made
|
|||
depends on the `python` derivation that is passed to `buildPythonPackage`. Nix
|
||||
tries to automatically pass arguments when possible, which is why generally you
|
||||
don't explicitly define which `python` derivation should be used. In the above
|
||||
example we use `buildPythonPackage` that is part of the set `python35Packages`,
|
||||
and in this case the `python35` interpreter is automatically used.
|
||||
example we use `buildPythonPackage` that is part of the set `python38Packages`,
|
||||
and in this case the `python38` interpreter is automatically used.
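For instance (a sketch; it assumes a `python37Packages` set exists in your
channel), building the same expression against another interpreter only changes
the argument that is passed in:

```nix
toolz37 = callPackage /path/to/toolz/release.nix {
  buildPythonPackage = python37Packages.buildPythonPackage;
};
```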
|
||||
|
||||
## Reference
|
||||
|
||||
|
@ -548,7 +666,7 @@ Each interpreter has the following attributes:
|
|||
- `buildEnv`. Function to build python interpreter environments with extra packages bundled together. See section *python.buildEnv function* for usage and documentation.
|
||||
- `withPackages`. Simpler interface to `buildEnv`. See section *python.withPackages function* for usage and documentation.
|
||||
- `sitePackages`. Alias for `lib/${libPrefix}/site-packages`.
|
||||
- `executable`. Name of the interpreter executable, e.g. `python3.7`.
|
||||
- `executable`. Name of the interpreter executable, e.g. `python3.8`.
|
||||
- `pkgs`. Set of Python packages for that specific interpreter. The package set can be modified by overriding the interpreter and passing `packageOverrides`.
|
||||
|
||||
### Building packages and applications
|
||||
|
@ -643,7 +761,7 @@ following are specific to `buildPythonPackage`:
|
|||
appears more than once in dependency tree. Default is `true`.
|
||||
* `disabled` ? false: If `true`, package is not built for the particular Python
|
||||
interpreter version.
|
||||
* `dontWrapPythonPrograms ? false`: Skip wrapping of python programs.
|
||||
* `dontWrapPythonPrograms ? false`: Skip wrapping of Python programs.
|
||||
* `permitUserSite ? false`: Skip setting the `PYTHONNOUSERSITE` environment
|
||||
variable in wrapped programs.
|
||||
* `installFlags ? []`: A list of strings. Arguments to be passed to `pip
|
||||
|
@ -662,7 +780,7 @@ following are specific to `buildPythonPackage`:
|
|||
variables which will be available when the binary is run. For example,
|
||||
`makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"]`.
|
||||
* `namePrefix`: Prepends text to `${name}` parameter. In case of libraries, this
|
||||
defaults to `"python3.5-"` for Python 3.5, etc., and in case of applications
|
||||
defaults to `"python3.8-"` for Python 3.8, etc., and in case of applications
|
||||
to `""`.
|
||||
* `pythonPath ? []`: List of packages to be added into `$PYTHONPATH`. Packages
|
||||
in `pythonPath` are not propagated (contrary to `propagatedBuildInputs`).
|
||||
|
@ -730,7 +848,7 @@ Another difference is that `buildPythonPackage` by default prefixes the names of
|
|||
the packages with the version of the interpreter. Because this is irrelevant for
|
||||
applications, the prefix is omitted.
|
||||
|
||||
When packaging a python application with `buildPythonApplication`, it should be
|
||||
When packaging a Python application with `buildPythonApplication`, it should be
|
||||
called with `callPackage` and passed `python` or `pythonPackages` (possibly
|
||||
specifying an interpreter version), like this:
|
||||
|
||||
|
@ -761,7 +879,7 @@ luigi = callPackage ../applications/networking/cluster/luigi { };
|
|||
```
|
||||
|
||||
Since the package is an application, a consumer doesn't need to care about
|
||||
python versions or modules, which is why they don't go in `pythonPackages`.
|
||||
Python versions or modules, which is why they don't go in `pythonPackages`.
|
||||
|
||||
#### `toPythonApplication` function
|
||||
|
||||
|
@ -875,7 +993,7 @@ thus be also written like this:
|
|||
```nix
|
||||
with import <nixpkgs> {};
|
||||
|
||||
(python36.withPackages (ps: [ps.numpy ps.requests])).env
|
||||
(python38.withPackages (ps: [ps.numpy ps.requests])).env
|
||||
```
|
||||
|
||||
In contrast to `python.buildEnv`, `python.withPackages` does not support the
|
||||
|
@ -932,7 +1050,7 @@ pythonPackages.buildPythonPackage {
|
|||
Running `nix-shell` with no arguments should give you the environment in which
|
||||
the package would be built with `nix-build`.
|
||||
|
||||
Shortcut to setup environments with C headers/libraries and python packages:
|
||||
Shortcut to setup environments with C headers/libraries and Python packages:
|
||||
|
||||
```shell
|
||||
nix-shell -p pythonPackages.pyramid zlib libjpeg git
|
||||
|
@ -960,10 +1078,9 @@ has security implications and is relevant for those using Python in a
|
|||
|
||||
When the environment variable `DETERMINISTIC_BUILD` is set, all bytecode will
|
||||
have timestamp 1. The `buildPythonPackage` function sets `DETERMINISTIC_BUILD=1`
|
||||
and [PYTHONHASHSEED=0](https://docs.python.org/3.5/using/cmdline.html#envvar-PYTHONHASHSEED).
|
||||
and [PYTHONHASHSEED=0](https://docs.python.org/3.8/using/cmdline.html#envvar-PYTHONHASHSEED).
|
||||
Both are also exported in `nix-shell`.
|
||||
|
||||
|
||||
### Automatic tests
|
||||
|
||||
It is recommended to test packages as part of the build process.
|
||||
|
@ -976,7 +1093,7 @@ example of such a situation is when `py.test` is used.
|
|||
#### Common issues
|
||||
|
||||
* Non-working tests can often be deselected. By default `buildPythonPackage`
|
||||
runs `python setup.py test`. Most python modules follows the standard test
|
||||
runs `python setup.py test`. Most Python modules follow the standard test
|
||||
protocol where the pytest runner can be used instead. `py.test` supports a
|
||||
`-k` parameter to ignore test methods or classes:
|
||||
|
||||
|
@ -1014,7 +1131,7 @@ with import <nixpkgs> {};
|
|||
packageOverrides = self: super: {
|
||||
pandas = super.pandas.overridePythonAttrs(old: {name="foo";});
|
||||
};
|
||||
in pkgs.python35.override {inherit packageOverrides;};
|
||||
in pkgs.python38.override {inherit packageOverrides;};
|
||||
|
||||
in python.withPackages(ps: [ps.pandas])).env
|
||||
```
|
||||
|
@ -1036,7 +1153,7 @@ with import <nixpkgs> {};
|
|||
packageOverrides = self: super: {
|
||||
scipy = super.scipy_0_17;
|
||||
};
|
||||
in (pkgs.python35.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
|
||||
in (pkgs.python38.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
|
||||
).env
|
||||
```
|
||||
|
||||
|
@ -1049,12 +1166,12 @@ If you want the whole of Nixpkgs to use your modifications, then you can use
|
|||
```nix
|
||||
let
|
||||
pkgs = import <nixpkgs> {};
|
||||
newpkgs = import pkgs.path { overlays = [ (pkgsself: pkgssuper: {
|
||||
python27 = let
|
||||
packageOverrides = self: super: {
|
||||
numpy = super.numpy_1_10;
|
||||
newpkgs = import pkgs.path { overlays = [ (self: super: {
|
||||
python38 = let
|
||||
packageOverrides = python-self: python-super: {
|
||||
numpy = python-super.numpy_1_18;
|
||||
};
|
||||
in pkgssuper.python27.override {inherit packageOverrides;};
|
||||
in super.python38.override {inherit packageOverrides;};
|
||||
} ) ]; };
|
||||
in newpkgs.inkscape
|
||||
```
|
||||
|
@ -1127,14 +1244,14 @@ If you want to create a Python environment for development, then the recommended
|
|||
method is to use `nix-shell`, either with or without the `python.buildEnv`
|
||||
function.
|
||||
|
||||
### How to consume python modules using pip in a virtual environment like I am used to on other Operating Systems?
|
||||
### How to consume Python modules using pip in a virtual environment like I am used to on other Operating Systems?
|
||||
|
||||
While this approach is not very idiomatic from a Nix perspective, it can still be
|
||||
useful when dealing with pre-existing projects or in situations where it's not
|
||||
feasible or desired to write derivations for all required dependencies.
|
||||
|
||||
This is an example of a `default.nix` for a `nix-shell`, which allows you to consume
|
||||
a virtual environment created by `venv`, and install python modules through
|
||||
a virtual environment created by `venv`, and install Python modules through
|
||||
`pip` the traditional way.
|
||||
|
||||
Create this `default.nix` file, together with a `requirements.txt` and simply
|
||||
|
@ -1149,7 +1266,7 @@ in pkgs.mkShell rec {
|
|||
name = "impurePythonEnv";
|
||||
venvDir = "./.venv";
|
||||
buildInputs = [
|
||||
# A python interpreter including the 'venv' module is required to bootstrap
|
||||
# A Python interpreter including the 'venv' module is required to bootstrap
|
||||
# the environment.
|
||||
pythonPackages.python
|
||||
|
||||
|
@ -1163,7 +1280,7 @@ in pkgs.mkShell rec {
|
|||
pythonPackages.requests
|
||||
|
||||
# In this particular example, in order to compile any binary extensions they may
|
||||
# require, the python modules listed in the hypothetical requirements.txt need
|
||||
# require, the Python modules listed in the hypothetical requirements.txt need
|
||||
# the following packages to be installed locally:
|
||||
taglib
|
||||
openssl
|
||||
|
@ -1183,7 +1300,7 @@ in pkgs.mkShell rec {
|
|||
}
|
||||
```
|
||||
|
||||
In case the supplied venvShellHook is insufficient, or when python 2 support is
|
||||
In case the supplied venvShellHook is insufficient, or when Python 2 support is
|
||||
needed, you can define your own shell hook and adapt to your needs like in the
|
||||
following example:
|
||||
|
||||
|
@ -1229,7 +1346,7 @@ in pkgs.mkShell rec {
|
|||
```
|
||||
|
||||
Note that the `pip install` is an imperative action. So every time `nix-shell`
|
||||
is executed it will attempt to download the python modules listed in
|
||||
is executed it will attempt to download the Python modules listed in
|
||||
requirements.txt. However these will be cached locally within the `virtualenv`
|
||||
folder and not downloaded again.
|
||||
|
||||
|
@ -1290,9 +1407,8 @@ self: super: {
|
|||
|
||||
### How to use Intel's MKL with numpy and scipy?
|
||||
|
||||
MKL can be configured using an overlay. See the section “[Using
|
||||
overlays to configure
|
||||
alternatives](#sec-overlays-alternatives-blas-lapack)”.
|
||||
MKL can be configured using an overlay. See the section "[Using overlays to
|
||||
configure alternatives](#sec-overlays-alternatives-blas-lapack)".
|
||||
|
||||
### What inputs do `setup_requires`, `install_requires` and `tests_require` map to?
|
||||
|
||||
|
|
2
third_party/nixpkgs/doc/preface.chapter.md
vendored
|
@ -42,7 +42,7 @@ distributed as soon as all tests for that channel pass, e.g.
|
|||
[this table](https://hydra.nixos.org/job/nixpkgs/trunk/unstable#tabs-constituents)
|
||||
shows the status of tests for the `nixpkgs` channel.
|
||||
|
||||
The tests are conducted by a cluster called [Hydra](http://nixos.org/hydra/),
|
||||
The tests are conducted by a cluster called [Hydra](https://nixos.org/hydra/),
|
||||
which also builds binary packages from the Nix expressions in Nixpkgs for
|
||||
`x86_64-linux`, `i686-linux` and `x86_64-darwin`.
|
||||
The binaries are made available via a [binary cache](https://cache.nixos.org).
|
||||
|
|
4
third_party/nixpkgs/doc/release-notes.xml
vendored
|
@ -286,7 +286,7 @@ export NIX_MIRRORS_sourceforge=http://osdn.dl.sourceforge.net/sourceforge/</prog
|
|||
<note>
|
||||
<para>
|
||||
This release of Nixpkgs requires <link
|
||||
xlink:href='http://nixos.org/releases/nix/nix-0.10/'>Nix 0.10</link> or higher.
|
||||
xlink:href='https://nixos.org/releases/nix/nix-0.10/'>Nix 0.10</link> or higher.
|
||||
</para>
|
||||
</note>
|
||||
|
||||
|
@ -436,7 +436,7 @@ stdenv.mkDerivation {
|
|||
<listitem>
|
||||
<para>
|
||||
Distribution files have been moved to <link
|
||||
xlink:href="http://nixos.org/" />.
|
||||
xlink:href="https://nixos.org/" />.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
|
|
2
third_party/nixpkgs/doc/stdenv/stdenv.xml
vendored
|
@ -145,7 +145,7 @@ genericBuild
|
|||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
GNU Make. It has been patched to provide <quote>nested</quote> output that can be fed into the <command>nix-log2xml</command> command and <command>log2html</command> stylesheet to create a structured, readable output of the build steps performed by Make.
|
||||
GNU Make.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
|
|
2
third_party/nixpkgs/lib/default.nix
vendored
|
@ -141,7 +141,7 @@ let
|
|||
mergeAttrsWithFunc mergeAttrsConcatenateValues
|
||||
mergeAttrsNoOverride mergeAttrByFunc mergeAttrsByFuncDefaults
|
||||
mergeAttrsByFuncDefaultsClean mergeAttrBy
|
||||
fakeSri fakeSha256 fakeSha512
|
||||
fakeHash fakeSha256 fakeSha512
|
||||
nixType imap;
|
||||
inherit (versions)
|
||||
splitVersion;
|
||||
|
|
2
third_party/nixpkgs/lib/deprecated.nix
vendored
|
@ -272,7 +272,7 @@ rec {
|
|||
imap = imap1;
|
||||
|
||||
# Fake hashes. Can be used as hash placeholders, when computing hash ahead isn't trivial
|
||||
fakeSri = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
|
||||
fakeHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
|
||||
fakeSha256 = "0000000000000000000000000000000000000000000000000000000000000000";
|
||||
fakeSha512 = "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000";
|
||||
}
|
||||
|
|
2
third_party/nixpkgs/lib/kernel.nix
vendored
|
@ -14,7 +14,7 @@ with lib;
|
|||
freeform = x: { freeform = x; };
|
||||
|
||||
/*
|
||||
Common patterns/legacy used in common-config/hardened-config.nix
|
||||
Common patterns/legacy used in common-config/hardened/config.nix
|
||||
*/
|
||||
whenHelpers = version: {
|
||||
whenAtLeast = ver: mkIf (versionAtLeast version ver);
|
||||
|
|
4
third_party/nixpkgs/lib/lists.nix
vendored
|
@ -73,8 +73,8 @@ rec {
|
|||
lconcat [ "a" "b" "c" ]
|
||||
=> "zabc"
|
||||
# different types
|
||||
lstrange = foldl (str: int: str + toString (int + 1)) ""
|
||||
strange [ 1 2 3 4 ]
|
||||
lstrange = foldl (str: int: str + toString (int + 1)) "a"
|
||||
lstrange [ 1 2 3 4 ]
|
||||
=> "a2345"
|
||||
*/
|
||||
foldl = op: nul: list:
|
||||
|
|
|
@ -1662,6 +1662,12 @@
|
|||
}
|
||||
];
|
||||
};
|
||||
cyplo = {
|
||||
email = "nixos@cyplo.dev";
|
||||
github = "cyplo";
|
||||
githubId = 217899;
|
||||
name = "Cyryl Płotnicki";
|
||||
};
|
||||
d-goldin = {
|
||||
email = "dgoldin+github@protonmail.ch";
|
||||
github = "d-goldin";
|
||||
|
@ -2038,6 +2044,12 @@
|
|||
githubId = 108501;
|
||||
name = "David Pflug";
|
||||
};
|
||||
dramaturg = {
|
||||
email = "seb@ds.ag";
|
||||
github = "dramaturg";
|
||||
githubId = 472846;
|
||||
name = "Sebastian Krohn";
|
||||
};
|
||||
drets = {
|
||||
email = "dmitryrets@gmail.com";
|
||||
github = "drets";
|
||||
|
@ -2476,7 +2488,7 @@
|
|||
};
|
||||
evils = {
|
||||
email = "evils.devils@protonmail.com";
|
||||
github = "evils-devils";
|
||||
github = "evils";
|
||||
githubId = 30512529;
|
||||
name = "Evils";
|
||||
};
|
||||
|
@ -4019,12 +4031,6 @@
|
|||
fingerprint = "8992 44FC D291 5CA2 0A97 802C 156C 88A5 B0A0 4B2A";
|
||||
}];
|
||||
};
|
||||
kjuvi = {
|
||||
email = "quentin.vaucher@pm.me";
|
||||
github = "kjuvi";
|
||||
githubId = 17534323;
|
||||
name = "Quentin Vaucher";
|
||||
};
|
||||
kkallio = {
|
||||
email = "tierpluspluslists@gmail.com";
|
||||
name = "Karn Kallio";
|
||||
|
@ -4429,6 +4435,16 @@
|
|||
fingerprint = "74F5 E5CC 19D3 B5CB 608F 6124 68FF 81E6 A785 0F49";
|
||||
}];
|
||||
};
|
||||
lourkeur = {
|
||||
name = "Louis Bettens";
|
||||
email = "louis@bettens.info";
|
||||
github = "lourkeur";
|
||||
githubId = 15657735;
|
||||
keys = [{
|
||||
longkeyid = "ed25519/0xDFE1D4A017337E2A";
|
||||
fingerprint = "5B93 9CFA E8FC 4D8F E07A 3AEA DFE1 D4A0 1733 7E2A";
|
||||
}];
|
||||
};
|
||||
luis = {
|
||||
email = "luis.nixos@gmail.com";
|
||||
github = "Luis-Hebendanz";
|
||||
|
@ -4599,6 +4615,12 @@
|
|||
githubId = 2057309;
|
||||
name = "Sergey Sofeychuk";
|
||||
};
|
||||
lynty = {
|
||||
email = "ltdong93+nix@gmail.com";
|
||||
github = "lynty";
|
||||
githubId = 39707188;
|
||||
name = "Lynn Dong";
|
||||
};
|
||||
lyt = {
|
||||
email = "wheatdoge@gmail.com";
|
||||
name = "Tim Liou";
|
||||
|
@ -5054,6 +5076,12 @@
|
|||
githubId = 3269878;
|
||||
name = "Miguel Madrid Mencía";
|
||||
};
|
||||
mindavi = {
|
||||
email = "rol3517@gmail.com";
|
||||
github = "Mindavi";
|
||||
githubId = 9799623;
|
||||
name = "Rick van Schijndel";
|
||||
};
|
||||
minijackson = {
|
||||
email = "minijackson@riseup.net";
|
||||
github = "minijackson";
|
||||
|
@ -5780,6 +5808,12 @@
|
|||
githubId = 15930073;
|
||||
name = "Moritz Scheuren";
|
||||
};
|
||||
pablovsky = {
|
||||
email = "dealberapablo07@gmail.com";
|
||||
github = "pablo1107";
|
||||
githubId = 17091659;
|
||||
name = "Pablo Andres Dealbera";
|
||||
};
|
||||
pacien = {
|
||||
email = "b4gx3q.nixpkgs@pacien.net";
|
||||
github = "pacien";
|
||||
|
@ -5852,6 +5886,16 @@
|
|||
githubId = 131844;
|
||||
name = "Igor Pashev";
|
||||
};
|
||||
patryk27 = {
|
||||
email = "wychowaniec.patryk@gmail.com";
|
||||
github = "Patryk27";
|
||||
githubId = 3395477;
|
||||
name = "Patryk Wychowaniec";
|
||||
keys = [{
|
||||
longkeyid = "rsa4096/0xF62547D075E09767";
|
||||
fingerprint = "196A BFEC 6A1D D1EC 7594 F8D1 F625 47D0 75E0 9767";
|
||||
}];
|
||||
};
|
||||
patternspandemic = {
|
||||
email = "patternspandemic@live.com";
|
||||
github = "patternspandemic";
|
||||
|
@ -7098,6 +7142,12 @@
|
|||
githubId = 1505617;
|
||||
name = "Sean Lee";
|
||||
};
|
||||
SlothOfAnarchy = {
|
||||
email = "slothofanarchy1@gmail.com";
|
||||
github = "SlothOfAnarchy";
|
||||
githubId = 12828415;
|
||||
name = "Michel Weitbrecht";
|
||||
};
|
||||
smakarov = {
|
||||
email = "setser200018@gmail.com";
|
||||
github = "setser";
|
||||
|
@ -7136,6 +7186,12 @@
|
|||
githubId = 602439;
|
||||
name = "Serguei Narojnyi";
|
||||
};
|
||||
snicket2100 = {
|
||||
email = "57048005+snicket2100@users.noreply.github.com";
|
||||
github = "snicket2100";
|
||||
githubId = 57048005;
|
||||
name = "snicket2100";
|
||||
};
|
||||
snyh = {
|
||||
email = "snyh@snyh.org";
|
||||
github = "snyh";
|
||||
|
@ -7598,12 +7654,6 @@
|
|||
githubId = 1141680;
|
||||
name = "Thane Gill";
|
||||
};
|
||||
the-kenny = {
|
||||
email = "moritz@tarn-vedra.de";
|
||||
github = "the-kenny";
|
||||
githubId = 31167;
|
||||
name = "Moritz Ulrich";
|
||||
};
|
||||
thedavidmeister = {
|
||||
email = "thedavidmeister@gmail.com";
|
||||
github = "thedavidmeister";
|
||||
|
@ -7656,12 +7706,24 @@
|
|||
githubId = 7709;
|
||||
name = "Thomaz Leite";
|
||||
};
|
||||
thomasdesr = {
|
||||
email = "git@hive.pw";
|
||||
github = "thomasdesr";
|
||||
githubId = 681004;
|
||||
name = "Thomas Desrosiers";
|
||||
};
|
||||
ThomasMader = {
|
||||
email = "thomas.mader@gmail.com";
|
||||
github = "ThomasMader";
|
||||
githubId = 678511;
|
||||
name = "Thomas Mader";
|
||||
};
|
||||
thomasjm = {
|
||||
email = "tom@codedown.io";
|
||||
github = "thomasjm";
|
||||
githubId = 1634990;
|
||||
name = "Tom McLaughlin";
|
||||
};
|
||||
thoughtpolice = {
|
||||
email = "aseipp@pobox.com";
|
||||
github = "thoughtpolice";
|
||||
|
@ -8317,6 +8379,12 @@
|
|||
githubId = 1297598;
|
||||
name = "Konrad Borowski";
|
||||
};
|
||||
xiorcale = {
|
||||
email = "quentin.vaucher@pm.me";
|
||||
github = "xiorcale";
|
||||
githubId = 17534323;
|
||||
name = "Quentin Vaucher";
|
||||
};
|
||||
xnaveira = {
|
||||
email = "xnaveira@gmail.com";
|
||||
github = "xnaveira";
|
||||
|
|
43
third_party/nixpkgs/maintainers/scripts/build.nix
vendored
Normal file
43
third_party/nixpkgs/maintainers/scripts/build.nix
vendored
Normal file
|
@ -0,0 +1,43 @@
|
|||
{ maintainer }:
|
||||
|
||||
# based on update.nix
|
||||
# nix-build build.nix --argstr maintainer <yourname>
|
||||
|
||||
let
|
||||
pkgs = import ./../../default.nix {};
|
||||
maintainer_ = pkgs.lib.maintainers.${maintainer};
|
||||
packagesWith = cond: return: set:
|
||||
(pkgs.lib.flatten
|
||||
(pkgs.lib.mapAttrsToList
|
||||
(name: pkg:
|
||||
let
|
||||
result = builtins.tryEval
|
||||
(
|
||||
if pkgs.lib.isDerivation pkg && cond name pkg
|
||||
then [ (return name pkg) ]
|
||||
else if pkg.recurseForDerivations or false || pkg.recurseForRelease or false
|
||||
then packagesWith cond return pkg
|
||||
else [ ]
|
||||
);
|
||||
in
|
||||
if result.success then result.value
|
||||
else [ ]
|
||||
)
|
||||
set
|
||||
)
|
||||
);
|
||||
in
|
||||
packagesWith
|
||||
(name: pkg:
|
||||
(
|
||||
if builtins.hasAttr "maintainers" pkg.meta
|
||||
then (
|
||||
if builtins.isList pkg.meta.maintainers
|
||||
then builtins.elem maintainer_ pkg.meta.maintainers
|
||||
else maintainer_ == pkg.meta.maintainers
|
||||
)
|
||||
else false
|
||||
)
|
||||
)
|
||||
(name: pkg: pkg)
|
||||
pkgs
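
A hedged usage sketch for this script, mirroring the nix-build invocation in the comment at the top; the maintainer handle "alice" is made up:

```
# Evaluates to the list of derivations listing "alice" in meta.maintainers.
# Equivalent to: nix-build maintainers/scripts/build.nix --argstr maintainer alice
import ./build.nix { maintainer = "alice"; }
```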
|
|
@ -79,7 +79,7 @@ def cli(jobset):
|
|||
and print a summary of failed builds
|
||||
"""
|
||||
|
||||
url = "http://hydra.nixos.org/jobset/{}".format(jobset)
|
||||
url = "https://hydra.nixos.org/jobset/{}".format(jobset)
|
||||
|
||||
# get the last evaluation
|
||||
click.echo(click.style(
|
||||
|
|
|
@ -9,6 +9,10 @@
|
|||
# TODO: add assert statements
|
||||
|
||||
let
|
||||
pkgs = import ./../../default.nix (if include-overlays then { } else { overlays = []; });
|
||||
|
||||
inherit (pkgs) lib;
|
||||
|
||||
/* Remove duplicate elements from the list based on some extracted value. O(n^2) complexity.
|
||||
*/
|
||||
nubOn = f: list:
|
||||
|
@ -16,43 +20,44 @@ let
|
|||
[]
|
||||
else
|
||||
let
|
||||
x = pkgs.lib.head list;
|
||||
xs = pkgs.lib.filter (p: f x != f p) (pkgs.lib.drop 1 list);
|
||||
x = lib.head list;
|
||||
xs = lib.filter (p: f x != f p) (lib.drop 1 list);
|
||||
in
|
||||
[x] ++ nubOn f xs;
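
For illustration only (the attribute sets below are made-up test data), nubOn decides duplicates by the extracted value and keeps the first occurrence:

```
nubOn (pkg: pkg.pname) [
  { pname = "hello";  version = "2.10"; }
  { pname = "hello";  version = "2.9";  }  # dropped: same pname as the first
  { pname = "cowsay"; version = "3.04"; }
]
# => [ { pname = "hello"; version = "2.10"; } { pname = "cowsay"; version = "3.04"; } ]
```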
|
||||
|
||||
pkgs = import ./../../default.nix (if include-overlays then { } else { overlays = []; });
|
||||
|
||||
packagesWith = cond: return: set:
|
||||
nubOn (pkg: pkg.updateScript)
|
||||
(pkgs.lib.flatten
|
||||
(pkgs.lib.mapAttrsToList
|
||||
(name: pkg:
|
||||
packagesWithPath = relativePath: cond: return: pathContent:
|
||||
let
|
||||
result = builtins.tryEval (
|
||||
if pkgs.lib.isDerivation pkg && cond name pkg
|
||||
then [(return name pkg)]
|
||||
else if pkg.recurseForDerivations or false || pkg.recurseForRelease or false
|
||||
then packagesWith cond return pkg
|
||||
else []
|
||||
);
|
||||
result = builtins.tryEval pathContent;
|
||||
|
||||
dedupResults = lst: nubOn (pkg: pkg.updateScript) (lib.concatLists lst);
|
||||
in
|
||||
if result.success then result.value
|
||||
if result.success then
|
||||
let
|
||||
pathContent = result.value;
|
||||
in
|
||||
if lib.isDerivation pathContent then
|
||||
lib.optional (cond relativePath pathContent) (return relativePath pathContent)
|
||||
else if lib.isAttrs pathContent then
|
||||
# If user explicitly points to an attrSet or it is marked for recursion, we recur.
|
||||
if relativePath == [] || pathContent.recurseForDerivations or false || pathContent.recurseForRelease or false then
|
||||
dedupResults (lib.mapAttrsToList (name: elem: packagesWithPath (relativePath ++ [name]) cond return elem) pathContent)
|
||||
else []
|
||||
)
|
||||
set
|
||||
)
|
||||
);
|
||||
else if lib.isList pathContent then
|
||||
dedupResults (lib.imap0 (i: elem: packagesWithPath (relativePath ++ [i]) cond return elem) pathContent)
|
||||
else []
|
||||
else [];
|
||||
|
||||
packagesWith = packagesWithPath [];
|
||||
|
||||
packagesWithUpdateScriptAndMaintainer = maintainer':
|
||||
let
|
||||
maintainer =
|
||||
if ! builtins.hasAttr maintainer' pkgs.lib.maintainers then
|
||||
if ! builtins.hasAttr maintainer' lib.maintainers then
|
||||
builtins.throw "Maintainer with name `${maintainer'} does not exist in `maintainers/maintainer-list.nix`."
|
||||
else
|
||||
builtins.getAttr maintainer' pkgs.lib.maintainers;
|
||||
builtins.getAttr maintainer' lib.maintainers;
|
||||
in
|
||||
packagesWith (name: pkg: builtins.hasAttr "updateScript" pkg &&
|
||||
packagesWith (relativePath: pkg: builtins.hasAttr "updateScript" pkg &&
|
||||
(if builtins.hasAttr "maintainers" pkg.meta
|
||||
then (if builtins.isList pkg.meta.maintainers
|
||||
then builtins.elem maintainer pkg.meta.maintainers
|
||||
|
@ -61,23 +66,23 @@ let
|
|||
else false
|
||||
)
|
||||
)
|
||||
(name: pkg: pkg)
|
||||
(relativePath: pkg: pkg)
|
||||
pkgs;
|
||||
|
||||
packagesWithUpdateScript = path:
|
||||
let
|
||||
attrSet = pkgs.lib.attrByPath (pkgs.lib.splitString "." path) null pkgs;
|
||||
pathContent = lib.attrByPath (lib.splitString "." path) null pkgs;
|
||||
in
|
||||
if attrSet == null then
|
||||
if pathContent == null then
|
||||
builtins.throw "Attribute path `${path}` does not exists."
|
||||
else
|
||||
packagesWith (name: pkg: builtins.hasAttr "updateScript" pkg)
|
||||
(name: pkg: pkg)
|
||||
attrSet;
|
||||
packagesWith (relativePath: pkg: builtins.hasAttr "updateScript" pkg)
|
||||
(relativePath: pkg: pkg)
|
||||
pathContent;
|
||||
|
||||
packageByName = name:
|
||||
let
|
||||
package = pkgs.lib.attrByPath (pkgs.lib.splitString "." name) null pkgs;
|
||||
package = lib.attrByPath (lib.splitString "." name) null pkgs;
|
||||
in
|
||||
if package == null then
|
||||
builtins.throw "Package with an attribute name `${name}` does not exists."
|
||||
|
@ -125,15 +130,15 @@ let
|
|||
|
||||
packageData = package: {
|
||||
name = package.name;
|
||||
pname = pkgs.lib.getName package;
|
||||
updateScript = map builtins.toString (pkgs.lib.toList package.updateScript);
|
||||
pname = lib.getName package;
|
||||
updateScript = map builtins.toString (lib.toList package.updateScript);
|
||||
};
|
||||
|
||||
packagesJson = pkgs.writeText "packages.json" (builtins.toJSON (map packageData packages));
|
||||
|
||||
optionalArgs =
|
||||
pkgs.lib.optional (max-workers != null) "--max-workers=${max-workers}"
|
||||
++ pkgs.lib.optional (keep-going == "true") "--keep-going";
|
||||
lib.optional (max-workers != null) "--max-workers=${max-workers}"
|
||||
++ lib.optional (keep-going == "true") "--keep-going";
|
||||
|
||||
args = [ packagesJson ] ++ optionalArgs;
|
||||
|
||||
|
|
2
third_party/nixpkgs/nixos/README
vendored
2
third_party/nixpkgs/nixos/README
vendored
|
@ -2,4 +2,4 @@
|
|||
|
||||
NixOS is a Linux distribution based on the purely functional package
|
||||
management system Nix. More information can be found at
|
||||
http://nixos.org/nixos and in the manual in doc/manual.
|
||||
https://nixos.org/nixos and in the manual in doc/manual.
|
||||
|
|
|
@ -11,7 +11,7 @@
|
|||
the package to your clone, and (optionally) submit a patch or pull request to
|
||||
have it accepted into the main Nixpkgs repository. This is described in
|
||||
detail in the <link
|
||||
xlink:href="http://nixos.org/nixpkgs/manual">Nixpkgs
|
||||
xlink:href="https://nixos.org/nixpkgs/manual">Nixpkgs
|
||||
manual</link>. In short, you clone Nixpkgs:
|
||||
<screen>
|
||||
<prompt>$ </prompt>git clone https://github.com/NixOS/nixpkgs
|
||||
|
|
|
@ -14,7 +14,7 @@
|
|||
when managing complex systems. The syntax and semantics of the Nix language
|
||||
are fully described in the
|
||||
<link
|
||||
xlink:href="http://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
|
||||
xlink:href="https://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
|
||||
manual</link>, but here we give a short overview of the most important
|
||||
constructs useful in NixOS configuration files.
|
||||
</para>
|
||||
|
|
|
@ -16,6 +16,17 @@
|
|||
fsType = "ext4";
|
||||
};
|
||||
</programlisting>
|
||||
This will create an entry in <filename>/etc/fstab</filename>, which will
|
||||
generate a corresponding
|
||||
<link xlink:href="https://www.freedesktop.org/software/systemd/man/systemd.mount.html">systemd.mount</link>
|
||||
unit via
|
||||
<link xlink:href="https://www.freedesktop.org/software/systemd/man/systemd-fstab-generator.html">systemd-fstab-generator</link>.
|
||||
The filesystem will be mounted automatically unless
|
||||
<literal>"noauto"</literal> is present in <link
|
||||
linkend="opt-fileSystems._name__.options">options</link>.
|
||||
<literal>"noauto"</literal> filesystems can be mounted explicitly using
|
||||
<command>systemctl</command> e.g. <command>systemctl start
|
||||
data.mount</command>.
|
||||
Mount points are created automatically if they don’t already exist. For
|
||||
<option><link linkend="opt-fileSystems._name__.device">device</link></option>,
|
||||
it’s best to use the topology-independent device aliases in
|
||||
|
|
|
@ -10,7 +10,7 @@
|
|||
expression language. It’s not complete. In particular, there are many other
|
||||
built-in functions. See the
|
||||
<link
|
||||
xlink:href="http://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
|
||||
xlink:href="https://nixos.org/nix/manual/#chap-writing-nix-expressions">Nix
|
||||
manual</link> for the rest.
|
||||
</para>
|
||||
|
||||
|
|
|
@ -18,7 +18,7 @@
|
|||
<link linkend="opt-services.picom.enable">services.picom</link> = {
|
||||
<link linkend="opt-services.picom.enable">enable</link> = true;
|
||||
<link linkend="opt-services.picom.fade">fade</link> = true;
|
||||
<link linkend="opt-services.picom.inactiveOpacity">inactiveOpacity</link> = "0.9";
|
||||
<link linkend="opt-services.picom.inactiveOpacity">inactiveOpacity</link> = 0.9;
|
||||
<link linkend="opt-services.picom.shadow">shadow</link> = true;
|
||||
<link linkend="opt-services.picom.fadeDelta">fadeDelta</link> = 4;
|
||||
};
|
||||
|
|
|
@ -57,7 +57,7 @@
|
|||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/NixOS/nixos-org-configurations/pull/18">
|
||||
Make sure a channel is created at http://nixos.org/channels/. </link>
|
||||
Make sure a channel is created at https://nixos.org/channels/. </link>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
|
|
|
@ -37,7 +37,7 @@
|
|||
|
||||
imports =
|
||||
[ # Use postgresql service from nixos-unstable channel.
|
||||
# sudo nix-channel --add http://nixos.org/channels/nixos-unstable nixos-unstable
|
||||
# sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
|
||||
<nixos-unstable/nixos/modules/services/databases/postgresql.nix>
|
||||
];
|
||||
|
||||
|
|
|
@ -7,7 +7,7 @@
|
|||
<para>
|
||||
NixOS ISO images can be downloaded from the
|
||||
<link
|
||||
xlink:href="http://nixos.org/nixos/download.html">NixOS download
|
||||
xlink:href="https://nixos.org/nixos/download.html">NixOS download
|
||||
page</link>. There are a number of installation options. If you happen to
|
||||
have an optical drive and a spare CD, burning the image to CD and booting
|
||||
from that is probably the easiest option. Most people will need to prepare a
|
||||
|
@ -26,7 +26,7 @@ xlink:href="https://nixos.wiki/wiki/NixOS_Installation_Guide#Making_the_installa
|
|||
<para>
|
||||
Using virtual appliances in Open Virtualization Format (OVF) that can be
|
||||
imported into VirtualBox. These are available from the
|
||||
<link xlink:href="http://nixos.org/nixos/download.html">NixOS download
|
||||
<link xlink:href="https://nixos.org/nixos/download.html">NixOS download
|
||||
page</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
|
|
|
@ -24,16 +24,6 @@
|
|||
</arg>
|
||||
</group>
|
||||
</arg>
|
||||
<arg>
|
||||
<group choice='req'>
|
||||
<arg choice='plain'>
|
||||
<option>--print-build-logs</option>
|
||||
</arg>
|
||||
<arg choice='plain'>
|
||||
<option>-L</option>
|
||||
</arg>
|
||||
</group>
|
||||
</arg>
|
||||
<arg>
|
||||
<arg choice='plain'>
|
||||
<option>-I</option>
|
||||
|
@ -178,12 +168,6 @@
|
|||
<para>Please note that this option may be specified repeatedly.</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
<varlistentry>
|
||||
<term><option>--print-build-logs</option> / <option>-L</option></term>
|
||||
<listitem>
|
||||
<para>Print the full build logs of <command>nix build</command> to stderr.</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
<varlistentry>
|
||||
<term>
|
||||
<option>--root</option>
|
||||
|
|
|
@ -49,7 +49,7 @@
|
|||
<para>
|
||||
Nix has been updated to 1.7
|
||||
(<link
|
||||
xlink:href="http://nixos.org/nix/manual/#ssec-relnotes-1.7">details</link>).
|
||||
xlink:href="https://nixos.org/nix/manual/#ssec-relnotes-1.7">details</link>).
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
in excess of 8,000 Haskell packages. Detailed instructions on how to use
|
||||
that infrastructure can be found in the
|
||||
<link
|
||||
xlink:href="http://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
|
||||
xlink:href="https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
|
||||
Guide to the Haskell Infrastructure</link>. Users migrating from an earlier
|
||||
release may find helpful information below, in the list of
|
||||
backwards-incompatible changes. Furthermore, we distribute 51(!) additional
|
||||
|
@ -555,7 +555,7 @@ nix-env -f "<nixpkgs>" -iA haskellPackages.pandoc
|
|||
the compiler now is the <literal>haskellPackages.ghcWithPackages</literal>
|
||||
function. The
|
||||
<link
|
||||
xlink:href="http://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
|
||||
xlink:href="https://nixos.org/nixpkgs/manual/#users-guide-to-the-haskell-infrastructure">User's
|
||||
Guide to the Haskell Infrastructure</link> provides more information about
|
||||
this subject.
|
||||
</para>
|
||||
|
|
|
@ -54,7 +54,7 @@
|
|||
xlink:href="https://reproducible-builds.org/specs/source-date-epoch/">SOURCE_DATE_EPOCH</envar>
|
||||
to a deterministic value, and Nix has
|
||||
<link
|
||||
xlink:href="http://nixos.org/nix/manual/#ssec-relnotes-1.11">gained
|
||||
xlink:href="https://nixos.org/nix/manual/#ssec-relnotes-1.11">gained
|
||||
an option</link> to repeat a build a number of times to test determinism.
|
||||
An ongoing project, the goal of exact reproducibility is to allow binaries
|
||||
to be verified independently (e.g., a user might only trust binaries that
|
||||
|
|
|
@ -55,6 +55,34 @@
|
|||
The new <varname>virtualisation.containers</varname> module manages configuration shared by the CRI-O and Podman modules.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Declarative Docker containers are renamed from <varname>docker-containers</varname> to <varname>virtualisation.oci-containers.containers</varname>.
|
||||
This is to make it possible to use <literal>podman</literal> instead of <literal>docker</literal>.
|
||||
</para>
|
||||
</listitem>
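
A minimal sketch of the renamed option path (the container name and image are placeholders, not taken from this change):

```
{
  virtualisation.oci-containers.containers.hello = {
    image = "hello-world:latest";
  };
}
```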
|
||||
<listitem>
|
||||
<para>
|
||||
MariaDB has been updated to 10.4, MariaDB Galera to 26.4.
|
||||
Before you upgrade, it would be best to take a backup of your database.
|
||||
For MariaDB Galera Cluster, see <link xlink:href="https://mariadb.com/kb/en/upgrading-from-mariadb-103-to-mariadb-104-with-galera-cluster/">Upgrading
|
||||
from MariaDB 10.3 to MariaDB 10.4 with Galera Cluster</link> instead.
|
||||
Before doing the upgrade read <link xlink:href="https://mariadb.com/kb/en/upgrading-from-mariadb-103-to-mariadb-104/#incompatible-changes-between-103-and-104">Incompatible
|
||||
Changes Between 10.3 and 10.4</link>.
|
||||
After the upgrade you will need to run <literal>mysql_upgrade</literal>.
|
||||
MariaDB 10.4 introduces a number of changes to the authentication process, intended to make things easier and more
|
||||
intuitive. See <link xlink:href="https://mariadb.com/kb/en/authentication-from-mariadb-104/">Authentication from MariaDB 10.4</link>.
|
||||
The unix_socket auth plugin does not use a password, and uses the connecting user's UID instead. When a new MariaDB data directory is initialized, two MariaDB users are
|
||||
created and can be used with the new unix_socket auth plugin, as well as the traditional mysql_native_password plugin: root@localhost and mysql@localhost. To actually use
|
||||
the traditional mysql_native_password plugin method, one must run the following:
|
||||
<programlisting>
|
||||
services.mysql.initialScript = pkgs.writeText "mariadb-init.sql" ''
|
||||
ALTER USER root@localhost IDENTIFIED VIA mysql_native_password USING PASSWORD("verysecret");
|
||||
'';
|
||||
</programlisting>
|
||||
When MariaDB data directory is just upgraded (not initialized), the users are not created or modified.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
|
||||
|
@ -71,7 +99,9 @@
|
|||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para />
|
||||
<para>
|
||||
There is a new <xref linkend="opt-security.doas.enable"/> module that provides <command>doas</command>, a lighter alternative to <command>sudo</command> with many of the same features.
|
||||
</para>
|
||||
</listitem>
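
A minimal sketch of enabling the new module; the extraRules shape (groups/persist) is an assumption about the module's rule syntax, not taken from this diff:

```
{
  security.doas.enable = true;
  security.doas.extraRules = [
    { groups = [ "wheel" ]; persist = true; }
  ];
}
```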
|
||||
</itemizedlist>
|
||||
|
||||
|
@ -90,6 +120,12 @@
|
|||
</para>
|
||||
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
The go-modules builder now uses vendorSha256 instead of modSha256 to pin
|
||||
fetched version data. Using modSha256 currently only produces a warning, but support will be removed in the next release.
|
||||
</para>
|
||||
</listitem>
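
A hedged sketch of the renamed attribute in a buildGoModule call; the package name, owner, and hashes are placeholders, and buildGoModule, fetchFromGitHub and lib are assumed to be in scope:

```
buildGoModule rec {
  pname = "example";
  version = "0.1.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = "example";
    rev = "v${version}";
    sha256 = lib.fakeSha256;
  };

  # Previously: modSha256 = "...";
  vendorSha256 = lib.fakeSha256;
}
```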
|
||||
<listitem>
|
||||
<para>
|
||||
Grafana is now built without support for phantomjs by default. Phantomjs support has been
|
||||
|
@ -227,7 +263,16 @@ php.override {
|
|||
Be aware that backwards state migrations are not supported by Deluge.
|
||||
</para>
|
||||
</listitem>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Add option <literal>services.nginx.enableSandbox</literal> to start the Nginx web server with additional sandbox/hardening options.
|
||||
By default, write access to <literal>services.nginx.stateDir</literal> is allowed. To allow writing to other folders,
|
||||
use <literal>systemd.services.nginx.serviceConfig.ReadWritePaths</literal>
|
||||
<programlisting>
|
||||
systemd.services.nginx.serviceConfig.ReadWritePaths = [ "/var/www" ];
|
||||
</programlisting>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The NixOS options <literal>nesting.clone</literal> and
|
||||
|
@ -271,6 +316,13 @@ php.override {
|
|||
</programlisting>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The Nginx log directory has been moved to <literal>/var/log/nginx</literal>, the cache directory
|
||||
to <literal>/var/cache/nginx</literal>. The option <literal>services.nginx.stateDir</literal> has
|
||||
been removed.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The httpd web server previously started its main process as root
|
||||
|
@ -311,6 +363,24 @@ php.override {
|
|||
<manvolnum>5</manvolnum></citerefentry> for details.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
In the <literal>picom</literal> module, several options that accepted
|
||||
floating point numbers encoded as strings (for example
|
||||
<xref linkend="opt-services.picom.activeOpacity"/>) have been changed
|
||||
to the (relatively) new native <literal>float</literal> type. To migrate
|
||||
your configuration simply remove the quotes around the numbers.
|
||||
</para>
|
||||
</listitem>
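
A small before/after sketch of the migration described above, using one of the affected options:

```
{
  # Before (20.03-style, string-encoded): services.picom.inactiveOpacity = "0.9";
  # After (native float type) — drop the quotes:
  services.picom.inactiveOpacity = 0.9;
}
```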
|
||||
<listitem>
|
||||
<para>
|
||||
When using <literal>buildBazelPackage</literal> from Nixpkgs,
|
||||
<literal>flat</literal> hash mode is now used for dependencies
|
||||
instead of <literal>recursive</literal>. This is to better allow
|
||||
using hashed mirrors where needed. As a result, these hashes
|
||||
will have changed.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
|
||||
|
@ -338,6 +408,11 @@ php.override {
|
|||
the <literal>notmuch.emacs</literal> output.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The default output of <literal>buildGoPackage</literal> is now <literal>$out</literal> instead of <literal>$bin</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
</section>
|
||||
|
|
|
@ -1,135 +0,0 @@
|
|||
<?xml version="1.0"?>
|
||||
|
||||
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
|
||||
|
||||
<xsl:output method='html' encoding="UTF-8"
|
||||
doctype-public="-//W3C//DTD HTML 4.01//EN"
|
||||
doctype-system="http://www.w3.org/TR/html4/strict.dtd" />
|
||||
|
||||
<xsl:template match="logfile">
|
||||
<html>
|
||||
<head>
|
||||
<script type="text/javascript" src="jquery.min.js"></script>
|
||||
<script type="text/javascript" src="jquery-ui.min.js"></script>
|
||||
<script type="text/javascript" src="treebits.js" />
|
||||
<link rel="stylesheet" href="logfile.css" type="text/css" />
|
||||
<title>Log File</title>
|
||||
</head>
|
||||
<body>
|
||||
<h1>VM build log</h1>
|
||||
<p>
|
||||
<a href="javascript:" class="logTreeExpandAll">Expand all</a> |
|
||||
<a href="javascript:" class="logTreeCollapseAll">Collapse all</a>
|
||||
</p>
|
||||
<ul class='toplevel'>
|
||||
<xsl:for-each select='line|nest'>
|
||||
<li>
|
||||
<xsl:apply-templates select='.'/>
|
||||
</li>
|
||||
</xsl:for-each>
|
||||
</ul>
|
||||
|
||||
<xsl:if test=".//*[@image]">
|
||||
<h1>Screenshots</h1>
|
||||
<ul class="vmScreenshots">
|
||||
<xsl:for-each select='.//*[@image]'>
|
||||
<li><a href="{@image}"><xsl:value-of select="@image" /></a></li>
|
||||
</xsl:for-each>
|
||||
</ul>
|
||||
</xsl:if>
|
||||
|
||||
</body>
|
||||
</html>
|
||||
</xsl:template>
|
||||
|
||||
|
||||
<xsl:template match="nest">
|
||||
|
||||
<!-- The tree should be collapsed by default if all children are
|
||||
unimportant or if the header is unimportant. -->
|
||||
<xsl:variable name="collapsed" select="not(./head[@expanded]) and count(.//*[@error]) = 0"/>
|
||||
|
||||
<xsl:variable name="style"><xsl:if test="$collapsed">display: none;</xsl:if></xsl:variable>
|
||||
|
||||
<xsl:if test="line|nest">
|
||||
<a href="javascript:" class="logTreeToggle">
|
||||
<xsl:choose>
|
||||
<xsl:when test="$collapsed"><xsl:text>+</xsl:text></xsl:when>
|
||||
<xsl:otherwise><xsl:text>-</xsl:text></xsl:otherwise>
|
||||
</xsl:choose>
|
||||
</a>
|
||||
<xsl:text> </xsl:text>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:apply-templates select='head'/>
|
||||
|
||||
<!-- Be careful to only generate <ul>s if there are <li>s, otherwise it’s malformed. -->
|
||||
<xsl:if test="line|nest">
|
||||
|
||||
<ul class='nesting' style="{$style}">
|
||||
<xsl:for-each select='line|nest'>
|
||||
|
||||
<!-- Is this the last line? If so, mark it as such so that it
|
||||
can be rendered differently. -->
|
||||
<xsl:variable name="class"><xsl:choose><xsl:when test="position() != last()">line</xsl:when><xsl:otherwise>lastline</xsl:otherwise></xsl:choose></xsl:variable>
|
||||
|
||||
<li class='{$class}'>
|
||||
<span class='lineconn' />
|
||||
<span class='linebody'>
|
||||
<xsl:apply-templates select='.'/>
|
||||
</span>
|
||||
</li>
|
||||
</xsl:for-each>
|
||||
</ul>
|
||||
</xsl:if>
|
||||
|
||||
</xsl:template>
|
||||
|
||||
|
||||
<xsl:template match="head|line">
|
||||
<code>
|
||||
<xsl:if test="@error">
|
||||
<xsl:attribute name="class">errorLine</xsl:attribute>
|
||||
</xsl:if>
|
||||
<xsl:if test="@warning">
|
||||
<xsl:attribute name="class">warningLine</xsl:attribute>
|
||||
</xsl:if>
|
||||
<xsl:if test="@priority = 3">
|
||||
<xsl:attribute name="class">prio3</xsl:attribute>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:if test="@type = 'serial'">
|
||||
<xsl:attribute name="class">serial</xsl:attribute>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:if test="@machine">
|
||||
<xsl:choose>
|
||||
<xsl:when test="@type = 'serial'">
|
||||
<span class="machine"><xsl:value-of select="@machine"/># </span>
|
||||
</xsl:when>
|
||||
<xsl:otherwise>
|
||||
<span class="machine"><xsl:value-of select="@machine"/>: </span>
|
||||
</xsl:otherwise>
|
||||
</xsl:choose>
|
||||
</xsl:if>
|
||||
|
||||
<xsl:choose>
|
||||
<xsl:when test="@image">
|
||||
<a href="{@image}"><xsl:apply-templates/></a>
|
||||
</xsl:when>
|
||||
<xsl:otherwise>
|
||||
<xsl:apply-templates/>
|
||||
</xsl:otherwise>
|
||||
</xsl:choose>
|
||||
</code>
|
||||
</xsl:template>
|
||||
|
||||
|
||||
<xsl:template match="storeref">
|
||||
<em class='storeref'>
|
||||
<span class='popup'><xsl:apply-templates/></span>
|
||||
<span class='elided'>/...</span><xsl:apply-templates select='name'/><xsl:apply-templates select='path'/>
|
||||
</em>
|
||||
</xsl:template>
|
||||
|
||||
</xsl:stylesheet>
|
|
@ -1,129 +0,0 @@
|
|||
body {
|
||||
font-family: sans-serif;
|
||||
background: white;
|
||||
}
|
||||
|
||||
h1
|
||||
{
|
||||
color: #005aa0;
|
||||
font-size: 180%;
|
||||
}
|
||||
|
||||
a {
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
|
||||
ul.nesting, ul.toplevel {
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
ul.toplevel {
|
||||
list-style-type: none;
|
||||
}
|
||||
|
||||
.line, .head {
|
||||
padding-top: 0em;
|
||||
}
|
||||
|
||||
ul.nesting li.line, ul.nesting li.lastline {
|
||||
position: relative;
|
||||
list-style-type: none;
|
||||
}
|
||||
|
||||
ul.nesting li.line {
|
||||
padding-left: 2.0em;
|
||||
}
|
||||
|
||||
ul.nesting li.lastline {
|
||||
padding-left: 2.1em; /* for the 0.1em border-left in .lastline > .lineconn */
|
||||
}
|
||||
|
||||
li.line {
|
||||
border-left: 0.1em solid #6185a0;
|
||||
}
|
||||
|
||||
li.line > span.lineconn, li.lastline > span.lineconn {
|
||||
position: absolute;
|
||||
height: 0.65em;
|
||||
left: 0em;
|
||||
width: 1.5em;
|
||||
border-bottom: 0.1em solid #6185a0;
|
||||
}
|
||||
|
||||
li.lastline > span.lineconn {
|
||||
border-left: 0.1em solid #6185a0;
|
||||
}
|
||||
|
||||
|
||||
em.storeref {
|
||||
color: #500000;
|
||||
position: relative;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
em.storeref:hover {
|
||||
background-color: #eeeeee;
|
||||
}
|
||||
|
||||
*.popup {
|
||||
display: none;
|
||||
/* background: url('http://losser.st-lab.cs.uu.nl/~mbravenb/menuback.png') repeat; */
|
||||
background: #ffffcd;
|
||||
border: solid #555555 1px;
|
||||
position: absolute;
|
||||
top: 0em;
|
||||
left: 0em;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
z-index: 100;
|
||||
}
|
||||
|
||||
em.storeref:hover span.popup {
|
||||
display: inline;
|
||||
width: 40em;
|
||||
}
|
||||
|
||||
|
||||
.logTreeToggle {
|
||||
text-decoration: none;
|
||||
font-family: monospace;
|
||||
font-size: larger;
|
||||
}
|
||||
|
||||
.errorLine {
|
||||
color: #ff0000;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.warningLine {
|
||||
color: darkorange;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.prio3 {
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
code {
|
||||
white-space: pre-wrap;
|
||||
}
|
||||
|
||||
.serial {
|
||||
color: #56115c;
|
||||
}
|
||||
|
||||
.machine {
|
||||
color: #002399;
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
ul.vmScreenshots {
|
||||
padding-left: 1em;
|
||||
}
|
||||
|
||||
ul.vmScreenshots li {
|
||||
font-family: monospace;
|
||||
list-style: square;
|
||||
}
|
|
@ -143,7 +143,7 @@ class Logger:
|
|||
self.logfile = os.environ.get("LOGFILE", "/dev/null")
|
||||
self.logfile_handle = codecs.open(self.logfile, "wb")
|
||||
self.xml = XMLGenerator(self.logfile_handle, encoding="utf-8")
|
||||
self.queue: "Queue[Dict[str, str]]" = Queue(1000)
|
||||
self.queue: "Queue[Dict[str, str]]" = Queue()
|
||||
|
||||
self.xml.startDocument()
|
||||
self.xml.startElement("logfile", attrs={})
|
||||
|
@ -369,7 +369,7 @@ class Machine:
|
|||
q = q.replace("'", "\\'")
|
||||
return self.execute(
|
||||
(
|
||||
"su -l {} -c "
|
||||
"su -l {} --shell /bin/sh -c "
|
||||
"$'XDG_RUNTIME_DIR=/run/user/`id -u` "
|
||||
"systemctl --user {}'"
|
||||
).format(user, q)
|
||||
|
@ -391,11 +391,11 @@ class Machine:
|
|||
def execute(self, command: str) -> Tuple[int, str]:
|
||||
self.connect()
|
||||
|
||||
out_command = "( {} ); echo '|!EOF' $?\n".format(command)
|
||||
out_command = "( {} ); echo '|!=EOF' $?\n".format(command)
|
||||
self.shell.send(out_command.encode())
|
||||
|
||||
output = ""
|
||||
status_code_pattern = re.compile(r"(.*)\|\!EOF\s+(\d+)")
|
||||
status_code_pattern = re.compile(r"(.*)\|\!=EOF\s+(\d+)")
|
||||
|
||||
while True:
|
||||
chunk = self.shell.recv(4096).decode(errors="ignore")
|
||||
|
|
|
@ -1,30 +0,0 @@
|
|||
$(document).ready(function() {
|
||||
|
||||
/* When a toggle is clicked, show or hide the subtree. */
|
||||
$(".logTreeToggle").click(function() {
|
||||
if ($(this).siblings("ul:hidden").length != 0) {
|
||||
$(this).siblings("ul").show();
|
||||
$(this).text("-");
|
||||
} else {
|
||||
$(this).siblings("ul").hide();
|
||||
$(this).text("+");
|
||||
}
|
||||
});
|
||||
|
||||
/* Implementation of the expand all link. */
|
||||
$(".logTreeExpandAll").click(function() {
|
||||
$(".logTreeToggle", $(this).parent().siblings(".toplevel")).map(function() {
|
||||
$(this).siblings("ul").show();
|
||||
$(this).text("-");
|
||||
});
|
||||
});
|
||||
|
||||
/* Implementation of the collapse all link. */
|
||||
$(".logTreeCollapseAll").click(function() {
|
||||
$(".logTreeToggle", $(this).parent().siblings(".toplevel")).map(function() {
|
||||
$(this).siblings("ul").hide();
|
||||
$(this).text("+");
|
||||
});
|
||||
});
|
||||
|
||||
});
|
24
third_party/nixpkgs/nixos/lib/testing-python.nix
vendored
24
third_party/nixpkgs/nixos/lib/testing-python.nix
vendored
|
@ -10,11 +10,7 @@
|
|||
with import ./build-vms.nix { inherit system pkgs minimal extraConfigurations; };
|
||||
with pkgs;
|
||||
|
||||
let
|
||||
jquery-ui = callPackage ./testing/jquery-ui.nix { };
|
||||
jquery = callPackage ./testing/jquery.nix { };
|
||||
|
||||
in rec {
|
||||
rec {
|
||||
|
||||
inherit pkgs;
|
||||
|
||||
|
@ -62,25 +58,11 @@ in rec {
|
|||
|
||||
requiredSystemFeatures = [ "kvm" "nixos-test" ];
|
||||
|
||||
buildInputs = [ libxslt ];
|
||||
|
||||
buildCommand =
|
||||
''
|
||||
mkdir -p $out/nix-support
|
||||
mkdir -p $out
|
||||
|
||||
LOGFILE=$out/log.xml tests='exec(os.environ["testScript"])' ${driver}/bin/nixos-test-driver
|
||||
|
||||
# Generate a pretty-printed log.
|
||||
xsltproc --output $out/log.html ${./test-driver/log2html.xsl} $out/log.xml
|
||||
ln -s ${./test-driver/logfile.css} $out/logfile.css
|
||||
ln -s ${./test-driver/treebits.js} $out/treebits.js
|
||||
ln -s ${jquery}/js/jquery.min.js $out/
|
||||
ln -s ${jquery}/js/jquery.js $out/
|
||||
ln -s ${jquery-ui}/js/jquery-ui.min.js $out/
|
||||
ln -s ${jquery-ui}/js/jquery-ui.js $out/
|
||||
|
||||
touch $out/nix-support/hydra-build-products
|
||||
echo "report testlog $out log.html" >> $out/nix-support/hydra-build-products
|
||||
LOGFILE=/dev/null tests='exec(os.environ["testScript"])' ${driver}/bin/nixos-test-driver
|
||||
|
||||
for i in */xchg/coverage-data; do
|
||||
mkdir -p $out/coverage-data
|
||||
|
|
22
third_party/nixpkgs/nixos/lib/testing.nix
vendored
22
third_party/nixpkgs/nixos/lib/testing.nix
vendored
|
@ -10,11 +10,7 @@
|
|||
with import ./build-vms.nix { inherit system pkgs minimal extraConfigurations; };
|
||||
with pkgs;
|
||||
|
||||
let
|
||||
jquery-ui = callPackage ./testing/jquery-ui.nix { };
|
||||
jquery = callPackage ./testing/jquery.nix { };
|
||||
|
||||
in rec {
|
||||
rec {
|
||||
|
||||
inherit pkgs;
|
||||
|
||||
|
@ -58,23 +54,11 @@ in rec {
|
|||
|
||||
requiredSystemFeatures = [ "kvm" "nixos-test" ];
|
||||
|
||||
buildInputs = [ libxslt ];
|
||||
|
||||
buildCommand =
|
||||
''
|
||||
mkdir -p $out/nix-support
|
||||
mkdir -p $out
|
||||
|
||||
LOGFILE=$out/log.xml tests='eval $ENV{testScript}; die $@ if $@;' ${driver}/bin/nixos-test-driver
|
||||
|
||||
# Generate a pretty-printed log.
|
||||
xsltproc --output $out/log.html ${./test-driver/log2html.xsl} $out/log.xml
|
||||
ln -s ${./test-driver/logfile.css} $out/logfile.css
|
||||
ln -s ${./test-driver/treebits.js} $out/treebits.js
|
||||
ln -s ${jquery}/js/jquery.min.js $out/
|
||||
ln -s ${jquery-ui}/js/jquery-ui.min.js $out/
|
||||
|
||||
touch $out/nix-support/hydra-build-products
|
||||
echo "report testlog $out log.html" >> $out/nix-support/hydra-build-products
|
||||
LOGFILE=/dev/null tests='eval $ENV{testScript}; die $@ if $@;' ${driver}/bin/nixos-test-driver
|
||||
|
||||
for i in */xchg/coverage-data; do
|
||||
mkdir -p $out/coverage-data
|
||||
|
|
|
@ -1,24 +0,0 @@
|
|||
{ stdenv, fetchurl, unzip }:
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
name = "jquery-ui-1.11.4";
|
||||
|
||||
src = fetchurl {
|
||||
url = "https://jqueryui.com/resources/download/${name}.zip";
|
||||
sha256 = "0ciyaj1acg08g8hpzqx6whayq206fvf4whksz2pjgxlv207lqgjh";
|
||||
};
|
||||
|
||||
buildInputs = [ unzip ];
|
||||
|
||||
installPhase =
|
||||
''
|
||||
mkdir -p "$out/js"
|
||||
cp -rv . "$out/js"
|
||||
'';
|
||||
|
||||
meta = {
|
||||
homepage = "https://jqueryui.com/";
|
||||
description = "A library of JavaScript widgets and effects";
|
||||
platforms = stdenv.lib.platforms.all;
|
||||
};
|
||||
}
|
36
third_party/nixpkgs/nixos/lib/testing/jquery.nix
vendored
36
third_party/nixpkgs/nixos/lib/testing/jquery.nix
vendored
|
@ -1,36 +0,0 @@
|
|||
{ stdenv, fetchurl, compressed ? true }:
|
||||
|
||||
with stdenv.lib;
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
name = "jquery-1.11.3";
|
||||
|
||||
src = if compressed then
|
||||
fetchurl {
|
||||
url = "http://code.jquery.com/${name}.min.js";
|
||||
sha256 = "1f4glgxxn3jnvry3dpzmazj3207baacnap5w20gr2xlk789idfgc";
|
||||
}
|
||||
else
|
||||
fetchurl {
|
||||
url = "http://code.jquery.com/${name}.js";
|
||||
sha256 = "1v956yf5spw0156rni5z77hzqwmby7ajwdcd6mkhb6zvl36awr90";
|
||||
};
|
||||
|
||||
dontUnpack = true;
|
||||
|
||||
installPhase =
|
||||
''
|
||||
mkdir -p "$out/js"
|
||||
cp -v "$src" "$out/js/jquery.js"
|
||||
${optionalString compressed ''
|
||||
(cd "$out/js" && ln -s jquery.js jquery.min.js)
|
||||
''}
|
||||
'';
|
||||
|
||||
meta = with stdenv.lib; {
|
||||
description = "JavaScript library designed to simplify the client-side scripting of HTML";
|
||||
homepage = "http://jquery.com/";
|
||||
license = licenses.mit;
|
||||
platforms = platforms.all;
|
||||
};
|
||||
}
|
|
@ -244,6 +244,10 @@ in
|
|||
if cfg.daemon.enable then nss_pam_ldapd else nss_ldap
|
||||
);
|
||||
|
||||
system.nssDatabases.group = optional cfg.nsswitch "ldap";
|
||||
system.nssDatabases.passwd = optional cfg.nsswitch "ldap";
|
||||
system.nssDatabases.shadow = optional cfg.nsswitch "ldap";
|
||||
|
||||
users = mkIf cfg.daemon.enable {
|
||||
groups.nslcd = {
|
||||
gid = config.ids.gids.nslcd;
|
||||
|
|
|
@ -4,42 +4,7 @@
|
|||
|
||||
with lib;
|
||||
|
||||
let
|
||||
|
||||
# only with nscd up and running we can load NSS modules that are not integrated in NSS
|
||||
canLoadExternalModules = config.services.nscd.enable;
|
||||
myhostname = canLoadExternalModules;
|
||||
mymachines = canLoadExternalModules;
|
||||
# XXX Move these to their respective modules
|
||||
nssmdns = canLoadExternalModules && config.services.avahi.nssmdns;
|
||||
nsswins = canLoadExternalModules && config.services.samba.nsswins;
|
||||
ldap = canLoadExternalModules && (config.users.ldap.enable && config.users.ldap.nsswitch);
|
||||
resolved = canLoadExternalModules && config.services.resolved.enable;
|
||||
|
||||
hostArray = mkMerge [
|
||||
(mkBefore [ "files" ])
|
||||
(mkIf mymachines [ "mymachines" ])
|
||||
(mkIf nssmdns [ "mdns_minimal [NOTFOUND=return]" ])
|
||||
(mkIf nsswins [ "wins" ])
|
||||
(mkIf resolved [ "resolve [!UNAVAIL=return]" ])
|
||||
(mkAfter [ "dns" ])
|
||||
(mkIf nssmdns (mkOrder 1501 [ "mdns" ])) # 1501 to ensure it's after dns
|
||||
(mkIf myhostname (mkOrder 1600 [ "myhostname" ])) # 1600 to ensure it's always the last
|
||||
];
|
||||
|
||||
passwdArray = mkMerge [
|
||||
(mkBefore [ "files" ])
|
||||
(mkIf ldap [ "ldap" ])
|
||||
(mkIf mymachines [ "mymachines" ])
|
||||
(mkIf canLoadExternalModules (mkAfter [ "systemd" ]))
|
||||
];
|
||||
|
||||
shadowArray = mkMerge [
|
||||
(mkBefore [ "files" ])
|
||||
(mkIf ldap [ "ldap" ])
|
||||
];
|
||||
|
||||
in {
|
||||
{
|
||||
options = {
|
||||
|
||||
# NSS modules. Hacky!
|
||||
|
@ -130,14 +95,11 @@ in {
|
|||
config = {
|
||||
assertions = [
|
||||
{
|
||||
# generic catch if the NixOS module adding to nssModules does not prevent it with specific message.
|
||||
assertion = config.system.nssModules.path != "" -> canLoadExternalModules;
|
||||
message = "Loading NSS modules from path ${config.system.nssModules.path} requires nscd being enabled.";
|
||||
}
|
||||
{
|
||||
# resolved does not need to add to nssModules, therefore needs an extra assertion
|
||||
assertion = resolved -> canLoadExternalModules;
|
||||
message = "Loading systemd-resolved's nss-resolve NSS module requires nscd being enabled.";
|
||||
# Prevent users from disabling nscd while nssModules is set.
|
||||
# If disabling nscd is really necessary, it's still possible to opt out
|
||||
# by forcing config.system.nssModules to [].
|
||||
assertion = config.system.nssModules.path != "" -> config.services.nscd.enable;
|
||||
message = "Loading NSS modules from system.nssModules (${config.system.nssModules.path}), requires services.nscd.enable being set to true.";
|
||||
}
|
||||
];
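
A sketch of the opt-out mentioned in the comment above: keep nscd disabled and force the NSS module list empty so the assertion is satisfied (illustrative configuration, not part of this change):

```
{ lib, ... }:
{
  services.nscd.enable = false;
  system.nssModules = lib.mkForce [];
}
```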
|
||||
|
||||
|
@ -158,18 +120,14 @@ in {
|
|||
'';
|
||||
|
||||
system.nssDatabases = {
|
||||
passwd = passwdArray;
|
||||
group = passwdArray;
|
||||
shadow = shadowArray;
|
||||
hosts = hostArray;
|
||||
passwd = mkBefore [ "files" ];
|
||||
group = mkBefore [ "files" ];
|
||||
shadow = mkBefore [ "files" ];
|
||||
hosts = mkMerge [
|
||||
(mkBefore [ "files" ])
|
||||
(mkAfter [ "dns" ])
|
||||
];
|
||||
services = mkBefore [ "files" ];
|
||||
};
|
||||
|
||||
# Systemd provides nss-myhostname to ensure that our hostname
|
||||
# always resolves to a valid IP address. It returns all locally
|
||||
# configured IP addresses, or ::1 and 127.0.0.2 as
|
||||
# fallbacks. Systemd also provides nss-mymachines to return IP
|
||||
# addresses of local containers.
|
||||
system.nssModules = (optionals canLoadExternalModules [ config.systemd.package.out ]);
|
||||
};
|
||||
}
|
||||
|
|
|
@ -51,6 +51,7 @@ in {
|
|||
rtlwifi_new-firmware
|
||||
zd1211fw
|
||||
alsa-firmware
|
||||
sof-firmware
|
||||
openelec-dvb-firmware
|
||||
] ++ optional (pkgs.stdenv.hostPlatform.isAarch32 || pkgs.stdenv.hostPlatform.isAarch64) raspberrypiWirelessFirmware
|
||||
++ optionals (versionOlder config.boot.kernelPackages.kernel.version "4.13") [
|
||||
|
|
|
@ -19,7 +19,7 @@ in {
|
|||
base = mkOption {
|
||||
default = "${config.boot.kernelPackages.kernel}/dtbs";
|
||||
defaultText = "\${config.boot.kernelPackages.kernel}/dtbs";
|
||||
example = literalExample "pkgs.deviceTree_rpi";
|
||||
example = literalExample "pkgs.device-tree_rpi";
|
||||
type = types.path;
|
||||
description = ''
|
||||
The package containing the base device-tree (.dtb) to boot. Contains
|
||||
|
@ -30,7 +30,7 @@ in {
|
|||
overlays = mkOption {
|
||||
default = [];
|
||||
example = literalExample
|
||||
"[\"\${pkgs.deviceTree_rpi.overlays}/w1-gpio.dtbo\"]";
|
||||
"[\"\${pkgs.device-tree_rpi.overlays}/w1-gpio.dtbo\"]";
|
||||
type = types.listOf types.path;
|
||||
description = ''
|
||||
A path containing device tree overlays (.dtbo) to be applied to all
|
||||
|
|
|
@ -34,10 +34,12 @@ let
|
|||
enabled = nvidia_x11 != null;
|
||||
|
||||
cfg = config.hardware.nvidia;
|
||||
|
||||
pCfg = cfg.prime;
|
||||
syncCfg = pCfg.sync;
|
||||
offloadCfg = pCfg.offload;
|
||||
primeEnabled = syncCfg.enable || offloadCfg.enable;
|
||||
nvidiaPersistencedEnabled = cfg.nvidiaPersistenced;
|
||||
in
|
||||
|
||||
{
|
||||
|
@ -50,6 +52,15 @@ in
|
|||
];
|
||||
|
||||
options = {
|
||||
hardware.nvidia.powerManagement.enable = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Experimental power management through systemd. For more information, see
|
||||
the NVIDIA docs, Chapter 21: Configuring Power Management Support.
|
||||
'';
|
||||
};
|
||||
|
||||
hardware.nvidia.modesetting.enable = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
|
@ -129,6 +140,15 @@ in
|
|||
<option>hardware.nvidia.prime.intelBusId</option>).
|
||||
'';
|
||||
};
|
||||
|
||||
hardware.nvidia.nvidiaPersistenced = mkOption {
|
||||
default = false;
|
||||
type = types.bool;
|
||||
description = ''
|
||||
Update for NVIDIA GPU headless mode, i.e. nvidia-persistenced. It ensures all
|
||||
GPUs stay awake even during headless mode.
|
||||
'';
|
||||
};
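
An illustrative configuration using the two new options added here; both default to false:

```
{
  hardware.nvidia.powerManagement.enable = true;
  hardware.nvidia.nvidiaPersistenced = true;
}
```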
|
||||
};
|
||||
|
||||
config = mkIf enabled {
|
||||
|
@ -215,6 +235,46 @@ in
|
|||
environment.systemPackages = [ nvidia_x11.bin nvidia_x11.settings ]
|
||||
++ filter (p: p != null) [ nvidia_x11.persistenced ];
|
||||
|
||||
systemd.packages = optional cfg.powerManagement.enable nvidia_x11.out;
|
||||
|
||||
systemd.services = let
|
||||
baseNvidiaService = state: {
|
||||
description = "NVIDIA system ${state} actions";
|
||||
|
||||
path = with pkgs; [ kbd ];
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
ExecStart = "${nvidia_x11.out}/bin/nvidia-sleep.sh '${state}'";
|
||||
};
|
||||
};
|
||||
|
||||
nvidiaService = sleepState: (baseNvidiaService sleepState) // {
|
||||
before = [ "systemd-${sleepState}.service" ];
|
||||
requiredBy = [ "systemd-${sleepState}.service" ];
|
||||
};
|
||||
|
||||
services = (builtins.listToAttrs (map (t: nameValuePair "nvidia-${t}" (nvidiaService t)) ["hibernate" "suspend"]))
|
||||
// {
|
||||
nvidia-resume = (baseNvidiaService "resume") // {
|
||||
after = [ "systemd-suspend.service" "systemd-hibernate.service" ];
|
||||
requiredBy = [ "systemd-suspend.service" "systemd-hibernate.service" ];
|
||||
};
|
||||
};
|
||||
in optionalAttrs cfg.powerManagement.enable services
|
||||
// optionalAttrs nvidiaPersistencedEnabled {
|
||||
"nvidia-persistenced" = mkIf nvidiaPersistencedEnabled {
|
||||
description = "NVIDIA Persistence Daemon";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
serviceConfig = {
|
||||
Type = "forking";
|
||||
Restart = "always";
|
||||
PIDFile = "/var/run/nvidia-persistenced/nvidia-persistenced.pid";
|
||||
ExecStart = "${nvidia_x11.persistenced}/bin/nvidia-persistenced --verbose";
|
||||
ExecStopPost = "${pkgs.coreutils}/bin/rm -rf /var/run/nvidia-persistenced";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
systemd.tmpfiles.rules = optional config.virtualisation.docker.enableNvidia
|
||||
"L+ /run/nvidia-docker/bin - - - - ${nvidia_x11.bin}/origBin"
|
||||
++ optional (nvidia_x11.persistenced != null && config.virtualisation.docker.enableNvidia)
|
||||
|
@ -227,7 +287,8 @@ in
|
|||
optionals config.services.xserver.enable [ "nvidia" "nvidia_modeset" "nvidia_drm" ];
|
||||
|
||||
# If requested enable modesetting via kernel parameter.
|
||||
boot.kernelParams = optional (offloadCfg.enable || cfg.modesetting.enable) "nvidia-drm.modeset=1";
|
||||
boot.kernelParams = optional (offloadCfg.enable || cfg.modesetting.enable) "nvidia-drm.modeset=1"
|
||||
++ optional cfg.powerManagement.enable "nvidia.NVreg_PreserveVideoMemoryAllocations=1";
|
||||
|
||||
# Create /dev/nvidia-uvm when the nvidia-uvm module is loaded.
|
||||
services.udev.extraRules =
|
||||
|
|
|
@ -15,7 +15,6 @@ mountPoint=/mnt
|
|||
channelPath=
|
||||
system=
|
||||
verbosity=()
|
||||
buildLogs=
|
||||
|
||||
while [ "$#" -gt 0 ]; do
|
||||
i="$1"; shift 1
|
||||
|
@ -60,9 +59,6 @@ while [ "$#" -gt 0 ]; do
|
|||
-v*|--verbose)
|
||||
verbosity+=("$i")
|
||||
;;
|
||||
-L|--print-build-logs)
|
||||
buildLogs="$i"
|
||||
;;
|
||||
*)
|
||||
echo "$0: unknown option \`$i'"
|
||||
exit 1
|
||||
|
@ -91,8 +87,11 @@ if [[ ! -e $NIXOS_CONFIG && -z $system ]]; then
|
|||
fi
|
||||
|
||||
# A place to drop temporary stuff.
|
||||
tmpdir="$(mktemp -d -p $mountPoint)"
|
||||
trap "rm -rf $tmpdir" EXIT
|
||||
tmpdir="$(mktemp -d)"
|
||||
|
||||
# store temporary files on target filesystem by default
|
||||
export TMPDIR=${TMPDIR:-$tmpdir}
|
||||
|
||||
sub="auto?trusted=1"
|
||||
|
||||
|
@ -100,9 +99,9 @@ sub="auto?trusted=1"
|
|||
if [[ -z $system ]]; then
|
||||
echo "building the configuration in $NIXOS_CONFIG..."
|
||||
outLink="$tmpdir/system"
|
||||
nix build --out-link "$outLink" --store "$mountPoint" "${extraBuildFlags[@]}" \
|
||||
nix-build --out-link "$outLink" --store "$mountPoint" "${extraBuildFlags[@]}" \
|
||||
--extra-substituters "$sub" \
|
||||
-f '<nixpkgs/nixos>' system -I "nixos-config=$NIXOS_CONFIG" ${verbosity[@]} ${buildLogs}
|
||||
'<nixpkgs/nixos>' -A system -I "nixos-config=$NIXOS_CONFIG" ${verbosity[@]}
|
||||
system=$(readlink -f $outLink)
|
||||
fi
|
||||
|
||||
|
|
|
@ -200,6 +200,7 @@
|
|||
./security/rtkit.nix
|
||||
./security/wrappers/default.nix
|
||||
./security/sudo.nix
|
||||
./security/doas.nix
|
||||
./security/systemd-confinement.nix
|
||||
./security/tpm2.nix
|
||||
./services/admin/oxidized.nix
|
||||
|
@ -791,6 +792,7 @@
|
|||
./services/security/nginx-sso.nix
|
||||
./services/security/oauth2_proxy.nix
|
||||
./services/security/oauth2_proxy_nginx.nix
|
||||
./services/security/privacyidea.nix
|
||||
./services/security/physlock.nix
|
||||
./services/security/shibboleth-sp.nix
|
||||
./services/security/sks.nix
|
||||
|
@ -984,9 +986,9 @@
|
|||
./virtualisation/container-config.nix
|
||||
./virtualisation/containers.nix
|
||||
./virtualisation/nixos-containers.nix
|
||||
./virtualisation/oci-containers.nix
|
||||
./virtualisation/cri-o.nix
|
||||
./virtualisation/docker.nix
|
||||
./virtualisation/docker-containers.nix
|
||||
./virtualisation/ecs-agent.nix
|
||||
./virtualisation/libvirtd.nix
|
||||
./virtualisation/lxc.nix
|
||||
|
|
|
@ -5,8 +5,8 @@ let
|
|||
cfg = config.programs.singularity;
|
||||
singularity = pkgs.singularity.overrideAttrs (attrs : {
|
||||
installPhase = attrs.installPhase + ''
|
||||
mv $bin/libexec/singularity/bin/starter-suid $bin/libexec/singularity/bin/starter-suid.orig
|
||||
ln -s /run/wrappers/bin/singularity-suid $bin/libexec/singularity/bin/starter-suid
|
||||
mv $out/libexec/singularity/bin/starter-suid $out/libexec/singularity/bin/starter-suid.orig
|
||||
ln -s /run/wrappers/bin/singularity-suid $out/libexec/singularity/bin/starter-suid
|
||||
'';
|
||||
});
|
||||
in {
|
||||
|
|
|
@ -75,7 +75,7 @@ in
|
|||
};
|
||||
|
||||
link = mkOption {
|
||||
default = "http://planet.nixos.org";
|
||||
default = "https://planet.nixos.org";
|
||||
type = types.str;
|
||||
description = ''
|
||||
Link to the main page.
|
||||
|
|
|
@ -87,13 +87,13 @@ let
|
|||
default = {};
|
||||
example = literalExample ''
|
||||
{
|
||||
"example.org" = "/srv/http/nginx";
|
||||
"example.org" = null;
|
||||
"mydomain.org" = null;
|
||||
}
|
||||
'';
|
||||
description = ''
|
||||
A list of extra domain names, which are included in the one certificate to be issued, with their
|
||||
own server roots if needed.
|
||||
A list of extra domain names, which are included in the one certificate to be issued.
|
||||
Setting a distinct server root is deprecated and not functional in 20.03+
|
||||
'';
|
||||
};
|
||||
|
||||
|
@ -250,7 +250,7 @@ in
|
|||
"example.com" = {
|
||||
webroot = "/var/www/challenges/";
|
||||
email = "foo@example.com";
|
||||
extraDomains = { "www.example.com" = null; "foo.example.com" = "/var/www/foo/"; };
|
||||
extraDomains = { "www.example.com" = null; "foo.example.com" = null; };
|
||||
};
|
||||
"bar.example.com" = {
|
||||
webroot = "/var/www/challenges/";
|
||||
|
|
265
third_party/nixpkgs/nixos/modules/security/acme.xml
vendored
265
third_party/nixpkgs/nixos/modules/security/acme.xml
vendored
|
@ -6,65 +6,49 @@
|
|||
<title>SSL/TLS Certificates with ACME</title>
|
||||
<para>
|
||||
NixOS supports automatic domain validation & certificate retrieval and
|
||||
renewal using the ACME protocol. This is currently only implemented by and
|
||||
for Let's Encrypt. The alternative ACME client <literal>lego</literal> is
|
||||
used under the hood.
|
||||
renewal using the ACME protocol. Any provider can be used, but by default
|
||||
NixOS uses Let's Encrypt. The alternative ACME client <literal>lego</literal>
|
||||
is used under the hood.
|
||||
</para>
|
||||
<para>
|
||||
Automatic cert validation and configuration for Apache and Nginx virtual
|
||||
hosts is included in NixOS, however if you would like to generate a wildcard
|
||||
cert or you are not using a web server you will have to configure DNS
|
||||
based validation.
|
||||
</para>
|
||||
<section xml:id="module-security-acme-prerequisites">
|
||||
<title>Prerequisites</title>
|
||||
|
||||
<para>
|
||||
You need to have a running HTTP server for verification. The server must
|
||||
have a webroot defined that can serve
|
||||
To use the ACME module, you must accept the provider's terms of service
|
||||
by setting <literal><xref linkend="opt-security.acme.acceptTerms" /></literal>
|
||||
to <literal>true</literal>. The Let's Encrypt ToS can be found
|
||||
<link xlink:href="https://letsencrypt.org/repository/">here</link>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
You must also set an email address to be used when creating accounts with
|
||||
Let's Encrypt. You can set this for all certs with
|
||||
<literal><xref linkend="opt-security.acme.email" /></literal>
|
||||
and/or on a per-cert basis with
|
||||
<literal><xref linkend="opt-security.acme.certs._name_.email" /></literal>.
|
||||
This address is only used for registration and renewal reminders,
|
||||
and cannot be used to administer the certificates in any way.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Alternatively, you can use a different ACME server by changing the
|
||||
<literal><xref linkend="opt-security.acme.server" /></literal> option
|
||||
to a provider of your choosing, or just change the server for one cert with
|
||||
<literal><xref linkend="opt-security.acme.certs._name_.server" /></literal>.
|
||||
</para>
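
A small sketch of pointing the module at a different ACME directory; Let's Encrypt's staging endpoint is used here only as a stand-in for a provider of your choosing:

```
{
  security.acme.server = "https://acme-staging-v02.api.letsencrypt.org/directory";
  # or for a single certificate:
  security.acme.certs."foo.example.com".server =
    "https://acme-staging-v02.api.letsencrypt.org/directory";
}
```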
|
||||
|
||||
<para>
|
||||
You will need an HTTP server or DNS server for verification. For HTTP,
|
||||
the server must have a webroot defined that can serve
|
||||
<filename>.well-known/acme-challenge</filename>. This directory must be
|
||||
writeable by the user that will run the ACME client.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
For instance, this generic snippet could be used for Nginx:
|
||||
<programlisting>
|
||||
http {
|
||||
server {
|
||||
server_name _;
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
location /.well-known/acme-challenge {
|
||||
root /var/www/challenges;
|
||||
}
|
||||
|
||||
location / {
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
}
|
||||
}
|
||||
</programlisting>
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-configuring">
|
||||
<title>Configuring</title>
|
||||
|
||||
<para>
|
||||
To enable ACME certificate retrieval & renewal for a certificate for
|
||||
<literal>foo.example.com</literal>, add the following in your
|
||||
<filename>configuration.nix</filename>:
|
||||
<programlisting>
|
||||
<xref linkend="opt-security.acme.certs"/>."foo.example.com" = {
|
||||
<link linkend="opt-security.acme.certs._name_.webroot">webroot</link> = "/var/www/challenges";
|
||||
<link linkend="opt-security.acme.certs._name_.email">email</link> = "foo@example.com";
|
||||
};
|
||||
</programlisting>
|
||||
</para>
|
||||
|
||||
<para>
|
||||
The private key <filename>key.pem</filename> and certificate
|
||||
<filename>fullchain.pem</filename> will be put into
|
||||
<filename>/var/lib/acme/foo.example.com</filename>.
|
||||
</para>
|
||||
<para>
|
||||
Refer to <xref linkend="ch-options" /> for all available configuration
|
||||
options for the <link linkend="opt-security.acme.certs">security.acme</link>
|
||||
module.
|
||||
writeable by the user that will run the ACME client. For DNS, you must
|
||||
set up credentials with your provider/server for use with lego.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-nginx">
|
||||
|
@ -80,12 +64,27 @@ http {
|
|||
</para>
|
||||
|
||||
<programlisting>
|
||||
<xref linkend="opt-security.acme.acceptTerms" /> = true;
|
||||
<xref linkend="opt-security.acme.email" /> = "admin+acme@example.com";
|
||||
services.nginx = {
|
||||
<link linkend="opt-services.nginx.enable">enable = true;</link>
|
||||
<link linkend="opt-services.nginx.enable">enable</link> = true;
|
||||
<link linkend="opt-services.nginx.virtualHosts">virtualHosts</link> = {
|
||||
"foo.example.com" = {
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true;
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.enableACME">enableACME</link> = true;
|
||||
# All serverAliases will be added as <link linkend="opt-security.acme.certs._name_.extraDomains">extra domains</link> on the certificate.
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.serverAliases">serverAliases</link> = [ "bar.example.com" ];
|
||||
locations."/" = {
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.root">root</link> = "/var/www";
|
||||
};
|
||||
};
|
||||
|
||||
# We can also add a different vhost and reuse the same certificate
|
||||
# but we have to append extraDomains manually.
|
||||
<link linkend="opt-security.acme.certs._name_.extraDomains">security.acme.certs."foo.example.com".extraDomains."baz.example.com"</link> = null;
|
||||
"baz.example.com" = {
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.forceSSL">forceSSL</link> = true;
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.useACMEHost">useACMEHost</link> = "foo.example.com";
|
||||
locations."/" = {
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.root">root</link> = "/var/www";
|
||||
};
|
||||
|
@ -94,4 +93,162 @@ services.nginx = {
|
|||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-httpd">
|
||||
<title>Using ACME certificates in Apache/httpd</title>
|
||||
|
||||
<para>
|
||||
Using ACME certificates with Apache virtual hosts is identical
|
||||
to using them with Nginx. The attribute names are all the same, just replace
|
||||
"nginx" with "httpd" where appropriate.
|
||||
</para>
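  <para>
   As a rough sketch, the Nginx virtual host from the previous section might
   translate to httpd like this (the option link targets are assumed to mirror
   the Nginx ones):
<programlisting>
services.httpd = {
  <link linkend="opt-services.httpd.enable">enable</link> = true;
  <link linkend="opt-services.httpd.virtualHosts">virtualHosts</link> = {
    "foo.example.com" = {
      <link linkend="opt-services.httpd.virtualHosts._name_.forceSSL">forceSSL</link> = true;
      <link linkend="opt-services.httpd.virtualHosts._name_.enableACME">enableACME</link> = true;
      <link linkend="opt-services.httpd.virtualHosts._name_.documentRoot">documentRoot</link> = "/var/www";
    };
  };
};
</programlisting>
  </para>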
|
||||
</section>
|
||||
<section xml:id="module-security-acme-configuring">
|
||||
<title>Manual configuration of HTTP-01 validation</title>
|
||||
|
||||
<para>
|
||||
First off you will need to set up a virtual host to serve the challenges.
|
||||
This example uses a vhost called <literal>acmechallenge.example.com</literal>, with
|
||||
the intent that you will generate certs for all your vhosts and redirect
|
||||
everyone to HTTPS.
|
||||
</para>
|
||||
|
||||
<programlisting>
|
||||
<xref linkend="opt-security.acme.acceptTerms" /> = true;
|
||||
<xref linkend="opt-security.acme.email" /> = "admin+acme@example.com";
|
||||
services.nginx = {
|
||||
<link linkend="opt-services.nginx.enable">enable</link> = true;
|
||||
<link linkend="opt-services.nginx.virtualHosts">virtualHosts</link> = {
|
||||
"acmechallenge.example.com" = {
|
||||
# Catchall vhost, will redirect users to HTTPS for all vhosts
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.serverAliases">serverAliases</link> = [ "*.example.com" ];
|
||||
# /var/lib/acme/.challenges must be writable by the ACME user
|
||||
# and readable by the Nginx user.
|
||||
# By default, this is the case.
|
||||
locations."/.well-known/acme-challenge" = {
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.root">root</link> = "/var/lib/acme/.challenges";
|
||||
};
|
||||
locations."/" = {
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.locations._name_.return">return</link> = "301 https://$host$request_uri";
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
||||
# Alternative config for Apache
|
||||
services.httpd = {
|
||||
<link linkend="opt-services.httpd.enable">enable = true;</link>
|
||||
<link linkend="opt-services.httpd.virtualHosts">virtualHosts</link> = {
|
||||
"acmechallenge.example.com" = {
|
||||
# Catchall vhost, will redirect users to HTTPS for all vhosts
|
||||
<link linkend="opt-services.httpd.virtualHosts._name_.serverAliases">serverAliases</link> = [ "*.example.com" ];
|
||||
# /var/lib/acme/.challenges must be writable by the ACME user and readable by the Apache user.
|
||||
# By default, this is the case.
|
||||
<link linkend="opt-services.httpd.virtualHosts._name_.documentRoot">documentRoot</link> = "/var/lib/acme/.challenges";
|
||||
<link linkend="opt-services.httpd.virtualHosts._name_.extraConfig">extraConfig</link> = ''
|
||||
RewriteEngine On
|
||||
RewriteCond %{HTTPS} off
|
||||
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge [NC]
|
||||
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301]
|
||||
'';
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
Now you need to configure ACME to generate a certificate.
|
||||
</para>
|
||||
|
||||
<programlisting>
|
||||
<xref linkend="opt-security.acme.certs"/>."foo.example.com" = {
|
||||
<link linkend="opt-security.acme.certs._name_.webroot">webroot</link> = "/var/lib/acme/.challenges";
|
||||
<link linkend="opt-security.acme.certs._name_.email">email</link> = "foo@example.com";
|
||||
# Since we have a wildcard vhost to handle port 80,
|
||||
# we can generate certs for anything!
|
||||
# Just make sure your DNS resolves them.
|
||||
<link linkend="opt-security.acme.certs._name_.extraDomains">extraDomains</link> = [ "mail.example.com" ];
|
||||
};
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
The private key <filename>key.pem</filename> and certificate
|
||||
<filename>fullchain.pem</filename> will be put into
|
||||
<filename>/var/lib/acme/foo.example.com</filename>.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Refer to <xref linkend="ch-options" /> for all available configuration
|
||||
options for the <link linkend="opt-security.acme.certs">security.acme</link>
|
||||
module.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-config-dns">
|
||||
<title>Configuring ACME for DNS validation</title>
|
||||
|
||||
<para>
|
||||
This is useful if you want to generate a wildcard certificate, since
|
||||
ACME servers will only hand out wildcard certs over DNS validation.
|
||||
There are a number of supported DNS providers and servers you can utilise,
|
||||
see the <link xlink:href="https://go-acme.github.io/lego/dns/">lego docs</link>
|
||||
for provider/server specific configuration values. For the sake of these
|
||||
docs, we will provide a fully self-hosted example using bind.
|
||||
</para>
|
||||
|
||||
<programlisting>
|
||||
services.bind = {
|
||||
<link linkend="opt-services.bind.enable">enable</link> = true;
|
||||
<link linkend="opt-services.bind.extraConfig">extraConfig</link> = ''
|
||||
include "/var/lib/secrets/dnskeys.conf";
|
||||
'';
|
||||
<link linkend="opt-services.bind.zones">zones</link> = [
|
||||
rec {
|
||||
name = "example.com";
|
||||
file = "/var/db/bind/${name}";
|
||||
master = true;
|
||||
extraConfig = "allow-update { key rfc2136key.example.com.; };";
|
||||
}
|
||||
];
|
||||
}
|
||||
|
||||
# Now we can configure ACME
|
||||
<xref linkend="opt-security.acme.acceptTerms" /> = true;
|
||||
<xref linkend="opt-security.acme.email" /> = "admin+acme@example.com";
|
||||
<xref linkend="opt-security.acme.certs" />."example.com" = {
|
||||
<link linkend="opt-security.acme.certs._name_.domain">domain</link> = "*.example.com";
|
||||
<link linkend="opt-security.acme.certs._name_.dnsProvider">dnsProvider</link> = "rfc2136";
|
||||
<link linkend="opt-security.acme.certs._name_.credentialsFile">credentialsFile</link> = "/var/lib/secrets/certs.secret";
|
||||
# We don't need to wait for propagation since this is a local DNS server
|
||||
<link linkend="opt-security.acme.certs._name_.dnsPropagationCheck">dnsPropagationCheck</link> = false;
|
||||
};
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
The <filename>dnskeys.conf</filename> and <filename>certs.secret</filename>
|
||||
must be kept secure and thus you should not keep their contents in your
|
||||
Nix config. Instead, generate them one time with these commands:
|
||||
</para>
|
||||
|
||||
<programlisting>
|
||||
mkdir -p /var/lib/secrets
|
||||
tsig-keygen rfc2136key.example.com > /var/lib/secrets/dnskeys.conf
|
||||
chown named:root /var/lib/secrets/dnskeys.conf
|
||||
chmod 400 /var/lib/secrets/dnskeys.conf
|
||||
|
||||
# Copy the secret value from the dnskeys.conf, and put it in
|
||||
# RFC2136_TSIG_SECRET below
|
||||
|
||||
cat > /var/lib/secrets/certs.secret << EOF
|
||||
RFC2136_NAMESERVER='127.0.0.1:53'
|
||||
RFC2136_TSIG_ALGORITHM='hmac-sha256.'
|
||||
RFC2136_TSIG_KEY='rfc2136key.example.com'
|
||||
RFC2136_TSIG_SECRET='your secret key'
|
||||
EOF
|
||||
chmod 400 /var/lib/secrets/certs.secret
|
||||
</programlisting>
|
||||
|
||||
<para>
|
||||
Now you're all set to generate certs! You should monitor the first invocation
|
||||
by running <literal>systemctl start acme-example.com.service &
|
||||
journalctl -fu acme-example.com.service</literal> and watching its log output.
|
||||
</para>
|
||||
</section>
|
||||
</chapter>
|
||||
|
|
274
third_party/nixpkgs/nixos/modules/security/doas.nix
vendored
Normal file
274
third_party/nixpkgs/nixos/modules/security/doas.nix
vendored
Normal file
|
@ -0,0 +1,274 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
let
|
||||
cfg = config.security.doas;
|
||||
|
||||
inherit (pkgs) doas;
|
||||
|
||||
mkUsrString = user: toString user;
|
||||
|
||||
mkGrpString = group: ":${toString group}";
|
||||
|
||||
mkOpts = rule: concatStringsSep " " [
|
||||
(optionalString rule.noPass "nopass")
|
||||
(optionalString rule.persist "persist")
|
||||
(optionalString rule.keepEnv "keepenv")
|
||||
"setenv { SSH_AUTH_SOCK ${concatStringsSep " " rule.setEnv} }"
|
||||
];
|
||||
|
||||
mkArgs = rule:
|
||||
if (isNull rule.args) then ""
|
||||
else if (length rule.args == 0) then "args"
|
||||
else "args ${concatStringsSep " " rule.args}";
|
||||
|
||||
mkRule = rule:
|
||||
let
|
||||
opts = mkOpts rule;
|
||||
|
||||
as = optionalString (!isNull rule.runAs) "as ${rule.runAs}";
|
||||
|
||||
cmd = optionalString (!isNull rule.cmd) "cmd ${rule.cmd}";
|
||||
|
||||
args = mkArgs rule;
|
||||
in
|
||||
optionals (length cfg.extraRules > 0) [
|
||||
(
|
||||
optionalString (length rule.users > 0)
|
||||
(map (usr: "permit ${opts} ${mkUsrString usr} ${as} ${cmd} ${args}") rule.users)
|
||||
)
|
||||
(
|
||||
optionalString (length rule.groups > 0)
|
||||
(map (grp: "permit ${opts} ${mkGrpString grp} ${as} ${cmd} ${args}") rule.groups)
|
||||
)
|
||||
];
|
||||
in
|
||||
{
|
||||
|
||||
###### interface
|
||||
|
||||
options.security.doas = {
|
||||
|
||||
enable = mkOption {
|
||||
type = with types; bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to enable the <command>doas</command> command, which allows
|
||||
non-root users to execute commands as root.
|
||||
'';
|
||||
};
|
||||
|
||||
wheelNeedsPassword = mkOption {
|
||||
type = with types; bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Whether users of the <code>wheel</code> group must provide a password to
|
||||
run commands as super user via <command>doas</command>.
|
||||
'';
|
||||
};
|
||||
|
||||
extraRules = mkOption {
|
||||
default = [];
|
||||
description = ''
|
||||
Define specific rules to be set in the
|
||||
<filename>/etc/doas.conf</filename> file. More specific rules should
|
||||
come after more general ones in order to yield the expected behavior.
|
||||
You can use <code>mkBefore</code> and/or <code>mkAfter</code> to ensure
|
||||
this is the case when configuration options are merged.
|
||||
'';
|
||||
example = literalExample ''
|
||||
[
|
||||
# Allow execution of any command by any user in group doas, requiring
|
||||
# a password and keeping any previously-defined environment variables.
|
||||
{ groups = [ "doas" ]; noPass = false; keepEnv = true; }
|
||||
|
||||
# Allow execution of "/home/root/secret.sh" by user `backup` OR user
|
||||
# `database` OR any member of the group with GID `1006`, without a
|
||||
# password.
|
||||
{ users = [ "backup" "database" ]; groups = [ 1006 ];
|
||||
cmd = "/home/root/secret.sh"; noPass = true; }
|
||||
|
||||
# Allow any member of group `bar` to run `/home/baz/cmd1.sh` as user
|
||||
# `foo` with argument `hello-doas`.
|
||||
{ groups = [ "bar" ]; runAs = "foo";
|
||||
cmd = "/home/baz/cmd1.sh"; args = [ "hello-doas" ]; }
|
||||
|
||||
# Allow any member of group `bar` to run `/home/baz/cmd2.sh` as user
|
||||
# `foo` with no arguments.
|
||||
{ groups = [ "bar" ]; runAs = "foo";
|
||||
cmd = "/home/baz/cmd2.sh"; args = [ ]; }
|
||||
|
||||
# Allow user `abusers` to execute "nano" and unset the value of
|
||||
# SSH_AUTH_SOCK, override the value of ALPHA to 1, and inherit the
|
||||
# value of BETA from the current environment.
|
||||
{ users = [ "abusers" ]; cmd = "nano";
|
||||
setEnv = [ "-SSH_AUTH_SOCK" "ALPHA=1" "BETA" ]; }
|
||||
]
|
||||
'';
|
||||
type = with types; listOf (
|
||||
submodule {
|
||||
options = {
|
||||
|
||||
noPass = mkOption {
|
||||
type = with types; bool;
|
||||
default = false;
|
||||
description = ''
|
||||
If <code>true</code>, the user is not required to enter a
|
||||
password.
|
||||
'';
|
||||
};
|
||||
|
||||
persist = mkOption {
|
||||
type = with types; bool;
|
||||
default = false;
|
||||
description = ''
|
||||
If <code>true</code>, do not ask for a password again for some
|
||||
time after the user successfully authenticates.
|
||||
'';
|
||||
};
|
||||
|
||||
keepEnv = mkOption {
|
||||
type = with types; bool;
|
||||
default = false;
|
||||
description = ''
|
||||
If <code>true</code>, environment variables other than those
|
||||
listed in
|
||||
<citerefentry><refentrytitle>doas</refentrytitle><manvolnum>1</manvolnum></citerefentry>
|
||||
are kept when creating the environment for the new process.
|
||||
'';
|
||||
};
|
||||
|
||||
setEnv = mkOption {
|
||||
type = with types; listOf str;
|
||||
default = [];
|
||||
description = ''
|
||||
Keep or set the specified variables. Variables may also be
|
||||
removed with a leading '-' or set using
|
||||
<code>variable=value</code>. If the first character of
|
||||
<code>value</code> is a '$', the value to be set is taken from
|
||||
the existing environment variable of the indicated name. This
|
||||
option is processed after the default environment has been
|
||||
created.
|
||||
|
||||
NOTE: All rules have <code>setenv { SSH_AUTH_SOCK }</code> by
|
||||
default. To prevent <code>SSH_AUTH_SOCK</code> from being
|
||||
inherited, add <code>"-SSH_AUTH_SOCK"</code> anywhere in this
|
||||
list.
|
||||
'';
|
||||
};
|
||||
|
||||
users = mkOption {
|
||||
type = with types; listOf (either str int);
|
||||
default = [];
|
||||
description = "The usernames / UIDs this rule should apply for.";
|
||||
};
|
||||
|
||||
groups = mkOption {
|
||||
type = with types; listOf (either str int);
|
||||
default = [];
|
||||
description = "The groups / GIDs this rule should apply for.";
|
||||
};
|
||||
|
||||
runAs = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Which user or group the specified command is allowed to run as.
|
||||
When set to <code>null</code> (the default), all users are
|
||||
allowed.
|
||||
|
||||
A user can be specified using just the username:
|
||||
<code>"foo"</code>. It is also possible to only allow running as
|
||||
a specific group with <code>":bar"</code>.
|
||||
'';
|
||||
};
|
||||
|
||||
cmd = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
The command the user is allowed to run. When set to
|
||||
<code>null</code> (the default), all commands are allowed.
|
||||
|
||||
NOTE: It is best practice to specify absolute paths. If a
|
||||
relative path is specified, only a restricted PATH will be
|
||||
searched.
|
||||
'';
|
||||
};
|
||||
|
||||
args = mkOption {
|
||||
type = with types; nullOr (listOf str);
|
||||
default = null;
|
||||
description = ''
|
||||
Arguments that must be provided to the command. When set to
|
||||
<code>[]</code>, the command must be run without any arguments.
|
||||
'';
|
||||
};
|
||||
};
|
||||
}
|
||||
);
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = with types; lines;
|
||||
default = "";
|
||||
description = ''
|
||||
Extra configuration text appended to <filename>doas.conf</filename>.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
|
||||
###### implementation
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
|
||||
security.doas.extraRules = mkOrder 600 [
|
||||
{
|
||||
groups = [ "wheel" ];
|
||||
noPass = !cfg.wheelNeedsPassword;
|
||||
}
|
||||
];
|
||||
|
||||
security.wrappers = {
|
||||
doas.source = "${doas}/bin/doas";
|
||||
};
|
||||
|
||||
environment.systemPackages = [
|
||||
doas
|
||||
];
|
||||
|
||||
security.pam.services.doas = {
|
||||
allowNullPassword = true;
|
||||
sshAgentAuth = true;
|
||||
};
|
||||
|
||||
environment.etc."doas.conf" = {
|
||||
source = pkgs.runCommand "doas-conf"
|
||||
{
|
||||
src = pkgs.writeText "doas-conf-in" ''
|
||||
# To modify this file, set the NixOS options
|
||||
# `security.doas.extraRules` or `security.doas.extraConfig`. To
|
||||
# completely replace the contents of this file, use
|
||||
# `environment.etc."doas.conf"`.
|
||||
|
||||
# "root" is allowed to do anything.
|
||||
permit nopass keepenv root
|
||||
|
||||
# extraRules
|
||||
${concatStringsSep "\n" (lists.flatten (map mkRule cfg.extraRules))}
|
||||
|
||||
# extraConfig
|
||||
${cfg.extraConfig}
|
||||
'';
|
||||
preferLocalBuild = true;
|
||||
}
|
||||
# Make sure that the doas.conf file is syntactically valid.
|
||||
"${pkgs.buildPackages.doas}/bin/doas -C $src && cp $src $out";
|
||||
mode = "0440";
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
meta.maintainers = with maintainers; [ cole-h ];
|
||||
}
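A minimal sketch of how the doas module above might be used from configuration.nix; the "deploy" group and the systemctl invocation are hypothetical, chosen only to illustrate the rule syntax:

{
  security.doas.enable = true;
  security.doas.wheelNeedsPassword = false;

  security.doas.extraRules = [
    # Let members of the hypothetical "deploy" group restart nginx
    # without a password.
    { groups = [ "deploy" ];
      noPass = true;
      cmd = "/run/current-system/sw/bin/systemctl";
      args = [ "restart" "nginx.service" ];
    }
  ];
}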
|
|
@ -50,6 +50,7 @@ in
|
|||
# enable the nss module, so user lookups etc. work
|
||||
system.nssModules = [ package ];
|
||||
system.nssDatabases.passwd = [ "cache_oslogin" "oslogin" ];
|
||||
system.nssDatabases.group = [ "cache_oslogin" "oslogin" ];
|
||||
|
||||
# Ugly: sshd refuses to start if a store path is given because /nix/store is group-writable.
|
||||
# So indirect by a symlink.
|
||||
|
|
|
@ -54,7 +54,7 @@ let
|
|||
description = ''
|
||||
If set, users listed in
|
||||
<filename>~/.yubico/authorized_yubikeys</filename>
|
||||
are able to log in with the asociated Yubikey tokens.
|
||||
are able to log in with the associated Yubikey tokens.
|
||||
'';
|
||||
};
|
||||
|
||||
|
|
|
@ -160,6 +160,11 @@ in {
|
|||
+ " the 'users.users' option instead as this combination is"
|
||||
+ " currently not supported.";
|
||||
}
|
||||
{ assertion = !cfg.serviceConfig.ProtectSystem or false;
|
||||
message = "${whatOpt "ProtectSystem"}. ProtectSystem is not compatible"
|
||||
+ " with service confinement as it fails to remount /usr within"
|
||||
+ " our chroot. Please disable the option.";
|
||||
}
|
||||
]) config.systemd.services);
|
||||
|
||||
config.systemd.packages = lib.concatLists (lib.mapAttrsToList (name: cfg: let
|
||||
|
|
|
@ -18,8 +18,6 @@ let
|
|||
''}
|
||||
state_file "${cfg.dataDir}/state"
|
||||
sticker_file "${cfg.dataDir}/sticker.sql"
|
||||
user "${cfg.user}"
|
||||
group "${cfg.group}"
|
||||
|
||||
${optionalString (cfg.network.listenAddress != "any") ''bind_to_address "${cfg.network.listenAddress}"''}
|
||||
${optionalString (cfg.network.port != 6600) ''port "${toString cfg.network.port}"''}
|
||||
|
|
|
@ -268,7 +268,8 @@ let
|
|||
|
||||
mkSrcAttrs = srcCfg: with srcCfg; {
|
||||
enabled = onOff enable;
|
||||
mbuffer = with mbuffer; if enable then "${pkgs.mbuffer}/bin/mbuffer"
|
||||
# mbuffer is not referenced by its full path to accommodate non-NixOS systems or differing mbuffer versions between source and target
|
||||
mbuffer = with mbuffer; if enable then "mbuffer"
|
||||
+ optionalString (port != null) ":${toString port}" else "off";
|
||||
mbuffer_size = mbuffer.size;
|
||||
post_znap_cmd = nullOff postsnap;
|
||||
|
@ -357,6 +358,12 @@ in
|
|||
default = false;
|
||||
};
|
||||
|
||||
features.oracleMode = mkEnableOption ''
|
||||
Destroy snapshots one by one instead of using one long argument list.
|
||||
If source and destination are out of sync for a long time, you may have
|
||||
so many snapshots to destroy that the argument list gets too long and the
|
||||
command fails.
|
||||
'';
|
||||
features.recvu = mkEnableOption ''
|
||||
recvu feature which uses <literal>-u</literal> on the receiving end to keep the destination
|
||||
filesystem unmounted.
|
||||
|
@ -372,6 +379,41 @@ in
|
|||
and <citerefentry><refentrytitle>zfs</refentrytitle><manvolnum>8</manvolnum></citerefentry>
|
||||
for more info.
|
||||
'';
|
||||
features.sendRaw = mkEnableOption ''
|
||||
sendRaw feature which adds the options <literal>-w</literal> to the
|
||||
<command>zfs send</command> command. For encrypted source datasets this
|
||||
instructs zfs not to decrypt before sending which results in a remote
|
||||
backup that can't be read without the encryption key/passphrase, useful
|
||||
when the remote isn't fully trusted or not physically secure. This
|
||||
option must be used consistently, raw incrementals cannot be based on
|
||||
non-raw snapshots and vice versa.
|
||||
'';
|
||||
features.skipIntermediates = mkEnableOption ''
|
||||
Enable the skipIntermediates feature to send a single increment
|
||||
between latest common snapshot and the newly made one. It may skip
|
||||
several source snaps if the destination was offline for some time, and
|
||||
it should skip snapshots not managed by znapzend. Normally for online
|
||||
destinations, the new snapshot is sent as soon as it is created on the
|
||||
source, so there are no automatic increments to skip.
|
||||
'';
|
||||
features.lowmemRecurse = mkEnableOption ''
|
||||
use lowmemRecurse on systems where you have too many datasets, so a
|
||||
recursive listing of attributes to find backup plans exhausts the
|
||||
memory available to <command>znapzend</command>: instead, go the slower
|
||||
way to first list all impacted dataset names, and then query their
|
||||
configs one by one.
|
||||
'';
|
||||
features.zfsGetType = mkEnableOption ''
|
||||
use zfsGetType if your <command>zfs get</command> supports a
|
||||
<literal>-t</literal> argument for filtering by dataset type at all AND
|
||||
lists properties for snapshots by default when recursing, meaning there
|
||||
is too much data to process while searching for backup plans.
|
||||
If these two conditions apply to your system, the time needed for a
|
||||
<literal>--recursive</literal> search for backup plans can literally
|
||||
differ by hundreds of times (depending on the amount of snapshots in
|
||||
that dataset tree... and a decent backup plan will ensure you have a lot
|
||||
of those), so you would benefit from requesting this feature.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -423,5 +465,5 @@ in
|
|||
};
|
||||
};
|
||||
|
||||
meta.maintainers = with maintainers; [ infinisil ];
|
||||
meta.maintainers = with maintainers; [ infinisil SlothOfAnarchy ];
|
||||
}
|
||||
|
|
|
@ -461,7 +461,7 @@ in
|
|||
moreutils
|
||||
remarshal
|
||||
utillinux
|
||||
cfg.package.bin
|
||||
cfg.package
|
||||
] ++ cfg.extraPackages;
|
||||
reloadIfChanged = true;
|
||||
serviceConfig = {
|
||||
|
|
|
@ -87,7 +87,6 @@ in
|
|||
datadir = /var/lib/mysql
|
||||
bind-address = 127.0.0.1
|
||||
port = 3336
|
||||
plugin-load-add = auth_socket.so
|
||||
|
||||
!includedir /etc/mysql/conf.d/
|
||||
''';
|
||||
|
@ -315,13 +314,16 @@ in
|
|||
datadir = cfg.dataDir;
|
||||
bind-address = mkIf (cfg.bind != null) cfg.bind;
|
||||
port = cfg.port;
|
||||
plugin-load-add = optional (cfg.ensureUsers != []) "auth_socket.so";
|
||||
}
|
||||
(mkIf (cfg.replication.role == "master" || cfg.replication.role == "slave") {
|
||||
log-bin = "mysql-bin-${toString cfg.replication.serverId}";
|
||||
log-bin-index = "mysql-bin-${toString cfg.replication.serverId}.index";
|
||||
relay-log = "mysql-relay-bin";
|
||||
server-id = cfg.replication.serverId;
|
||||
binlog-ignore-db = [ "information_schema" "performance_schema" "mysql" ];
|
||||
})
|
||||
(mkIf (!isMariaDB) {
|
||||
plugin-load-add = optional (cfg.ensureUsers != []) "auth_socket.so";
|
||||
})
|
||||
];
|
||||
|
||||
|
@ -444,7 +446,6 @@ in
|
|||
|
||||
( echo "stop slave;"
|
||||
echo "change master to master_host='${cfg.replication.masterHost}', master_user='${cfg.replication.masterUser}', master_password='${cfg.replication.masterPassword}';"
|
||||
echo "set global slave_exec_mode='IDEMPOTENT';"
|
||||
echo "start slave;"
|
||||
) | ${mysql}/bin/mysql -u root -N
|
||||
''}
|
||||
|
|
|
@ -231,6 +231,10 @@ in
|
|||
|
||||
};
|
||||
|
||||
meta = {
|
||||
maintainers = lib.maintainers.mic92;
|
||||
};
|
||||
|
||||
|
||||
###### implementation
|
||||
|
||||
|
|
|
@ -17,6 +17,7 @@ let
|
|||
hba_file = '${pkgs.writeText "pg_hba.conf" cfg.authentication}'
|
||||
ident_file = '${pkgs.writeText "pg_ident.conf" cfg.identMap}'
|
||||
log_destination = 'stderr'
|
||||
log_line_prefix = '${cfg.logLinePrefix}'
|
||||
listen_addresses = '${if cfg.enableTCPIP then "*" else "localhost"}'
|
||||
port = ${toString cfg.port}
|
||||
${cfg.extraConfig}
|
||||
|
@ -34,13 +35,7 @@ in
|
|||
|
||||
services.postgresql = {
|
||||
|
||||
enable = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to run PostgreSQL.
|
||||
'';
|
||||
};
|
||||
enable = mkEnableOption "PostgreSQL Server";
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
|
@ -192,6 +187,17 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
logLinePrefix = mkOption {
|
||||
type = types.str;
|
||||
default = "[%p] ";
|
||||
example = "%m [%p] ";
|
||||
description = ''
|
||||
A printf-style string that is output at the beginning of each log line.
|
||||
Upstream default is <literal>'%m [%p] '</literal>, i.e. it includes the timestamp. We do
|
||||
not include the timestamp, because the journal has it anyway.
|
||||
'';
|
||||
};
|
||||
|
||||
extraPlugins = mkOption {
|
||||
type = types.listOf types.path;
|
||||
default = [];
|
||||
|
@ -337,7 +343,7 @@ in
|
|||
# Wait for PostgreSQL to be ready to accept connections.
|
||||
postStart =
|
||||
''
|
||||
PSQL="${pkgs.sudo}/bin/sudo -u ${cfg.superUser} psql --port=${toString cfg.port}"
|
||||
PSQL="${pkgs.utillinux}/bin/runuser -u ${cfg.superUser} -- psql --port=${toString cfg.port}"
|
||||
|
||||
while ! $PSQL -d postgres -c "" 2> /dev/null; do
|
||||
if ! kill -0 "$MAINPID"; then exit 1; fi
|
||||
|
|
|
@ -294,7 +294,7 @@ https://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides
|
|||
If you are not on NixOS or want to install this particular Emacs only for
|
||||
yourself, you can do so by adding it to your
|
||||
<filename>~/.config/nixpkgs/config.nix</filename> (see
|
||||
<link xlink:href="http://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides">Nixpkgs
|
||||
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides">Nixpkgs
|
||||
manual</link>):
|
||||
<example xml:id="module-services-emacs-config-nix">
|
||||
<title>Custom Emacs in <filename>~/.config/nixpkgs/config.nix</filename></title>
|
||||
|
|
|
@ -24,7 +24,7 @@ let
|
|||
|
||||
logFile = mkOption {
|
||||
type = types.str;
|
||||
example = "/var/spool/nginx/logs/access.log";
|
||||
example = "/var/log/nginx/access.log";
|
||||
description = ''
|
||||
The log file to be scanned.
|
||||
|
||||
|
@ -110,7 +110,7 @@ in
|
|||
{
|
||||
"mysite" = {
|
||||
domain = "example.com";
|
||||
logFile = "/var/spool/nginx/logs/access.log";
|
||||
logFile = "/var/log/nginx/access.log";
|
||||
};
|
||||
}
|
||||
'';
|
||||
|
|
|
@ -407,7 +407,7 @@ in
|
|||
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
restartTriggers = [ cfg.configFile ];
|
||||
restartTriggers = [ cfg.configFile modulesDir ];
|
||||
|
||||
serviceConfig = {
|
||||
ExecStart = "${dovecotPkg}/sbin/dovecot -F";
|
||||
|
|
|
@ -75,7 +75,7 @@ in {
|
|||
wantedBy = [ "multi-user.target" ];
|
||||
after = [ "network.target" ];
|
||||
serviceConfig = {
|
||||
ExecStart = "${cfg.package.bin}/bin/confd";
|
||||
ExecStart = "${cfg.package}/bin/confd";
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -178,7 +178,7 @@ in {
|
|||
|
||||
serviceConfig = {
|
||||
Type = "notify";
|
||||
ExecStart = "${pkgs.etcd.bin}/bin/etcd";
|
||||
ExecStart = "${pkgs.etcd}/bin/etcd";
|
||||
User = "etcd";
|
||||
LimitNOFILE = 40000;
|
||||
};
|
||||
|
|
|
@ -14,53 +14,9 @@ let
|
|||
RUN_USER = ${cfg.user}
|
||||
RUN_MODE = prod
|
||||
|
||||
[database]
|
||||
DB_TYPE = ${cfg.database.type}
|
||||
${optionalString (usePostgresql || useMysql) ''
|
||||
HOST = ${if cfg.database.socket != null then cfg.database.socket else cfg.database.host + ":" + toString cfg.database.port}
|
||||
NAME = ${cfg.database.name}
|
||||
USER = ${cfg.database.user}
|
||||
PASSWD = #dbpass#
|
||||
''}
|
||||
${optionalString useSqlite ''
|
||||
PATH = ${cfg.database.path}
|
||||
''}
|
||||
${optionalString usePostgresql ''
|
||||
SSL_MODE = disable
|
||||
''}
|
||||
${generators.toINI {} cfg.settings}
|
||||
|
||||
[repository]
|
||||
ROOT = ${cfg.repositoryRoot}
|
||||
|
||||
[server]
|
||||
DOMAIN = ${cfg.domain}
|
||||
HTTP_ADDR = ${cfg.httpAddress}
|
||||
HTTP_PORT = ${toString cfg.httpPort}
|
||||
ROOT_URL = ${cfg.rootUrl}
|
||||
STATIC_ROOT_PATH = ${cfg.staticRootPath}
|
||||
LFS_JWT_SECRET = #jwtsecret#
|
||||
|
||||
[session]
|
||||
COOKIE_NAME = session
|
||||
COOKIE_SECURE = ${boolToString cfg.cookieSecure}
|
||||
|
||||
[security]
|
||||
SECRET_KEY = #secretkey#
|
||||
INSTALL_LOCK = true
|
||||
|
||||
[log]
|
||||
ROOT_PATH = ${cfg.log.rootPath}
|
||||
LEVEL = ${cfg.log.level}
|
||||
|
||||
[service]
|
||||
DISABLE_REGISTRATION = ${boolToString cfg.disableRegistration}
|
||||
|
||||
${optionalString (cfg.mailerPasswordFile != null) ''
|
||||
[mailer]
|
||||
PASSWD = #mailerpass#
|
||||
''}
|
||||
|
||||
${cfg.extraConfig}
|
||||
${optionalString (cfg.extraConfig != null) cfg.extraConfig}
|
||||
'';
|
||||
in
|
||||
|
||||
|
@ -279,9 +235,36 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
settings = mkOption {
|
||||
type = with types; attrsOf (attrsOf (oneOf [ bool int str ]));
|
||||
default = {};
|
||||
description = ''
|
||||
Gitea configuration. Refer to <link xlink:href="https://docs.gitea.io/en-us/config-cheat-sheet/"/>
|
||||
for details on supported values.
|
||||
'';
|
||||
example = literalExample ''
|
||||
{
|
||||
"cron.sync_external_users" = {
|
||||
RUN_AT_START = true;
|
||||
SCHEDULE = "@every 24h";
|
||||
UPDATE_EXISTING = true;
|
||||
};
|
||||
mailer = {
|
||||
ENABLED = true;
|
||||
MAILER_TYPE = "sendmail";
|
||||
FROM = "do-not-reply@example.org";
|
||||
SENDMAIL_PATH = "${pkgs.system-sendmail}/bin/sendmail";
|
||||
};
|
||||
other = {
|
||||
SHOW_FOOTER_VERSION = false;
|
||||
};
|
||||
}
|
||||
'';
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = types.str;
|
||||
default = "";
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = "Configuration lines appended to the generated gitea configuration file.";
|
||||
};
|
||||
};
|
||||
|
@ -294,6 +277,62 @@ in
|
|||
}
|
||||
];
|
||||
|
||||
services.gitea.settings = {
|
||||
database = mkMerge [
|
||||
{
|
||||
DB_TYPE = cfg.database.type;
|
||||
}
|
||||
(mkIf (useMysql || usePostgresql) {
|
||||
HOST = if cfg.database.socket != null then cfg.database.socket else cfg.database.host + ":" + toString cfg.database.port;
|
||||
NAME = cfg.database.name;
|
||||
USER = cfg.database.user;
|
||||
PASSWD = "#dbpass#";
|
||||
})
|
||||
(mkIf useSqlite {
|
||||
PATH = cfg.database.path;
|
||||
})
|
||||
(mkIf usePostgresql {
|
||||
SSL_MODE = "disable";
|
||||
})
|
||||
];
|
||||
|
||||
repository = {
|
||||
ROOT = cfg.repositoryRoot;
|
||||
};
|
||||
|
||||
server = {
|
||||
DOMAIN = cfg.domain;
|
||||
HTTP_ADDR = cfg.httpAddress;
|
||||
HTTP_PORT = cfg.httpPort;
|
||||
ROOT_URL = cfg.rootUrl;
|
||||
STATIC_ROOT_PATH = cfg.staticRootPath;
|
||||
LFS_JWT_SECRET = "#jwtsecret#";
|
||||
};
|
||||
|
||||
session = {
|
||||
COOKIE_NAME = "session";
|
||||
COOKIE_SECURE = cfg.cookieSecure;
|
||||
};
|
||||
|
||||
security = {
|
||||
SECRET_KEY = "#secretkey#";
|
||||
INSTALL_LOCK = true;
|
||||
};
|
||||
|
||||
log = {
|
||||
ROOT_PATH = cfg.log.rootPath;
|
||||
LEVEL = cfg.log.level;
|
||||
};
|
||||
|
||||
service = {
|
||||
DISABLE_REGISTRATION = cfg.disableRegistration;
|
||||
};
|
||||
|
||||
mailer = mkIf (cfg.mailerPasswordFile != null) {
|
||||
PASSWD = "#mailerpass#";
|
||||
};
|
||||
};
|
||||
|
||||
services.postgresql = optionalAttrs (usePostgresql && cfg.database.createDatabase) {
|
||||
enable = mkDefault true;
|
||||
|
||||
|
@ -335,7 +374,7 @@ in
|
|||
description = "gitea";
|
||||
after = [ "network.target" ] ++ lib.optional usePostgresql "postgresql.service" ++ lib.optional useMysql "mysql.service";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
path = [ gitea.bin pkgs.gitAndTools.git ];
|
||||
path = [ gitea pkgs.gitAndTools.git ];
|
||||
|
||||
preStart = let
|
||||
runConfig = "${cfg.stateDir}/custom/conf/app.ini";
|
||||
|
@ -347,11 +386,11 @@ in
|
|||
cp -f ${configFile} ${runConfig}
|
||||
|
||||
if [ ! -e ${secretKey} ]; then
|
||||
${gitea.bin}/bin/gitea generate secret SECRET_KEY > ${secretKey}
|
||||
${gitea}/bin/gitea generate secret SECRET_KEY > ${secretKey}
|
||||
fi
|
||||
|
||||
if [ ! -e ${jwtSecret} ]; then
|
||||
${gitea.bin}/bin/gitea generate secret LFS_JWT_SECRET > ${jwtSecret}
|
||||
${gitea}/bin/gitea generate secret LFS_JWT_SECRET > ${jwtSecret}
|
||||
fi
|
||||
|
||||
KEY="$(head -n1 ${secretKey})"
|
||||
|
@ -374,7 +413,7 @@ in
|
|||
HOOKS=$(find ${cfg.repositoryRoot} -mindepth 4 -maxdepth 6 -type f -wholename "*git/hooks/*")
|
||||
if [ "$HOOKS" ]
|
||||
then
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gitea,${gitea.bin}/bin/gitea,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gitea,${gitea}/bin/gitea,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/env,${pkgs.coreutils}/bin/env,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/bash,${pkgs.bash}/bin/bash,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/perl,${pkgs.perl}/bin/perl,g' $HOOKS
|
||||
|
@ -383,7 +422,7 @@ in
|
|||
# update command option in authorized_keys
|
||||
if [ -r ${cfg.stateDir}/.ssh/authorized_keys ]
|
||||
then
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gitea,${gitea.bin}/bin/gitea,g' ${cfg.stateDir}/.ssh/authorized_keys
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gitea,${gitea}/bin/gitea,g' ${cfg.stateDir}/.ssh/authorized_keys
|
||||
fi
|
||||
'';
|
||||
|
||||
|
@ -392,7 +431,7 @@ in
|
|||
User = cfg.user;
|
||||
Group = "gitea";
|
||||
WorkingDirectory = cfg.stateDir;
|
||||
ExecStart = "${gitea.bin}/bin/gitea web";
|
||||
ExecStart = "${gitea}/bin/gitea web";
|
||||
Restart = "always";
|
||||
|
||||
# Filesystem
|
||||
|
@ -435,9 +474,12 @@ in
|
|||
|
||||
users.groups.gitea = {};
|
||||
|
||||
warnings = optional (cfg.database.password != "")
|
||||
''config.services.gitea.database.password will be stored as plaintext
|
||||
in the Nix store. Use database.passwordFile instead.'';
|
||||
warnings =
|
||||
optional (cfg.database.password != "") ''
|
||||
config.services.gitea.database.password will be stored as plaintext in the Nix store. Use database.passwordFile instead.'' ++
|
||||
optional (cfg.extraConfig != null) ''
|
||||
services.gitea.`extraConfig` is deprecated, please use services.gitea.`settings`.
|
||||
'';
|
||||
|
||||
# Create database passwordFile default when password is configured.
|
||||
services.gitea.database.passwordFile =
|
||||
|
@ -450,7 +492,7 @@ in
|
|||
description = "gitea dump";
|
||||
after = [ "gitea.service" ];
|
||||
wantedBy = [ "default.target" ];
|
||||
path = [ gitea.bin ];
|
||||
path = [ gitea ];
|
||||
|
||||
environment = {
|
||||
USER = cfg.user;
|
||||
|
@ -461,7 +503,7 @@ in
|
|||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
User = cfg.user;
|
||||
ExecStart = "${gitea.bin}/bin/gitea dump";
|
||||
ExecStart = "${gitea}/bin/gitea dump";
|
||||
WorkingDirectory = cfg.stateDir;
|
||||
};
|
||||
};
|
||||
|
|
|
@ -200,7 +200,7 @@ in
|
|||
description = "Gogs (Go Git Service)";
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
path = [ pkgs.gogs.bin ];
|
||||
path = [ pkgs.gogs ];
|
||||
|
||||
preStart = let
|
||||
runConfig = "${cfg.stateDir}/custom/conf/app.ini";
|
||||
|
@ -230,7 +230,7 @@ in
|
|||
HOOKS=$(find ${cfg.repositoryRoot} -mindepth 4 -maxdepth 4 -type f -wholename "*git/hooks/*")
|
||||
if [ "$HOOKS" ]
|
||||
then
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gogs,${pkgs.gogs.bin}/bin/gogs,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/gogs,${pkgs.gogs}/bin/gogs,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/env,${pkgs.coreutils}/bin/env,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/bash,${pkgs.bash}/bin/bash,g' $HOOKS
|
||||
sed -ri 's,/nix/store/[a-z0-9.-]+/bin/perl,${pkgs.perl}/bin/perl,g' $HOOKS
|
||||
|
@ -242,7 +242,7 @@ in
|
|||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
WorkingDirectory = cfg.stateDir;
|
||||
ExecStart = "${pkgs.gogs.bin}/bin/gogs web";
|
||||
ExecStart = "${pkgs.gogs}/bin/gogs web";
|
||||
Restart = "always";
|
||||
};
|
||||
|
||||
|
|
|
@ -55,7 +55,7 @@ in
|
|||
Restart = "on-failure";
|
||||
WorkingDirectory = stateDir;
|
||||
PrivateTmp = true;
|
||||
ExecStart = "${pkgs.leaps.bin}/bin/leaps -path ${toString cfg.path} -address ${cfg.address}:${toString cfg.port}";
|
||||
ExecStart = "${pkgs.leaps}/bin/leaps -path ${toString cfg.path} -address ${cfg.address}:${toString cfg.port}";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -283,7 +283,7 @@ in
|
|||
trustedBinaryCaches = mkOption {
|
||||
type = types.listOf types.str;
|
||||
default = [ ];
|
||||
example = [ "http://hydra.nixos.org/" ];
|
||||
example = [ "https://hydra.nixos.org/" ];
|
||||
description = ''
|
||||
List of binary cache URLs that non-root users can use (in
|
||||
addition to those specified using
|
||||
|
@ -510,8 +510,7 @@ in
|
|||
|
||||
system.activationScripts.nix = stringAfter [ "etc" "users" ]
|
||||
''
|
||||
# Create directories in /nix.
|
||||
${nix}/bin/nix ping-store --no-net
|
||||
install -m 0755 -d /nix/var/nix/{gcroots,profiles}/per-user
|
||||
|
||||
# Subscribe the root user to the NixOS channel by default.
|
||||
if [ ! -e "/root/.nix-channels" ]; then
|
||||
|
|
|
@ -35,7 +35,7 @@ in {
|
|||
|
||||
path = [ fake-lsb-release ];
|
||||
serviceConfig = {
|
||||
ExecStart = "${cfg.package.bin}/bin/agent";
|
||||
ExecStart = "${cfg.package}/bin/agent";
|
||||
KillMode = "process";
|
||||
Restart = "on-failure";
|
||||
RestartSec = "15min";
|
||||
|
@ -43,4 +43,3 @@ in {
|
|||
};
|
||||
};
|
||||
}
|
||||
|
||||
|
|
|
@ -42,11 +42,6 @@ in {
|
|||
};
|
||||
config = mkMerge [
|
||||
(mkIf cfg.enable {
|
||||
assertions = singleton {
|
||||
assertion = nscd.enable;
|
||||
message = "nscd must be enabled through `services.nscd.enable` for SSSD to work.";
|
||||
};
|
||||
|
||||
systemd.services.sssd = {
|
||||
description = "System Security Services Daemon";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
|
@ -74,11 +69,12 @@ in {
|
|||
mode = "0400";
|
||||
};
|
||||
|
||||
system.nssModules = optional cfg.enable pkgs.sssd;
|
||||
system.nssModules = pkgs.sssd;
|
||||
system.nssDatabases = {
|
||||
group = [ "sss" ];
|
||||
passwd = [ "sss" ];
|
||||
shadow = [ "sss" ];
|
||||
services = [ "sss" ];
|
||||
shadow = [ "sss" ];
|
||||
};
|
||||
services.dbus.packages = [ pkgs.sssd ];
|
||||
})
|
||||
|
|
|
@ -148,7 +148,7 @@ in {
|
|||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
ExecStart = ''
|
||||
${cfg.package.bin}/bin/bosun -c ${configFile}
|
||||
${cfg.package}/bin/bosun -c ${configFile}
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
|
|
@ -59,7 +59,7 @@ in {
|
|||
"-templates ${cfg.templateDir}"
|
||||
];
|
||||
in {
|
||||
ExecStart = "${pkgs.grafana_reporter.bin}/bin/grafana-reporter ${args}";
|
||||
ExecStart = "${pkgs.grafana_reporter}/bin/grafana-reporter ${args}";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -535,7 +535,7 @@ in {
|
|||
${optionalString cfg.provision.enable ''
|
||||
export GF_PATHS_PROVISIONING=${provisionConfDir};
|
||||
''}
|
||||
exec ${cfg.package.bin}/bin/grafana-server -homepath ${cfg.dataDir}
|
||||
exec ${cfg.package}/bin/grafana-server -homepath ${cfg.dataDir}
|
||||
'';
|
||||
serviceConfig = {
|
||||
WorkingDirectory = cfg.dataDir;
|
||||
|
|
|
@ -58,7 +58,7 @@ in
|
|||
in {
|
||||
serviceConfig = {
|
||||
ExecStart = ''
|
||||
${pkgs.prometheus-snmp-exporter.bin}/bin/snmp_exporter \
|
||||
${pkgs.prometheus-snmp-exporter}/bin/snmp_exporter \
|
||||
--config.file=${escapeShellArg configFile} \
|
||||
--log.format=${escapeShellArg cfg.logFormat} \
|
||||
--log.level=${cfg.logLevel} \
|
||||
|
|
|
@ -118,7 +118,7 @@ in {
|
|||
serviceConfig = {
|
||||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
ExecStart = "${cfg.package.bin}/bin/scollector -conf=${conf} ${lib.concatStringsSep " " cfg.extraOpts}";
|
||||
ExecStart = "${cfg.package}/bin/scollector -conf=${conf} ${lib.concatStringsSep " " cfg.extraOpts}";
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -37,9 +37,7 @@ let
|
|||
baseService = recursiveUpdate commonEnv {
|
||||
wants = [ "ipfs-init.service" ];
|
||||
# NB: migration must be performed prior to pre-start, else we get the failure message!
|
||||
preStart = ''
|
||||
ipfs repo fsck # workaround for BUG #4212 (https://github.com/ipfs/go-ipfs/issues/4214)
|
||||
'' + optionalString cfg.autoMount ''
|
||||
preStart = optionalString cfg.autoMount ''
|
||||
ipfs --local config Mounts.FuseAllowOther --json true
|
||||
ipfs --local config Mounts.IPFS ${cfg.ipfsMountDir}
|
||||
ipfs --local config Mounts.IPNS ${cfg.ipnsMountDir}
|
||||
|
@ -219,6 +217,9 @@ in {
|
|||
createHome = false;
|
||||
uid = config.ids.uids.ipfs;
|
||||
description = "IPFS daemon user";
|
||||
packages = [
|
||||
pkgs.ipfs-migrator
|
||||
];
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -224,6 +224,7 @@ in
|
|||
(mkIf cfg.enable {
|
||||
|
||||
system.nssModules = optional cfg.nsswins samba;
|
||||
system.nssDatabases.hosts = optional cfg.nsswins "wins";
|
||||
|
||||
systemd = {
|
||||
targets.samba = {
|
||||
|
|
|
@ -238,6 +238,10 @@ in
|
|||
users.groups.avahi = {};
|
||||
|
||||
system.nssModules = optional cfg.nssmdns pkgs.nssmdns;
|
||||
system.nssDatabases.hosts = optionals cfg.nssmdns (mkMerge [
|
||||
[ "mdns_minimal [NOTFOUND=return]" ]
|
||||
(mkOrder 1501 [ "mdns" ]) # 1501 to ensure it's after dns
|
||||
]);
|
||||
|
||||
environment.systemPackages = [ pkgs.avahi ];
|
||||
|
||||
|
|
|
@ -179,15 +179,15 @@ in
|
|||
(filterAttrs (n: _: hasPrefix "consul.d/" n) config.environment.etc);
|
||||
|
||||
serviceConfig = {
|
||||
ExecStart = "@${cfg.package.bin}/bin/consul consul agent -config-dir /etc/consul.d"
|
||||
ExecStart = "@${cfg.package}/bin/consul consul agent -config-dir /etc/consul.d"
|
||||
+ concatMapStrings (n: " -config-file ${n}") configFiles;
|
||||
ExecReload = "${cfg.package.bin}/bin/consul reload";
|
||||
ExecReload = "${cfg.package}/bin/consul reload";
|
||||
PermissionsStartOnly = true;
|
||||
User = if cfg.dropPrivileges then "consul" else null;
|
||||
Restart = "on-failure";
|
||||
TimeoutStartSec = "infinity";
|
||||
} // (optionalAttrs (cfg.leaveOnStop) {
|
||||
ExecStop = "${cfg.package.bin}/bin/consul leave";
|
||||
ExecStop = "${cfg.package}/bin/consul leave";
|
||||
});
|
||||
|
||||
path = with pkgs; [ iproute gnugrep gawk consul ];
|
||||
|
@ -238,7 +238,7 @@ in
|
|||
|
||||
serviceConfig = {
|
||||
ExecStart = ''
|
||||
${cfg.alerts.package.bin}/bin/consul-alerts start \
|
||||
${cfg.alerts.package}/bin/consul-alerts start \
|
||||
--alert-addr=${cfg.alerts.listenAddr} \
|
||||
--consul-addr=${cfg.alerts.consulAddr} \
|
||||
${optionalString cfg.alerts.watchChecks "--watch-checks"} \
|
||||
|
|
|
@ -19,8 +19,8 @@ in {
|
|||
package = mkOption {
|
||||
description = "Package to use for flannel";
|
||||
type = types.package;
|
||||
default = pkgs.flannel.bin;
|
||||
defaultText = "pkgs.flannel.bin";
|
||||
default = pkgs.flannel;
|
||||
defaultText = "pkgs.flannel";
|
||||
};
|
||||
|
||||
publicIp = mkOption {
|
||||
|
@ -167,7 +167,7 @@ in {
|
|||
touch /run/flannel/docker
|
||||
'' + optionalString (cfg.storageBackend == "etcd") ''
|
||||
echo "setting network configuration"
|
||||
until ${pkgs.etcdctl.bin}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
|
||||
until ${pkgs.etcdctl}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
|
||||
do
|
||||
echo "setting network configuration, retry"
|
||||
sleep 1
|
||||
|
|
|
@ -20,12 +20,14 @@ let
|
|||
ssid=${cfg.ssid}
|
||||
hw_mode=${cfg.hwMode}
|
||||
channel=${toString cfg.channel}
|
||||
${optionalString (cfg.countryCode != null) ''country_code=${cfg.countryCode}''}
|
||||
${optionalString (cfg.countryCode != null) ''ieee80211d=1''}
|
||||
|
||||
# logging (debug level)
|
||||
logger_syslog=-1
|
||||
logger_syslog_level=2
|
||||
logger_syslog_level=${toString cfg.logLevel}
|
||||
logger_stdout=-1
|
||||
logger_stdout_level=2
|
||||
logger_stdout_level=${toString cfg.logLevel}
|
||||
|
||||
ctrl_interface=/run/hostapd
|
||||
ctrl_interface_group=${cfg.group}
|
||||
|
@ -147,6 +149,35 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
logLevel = mkOption {
|
||||
default = 2;
|
||||
type = types.int;
|
||||
description = ''
|
||||
Levels (minimum value for logged events):
|
||||
0 = verbose debugging
|
||||
1 = debugging
|
||||
2 = informational messages
|
||||
3 = notification
|
||||
4 = warning
|
||||
'';
|
||||
};
|
||||
|
||||
countryCode = mkOption {
|
||||
default = null;
|
||||
example = "US";
|
||||
type = with types; nullOr str;
|
||||
description = ''
|
||||
Country code (ISO/IEC 3166-1). Used to set regulatory domain.
|
||||
Set as needed to indicate country in which device is operating.
|
||||
This can limit available channels and transmit power.
|
||||
These two octets are used as the first two octets of the Country String
|
||||
(dot11CountryString).
|
||||
If set, this enables IEEE 802.11d. This advertises the countryCode and
|
||||
the set of allowed channels and transmit power levels based on the
|
||||
regulatory limits.
|
||||
'';
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
default = "";
|
||||
example = ''
|
||||
|
@ -167,6 +198,8 @@ in
|
|||
|
||||
environment.systemPackages = [ pkgs.hostapd ];
|
||||
|
||||
services.udev.packages = optional (cfg.countryCode != null) [ pkgs.crda ];
|
||||
|
||||
systemd.services.hostapd =
|
||||
{ description = "hostapd wireless AP";
|
||||
|
||||
|
|
|
@ -26,7 +26,7 @@ let
|
|||
rpc-login=${rpc.user}:${rpc.password}
|
||||
''}
|
||||
${optionalString rpc.restricted ''
|
||||
restrict-rpc=1
|
||||
restricted-rpc=1
|
||||
''}
|
||||
|
||||
limit-rate-up=${toString limits.upload}
|
||||
|
|
|
@ -115,7 +115,7 @@ in
|
|||
if cfg.mode == "boot"
|
||||
then [ "boot" cfg.kernel ]
|
||||
++ optional (cfg.initrd != "") cfg.initrd
|
||||
++ optional (cfg.cmdLine != "") "--cmdline=${lib.escapeShellArg cfg.cmdLine}"
|
||||
++ optionals (cfg.cmdLine != "") [ "--cmdline" cfg.cmdLine ]
|
||||
else [ "api" cfg.apiServer ];
|
||||
in
|
||||
''
|
||||
|
|
|
@ -382,6 +382,11 @@ let
|
|||
default = "en";
|
||||
description = "Default room language.";
|
||||
};
|
||||
extraConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = "";
|
||||
description = "Additional MUC specific configuration";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -792,6 +797,8 @@ in
|
|||
|
||||
https_ports = ${toLua cfg.httpsPorts}
|
||||
|
||||
${ cfg.extraConfig }
|
||||
|
||||
${lib.concatMapStrings (muc: ''
|
||||
Component ${toLua muc.domain} "muc"
|
||||
modules_enabled = { "muc_mam"; ${optionalString muc.vcard_muc ''"vcard_muc";'' } }
|
||||
|
@ -809,7 +816,7 @@ in
|
|||
muc_room_default_change_subject = ${toLua muc.roomDefaultChangeSubject}
|
||||
muc_room_default_history_length = ${toLua muc.roomDefaultHistoryLength}
|
||||
muc_room_default_language = ${toLua muc.roomDefaultLanguage}
|
||||
|
||||
${ muc.extraConfig }
|
||||
'') cfg.muc}
|
||||
|
||||
${ lib.optionalString (cfg.uploadHttp != null) ''
|
||||
|
@ -820,8 +827,6 @@ in
|
|||
http_upload_path = ${toLua cfg.uploadHttp.httpUploadPath}
|
||||
''}
|
||||
|
||||
${ cfg.extraConfig }
|
||||
|
||||
${ lib.concatStringsSep "\n" (lib.mapAttrsToList (n: v: ''
|
||||
VirtualHost "${v.domain}"
|
||||
enabled = ${boolToString v.enabled};
|
||||
|
|
|
@ -83,7 +83,7 @@ in {
|
|||
SKYDNS_NAMESERVERS = concatStringsSep "," cfg.nameservers;
|
||||
};
|
||||
serviceConfig = {
|
||||
ExecStart = "${cfg.package.bin}/bin/skydns";
|
||||
ExecStart = "${cfg.package}/bin/skydns";
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -48,7 +48,7 @@ in {
|
|||
/run/current-system/sw/bin/rm -fv /run/hologram.sock
|
||||
'';
|
||||
serviceConfig = {
|
||||
ExecStart = "${pkgs.hologram.bin}/bin/hologram-agent -debug -conf ${cfgFile} -port ${cfg.httpPort}";
|
||||
ExecStart = "${pkgs.hologram}/bin/hologram-agent -debug -conf ${cfgFile} -port ${cfg.httpPort}";
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -123,7 +123,7 @@ in {
|
|||
wantedBy = [ "multi-user.target" ];
|
||||
|
||||
serviceConfig = {
|
||||
ExecStart = "${pkgs.hologram.bin}/bin/hologram-server --debug --conf ${cfgFile}";
|
||||
ExecStart = "${pkgs.hologram}/bin/hologram-server --debug --conf ${cfgFile}";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -577,7 +577,7 @@ in
|
|||
serviceConfig = {
|
||||
User = "oauth2_proxy";
|
||||
Restart = "always";
|
||||
ExecStart = "${cfg.package.bin}/bin/oauth2_proxy ${configString}";
|
||||
ExecStart = "${cfg.package}/bin/oauth2_proxy ${configString}";
|
||||
EnvironmentFile = mkIf (cfg.keyFile != null) cfg.keyFile;
|
||||
};
|
||||
};
|
||||
|
|
279
third_party/nixpkgs/nixos/modules/services/security/privacyidea.nix
vendored
Normal file
279
third_party/nixpkgs/nixos/modules/services/security/privacyidea.nix
vendored
Normal file
|
@ -0,0 +1,279 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.services.privacyidea;
|
||||
|
||||
uwsgi = pkgs.uwsgi.override { plugins = [ "python3" ]; };
|
||||
python = uwsgi.python3;
|
||||
penv = python.withPackages (ps: [ ps.privacyidea ]);
|
||||
logCfg = pkgs.writeText "privacyidea-log.cfg" ''
|
||||
[formatters]
|
||||
keys=detail
|
||||
|
||||
[handlers]
|
||||
keys=stream
|
||||
|
||||
[formatter_detail]
|
||||
class=privacyidea.lib.log.SecureFormatter
|
||||
format=[%(asctime)s][%(process)d][%(thread)d][%(levelname)s][%(name)s:%(lineno)d] %(message)s
|
||||
|
||||
[handler_stream]
|
||||
class=StreamHandler
|
||||
level=NOTSET
|
||||
formatter=detail
|
||||
args=(sys.stdout,)
|
||||
|
||||
[loggers]
|
||||
keys=root,privacyidea
|
||||
|
||||
[logger_privacyidea]
|
||||
handlers=stream
|
||||
qualname=privacyidea
|
||||
level=INFO
|
||||
|
||||
[logger_root]
|
||||
handlers=stream
|
||||
level=ERROR
|
||||
'';
|
||||
|
||||
piCfgFile = pkgs.writeText "privacyidea.cfg" ''
|
||||
SUPERUSER_REALM = [ '${concatStringsSep "', '" cfg.superuserRealm}' ]
|
||||
SQLALCHEMY_DATABASE_URI = 'postgresql:///privacyidea'
|
||||
SECRET_KEY = '${cfg.secretKey}'
|
||||
PI_PEPPER = '${cfg.pepper}'
|
||||
PI_ENCFILE = '${cfg.encFile}'
|
||||
PI_AUDIT_KEY_PRIVATE = '${cfg.auditKeyPrivate}'
|
||||
PI_AUDIT_KEY_PUBLIC = '${cfg.auditKeyPublic}'
|
||||
PI_LOGCONFIG = '${logCfg}'
|
||||
${cfg.extraConfig}
|
||||
'';
|
||||
|
||||
in
|
||||
|
||||
{
|
||||
options = {
|
||||
services.privacyidea = {
|
||||
enable = mkEnableOption "PrivacyIDEA";
|
||||
|
||||
stateDir = mkOption {
|
||||
type = types.str;
|
||||
default = "/var/lib/privacyidea";
|
||||
description = ''
|
||||
Directory where all PrivacyIDEA files will be placed by default.
|
||||
'';
|
||||
};
|
||||
|
||||
superuserRealm = mkOption {
|
||||
type = types.listOf types.str;
|
||||
default = [ "super" "administrators" ];
|
||||
description = ''
|
||||
The realm where users are allowed to login as administrators.
|
||||
'';
|
||||
};
|
||||
|
||||
secretKey = mkOption {
|
||||
type = types.str;
|
||||
example = "t0p s3cr3t";
|
||||
description = ''
|
||||
This is used to encrypt the auth_token.
|
||||
'';
|
||||
};
|
||||
|
||||
pepper = mkOption {
|
||||
type = types.str;
|
||||
example = "Never know...";
|
||||
description = ''
|
||||
This is used to encrypt the admin passwords.
|
||||
'';
|
||||
};
|
||||
|
||||
encFile = mkOption {
|
||||
type = types.str;
|
||||
default = "${cfg.stateDir}/enckey";
|
||||
description = ''
|
||||
This is used to encrypt the token data and token passwords.
|
||||
'';
|
||||
};
|
||||
|
||||
auditKeyPrivate = mkOption {
|
||||
type = types.str;
|
||||
default = "${cfg.stateDir}/private.pem";
|
||||
description = ''
|
||||
Private Key for signing the audit log.
|
||||
'';
|
||||
};
|
||||
|
||||
auditKeyPublic = mkOption {
|
||||
type = types.str;
|
||||
default = "${cfg.stateDir}/public.pem";
|
||||
description = ''
|
||||
Public key for checking signatures of the audit log.
|
||||
'';
|
||||
};
|
||||
|
||||
adminPasswordFile = mkOption {
|
||||
type = types.path;
|
||||
description = "File containing password for the admin user";
|
||||
};
|
||||
|
||||
adminEmail = mkOption {
|
||||
type = types.str;
|
||||
example = "admin@example.com";
|
||||
description = "Mail address for the admin user";
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = "";
|
||||
description = ''
|
||||
Extra configuration options for pi.cfg.
|
||||
'';
|
||||
};
|
||||
|
||||
user = mkOption {
|
||||
type = types.str;
|
||||
default = "privacyidea";
|
||||
description = "User account under which PrivacyIDEA runs.";
|
||||
};
|
||||
|
||||
group = mkOption {
|
||||
type = types.str;
|
||||
default = "privacyidea";
|
||||
description = "Group account under which PrivacyIDEA runs.";
|
||||
};
|
||||
|
||||
ldap-proxy = {
|
||||
enable = mkEnableOption "PrivacyIDEA LDAP Proxy";
|
||||
|
||||
configFile = mkOption {
|
||||
type = types.path;
|
||||
default = "";
|
||||
description = ''
|
||||
Path to PrivacyIDEA LDAP Proxy configuration (proxy.ini).
|
||||
'';
|
||||
};
|
||||
|
||||
user = mkOption {
|
||||
type = types.str;
|
||||
default = "pi-ldap-proxy";
|
||||
description = "User account under which PrivacyIDEA LDAP proxy runs.";
|
||||
};
|
||||
|
||||
group = mkOption {
|
||||
type = types.str;
|
||||
default = "pi-ldap-proxy";
|
||||
description = "Group account under which PrivacyIDEA LDAP proxy runs.";
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
config = mkMerge [
|
||||
|
||||
(mkIf cfg.enable {
|
||||
|
||||
environment.systemPackages = [ python.pkgs.privacyidea ];
|
||||
|
||||
services.postgresql.enable = mkDefault true;
|
||||
|
||||
systemd.services.privacyidea = let
|
||||
piuwsgi = pkgs.writeText "uwsgi.json" (builtins.toJSON {
|
||||
uwsgi = {
|
||||
plugins = [ "python3" ];
|
||||
pythonpath = "${penv}/${uwsgi.python3.sitePackages}";
|
||||
socket = "/run/privacyidea/socket";
|
||||
uid = cfg.user;
|
||||
gid = cfg.group;
|
||||
chmod-socket = 770;
|
||||
chown-socket = "${cfg.user}:nginx";
|
||||
chdir = cfg.stateDir;
|
||||
wsgi-file = "${penv}/etc/privacyidea/privacyideaapp.wsgi";
|
||||
processes = 4;
|
||||
harakiri = 60;
|
||||
reload-mercy = 8;
|
||||
stats = "/run/privacyidea/stats.socket";
|
||||
max-requests = 2000;
|
||||
limit-as = 1024;
|
||||
reload-on-as = 512;
|
||||
reload-on-rss = 256;
|
||||
no-orphans = true;
|
||||
vacuum = true;
|
||||
};
|
||||
});
|
||||
in {
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
after = [ "postgresql.service" ];
|
||||
path = with pkgs; [ openssl ];
|
||||
environment.PRIVACYIDEA_CONFIGFILE = piCfgFile;
|
||||
preStart = let
|
||||
pi-manage = "${pkgs.sudo}/bin/sudo -u privacyidea -HE ${penv}/bin/pi-manage";
|
||||
pgsu = config.services.postgresql.superUser;
|
||||
psql = config.services.postgresql.package;
|
||||
in ''
|
||||
mkdir -p ${cfg.stateDir} /run/privacyidea
|
||||
chown ${cfg.user}:${cfg.group} -R ${cfg.stateDir} /run/privacyidea
|
||||
if ! test -e "${cfg.stateDir}/db-created"; then
|
||||
${pkgs.sudo}/bin/sudo -u ${pgsu} ${psql}/bin/createuser --no-superuser --no-createdb --no-createrole ${cfg.user}
|
||||
${pkgs.sudo}/bin/sudo -u ${pgsu} ${psql}/bin/createdb --owner ${cfg.user} privacyidea
|
||||
${pi-manage} create_enckey
|
||||
${pi-manage} create_audit_keys
|
||||
${pi-manage} createdb
|
||||
${pi-manage} admin add admin -e ${cfg.adminEmail} -p "$(cat ${cfg.adminPasswordFile})"
|
||||
${pi-manage} db stamp head -d ${penv}/lib/privacyidea/migrations
|
||||
touch "${cfg.stateDir}/db-created"
|
||||
chmod g+r "${cfg.stateDir}/enckey" "${cfg.stateDir}/private.pem"
|
||||
fi
|
||||
${pi-manage} db upgrade -d ${penv}/lib/privacyidea/migrations
|
||||
'';
|
||||
serviceConfig = {
|
||||
Type = "notify";
|
||||
ExecStart = "${uwsgi}/bin/uwsgi --json ${piuwsgi}";
|
||||
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
|
||||
ExecStop = "${pkgs.coreutils}/bin/kill -INT $MAINPID";
|
||||
NotifyAccess = "main";
|
||||
KillSignal = "SIGQUIT";
|
||||
StandardError = "syslog";
|
||||
};
|
||||
};
|
||||
|
||||
users.users.privacyidea = mkIf (cfg.user == "privacyidea") {
|
||||
group = cfg.group;
|
||||
};
|
||||
|
||||
users.groups.privacyidea = mkIf (cfg.group == "privacyidea") {};
|
||||
})
|
||||
|
||||
(mkIf cfg.ldap-proxy.enable {
|
||||
|
||||
systemd.services.privacyidea-ldap-proxy = let
|
||||
ldap-proxy-env = pkgs.python2.withPackages (ps: [ ps.privacyidea-ldap-proxy ]);
|
||||
in {
|
||||
description = "privacyIDEA LDAP proxy";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
serviceConfig = {
|
||||
User = cfg.ldap-proxy.user;
|
||||
Group = cfg.ldap-proxy.group;
|
||||
ExecStart = ''
|
||||
${ldap-proxy-env}/bin/twistd \
|
||||
--nodaemon \
|
||||
--pidfile= \
|
||||
-u ${cfg.ldap-proxy.user} \
|
||||
-g ${cfg.ldap-proxy.group} \
|
||||
ldap-proxy \
|
||||
-c ${cfg.ldap-proxy.configFile}
|
||||
'';
|
||||
Restart = "always";
|
||||
};
|
||||
};
|
||||
|
||||
users.users.pi-ldap-proxy = mkIf (cfg.ldap-proxy.user == "pi-ldap-proxy") {
|
||||
group = cfg.ldap-proxy.group;
|
||||
};
|
||||
|
||||
users.groups.pi-ldap-proxy = mkIf (cfg.ldap-proxy.group == "pi-ldap-proxy") {};
|
||||
})
|
||||
];
|
||||
|
||||
}
|
|
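For reference, a minimal sketch of using the module above from a host configuration; the `enable` option and the `stateDir` default come from the part of the module not shown in this hunk, and the password file path is hypothetical:

```nix
{
  services.privacyidea = {
    enable = true;
    adminEmail = "admin@example.com";
    # Hypothetical path to a file holding the initial admin password:
    adminPasswordFile = "/var/lib/privacyidea/admin-password";
  };
}
```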
@@ -188,7 +188,7 @@ let
  name = "icalevents";
  # Download the plugin from the dokuwiki site
  src = pkgs.fetchurl {
    url = https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip;
    url = "https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip";
    sha256 = "e40ed7dd6bbe7fe3363bbbecb4de481d5e42385b5a0f62f6a6ce6bf3a1f9dfa8";
  };
  sourceRoot = ".";
@@ -216,7 +216,7 @@ let
  name = "bootstrap3";
  # Download the theme from the dokuwiki site
  src = pkgs.fetchurl {
    url = https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip;
    url = "https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip";
    sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6";
  };
  # We need unzip to build this package
@@ -224,7 +224,7 @@ in
    serviceConfig = {
      User = "nobody";
      Group = "nogroup";
      ExecStart = "${pkgs.matterircd.bin}/bin/matterircd ${concatStringsSep " " cfg.matterircd.parameters}";
      ExecStart = "${pkgs.matterircd}/bin/matterircd ${concatStringsSep " " cfg.matterircd.parameters}";
      WorkingDirectory = "/tmp";
      PrivateTmp = true;
      Restart = "always";
@@ -187,7 +187,7 @@ let
    then "/etc/nginx/nginx.conf"
    else configFile;

  execCommand = "${cfg.package}/bin/nginx -c '${configPath}' -p '${cfg.stateDir}'";
  execCommand = "${cfg.package}/bin/nginx -c '${configPath}'";

  vhosts = concatStringsSep "\n" (mapAttrsToList (vhostName: vhost:
    let
@@ -463,11 +463,12 @@ in
      '';
    };

    stateDir = mkOption {
      default = "/var/spool/nginx";
      description = "
        Directory holding all state for nginx to run.
      ";
    enableSandbox = mkOption {
      default = false;
      type = types.bool;
      description = ''
        Starting Nginx web server with additional sandbox/hardening options.
      '';
    };

    user = mkOption {
@@ -636,6 +637,13 @@ in
    };
  };

  imports = [
    (mkRemovedOptionModule [ "services" "nginx" "stateDir" ] ''
      The Nginx log directory has been moved to /var/log/nginx, the cache directory
      to /var/cache/nginx. The option services.nginx.stateDir has been removed.
    '')
  ];

  config = mkIf cfg.enable {
    # TODO: test user supplied config file passes syntax test
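For configurations that still set the removed option, a sketch of the migration implied by the message above (the old value shown is the module's former default):

```nix
{
  # No longer valid after this change:
  # services.nginx.stateDir = "/var/spool/nginx";
  # Logs now live in /var/log/nginx and the cache in /var/cache/nginx,
  # both managed through systemd's LogsDirectory/CacheDirectory settings.
  services.nginx.enable = true;
}
```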
@@ -680,12 +688,6 @@ in
      }
    ];

    systemd.tmpfiles.rules = [
      "d '${cfg.stateDir}' 0750 ${cfg.user} ${cfg.group} - -"
      "d '${cfg.stateDir}/logs' 0750 ${cfg.user} ${cfg.group} - -"
      "Z '${cfg.stateDir}' - ${cfg.user} ${cfg.group} - -"
    ];

    systemd.services.nginx = {
      description = "Nginx Web Server";
      wantedBy = [ "multi-user.target" ];
@@ -708,8 +710,35 @@ in
        # Runtime directory and mode
        RuntimeDirectory = "nginx";
        RuntimeDirectoryMode = "0750";
        # Cache directory and mode
        CacheDirectory = "nginx";
        CacheDirectoryMode = "0750";
        # Logs directory and mode
        LogsDirectory = "nginx";
        LogsDirectoryMode = "0750";
        # Capabilities
        AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" "CAP_SYS_RESOURCE" ];
        CapabilityBoundingSet = [ "CAP_NET_BIND_SERVICE" "CAP_SYS_RESOURCE" ];
        # Security
        NoNewPrivileges = true;
      } // optionalAttrs cfg.enableSandbox {
        # Sandboxing
        ProtectSystem = "strict";
        ProtectHome = mkDefault true;
        PrivateTmp = true;
        PrivateDevices = true;
        ProtectHostname = true;
        ProtectKernelTunables = true;
        ProtectKernelModules = true;
        ProtectControlGroups = true;
        RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
        LockPersonality = true;
        MemoryDenyWriteExecute = !(builtins.any (mod: (mod.allowMemoryWriteExecute or false)) pkgs.nginx.modules);
        RestrictRealtime = true;
        RestrictSUIDSGID = true;
        PrivateMounts = true;
        # System Call Filtering
        SystemCallArchitectures = "native";
      };
    };

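A sketch of opting into the new hardening set from a host configuration; `enableSandbox` defaults to `false`, so existing systems keep the previous behaviour unless they set it:

```nix
{
  services.nginx = {
    enable = true;
    # Turns on the optionalAttrs block above (ProtectSystem, PrivateTmp, ...):
    enableSandbox = true;
  };
}
```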
@@ -91,41 +91,47 @@ in {
      description = "Unit App Server";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      path = with pkgs; [ curl ];
      preStart = ''
        test -f '${cfg.stateDir}/conf.json' || rm -f '${cfg.stateDir}/conf.json'
        [ ! -e '${cfg.stateDir}/conf.json' ] || rm -f '${cfg.stateDir}/conf.json'
      '';
      postStart = ''
        curl -X PUT --data-binary '@${configFile}' --unix-socket '/run/unit/control.unit.sock' 'http://localhost/config'
        ${pkgs.curl}/bin/curl -X PUT --data-binary '@${configFile}' --unix-socket '/run/unit/control.unit.sock' 'http://localhost/config'
      '';
      serviceConfig = {
        Type = "forking";
        PIDFile = "/run/unit/unit.pid";
        ExecStart = ''
          ${cfg.package}/bin/unitd --control 'unix:/run/unit/control.unit.sock' --pid '/run/unit/unit.pid' \
            --log '${cfg.logDir}/unit.log' --state '${cfg.stateDir}' --no-daemon \
            --log '${cfg.logDir}/unit.log' --state '${cfg.stateDir}' \
            --user ${cfg.user} --group ${cfg.group}
        '';
        # User and group
        User = cfg.user;
        Group = cfg.group;
        # Capabilities
        AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" "CAP_SETGID" "CAP_SETUID" ];
        ExecStop = ''
          ${pkgs.curl}/bin/curl -X DELETE --unix-socket '/run/unit/control.unit.sock' 'http://localhost/config'
        '';
        # Runtime directory and mode
        RuntimeDirectory = "unit";
        RuntimeDirectoryMode = "0750";
        # Access write directories
        ReadWritePaths = [ cfg.stateDir cfg.logDir ];
        # Security
        NoNewPrivileges = true;
        # Sandboxing
        ProtectSystem = "full";
        ProtectSystem = "strict";
        ProtectHome = true;
        RuntimeDirectory = "unit";
        RuntimeDirectoryMode = "0750";
        PrivateTmp = true;
        PrivateDevices = true;
        ProtectHostname = true;
        ProtectKernelTunables = true;
        ProtectKernelModules = true;
        ProtectControlGroups = true;
        RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
        LockPersonality = true;
        MemoryDenyWriteExecute = true;
        RestrictRealtime = true;
        RestrictSUIDSGID = true;
        PrivateMounts = true;
        # System Call Filtering
        SystemCallArchitectures = "native";
      };
    };

@@ -109,7 +109,7 @@ in

  # Without this, elementary LightDM greeter will pre-select non-existent `default` session
  # https://github.com/elementary/greeter/issues/368
  services.xserver.displayManager.defaultSession = "pantheon";
  services.xserver.displayManager.defaultSession = mkDefault "pantheon";

  services.xserver.displayManager.sessionCommands = ''
    if test "$XDG_CURRENT_DESKTOP" = "Pantheon"; then
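Because the session is now set with `mkDefault`, user configuration can override it without `mkForce`; a sketch with an illustrative session name:

```nix
{
  # Overrides the Pantheon module's mkDefault value; the session name is illustrative.
  services.xserver.displayManager.defaultSession = "none+i3";
}
```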
Some files were not shown because too many files have changed in this diff.