Project import generated by Copybara.

GitOrigin-RevId: a930f7da84786807bb105df40e76b541604c3e72
This commit is contained in:
Default email 2021-09-22 23:38:15 +08:00
parent 88abffb7d2
commit d9e13ed064
703 changed files with 17516 additions and 12948 deletions


@@ -1,8 +1,16 @@
# Fetchers {#chap-pkgs-fetchers}

When using Nix, you will frequently need to download source code and other files from the internet. For this purpose, Nix provides the [_fixed output derivation_](https://nixos.org/manual/nix/stable/#fixed-output-drvs) feature and Nixpkgs provides various functions that implement the actual fetching from various protocols and services.

## Caveats
Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument.
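For instance (a hypothetical update; the URL is made up), one common way to make sure the new contents are actually fetched is to temporarily substitute a fake hash, so that Nix fails with a mismatch that reports the real one:

```nix
{ fetchurl, lib }:

fetchurl {
  # After bumping 2.11 -> 2.12 in the URL, the old sha256 would silently
  # keep returning the cached 2.11 tarball. Setting the hash to
  # lib.fakeSha256 forces a hash-mismatch error that prints the new hash.
  url = "https://example.org/releases/example-2.12.tar.gz";
  sha256 = lib.fakeSha256;
}
```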
For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
## `fetchurl` and `fetchzip` {#fetchurl}
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
```nix
{ stdenv, fetchurl }:
@@ -20,7 +28,7 @@ The main difference between `fetchurl` and `fetchzip` is in how they store the c

`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.

Other fetcher functions allow you to add source code directly from a VCS such as subversion or git. These are mostly named straightforwardly after the command used with the VCS system. Because they give you a working repository, they act most like `fetchzip`. Most other fetchers return a directory rather than a single file.

## `fetchsvn` {#fetchsvn}
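As a sketch of the typical argument shape these VCS fetchers share (the repository URL, revision, and hash below are placeholders, not a real project):

```nix
{ fetchgit }:

fetchgit {
  # Placeholders: point these at a real repository, commit, and hash.
  url = "https://example.org/some-project.git";
  # Pinning an exact commit keeps the fetch reproducible.
  rev = "0000000000000000000000000000000000000000";
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```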


@@ -7,4 +7,5 @@
</para>
<xi:include href="special/fhs-environments.section.xml" />
<xi:include href="special/mkshell.section.xml" />
<xi:include href="special/invalidateFetcherByDrvHash.section.xml" />
</chapter>


@@ -0,0 +1,31 @@
## `invalidateFetcherByDrvHash` {#sec-pkgs-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch every time the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:

```nix
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
  name = "nix-source";
  url = "https://github.com/NixOS/nix";
  rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
  sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```


@@ -28,12 +28,12 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
* `domain` (optional, defaults to `"github.com"`), domains including the strings `"github"` or `"gitlab"` in their names are automatically supported, otherwise, one must change the `fetcher` argument to support them (cf `pkgs/development/coq-modules/heq/default.nix` for an example),
* `releaseRev` (optional, defaults to `(v: v)`), provides a default mapping from release names to revision hashes/branch names/tags,
* `displayVersion` (optional), provides a way to alter the computation of `name` from `pname`, by explaining how to display version numbers,
* `namePrefix` (optional, defaults to `[ "coq" ]`), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
* `extraBuildInputs` (optional), by default `buildInputs` just contains `coq`, this allows to add more build inputs,
* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For a finer grain control, the `coq.ocamlPackages` attribute can be used in `extraBuildInputs` to depend on the same package set Coq was built against.
* `useDune2ifVersion` (optional, defaults to `(x: false)`) uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2ifVersion = versions.isGe "1.1"` will use dune if the version of the package is greater than or equal to `"1.1"`,
* `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true, the presence of this attribute overrides the behavior of the previous one.
* `opam-name` (optional, defaults to concatenating with a dash separator the components of `namePrefix` and `pname`), name of the Dune package to build.
* `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
* `extraInstallFlags` (optional), allows to extend `installFlags` which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
* `setCOQBIN` (optional, defaults to `true`), by default, the environment variable `$COQBIN` is set to the current Coq's binary, but one can disable this behavior by setting it to `false`,
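Putting a few of these arguments together, a sketch of a Coq library derivation might look like the following (the package name, owner, version, and hash are all illustrative placeholders):

```nix
{ mkCoqDerivation, coq, lib }:

mkCoqDerivation {
  # All values below are placeholders, not a real Coq package.
  pname = "example-lib";
  owner = "example-org";
  defaultVersion = "1.0.0";
  release."1.0.0".sha256 = "0000000000000000000000000000000000000000000000000000";

  # Pull in OCaml dependencies, for extensions with plugin code.
  mlPlugin = true;
}
```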


@@ -24,6 +24,7 @@
<xi:include href="lua.section.xml" />
<xi:include href="maven.section.xml" />
<xi:include href="ocaml.section.xml" />
<xi:include href="octave.section.xml" />
<xi:include href="perl.section.xml" />
<xi:include href="php.section.xml" />
<xi:include href="python.section.xml" />


@@ -0,0 +1,100 @@
# Octave {#sec-octave}

## Introduction {#ssec-octave-introduction}

Octave is a modular scientific programming language and environment.
Most of the packages listed on Octave's [website](https://octave.sourceforge.io/packages.php) are packaged in nixpkgs.
## Structure {#ssec-octave-structure}
All Octave add-on packages are available in two ways:
1. Under the top-level `octave` attribute, as `octave.pkgs`.
2. As a top-level attribute, `octavePackages`.
## Packaging Octave Packages {#ssec-octave-packaging}
Nixpkgs provides a function `buildOctavePackage`, a generic package builder function for any Octave package that complies with Octave's current packaging format.
All Octave packages are defined in [pkgs/top-level/octave-packages.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/octave-packages.nix) rather than `pkgs/all-packages.nix`.
Each package is defined in their own file in the [pkgs/development/octave-modules](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/octave-modules) directory.
Octave packages are made available in `all-packages.nix` through both the attributes `octavePackages` and `octave.pkgs`.
You can test building an Octave package as follows:
```ShellSession
$ nix-build -A octavePackages.symbolic
```
When building Octave packages with `nix-build`, the `buildOctavePackage` function adds `octave-` followed by the Octave version to the start of the package's name attribute.
This can be required when installing the package using `nix-env`:
```ShellSession
$ nix-env -i octave-6.2.0-symbolic
```
Alternatively, you can install it using the attribute name:
```ShellSession
$ nix-env -i -A octavePackages.symbolic
```
You can build Octave with packages by using the `withPackages` passed-through function.
```ShellSession
$ nix-shell -p 'octave.withPackages (ps: with ps; [ symbolic ])'
```
This will also work in a `shell.nix` file.
```nix
{ pkgs ? import <nixpkgs> { }}:
pkgs.mkShell {
nativeBuildInputs = with pkgs; [
(octave.withPackages (opkgs: with opkgs; [ symbolic ]))
];
}
```
### `buildOctavePackage` Steps {#sssec-buildOctavePackage-steps}
The `buildOctavePackage` function does several things to make sure things work properly.

1. Sets the environment variable `OCTAVE_HISTFILE` to `/dev/null` during package compilation so that the commands run through the Octave interpreter directly are not logged.
2. Skips the configuration step, because the packages are stored as gzipped tarballs, which Octave itself handles directly.
3. Changes the hierarchy of the tarball so that only a single directory is at the top-most level of the tarball.
4. Uses Octave itself to run the `pkg build` command, which unzips the tarball, extracts the necessary files written in Octave, compiles any code written in C++ or Fortran, and places the fully compiled artifact in `$out`.
`buildOctavePackage` is built on top of `stdenv` in a standard way, allowing most things to be customized.
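Based on the steps above, a minimal package definition might look like the following sketch (the package name, URL, and hash are illustrative placeholders, not a real Octave package):

```nix
{ buildOctavePackage, fetchurl }:

buildOctavePackage rec {
  # Placeholders: substitute a real package name, URL and hash.
  pname = "example-pkg";
  version = "1.0.0";

  # Octave packages are distributed as gzipped tarballs, which the
  # builder hands directly to Octave's own `pkg build` command.
  src = fetchurl {
    url = "https://example.org/${pname}-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
}
```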
### Handling Dependencies {#sssec-octave-handling-dependencies}
In Octave packages, there are four sets of dependencies that can be specified:
`nativeBuildInputs`
: Just like other packages, `nativeBuildInputs` is intended for architecture-dependent build-time-only dependencies.
`buildInputs`
: Like other packages, `buildInputs` is intended for architecture-independent build-time-only dependencies.
`propagatedBuildInputs`
: Similar to other packages, `propagatedBuildInputs` is intended for packages that are required for both building and running of the package.
See [Symbolic](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/octave-modules/symbolic/default.nix) for how this works and why it is needed.
`requiredOctavePackages`
: This is a special dependency field declaring other Octave packages that this package depends on, ensuring that they are made available simultaneously when the package is loaded in Octave.
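As a sketch of how these four dependency sets might appear together in a package definition (the package name, URL, hash, and choice of dependencies are illustrative):

```nix
{ buildOctavePackage, fetchurl, gfortran, blas, symbolic }:

buildOctavePackage {
  # Placeholders, not a real package.
  pname = "example-pkg";
  version = "1.0.0";
  src = fetchurl {
    url = "https://example.org/example-pkg-1.0.0.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # Architecture-dependent, build-time-only tools (e.g. compilers).
  nativeBuildInputs = [ gfortran ];

  # Build-time libraries the package is compiled against.
  buildInputs = [ blas ];

  # Octave packages that must be loadable whenever this one is loaded.
  requiredOctavePackages = [ symbolic ];
}
```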
### Installing Octave Packages {#sssec-installing-octave-packages}
By default, the `buildOctavePackage` function does _not_ install the requested package into Octave for use.
The function will only build the requested package.
This is due to Octave maintaining a text-based database of which packages are installed where.
To this end, when all the requested packages have been built, the Octave package and all its add-on packages are put together into an environment, similar to Python environments.
1. First, all the Octave binaries are wrapped with the environment variable `OCTAVE_SITE_INITFILE` set to a file in `$out`, which is required for Octave to be able to find the non-standard package database location.
2. Because of the way `buildEnv` works, all tarballs that are present (which should be all Octave packages to install) should be removed.
3. The path down to the default install location of Octave packages is recreated so that Nix-operated Octave can install the packages.
4. Install the packages into the `$out` environment while writing package entries to the database file.
This database file is unique for each different (according to Nix) environment invocation.
5. Rewrite the Octave-wide startup file to read from the list of packages installed in that particular environment.
6. Wrap any programs that are required by the Octave packages so that they work with all the paths defined within the environment.


@@ -20,7 +20,7 @@ or use Mozilla's [Rust nightlies overlay](#using-the-rust-nightlies-overlay).

Rust applications are packaged by using the `buildRustPackage` helper from `rustPlatform`:

```nix
{ lib, fetchFromGitHub, rustPlatform }:

rustPlatform.buildRustPackage rec {
  pname = "ripgrep";
@@ -116,22 +116,44 @@ is updated after every change to `Cargo.lock`. Therefore,
a `Cargo.lock` file using the `cargoLock` argument. For example:
```nix
rustPlatform.buildRustPackage {
  pname = "myproject";
  version = "1.0.0";

  cargoLock = {
    lockFile = ./Cargo.lock;
  };

  # ...
}
```
This will retrieve the dependencies using fixed-output derivations from
the specified lockfile.

One caveat is that `Cargo.lock` cannot be patched in the `patchPhase`
because it runs after the dependencies have already been fetched. If
you need to patch or generate the lockfile you can alternatively set
`cargoLock.lockFileContents` to a string of its contents:
```nix
rustPlatform.buildRustPackage {
  pname = "myproject";
  version = "1.0.0";

  cargoLock = let
    # `f` stands for whatever fixup the lockfile contents need.
    fixupLockFile = path: f (builtins.readFile path);
  in {
    lockFileContents = fixupLockFile ./Cargo.lock;
  };

  # ...
}
```
Note that setting `cargoLock.lockFile` or `cargoLock.lockFileContents`
doesn't add a `Cargo.lock` to your `src`, and a `Cargo.lock` is still
required to build a Rust package. A simple fix is to use:
```nix
postPatch = ''


@@ -79,7 +79,7 @@ A commonly adopted convention in `nixpkgs` is that executables provided by the p

The `glibc` package is a deliberate single exception to the “binaries first” convention. The `glibc` has `libs` as its first output allowing the libraries provided by `glibc` to be referenced directly (e.g. `${stdenv.glibc}/lib/ld-linux-x86-64.so.2`). The executables provided by `glibc` can be accessed via its `bin` attribute (e.g. `${stdenv.glibc.bin}/bin/ldd`).

The reason `glibc` deviates from the convention is that referencing a library provided by `glibc` is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of `glibc` libraries from Nix packages (please see the documentation on [patchelf](https://github.com/NixOS/patchelf) for more details).

### File type groups {#multiple-output-file-type-groups}


@@ -1853,6 +1853,12 @@
    githubId = 1762540;
    name = "Changlin Li";
  };
  chanley = {
    email = "charlieshanley@gmail.com";
    github = "charlieshanley";
    githubId = 8228888;
    name = "Charlie Hanley";
  };
  CharlesHD = {
    email = "charleshdespointes@gmail.com";
    github = "CharlesHD";
@@ -4441,6 +4447,12 @@
      fingerprint = "D618 7A03 A40A 3D56 62F5 4B46 03EF BF83 9A5F DC15";
    }];
  };
  hleboulanger = {
    email = "hleboulanger@protonmail.com";
    name = "Harold Leboulanger";
    github = "thbkrhsw";
    githubId = 33122;
  };
  hlolli = {
    email = "hlolli@gmail.com";
    github = "hlolli";
@@ -4565,6 +4577,16 @@
    githubId = 2789926;
    name = "Imran Hossain";
  };
  iagoq = {
    email = "18238046+iagocq@users.noreply.github.com";
    github = "iagocq";
    githubId = 18238046;
    name = "Iago Manoel Brito";
    keys = [{
      longkeyid = "rsa4096/0x35D39F9A9A1BC8DA";
      fingerprint = "DF90 9D58 BEE4 E73A 1B8C 5AF3 35D3 9F9A 9A1B C8DA";
    }];
  };
  iammrinal0 = {
    email = "nixpkgs@mrinalpurohit.in";
    github = "iammrinal0";
@@ -9182,6 +9204,12 @@
    githubId = 546296;
    name = "Eric Ren";
  };
  renesat = {
    name = "Ivan Smolyakov";
    email = "smol.ivan97@gmail.com";
    github = "renesat";
    githubId = 11363539;
  };
  renzo = {
    email = "renzocarbonara@gmail.com";
    github = "k0001";
@@ -9846,12 +9874,6 @@
    githubId = 11613056;
    name = "Scott Dier";
  };
  sdll = {
    email = "sasha.delly@gmail.com";
    github = "sdll";
    githubId = 17913919;
    name = "Sasha Illarionov";
  };
  SeanZicari = {
    email = "sean.zicari@gmail.com";
    github = "SeanZicari";


@@ -0,0 +1,10 @@
# Nix script to calculate the Haskell dependencies of every haskellPackage. Used by ./hydra-report.hs.
let
  pkgs = import ../../.. {};
  inherit (pkgs) lib;
  getDeps = _: pkg: {
    deps = builtins.filter (x: !isNull x) (map (x: x.pname or null) (pkg.propagatedBuildInputs or []));
    broken = (pkg.meta.hydraPlatforms or [null]) == [];
  };
in
lib.mapAttrs getDeps pkgs.haskellPackages


@@ -26,6 +26,8 @@ Because step 1) is quite expensive and takes roughly ~5 minutes the result is ca
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TupleSections #-}
{-# OPTIONS_GHC -Wall #-}
{-# LANGUAGE ViewPatterns #-}

import Control.Monad (forM_, (<=<))
import Control.Monad.Trans (MonadIO (liftIO))
@@ -41,7 +43,7 @@ import Data.List.NonEmpty (NonEmpty, nonEmpty)
import qualified Data.List.NonEmpty as NonEmpty
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import Data.Maybe (fromMaybe, mapMaybe, isNothing)
import Data.Monoid (Sum (Sum, getSum))
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq
@@ -70,6 +72,12 @@ import System.Directory (XdgDirectory (XdgCache), getXdgDirectory)
import System.Environment (getArgs)
import System.Process (readProcess)
import Prelude hiding (id)
import Data.List (sortOn)
import Control.Concurrent.Async (concurrently)
import Control.Exception (evaluate)
import qualified Data.IntMap.Strict as IntMap
import qualified Data.IntSet as IntSet
import Data.Bifunctor (second)
newtype JobsetEvals = JobsetEvals
  { evals :: Seq Eval
@@ -134,20 +142,17 @@ hydraEvalCommand = "hydra-eval-jobs"

hydraEvalParams :: [String]
hydraEvalParams = ["-I", ".", "pkgs/top-level/release-haskell.nix"]

nixExprCommand :: FilePath
nixExprCommand = "nix-instantiate"

nixExprParams :: [String]
nixExprParams = ["--eval", "--strict", "--json"]
-- | This newtype is used to parse a Hydra job output from @hydra-eval-jobs@.
-- The only field we are interested in is @maintainers@, which is why this
-- is just a newtype.
--
-- Note that there are occasionally jobs that don't have a maintainers
-- field, which is why this has to be @Maybe Text@.
newtype Maintainers = Maintainers { maintainers :: Maybe Text }
  deriving stock (Generic, Show)
@@ -195,13 +200,49 @@ type EmailToGitHubHandles = Map Text Text
-- @@
type MaintainerMap = Map Text (NonEmpty Text)

-- | Information about a package which lists its dependencies and whether the
-- package is marked broken.
data DepInfo = DepInfo {
  deps :: Set Text,
  broken :: Bool
  }
  deriving stock (Generic, Show)
  deriving anyclass (FromJSON, ToJSON)
-- | Map from package names to their DepInfo. This is the data we get out of a
-- nix call.
type DependencyMap = Map Text DepInfo
-- | Map from package names to its broken state, number of reverse dependencies (fst) and
-- unbroken reverse dependencies (snd).
type ReverseDependencyMap = Map Text (Int, Int)
-- | Calculate the (unbroken) reverse dependencies of a package by transitively
-- going through all packages if it's a dependency of them.
calculateReverseDependencies :: DependencyMap -> ReverseDependencyMap
calculateReverseDependencies depMap = Map.fromDistinctAscList $ zip keys (zip (rdepMap False) (rdepMap True))
 where
  -- This code tries to efficiently invert the dependency map and calculate
  -- its transitive closure by internally identifying every pkg with its index
  -- in the package list and then using memoization.
  keys = Map.keys depMap
  pkgToIndexMap = Map.fromDistinctAscList (zip keys [0..])
  intDeps = zip [0..] $ (\DepInfo{broken,deps} -> (broken,mapMaybe (`Map.lookup` pkgToIndexMap) $ Set.toList deps)) <$> Map.elems depMap
  rdepMap onlyUnbroken = IntSet.size <$> resultList
   where
    resultList = go <$> [0..]
    oneStepMap = IntMap.fromListWith IntSet.union $ (\(key,(_,deps)) -> (,IntSet.singleton key) <$> deps) <=< filter (\(_, (broken,_)) -> not (broken && onlyUnbroken)) $ intDeps
    go pkg = IntSet.unions (oneStep:((resultList !!) <$> IntSet.toList oneStep))
     where oneStep = IntMap.findWithDefault mempty pkg oneStepMap
-- | Generate a mapping of Hydra job names to maintainer GitHub handles. Calls
-- hydra-eval-jobs and the nix script ./maintainer-handles.nix.
getMaintainerMap :: IO MaintainerMap
getMaintainerMap = do
  hydraJobs :: HydraJobs <-
    readJSONProcess hydraEvalCommand hydraEvalParams "Failed to decode hydra-eval-jobs output: "
  handlesMap :: EmailToGitHubHandles <-
    readJSONProcess nixExprCommand ("maintainers/scripts/haskell/maintainer-handles.nix":nixExprParams) "Failed to decode nix output for lookup of github handles: "
  pure $ Map.mapMaybe (splitMaintainersToGitHubHandles handlesMap) hydraJobs
  where
    -- Split a comma-separated string of Maintainers into a NonEmpty list of
@@ -211,6 +252,12 @@ getMaintainerMap = do
    splitMaintainersToGitHubHandles handlesMap (Maintainers maint) =
      nonEmpty . mapMaybe (`Map.lookup` handlesMap) . Text.splitOn ", " $ fromMaybe "" maint
-- | Get a map of all dependencies of every package by calling the nix
-- script ./dependencies.nix.
getDependencyMap :: IO DependencyMap
getDependencyMap =
  readJSONProcess nixExprCommand ("maintainers/scripts/haskell/dependencies.nix":nixExprParams) "Failed to decode nix output for lookup of dependencies: "
-- | Run a process that produces JSON on stdout and decode the JSON to a
-- data type.
--
@@ -219,11 +266,10 @@ readJSONProcess
  :: FromJSON a
  => FilePath -- ^ Filename of executable.
  -> [String] -- ^ Arguments
  -> String -- ^ String to prefix to JSON-decode error.
  -> IO a
readJSONProcess exe args err = do
  output <- readProcess exe args ""
  let eitherDecodedOutput = eitherDecodeStrict' . encodeUtf8 . Text.pack $ output
  case eitherDecodedOutput of
    Left decodeErr -> error $ err <> decodeErr <> "\nRaw: '" <> take 1000 output <> "'"
@@ -264,7 +310,13 @@ platformIcon (Platform x) = case x of
data BuildResult = BuildResult {state :: BuildState, id :: Int} deriving (Show, Eq, Ord)
newtype Platform = Platform {platform :: Text} deriving (Show, Eq, Ord)
newtype Table row col a = Table (Map (row, col) a)
data SummaryEntry = SummaryEntry {
  summaryBuilds :: Table Text Platform BuildResult,
  summaryMaintainers :: Set Text,
  summaryReverseDeps :: Int,
  summaryUnbrokenReverseDeps :: Int
  }
type StatusSummary = Map Text SummaryEntry
instance (Ord row, Ord col, Semigroup a) => Semigroup (Table row col a) where
  Table l <> Table r = Table (Map.unionWith (<>) l r)
@@ -275,11 +327,11 @@ instance Functor (Table row col) where
instance Foldable (Table row col) where
  foldMap f (Table a) = foldMap f a
buildSummary :: MaintainerMap -> ReverseDependencyMap -> Seq Build -> StatusSummary
buildSummary maintainerMap reverseDependencyMap = foldl (Map.unionWith unionSummary) Map.empty . fmap toSummary
 where
  unionSummary (SummaryEntry (Table lb) lm lr lu) (SummaryEntry (Table rb) rm rr ru) = SummaryEntry (Table $ Map.union lb rb) (lm <> rm) (max lr rr) (max lu ru)
  toSummary Build{finished, buildstatus, job, id, system} = Map.singleton name (SummaryEntry (Table (Map.singleton (set, Platform system) (BuildResult state id))) maintainers reverseDeps unbrokenReverseDeps)
   where
    state :: BuildState
    state = case (finished, buildstatus) of
@ -297,6 +349,7 @@ buildSummary maintainerMap = foldl (Map.unionWith unionSummary) Map.empty . fmap
name = maybe packageName NonEmpty.last splitted name = maybe packageName NonEmpty.last splitted
set = maybe "" (Text.intercalate "." . NonEmpty.init) splitted set = maybe "" (Text.intercalate "." . NonEmpty.init) splitted
maintainers = maybe mempty (Set.fromList . toList) (Map.lookup job maintainerMap) maintainers = maybe mempty (Set.fromList . toList) (Map.lookup job maintainerMap)
(reverseDeps, unbrokenReverseDeps) = Map.findWithDefault (0,0) name reverseDependencyMap
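Each build contributes one singleton map, and `buildSummary` merges collisions per package with `unionSummary`: maintainer sets are unioned, and the reverse-dependency counts (the same for every build of a given package) are combined with `max`. A simplified, self-contained sketch of that fold, using a tuple in place of `SummaryEntry` (names and data here are illustrative only):

```haskell
import Data.Foldable (foldl')
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- (maintainers, reverseDeps): a cut-down stand-in for SummaryEntry.
type Entry = ([String], Int)

-- Mirrors unionSummary: union the maintainer lists, keep the max count.
combine :: Entry -> Entry -> Entry
combine (lm, lr) (rm, rr) = (lm <> rm, max lr rr)

-- Mirrors the foldl (Map.unionWith …) over per-build singleton maps.
summarize :: [(String, Entry)] -> Map String Entry
summarize = foldl' (\acc (k, v) -> Map.unionWith combine acc (Map.singleton k v)) Map.empty

main :: IO ()
main = print (summarize
  [ ("aeson", (["alice"], 120))
  , ("aeson", (["bob"],   120))
  , ("lens",  ([],         80)) ])
```

Using `max` rather than `(+)` for the counts is what keeps the merge idempotent: seeing a package's count twice must not double it.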
 readBuildReports :: IO (Eval, UTCTime, Seq Build)
 readBuildReports = do
@@ -339,25 +392,29 @@ makeSearchLink evalId linkLabel query = "[" <> linkLabel <> "](" <> "https://hyd
 statusToNumSummary :: StatusSummary -> NumSummary
 statusToNumSummary = fmap getSum . foldMap (fmap Sum . jobTotals)
-jobTotals :: (Table Text Platform BuildResult, a) -> Table Platform BuildState Int
-jobTotals (Table mapping, _) = getSum <$> Table (Map.foldMapWithKey (\(_, platform) (BuildResult buildstate _) -> Map.singleton (platform, buildstate) (Sum 1)) mapping)
+jobTotals :: SummaryEntry -> Table Platform BuildState Int
+jobTotals (summaryBuilds -> Table mapping) = getSum <$> Table (Map.foldMapWithKey (\(_, platform) (BuildResult buildstate _) -> Map.singleton (platform, buildstate) (Sum 1)) mapping)
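`jobTotals` tallies builds per `(platform, state)` cell by `foldMap`ping one `Sum 1` singleton per build; note that it relies on the custom `Table` Semigroup above, since the default `Map` monoid is a left-biased union and would silently drop counts. The same tally can be sketched with plain maps via `fromListWith` (simplified, illustrative types):

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- Tally (platform, build state) pairs, one pair per build.
totals :: [(String, String)] -> Map (String, String) Int
totals cells = Map.fromListWith (+) [ (cell, 1) | cell <- cells ]

main :: IO ()
main = print (totals
  [ ("x86_64-linux", "Failed")
  , ("x86_64-linux", "Failed")
  , ("aarch64-linux", "Success") ])
```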
 details :: Text -> [Text] -> [Text]
 details summary content = ["<details><summary>" <> summary <> " </summary>", ""] <> content <> ["</details>", ""]
-printBuildSummary :: Eval -> UTCTime -> StatusSummary -> Text
+printBuildSummary :: Eval -> UTCTime -> StatusSummary -> [(Text, Int)] -> Text
 printBuildSummary
 Eval{id, jobsetevalinputs = JobsetEvalInputs{nixpkgs = Nixpkgs{revision}}}
 fetchTime
-summary =
+summary
+topBrokenRdeps =
 Text.unlines $
-headline <> totals
+headline <> [""] <> tldr <> ((" * "<>) <$> (errors <> warnings)) <> [""]
+<> totals
 <> optionalList "#### Maintained packages with build failure" (maintainedList fails)
 <> optionalList "#### Maintained packages with failed dependency" (maintainedList failedDeps)
 <> optionalList "#### Maintained packages with unknown error" (maintainedList unknownErr)
 <> optionalHideableList "#### Unmaintained packages with build failure" (unmaintainedList fails)
 <> optionalHideableList "#### Unmaintained packages with failed dependency" (unmaintainedList failedDeps)
 <> optionalHideableList "#### Unmaintained packages with unknown error" (unmaintainedList unknownErr)
+<> optionalHideableList "#### Top 50 broken packages, sorted by number of reverse dependencies" (brokenLine <$> topBrokenRdeps)
+<> ["","*:arrow_heading_up:: The number of packages that depend (directly or indirectly) on this package (if any). If two numbers are shown the first (lower) number considers only packages which currently have enabled hydra jobs, i.e. are not marked broken. The second (higher) number considers all packages.*",""]
 <> footer
 where
 footer = ["*Report generated with [maintainers/scripts/haskell/hydra-report.hs](https://github.com/NixOS/nixpkgs/blob/haskell-updates/maintainers/scripts/haskell/hydra-report.sh)*"]
@@ -365,7 +422,7 @@ printBuildSummary
 [ "#### Build summary"
 , ""
 ]
-<> printTable "Platform" (\x -> makeSearchLink id (platform x <> " " <> platformIcon x) ("." <> platform x)) (\x -> showT x <> " " <> icon x) showT (statusToNumSummary summary)
+<> printTable "Platform" (\x -> makeSearchLink id (platform x <> " " <> platformIcon x) ("." <> platform x)) (\x -> showT x <> " " <> icon x) showT numSummary
 headline =
 [ "### [haskell-updates build report from hydra](https://hydra.nixos.org/jobset/nixpkgs/haskell-updates)"
 , "*evaluation ["
@@ -380,24 +437,49 @@ printBuildSummary
 <> Text.pack (formatTime defaultTimeLocale "%Y-%m-%d %H:%M UTC" fetchTime)
 <> "*"
 ]
-jobsByState predicate = Map.filter (predicate . foldl' min Success . fmap state . fst) summary
+brokenLine (name, rdeps) = "[" <> name <> "](https://search.nixos.org/packages?channel=unstable&show=haskellPackages." <> name <> "&query=haskellPackages." <> name <> ") :arrow_heading_up: " <> Text.pack (show rdeps)
+numSummary = statusToNumSummary summary
+jobsByState predicate = Map.filter (predicate . worstState) summary
+worstState = foldl' min Success . fmap state . summaryBuilds
 fails = jobsByState (== Failed)
 failedDeps = jobsByState (== DependencyFailed)
 unknownErr = jobsByState (\x -> x > DependencyFailed && x < TimedOut)
-withMaintainer = Map.mapMaybe (\(x, m) -> (x,) <$> nonEmpty (Set.toList m))
+withMaintainer = Map.mapMaybe (\e -> (summaryBuilds e,) <$> nonEmpty (Set.toList (summaryMaintainers e)))
-withoutMaintainer = Map.mapMaybe (\(x, m) -> if Set.null m then Just x else Nothing)
+withoutMaintainer = Map.mapMaybe (\e -> if Set.null (summaryMaintainers e) then Just e else Nothing)
 optionalList heading list = if null list then mempty else [heading] <> list
 optionalHideableList heading list = if null list then mempty else [heading] <> details (showT (length list) <> " job(s)") list
 maintainedList = showMaintainedBuild <=< Map.toList . withMaintainer
-unmaintainedList = showBuild <=< Map.toList . withoutMaintainer
+unmaintainedList = showBuild <=< sortOn (\(snd -> x) -> (negate (summaryUnbrokenReverseDeps x), negate (summaryReverseDeps x))) . Map.toList . withoutMaintainer
-showBuild (name, table) = printJob id name (table, "")
+showBuild (name, entry) = printJob id name (summaryBuilds entry, Text.pack (if summaryReverseDeps entry > 0 then " :arrow_heading_up: " <> show (summaryUnbrokenReverseDeps entry) <>" | "<> show (summaryReverseDeps entry) else ""))
 showMaintainedBuild (name, (table, maintainers)) = printJob id name (table, Text.intercalate " " (fmap ("@" <>) (toList maintainers)))
+tldr = case (errors, warnings) of
+([],[]) -> [":green_circle: **Ready to merge**"]
+([],_) -> [":yellow_circle: **Potential issues**"]
+_ -> [":red_circle: **Branch not mergeable**"]
+warnings =
+if' (Unfinished > maybe Success worstState maintainedJob) "`maintained` jobset failed." <>
+if' (Unfinished == maybe Success worstState mergeableJob) "`mergeable` jobset is not finished." <>
+if' (Unfinished == maybe Success worstState maintainedJob) "`maintained` jobset is not finished."
+errors =
+if' (isNothing mergeableJob) "No `mergeable` job found." <>
+if' (isNothing maintainedJob) "No `maintained` job found." <>
+if' (Unfinished > maybe Success worstState mergeableJob) "`mergeable` jobset failed." <>
+if' (outstandingJobs (Platform "x86_64-linux") > 100) "Too much outstanding jobs on x86_64-linux." <>
+if' (outstandingJobs (Platform "aarch64-linux") > 100) "Too much outstanding jobs on aarch64-linux."
+if' p e = if p then [e] else mempty
+outstandingJobs platform | Table m <- numSummary = Map.findWithDefault 0 (platform, Unfinished) m
+maintainedJob = Map.lookup "maintained" summary
+mergeableJob = Map.lookup "mergeable" summary
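The traffic-light summary hinges on `BuildState`'s `Ord` instance: `worstState` takes the minimum, so constructors must be declared worst-first. A standalone sketch under that assumption (the real `BuildState` in hydra-report.hs has more constructors; this reduced ordering is assumed for illustration):

```haskell
import Data.List (foldl')

-- Worst states first, so that `min` selects the worst (assumed ordering).
data BuildState = Failed | DependencyFailed | TimedOut | Unfinished | Success
  deriving (Show, Eq, Ord)

-- An empty job list is vacuously "Success".
worstState :: [BuildState] -> BuildState
worstState = foldl' min Success

-- Same shape as the if' helper in the diff above: a one-or-zero-element list.
if' :: Bool -> a -> [a]
if' p e = if p then [e] else mempty

-- Errors trump warnings when picking the headline colour.
tldr :: [String] -> [String] -> String
tldr errors warnings = case (errors, warnings) of
  ([], []) -> ":green_circle: Ready to merge"
  ([], _)  -> ":yellow_circle: Potential issues"
  _        -> ":red_circle: Branch not mergeable"

main :: IO ()
main = do
  print (worstState [Success, Unfinished, Failed])
  putStrLn (tldr [] (if' True "jobset is not finished"))
```

Because `if'` returns a list, the warning and error checks compose with plain `(<>)`, and an empty result means "nothing to report".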
 printMaintainerPing :: IO ()
 printMaintainerPing = do
-maintainerMap <- getMaintainerMap
+(maintainerMap, (reverseDependencyMap, topBrokenRdeps)) <- concurrently getMaintainerMap do
+depMap <- getDependencyMap
+rdepMap <- evaluate . calculateReverseDependencies $ depMap
+let tops = take 50 . sortOn (negate . snd) . fmap (second fst) . filter (\x -> maybe False broken $ Map.lookup (fst x) depMap) . Map.toList $ rdepMap
+pure (rdepMap, tops)
 (eval, fetchTime, buildReport) <- readBuildReports
-putStrLn (Text.unpack (printBuildSummary eval fetchTime (buildSummary maintainerMap buildReport)))
+putStrLn (Text.unpack (printBuildSummary eval fetchTime (buildSummary maintainerMap reverseDependencyMap buildReport) topBrokenRdeps))
 printMarkBrokenList :: IO ()
 printMarkBrokenList = do


@@ -0,0 +1,7 @@
# Nix script to look up maintainer github handles from their email address. Used by ./hydra-report.hs.
let
pkgs = import ../../.. {};
maintainers = import ../../maintainer-list.nix;
inherit (pkgs) lib;
mkMailGithubPair = _: maintainer: if maintainer ? github then { "${maintainer.email}" = maintainer.github; } else {};
in lib.zipAttrsWith (_: builtins.head) (lib.mapAttrsToList mkMailGithubPair maintainers)


@@ -0,0 +1,118 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p git gh -I nixpkgs=.
#
# Script to merge the currently open haskell-updates PR into master, bump the
# Stackage version and Hackage versions, and open the next haskell-updates PR.
set -eu -o pipefail
# exit after printing first argument to this function
function die {
# echo the first argument
echo "ERROR: $1"
echo "Aborting!"
exit 1
}
function help {
echo "Usage: $0 HASKELL_UPDATES_PR_NUM"
echo "Merge the currently open haskell-updates PR into master, and open the next one."
echo
echo " -h, --help print this help"
echo " HASKELL_UPDATES_PR_NUM number of the currently open PR on NixOS/nixpkgs"
echo " for the haskell-updates branch"
echo
echo "Example:"
echo " \$ $0 137340"
exit 1
}
# Read in the current haskell-updates PR number from the command line.
while [[ $# -gt 0 ]]; do
key="$1"
case $key in
-h|--help)
help
;;
*)
curr_haskell_updates_pr_num="$1"
shift
;;
esac
done
if [[ -z "${curr_haskell_updates_pr_num-}" ]] ; then
die "You must pass the current haskell-updates PR number as the first argument to this script."
fi
# Make sure you have gh authentication set up.
if ! gh auth status 2>/dev/null ; then
die "You must setup the \`gh\` command. Run \`gh auth login\`."
fi
# Fetch nixpkgs to get an up-to-date origin/haskell-updates branch.
echo "Fetching origin..."
git fetch origin >/dev/null
# Make sure we are currently on a local haskell-updates branch.
curr_branch="$(git rev-parse --abbrev-ref HEAD)"
if [[ "$curr_branch" != "haskell-updates" ]]; then
die "Current branch is not called \"haskell-updates\"."
fi
# Make sure our local haskell-updates branch is on the same commit as
# origin/haskell-updates.
curr_branch_commit="$(git rev-parse haskell-updates)"
origin_haskell_updates_commit="$(git rev-parse origin/haskell-updates)"
if [[ "$curr_branch_commit" != "$origin_haskell_updates_commit" ]]; then
die "Current branch is not at the same commit as origin/haskell-updates"
fi
# Merge the current open haskell-updates PR.
echo "Merging https://github.com/NixOS/nixpkgs/pull/${curr_haskell_updates_pr_num}..."
gh pr merge --repo NixOS/nixpkgs --merge "$curr_haskell_updates_pr_num"
# Update stackage, Hackage hashes, and regenerate Haskell package set
echo "Updating Stackage..."
./maintainers/scripts/haskell/update-stackage.sh --do-commit
echo "Updating Hackage hashes..."
./maintainers/scripts/haskell/update-hackage.sh --do-commit
echo "Regenerating Hackage packages..."
./maintainers/scripts/haskell/regenerate-hackage-packages.sh --do-commit
# Push these new commits to the haskell-updates branch
echo "Pushing commits just created to the haskell-updates branch"
git push
# Open new PR
new_pr_body=$(cat <<EOF
### This Merge
This PR is the regular merge of the \`haskell-updates\` branch into \`master\`.
This branch is being continually built and tested by hydra at https://hydra.nixos.org/jobset/nixpkgs/haskell-updates.
We roughly aim to merge these \`haskell-updates\` PRs at least once every two weeks. See the @NixOS/haskell [team calendar](https://cloud.maralorn.de/apps/calendar/p/Mw5WLnzsP7fC4Zky) for who is currently in charge of this branch.
### haskellPackages Workflow Summary
Our workflow is currently described in [\`pkgs/development/haskell-modules/HACKING.md\`](https://github.com/NixOS/nixpkgs/blob/haskell-updates/pkgs/development/haskell-modules/HACKING.md).
The short version is this:
* We regularly update the Stackage and Hackage pins on \`haskell-updates\` (normally at the beginning of a merge window).
* The community fixes builds of Haskell packages on that branch.
* We aim at at least one merge of \`haskell-updates\` into \`master\` every two weeks.
* We only do the merge if the [\`mergeable\`](https://hydra.nixos.org/job/nixpkgs/haskell-updates/mergeable) job is succeeding on hydra.
* If a [\`maintained\`](https://hydra.nixos.org/job/nixpkgs/haskell-updates/maintained) package is still broken at the time of merge, we will only merge if the maintainer has been pinged 7 days in advance. (If you care about a Haskell package, become a maintainer!)
---
This is the follow-up to #${curr_haskell_updates_pr_num}. Come to [#haskell:nixos.org](https://matrix.to/#/#haskell:nixos.org) if you have any questions.
EOF
)
echo "Opening a PR for the next haskell-updates merge cycle"
gh pr create --repo NixOS/nixpkgs --base master --head haskell-updates --title "haskellPackages: update stackage and hackage" --body "$new_pr_body"


@@ -37,6 +37,13 @@
 PostgreSQL now defaults to major version 13.
 </para>
 </listitem>
+<listitem>
+<para>
+spark now defaults to spark 3, updated from 2. A
+<link xlink:href="https://spark.apache.org/docs/latest/core-migration-guide.html#upgrading-from-core-24-to-30">migration
+guide</link> is available.
+</para>
+</listitem>
 <listitem>
 <para>
 Activation scripts can now opt in to be run when running
@@ -48,6 +55,13 @@
 actions.
 </para>
 </listitem>
+<listitem>
+<para>
+Pantheon desktop has been updated to version 6. Due to changes
+to the screen locker, if locking doesn't work for you, please try
+<literal>gsettings set org.gnome.desktop.lockdown disable-lock-screen false</literal>.
+</para>
+</listitem>
 </itemizedlist>
 </section>
 <section xml:id="sec-release-21.11-new-services">
@@ -114,6 +128,13 @@
 <link linkend="opt-services.vikunja.enable">services.vikunja</link>.
 </para>
 </listitem>
+<listitem>
+<para>
+<link xlink:href="https://github.com/evilsocket/opensnitch">opensnitch</link>,
+an application firewall. Available as
+<link linkend="opt-services.opensnitch.enable">services.opensnitch</link>.
+</para>
+</listitem>
 <listitem>
 <para>
 <link xlink:href="https://www.snapraid.it/">snapraid</link>, a
@@ -182,8 +203,6 @@
 <link linkend="opt-services.isso.enable">isso</link>
 </para>
 </listitem>
-</itemizedlist>
-<itemizedlist spacing="compact">
 <listitem>
 <para>
 <link xlink:href="https://www.navidrome.org/">navidrome</link>,
@@ -192,8 +211,6 @@
 <link linkend="opt-services.navidrome.enable">navidrome</link>.
 </para>
 </listitem>
-</itemizedlist>
-<itemizedlist>
 <listitem>
 <para>
 <link xlink:href="https://docs.fluidd.xyz/">fluidd</link>, a
@@ -250,11 +267,41 @@
 entry</link>.
 </para>
 </listitem>
+<listitem>
+<para>
+<link xlink:href="https://spark.apache.org/">spark</link>, a
+unified analytics engine for large-scale data processing.
+</para>
+</listitem>
+<listitem>
+<para>
+<link xlink:href="https://github.com/JoseExposito/touchegg">touchegg</link>,
+a multi-touch gesture recognizer. Available as
+<link linkend="opt-services.touchegg.enable">services.touchegg</link>.
+</para>
+</listitem>
+<listitem>
+<para>
+<link xlink:href="https://github.com/pantheon-tweaks/pantheon-tweaks">pantheon-tweaks</link>,
+an unofficial system settings panel for Pantheon. Available as
+<link linkend="opt-programs.pantheon-tweaks.enable">programs.pantheon-tweaks</link>.
+</para>
+</listitem>
 </itemizedlist>
 </section>
 <section xml:id="sec-release-21.11-incompatibilities">
 <title>Backward Incompatibilities</title>
 <itemizedlist>
+<listitem>
+<para>
+The <literal>security.wrappers</literal> option now requires
+you to always specify an owner, group and whether the
+setuid/setgid bit should be set. This is motivated by the fact
+that before NixOS 21.11, specifying either setuid or setgid
+but not owner/group resulted in wrappers owned by
+nobody/nogroup, which is unsafe.
+</para>
+</listitem>
 <listitem>
 <para>
 The <literal>paperless</literal> module and package have been
@@ -1016,6 +1063,14 @@ Superuser created successfully.
 attempts from the SSH logs.
 </para>
 </listitem>
+<listitem>
+<para>
+The
+<link xlink:href="options.html#opt-services.xserver.extraLayouts"><literal>services.xserver.extraLayouts</literal></link>
+option no longer causes additional rebuilds when a layout is
+added or modified.
+</para>
+</listitem>
 <listitem>
 <para>
 Sway: The terminal emulator <literal>rxvt-unicode</literal> is
@@ -1067,6 +1122,22 @@ Superuser created successfully.
 be removed in 22.05.
 </para>
 </listitem>
+<listitem>
+<para>
+The dokuwiki module provides a new interface which allows the
+use of different webservers with the new option
+<link xlink:href="options.html#opt-services.dokuwiki.webserver"><literal>services.dokuwiki.webserver</literal></link>.
+Currently <literal>caddy</literal> and
+<literal>nginx</literal> are supported. The definitions of
+dokuwiki sites should now be set in
+<link xlink:href="options.html#opt-services.dokuwiki.sites"><literal>services.dokuwiki.sites</literal></link>.
+</para>
+<para>
+Sites definitions that use the old interface are automatically
+migrated in the new option. This backward compatibility will
+be removed in 22.05.
+</para>
+</listitem>
 <listitem>
 <para>
 The order of NSS (host) modules has been brought in line with


@@ -14,10 +14,14 @@ In addition to numerous new and upgraded packages, this release has the followin
 - PostgreSQL now defaults to major version 13.
+- spark now defaults to spark 3, updated from 2. A [migration guide](https://spark.apache.org/docs/latest/core-migration-guide.html#upgrading-from-core-24-to-30) is available.
 - Activation scripts can now opt in to be run when running `nixos-rebuild dry-activate` and detect the dry activation by reading `$NIXOS_ACTION`.
 This allows activation scripts to output what they would change if the activation was really run.
 The users/modules activation script supports this and outputs some of its actions.
+- Pantheon desktop has been updated to version 6. Due to changes to the screen locker, if locking doesn't work for you, please try `gsettings set org.gnome.desktop.lockdown disable-lock-screen false`.
 ## New Services {#sec-release-21.11-new-services}
 - [btrbk](https://digint.ch/btrbk/index.html), a backup tool for btrfs subvolumes, taking advantage of btrfs specific capabilities to create atomic snapshots and transfer them incrementally to your backup locations. Available as [services.btrbk](options.html#opt-services.brtbk.instances).
@@ -37,6 +41,8 @@ pt-services.clipcat.enable).
 - [vikunja](https://vikunja.io), a to-do list app. Available as [services.vikunja](#opt-services.vikunja.enable).
+- [opensnitch](https://github.com/evilsocket/opensnitch), an application firewall. Available as [services.opensnitch](#opt-services.opensnitch.enable).
 - [snapraid](https://www.snapraid.it/), a backup program for disk arrays.
 Available as [snapraid](#opt-snapraid.enable).
@@ -58,7 +64,7 @@ pt-services.clipcat.enable).
 - [isso](https://posativ.org/isso/), a commenting server similar to Disqus.
 Available as [isso](#opt-services.isso.enable)
-* [navidrome](https://www.navidrome.org/), a personal music streaming server with
+- [navidrome](https://www.navidrome.org/), a personal music streaming server with
 subsonic-compatible api. Available as [navidrome](#opt-services.navidrome.enable).
 - [fluidd](https://docs.fluidd.xyz/), a Klipper web interface for managing 3d printers using moonraker. Available as [fluidd](#opt-services.fluidd.enable).
@@ -78,8 +84,16 @@ subsonic-compatible api. Available as [navidrome](#opt-services.navidrome.enable
 or sends them to a downstream service for further analysis.
 Documented in [its manual entry](#module-services-parsedmarc).
+- [spark](https://spark.apache.org/), a unified analytics engine for large-scale data processing.
+- [touchegg](https://github.com/JoseExposito/touchegg), a multi-touch gesture recognizer. Available as [services.touchegg](#opt-services.touchegg.enable).
+- [pantheon-tweaks](https://github.com/pantheon-tweaks/pantheon-tweaks), an unofficial system settings panel for Pantheon. Available as [programs.pantheon-tweaks](#opt-programs.pantheon-tweaks.enable).
 ## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
+- The `security.wrappers` option now requires you to always specify an owner, group and whether the setuid/setgid bit should be set.
+This is motivated by the fact that before NixOS 21.11, specifying either setuid or setgid but not owner/group resulted in wrappers owned by nobody/nogroup, which is unsafe.
 - The `paperless` module and package have been removed. All users should migrate to the
 successor `paperless-ng` instead. The Paperless project [has been
@@ -309,6 +323,8 @@ To be able to access the web UI this port needs to be opened in the firewall.
 However, if [`services.fail2ban.enable`](options.html#opt-services.fail2ban.enable) is `true`, the `fail2ban` will override the verbosity to `"VERBOSE"`, so that `fail2ban` can observe the failed login attempts from the SSH logs.
+- The [`services.xserver.extraLayouts`](options.html#opt-services.xserver.extraLayouts) option no longer causes additional rebuilds when a layout is added or modified.
 - Sway: The terminal emulator `rxvt-unicode` is no longer installed by default via `programs.sway.extraPackages`. The current default configuration uses `alacritty` (and soon `foot`) so this is only an issue when using a customized configuration and not installing `rxvt-unicode` explicitly.
 - `python3` now defaults to Python 3.9. Python 3.9 introduces many deprecation warnings, please look at the [What's New In Python 3.9 post](https://docs.python.org/3/whatsnew/3.9.html) for more information.
@@ -321,6 +337,10 @@ To be able to access the web UI this port needs to be opened in the firewall.
 Sites definitions that use the old interface are automatically migrated in the new option. This backward compatibility will be removed in 22.05.
+- The dokuwiki module provides a new interface which allows the use of different webservers with the new option [`services.dokuwiki.webserver`](options.html#opt-services.dokuwiki.webserver). Currently `caddy` and `nginx` are supported. The definitions of dokuwiki sites should now be set in [`services.dokuwiki.sites`](options.html#opt-services.dokuwiki.sites).
+Sites definitions that use the old interface are automatically migrated in the new option. This backward compatibility will be removed in 22.05.
 - The order of NSS (host) modules has been brought in line with upstream
 recommendations:


@@ -116,7 +116,11 @@ in
 { console.keyMap = with config.services.xserver;
 mkIf cfg.useXkbConfig
 (pkgs.runCommand "xkb-console-keymap" { preferLocalBuild = true; } ''
-'${pkgs.ckbcomp}/bin/ckbcomp' -model '${xkbModel}' -layout '${layout}' \
+'${pkgs.ckbcomp}/bin/ckbcomp' \
+${optionalString (config.environment.sessionVariables ? XKB_CONFIG_ROOT)
+"-I${config.environment.sessionVariables.XKB_CONFIG_ROOT}"
+} \
+-model '${xkbModel}' -layout '${layout}' \
 -option '${xkbOptions}' -variant '${xkbVariant}' > "$out"
 '');
 }


@@ -84,7 +84,7 @@ in {
 type = types.package;
 default = pkgs.krb5Full;
 defaultText = "pkgs.krb5Full";
-example = literalExample "pkgs.heimdalFull";
+example = literalExample "pkgs.heimdal";
 description = ''
 The Kerberos implementation that will be present in
 <literal>environment.systemPackages</literal> after enabling this


@@ -30,6 +30,15 @@ let
 vulnerabilities, while maintaining good performance.
 '';
 };
+mimalloc = {
+libPath = "${pkgs.mimalloc}/lib/libmimalloc.so";
+description = ''
+A compact and fast general purpose allocator, which may
+optionally be built with mitigations against various heap
+vulnerabilities.
+'';
+};
 };
 providerConf = providers.${cfg.provider};
@@ -91,7 +100,10 @@ in
 "abstractions/base" = ''
 r /etc/ld-nix.so.preload,
 r ${config.environment.etc."ld-nix.so.preload".source},
-mr ${providerLibPath},
+include "${pkgs.apparmorRulesFromClosure {
+name = "mallocLib";
+baseRules = ["mr $path/lib/**.so*"];
+} [ mallocLib ] }"
 '';
 };
 };


@@ -137,9 +137,9 @@ in
 #mongodb = 98; #dynamically allocated as of 2021-09-03
 #openldap = 99; # dynamically allocated as of PR#94610
 #users = 100; # unused
-cgminer = 101;
+# cgminer = 101; #dynamically allocated as of 2021-09-17
 munin = 102;
-logcheck = 103;
+#logcheck = 103; #dynamically allocated as of 2021-09-17
 #nix-ssh = 104; #dynamically allocated as of 2021-09-03
 dictd = 105;
 couchdb = 106;
@@ -153,7 +153,7 @@ in
 #btsync = 113; # unused
 #minecraft = 114; #dynamically allocated as of 2021-09-03
 vault = 115;
-rippled = 116;
+# rippled = 116; #dynamically allocated as of 2021-09-18
 murmur = 117;
 foundationdb = 118;
 newrelic = 119;
@@ -210,17 +210,17 @@ in
 #fleet = 173; # unused
 #input = 174; # unused
 sddm = 175;
-tss = 176;
+#tss = 176; # dynamically allocated as of 2021-09-17
 #memcached = 177; removed 2018-01-03
-ntp = 179;
+#ntp = 179; # dynamically allocated as of 2021-09-17
 zabbix = 180;
 #redis = 181; removed 2018-01-03
-unifi = 183;
+#unifi = 183; dynamically allocated as of 2021-09-17
 uptimed = 184;
-zope2 = 185;
+#zope2 = 185; # dynamically allocated as of 2021-09-18
-ripple-data-api = 186;
+#ripple-data-api = 186; dynamically allocated as of 2021-09-17
 mediatomb = 187;
-rdnssd = 188;
+#rdnssd = 188; #dynamically allocated as of 2021-09-18
 ihaskell = 189;
 i2p = 190;
 lambdabot = 191;
@@ -231,20 +231,20 @@ in
 skydns = 197;
 # ripple-rest = 198; # unused, removed 2017-08-12
 # nix-serve = 199; # unused, removed 2020-12-12
-tvheadend = 200;
+#tvheadend = 200; # dynamically allocated as of 2021-09-18
 uwsgi = 201;
 gitit = 202;
 riemanntools = 203;
 subsonic = 204;
 riak = 205;
-shout = 206;
+#shout = 206; # dynamically allocated as of 2021-09-18
 gateone = 207;
 namecoin = 208;
 #lxd = 210; # unused
 #kibana = 211;# dynamically allocated as of 2021-09-03
 xtreemfs = 212;
 calibre-server = 213;
-heapster = 214;
+#heapster = 214; #dynamically allocated as of 2021-09-17
 bepasty = 215;
 # pumpio = 216; # unused, removed 2018-02-24
 nm-openvpn = 217;
@@ -258,11 +258,11 @@ in
 rspamd = 225;
 # rmilter = 226; # unused, removed 2019-08-22
 cfdyndns = 227;
-gammu-smsd = 228;
+# gammu-smsd = 228; #dynamically allocated as of 2021-09-17
pdnsd = 229; pdnsd = 229;
octoprint = 230; octoprint = 230;
avahi-autoipd = 231; avahi-autoipd = 231;
nntp-proxy = 232; # nntp-proxy = 232; #dynamically allocated as of 2021-09-17
mjpg-streamer = 233; mjpg-streamer = 233;
#radicale = 234;# dynamically allocated as of 2021-09-03 #radicale = 234;# dynamically allocated as of 2021-09-03
hydra-queue-runner = 235; hydra-queue-runner = 235;
@ -276,7 +276,7 @@ in
sniproxy = 244; sniproxy = 244;
nzbget = 245; nzbget = 245;
mosquitto = 246; mosquitto = 246;
toxvpn = 247; #toxvpn = 247; # dynamically allocated as of 2021-09-18
# squeezelite = 248; # DynamicUser = true # squeezelite = 248; # DynamicUser = true
turnserver = 249; turnserver = 249;
#smokeping = 250;# dynamically allocated as of 2021-09-03 #smokeping = 250;# dynamically allocated as of 2021-09-03
@ -524,7 +524,7 @@ in
#fleet = 173; # unused #fleet = 173; # unused
input = 174; input = 174;
sddm = 175; sddm = 175;
tss = 176; #tss = 176; #dynamically allocated as of 2021-09-20
#memcached = 177; # unused, removed 2018-01-03 #memcached = 177; # unused, removed 2018-01-03
#ntp = 179; # unused #ntp = 179; # unused
zabbix = 180; zabbix = 180;

View file

@ -171,6 +171,7 @@
./programs/npm.nix ./programs/npm.nix
./programs/noisetorch.nix ./programs/noisetorch.nix
./programs/oblogout.nix ./programs/oblogout.nix
./programs/pantheon-tweaks.nix
./programs/partition-manager.nix ./programs/partition-manager.nix
./programs/plotinus.nix ./programs/plotinus.nix
./programs/proxychains.nix ./programs/proxychains.nix
@ -201,6 +202,7 @@
./programs/vim.nix ./programs/vim.nix
./programs/wavemon.nix ./programs/wavemon.nix
./programs/waybar.nix ./programs/waybar.nix
./programs/weylus.nix
./programs/wireshark.nix ./programs/wireshark.nix
./programs/wshowkeys.nix ./programs/wshowkeys.nix
./programs/xfs_quota.nix ./programs/xfs_quota.nix
@ -297,6 +299,7 @@
./services/cluster/kubernetes/pki.nix ./services/cluster/kubernetes/pki.nix
./services/cluster/kubernetes/proxy.nix ./services/cluster/kubernetes/proxy.nix
./services/cluster/kubernetes/scheduler.nix ./services/cluster/kubernetes/scheduler.nix
./services/cluster/spark/default.nix
./services/computing/boinc/client.nix ./services/computing/boinc/client.nix
./services/computing/foldingathome/client.nix ./services/computing/foldingathome/client.nix
./services/computing/slurm/slurm.nix ./services/computing/slurm/slurm.nix
@ -341,6 +344,7 @@
./services/desktops/accountsservice.nix ./services/desktops/accountsservice.nix
./services/desktops/bamf.nix ./services/desktops/bamf.nix
./services/desktops/blueman.nix ./services/desktops/blueman.nix
./services/desktops/cpupower-gui.nix
./services/desktops/dleyna-renderer.nix ./services/desktops/dleyna-renderer.nix
./services/desktops/dleyna-server.nix ./services/desktops/dleyna-server.nix
./services/desktops/pantheon/files.nix ./services/desktops/pantheon/files.nix
@ -897,6 +901,7 @@
./services/search/elasticsearch-curator.nix ./services/search/elasticsearch-curator.nix
./services/search/hound.nix ./services/search/hound.nix
./services/search/kibana.nix ./services/search/kibana.nix
./services/search/meilisearch.nix
./services/search/solr.nix ./services/search/solr.nix
./services/security/certmgr.nix ./services/security/certmgr.nix
./services/security/cfssl.nix ./services/security/cfssl.nix
@ -913,6 +918,7 @@
./services/security/nginx-sso.nix ./services/security/nginx-sso.nix
./services/security/oauth2_proxy.nix ./services/security/oauth2_proxy.nix
./services/security/oauth2_proxy_nginx.nix ./services/security/oauth2_proxy_nginx.nix
./services/security/opensnitch.nix
./services/security/privacyidea.nix ./services/security/privacyidea.nix
./services/security/physlock.nix ./services/security/physlock.nix
./services/security/shibboleth-sp.nix ./services/security/shibboleth-sp.nix
@ -1054,6 +1060,7 @@
./services/x11/gdk-pixbuf.nix ./services/x11/gdk-pixbuf.nix
./services/x11/imwheel.nix ./services/x11/imwheel.nix
./services/x11/redshift.nix ./services/x11/redshift.nix
./services/x11/touchegg.nix
./services/x11/urserver.nix ./services/x11/urserver.nix
./services/x11/urxvtd.nix ./services/x11/urxvtd.nix
./services/x11/window-managers/awesome.nix ./services/x11/window-managers/awesome.nix

View file

@ -141,8 +141,15 @@ in
// mkService cfg.atopgpu.enable "atopgpu" [ atop ]; // mkService cfg.atopgpu.enable "atopgpu" [ atop ];
timers = mkTimer cfg.atopRotateTimer.enable "atop-rotate" [ atop ]; timers = mkTimer cfg.atopRotateTimer.enable "atop-rotate" [ atop ];
}; };
security.wrappers =
lib.mkIf cfg.setuidWrapper.enable { atop = { source = "${atop}/bin/atop"; }; }; security.wrappers = lib.mkIf cfg.setuidWrapper.enable {
atop =
{ setuid = true;
owner = "root";
group = "root";
source = "${atop}/bin/atop";
};
};
} }
); );
} }

View file

@ -22,8 +22,10 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ bandwhich ]; environment.systemPackages = with pkgs; [ bandwhich ];
security.wrappers.bandwhich = { security.wrappers.bandwhich = {
source = "${pkgs.bandwhich}/bin/bandwhich"; owner = "root";
group = "root";
capabilities = "cap_net_raw,cap_net_admin+ep"; capabilities = "cap_net_raw,cap_net_admin+ep";
source = "${pkgs.bandwhich}/bin/bandwhich";
}; };
}; };
} }

View file

@ -105,11 +105,15 @@ in
); );
security.wrappers.udhcpc = { security.wrappers.udhcpc = {
owner = "root";
group = "root";
capabilities = "cap_net_raw+p"; capabilities = "cap_net_raw+p";
source = "${pkgs.busybox}/bin/udhcpc"; source = "${pkgs.busybox}/bin/udhcpc";
}; };
security.wrappers.captive-browser = { security.wrappers.captive-browser = {
owner = "root";
group = "root";
capabilities = "cap_net_raw+p"; capabilities = "cap_net_raw+p";
source = pkgs.writeShellScript "captive-browser" '' source = pkgs.writeShellScript "captive-browser" ''
export PREV_CONFIG_HOME="$XDG_CONFIG_HOME" export PREV_CONFIG_HOME="$XDG_CONFIG_HOME"

View file

@ -28,7 +28,9 @@ in {
# "nix-ccache --show-stats" and "nix-ccache --clear" # "nix-ccache --show-stats" and "nix-ccache --clear"
security.wrappers.nix-ccache = { security.wrappers.nix-ccache = {
owner = "nobody";
group = "nixbld"; group = "nixbld";
setuid = false;
setgid = true; setgid = true;
source = pkgs.writeScript "nix-ccache.pl" '' source = pkgs.writeScript "nix-ccache.pl" ''
#!${pkgs.perl}/bin/perl #!${pkgs.perl}/bin/perl

View file

@ -81,7 +81,12 @@ in {
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.wrappers.firejail.source = "${lib.getBin pkgs.firejail}/bin/firejail"; security.wrappers.firejail =
{ setuid = true;
owner = "root";
group = "root";
source = "${lib.getBin pkgs.firejail}/bin/firejail";
};
environment.systemPackages = [ pkgs.firejail ] ++ [ wrappedBins ]; environment.systemPackages = [ pkgs.firejail ] ++ [ wrappedBins ];
}; };

View file

@ -56,6 +56,8 @@ in
polkit.enable = true; polkit.enable = true;
wrappers = mkIf cfg.enableRenice { wrappers = mkIf cfg.enableRenice {
gamemoded = { gamemoded = {
owner = "root";
group = "root";
source = "${pkgs.gamemode}/bin/gamemoded"; source = "${pkgs.gamemode}/bin/gamemoded";
capabilities = "cap_sys_nice+ep"; capabilities = "cap_sys_nice+ep";
}; };

View file

@ -11,8 +11,10 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.iftop ]; environment.systemPackages = [ pkgs.iftop ];
security.wrappers.iftop = { security.wrappers.iftop = {
source = "${pkgs.iftop}/bin/iftop"; owner = "root";
group = "root";
capabilities = "cap_net_raw+p"; capabilities = "cap_net_raw+p";
source = "${pkgs.iftop}/bin/iftop";
}; };
}; };
} }

View file

@ -10,8 +10,10 @@ in {
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.wrappers.iotop = { security.wrappers.iotop = {
source = "${pkgs.iotop}/bin/iotop"; owner = "root";
group = "root";
capabilities = "cap_net_admin+p"; capabilities = "cap_net_admin+p";
source = "${pkgs.iotop}/bin/iotop";
}; };
}; };
} }

View file

@ -11,6 +11,11 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.kbdlight ]; environment.systemPackages = [ pkgs.kbdlight ];
security.wrappers.kbdlight.source = "${pkgs.kbdlight.out}/bin/kbdlight"; security.wrappers.kbdlight =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.kbdlight.out}/bin/kbdlight";
};
}; };
} }

View file

@ -13,8 +13,10 @@ in {
security.wrappers = mkMerge (map ( security.wrappers = mkMerge (map (
exec: { exec: {
"${exec}" = { "${exec}" = {
source = "${pkgs.liboping}/bin/${exec}"; owner = "root";
group = "root";
capabilities = "cap_net_raw+p"; capabilities = "cap_net_raw+p";
source = "${pkgs.liboping}/bin/${exec}";
}; };
} }
) [ "oping" "noping" ]); ) [ "oping" "noping" ]);

View file

@ -78,6 +78,8 @@ in {
source = "${pkgs.msmtp}/bin/sendmail"; source = "${pkgs.msmtp}/bin/sendmail";
setuid = false; setuid = false;
setgid = false; setgid = false;
owner = "root";
group = "root";
}; };
environment.etc."msmtprc".text = let environment.etc."msmtprc".text = let

View file

@ -31,8 +31,10 @@ in {
environment.systemPackages = with pkgs; [ cfg.package ]; environment.systemPackages = with pkgs; [ cfg.package ];
security.wrappers.mtr-packet = { security.wrappers.mtr-packet = {
source = "${cfg.package}/bin/mtr-packet"; owner = "root";
group = "root";
capabilities = "cap_net_raw+p"; capabilities = "cap_net_raw+p";
source = "${cfg.package}/bin/mtr-packet";
}; };
}; };
} }

View file

@ -18,8 +18,10 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.wrappers.noisetorch = { security.wrappers.noisetorch = {
source = "${cfg.package}/bin/noisetorch"; owner = "root";
group = "root";
capabilities = "cap_sys_resource=+ep"; capabilities = "cap_sys_resource=+ep";
source = "${cfg.package}/bin/noisetorch";
}; };
}; };
} }

View file

@ -0,0 +1,19 @@
{ config, lib, pkgs, ... }:
with lib;
{
meta = {
maintainers = teams.pantheon.members;
};
###### interface
options = {
programs.pantheon-tweaks.enable = mkEnableOption "Pantheon Tweaks, an unofficial system settings panel for Pantheon";
};
###### implementation
config = mkIf config.programs.pantheon-tweaks.enable {
services.xserver.desktopManager.pantheon.extraSwitchboardPlugs = [ pkgs.pantheon-tweaks ];
};
}
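
As a quick sketch of how this new module might be used, the following hypothetical `configuration.nix` fragment enables the tweaks panel; the option name comes from the module above, everything else is a placeholder:

```nix
# Hypothetical configuration.nix fragment using the new module.
{ pkgs, ... }:
{
  # Pantheon Tweaks only makes sense on top of the Pantheon desktop.
  services.xserver.desktopManager.pantheon.enable = true;

  # Adds pkgs.pantheon-tweaks to Switchboard's plug list, as the
  # module above does via extraSwitchboardPlugs.
  programs.pantheon-tweaks.enable = true;
}
```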

View file

@ -30,7 +30,7 @@ in
###### implementation ###### implementation
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.variables.XDG_DATA_DIRS = [ "${pkgs.plotinus}/share/gsettings-schemas/${pkgs.plotinus.name}" ]; environment.sessionVariables.XDG_DATA_DIRS = [ "${pkgs.plotinus}/share/gsettings-schemas/${pkgs.plotinus.name}" ];
environment.variables.GTK3_MODULES = [ "${pkgs.plotinus}/lib/libplotinus.so" ]; environment.variables.GTK3_MODULES = [ "${pkgs.plotinus}/lib/libplotinus.so" ];
}; };
} }

View file

@ -43,6 +43,13 @@ let
''; '';
mkSetuidRoot = source:
{ setuid = true;
owner = "root";
group = "root";
inherit source;
};
in in
{ {
@ -109,14 +116,14 @@ in
}; };
security.wrappers = { security.wrappers = {
su.source = "${pkgs.shadow.su}/bin/su"; su = mkSetuidRoot "${pkgs.shadow.su}/bin/su";
sg.source = "${pkgs.shadow.out}/bin/sg"; sg = mkSetuidRoot "${pkgs.shadow.out}/bin/sg";
newgrp.source = "${pkgs.shadow.out}/bin/newgrp"; newgrp = mkSetuidRoot "${pkgs.shadow.out}/bin/newgrp";
newuidmap.source = "${pkgs.shadow.out}/bin/newuidmap"; newuidmap = mkSetuidRoot "${pkgs.shadow.out}/bin/newuidmap";
newgidmap.source = "${pkgs.shadow.out}/bin/newgidmap"; newgidmap = mkSetuidRoot "${pkgs.shadow.out}/bin/newgidmap";
} // lib.optionalAttrs config.users.mutableUsers { } // lib.optionalAttrs config.users.mutableUsers {
chsh.source = "${pkgs.shadow.out}/bin/chsh"; chsh = mkSetuidRoot "${pkgs.shadow.out}/bin/chsh";
passwd.source = "${pkgs.shadow.out}/bin/passwd"; passwd = mkSetuidRoot "${pkgs.shadow.out}/bin/passwd";
}; };
}; };
} }

View file

@ -16,7 +16,12 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ singularity ]; environment.systemPackages = [ singularity ];
security.wrappers.singularity-suid.source = "${singularity}/libexec/singularity/bin/starter-suid.orig"; security.wrappers.singularity-suid =
{ setuid = true;
owner = "root";
group = "root";
source = "${singularity}/libexec/singularity/bin/starter-suid.orig";
};
systemd.tmpfiles.rules = [ systemd.tmpfiles.rules = [
"d /var/singularity/mnt/session 0770 root root -" "d /var/singularity/mnt/session 0770 root root -"
"d /var/singularity/mnt/final 0770 root root -" "d /var/singularity/mnt/final 0770 root root -"

View file

@ -21,6 +21,11 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.slock ]; environment.systemPackages = [ pkgs.slock ];
security.wrappers.slock.source = "${pkgs.slock.out}/bin/slock"; security.wrappers.slock =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.slock.out}/bin/slock";
};
}; };
} }

View file

@ -181,6 +181,8 @@ in
source = "${pkgs.ssmtp}/bin/sendmail"; source = "${pkgs.ssmtp}/bin/sendmail";
setuid = false; setuid = false;
setgid = false; setgid = false;
owner = "root";
group = "root";
}; };
}; };

View file

@ -19,8 +19,10 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.wrappers.traceroute = { security.wrappers.traceroute = {
source = "${pkgs.traceroute}/bin/traceroute"; owner = "root";
group = "root";
capabilities = "cap_net_raw+p"; capabilities = "cap_net_raw+p";
source = "${pkgs.traceroute}/bin/traceroute";
}; };
}; };
} }

View file

@ -9,6 +9,11 @@ in {
options.programs.udevil.enable = mkEnableOption "udevil"; options.programs.udevil.enable = mkEnableOption "udevil";
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.wrappers.udevil.source = "${lib.getBin pkgs.udevil}/bin/udevil"; security.wrappers.udevil =
{ setuid = true;
owner = "root";
group = "root";
source = "${lib.getBin pkgs.udevil}/bin/udevil";
};
}; };
} }

View file

@ -21,8 +21,10 @@ in {
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = with pkgs; [ wavemon ]; environment.systemPackages = with pkgs; [ wavemon ];
security.wrappers.wavemon = { security.wrappers.wavemon = {
source = "${pkgs.wavemon}/bin/wavemon"; owner = "root";
group = "root";
capabilities = "cap_net_admin+ep"; capabilities = "cap_net_admin+ep";
source = "${pkgs.wavemon}/bin/wavemon";
}; };
}; };
} }

View file

@ -0,0 +1,47 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.weylus;
in
{
options.programs.weylus = with types; {
enable = mkEnableOption "weylus";
openFirewall = mkOption {
type = bool;
default = false;
description = ''
Open ports needed for the functionality of the program.
'';
};
users = mkOption {
type = listOf str;
default = [ ];
description = ''
To enable stylus and multi-touch support, add the users who will run Weylus to this list.
These users can synthesize input events system-wide, even while another user is logged in; do not add untrusted users.
'';
};
package = mkOption {
type = package;
default = pkgs.weylus;
defaultText = "pkgs.weylus";
description = "Weylus package to install.";
};
};
config = mkIf cfg.enable {
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ 1701 9001 ];
};
hardware.uinput.enable = true;
users.groups.uinput.members = cfg.users;
environment.systemPackages = [ cfg.package ];
};
}
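
A hedged sketch of how the module above might be enabled; the user name is a placeholder, and the port numbers mirror the `openFirewall` branch in the module:

```nix
# Hypothetical usage of programs.weylus in configuration.nix.
{
  programs.weylus = {
    enable = true;
    # Opens TCP ports 1701 and 9001, matching the module's
    # openFirewall implementation above.
    openFirewall = true;
    # "alice" is a placeholder; listed users join the uinput group
    # and can synthesize input events system-wide.
    users = [ "alice" ];
  };
}
```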

View file

@ -17,6 +17,11 @@ in {
}; };
config = mkIf cfg.enable { config = mkIf cfg.enable {
security.wrappers.wshowkeys.source = "${pkgs.wshowkeys}/bin/wshowkeys"; security.wrappers.wshowkeys =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.wshowkeys}/bin/wshowkeys";
};
}; };
} }

View file

@ -28,6 +28,11 @@ in
config = mkIf cfg.enable { config = mkIf cfg.enable {
environment.systemPackages = [ sandbox ]; environment.systemPackages = [ sandbox ];
security.wrappers.${sandbox.passthru.sandboxExecutableName}.source = "${sandbox}/bin/${sandbox.passthru.sandboxExecutableName}"; security.wrappers.${sandbox.passthru.sandboxExecutableName} =
{ setuid = true;
owner = "root";
group = "root";
source = "${sandbox}/bin/${sandbox.passthru.sandboxExecutableName}";
};
}; };
} }

View file

@ -241,8 +241,11 @@ in
} }
]; ];
security.wrappers = { security.wrappers.doas =
doas.source = "${doas}/bin/doas"; { setuid = true;
owner = "root";
group = "root";
source = "${doas}/bin/doas";
}; };
environment.systemPackages = [ environment.systemPackages = [

View file

@ -186,7 +186,12 @@ in
config = mkIf (cfg.ssh.enable || cfg.pam.enable) { config = mkIf (cfg.ssh.enable || cfg.pam.enable) {
environment.systemPackages = [ pkgs.duo-unix ]; environment.systemPackages = [ pkgs.duo-unix ];
security.wrappers.login_duo.source = "${pkgs.duo-unix.out}/bin/login_duo"; security.wrappers.login_duo =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.duo-unix.out}/bin/login_duo";
};
system.activationScripts = { system.activationScripts = {
login_duo = mkIf cfg.ssh.enable '' login_duo = mkIf cfg.ssh.enable ''

View file

@ -35,10 +35,10 @@ with lib;
wants = [ "systemd-udevd.service" ]; wants = [ "systemd-udevd.service" ];
wantedBy = [ config.systemd.defaultUnit ]; wantedBy = [ config.systemd.defaultUnit ];
before = [ config.systemd.defaultUnit ];
after = after =
[ "firewall.service" [ "firewall.service"
"systemd-modules-load.service" "systemd-modules-load.service"
config.systemd.defaultUnit
]; ];
unitConfig.ConditionPathIsReadWrite = "/proc/sys/kernel"; unitConfig.ConditionPathIsReadWrite = "/proc/sys/kernel";

View file

@ -869,9 +869,10 @@ in
security.wrappers = { security.wrappers = {
unix_chkpwd = { unix_chkpwd = {
source = "${pkgs.pam}/sbin/unix_chkpwd.orig";
owner = "root";
setuid = true; setuid = true;
owner = "root";
group = "root";
source = "${pkgs.pam}/sbin/unix_chkpwd.orig";
}; };
}; };

View file

@ -32,8 +32,18 @@ in
# Make sure pmount and pumount are setuid wrapped. # Make sure pmount and pumount are setuid wrapped.
security.wrappers = { security.wrappers = {
pmount.source = "${pkgs.pmount.out}/bin/pmount"; pmount =
pumount.source = "${pkgs.pmount.out}/bin/pumount"; { setuid = true;
owner = "root";
group = "root";
source = "${pkgs.pmount.out}/bin/pmount";
};
pumount =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.pmount.out}/bin/pumount";
};
}; };
environment.systemPackages = [ pkgs.pmount ]; environment.systemPackages = [ pkgs.pmount ];

View file

@ -83,8 +83,18 @@ in
security.pam.services.polkit-1 = {}; security.pam.services.polkit-1 = {};
security.wrappers = { security.wrappers = {
pkexec.source = "${pkgs.polkit.bin}/bin/pkexec"; pkexec =
polkit-agent-helper-1.source = "${pkgs.polkit.out}/lib/polkit-1/polkit-agent-helper-1"; { setuid = true;
owner = "root";
group = "root";
source = "${pkgs.polkit.bin}/bin/pkexec";
};
polkit-agent-helper-1 =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.polkit.out}/lib/polkit-1/polkit-agent-helper-1";
};
}; };
systemd.tmpfiles.rules = [ systemd.tmpfiles.rules = [

View file

@ -146,6 +146,7 @@ in {
# Create the tss user and group only if the default value is used # Create the tss user and group only if the default value is used
users.users.${cfg.tssUser} = lib.mkIf (cfg.tssUser == "tss") { users.users.${cfg.tssUser} = lib.mkIf (cfg.tssUser == "tss") {
isSystemUser = true; isSystemUser = true;
group = "tss";
}; };
users.groups.${cfg.tssGroup} = lib.mkIf (cfg.tssGroup == "tss") {}; users.groups.${cfg.tssGroup} = lib.mkIf (cfg.tssGroup == "tss") {};
@ -172,7 +173,7 @@ in {
BusName = "com.intel.tss2.Tabrmd"; BusName = "com.intel.tss2.Tabrmd";
ExecStart = "${cfg.abrmd.package}/bin/tpm2-abrmd"; ExecStart = "${cfg.abrmd.package}/bin/tpm2-abrmd";
User = "tss"; User = "tss";
Group = "nogroup"; Group = "tss";
}; };
}; };

View file

@ -5,85 +5,140 @@ let
parentWrapperDir = dirOf wrapperDir; parentWrapperDir = dirOf wrapperDir;
programs =
(lib.mapAttrsToList
(n: v: (if v ? program then v else v // {program=n;}))
wrappers);
securityWrapper = pkgs.callPackage ./wrapper.nix { securityWrapper = pkgs.callPackage ./wrapper.nix {
inherit parentWrapperDir; inherit parentWrapperDir;
}; };
fileModeType =
let
# taken from the chmod(1) man page
symbolic = "[ugoa]*([-+=]([rwxXst]*|[ugo]))+|[-+=][0-7]+";
numeric = "[-+=]?[0-7]{0,4}";
mode = "((${symbolic})(,${symbolic})*)|(${numeric})";
in
lib.types.strMatching mode
// { description = "file mode string"; };
wrapperType = lib.types.submodule ({ name, config, ... }: {
options.source = lib.mkOption
{ type = lib.types.path;
description = "The absolute path to the program to be wrapped.";
};
options.program = lib.mkOption
{ type = with lib.types; nullOr str;
default = name;
description = ''
The name of the wrapper program. Defaults to the attribute name.
'';
};
options.owner = lib.mkOption
{ type = lib.types.str;
description = "The owner of the wrapper program.";
};
options.group = lib.mkOption
{ type = lib.types.str;
description = "The group of the wrapper program.";
};
options.permissions = lib.mkOption
{ type = fileModeType;
default = "u+rx,g+x,o+x";
example = "a+rx";
description = ''
The permissions of the wrapper program. The format is that of a
symbolic or numeric file mode understood by <command>chmod</command>.
'';
};
options.capabilities = lib.mkOption
{ type = lib.types.commas;
default = "";
description = ''
A comma-separated list of capabilities to be given to the wrapper
program. For capabilities supported by the system check the
<citerefentry>
<refentrytitle>capabilities</refentrytitle>
<manvolnum>7</manvolnum>
</citerefentry>
manual page.
<note><para>
<literal>cap_setpcap</literal>, which is required for the wrapper
program to raise capabilities into the Ambient set, is NOT itself
raised to the Ambient set, so the real program cannot modify its
own capabilities. This may be too restrictive for cases in which
the real program needs <literal>cap_setpcap</literal>, but it errs
on the side of security paranoia rather than being too relaxed.
</para></note>
'';
};
options.setuid = lib.mkOption
{ type = lib.types.bool;
default = false;
description = "Whether to add the setuid bit to the wrapper program.";
};
options.setgid = lib.mkOption
{ type = lib.types.bool;
default = false;
description = "Whether to add the setgid bit to the wrapper program.";
};
});
###### Activation script for the setcap wrappers ###### Activation script for the setcap wrappers
mkSetcapProgram = mkSetcapProgram =
{ program { program
, capabilities , capabilities
, source , source
, owner ? "nobody" , owner
, group ? "nogroup" , group
, permissions ? "u+rx,g+x,o+x" , permissions
, ... , ...
}: }:
assert (lib.versionAtLeast (lib.getVersion config.boot.kernelPackages.kernel) "4.3"); assert (lib.versionAtLeast (lib.getVersion config.boot.kernelPackages.kernel) "4.3");
'' ''
cp ${securityWrapper}/bin/security-wrapper $wrapperDir/${program} cp ${securityWrapper}/bin/security-wrapper "$wrapperDir/${program}"
echo -n "${source}" > $wrapperDir/${program}.real echo -n "${source}" > "$wrapperDir/${program}.real"
# Prevent races # Prevent races
chmod 0000 $wrapperDir/${program} chmod 0000 "$wrapperDir/${program}"
chown ${owner}.${group} $wrapperDir/${program} chown ${owner}.${group} "$wrapperDir/${program}"
# Set desired capabilities on the file plus cap_setpcap so # Set desired capabilities on the file plus cap_setpcap so
# the wrapper program can elevate the capabilities set on # the wrapper program can elevate the capabilities set on
# its file into the Ambient set. # its file into the Ambient set.
${pkgs.libcap.out}/bin/setcap "cap_setpcap,${capabilities}" $wrapperDir/${program} ${pkgs.libcap.out}/bin/setcap "cap_setpcap,${capabilities}" "$wrapperDir/${program}"
# Set the executable bit # Set the executable bit
chmod ${permissions} $wrapperDir/${program} chmod ${permissions} "$wrapperDir/${program}"
''; '';
###### Activation script for the setuid wrappers ###### Activation script for the setuid wrappers
mkSetuidProgram = mkSetuidProgram =
{ program { program
, source , source
, owner ? "nobody" , owner
, group ? "nogroup" , group
, setuid ? false , setuid
, setgid ? false , setgid
, permissions ? "u+rx,g+x,o+x" , permissions
, ... , ...
}: }:
'' ''
cp ${securityWrapper}/bin/security-wrapper $wrapperDir/${program} cp ${securityWrapper}/bin/security-wrapper "$wrapperDir/${program}"
echo -n "${source}" > $wrapperDir/${program}.real echo -n "${source}" > "$wrapperDir/${program}.real"
# Prevent races # Prevent races
chmod 0000 $wrapperDir/${program} chmod 0000 "$wrapperDir/${program}"
chown ${owner}.${group} $wrapperDir/${program} chown ${owner}.${group} "$wrapperDir/${program}"
chmod "u${if setuid then "+" else "-"}s,g${if setgid then "+" else "-"}s,${permissions}" $wrapperDir/${program} chmod "u${if setuid then "+" else "-"}s,g${if setgid then "+" else "-"}s,${permissions}" "$wrapperDir/${program}"
''; '';
mkWrappedPrograms = mkWrappedPrograms =
builtins.map builtins.map
(s: if (s ? capabilities) (opts:
then mkSetcapProgram if opts.capabilities != ""
({ owner = "root"; then mkSetcapProgram opts
group = "root"; else mkSetuidProgram opts
} // s) ) (lib.attrValues wrappers);
else if
(s ? setuid && s.setuid) ||
(s ? setgid && s.setgid) ||
(s ? permissions)
then mkSetuidProgram s
else mkSetuidProgram
({ owner = "root";
group = "root";
setuid = true;
setgid = false;
permissions = "u+rx,g+x,o+x";
} // s)
) programs;
in in
{ {
imports = [ imports = [
@ -95,45 +150,42 @@ in
options = { options = {
security.wrappers = lib.mkOption { security.wrappers = lib.mkOption {
type = lib.types.attrs; type = lib.types.attrsOf wrapperType;
default = {}; default = {};
example = lib.literalExample example = lib.literalExample
'' ''
{ sendmail.source = "/nix/store/.../bin/sendmail"; {
ping = { # a setuid root program
source = "${pkgs.iputils.out}/bin/ping"; doas =
owner = "nobody"; { setuid = true;
group = "nogroup"; owner = "root";
group = "root";
source = "''${pkgs.doas}/bin/doas";
};
# a setgid program
locate =
{ setgid = true;
owner = "root";
group = "mlocate";
source = "''${pkgs.locate}/bin/locate";
};
# a program with the CAP_NET_RAW capability
ping =
{ owner = "root";
group = "root";
capabilities = "cap_net_raw+ep"; capabilities = "cap_net_raw+ep";
source = "''${pkgs.iputils.out}/bin/ping";
}; };
} }
''; '';
description = '' description = ''
This option allows the ownership and permissions on the setuid This option effectively allows adding setuid/setgid bits, capabilities,
wrappers for specific programs to be overridden from the changing file ownership and permissions of a program without directly
default (setuid root, but not setgid root). modifying it. This works by creating a wrapper program under the
<option>security.wrapperDir</option> directory, which is then added to
<note> the shell <literal>PATH</literal>.
<para>The sub-attribute <literal>source</literal> is mandatory,
it must be the absolute path to the program to be wrapped.
</para>
<para>The sub-attribute <literal>program</literal> is optional and
can give the wrapper program a new name. The default name is the same
as the attribute name itself.</para>
<para>Additionally, this option can set capabilities on a
wrapper program that propagates those capabilities down to the
wrapped, real program.</para>
<para>NOTE: cap_setpcap, which is required for the wrapper
program to be able to raise caps into the Ambient set is NOT
raised to the Ambient set so that the real program cannot
modify its own capabilities!! This may be too restrictive for
cases in which the real program needs cap_setpcap but it at
least leans on the side security paranoid vs. too
relaxed.</para>
</note>
''; '';
}; };
@ -151,12 +203,30 @@ in
###### implementation ###### implementation
config = { config = {
security.wrappers = { assertions = lib.mapAttrsToList
# These are mount related wrappers that require the +s permission. (name: opts:
fusermount.source = "${pkgs.fuse}/bin/fusermount"; { assertion = opts.setuid || opts.setgid -> opts.capabilities == "";
fusermount3.source = "${pkgs.fuse3}/bin/fusermount3"; message = ''
mount.source = "${lib.getBin pkgs.util-linux}/bin/mount"; The security.wrappers.${name} wrapper is not valid:
umount.source = "${lib.getBin pkgs.util-linux}/bin/umount"; setuid/setgid and capabilities are mutually exclusive.
'';
}
) wrappers;
security.wrappers =
let
mkSetuidRoot = source:
{ setuid = true;
owner = "root";
group = "root";
inherit source;
};
in
{ # These are mount related wrappers that require the +s permission.
fusermount = mkSetuidRoot "${pkgs.fuse}/bin/fusermount";
fusermount3 = mkSetuidRoot "${pkgs.fuse3}/bin/fusermount3";
mount = mkSetuidRoot "${lib.getBin pkgs.util-linux}/bin/mount";
umount = mkSetuidRoot "${lib.getBin pkgs.util-linux}/bin/umount";
}; };
boot.specialFileSystems.${parentWrapperDir} = { boot.specialFileSystems.${parentWrapperDir} = {
@ -179,19 +249,15 @@ in
        ]}"
      '';

    ###### wrappers activation script
    system.activationScripts.wrappers =
      lib.stringAfter [ "specialfs" "users" ]
        ''
          chmod 755 "${parentWrapperDir}"

          # We want to place the tmpdirs for the wrappers to the parent dir.
          wrapperDir=$(mktemp --directory --tmpdir="${parentWrapperDir}" wrappers.XXXXXXXXXX)
          chmod a+rx "$wrapperDir"

          ${lib.concatStringsSep "\n" mkWrappedPrograms}

@ -199,16 +265,44 @@ in
            # Atomically replace the symlink
            # See https://axialcorps.com/2013/07/03/atomically-replacing-files-and-directories/
            old=$(readlink -f ${wrapperDir})
            if [ -e "${wrapperDir}-tmp" ]; then
              rm --force --recursive "${wrapperDir}-tmp"
            fi
            ln --symbolic --force --no-dereference "$wrapperDir" "${wrapperDir}-tmp"
            mv --no-target-directory "${wrapperDir}-tmp" "${wrapperDir}"
            rm --force --recursive "$old"
          else
            # For initial setup
            ln --symbolic "$wrapperDir" "${wrapperDir}"
          fi
        '';
###### wrappers consistency checks
system.extraDependencies = lib.singleton (pkgs.runCommandLocal
"ensure-all-wrappers-paths-exist" { }
''
# make sure we produce output
mkdir -p $out
echo -n "Checking that Nix store paths of all wrapped programs exist... "
declare -A wrappers
${lib.concatStringsSep "\n" (lib.mapAttrsToList (n: v:
"wrappers['${n}']='${v.source}'") wrappers)}
for name in "''${!wrappers[@]}"; do
path="''${wrappers[$name]}"
if [[ "$path" =~ /nix/store ]] && [ ! -e "$path" ]; then
test -t 1 && echo -ne '\033[1;31m'
echo "FAIL"
echo "The path $path does not exist!"
echo 'Please, check the value of `security.wrappers."'$name'".source`.'
test -t 1 && echo -ne '\033[0m'
exit 1
fi
done
echo "OK"
'');
  };
}

View file
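The recurring change across these wrapper hunks is that `security.wrappers` entries must now declare `owner` and `group` explicitly, together with exactly one privilege mode — `setuid`, `setgid`, or `capabilities` (the assertion above rejects mixing setuid/setgid with capabilities). A minimal sketch of the new form in a user module; the `myping` wrapper name is illustrative, not part of this commit:

```nix
{ pkgs, ... }:
{
  # Every wrapper now spells out its owner/group and privilege mode;
  # setuid/setgid and capabilities may not be combined on one wrapper.
  security.wrappers.myping = {
    owner = "root";
    group = "root";
    capabilities = "cap_net_raw+ep";
    source = "${pkgs.fping}/bin/fping";
  };
}
```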
@ -5,28 +5,33 @@ with lib;
let
  cfg = config.services.kubernetes;

  defaultContainerdSettings = {
    version = 2;
    root = "/var/lib/containerd";
    state = "/run/containerd";
    oom_score = 0;

    grpc = {
      address = "/run/containerd/containerd.sock";
    };

    plugins."io.containerd.grpc.v1.cri" = {
      sandbox_image = "pause:latest";

      cni = {
        bin_dir = "/opt/cni/bin";
        max_conf_num = 0;
      };

      containerd.runtimes.runc = {
        runtime_type = "io.containerd.runc.v2";
      };

      containerd.runtimes."io.containerd.runc.v2".options = {
        SystemdCgroup = true;
      };
    };
  };

  mkKubeConfig = name: conf: pkgs.writeText "${name}-kubeconfig" (builtins.toJSON {
    apiVersion = "v1";

@ -248,7 +253,7 @@ in {
    (mkIf cfg.kubelet.enable {
      virtualisation.containerd = {
        enable = mkDefault true;
        settings = mkDefault defaultContainerdSettings;
      };
    })

View file
@ -0,0 +1,162 @@
{config, pkgs, lib, ...}:
let
cfg = config.services.spark;
in
with lib;
{
options = {
services.spark = {
master = {
enable = mkEnableOption "Spark master service";
bind = mkOption {
type = types.str;
description = "Address the spark master binds to.";
default = "127.0.0.1";
example = "0.0.0.0";
};
restartIfChanged = mkOption {
type = types.bool;
description = ''
Automatically restart master service on config change.
This can be set to false to defer restarts on clusters running critical applications.
Please consider the security implications of inadvertently running an older version,
and the possibility of unexpected behavior caused by inconsistent versions across a cluster when disabling this option.
'';
default = true;
};
extraEnvironment = mkOption {
type = types.attrsOf types.str;
description = "Extra environment variables to pass to spark master. See spark-standalone documentation.";
default = {};
example = {
            SPARK_MASTER_WEBUI_PORT = "8181";
SPARK_MASTER_OPTS = "-Dspark.deploy.defaultCores=5";
};
};
};
worker = {
enable = mkEnableOption "Spark worker service";
workDir = mkOption {
type = types.path;
description = "Spark worker work dir.";
default = "/var/lib/spark";
};
master = mkOption {
type = types.str;
description = "Address of the spark master.";
default = "127.0.0.1:7077";
};
restartIfChanged = mkOption {
type = types.bool;
description = ''
Automatically restart worker service on config change.
This can be set to false to defer restarts on clusters running critical applications.
Please consider the security implications of inadvertently running an older version,
and the possibility of unexpected behavior caused by inconsistent versions across a cluster when disabling this option.
'';
default = true;
};
extraEnvironment = mkOption {
type = types.attrsOf types.str;
description = "Extra environment variables to pass to spark worker.";
default = {};
example = {
            SPARK_WORKER_CORES = "5";
SPARK_WORKER_MEMORY = "2g";
};
};
};
confDir = mkOption {
type = types.path;
description = "Spark configuration directory. Spark will use the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc) from this directory.";
default = "${cfg.package}/lib/${cfg.package.untarDir}/conf";
defaultText = literalExample "\${cfg.package}/lib/\${cfg.package.untarDir}/conf";
};
logDir = mkOption {
type = types.path;
description = "Spark log directory.";
default = "/var/log/spark";
};
package = mkOption {
type = types.package;
description = "Spark package.";
default = pkgs.spark;
defaultText = "pkgs.spark";
example = literalExample ''pkgs.spark.overrideAttrs (super: rec {
pname = "spark";
version = "2.4.4";
src = pkgs.fetchzip {
          url = "mirror://apache/spark/''${pname}-''${version}/''${pname}-''${version}-bin-without-hadoop.tgz";
sha256 = "1a9w5k0207fysgpxx6db3a00fs5hdc2ncx99x4ccy2s0v5ndc66g";
};
})'';
};
};
};
config = lib.mkIf (cfg.worker.enable || cfg.master.enable) {
environment.systemPackages = [ cfg.package ];
systemd = {
services = {
spark-master = lib.mkIf cfg.master.enable {
path = with pkgs; [ procps openssh nettools ];
description = "spark master service.";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
restartIfChanged = cfg.master.restartIfChanged;
environment = cfg.master.extraEnvironment // {
SPARK_MASTER_HOST = cfg.master.bind;
SPARK_CONF_DIR = cfg.confDir;
SPARK_LOG_DIR = cfg.logDir;
};
serviceConfig = {
Type = "forking";
User = "spark";
Group = "spark";
WorkingDirectory = "${cfg.package}/lib/${cfg.package.untarDir}";
ExecStart = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/start-master.sh";
ExecStop = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/stop-master.sh";
TimeoutSec = 300;
StartLimitBurst=10;
Restart = "always";
};
};
spark-worker = lib.mkIf cfg.worker.enable {
path = with pkgs; [ procps openssh nettools rsync ];
          description = "spark worker service.";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
restartIfChanged = cfg.worker.restartIfChanged;
environment = cfg.worker.extraEnvironment // {
SPARK_MASTER = cfg.worker.master;
SPARK_CONF_DIR = cfg.confDir;
SPARK_LOG_DIR = cfg.logDir;
SPARK_WORKER_DIR = cfg.worker.workDir;
};
serviceConfig = {
Type = "forking";
User = "spark";
WorkingDirectory = "${cfg.package}/lib/${cfg.package.untarDir}";
ExecStart = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/start-worker.sh spark://${cfg.worker.master}";
ExecStop = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/stop-worker.sh";
TimeoutSec = 300;
StartLimitBurst=10;
Restart = "always";
};
};
};
tmpfiles.rules = [
"d '${cfg.worker.workDir}' - spark spark - -"
"d '${cfg.logDir}' - spark spark - -"
];
};
users = {
users.spark = {
description = "spark user.";
group = "spark";
isSystemUser = true;
};
groups.spark = { };
};
};
}
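The new Spark module above can be driven from a host configuration; a sketch that runs a master and a worker on one machine, using the option defaults shown above:

```nix
{
  services.spark = {
    master = {
      enable = true;
      bind = "127.0.0.1";        # master binds locally (the default)
    };
    worker = {
      enable = true;
      master = "127.0.0.1:7077"; # default master address the worker joins
    };
    logDir = "/var/log/spark";
  };
}
```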
View file
@ -0,0 +1,56 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.cpupower-gui;
in {
options = {
services.cpupower-gui = {
enable = mkOption {
type = lib.types.bool;
default = false;
example = true;
description = ''
Enables dbus/systemd service needed by cpupower-gui.
These services are responsible for retrieving and modifying cpu power
saving settings.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.cpupower-gui ];
services.dbus.packages = [ pkgs.cpupower-gui ];
systemd.user = {
services.cpupower-gui-user = {
description = "Apply cpupower-gui config at user login";
wantedBy = [ "graphical-session.target" ];
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.cpupower-gui}/bin/cpupower-gui config";
};
};
};
systemd.services = {
cpupower-gui = {
description = "Apply cpupower-gui config at boot";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.cpupower-gui}/bin/cpupower-gui config";
};
};
cpupower-gui-helper = {
description = "cpupower-gui system helper";
aliases = [ "dbus-org.rnd2.cpupower_gui.helper.service" ];
serviceConfig = {
Type = "dbus";
BusName = "org.rnd2.cpupower_gui.helper";
ExecStart = "${pkgs.cpupower-gui}/lib/cpupower-gui/cpupower-gui-helper";
};
};
};
};
}
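The cpupower-gui module above only wires up the package, its D-Bus policy, and the oneshot units; enabling it is a one-liner (sketch):

```nix
{
  # Installs cpupower-gui, its D-Bus helper, and the boot/login
  # `cpupower-gui config` oneshot services defined above.
  services.cpupower-gui.enable = true;
}
```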
View file
@ -52,8 +52,10 @@ with lib;
    security.pam.services.login.enableGnomeKeyring = true;

    security.wrappers.gnome-keyring-daemon = {
      owner = "root";
      group = "root";
      capabilities = "cap_ipc_lock=ep";
      source = "${pkgs.gnome.gnome-keyring}/bin/gnome-keyring-daemon";
    };
  };

View file
@ -9,7 +9,7 @@ let
in
{
  meta.maintainers = teams.pantheon.members;

  ###### interface

View file
@ -99,7 +99,12 @@ in
    systemd.defaultUnit = "graphical.target";

    users.users.greeter = {
      isSystemUser = true;
      group = "greeter";
    };
    users.groups.greeter = {};
  };

  meta.maintainers = with maintainers; [ queezle ];

View file
@ -149,12 +149,10 @@ in
    users.users = optionalAttrs (cfg.user == "tss") {
      tss = {
        group = "tss";
        isSystemUser = true;
      };
    };

    users.groups = optionalAttrs (cfg.group == "tss") { tss = {}; };
  };
}

View file
@ -215,12 +215,16 @@ in
    users.users = optionalAttrs (cfg.user == "logcheck") {
      logcheck = {
        group = "logcheck";
        isSystemUser = true;
        shell = "/bin/sh";
        description = "Logcheck user account";
        extraGroups = cfg.extraGroups;
      };
    };
    users.groups = optionalAttrs (cfg.user == "logcheck") {
      logcheck = {};
    };

    system.activationScripts.logcheck = ''
      mkdir -m 700 -p /var/{lib,lock}/logcheck

View file
@ -104,7 +104,12 @@ in
        gid = config.ids.gids.exim;
      };

    security.wrappers.exim =
      { setuid = true;
        owner = "root";
        group = "root";
        source = "${cfg.package}/bin/exim";
      };

    systemd.services.exim = {
      description = "Exim Mail Daemon";

View file
@ -1,4 +1,4 @@
{ config, options, lib, ... }:

with lib;

@ -11,6 +11,7 @@ with lib;
  services.mail = {

    sendmailSetuidWrapper = mkOption {
      type = types.nullOr options.security.wrappers.type.nestedTypes.elemType;
      default = null;
      internal = true;
      description = ''

View file
@ -103,12 +103,15 @@ in {
    };

    security.wrappers.smtpctl = {
      owner = "nobody";
      group = "smtpq";
      setuid = false;
      setgid = true;
      source = "${cfg.package}/bin/smtpctl";
    };

    services.mail.sendmailSetuidWrapper = mkIf cfg.setSendmail
      security.wrappers.smtpctl // { program = "sendmail"; };

    systemd.tmpfiles.rules = [
      "d /var/spool/smtpd 711 root - - -"

View file
@ -673,6 +673,7 @@ in
    services.mail.sendmailSetuidWrapper = mkIf config.services.postfix.setSendmail {
      program = "sendmail";
      source = "${pkgs.postfix}/bin/sendmail";
      owner = "nobody";
      group = setgidGroup;
      setuid = false;
      setgid = true;

@ -681,6 +682,7 @@ in
    security.wrappers.mailq = {
      program = "mailq";
      source = "${pkgs.postfix}/bin/mailq";
      owner = "nobody";
      group = setgidGroup;
      setuid = false;
      setgid = true;

@ -689,6 +691,7 @@ in
    security.wrappers.postqueue = {
      program = "postqueue";
      source = "${pkgs.postfix}/bin/postqueue";
      owner = "nobody";
      group = setgidGroup;
      setuid = false;
      setgid = true;

@ -697,6 +700,7 @@ in
    security.wrappers.postdrop = {
      program = "postdrop";
      source = "${pkgs.postfix}/bin/postdrop";
      owner = "nobody";
      group = setgidGroup;
      setuid = false;
      setgid = true;

View file
@ -86,7 +86,7 @@ in
      config = mkOption {
        default = {};
        type = types.attrsOf (types.either types.bool types.int);
        description = "Additional config";
        example = {
          auto-fan = true;

@ -110,10 +110,14 @@ in
    users.users = optionalAttrs (cfg.user == "cgminer") {
      cgminer = {
        isSystemUser = true;
        group = "cgminer";
        description = "Cgminer user";
      };
    };
    users.groups = optionalAttrs (cfg.user == "cgminer") {
      cgminer = {};
    };

    environment.systemPackages = [ cfg.package ];

View file
@ -202,8 +202,8 @@ in {
  config = mkIf cfg.enable {
    users.users.${cfg.user} = {
      description = "gammu-smsd user";
      isSystemUser = true;
      group = cfg.device.group;
    };

    environment.systemPackages = with cfg.backend; [ gammuPackage ]

View file
@ -88,6 +88,7 @@ in
    users.users.gpsd =
      { inherit uid;
        group = "gpsd";
        description = "gpsd daemon user";
        home = "/var/empty";
      };

View file
@ -45,8 +45,10 @@ in
    environment.systemPackages = [ pkgs.mame ];

    security.wrappers."${mame}" = {
      owner = "root";
      group = "root";
      capabilities = "cap_net_admin,cap_net_raw+eip";
      source = "${pkgs.mame}/bin/${mame}";
    };

    systemd.services.mame = {

View file
@ -187,7 +187,9 @@ in {
    users.users.ripple-data-api =
      { description = "Ripple data api user";
        isSystemUser = true;
        group = "ripple-data-api";
      };
    users.groups.ripple-data-api = {};
  };
}

View file
@ -407,12 +407,14 @@ in
  config = mkIf cfg.enable {

    users.users.rippled = {
      description = "Ripple server user";
      isSystemUser = true;
      group = "rippled";
      home = cfg.databasePath;
      createHome = true;
    };
    users.groups.rippled = {};

    systemd.services.rippled = {
      after = [ "network.target" ];

View file
@ -52,7 +52,12 @@ in
      wants = [ "network.target" ];
    };

    security.wrappers.screen =
      { setuid = true;
        owner = "root";
        group = "root";
        source = "${pkgs.screen}/bin/screen";
      };
  };

  meta.doc = ./weechat.xml;

View file
@ -50,8 +50,10 @@ in {
    };

    users.users.heapster = {
      isSystemUser = true;
      group = "heapster";
      description = "Heapster user";
    };
    users.groups.heapster = {};
  };
}

View file
@ -71,7 +71,12 @@ in
    environment.systemPackages = [ pkgs.incron ];

    security.wrappers.incrontab =
      { setuid = true;
        owner = "root";
        group = "root";
        source = "${pkgs.incron}/bin/incrontab";
      };

    # incron won't read symlinks
    environment.etc."incron.d/system" = {

View file
@ -9,9 +9,9 @@ let
      mkdir -p $out/libexec/netdata/plugins.d
      ln -s /run/wrappers/bin/apps.plugin $out/libexec/netdata/plugins.d/apps.plugin
      ln -s /run/wrappers/bin/cgroup-network $out/libexec/netdata/plugins.d/cgroup-network
      ln -s /run/wrappers/bin/perf.plugin $out/libexec/netdata/plugins.d/perf.plugin
      ln -s /run/wrappers/bin/slabinfo.plugin $out/libexec/netdata/plugins.d/slabinfo.plugin
      ln -s /run/wrappers/bin/freeipmi.plugin $out/libexec/netdata/plugins.d/freeipmi.plugin
    '';

  plugins = [

@ -211,7 +211,8 @@ in {
    systemd.enableCgroupAccounting = true;

    security.wrappers = {
      "apps.plugin" = {
        source = "${cfg.package}/libexec/netdata/plugins.d/apps.plugin.org";
        capabilities = "cap_dac_read_search,cap_sys_ptrace+ep";
        owner = cfg.user;

@ -219,7 +220,7 @@ in {
        permissions = "u+rx,g+x,o-rwx";
      };

      "cgroup-network" = {
        source = "${cfg.package}/libexec/netdata/plugins.d/cgroup-network.org";
        capabilities = "cap_setuid+ep";
        owner = cfg.user;

@ -227,15 +228,7 @@ in {
        permissions = "u+rx,g+x,o-rwx";
      };

      "perf.plugin" = {
        source = "${cfg.package}/libexec/netdata/plugins.d/perf.plugin.org";
        capabilities = "cap_sys_admin+ep";
        owner = cfg.user;

@ -243,7 +236,7 @@ in {
        permissions = "u+rx,g+x,o-rwx";
      };

      "slabinfo.plugin" = {
        source = "${cfg.package}/libexec/netdata/plugins.d/slabinfo.plugin.org";
        capabilities = "cap_dac_override+ep";
        owner = cfg.user;

@ -251,6 +244,16 @@ in {
        permissions = "u+rx,g+x,o-rwx";
      };
    } // optionalAttrs (cfg.package.withIpmi) {
      "freeipmi.plugin" = {
        source = "${cfg.package}/libexec/netdata/plugins.d/freeipmi.plugin.org";
        capabilities = "cap_dac_override,cap_fowner+ep";
        owner = cfg.user;
        group = cfg.group;
        permissions = "u+rx,g+x,o-rwx";
      };
    };

    security.pam.loginLimits = [
      { domain = "netdata"; type = "soft"; item = "nofile"; value = "10000"; }
      { domain = "netdata"; type = "hard"; item = "nofile"; value = "30000"; }

View file
@ -262,7 +262,12 @@ in
    };

    security.wrappers = {
      fping =
        { setuid = true;
          owner = "root";
          group = "root";
          source = "${pkgs.fping}/bin/fping";
        };
    };

    systemd.services.zabbix-proxy = {

View file
@ -217,6 +217,7 @@ in {
        home = "${dataDir}";
        createHome = true;
        isSystemUser = true;
        group = "dnscrypt-wrapper";
      };
      users.groups.dnscrypt-wrapper = { };

View file
@ -164,7 +164,7 @@ in {
      path = [ pkgs.iptables ];
      preStart = optionalString (cfg.storageBackend == "etcd") ''
        echo "setting network configuration"
        until ${pkgs.etcd}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
        do
          echo "setting network configuration, retry"
          sleep 1

View file
@ -6,8 +6,6 @@ let
  inherit (pkgs) nntp-proxy;

  cfg = config.services.nntp-proxy;

  configBool = b: if b then "TRUE" else "FALSE";

@ -210,16 +208,18 @@ in
  config = mkIf cfg.enable {

    users.users.nntp-proxy = {
      isSystemUser = true;
      group = "nntp-proxy";
      description = "NNTP-Proxy daemon user";
    };
    users.groups.nntp-proxy = {};

    systemd.services.nntp-proxy = {
      description = "NNTP proxy";
      after = [ "network.target" "nss-lookup.target" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = { User="nntp-proxy"; };
      serviceConfig.ExecStart = "${nntp-proxy}/bin/nntp-proxy ${confFile}";

      preStart = ''
        if [ ! \( -f ${cfg.sslCert} -a -f ${cfg.sslKey} \) ]; then

View file
@ -10,8 +10,6 @@ let
  stateDir = "/var/lib/ntp";

  configFile = pkgs.writeText "ntp.conf" ''
    driftfile ${stateDir}/ntp.drift

@ -27,7 +25,7 @@ let
    ${cfg.extraConfig}
  '';

  ntpFlags = "-c ${configFile} -u ntp:ntp ${toString cfg.extraFlags}";

in

@ -119,11 +117,13 @@ in
    systemd.services.systemd-timedated.environment = { SYSTEMD_TIMEDATED_NTP_SERVICES = "ntpd.service"; };

    users.users.ntp =
      { isSystemUser = true;
        group = "ntp";
        description = "NTP daemon user";
        home = stateDir;
      };
    users.groups.ntp = {};

    systemd.services.ntpd =
      { description = "NTP Daemon";

@ -135,7 +135,7 @@ in
        preStart =
          ''
            mkdir -m 0755 -p ${stateDir}
            chown ntp ${stateDir}
          '';

        serviceConfig = {

View file
@ -61,10 +61,12 @@ in
    environment.etc."ntpd.conf".text = configFile;

    users.users.ntp = {
      isSystemUser = true;
      group = "ntp";
      description = "OpenNTP daemon user";
      home = "/var/empty";
    };
    users.groups.ntp = {};

    systemd.services.openntpd = {
      description = "OpenNTP Server";

View file
@ -72,8 +72,10 @@ in
    users.users.rdnssd = {
      description = "RDNSSD Daemon User";
      isSystemUser = true;
      group = "rdnssd";
    };
    users.groups.rdnssd = {};
  };

View file
@ -83,11 +83,13 @@ in {
  config = mkIf cfg.enable {
    users.users.shout = {
      isSystemUser = true;
      group = "shout";
      description = "Shout daemon user";
      home = shoutHome;
      createHome = true;
    };
    users.groups.shout = {};

    systemd.services.shout = {
      description = "Shout web IRC client";

View file
@ -278,8 +278,12 @@ in
      }
    ];

    security.wrappers = {
      fping =
        { setuid = true;
          owner = "root";
          group = "root";
          source = "${pkgs.fping}/bin/fping";
        };
    };

    environment.systemPackages = [ pkgs.fping ];

    users.users.${cfg.user} = {

View file
@ -59,10 +59,12 @@ with lib;
    users.users = {
      toxvpn = {
        isSystemUser = true;
        group = "toxvpn";
        home = "/var/lib/toxvpn";
        createHome = true;
      };
    };
    users.groups.toxvpn = {};
  };
}

View file
@ -29,8 +29,10 @@ in
      description = "Tvheadend Service user";
      home = "/var/lib/tvheadend";
      createHome = true;
      isSystemUser = true;
      group = "tvheadend";
    };
    users.groups.tvheadend = {};

    systemd.services.tvheadend = {
      description = "Tvheadend TV streaming server";

View file
config = mkIf cfg.enable { config = mkIf cfg.enable {
users.users.unifi = { users.users.unifi = {
uid = config.ids.uids.unifi; isSystemUser = true;
group = "unifi";
description = "UniFi controller daemon user"; description = "UniFi controller daemon user";
home = "${stateDir}"; home = "${stateDir}";
}; };
users.groups.unifi = {};
networking.firewall = mkIf cfg.openPorts { networking.firewall = mkIf cfg.openPorts {
# https://help.ubnt.com/hc/en-us/articles/218506997 # https://help.ubnt.com/hc/en-us/articles/218506997

View file

@ -88,12 +88,14 @@ in {
      source = "${pkgs.x2goserver}/lib/x2go/libx2go-server-db-sqlite3-wrapper.pl";
      owner = "x2go";
      group = "x2go";
      setuid = false;
      setgid = true;
    };
    security.wrappers.x2goprintWrapper = {
      source = "${pkgs.x2goserver}/bin/x2goprint";
      owner = "x2go";
      group = "x2go";
      setuid = false;
      setgid = true;
    };

View file
@ -93,7 +93,12 @@ in
    { services.cron.enable = mkDefault (allFiles != []); }
    (mkIf (config.services.cron.enable) {
      security.wrappers.crontab =
        { setuid = true;
          owner = "root";
          group = "root";
          source = "${cronNixosPkg}/bin/crontab";
        };
      environment.systemPackages = [ cronNixosPkg ];
      environment.etc.crontab =
        { source = pkgs.runCommand "crontabs" { inherit allFiles; preferLocalBuild = true; }

View file
@ -136,10 +136,13 @@ in
      owner = "fcron";
      group = "fcron";
      setgid = true;
      setuid = false;
    };
    fcronsighup = {
      source = "${pkgs.fcron}/bin/fcronsighup";
      owner = "root";
      group = "fcron";
      setuid = true;
    };
  };

  systemd.services.fcron = {

View file
@ -5,13 +5,13 @@ with lib;
let
  cfg = config.services.elasticsearch;

  es7 = builtins.compareVersions cfg.package.version "7" >= 0;

  esConfig = ''
    network.host: ${cfg.listenAddress}
    cluster.name: ${cfg.cluster_name}
    ${lib.optionalString cfg.single_node "discovery.type: single-node"}
    ${lib.optionalString (cfg.single_node && es7) "gateway.auto_import_dangling_indices: true"}

    http.port: ${toString cfg.port}
    transport.port: ${toString cfg.tcp_port}

View file
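The `single_node`/`es7` logic above is driven from host configuration; a sketch using the module's `services.elasticsearch` options (only the options shown in this hunk are assumed to exist):

```nix
{
  services.elasticsearch = {
    enable = true;
    # Emits "discovery.type: single-node"; on Elasticsearch >= 7 it also
    # emits "gateway.auto_import_dangling_indices: true".
    single_node = true;
  };
}
```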
@ -0,0 +1,129 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.meilisearch;
in
{
meta.maintainers = with maintainers; [ Br1ght0ne ];
###### interface
options.services.meilisearch = {
enable = mkEnableOption "MeiliSearch - a RESTful search API";
package = mkOption {
description = "The package to use for meilisearch. Use this if you require specific features to be enabled. The default package has no features.";
default = pkgs.meilisearch;
defaultText = "pkgs.meilisearch";
type = types.package;
};
listenAddress = mkOption {
description = "MeiliSearch listen address.";
default = "127.0.0.1";
type = types.str;
};
listenPort = mkOption {
description = "MeiliSearch port to listen on.";
default = 7700;
type = types.port;
};
environment = mkOption {
description = "Defines the running environment of MeiliSearch.";
default = "development";
type = types.enum [ "development" "production" ];
};
# TODO change this to LoadCredentials once possible
masterKeyEnvironmentFile = mkOption {
description = ''
Path to file which contains the master key.
By doing so, all routes will be protected and will require a key to be accessed.
If no master key is provided, all routes can be accessed without requiring any key.
The format is the following:
MEILI_MASTER_KEY=my_secret_key
'';
default = null;
type = with types; nullOr path;
};
noAnalytics = mkOption {
description = ''
Deactivates analytics.
Analytics allow MeiliSearch to know how many users are using MeiliSearch,
which versions and which platforms are used.
This process is entirely anonymous.
'';
default = true;
type = types.bool;
};
logLevel = mkOption {
description = ''
Defines how much detail should be present in MeiliSearch's logs.
MeiliSearch currently supports four log levels, listed in order of increasing verbosity:
- 'ERROR': only log unexpected events indicating MeiliSearch is not functioning as expected
- 'WARN': log all unexpected events, regardless of their severity
- 'INFO': log all events. This is the default value
- 'DEBUG': log all events, including detailed information on MeiliSearch's internal processes.
Useful when diagnosing issues and debugging
'';
default = "INFO";
type = types.str;
};
maxIndexSize = mkOption {
description = ''
Sets the maximum size of the index.
Value must be given in bytes or explicitly stating a base unit.
For example, the default value can be written as 107374182400, '107.4 Gb', or '107374 Mb'.
Default is 100 GiB
'';
default = "107374182400";
type = types.str;
};
payloadSizeLimit = mkOption {
description = ''
Sets the maximum size of accepted JSON payloads.
Value must be given in bytes or explicitly stating a base unit.
For example, the default value can be written as 104857600, '104.8 Mb', or '100 Mb'.
Default is ~ 100 MB
'';
default = "104857600";
type = types.str;
};
};
###### implementation
config = mkIf cfg.enable {
systemd.services.meilisearch = {
description = "MeiliSearch daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
environment = {
MEILI_DB_PATH = "/var/lib/meilisearch";
MEILI_HTTP_ADDR = "${cfg.listenAddress}:${toString cfg.listenPort}";
MEILI_NO_ANALYTICS = toString cfg.noAnalytics;
MEILI_ENV = cfg.environment;
MEILI_DUMPS_DIR = "/var/lib/meilisearch/dumps";
MEILI_LOG_LEVEL = cfg.logLevel;
MEILI_MAX_INDEX_SIZE = cfg.maxIndexSize;
};
serviceConfig = {
ExecStart = "${cfg.package}/bin/meilisearch";
DynamicUser = true;
StateDirectory = "meilisearch";
EnvironmentFile = mkIf (cfg.masterKeyEnvironmentFile != null) cfg.masterKeyEnvironmentFile;
};
};
};
}
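Putting the options declared in this new module together, a minimal configuration might be sketched as follows (the master-key file path is a placeholder; all option names come from the module above):

```
{
  services.meilisearch = {
    enable = true;
    environment = "production";
    listenAddress = "127.0.0.1";
    listenPort = 7700;
    # File containing a line of the form MEILI_MASTER_KEY=my_secret_key;
    # the path shown here is illustrative.
    masterKeyEnvironmentFile = "/run/secrets/meilisearch-master-key";
  };
}
```

Because the service runs with `DynamicUser` and `StateDirectory = "meilisearch"`, the data ends up under `/var/lib/meilisearch` without any manual user or directory setup.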


@ -0,0 +1,24 @@
{ config, lib, pkgs, ... }:
with lib;
let
name = "opensnitch";
cfg = config.services.opensnitch;
in {
options = {
services.opensnitch = {
enable = mkEnableOption "Opensnitch application firewall";
};
};
config = mkIf cfg.enable {
systemd = {
packages = [ pkgs.opensnitch ];
services.opensnitchd.wantedBy = [ "multi-user.target" ];
};
};
}


@ -38,9 +38,6 @@ in
setuid wrapper to allow any user to start physlock as root, which setuid wrapper to allow any user to start physlock as root, which
is a minor security risk. Call the physlock binary to use this instead is a minor security risk. Call the physlock binary to use this instead
of using the systemd service. of using the systemd service.
Note that you might need to relog to have the correct binary in your
PATH upon changing this option.
''; '';
}; };
@ -129,7 +126,12 @@ in
(mkIf cfg.allowAnyUser { (mkIf cfg.allowAnyUser {
security.wrappers.physlock = { source = "${pkgs.physlock}/bin/physlock"; user = "root"; }; security.wrappers.physlock =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.physlock}/bin/physlock";
};
}) })
]); ]);


@ -27,7 +27,7 @@ in
{ {
# No documentation about correct triggers, so guessing at them. # No documentation about correct triggers, so guessing at them.
config = mkIf (cfg.enable && kerberos == pkgs.heimdalFull) { config = mkIf (cfg.enable && kerberos == pkgs.heimdal) {
systemd.services.kadmind = { systemd.services.kadmind = {
description = "Kerberos Administration Daemon"; description = "Kerberos Administration Daemon";
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];


@ -37,7 +37,9 @@ in {
users.users.localtimed = { users.users.localtimed = {
description = "localtime daemon"; description = "localtime daemon";
isSystemUser = true; isSystemUser = true;
group = "localtimed";
}; };
users.groups.localtimed = {};
systemd.services.localtime = { systemd.services.localtime = {
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];


@ -44,8 +44,10 @@ in
security.wrappers = mkIf cfg.enableSysAdminCapability { security.wrappers = mkIf cfg.enableSysAdminCapability {
replay-sorcery = { replay-sorcery = {
source = "${pkgs.replay-sorcery}/bin/replay-sorcery"; owner = "root";
group = "root";
capabilities = "cap_sys_admin+ep"; capabilities = "cap_sys_admin+ep";
source = "${pkgs.replay-sorcery}/bin/replay-sorcery";
}; };
}; };


@ -1,16 +1,21 @@
{ config, lib, pkgs, ... }: { config, pkgs, lib, ... }:
let let
inherit (lib) mkDefault mkEnableOption mkForce mkIf mkMerge mkOption types maintainers recursiveUpdate;
inherit (lib) any attrValues concatMapStrings concatMapStringsSep flatten literalExample;
inherit (lib) filterAttrs mapAttrs mapAttrs' mapAttrsToList nameValuePair optional optionalAttrs optionalString;
inherit (lib) mkEnableOption mkForce mkIf mkMerge mkOption optionalAttrs recursiveUpdate types maintainers; cfg = migrateOldAttrs config.services.dokuwiki;
inherit (lib) concatMapStringsSep flatten mapAttrs mapAttrs' mapAttrsToList nameValuePair concatMapStringSep; eachSite = cfg.sites;
eachSite = config.services.dokuwiki;
user = "dokuwiki"; user = "dokuwiki";
group = config.services.nginx.group; webserver = config.services.${cfg.webserver};
stateDir = hostName: "/var/lib/dokuwiki/${hostName}/data";
dokuwikiAclAuthConfig = cfg: pkgs.writeText "acl.auth.php" '' # Migrate config.services.dokuwiki.<hostName> to config.services.dokuwiki.sites.<hostName>
oldSites = filterAttrs (o: _: o != "sites" && o != "webserver");
migrateOldAttrs = cfg: cfg // { sites = cfg.sites // oldSites cfg; };
dokuwikiAclAuthConfig = hostName: cfg: pkgs.writeText "acl.auth-${hostName}.php" ''
# acl.auth.php # acl.auth.php
# <?php exit()?> # <?php exit()?>
# #
@ -19,7 +24,7 @@ let
${toString cfg.acl} ${toString cfg.acl}
''; '';
dokuwikiLocalConfig = cfg: pkgs.writeText "local.php" '' dokuwikiLocalConfig = hostName: cfg: pkgs.writeText "local-${hostName}.php" ''
<?php <?php
$conf['savedir'] = '${cfg.stateDir}'; $conf['savedir'] = '${cfg.stateDir}';
$conf['superuser'] = '${toString cfg.superUser}'; $conf['superuser'] = '${toString cfg.superUser}';
@ -28,11 +33,12 @@ let
${toString cfg.extraConfig} ${toString cfg.extraConfig}
''; '';
dokuwikiPluginsLocalConfig = cfg: pkgs.writeText "plugins.local.php" '' dokuwikiPluginsLocalConfig = hostName: cfg: pkgs.writeText "plugins.local-${hostName}.php" ''
<?php <?php
${cfg.pluginsConfig} ${cfg.pluginsConfig}
''; '';
pkg = hostName: cfg: pkgs.stdenv.mkDerivation rec { pkg = hostName: cfg: pkgs.stdenv.mkDerivation rec {
pname = "dokuwiki-${hostName}"; pname = "dokuwiki-${hostName}";
version = src.version; version = src.version;
@ -43,13 +49,13 @@ let
cp -r * $out/ cp -r * $out/
# symlink the dokuwiki config # symlink the dokuwiki config
ln -s ${dokuwikiLocalConfig cfg} $out/share/dokuwiki/local.php ln -s ${dokuwikiLocalConfig hostName cfg} $out/share/dokuwiki/local.php
# symlink plugins config # symlink plugins config
ln -s ${dokuwikiPluginsLocalConfig cfg} $out/share/dokuwiki/plugins.local.php ln -s ${dokuwikiPluginsLocalConfig hostName cfg} $out/share/dokuwiki/plugins.local.php
# symlink acl # symlink acl
ln -s ${dokuwikiAclAuthConfig cfg} $out/share/dokuwiki/acl.auth.php ln -s ${dokuwikiAclAuthConfig hostName cfg} $out/share/dokuwiki/acl.auth.php
# symlink additional plugin(s) and templates(s) # symlink additional plugin(s) and templates(s)
${concatMapStringsSep "\n" (template: "ln -s ${template} $out/share/dokuwiki/lib/tpl/${template.name}") cfg.templates} ${concatMapStringsSep "\n" (template: "ln -s ${template} $out/share/dokuwiki/lib/tpl/${template.name}") cfg.templates}
@ -57,26 +63,19 @@ let
''; '';
}; };
siteOpts = { config, lib, name, ...}: { siteOpts = { config, lib, name, ... }:
{
options = { options = {
enable = mkEnableOption "DokuWiki web application.";
package = mkOption { package = mkOption {
type = types.package; type = types.package;
default = pkgs.dokuwiki; default = pkgs.dokuwiki;
description = "Which dokuwiki package to use."; description = "Which DokuWiki package to use.";
};
hostName = mkOption {
type = types.str;
default = "localhost";
description = "FQDN for the instance.";
}; };
stateDir = mkOption { stateDir = mkOption {
type = types.path; type = types.path;
default = "/var/lib/dokuwiki/${name}/data"; default = "/var/lib/dokuwiki/${name}/data";
description = "Location of the dokuwiki state directory."; description = "Location of the DokuWiki state directory.";
}; };
acl = mkOption { acl = mkOption {
@ -161,20 +160,6 @@ let
''; '';
}; };
extraConfig = mkOption {
type = types.nullOr types.lines;
default = null;
example = ''
$conf['title'] = 'My Wiki';
$conf['userewrite'] = 1;
'';
description = ''
DokuWiki configuration. Refer to
<link xlink:href="https://www.dokuwiki.org/config"/>
for details on supported values.
'';
};
plugins = mkOption { plugins = mkOption {
type = types.listOf types.path; type = types.listOf types.path;
default = []; default = [];
@ -193,7 +178,7 @@ let
}; };
sourceRoot = "."; sourceRoot = ".";
# We need unzip to build this package # We need unzip to build this package
nativeBuildInputs = [ pkgs.unzip ]; buildInputs = [ pkgs.unzip ];
# Installing simply means copying all files to the output directory # Installing simply means copying all files to the output directory
installPhase = "mkdir -p $out; cp -R * $out/"; installPhase = "mkdir -p $out; cp -R * $out/";
}; };
@ -220,7 +205,7 @@ let
sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6"; sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6";
}; };
# We need unzip to build this package # We need unzip to build this package
nativeBuildInputs = [ pkgs.unzip ]; buildInputs = [ pkgs.unzip ];
# Installing simply means copying all files to the output directory # Installing simply means copying all files to the output directory
installPhase = "mkdir -p $out; cp -R * $out/"; installPhase = "mkdir -p $out; cp -R * $out/";
}; };
@ -241,105 +226,153 @@ let
"pm.max_requests" = 500; "pm.max_requests" = 500;
}; };
description = '' description = ''
Options for the dokuwiki PHP pool. See the documentation on <literal>php-fpm.conf</literal> Options for the DokuWiki PHP pool. See the documentation on <literal>php-fpm.conf</literal>
for details on configuration directives. for details on configuration directives.
''; '';
}; };
nginx = mkOption { extraConfig = mkOption {
type = types.submodule ( type = types.nullOr types.lines;
recursiveUpdate default = null;
(import ../web-servers/nginx/vhost-options.nix { inherit config lib; }) {} example = ''
); $conf['title'] = 'My Wiki';
default = {}; $conf['userewrite'] = 1;
example = { '';
serverAliases = [
"wiki.\${config.networking.domain}"
];
# To enable encryption and let let's encrypt take care of certificate
forceSSL = true;
enableACME = true;
};
description = '' description = ''
With this option, you can customize the nginx virtualHost settings. DokuWiki configuration. Refer to
<link xlink:href="https://www.dokuwiki.org/config"/>
for details on supported values.
''; '';
}; };
}; };
}; };
in in
{ {
# interface # interface
options = { options = {
services.dokuwiki = mkOption { services.dokuwiki = mkOption {
type = types.submodule {
# Used to support old interface
freeformType = types.attrsOf (types.submodule siteOpts);
# New interface
options.sites = mkOption {
type = types.attrsOf (types.submodule siteOpts); type = types.attrsOf (types.submodule siteOpts);
default = {}; default = {};
description = "Sepcification of one or more dokuwiki sites to serve."; description = "Specification of one or more DokuWiki sites to serve";
}; };
options.webserver = mkOption {
type = types.enum [ "nginx" "caddy" ];
default = "nginx";
description = ''
Whether to use nginx or caddy for virtual host management.
Further nginx configuration can be done by adapting <literal>services.nginx.virtualHosts.&lt;name&gt;</literal>.
See <xref linkend="opt-services.nginx.virtualHosts"/> for further information.
Further caddy configuration can be done by adapting <literal>services.caddy.virtualHosts.&lt;name&gt;</literal>.
See <xref linkend="opt-services.caddy.virtualHosts"/> for further information.
'';
};
};
default = {};
description = "DokuWiki configuration";
};
}; };
# implementation # implementation
config = mkIf (eachSite != {}) (mkMerge [{
config = mkIf (eachSite != {}) {
warnings = mapAttrsToList (hostName: cfg: mkIf (cfg.superUser == null) "Not setting services.dokuwiki.${hostName} superUser will impair your ability to administer DokuWiki") eachSite;
assertions = flatten (mapAttrsToList (hostName: cfg: assertions = flatten (mapAttrsToList (hostName: cfg:
[{ [{
assertion = cfg.aclUse -> (cfg.acl != null || cfg.aclFile != null); assertion = cfg.aclUse -> (cfg.acl != null || cfg.aclFile != null);
message = "Either services.dokuwiki.${hostName}.acl or services.dokuwiki.${hostName}.aclFile is mandatory if aclUse true"; message = "Either services.dokuwiki.sites.${hostName}.acl or services.dokuwiki.sites.${hostName}.aclFile is mandatory if aclUse true";
} }
{ {
assertion = cfg.usersFile != null -> cfg.aclUse != false; assertion = cfg.usersFile != null -> cfg.aclUse != false;
message = "services.dokuwiki.${hostName}.aclUse must be true if usersFile is not null"; message = "services.dokuwiki.sites.${hostName}.aclUse must be true if usersFile is not null";
} }
]) eachSite); ]) eachSite);
warnings = mapAttrsToList (hostName: _: ''services.dokuwiki."${hostName}" is deprecated use services.dokuwiki.sites."${hostName}"'') (oldSites cfg);
services.phpfpm.pools = mapAttrs' (hostName: cfg: ( services.phpfpm.pools = mapAttrs' (hostName: cfg: (
nameValuePair "dokuwiki-${hostName}" { nameValuePair "dokuwiki-${hostName}" {
inherit user; inherit user;
inherit group; group = webserver.group;
phpEnv = { phpEnv = {
DOKUWIKI_LOCAL_CONFIG = "${dokuwikiLocalConfig cfg}"; DOKUWIKI_LOCAL_CONFIG = "${dokuwikiLocalConfig hostName cfg}";
DOKUWIKI_PLUGINS_LOCAL_CONFIG = "${dokuwikiPluginsLocalConfig cfg}"; DOKUWIKI_PLUGINS_LOCAL_CONFIG = "${dokuwikiPluginsLocalConfig hostName cfg}";
} // optionalAttrs (cfg.usersFile != null) { } // optionalAttrs (cfg.usersFile != null) {
DOKUWIKI_USERS_AUTH_CONFIG = "${cfg.usersFile}"; DOKUWIKI_USERS_AUTH_CONFIG = "${cfg.usersFile}";
} //optionalAttrs (cfg.aclUse) { } //optionalAttrs (cfg.aclUse) {
DOKUWIKI_ACL_AUTH_CONFIG = if (cfg.acl != null) then "${dokuwikiAclAuthConfig cfg}" else "${toString cfg.aclFile}"; DOKUWIKI_ACL_AUTH_CONFIG = if (cfg.acl != null) then "${dokuwikiAclAuthConfig hostName cfg}" else "${toString cfg.aclFile}";
}; };
settings = { settings = {
"listen.mode" = "0660"; "listen.owner" = webserver.user;
"listen.owner" = user; "listen.group" = webserver.group;
"listen.group" = group;
} // cfg.poolConfig; } // cfg.poolConfig;
})) eachSite; }
)) eachSite;
}
{
systemd.tmpfiles.rules = flatten (mapAttrsToList (hostName: cfg: [
"d ${stateDir hostName}/attic 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/cache 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/index 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/locks 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/media 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/media_attic 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/media_meta 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/meta 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/pages 0750 ${user} ${webserver.group} - -"
"d ${stateDir hostName}/tmp 0750 ${user} ${webserver.group} - -"
] ++ lib.optional (cfg.aclFile != null) "C ${cfg.aclFile} 0640 ${user} ${webserver.group} - ${pkg hostName cfg}/share/dokuwiki/conf/acl.auth.php.dist"
++ lib.optional (cfg.usersFile != null) "C ${cfg.usersFile} 0640 ${user} ${webserver.group} - ${pkg hostName cfg}/share/dokuwiki/conf/users.auth.php.dist"
) eachSite);
users.users.${user} = {
group = webserver.group;
isSystemUser = true;
};
}
(mkIf (cfg.webserver == "nginx") {
services.nginx = { services.nginx = {
enable = true; enable = true;
virtualHosts = mapAttrs (hostName: cfg: mkMerge [ cfg.nginx { virtualHosts = mapAttrs (hostName: cfg: {
root = mkForce "${pkg hostName cfg}/share/dokuwiki"; serverName = mkDefault hostName;
extraConfig = lib.optionalString (cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME) "fastcgi_param HTTPS on;"; root = "${pkg hostName cfg}/share/dokuwiki";
locations."~ /(conf/|bin/|inc/|install.php)" = { locations = {
"~ /(conf/|bin/|inc/|install.php)" = {
extraConfig = "deny all;"; extraConfig = "deny all;";
}; };
locations."~ ^/data/" = { "~ ^/data/" = {
root = "${cfg.stateDir}"; root = "${stateDir hostName}";
extraConfig = "internal;"; extraConfig = "internal;";
}; };
locations."~ ^/lib.*\\.(js|css|gif|png|ico|jpg|jpeg)$" = { "~ ^/lib.*\.(js|css|gif|png|ico|jpg|jpeg)$" = {
extraConfig = "expires 365d;"; extraConfig = "expires 365d;";
}; };
locations."/" = { "/" = {
priority = 1; priority = 1;
index = "doku.php"; index = "doku.php";
extraConfig = "try_files $uri $uri/ @dokuwiki;"; extraConfig = ''try_files $uri $uri/ @dokuwiki;'';
}; };
locations."@dokuwiki" = { "@dokuwiki" = {
extraConfig = '' extraConfig = ''
# rewrites "doku.php/" out of the URLs if you set the userwrite setting to .htaccess in dokuwiki config page # rewrites "doku.php/" out of the URLs if you set the userwrite setting to .htaccess in dokuwiki config page
rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last; rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last;
@ -349,40 +382,66 @@ in
''; '';
}; };
locations."~ \\.php$" = { "~ \\.php$" = {
extraConfig = '' extraConfig = ''
try_files $uri $uri/ /doku.php; try_files $uri $uri/ /doku.php;
include ${pkgs.nginx}/conf/fastcgi_params; include ${pkgs.nginx}/conf/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param REDIRECT_STATUS 200; fastcgi_param REDIRECT_STATUS 200;
fastcgi_pass unix:${config.services.phpfpm.pools."dokuwiki-${hostName}".socket}; fastcgi_pass unix:${config.services.phpfpm.pools."dokuwiki-${hostName}".socket};
${lib.optionalString (cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME) "fastcgi_param HTTPS on;"}
''; '';
}; };
}]) eachSite;
};
systemd.tmpfiles.rules = flatten (mapAttrsToList (hostName: cfg: [
"d ${cfg.stateDir}/attic 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/cache 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/index 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/locks 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/media 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/media_attic 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/media_meta 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/meta 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/pages 0750 ${user} ${group} - -"
"d ${cfg.stateDir}/tmp 0750 ${user} ${group} - -"
] ++ lib.optional (cfg.aclFile != null) "C ${cfg.aclFile} 0640 ${user} ${group} - ${pkg hostName cfg}/share/dokuwiki/conf/acl.auth.php.dist"
++ lib.optional (cfg.usersFile != null) "C ${cfg.usersFile} 0640 ${user} ${group} - ${pkg hostName cfg}/share/dokuwiki/conf/users.auth.php.dist"
) eachSite);
users.users.${user} = {
group = group;
isSystemUser = true;
}; };
}) eachSite;
}; };
})
meta.maintainers = with maintainers; [ _1000101 ]; (mkIf (cfg.webserver == "caddy") {
services.caddy = {
enable = true;
virtualHosts = mapAttrs' (hostName: cfg: (
nameValuePair "http://${hostName}" {
extraConfig = ''
root * ${pkg hostName cfg}/share/dokuwiki
file_server
encode zstd gzip
php_fastcgi unix/${config.services.phpfpm.pools."dokuwiki-${hostName}".socket}
@restrict_files {
path /data/* /conf/* /bin/* /inc/* /vendor/* /install.php
}
respond @restrict_files 404
@allow_media {
path_regexp path ^/_media/(.*)$
}
rewrite @allow_media /lib/exe/fetch.php?media=/{http.regexp.path.1}
@allow_detail {
path /_detail*
}
rewrite @allow_detail /lib/exe/detail.php?media={path}
@allow_export {
path /_export*
path_regexp export /([^/]+)/(.*)
}
rewrite @allow_export /doku.php?do=export_{http.regexp.export.1}&id={http.regexp.export.2}
try_files {path} {path}/ /doku.php?id={path}&{query}
'';
}
)) eachSite;
};
})
]);
meta.maintainers = with maintainers; [
_1000101
onny
];
} }
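With the new `sites`/`webserver` interface this diff introduces, a configuration could be sketched as below (host name, ACL line, and superuser are placeholders; `aclUse`, `acl`, and `superUser` are the options referenced by the module's assertions and warnings):

```
{
  services.dokuwiki = {
    # "nginx" (the default) or "caddy", per the new webserver enum.
    webserver = "caddy";
    sites."wiki.example.org" = {
      # Either acl or aclFile is mandatory when aclUse is true
      # (enforced by the assertion in the module).
      aclUse = true;
      acl = "* @ALL 8";
      # Leaving superUser unset triggers the module's warning.
      superUser = "admin";
    };
  };
}
```

The old `services.dokuwiki.<hostName>` attributes are still accepted via the `freeformType` migration shim, but emit a deprecation warning pointing at `services.dokuwiki.sites.<hostName>`.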


@ -9,6 +9,13 @@ let
RAILS_ENV = "production"; RAILS_ENV = "production";
NODE_ENV = "production"; NODE_ENV = "production";
# mastodon-web concurrency.
WEB_CONCURRENCY = toString cfg.webProcesses;
MAX_THREADS = toString cfg.webThreads;
# mastodon-streaming concurrency.
STREAMING_CLUSTER_NUM = toString cfg.streamingProcesses;
DB_USER = cfg.database.user; DB_USER = cfg.database.user;
REDIS_HOST = cfg.redis.host; REDIS_HOST = cfg.redis.host;
@ -146,18 +153,41 @@ in {
type = lib.types.port; type = lib.types.port;
default = 55000; default = 55000;
}; };
streamingProcesses = lib.mkOption {
description = ''
Processes used by the mastodon-streaming service.
Defaults to the number of CPU cores minus one.
'';
type = lib.types.nullOr lib.types.int;
default = null;
};
webPort = lib.mkOption { webPort = lib.mkOption {
description = "TCP port used by the mastodon-web service."; description = "TCP port used by the mastodon-web service.";
type = lib.types.port; type = lib.types.port;
default = 55001; default = 55001;
}; };
webProcesses = lib.mkOption {
description = "Processes used by the mastodon-web service.";
type = lib.types.int;
default = 2;
};
webThreads = lib.mkOption {
description = "Threads per process used by the mastodon-web service.";
type = lib.types.int;
default = 5;
};
sidekiqPort = lib.mkOption { sidekiqPort = lib.mkOption {
description = "TCP port used by the mastodon-sidekiq service"; description = "TCP port used by the mastodon-sidekiq service.";
type = lib.types.port; type = lib.types.port;
default = 55002; default = 55002;
}; };
sidekiqThreads = lib.mkOption {
description = "Worker threads used by the mastodon-sidekiq service.";
type = lib.types.int;
default = 25;
};
vapidPublicKeyFile = lib.mkOption { vapidPublicKeyFile = lib.mkOption {
description = '' description = ''
@ -524,9 +554,10 @@ in {
wantedBy = [ "multi-user.target" ]; wantedBy = [ "multi-user.target" ];
environment = env // { environment = env // {
PORT = toString(cfg.sidekiqPort); PORT = toString(cfg.sidekiqPort);
DB_POOL = toString cfg.sidekiqThreads;
}; };
serviceConfig = { serviceConfig = {
ExecStart = "${cfg.package}/bin/sidekiq -c 25 -r ${cfg.package}"; ExecStart = "${cfg.package}/bin/sidekiq -c ${toString cfg.sidekiqThreads} -r ${cfg.package}";
Restart = "always"; Restart = "always";
RestartSec = 20; RestartSec = 20;
EnvironmentFile = "/var/lib/mastodon/.secrets_env"; EnvironmentFile = "/var/lib/mastodon/.secrets_env";
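The new concurrency options wired into `env` and the sidekiq `ExecStart` above could be set like this (values are illustrative; the defaults from the hunk are 2 web processes, 5 threads, 25 sidekiq threads, and auto-detected streaming processes):

```
{
  services.mastodon = {
    webProcesses = 4;        # WEB_CONCURRENCY
    webThreads = 10;         # MAX_THREADS
    sidekiqThreads = 40;     # sidekiq -c and DB_POOL
    streamingProcesses = 3;  # STREAMING_CLUSTER_NUM; null = cores minus one
  };
}
```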


@ -103,7 +103,11 @@ in
config = mkIf (cfg.instances != {}) { config = mkIf (cfg.instances != {}) {
users.users.zope2.uid = config.ids.uids.zope2; users.users.zope2 = {
isSystemUser = true;
group = "zope2";
};
users.groups.zope2 = {};
systemd.services = systemd.services =
let let

Some files were not shown because too many files have changed in this diff.