Project import generated by Copybara.
GitOrigin-RevId: a930f7da84786807bb105df40e76b541604c3e72
This commit is contained in: parent 88abffb7d2, commit d9e13ed064
703 changed files with 17516 additions and 12948 deletions
|
@ -1,8 +1,16 @@
|
|||
# Fetchers {#chap-pkgs-fetchers}
|
||||
|
||||
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
|
||||
When using Nix, you will frequently need to download source code and other files from the internet. For this purpose, Nix provides the [_fixed output derivation_](https://nixos.org/manual/nix/stable/#fixed-output-drvs) feature and Nixpkgs provides various functions that implement the actual fetching from various protocols and services.
|
||||
|
||||
The two fetcher primitives are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
|
||||
## Caveats
|
||||
|
||||
Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument.
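One way to do this in practice is to temporarily replace the hash with a known-wrong value and copy the correct hash from the resulting mismatch error. The following is a minimal sketch with a made-up URL, assuming `lib.fakeSha256` from Nixpkgs' `lib` is in scope:

```nix
{ lib, fetchurl }:

fetchurl {
  # Hypothetical source: after bumping the URL or version, reset the hash too.
  url = "https://example.org/releases/hello-2.12.tar.gz";
  # lib.fakeSha256 is deliberately wrong, so Nix refuses to reuse any cached
  # output and the build error reports the hash that was actually downloaded.
  sha256 = lib.fakeSha256;
}
```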
|
||||
|
||||
For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
|
||||
|
||||
## `fetchurl` and `fetchzip` {#fetchurl}
|
||||
|
||||
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
|
||||
|
||||
```nix
|
||||
{ stdenv, fetchurl }:
|
||||
|
@ -20,7 +28,7 @@ The main difference between `fetchurl` and `fetchzip` is in how they store the c
|
|||
|
||||
`fetchpatch` works very similarly to `fetchurl` and takes the same arguments. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
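A minimal sketch of such a call might look as follows; the repository, commit hash and checksum are placeholders rather than a real patch:

```nix
{ fetchpatch }:

fetchpatch {
  name = "fix-build-with-newer-toolchain.patch";
  # Placeholder URL: any URL serving a unified diff works, for example a
  # GitHub commit URL with a ".patch" suffix.
  url = "https://github.com/example/project/commit/0123456789abcdef0123456789abcdef01234567.patch";
  # Hash of the *normalized* patch, not of the raw download.
  sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```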
|
||||
|
||||
Other fetcher functions allow you to add source code directly from a VCS such as subversion or git. These are mostly named straightforwardly after the command used with the VCS system. Because they give you a working repository, they act most like `fetchzip`.
|
||||
Most other fetchers return a directory rather than a single file.
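For example, a git checkout can be fetched as a directory like this; the repository, revision and hash are simply copied from the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) example:

```nix
{ fetchgit }:

fetchgit {
  url = "https://github.com/NixOS/nix";
  rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
  sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
}
```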
|
||||
|
||||
## `fetchsvn` {#fetchsvn}
|
||||
|
||||
|
|
1 third_party/nixpkgs/doc/builders/special.xml vendored
|
@ -7,4 +7,5 @@
|
|||
</para>
|
||||
<xi:include href="special/fhs-environments.section.xml" />
|
||||
<xi:include href="special/mkshell.section.xml" />
|
||||
<xi:include href="special/invalidateFetcherByDrvHash.section.xml" />
|
||||
</chapter>
|
||||
|
|
31 third_party/nixpkgs/doc/builders/special/invalidateFetcherByDrvHash.section.md vendored Normal file
|
@ -0,0 +1,31 @@
|
|||
|
||||
## `invalidateFetcherByDrvHash` {#sec-pkgs-invalidateFetcherByDrvHash}
|
||||
|
||||
Use the derivation hash to invalidate the output via name, for testing.
|
||||
|
||||
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
|
||||
|
||||
Normally, fixed output derivations can and should be cached by their output
|
||||
hash only, but for testing we want to re-fetch every time the fetcher changes.
|
||||
|
||||
Changes to the fetcher become apparent in the drvPath, which is a hash of
|
||||
how to fetch, rather than a fixed store path.
|
||||
By inserting this hash into the name, we can make sure to re-run the fetcher
|
||||
every time the fetcher changes.
|
||||
|
||||
This relies on the assumption that Nix isn't clever enough to reuse its
|
||||
database of local store contents to optimize fetching.
|
||||
|
||||
You might notice that the "salted" name derives from the normal invocation,
|
||||
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
|
||||
function twice: once to get a derivation hash, and again to produce the final
|
||||
fixed output derivation.
|
||||
|
||||
Example:
|
||||
|
||||
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
|
||||
name = "nix-source";
|
||||
url = "https://github.com/NixOS/nix";
|
||||
rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
|
||||
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
|
||||
};
|
|
@ -28,12 +28,12 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
|
|||
* `domain` (optional, defaults to `"github.com"`), domains including the strings `"github"` or `"gitlab"` in their names are automatically supported, otherwise, one must change the `fetcher` argument to support them (cf `pkgs/development/coq-modules/heq/default.nix` for an example),
|
||||
* `releaseRev` (optional, defaults to `(v: v)`), provides a default mapping from release names to revision hashes/branch names/tags,
|
||||
* `displayVersion` (optional), provides a way to alter the computation of `name` from `pname`, by explaining how to display version numbers,
|
||||
* `namePrefix` (optional), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
|
||||
* `namePrefix` (optional, defaults to `[ "coq" ]`), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
|
||||
* `extraBuildInputs` (optional), by default `buildInputs` just contains `coq`; this allows adding more build inputs,
|
||||
* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For a finer grain control, the `coq.ocamlPackages` attribute can be used in `extraBuildInputs` to depend on the same package set Coq was built against.
|
||||
* `useDune2ifVersion` (optional, defaults to `(x: false)`) uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2ifVersion = versions.isGe "1.1"` will use Dune2 if the version of the package is greater than or equal to `"1.1"`,
|
||||
* `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true; the presence of this attribute overrides the behavior of the previous one.
|
||||
* `opam-name` (optional, defaults to `coq-` followed by the value of `pname`), name of the Dune package to build.
|
||||
* `opam-name` (optional, defaults to concatenating with a dash separator the components of `namePrefix` and `pname`), name of the Dune package to build.
|
||||
* `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
|
||||
* `extraInstallFlags` (optional), allows extending `installFlags`, which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
|
||||
* `setCOQBIN` (optional, defaults to `true`); by default the environment variable `$COQBIN` is set to the current Coq's binary path, but one can disable this behavior by setting it to `false`,
|
||||
|
|
|
@ -24,6 +24,7 @@
|
|||
<xi:include href="lua.section.xml" />
|
||||
<xi:include href="maven.section.xml" />
|
||||
<xi:include href="ocaml.section.xml" />
|
||||
<xi:include href="octave.section.xml" />
|
||||
<xi:include href="perl.section.xml" />
|
||||
<xi:include href="php.section.xml" />
|
||||
<xi:include href="python.section.xml" />
|
||||
|
|
100 third_party/nixpkgs/doc/languages-frameworks/octave.section.md vendored Normal file
|
@ -0,0 +1,100 @@
|
|||
# Octave {#sec-octave}
|
||||
|
||||
## Introduction {#ssec-octave-introduction}
|
||||
|
||||
Octave is a modular scientific programming language and environment.
|
||||
A majority of the add-on packages listed on Octave's package [website](https://octave.sourceforge.io/packages.php) are packaged in nixpkgs.
|
||||
|
||||
## Structure {#ssec-octave-structure}
|
||||
|
||||
All Octave add-on packages are available in two ways:
|
||||
1. Under the top-level `octave` attribute, `octave.pkgs`.
|
||||
2. As a top-level attribute, `octavePackages`.
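As a quick illustration, both attribute paths point at the same package set; `symbolic` is only used here as an example package:

```nix
let
  pkgs = import <nixpkgs> { };
in {
  # Both of these evaluate to the same derivation.
  viaTopLevel = pkgs.octavePackages.symbolic;
  viaOctave = pkgs.octave.pkgs.symbolic;
}
```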
|
||||
|
||||
## Packaging Octave Packages {#ssec-octave-packaging}
|
||||
|
||||
Nixpkgs provides a function `buildOctavePackage`, a generic package builder function for any Octave package that complies with Octave's current packaging format.
|
||||
|
||||
All Octave packages are defined in [pkgs/top-level/octave-packages.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/octave-packages.nix) rather than `pkgs/all-packages.nix`.
|
||||
Each package is defined in its own file in the [pkgs/development/octave-modules](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/octave-modules) directory.
|
||||
Octave packages are made available in `all-packages.nix` through both the attribute `octavePackages` and `octave.pkgs`.
|
||||
You can test building an Octave package as follows:
|
||||
|
||||
```ShellSession
|
||||
$ nix-build -A octavePackages.symbolic
|
||||
```
|
||||
|
||||
When building Octave packages with `nix-build`, the `buildOctavePackage` function adds `octave-octaveVersion` to the start of the package's name attribute.
|
||||
|
||||
This can be required when installing the package using `nix-env`:
|
||||
|
||||
```ShellSession
|
||||
$ nix-env -i octave-6.2.0-symbolic
|
||||
```
|
||||
|
||||
Alternatively, you can install it using the attribute name:
|
||||
|
||||
```ShellSession
|
||||
$ nix-env -i -A octavePackages.symbolic
|
||||
```
|
||||
|
||||
You can build Octave with packages by using the `withPackages` passed-through function.
|
||||
|
||||
```ShellSession
|
||||
$ nix-shell -p 'octave.withPackages (ps: with ps; [ symbolic ])'
|
||||
```
|
||||
|
||||
This will also work in a `shell.nix` file.
|
||||
|
||||
```nix
|
||||
{ pkgs ? import <nixpkgs> { }}:
|
||||
|
||||
pkgs.mkShell {
|
||||
nativeBuildInputs = with pkgs; [
|
||||
(octave.withPackages (opkgs: with opkgs; [ symbolic ]))
|
||||
];
|
||||
}
|
||||
```
|
||||
|
||||
### `buildOctavePackage` Steps {#sssec-buildOctavePackage-steps}
|
||||
|
||||
The `buildOctavePackage` function does several things to make sure everything works properly.
|
||||
|
||||
1. Sets the environment variable `OCTAVE_HISTFILE` to `/dev/null` during package compilation so that the commands run through the Octave interpreter directly are not logged.
|
||||
2. Skips the configuration step, because the packages are stored as gzipped tarballs, which Octave itself handles directly.
|
||||
3. Changes the hierarchy of the tarball so that only a single directory is at its top-most level.
|
||||
4. Uses Octave itself to run the `pkg build` command, which unzips the tarball, extracts the necessary files written in Octave, compiles any code written in C++ or Fortran, and places the fully compiled artifact in `$out`.
|
||||
|
||||
`buildOctavePackage` is built on top of `stdenv` in a standard way, allowing most things to be customized.
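Since `buildOctavePackage` is `stdenv`-based, a package definition looks much like an ordinary derivation. The following is only a sketch: the `pname`, URL and hash are placeholders, and `requiredOctavePackages` is explained under "Handling Dependencies" below:

```nix
{ buildOctavePackage, fetchurl, symbolic }:

buildOctavePackage rec {
  pname = "example";   # placeholder package name
  version = "1.0.0";

  src = fetchurl {
    # Placeholder URL and hash for an Octave Forge style tarball.
    url = "mirror://sourceforge/octave/${pname}-${version}.tar.gz";
    sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
  };

  # Other Octave packages that must be loadable together with this one.
  requiredOctavePackages = [ symbolic ];
}
```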
|
||||
|
||||
### Handling Dependencies {#sssec-octave-handling-dependencies}
|
||||
|
||||
In Octave packages, there are four sets of dependencies that can be specified:
|
||||
|
||||
`nativeBuildInputs`
|
||||
: Just like other packages, `nativeBuildInputs` is intended for architecture-dependent build-time-only dependencies.
|
||||
|
||||
`buildInputs`
|
||||
: Like other packages, `buildInputs` is intended for architecture-independent build-time-only dependencies.
|
||||
|
||||
`propagatedBuildInputs`
|
||||
: Similar to other packages, `propagatedBuildInputs` is intended for packages that are required for both building and running of the package.
|
||||
See [Symbolic](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/octave-modules/symbolic/default.nix) for how this works and why it is needed.
|
||||
|
||||
`requiredOctavePackages`
|
||||
: This is a special dependency attribute that ensures the listed Octave packages are registered as dependencies of this package and are made available simultaneously when it is loaded in Octave.
|
||||
|
||||
### Installing Octave Packages {#sssec-installing-octave-packages}
|
||||
|
||||
By default, the `buildOctavePackage` function does _not_ install the requested package into Octave for use.
|
||||
The function will only build the requested package.
|
||||
This is due to Octave maintaining a text-based database about which packages are installed where.
|
||||
To this end, when all the requested packages have been built, the Octave package and all its add-on packages are put together into an environment, similar to Python.
|
||||
|
||||
1. First, all the Octave binaries are wrapped with the environment variable `OCTAVE_SITE_INITFILE` set to a file in `$out`, which is required for Octave to be able to find the non-standard package database location.
|
||||
2. Because of the way `buildEnv` works, all tarballs that are present (which should be all Octave packages to install) should be removed.
|
||||
3. The path down to the default install location of Octave packages is recreated so that Nix-operated Octave can install the packages.
|
||||
4. Install the packages into the `$out` environment while writing package entries to the database file.
|
||||
This database file is unique for each different (according to Nix) environment invocation.
|
||||
5. Rewrite the Octave-wide startup file to read from the list of packages installed in that particular environment.
|
||||
6. Wrap any programs that are required by the Octave packages so that they work with all the paths defined within the environment.
|
|
@ -20,7 +20,7 @@ or use Mozilla's [Rust nightlies overlay](#using-the-rust-nightlies-overlay).
|
|||
Rust applications are packaged by using the `buildRustPackage` helper from `rustPlatform`:
|
||||
|
||||
```nix
|
||||
{ lib, rustPlatform }:
|
||||
{ lib, fetchFromGitHub, rustPlatform }:
|
||||
|
||||
rustPlatform.buildRustPackage rec {
|
||||
pname = "ripgrep";
|
||||
|
@ -116,22 +116,44 @@ is updated after every change to `Cargo.lock`. Therefore,
|
|||
a `Cargo.lock` file using the `cargoLock` argument. For example:
|
||||
|
||||
```nix
|
||||
rustPlatform.buildRustPackage rec {
|
||||
rustPlatform.buildRustPackage {
|
||||
pname = "myproject";
|
||||
version = "1.0.0";
|
||||
|
||||
cargoLock = {
|
||||
lockFile = ./Cargo.lock;
|
||||
}
|
||||
};
|
||||
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
This will retrieve the dependencies using fixed-output derivations from
|
||||
the specified lockfile. Note that setting `cargoLock.lockFile` doesn't
|
||||
add a `Cargo.lock` to your `src`, and a `Cargo.lock` is still required
|
||||
to build a rust package. A simple fix is to use:
|
||||
the specified lockfile.
|
||||
|
||||
One caveat is that `Cargo.lock` cannot be patched in the `patchPhase`
|
||||
because it runs after the dependencies have already been fetched. If
|
||||
you need to patch or generate the lockfile you can alternatively set
|
||||
`cargoLock.lockFileContents` to a string of its contents:
|
||||
|
||||
```nix
|
||||
rustPlatform.buildRustPackage {
|
||||
pname = "myproject";
|
||||
version = "1.0.0";
|
||||
|
||||
cargoLock = let
|
||||
fixupLockFile = path: f (builtins.readFile path); # `f` stands for whatever fixup is applied to the lock file text
|
||||
in {
|
||||
lockFileContents = fixupLockFile ./Cargo.lock;
|
||||
};
|
||||
|
||||
# ...
|
||||
}
|
||||
```
|
||||
|
||||
Note that setting `cargoLock.lockFile` or `cargoLock.lockFileContents`
|
||||
doesn't add a `Cargo.lock` to your `src`, and a `Cargo.lock` is still
|
||||
required to build a rust package. A simple fix is to use:
|
||||
|
||||
```nix
|
||||
postPatch = ''
|
||||
|
|
|
@ -79,7 +79,7 @@ A commonly adopted convention in `nixpkgs` is that executables provided by the p
|
|||
|
||||
The `glibc` package is a deliberate single exception to the “binaries first” convention. The `glibc` has `libs` as its first output allowing the libraries provided by `glibc` to be referenced directly (e.g. `${stdenv.glibc}/lib/ld-linux-x86-64.so.2`). The executables provided by `glibc` can be accessed via its `bin` attribute (e.g. `${stdenv.glibc.bin}/bin/ldd`).
|
||||
|
||||
The reason for why `glibc` deviates from the convention is because referencing a library provided by `glibc` is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of `glibc` libraries from Nix packages (please see the documentation on [patchelf](https://github.com/NixOS/patchelf/blob/master/README) for more details).
|
||||
The reason `glibc` deviates from the convention is that referencing a library provided by `glibc` is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of `glibc` libraries from Nix packages (please see the documentation on [patchelf](https://github.com/NixOS/patchelf) for more details).
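For illustration, here is a minimal sketch that references both outputs from another derivation; `runCommand` is assumed to be taken from Nixpkgs, and the paths follow the examples given above:

```nix
{ stdenv, runCommand }:

runCommand "glibc-outputs-demo" { } ''
  # The first ("libs") output of glibc is what the bare attribute refers to.
  echo "dynamic linker: ${stdenv.glibc}/lib/ld-linux-x86-64.so.2" > $out
  # Executables such as ldd live in the separate "bin" output.
  echo "ldd: ${stdenv.glibc.bin}/bin/ldd" >> $out
''
```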
|
||||
|
||||
### File type groups {#multiple-output-file-type-groups}
|
||||
|
||||
|
|
|
@ -1853,6 +1853,12 @@
|
|||
githubId = 1762540;
|
||||
name = "Changlin Li";
|
||||
};
|
||||
chanley = {
|
||||
email = "charlieshanley@gmail.com";
|
||||
github = "charlieshanley";
|
||||
githubId = 8228888;
|
||||
name = "Charlie Hanley";
|
||||
};
|
||||
CharlesHD = {
|
||||
email = "charleshdespointes@gmail.com";
|
||||
github = "CharlesHD";
|
||||
|
@ -4441,6 +4447,12 @@
|
|||
fingerprint = "D618 7A03 A40A 3D56 62F5 4B46 03EF BF83 9A5F DC15";
|
||||
}];
|
||||
};
|
||||
hleboulanger = {
|
||||
email = "hleboulanger@protonmail.com";
|
||||
name = "Harold Leboulanger";
|
||||
github = "thbkrhsw";
|
||||
githubId = 33122;
|
||||
};
|
||||
hlolli = {
|
||||
email = "hlolli@gmail.com";
|
||||
github = "hlolli";
|
||||
|
@ -4565,6 +4577,16 @@
|
|||
githubId = 2789926;
|
||||
name = "Imran Hossain";
|
||||
};
|
||||
iagoq = {
|
||||
email = "18238046+iagocq@users.noreply.github.com";
|
||||
github = "iagocq";
|
||||
githubId = 18238046;
|
||||
name = "Iago Manoel Brito";
|
||||
keys = [{
|
||||
longkeyid = "rsa4096/0x35D39F9A9A1BC8DA";
|
||||
fingerprint = "DF90 9D58 BEE4 E73A 1B8C 5AF3 35D3 9F9A 9A1B C8DA";
|
||||
}];
|
||||
};
|
||||
iammrinal0 = {
|
||||
email = "nixpkgs@mrinalpurohit.in";
|
||||
github = "iammrinal0";
|
||||
|
@ -9182,6 +9204,12 @@
|
|||
githubId = 546296;
|
||||
name = "Eric Ren";
|
||||
};
|
||||
renesat = {
|
||||
name = "Ivan Smolyakov";
|
||||
email = "smol.ivan97@gmail.com";
|
||||
github = "renesat";
|
||||
githubId = 11363539;
|
||||
};
|
||||
renzo = {
|
||||
email = "renzocarbonara@gmail.com";
|
||||
github = "k0001";
|
||||
|
@ -9846,12 +9874,6 @@
|
|||
githubId = 11613056;
|
||||
name = "Scott Dier";
|
||||
};
|
||||
sdll = {
|
||||
email = "sasha.delly@gmail.com";
|
||||
github = "sdll";
|
||||
githubId = 17913919;
|
||||
name = "Sasha Illarionov";
|
||||
};
|
||||
SeanZicari = {
|
||||
email = "sean.zicari@gmail.com";
|
||||
github = "SeanZicari";
|
||||
|
|
10 third_party/nixpkgs/maintainers/scripts/haskell/dependencies.nix vendored Normal file
|
@ -0,0 +1,10 @@
|
|||
# Nix script to calculate the Haskell dependencies of every haskellPackage. Used by ./hydra-report.hs.
|
||||
let
|
||||
pkgs = import ../../.. {};
|
||||
inherit (pkgs) lib;
|
||||
getDeps = _: pkg: {
|
||||
deps = builtins.filter (x: !isNull x) (map (x: x.pname or null) (pkg.propagatedBuildInputs or []));
|
||||
broken = (pkg.meta.hydraPlatforms or [null]) == [];
|
||||
};
|
||||
in
|
||||
lib.mapAttrs getDeps pkgs.haskellPackages
|
|
@ -26,6 +26,8 @@ Because step 1) is quite expensive and takes roughly ~5 minutes the result is ca
|
|||
{-# LANGUAGE ScopedTypeVariables #-}
|
||||
{-# LANGUAGE TupleSections #-}
|
||||
{-# OPTIONS_GHC -Wall #-}
|
||||
{-# LANGUAGE ViewPatterns #-}
|
||||
{-# LANGUAGE TupleSections #-}
|
||||
|
||||
import Control.Monad (forM_, (<=<))
|
||||
import Control.Monad.Trans (MonadIO (liftIO))
|
||||
|
@ -41,7 +43,7 @@ import Data.List.NonEmpty (NonEmpty, nonEmpty)
|
|||
import qualified Data.List.NonEmpty as NonEmpty
|
||||
import Data.Map.Strict (Map)
|
||||
import qualified Data.Map.Strict as Map
|
||||
import Data.Maybe (fromMaybe, mapMaybe)
|
||||
import Data.Maybe (fromMaybe, mapMaybe, isNothing)
|
||||
import Data.Monoid (Sum (Sum, getSum))
|
||||
import Data.Sequence (Seq)
|
||||
import qualified Data.Sequence as Seq
|
||||
|
@ -70,6 +72,12 @@ import System.Directory (XdgDirectory (XdgCache), getXdgDirectory)
|
|||
import System.Environment (getArgs)
|
||||
import System.Process (readProcess)
|
||||
import Prelude hiding (id)
|
||||
import Data.List (sortOn)
|
||||
import Control.Concurrent.Async (concurrently)
|
||||
import Control.Exception (evaluate)
|
||||
import qualified Data.IntMap.Strict as IntMap
|
||||
import qualified Data.IntSet as IntSet
|
||||
import Data.Bifunctor (second)
|
||||
|
||||
newtype JobsetEvals = JobsetEvals
|
||||
{ evals :: Seq Eval
|
||||
|
@ -134,20 +142,17 @@ hydraEvalCommand = "hydra-eval-jobs"
|
|||
hydraEvalParams :: [String]
|
||||
hydraEvalParams = ["-I", ".", "pkgs/top-level/release-haskell.nix"]
|
||||
|
||||
handlesCommand :: FilePath
|
||||
handlesCommand = "nix-instantiate"
|
||||
nixExprCommand :: FilePath
|
||||
nixExprCommand = "nix-instantiate"
|
||||
|
||||
handlesParams :: [String]
|
||||
handlesParams = ["--eval", "--strict", "--json", "-"]
|
||||
|
||||
handlesExpression :: String
|
||||
handlesExpression = "with import ./. {}; with lib; zipAttrsWith (_: builtins.head) (mapAttrsToList (_: v: if v ? github then { \"${v.email}\" = v.github; } else {}) (import maintainers/maintainer-list.nix))"
|
||||
nixExprParams :: [String]
|
||||
nixExprParams = ["--eval", "--strict", "--json"]
|
||||
|
||||
-- | This newtype is used to parse a Hydra job output from @hydra-eval-jobs@.
|
||||
-- The only field we are interested in is @maintainers@, which is why this
|
||||
-- is just a newtype.
|
||||
--
|
||||
-- Note that there are occassionally jobs that don't have a maintainers
|
||||
-- Note that there are occasionally jobs that don't have a maintainers
|
||||
-- field, which is why this has to be @Maybe Text@.
|
||||
newtype Maintainers = Maintainers { maintainers :: Maybe Text }
|
||||
deriving stock (Generic, Show)
|
||||
|
@ -195,13 +200,49 @@ type EmailToGitHubHandles = Map Text Text
|
|||
-- @@
|
||||
type MaintainerMap = Map Text (NonEmpty Text)
|
||||
|
||||
-- | Generate a mapping of Hydra job names to maintainer GitHub handles.
|
||||
-- | Information about a package which lists its dependencies and whether the
|
||||
-- package is marked broken.
|
||||
data DepInfo = DepInfo {
|
||||
deps :: Set Text,
|
||||
broken :: Bool
|
||||
}
|
||||
deriving stock (Generic, Show)
|
||||
deriving anyclass (FromJSON, ToJSON)
|
||||
|
||||
-- | Map from package names to their DepInfo. This is the data we get out of a
|
||||
-- nix call.
|
||||
type DependencyMap = Map Text DepInfo
|
||||
|
||||
-- | Map from package names to their broken state, number of reverse dependencies (fst) and
|
||||
-- unbroken reverse dependencies (snd).
|
||||
type ReverseDependencyMap = Map Text (Int, Int)
|
||||
|
||||
-- | Calculate the (unbroken) reverse dependencies of a package by transitively
|
||||
-- going through all packages that it is a dependency of.
|
||||
calculateReverseDependencies :: DependencyMap -> ReverseDependencyMap
|
||||
calculateReverseDependencies depMap = Map.fromDistinctAscList $ zip keys (zip (rdepMap False) (rdepMap True))
|
||||
where
|
||||
-- This code tries to efficiently invert the dependency map and calculate
|
||||
-- its transitive closure by internally identifying every pkg with its index
|
||||
-- in the package list and then using memoization.
|
||||
keys = Map.keys depMap
|
||||
pkgToIndexMap = Map.fromDistinctAscList (zip keys [0..])
|
||||
intDeps = zip [0..] $ (\DepInfo{broken,deps} -> (broken,mapMaybe (`Map.lookup` pkgToIndexMap) $ Set.toList deps)) <$> Map.elems depMap
|
||||
rdepMap onlyUnbroken = IntSet.size <$> resultList
|
||||
where
|
||||
resultList = go <$> [0..]
|
||||
oneStepMap = IntMap.fromListWith IntSet.union $ (\(key,(_,deps)) -> (,IntSet.singleton key) <$> deps) <=< filter (\(_, (broken,_)) -> not (broken && onlyUnbroken)) $ intDeps
|
||||
go pkg = IntSet.unions (oneStep:((resultList !!) <$> IntSet.toList oneStep))
|
||||
where oneStep = IntMap.findWithDefault mempty pkg oneStepMap
|
||||
|
||||
-- | Generate a mapping of Hydra job names to maintainer GitHub handles. Calls
|
||||
-- hydra-eval-jobs and the nix script ./maintainer-handles.nix.
|
||||
getMaintainerMap :: IO MaintainerMap
|
||||
getMaintainerMap = do
|
||||
hydraJobs :: HydraJobs <-
|
||||
readJSONProcess hydraEvalCommand hydraEvalParams "" "Failed to decode hydra-eval-jobs output: "
|
||||
readJSONProcess hydraEvalCommand hydraEvalParams "Failed to decode hydra-eval-jobs output: "
|
||||
handlesMap :: EmailToGitHubHandles <-
|
||||
readJSONProcess handlesCommand handlesParams handlesExpression "Failed to decode nix output for lookup of github handles: "
|
||||
readJSONProcess nixExprCommand ("maintainers/scripts/haskell/maintainer-handles.nix":nixExprParams) "Failed to decode nix output for lookup of github handles: "
|
||||
pure $ Map.mapMaybe (splitMaintainersToGitHubHandles handlesMap) hydraJobs
|
||||
where
|
||||
-- Split a comma-separated string of Maintainers into a NonEmpty list of
|
||||
|
@ -211,6 +252,12 @@ getMaintainerMap = do
|
|||
splitMaintainersToGitHubHandles handlesMap (Maintainers maint) =
|
||||
nonEmpty . mapMaybe (`Map.lookup` handlesMap) . Text.splitOn ", " $ fromMaybe "" maint
|
||||
|
||||
-- | Get a map of all dependencies of every package by calling the nix
|
||||
-- script ./dependencies.nix.
|
||||
getDependencyMap :: IO DependencyMap
|
||||
getDependencyMap =
|
||||
readJSONProcess nixExprCommand ("maintainers/scripts/haskell/dependencies.nix":nixExprParams) "Failed to decode nix output for lookup of dependencies: "
|
||||
|
||||
-- | Run a process that produces JSON on stdout and decode the JSON to a
|
||||
-- data type.
|
||||
--
|
||||
|
@ -219,11 +266,10 @@ readJSONProcess
|
|||
:: FromJSON a
|
||||
=> FilePath -- ^ Filename of executable.
|
||||
-> [String] -- ^ Arguments
|
||||
-> String -- ^ stdin to pass to the process
|
||||
-> String -- ^ String to prefix to JSON-decode error.
|
||||
-> IO a
|
||||
readJSONProcess exe args input err = do
|
||||
output <- readProcess exe args input
|
||||
readJSONProcess exe args err = do
|
||||
output <- readProcess exe args ""
|
||||
let eitherDecodedOutput = eitherDecodeStrict' . encodeUtf8 . Text.pack $ output
|
||||
case eitherDecodedOutput of
|
||||
Left decodeErr -> error $ err <> decodeErr <> "\nRaw: '" <> take 1000 output <> "'"
|
||||
|
@ -264,7 +310,13 @@ platformIcon (Platform x) = case x of
|
|||
data BuildResult = BuildResult {state :: BuildState, id :: Int} deriving (Show, Eq, Ord)
|
||||
newtype Platform = Platform {platform :: Text} deriving (Show, Eq, Ord)
|
||||
newtype Table row col a = Table (Map (row, col) a)
|
||||
type StatusSummary = Map Text (Table Text Platform BuildResult, Set Text)
|
||||
data SummaryEntry = SummaryEntry {
|
||||
summaryBuilds :: Table Text Platform BuildResult,
|
||||
summaryMaintainers :: Set Text,
|
||||
summaryReverseDeps :: Int,
|
||||
summaryUnbrokenReverseDeps :: Int
|
||||
}
|
||||
type StatusSummary = Map Text SummaryEntry
|
||||
|
||||
instance (Ord row, Ord col, Semigroup a) => Semigroup (Table row col a) where
|
||||
Table l <> Table r = Table (Map.unionWith (<>) l r)
|
||||
|
@ -275,11 +327,11 @@ instance Functor (Table row col) where
|
|||
instance Foldable (Table row col) where
|
||||
foldMap f (Table a) = foldMap f a
|
||||
|
||||
buildSummary :: MaintainerMap -> Seq Build -> StatusSummary
|
||||
buildSummary maintainerMap = foldl (Map.unionWith unionSummary) Map.empty . fmap toSummary
|
||||
buildSummary :: MaintainerMap -> ReverseDependencyMap -> Seq Build -> StatusSummary
|
||||
buildSummary maintainerMap reverseDependencyMap = foldl (Map.unionWith unionSummary) Map.empty . fmap toSummary
|
||||
where
|
||||
unionSummary (Table l, l') (Table r, r') = (Table $ Map.union l r, l' <> r')
|
||||
toSummary Build{finished, buildstatus, job, id, system} = Map.singleton name (Table (Map.singleton (set, Platform system) (BuildResult state id)), maintainers)
|
||||
unionSummary (SummaryEntry (Table lb) lm lr lu) (SummaryEntry (Table rb) rm rr ru) = SummaryEntry (Table $ Map.union lb rb) (lm <> rm) (max lr rr) (max lu ru)
|
||||
toSummary Build{finished, buildstatus, job, id, system} = Map.singleton name (SummaryEntry (Table (Map.singleton (set, Platform system) (BuildResult state id))) maintainers reverseDeps unbrokenReverseDeps)
|
||||
where
|
||||
state :: BuildState
|
||||
state = case (finished, buildstatus) of
|
||||
|
@ -297,6 +349,7 @@ buildSummary maintainerMap = foldl (Map.unionWith unionSummary) Map.empty . fmap
|
|||
name = maybe packageName NonEmpty.last splitted
|
||||
set = maybe "" (Text.intercalate "." . NonEmpty.init) splitted
|
||||
maintainers = maybe mempty (Set.fromList . toList) (Map.lookup job maintainerMap)
|
||||
(reverseDeps, unbrokenReverseDeps) = Map.findWithDefault (0,0) name reverseDependencyMap
|
||||
|
||||
readBuildReports :: IO (Eval, UTCTime, Seq Build)
|
||||
readBuildReports = do
|
||||
|
@ -339,25 +392,29 @@ makeSearchLink evalId linkLabel query = "[" <> linkLabel <> "](" <> "https://hyd
|
|||
statusToNumSummary :: StatusSummary -> NumSummary
|
||||
statusToNumSummary = fmap getSum . foldMap (fmap Sum . jobTotals)
|
||||
|
||||
jobTotals :: (Table Text Platform BuildResult, a) -> Table Platform BuildState Int
|
||||
jobTotals (Table mapping, _) = getSum <$> Table (Map.foldMapWithKey (\(_, platform) (BuildResult buildstate _) -> Map.singleton (platform, buildstate) (Sum 1)) mapping)
|
||||
jobTotals :: SummaryEntry -> Table Platform BuildState Int
|
||||
jobTotals (summaryBuilds -> Table mapping) = getSum <$> Table (Map.foldMapWithKey (\(_, platform) (BuildResult buildstate _) -> Map.singleton (platform, buildstate) (Sum 1)) mapping)
|
||||
|
||||
details :: Text -> [Text] -> [Text]
|
||||
details summary content = ["<details><summary>" <> summary <> " </summary>", ""] <> content <> ["</details>", ""]
|
||||
|
||||
printBuildSummary :: Eval -> UTCTime -> StatusSummary -> Text
|
||||
printBuildSummary :: Eval -> UTCTime -> StatusSummary -> [(Text, Int)] -> Text
|
||||
printBuildSummary
|
||||
Eval{id, jobsetevalinputs = JobsetEvalInputs{nixpkgs = Nixpkgs{revision}}}
|
||||
fetchTime
|
||||
summary =
|
||||
summary
|
||||
topBrokenRdeps =
|
||||
Text.unlines $
|
||||
headline <> totals
|
||||
headline <> [""] <> tldr <> ((" * "<>) <$> (errors <> warnings)) <> [""]
|
||||
<> totals
|
||||
<> optionalList "#### Maintained packages with build failure" (maintainedList fails)
|
||||
<> optionalList "#### Maintained packages with failed dependency" (maintainedList failedDeps)
|
||||
<> optionalList "#### Maintained packages with unknown error" (maintainedList unknownErr)
|
||||
<> optionalHideableList "#### Unmaintained packages with build failure" (unmaintainedList fails)
|
||||
<> optionalHideableList "#### Unmaintained packages with failed dependency" (unmaintainedList failedDeps)
|
||||
<> optionalHideableList "#### Unmaintained packages with unknown error" (unmaintainedList unknownErr)
|
||||
<> optionalHideableList "#### Top 50 broken packages, sorted by number of reverse dependencies" (brokenLine <$> topBrokenRdeps)
|
||||
<> ["","*:arrow_heading_up:: The number of packages that depend (directly or indirectly) on this package (if any). If two numbers are shown the first (lower) number considers only packages which currently have enabled hydra jobs, i.e. are not marked broken. The second (higher) number considers all packages.*",""]
|
||||
<> footer
|
||||
where
|
||||
footer = ["*Report generated with [maintainers/scripts/haskell/hydra-report.hs](https://github.com/NixOS/nixpkgs/blob/haskell-updates/maintainers/scripts/haskell/hydra-report.hs)*"]
|
||||
|
@ -365,7 +422,7 @@ printBuildSummary
|
|||
[ "#### Build summary"
|
||||
, ""
|
||||
]
|
||||
<> printTable "Platform" (\x -> makeSearchLink id (platform x <> " " <> platformIcon x) ("." <> platform x)) (\x -> showT x <> " " <> icon x) showT (statusToNumSummary summary)
|
||||
<> printTable "Platform" (\x -> makeSearchLink id (platform x <> " " <> platformIcon x) ("." <> platform x)) (\x -> showT x <> " " <> icon x) showT numSummary
|
||||
headline =
|
||||
[ "### [haskell-updates build report from hydra](https://hydra.nixos.org/jobset/nixpkgs/haskell-updates)"
|
||||
, "*evaluation ["
|
||||
|
@ -380,24 +437,49 @@ printBuildSummary
|
|||
<> Text.pack (formatTime defaultTimeLocale "%Y-%m-%d %H:%M UTC" fetchTime)
|
||||
<> "*"
|
||||
]
|
||||
jobsByState predicate = Map.filter (predicate . foldl' min Success . fmap state . fst) summary
|
||||
brokenLine (name, rdeps) = "[" <> name <> "](https://search.nixos.org/packages?channel=unstable&show=haskellPackages." <> name <> "&query=haskellPackages." <> name <> ") :arrow_heading_up: " <> Text.pack (show rdeps)
|
||||
numSummary = statusToNumSummary summary
|
||||
jobsByState predicate = Map.filter (predicate . worstState) summary
|
||||
worstState = foldl' min Success . fmap state . summaryBuilds
|
||||
fails = jobsByState (== Failed)
|
||||
failedDeps = jobsByState (== DependencyFailed)
|
||||
unknownErr = jobsByState (\x -> x > DependencyFailed && x < TimedOut)
|
||||
withMaintainer = Map.mapMaybe (\(x, m) -> (x,) <$> nonEmpty (Set.toList m))
|
||||
withoutMaintainer = Map.mapMaybe (\(x, m) -> if Set.null m then Just x else Nothing)
|
||||
withMaintainer = Map.mapMaybe (\e -> (summaryBuilds e,) <$> nonEmpty (Set.toList (summaryMaintainers e)))
|
||||
withoutMaintainer = Map.mapMaybe (\e -> if Set.null (summaryMaintainers e) then Just e else Nothing)
|
||||
optionalList heading list = if null list then mempty else [heading] <> list
|
||||
optionalHideableList heading list = if null list then mempty else [heading] <> details (showT (length list) <> " job(s)") list
|
||||
maintainedList = showMaintainedBuild <=< Map.toList . withMaintainer
|
||||
unmaintainedList = showBuild <=< Map.toList . withoutMaintainer
|
||||
showBuild (name, table) = printJob id name (table, "")
|
||||
unmaintainedList = showBuild <=< sortOn (\(snd -> x) -> (negate (summaryUnbrokenReverseDeps x), negate (summaryReverseDeps x))) . Map.toList . withoutMaintainer
|
||||
showBuild (name, entry) = printJob id name (summaryBuilds entry, Text.pack (if summaryReverseDeps entry > 0 then " :arrow_heading_up: " <> show (summaryUnbrokenReverseDeps entry) <>" | "<> show (summaryReverseDeps entry) else ""))
|
||||
showMaintainedBuild (name, (table, maintainers)) = printJob id name (table, Text.intercalate " " (fmap ("@" <>) (toList maintainers)))
|
||||
tldr = case (errors, warnings) of
|
||||
([],[]) -> [":green_circle: **Ready to merge**"]
|
||||
([],_) -> [":yellow_circle: **Potential issues**"]
|
||||
_ -> [":red_circle: **Branch not mergeable**"]
|
||||
warnings =
|
||||
if' (Unfinished > maybe Success worstState maintainedJob) "`maintained` jobset failed." <>
|
||||
if' (Unfinished == maybe Success worstState mergeableJob) "`mergeable` jobset is not finished." <>
|
||||
if' (Unfinished == maybe Success worstState maintainedJob) "`maintained` jobset is not finished."
|
||||
errors =
|
||||
if' (isNothing mergeableJob) "No `mergeable` job found." <>
|
||||
if' (isNothing maintainedJob) "No `maintained` job found." <>
|
||||
if' (Unfinished > maybe Success worstState mergeableJob) "`mergeable` jobset failed." <>
|
||||
if' (outstandingJobs (Platform "x86_64-linux") > 100) "Too many outstanding jobs on x86_64-linux." <>
|
||||
if' (outstandingJobs (Platform "aarch64-linux") > 100) "Too many outstanding jobs on aarch64-linux."
|
||||
if' p e = if p then [e] else mempty
|
||||
outstandingJobs platform | Table m <- numSummary = Map.findWithDefault 0 (platform, Unfinished) m
|
||||
maintainedJob = Map.lookup "maintained" summary
|
||||
mergeableJob = Map.lookup "mergeable" summary
|
||||
|
||||
printMaintainerPing :: IO ()
|
||||
printMaintainerPing = do
|
||||
maintainerMap <- getMaintainerMap
|
||||
(maintainerMap, (reverseDependencyMap, topBrokenRdeps)) <- concurrently getMaintainerMap do
|
||||
depMap <- getDependencyMap
|
||||
rdepMap <- evaluate . calculateReverseDependencies $ depMap
|
||||
let tops = take 50 . sortOn (negate . snd) . fmap (second fst) . filter (\x -> maybe False broken $ Map.lookup (fst x) depMap) . Map.toList $ rdepMap
|
||||
pure (rdepMap, tops)
|
||||
(eval, fetchTime, buildReport) <- readBuildReports
|
||||
putStrLn (Text.unpack (printBuildSummary eval fetchTime (buildSummary maintainerMap buildReport)))
|
||||
putStrLn (Text.unpack (printBuildSummary eval fetchTime (buildSummary maintainerMap reverseDependencyMap buildReport) topBrokenRdeps))
|
||||
|
||||
printMarkBrokenList :: IO ()
|
||||
printMarkBrokenList = do
|
||||
|
|
7 third_party/nixpkgs/maintainers/scripts/haskell/maintainer-handles.nix vendored Normal file
|
@ -0,0 +1,7 @@
|
|||
# Nix script to lookup maintainer github handles from their email address. Used by ./hydra-report.hs.
|
||||
let
|
||||
pkgs = import ../../.. {};
|
||||
maintainers = import ../../maintainer-list.nix;
|
||||
inherit (pkgs) lib;
|
||||
mkMailGithubPair = _: maintainer: if maintainer ? github then { "${maintainer.email}" = maintainer.github; } else {};
|
||||
in lib.zipAttrsWith (_: builtins.head) (lib.mapAttrsToList mkMailGithubPair maintainers)
|
118 third_party/nixpkgs/maintainers/scripts/haskell/merge-and-open-pr.sh vendored Executable file
|
@ -0,0 +1,118 @@
|
|||
#! /usr/bin/env nix-shell
|
||||
#! nix-shell -i bash -p git gh -I nixpkgs=.
|
||||
#
|
||||
# Script to merge the currently open haskell-updates PR into master, bump the
|
||||
# Stackage version and Hackage versions, and open the next haskell-updates PR.
|
||||
|
||||
set -eu -o pipefail
|
||||
|
||||
# exit after printing first argument to this function
|
||||
function die {
|
||||
# echo the first argument
|
||||
echo "ERROR: $1"
|
||||
echo "Aborting!"
|
||||
|
||||
exit 1
|
||||
}
|
||||
|
||||
function help {
|
||||
echo "Usage: $0 HASKELL_UPDATES_PR_NUM"
|
||||
echo "Merge the currently open haskell-updates PR into master, and open the next one."
|
||||
echo
|
||||
echo " -h, --help print this help"
|
||||
echo " HASKELL_UPDATES_PR_NUM number of the currently open PR on NixOS/nixpkgs"
|
||||
echo " for the haskell-updates branch"
|
||||
echo
|
||||
echo "Example:"
|
||||
echo " \$ $0 137340"
|
||||
|
||||
exit 1
|
||||
}
|
||||
|
||||
# Read in the current haskell-updates PR number from the command line.
|
||||
while [[ $# -gt 0 ]]; do
|
||||
key="$1"
|
||||
|
||||
case $key in
|
||||
-h|--help)
|
||||
help
|
||||
;;
|
||||
*)
|
||||
curr_haskell_updates_pr_num="$1"
|
||||
shift
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
if [[ -z "${curr_haskell_updates_pr_num-}" ]] ; then
|
||||
die "You must pass the current haskell-updates PR number as the first argument to this script."
|
||||
fi
|
||||
|
||||
# Make sure you have gh authentication setup.
|
||||
if ! gh auth status 2>/dev/null ; then
|
||||
die "You must setup the \`gh\` command. Run \`gh auth login\`."
|
||||
fi
|
||||
|
||||
# Fetch nixpkgs to get an up-to-date origin/haskell-updates branch.
|
||||
echo "Fetching origin..."
|
||||
git fetch origin >/dev/null
|
||||
|
||||
# Make sure we are currently on a local haskell-updates branch.
|
||||
curr_branch="$(git rev-parse --abbrev-ref HEAD)"
|
||||
if [[ "$curr_branch" != "haskell-updates" ]]; then
|
||||
die "Current branch is not called \"haskell-updates\"."
|
||||
fi
|
||||
|
||||
# Make sure our local haskell-updates branch is on the same commit as
|
||||
# origin/haskell-updates.
|
||||
curr_branch_commit="$(git rev-parse haskell-updates)"
|
||||
origin_haskell_updates_commit="$(git rev-parse origin/haskell-updates)"
|
||||
if [[ "$curr_branch_commit" != "$origin_haskell_updates_commit" ]]; then
|
||||
die "Current branch is not at the same commit as origin/haskell-updates"
|
||||
fi
|
||||
|
||||
# Merge the current open haskell-updates PR.
|
||||
echo "Merging https://github.com/NixOS/nixpkgs/pull/${curr_haskell_updates_pr_num}..."
|
||||
gh pr merge --repo NixOS/nixpkgs --merge "$curr_haskell_updates_pr_num"
|
||||
|
||||
# Update stackage, Hackage hashes, and regenerate Haskell package set
|
||||
echo "Updating Stackage..."
|
||||
./maintainers/scripts/haskell/update-stackage.sh --do-commit
|
||||
echo "Updating Hackage hashes..."
|
||||
./maintainers/scripts/haskell/update-hackage.sh --do-commit
|
||||
echo "Regenerating Hackage packages..."
|
||||
./maintainers/scripts/haskell/regenerate-hackage-packages.sh --do-commit
|
||||
|
||||
# Push these new commits to the haskell-updates branch
|
||||
echo "Pushing commits just created to the haskell-updates branch"
|
||||
git push
|
||||
|
||||
# Open new PR
|
||||
new_pr_body=$(cat <<EOF
|
||||
### This Merge
|
||||
|
||||
This PR is the regular merge of the \`haskell-updates\` branch into \`master\`.
|
||||
|
||||
This branch is being continually built and tested by hydra at https://hydra.nixos.org/jobset/nixpkgs/haskell-updates.
|
||||
|
||||
We roughly aim to merge these \`haskell-updates\` PRs at least once every two weeks. See the @NixOS/haskell [team calendar](https://cloud.maralorn.de/apps/calendar/p/Mw5WLnzsP7fC4Zky) for who is currently in charge of this branch.
|
||||
|
||||
### haskellPackages Workflow Summary
|
||||
|
||||
Our workflow is currently described in [\`pkgs/development/haskell-modules/HACKING.md\`](https://github.com/NixOS/nixpkgs/blob/haskell-updates/pkgs/development/haskell-modules/HACKING.md).
|
||||
|
||||
The short version is this:
|
||||
* We regularly update the Stackage and Hackage pins on \`haskell-updates\` (normally at the beginning of a merge window).
|
||||
* The community fixes builds of Haskell packages on that branch.
|
||||
* We aim for at least one merge of \`haskell-updates\` into \`master\` every two weeks.
|
||||
* We only do the merge if the [\`mergeable\`](https://hydra.nixos.org/job/nixpkgs/haskell-updates/mergeable) job is succeeding on hydra.
|
||||
* If a [\`maintained\`](https://hydra.nixos.org/job/nixpkgs/haskell-updates/maintained) package is still broken at the time of merge, we will only merge if the maintainer has been pinged 7 days in advance. (If you care about a Haskell package, become a maintainer!)
|
||||
|
||||
---
|
||||
|
||||
This is the follow-up to #${curr_haskell_updates_pr_num}. Come to [#haskell:nixos.org](https://matrix.to/#/#haskell:nixos.org) if you have any questions.
|
||||
EOF
|
||||
)
|
||||
|
||||
echo "Opening a PR for the next haskell-updates merge cycle"
|
||||
gh pr create --repo NixOS/nixpkgs --base master --head haskell-updates --title "haskellPackages: update stackage and hackage" --body "$new_pr_body"
|
|
@ -37,6 +37,13 @@
|
|||
PostgreSQL now defaults to major version 13.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
spark now defaults to spark 3, updated from 2. A
|
||||
<link xlink:href="https://spark.apache.org/docs/latest/core-migration-guide.html#upgrading-from-core-24-to-30">migration
|
||||
guide</link> is available.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Activation scripts can now opt in to be run when running
|
||||
|
@ -48,6 +55,13 @@
|
|||
actions.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Pantheon desktop has been updated to version 6. Due to changes
|
||||
of screen locker, if locking doesn’t work for you, please try
|
||||
<literal>gsettings set org.gnome.desktop.lockdown disable-lock-screen false</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-new-services">
|
||||
|
@ -114,6 +128,13 @@
|
|||
<link linkend="opt-services.vikunja.enable">services.vikunja</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/evilsocket/opensnitch">opensnitch</link>,
|
||||
an application firewall. Available as
|
||||
<link linkend="opt-services.opensnitch.enable">services.opensnitch</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://www.snapraid.it/">snapraid</link>, a
|
||||
|
@ -182,8 +203,6 @@
|
|||
<link linkend="opt-services.isso.enable">isso</link>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://www.navidrome.org/">navidrome</link>,
|
||||
|
@ -192,8 +211,6 @@
|
|||
<link linkend="opt-services.navidrome.enable">navidrome</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://docs.fluidd.xyz/">fluidd</link>, a
|
||||
|
@ -250,11 +267,41 @@
|
|||
entry</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://spark.apache.org/">spark</link>, a
|
||||
unified analytics engine for large-scale data processing.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/JoseExposito/touchegg">touchegg</link>,
|
||||
a multi-touch gesture recognizer. Available as
|
||||
<link linkend="opt-services.touchegg.enable">services.touchegg</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/pantheon-tweaks/pantheon-tweaks">pantheon-tweaks</link>,
|
||||
an unofficial system settings panel for Pantheon. Available as
|
||||
<link linkend="opt-programs.pantheon-tweaks.enable">programs.pantheon-tweaks</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-incompatibilities">
|
||||
<title>Backward Incompatibilities</title>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>security.wrappers</literal> option now requires
|
||||
you to always specify an owner, group and whether the
|
||||
setuid/setgid bit should be set. This is motivated by the fact
|
||||
that before NixOS 21.11, specifying either setuid or setgid
|
||||
but not owner/group resulted in wrappers owned by
|
||||
nobody/nogroup, which is unsafe.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>paperless</literal> module and package have been
|
||||
|
@ -1016,6 +1063,14 @@ Superuser created successfully.
|
|||
attempts from the SSH logs.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The
|
||||
<link xlink:href="options.html#opt-services.xserver.extraLayouts"><literal>services.xserver.extraLayouts</literal></link>
|
||||
option no longer causes additional rebuilds when a layout is added or
|
||||
modified.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Sway: The terminal emulator <literal>rxvt-unicode</literal> is
|
||||
|
@ -1067,6 +1122,22 @@ Superuser created successfully.
|
|||
be removed in 22.05.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The dokuwiki module provides a new interface which allows you to
|
||||
use different webservers with the new option
|
||||
<link xlink:href="options.html#opt-services.dokuwiki.webserver"><literal>services.dokuwiki.webserver</literal></link>.
|
||||
Currently <literal>caddy</literal> and
|
||||
<literal>nginx</literal> are supported. The definitions of
|
||||
dokuwiki sites should now be set in
|
||||
<link xlink:href="options.html#opt-services.dokuwiki.sites"><literal>services.dokuwiki.sites</literal></link>.
|
||||
</para>
|
||||
<para>
|
||||
Sites definitions that use the old interface are automatically
|
||||
migrated in the new option. This backward compatibility will
|
||||
be removed in 22.05.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The order of NSS (host) modules has been brought in line with
|
||||
|
|
|
@ -14,10 +14,14 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- PostgreSQL now defaults to major version 13.
|
||||
|
||||
- spark now defaults to spark 3, updated from 2. A [migration guide](https://spark.apache.org/docs/latest/core-migration-guide.html#upgrading-from-core-24-to-30) is available.
|
||||
|
||||
- Activation scripts can now opt in to be run when running `nixos-rebuild dry-activate` and detect the dry activation by reading `$NIXOS_ACTION`.
|
||||
This allows activation scripts to output what they would change if the activation was really run.
|
||||
The users/modules activation script supports this and outputs some of its actions.
|
||||
|
||||
- Pantheon desktop has been updated to version 6. Due to changes of screen locker, if locking doesn't work for you, please try `gsettings set org.gnome.desktop.lockdown disable-lock-screen false`.
|
||||
|
||||
## New Services {#sec-release-21.11-new-services}
|
||||
|
||||
- [btrbk](https://digint.ch/btrbk/index.html), a backup tool for btrfs subvolumes, taking advantage of btrfs specific capabilities to create atomic snapshots and transfer them incrementally to your backup locations. Available as [services.btrbk](options.html#opt-services.brtbk.instances).
|
||||
|
@ -37,6 +41,8 @@ pt-services.clipcat.enable).
|
|||
|
||||
- [vikunja](https://vikunja.io), a to-do list app. Available as [services.vikunja](#opt-services.vikunja.enable).
|
||||
|
||||
- [opensnitch](https://github.com/evilsocket/opensnitch), an application firewall. Available as [services.opensnitch](#opt-services.opensnitch.enable).
|
||||
|
||||
- [snapraid](https://www.snapraid.it/), a backup program for disk arrays.
|
||||
Available as [snapraid](#opt-snapraid.enable).
|
||||
|
||||
|
@ -58,7 +64,7 @@ pt-services.clipcat.enable).
|
|||
- [isso](https://posativ.org/isso/), a commenting server similar to Disqus.
|
||||
Available as [isso](#opt-services.isso.enable)
|
||||
|
||||
* [navidrome](https://www.navidrome.org/), a personal music streaming server with
|
||||
- [navidrome](https://www.navidrome.org/), a personal music streaming server with
|
||||
subsonic-compatible api. Available as [navidrome](#opt-services.navidrome.enable).
|
||||
|
||||
- [fluidd](https://docs.fluidd.xyz/), a Klipper web interface for managing 3d printers using moonraker. Available as [fluidd](#opt-services.fluidd.enable).
|
||||
|
@ -78,8 +84,16 @@ subsonic-compatible api. Available as [navidrome](#opt-services.navidrome.enable
|
|||
or sends them to a downstream service for further analysis.
|
||||
Documented in [its manual entry](#module-services-parsedmarc).
|
||||
|
||||
- [spark](https://spark.apache.org/), a unified analytics engine for large-scale data processing.
|
||||
|
||||
- [touchegg](https://github.com/JoseExposito/touchegg), a multi-touch gesture recognizer. Available as [services.touchegg](#opt-services.touchegg.enable).
|
||||
|
||||
- [pantheon-tweaks](https://github.com/pantheon-tweaks/pantheon-tweaks), an unofficial system settings panel for Pantheon. Available as [programs.pantheon-tweaks](#opt-programs.pantheon-tweaks.enable).
|
||||
|
||||
## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
|
||||
|
||||
- The `security.wrappers` option now requires you to always specify an owner, group and whether the setuid/setgid bit should be set.
|
||||
This is motivated by the fact that before NixOS 21.11, specifying either setuid or setgid but not owner/group resulted in wrappers owned by nobody/nogroup, which is unsafe.
|
||||
|
||||
- The `paperless` module and package have been removed. All users should migrate to the
|
||||
successor `paperless-ng` instead. The Paperless project [has been
|
||||
|
@ -309,6 +323,8 @@ To be able to access the web UI this port needs to be opened in the firewall.
|
|||
|
||||
However, if [`services.fail2ban.enable`](options.html#opt-services.fail2ban.enable) is `true`, the `fail2ban` will override the verbosity to `"VERBOSE"`, so that `fail2ban` can observe the failed login attempts from the SSH logs.
|
||||
|
||||
- The [`services.xserver.extraLayouts`](options.html#opt-services.xserver.extraLayouts) option no longer causes additional rebuilds when a layout is added or modified.
|
||||
|
||||
- Sway: The terminal emulator `rxvt-unicode` is no longer installed by default via `programs.sway.extraPackages`. The current default configuration uses `alacritty` (and soon `foot`) so this is only an issue when using a customized configuration and not installing `rxvt-unicode` explicitly.
|
||||
|
||||
- `python3` now defaults to Python 3.9. Python 3.9 introduces many deprecation warnings, please look at the [What's New In Python 3.9 post](https://docs.python.org/3/whatsnew/3.9.html) for more information.
|
||||
|
@ -321,6 +337,10 @@ To be able to access the web UI this port needs to be opened in the firewall.
|
|||
|
||||
Sites definitions that use the old interface are automatically migrated in the new option. This backward compatibility will be removed in 22.05.
|
||||
|
||||
- The dokuwiki module provides a new interface which allows you to use different webservers with the new option [`services.dokuwiki.webserver`](options.html#opt-services.dokuwiki.webserver). Currently `caddy` and `nginx` are supported. The definitions of dokuwiki sites should now be set in [`services.dokuwiki.sites`](options.html#opt-services.dokuwiki.sites).
|
||||
|
||||
Sites definitions that use the old interface are automatically migrated in the new option. This backward compatibility will be removed in 22.05.
|
||||
|
||||
- The order of NSS (host) modules has been brought in line with upstream
|
||||
recommendations:
|
||||
|
||||
|
|
|
@ -116,7 +116,11 @@ in
|
|||
{ console.keyMap = with config.services.xserver;
|
||||
mkIf cfg.useXkbConfig
|
||||
(pkgs.runCommand "xkb-console-keymap" { preferLocalBuild = true; } ''
|
||||
'${pkgs.ckbcomp}/bin/ckbcomp' -model '${xkbModel}' -layout '${layout}' \
|
||||
'${pkgs.ckbcomp}/bin/ckbcomp' \
|
||||
${optionalString (config.environment.sessionVariables ? XKB_CONFIG_ROOT)
|
||||
"-I${config.environment.sessionVariables.XKB_CONFIG_ROOT}"
|
||||
} \
|
||||
-model '${xkbModel}' -layout '${layout}' \
|
||||
-option '${xkbOptions}' -variant '${xkbVariant}' > "$out"
|
||||
'');
|
||||
}
|
||||
|
|
|
@ -84,7 +84,7 @@ in {
|
|||
type = types.package;
|
||||
default = pkgs.krb5Full;
|
||||
defaultText = "pkgs.krb5Full";
|
||||
example = literalExample "pkgs.heimdalFull";
|
||||
example = literalExample "pkgs.heimdal";
|
||||
description = ''
|
||||
The Kerberos implementation that will be present in
|
||||
<literal>environment.systemPackages</literal> after enabling this
|
||||
|
|
|
@ -30,6 +30,15 @@ let
|
|||
vulnerabilities, while maintaining good performance.
|
||||
'';
|
||||
};
|
||||
|
||||
mimalloc = {
|
||||
libPath = "${pkgs.mimalloc}/lib/libmimalloc.so";
|
||||
description = ''
|
||||
A compact and fast general purpose allocator, which may
|
||||
optionally be built with mitigations against various heap
|
||||
vulnerabilities.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
providerConf = providers.${cfg.provider};
|
||||
|
@ -91,7 +100,10 @@ in
|
|||
"abstractions/base" = ''
|
||||
r /etc/ld-nix.so.preload,
|
||||
r ${config.environment.etc."ld-nix.so.preload".source},
|
||||
mr ${providerLibPath},
|
||||
include "${pkgs.apparmorRulesFromClosure {
|
||||
name = "mallocLib";
|
||||
baseRules = ["mr $path/lib/**.so*"];
|
||||
} [ mallocLib ] }"
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
|
32
third_party/nixpkgs/nixos/modules/misc/ids.nix
vendored
|
@ -137,9 +137,9 @@ in
|
|||
#mongodb = 98; #dynamically allocated as of 2021-09-03
|
||||
#openldap = 99; # dynamically allocated as of PR#94610
|
||||
#users = 100; # unused
|
||||
cgminer = 101;
|
||||
# cgminer = 101; #dynamically allocated as of 2021-09-17
|
||||
munin = 102;
|
||||
logcheck = 103;
|
||||
#logcheck = 103; #dynamically allocated as of 2021-09-17
|
||||
#nix-ssh = 104; #dynamically allocated as of 2021-09-03
|
||||
dictd = 105;
|
||||
couchdb = 106;
|
||||
|
@ -153,7 +153,7 @@ in
|
|||
#btsync = 113; # unused
|
||||
#minecraft = 114; #dynamically allocated as of 2021-09-03
|
||||
vault = 115;
|
||||
rippled = 116;
|
||||
# rippled = 116; #dynamically allocated as of 2021-09-18
|
||||
murmur = 117;
|
||||
foundationdb = 118;
|
||||
newrelic = 119;
|
||||
|
@ -210,17 +210,17 @@ in
|
|||
#fleet = 173; # unused
|
||||
#input = 174; # unused
|
||||
sddm = 175;
|
||||
tss = 176;
|
||||
#tss = 176; # dynamically allocated as of 2021-09-17
|
||||
#memcached = 177; removed 2018-01-03
|
||||
ntp = 179;
|
||||
#ntp = 179; # dynamically allocated as of 2021-09-17
|
||||
zabbix = 180;
|
||||
#redis = 181; removed 2018-01-03
|
||||
unifi = 183;
|
||||
#unifi = 183; dynamically allocated as of 2021-09-17
|
||||
uptimed = 184;
|
||||
zope2 = 185;
|
||||
ripple-data-api = 186;
|
||||
#zope2 = 185; # dynamically allocated as of 2021-09-18
|
||||
#ripple-data-api = 186; dynamically allocated as of 2021-09-17
|
||||
mediatomb = 187;
|
||||
rdnssd = 188;
|
||||
#rdnssd = 188; #dynamically allocated as of 2021-09-18
|
||||
ihaskell = 189;
|
||||
i2p = 190;
|
||||
lambdabot = 191;
|
||||
|
@ -231,20 +231,20 @@ in
|
|||
skydns = 197;
|
||||
# ripple-rest = 198; # unused, removed 2017-08-12
|
||||
# nix-serve = 199; # unused, removed 2020-12-12
|
||||
tvheadend = 200;
|
||||
#tvheadend = 200; # dynamically allocated as of 2021-09-18
|
||||
uwsgi = 201;
|
||||
gitit = 202;
|
||||
riemanntools = 203;
|
||||
subsonic = 204;
|
||||
riak = 205;
|
||||
shout = 206;
|
||||
#shout = 206; # dynamically allocated as of 2021-09-18
|
||||
gateone = 207;
|
||||
namecoin = 208;
|
||||
#lxd = 210; # unused
|
||||
#kibana = 211;# dynamically allocated as of 2021-09-03
|
||||
xtreemfs = 212;
|
||||
calibre-server = 213;
|
||||
heapster = 214;
|
||||
#heapster = 214; #dynamically allocated as of 2021-09-17
|
||||
bepasty = 215;
|
||||
# pumpio = 216; # unused, removed 2018-02-24
|
||||
nm-openvpn = 217;
|
||||
|
@ -258,11 +258,11 @@ in
|
|||
rspamd = 225;
|
||||
# rmilter = 226; # unused, removed 2019-08-22
|
||||
cfdyndns = 227;
|
||||
gammu-smsd = 228;
|
||||
# gammu-smsd = 228; #dynamically allocated as of 2021-09-17
|
||||
pdnsd = 229;
|
||||
octoprint = 230;
|
||||
avahi-autoipd = 231;
|
||||
nntp-proxy = 232;
|
||||
# nntp-proxy = 232; #dynamically allocated as of 2021-09-17
|
||||
mjpg-streamer = 233;
|
||||
#radicale = 234;# dynamically allocated as of 2021-09-03
|
||||
hydra-queue-runner = 235;
|
||||
|
@ -276,7 +276,7 @@ in
|
|||
sniproxy = 244;
|
||||
nzbget = 245;
|
||||
mosquitto = 246;
|
||||
toxvpn = 247;
|
||||
#toxvpn = 247; # dynamically allocated as of 2021-09-18
|
||||
# squeezelite = 248; # DynamicUser = true
|
||||
turnserver = 249;
|
||||
#smokeping = 250;# dynamically allocated as of 2021-09-03
|
||||
|
@ -524,7 +524,7 @@ in
|
|||
#fleet = 173; # unused
|
||||
input = 174;
|
||||
sddm = 175;
|
||||
tss = 176;
|
||||
#tss = 176; #dynamically allocated as of 2021-09-20
|
||||
#memcached = 177; # unused, removed 2018-01-03
|
||||
#ntp = 179; # unused
|
||||
zabbix = 180;
|
||||
|
|
|
@ -171,6 +171,7 @@
|
|||
./programs/npm.nix
|
||||
./programs/noisetorch.nix
|
||||
./programs/oblogout.nix
|
||||
./programs/pantheon-tweaks.nix
|
||||
./programs/partition-manager.nix
|
||||
./programs/plotinus.nix
|
||||
./programs/proxychains.nix
|
||||
|
@ -201,6 +202,7 @@
|
|||
./programs/vim.nix
|
||||
./programs/wavemon.nix
|
||||
./programs/waybar.nix
|
||||
./programs/weylus.nix
|
||||
./programs/wireshark.nix
|
||||
./programs/wshowkeys.nix
|
||||
./programs/xfs_quota.nix
|
||||
|
@ -297,6 +299,7 @@
|
|||
./services/cluster/kubernetes/pki.nix
|
||||
./services/cluster/kubernetes/proxy.nix
|
||||
./services/cluster/kubernetes/scheduler.nix
|
||||
./services/cluster/spark/default.nix
|
||||
./services/computing/boinc/client.nix
|
||||
./services/computing/foldingathome/client.nix
|
||||
./services/computing/slurm/slurm.nix
|
||||
|
@ -341,6 +344,7 @@
|
|||
./services/desktops/accountsservice.nix
|
||||
./services/desktops/bamf.nix
|
||||
./services/desktops/blueman.nix
|
||||
./services/desktops/cpupower-gui.nix
|
||||
./services/desktops/dleyna-renderer.nix
|
||||
./services/desktops/dleyna-server.nix
|
||||
./services/desktops/pantheon/files.nix
|
||||
|
@ -897,6 +901,7 @@
|
|||
./services/search/elasticsearch-curator.nix
|
||||
./services/search/hound.nix
|
||||
./services/search/kibana.nix
|
||||
./services/search/meilisearch.nix
|
||||
./services/search/solr.nix
|
||||
./services/security/certmgr.nix
|
||||
./services/security/cfssl.nix
|
||||
|
@ -913,6 +918,7 @@
|
|||
./services/security/nginx-sso.nix
|
||||
./services/security/oauth2_proxy.nix
|
||||
./services/security/oauth2_proxy_nginx.nix
|
||||
./services/security/opensnitch.nix
|
||||
./services/security/privacyidea.nix
|
||||
./services/security/physlock.nix
|
||||
./services/security/shibboleth-sp.nix
|
||||
|
@ -1054,6 +1060,7 @@
|
|||
./services/x11/gdk-pixbuf.nix
|
||||
./services/x11/imwheel.nix
|
||||
./services/x11/redshift.nix
|
||||
./services/x11/touchegg.nix
|
||||
./services/x11/urserver.nix
|
||||
./services/x11/urxvtd.nix
|
||||
./services/x11/window-managers/awesome.nix
|
||||
|
|
|
@ -141,8 +141,15 @@ in
|
|||
// mkService cfg.atopgpu.enable "atopgpu" [ atop ];
|
||||
timers = mkTimer cfg.atopRotateTimer.enable "atop-rotate" [ atop ];
|
||||
};
|
||||
security.wrappers =
|
||||
lib.mkIf cfg.setuidWrapper.enable { atop = { source = "${atop}/bin/atop"; }; };
|
||||
|
||||
security.wrappers = lib.mkIf cfg.setuidWrapper.enable {
|
||||
atop =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${atop}/bin/atop";
|
||||
};
|
||||
};
|
||||
}
|
||||
);
|
||||
}
|
||||
|
|
|
@ -22,8 +22,10 @@ in {
|
|||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = with pkgs; [ bandwhich ];
|
||||
security.wrappers.bandwhich = {
|
||||
source = "${pkgs.bandwhich}/bin/bandwhich";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw,cap_net_admin+ep";
|
||||
source = "${pkgs.bandwhich}/bin/bandwhich";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -105,11 +105,15 @@ in
|
|||
);
|
||||
|
||||
security.wrappers.udhcpc = {
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+p";
|
||||
source = "${pkgs.busybox}/bin/udhcpc";
|
||||
};
|
||||
|
||||
security.wrappers.captive-browser = {
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+p";
|
||||
source = pkgs.writeShellScript "captive-browser" ''
|
||||
export PREV_CONFIG_HOME="$XDG_CONFIG_HOME"
|
||||
|
|
|
@ -28,7 +28,9 @@ in {
|
|||
|
||||
# "nix-ccache --show-stats" and "nix-ccache --clear"
|
||||
security.wrappers.nix-ccache = {
|
||||
owner = "nobody";
|
||||
group = "nixbld";
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
source = pkgs.writeScript "nix-ccache.pl" ''
|
||||
#!${pkgs.perl}/bin/perl
|
||||
|
|
|
@ -81,7 +81,12 @@ in {
|
|||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
security.wrappers.firejail.source = "${lib.getBin pkgs.firejail}/bin/firejail";
|
||||
security.wrappers.firejail =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${lib.getBin pkgs.firejail}/bin/firejail";
|
||||
};
|
||||
|
||||
environment.systemPackages = [ pkgs.firejail ] ++ [ wrappedBins ];
|
||||
};
|
||||
|
|
|
@ -56,6 +56,8 @@ in
|
|||
polkit.enable = true;
|
||||
wrappers = mkIf cfg.enableRenice {
|
||||
gamemoded = {
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.gamemode}/bin/gamemoded";
|
||||
capabilities = "cap_sys_nice+ep";
|
||||
};
|
||||
|
|
|
@ -11,8 +11,10 @@ in {
|
|||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ pkgs.iftop ];
|
||||
security.wrappers.iftop = {
|
||||
source = "${pkgs.iftop}/bin/iftop";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+p";
|
||||
source = "${pkgs.iftop}/bin/iftop";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -10,8 +10,10 @@ in {
|
|||
};
|
||||
config = mkIf cfg.enable {
|
||||
security.wrappers.iotop = {
|
||||
source = "${pkgs.iotop}/bin/iotop";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_admin+p";
|
||||
source = "${pkgs.iotop}/bin/iotop";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -11,6 +11,11 @@ in
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ pkgs.kbdlight ];
|
||||
security.wrappers.kbdlight.source = "${pkgs.kbdlight.out}/bin/kbdlight";
|
||||
security.wrappers.kbdlight =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.kbdlight.out}/bin/kbdlight";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -13,8 +13,10 @@ in {
|
|||
security.wrappers = mkMerge (map (
|
||||
exec: {
|
||||
"${exec}" = {
|
||||
source = "${pkgs.liboping}/bin/${exec}";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+p";
|
||||
source = "${pkgs.liboping}/bin/${exec}";
|
||||
};
|
||||
}
|
||||
) [ "oping" "noping" ]);
|
||||
|
|
|
@ -78,6 +78,8 @@ in {
|
|||
source = "${pkgs.msmtp}/bin/sendmail";
|
||||
setuid = false;
|
||||
setgid = false;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
};
|
||||
|
||||
environment.etc."msmtprc".text = let
|
||||
|
|
|
@ -31,8 +31,10 @@ in {
|
|||
environment.systemPackages = with pkgs; [ cfg.package ];
|
||||
|
||||
security.wrappers.mtr-packet = {
|
||||
source = "${cfg.package}/bin/mtr-packet";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+p";
|
||||
source = "${cfg.package}/bin/mtr-packet";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -18,8 +18,10 @@ in {
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
security.wrappers.noisetorch = {
|
||||
source = "${cfg.package}/bin/noisetorch";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_sys_resource=+ep";
|
||||
source = "${cfg.package}/bin/noisetorch";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
19
third_party/nixpkgs/nixos/modules/programs/pantheon-tweaks.nix
vendored
Normal file
|
@ -0,0 +1,19 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
{
|
||||
meta = {
|
||||
maintainers = teams.pantheon.members;
|
||||
};
|
||||
|
||||
###### interface
|
||||
options = {
|
||||
programs.pantheon-tweaks.enable = mkEnableOption "Pantheon Tweaks, an unofficial system settings panel for Pantheon";
|
||||
};
|
||||
|
||||
###### implementation
|
||||
config = mkIf config.programs.pantheon-tweaks.enable {
|
||||
services.xserver.desktopManager.pantheon.extraSwitchboardPlugs = [ pkgs.pantheon-tweaks ];
|
||||
};
|
||||
}
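
Given the module above, a usage sketch is a single option; only the option declared in this file is used:

```nix
{
  # Adds pkgs.pantheon-tweaks to Switchboard's extra plugs.
  programs.pantheon-tweaks.enable = true;
}
```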
|
|
@ -30,7 +30,7 @@ in
|
|||
###### implementation
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
environment.variables.XDG_DATA_DIRS = [ "${pkgs.plotinus}/share/gsettings-schemas/${pkgs.plotinus.name}" ];
|
||||
environment.sessionVariables.XDG_DATA_DIRS = [ "${pkgs.plotinus}/share/gsettings-schemas/${pkgs.plotinus.name}" ];
|
||||
environment.variables.GTK3_MODULES = [ "${pkgs.plotinus}/lib/libplotinus.so" ];
|
||||
};
|
||||
}
|
||||
|
|
|
@ -43,6 +43,13 @@ let
|
|||
|
||||
'';
|
||||
|
||||
mkSetuidRoot = source:
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
inherit source;
|
||||
};
|
||||
|
||||
in
|
||||
|
||||
{
|
||||
|
@ -109,14 +116,14 @@ in
|
|||
};
|
||||
|
||||
security.wrappers = {
|
||||
su.source = "${pkgs.shadow.su}/bin/su";
|
||||
sg.source = "${pkgs.shadow.out}/bin/sg";
|
||||
newgrp.source = "${pkgs.shadow.out}/bin/newgrp";
|
||||
newuidmap.source = "${pkgs.shadow.out}/bin/newuidmap";
|
||||
newgidmap.source = "${pkgs.shadow.out}/bin/newgidmap";
|
||||
su = mkSetuidRoot "${pkgs.shadow.su}/bin/su";
|
||||
sg = mkSetuidRoot "${pkgs.shadow.out}/bin/sg";
|
||||
newgrp = mkSetuidRoot "${pkgs.shadow.out}/bin/newgrp";
|
||||
newuidmap = mkSetuidRoot "${pkgs.shadow.out}/bin/newuidmap";
|
||||
newgidmap = mkSetuidRoot "${pkgs.shadow.out}/bin/newgidmap";
|
||||
} // lib.optionalAttrs config.users.mutableUsers {
|
||||
chsh.source = "${pkgs.shadow.out}/bin/chsh";
|
||||
passwd.source = "${pkgs.shadow.out}/bin/passwd";
|
||||
chsh = mkSetuidRoot "${pkgs.shadow.out}/bin/chsh";
|
||||
passwd = mkSetuidRoot "${pkgs.shadow.out}/bin/passwd";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -16,7 +16,12 @@ in {
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ singularity ];
|
||||
security.wrappers.singularity-suid.source = "${singularity}/libexec/singularity/bin/starter-suid.orig";
|
||||
security.wrappers.singularity-suid =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${singularity}/libexec/singularity/bin/starter-suid.orig";
|
||||
};
|
||||
systemd.tmpfiles.rules = [
|
||||
"d /var/singularity/mnt/session 0770 root root -"
|
||||
"d /var/singularity/mnt/final 0770 root root -"
|
||||
|
|
|
@ -21,6 +21,11 @@ in
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ pkgs.slock ];
|
||||
security.wrappers.slock.source = "${pkgs.slock.out}/bin/slock";
|
||||
security.wrappers.slock =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.slock.out}/bin/slock";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -181,6 +181,8 @@ in
|
|||
source = "${pkgs.ssmtp}/bin/sendmail";
|
||||
setuid = false;
|
||||
setgid = false;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
};
|
||||
|
||||
};
|
||||
|
|
|
@ -19,8 +19,10 @@ in {
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
security.wrappers.traceroute = {
|
||||
source = "${pkgs.traceroute}/bin/traceroute";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+p";
|
||||
source = "${pkgs.traceroute}/bin/traceroute";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -9,6 +9,11 @@ in {
|
|||
options.programs.udevil.enable = mkEnableOption "udevil";
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
security.wrappers.udevil.source = "${lib.getBin pkgs.udevil}/bin/udevil";
|
||||
security.wrappers.udevil =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${lib.getBin pkgs.udevil}/bin/udevil";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -21,8 +21,10 @@ in {
|
|||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = with pkgs; [ wavemon ];
|
||||
security.wrappers.wavemon = {
|
||||
source = "${pkgs.wavemon}/bin/wavemon";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_admin+ep";
|
||||
source = "${pkgs.wavemon}/bin/wavemon";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
47
third_party/nixpkgs/nixos/modules/programs/weylus.nix
vendored
Normal file
|
@ -0,0 +1,47 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.programs.weylus;
|
||||
in
|
||||
{
|
||||
options.programs.weylus = with types; {
|
||||
enable = mkEnableOption "weylus";
|
||||
|
||||
openFirewall = mkOption {
|
||||
type = bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Open ports needed for the functionality of the program.
|
||||
'';
|
||||
};
|
||||
|
||||
users = mkOption {
|
||||
type = listOf str;
|
||||
default = [ ];
|
||||
description = ''
|
||||
To enable stylus and multi-touch support, the user you're going to use must be added to this list.
|
||||
These users can synthesize input events system-wide, even when another user is logged in - untrusted users should not be added.
|
||||
'';
|
||||
};
|
||||
|
||||
package = mkOption {
|
||||
type = package;
|
||||
default = pkgs.weylus;
|
||||
defaultText = "pkgs.weylus";
|
||||
description = "Weylus package to install.";
|
||||
};
|
||||
};
|
||||
config = mkIf cfg.enable {
|
||||
networking.firewall = mkIf cfg.openFirewall {
|
||||
allowedTCPPorts = [ 1701 9001 ];
|
||||
};
|
||||
|
||||
hardware.uinput.enable = true;
|
||||
|
||||
users.groups.uinput.members = cfg.users;
|
||||
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
};
|
||||
}
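
A usage sketch built only from the options declared in the module above; the user name is hypothetical:

```nix
{
  programs.weylus = {
    enable = true;
    openFirewall = true;   # opens TCP ports 1701 and 9001, as defined above
    users = [ "alice" ];   # hypothetical user added to the uinput group
  };
}
```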
|
|
@ -17,6 +17,11 @@ in {
|
|||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
security.wrappers.wshowkeys.source = "${pkgs.wshowkeys}/bin/wshowkeys";
|
||||
security.wrappers.wshowkeys =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.wshowkeys}/bin/wshowkeys";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -28,6 +28,11 @@ in
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ sandbox ];
|
||||
security.wrappers.${sandbox.passthru.sandboxExecutableName}.source = "${sandbox}/bin/${sandbox.passthru.sandboxExecutableName}";
|
||||
security.wrappers.${sandbox.passthru.sandboxExecutableName} =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${sandbox}/bin/${sandbox.passthru.sandboxExecutableName}";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -241,9 +241,12 @@ in
|
|||
}
|
||||
];
|
||||
|
||||
security.wrappers = {
|
||||
doas.source = "${doas}/bin/doas";
|
||||
};
|
||||
security.wrappers.doas =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${doas}/bin/doas";
|
||||
};
|
||||
|
||||
environment.systemPackages = [
|
||||
doas
|
||||
|
|
|
@ -186,7 +186,12 @@ in
|
|||
config = mkIf (cfg.ssh.enable || cfg.pam.enable) {
|
||||
environment.systemPackages = [ pkgs.duo-unix ];
|
||||
|
||||
security.wrappers.login_duo.source = "${pkgs.duo-unix.out}/bin/login_duo";
|
||||
security.wrappers.login_duo =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.duo-unix.out}/bin/login_duo";
|
||||
};
|
||||
|
||||
system.activationScripts = {
|
||||
login_duo = mkIf cfg.ssh.enable ''
|
||||
|
|
|
@ -35,10 +35,10 @@ with lib;
|
|||
wants = [ "systemd-udevd.service" ];
|
||||
wantedBy = [ config.systemd.defaultUnit ];
|
||||
|
||||
before = [ config.systemd.defaultUnit ];
|
||||
after =
|
||||
[ "firewall.service"
|
||||
"systemd-modules-load.service"
|
||||
config.systemd.defaultUnit
|
||||
];
|
||||
|
||||
unitConfig.ConditionPathIsReadWrite = "/proc/sys/kernel";
|
||||
|
|
|
@ -869,9 +869,10 @@ in
|
|||
|
||||
security.wrappers = {
|
||||
unix_chkpwd = {
|
||||
source = "${pkgs.pam}/sbin/unix_chkpwd.orig";
|
||||
owner = "root";
|
||||
setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.pam}/sbin/unix_chkpwd.orig";
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -32,8 +32,18 @@ in
|
|||
|
||||
# Make sure pmount and pumount are setuid wrapped.
|
||||
security.wrappers = {
|
||||
pmount.source = "${pkgs.pmount.out}/bin/pmount";
|
||||
pumount.source = "${pkgs.pmount.out}/bin/pumount";
|
||||
pmount =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.pmount.out}/bin/pmount";
|
||||
};
|
||||
pumount =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.pmount.out}/bin/pumount";
|
||||
};
|
||||
};
|
||||
|
||||
environment.systemPackages = [ pkgs.pmount ];
|
||||
|
|
|
@ -83,8 +83,18 @@ in
|
|||
security.pam.services.polkit-1 = {};
|
||||
|
||||
security.wrappers = {
|
||||
pkexec.source = "${pkgs.polkit.bin}/bin/pkexec";
|
||||
polkit-agent-helper-1.source = "${pkgs.polkit.out}/lib/polkit-1/polkit-agent-helper-1";
|
||||
pkexec =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.polkit.bin}/bin/pkexec";
|
||||
};
|
||||
polkit-agent-helper-1 =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.polkit.out}/lib/polkit-1/polkit-agent-helper-1";
|
||||
};
|
||||
};
|
||||
|
||||
systemd.tmpfiles.rules = [
|
||||
|
|
|
@ -146,6 +146,7 @@ in {
|
|||
# Create the tss user and group only if the default value is used
|
||||
users.users.${cfg.tssUser} = lib.mkIf (cfg.tssUser == "tss") {
|
||||
isSystemUser = true;
|
||||
group = "tss";
|
||||
};
|
||||
users.groups.${cfg.tssGroup} = lib.mkIf (cfg.tssGroup == "tss") {};
|
||||
|
||||
|
@ -172,7 +173,7 @@ in {
|
|||
BusName = "com.intel.tss2.Tabrmd";
|
||||
ExecStart = "${cfg.abrmd.package}/bin/tpm2-abrmd";
|
||||
User = "tss";
|
||||
Group = "nogroup";
|
||||
Group = "tss";
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@ -5,85 +5,140 @@ let
|
|||
|
||||
parentWrapperDir = dirOf wrapperDir;
|
||||
|
||||
programs =
|
||||
(lib.mapAttrsToList
|
||||
(n: v: (if v ? program then v else v // {program=n;}))
|
||||
wrappers);
|
||||
|
||||
securityWrapper = pkgs.callPackage ./wrapper.nix {
|
||||
inherit parentWrapperDir;
|
||||
};
|
||||
|
||||
fileModeType =
|
||||
let
|
||||
# taken from the chmod(1) man page
|
||||
symbolic = "[ugoa]*([-+=]([rwxXst]*|[ugo]))+|[-+=][0-7]+";
|
||||
numeric = "[-+=]?[0-7]{0,4}";
|
||||
mode = "((${symbolic})(,${symbolic})*)|(${numeric})";
|
||||
in
|
||||
lib.types.strMatching mode
|
||||
// { description = "file mode string"; };
|
||||
|
||||
wrapperType = lib.types.submodule ({ name, config, ... }: {
|
||||
options.source = lib.mkOption
|
||||
{ type = lib.types.path;
|
||||
description = "The absolute path to the program to be wrapped.";
|
||||
};
|
||||
options.program = lib.mkOption
|
||||
{ type = with lib.types; nullOr str;
|
||||
default = name;
|
||||
description = ''
|
||||
The name of the wrapper program. Defaults to the attribute name.
|
||||
'';
|
||||
};
|
||||
options.owner = lib.mkOption
|
||||
{ type = lib.types.str;
|
||||
description = "The owner of the wrapper program.";
|
||||
};
|
||||
options.group = lib.mkOption
|
||||
{ type = lib.types.str;
|
||||
description = "The group of the wrapper program.";
|
||||
};
|
||||
options.permissions = lib.mkOption
|
||||
{ type = fileModeType;
|
||||
default = "u+rx,g+x,o+x";
|
||||
example = "a+rx";
|
||||
description = ''
|
||||
The permissions of the wrapper program. The format is that of a
|
||||
symbolic or numeric file mode understood by <command>chmod</command>.
|
||||
'';
|
||||
};
|
||||
options.capabilities = lib.mkOption
|
||||
{ type = lib.types.commas;
|
||||
default = "";
|
||||
description = ''
|
||||
A comma-separated list of capabilities to be given to the wrapper
|
||||
program. For capabilities supported by the system check the
|
||||
<citerefentry>
|
||||
<refentrytitle>capabilities</refentrytitle>
|
||||
<manvolnum>7</manvolnum>
|
||||
</citerefentry>
|
||||
manual page.
|
||||
|
||||
<note><para>
|
||||
<literal>cap_setpcap</literal>, which is required for the wrapper
|
||||
program to be able to raise caps into the Ambient set is NOT raised
|
||||
to the Ambient set so that the real program cannot modify its own
|
||||
capabilities!! This may be too restrictive for cases in which the
|
||||
real program needs cap_setpcap but it at least leans on the side
|
||||
security paranoid vs. too relaxed.
|
||||
</para></note>
|
||||
'';
|
||||
};
|
||||
options.setuid = lib.mkOption
|
||||
{ type = lib.types.bool;
|
||||
default = false;
|
||||
description = "Whether to add the setuid bit the wrapper program.";
|
||||
};
|
||||
options.setgid = lib.mkOption
|
||||
{ type = lib.types.bool;
|
||||
default = false;
|
||||
description = "Whether to add the setgid bit the wrapper program.";
|
||||
};
|
||||
});
|
||||
|
||||
###### Activation script for the setcap wrappers
|
||||
mkSetcapProgram =
|
||||
{ program
|
||||
, capabilities
|
||||
, source
|
||||
, owner ? "nobody"
|
||||
, group ? "nogroup"
|
||||
, permissions ? "u+rx,g+x,o+x"
|
||||
, owner
|
||||
, group
|
||||
, permissions
|
||||
, ...
|
||||
}:
|
||||
assert (lib.versionAtLeast (lib.getVersion config.boot.kernelPackages.kernel) "4.3");
|
||||
''
|
||||
cp ${securityWrapper}/bin/security-wrapper $wrapperDir/${program}
|
||||
echo -n "${source}" > $wrapperDir/${program}.real
|
||||
cp ${securityWrapper}/bin/security-wrapper "$wrapperDir/${program}"
|
||||
echo -n "${source}" > "$wrapperDir/${program}.real"
|
||||
|
||||
# Prevent races
|
||||
chmod 0000 $wrapperDir/${program}
|
||||
chown ${owner}.${group} $wrapperDir/${program}
|
||||
chmod 0000 "$wrapperDir/${program}"
|
||||
chown ${owner}.${group} "$wrapperDir/${program}"
|
||||
|
||||
# Set desired capabilities on the file plus cap_setpcap so
|
||||
# the wrapper program can elevate the capabilities set on
|
||||
# its file into the Ambient set.
|
||||
${pkgs.libcap.out}/bin/setcap "cap_setpcap,${capabilities}" $wrapperDir/${program}
|
||||
${pkgs.libcap.out}/bin/setcap "cap_setpcap,${capabilities}" "$wrapperDir/${program}"
|
||||
|
||||
# Set the executable bit
|
||||
chmod ${permissions} $wrapperDir/${program}
|
||||
chmod ${permissions} "$wrapperDir/${program}"
|
||||
'';
|
||||
|
||||
###### Activation script for the setuid wrappers
|
||||
mkSetuidProgram =
|
||||
{ program
|
||||
, source
|
||||
, owner ? "nobody"
|
||||
, group ? "nogroup"
|
||||
, setuid ? false
|
||||
, setgid ? false
|
||||
, permissions ? "u+rx,g+x,o+x"
|
||||
, owner
|
||||
, group
|
||||
, setuid
|
||||
, setgid
|
||||
, permissions
|
||||
, ...
|
||||
}:
|
||||
''
|
||||
cp ${securityWrapper}/bin/security-wrapper $wrapperDir/${program}
|
||||
echo -n "${source}" > $wrapperDir/${program}.real
|
||||
cp ${securityWrapper}/bin/security-wrapper "$wrapperDir/${program}"
|
||||
echo -n "${source}" > "$wrapperDir/${program}.real"
|
||||
|
||||
# Prevent races
|
||||
chmod 0000 $wrapperDir/${program}
|
||||
chown ${owner}.${group} $wrapperDir/${program}
|
||||
chmod 0000 "$wrapperDir/${program}"
|
||||
chown ${owner}.${group} "$wrapperDir/${program}"
|
||||
|
||||
chmod "u${if setuid then "+" else "-"}s,g${if setgid then "+" else "-"}s,${permissions}" $wrapperDir/${program}
|
||||
chmod "u${if setuid then "+" else "-"}s,g${if setgid then "+" else "-"}s,${permissions}" "$wrapperDir/${program}"
|
||||
'';
|
||||
|
||||
mkWrappedPrograms =
|
||||
builtins.map
|
||||
(s: if (s ? capabilities)
|
||||
then mkSetcapProgram
|
||||
({ owner = "root";
|
||||
group = "root";
|
||||
} // s)
|
||||
else if
|
||||
(s ? setuid && s.setuid) ||
|
||||
(s ? setgid && s.setgid) ||
|
||||
(s ? permissions)
|
||||
then mkSetuidProgram s
|
||||
else mkSetuidProgram
|
||||
({ owner = "root";
|
||||
group = "root";
|
||||
setuid = true;
|
||||
setgid = false;
|
||||
permissions = "u+rx,g+x,o+x";
|
||||
} // s)
|
||||
) programs;
|
||||
(opts:
|
||||
if opts.capabilities != ""
|
||||
then mkSetcapProgram opts
|
||||
else mkSetuidProgram opts
|
||||
) (lib.attrValues wrappers);
|
||||
in
|
||||
{
|
||||
imports = [
|
||||
|
@ -95,45 +150,42 @@ in
|
|||
|
||||
options = {
|
||||
security.wrappers = lib.mkOption {
|
||||
type = lib.types.attrs;
|
||||
type = lib.types.attrsOf wrapperType;
|
||||
default = {};
|
||||
example = lib.literalExample
|
||||
''
|
||||
{ sendmail.source = "/nix/store/.../bin/sendmail";
|
||||
ping = {
|
||||
source = "${pkgs.iputils.out}/bin/ping";
|
||||
owner = "nobody";
|
||||
group = "nogroup";
|
||||
capabilities = "cap_net_raw+ep";
|
||||
};
|
||||
{
|
||||
# a setuid root program
|
||||
doas =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "''${pkgs.doas}/bin/doas";
|
||||
};
|
||||
|
||||
# a setgid program
|
||||
locate =
|
||||
{ setgid = true;
|
||||
owner = "root";
|
||||
group = "mlocate";
|
||||
source = "''${pkgs.locate}/bin/locate";
|
||||
};
|
||||
|
||||
# a program with the CAP_NET_RAW capability
|
||||
ping =
|
||||
{ owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_raw+ep";
|
||||
source = "''${pkgs.iputils.out}/bin/ping";
|
||||
};
|
||||
}
|
||||
'';
|
||||
description = ''
|
||||
This option allows the ownership and permissions on the setuid
|
||||
wrappers for specific programs to be overridden from the
|
||||
default (setuid root, but not setgid root).
|
||||
|
||||
<note>
|
||||
<para>The sub-attribute <literal>source</literal> is mandatory,
|
||||
it must be the absolute path to the program to be wrapped.
|
||||
</para>
|
||||
|
||||
<para>The sub-attribute <literal>program</literal> is optional and
|
||||
can give the wrapper program a new name. The default name is the same
|
||||
as the attribute name itself.</para>
|
||||
|
||||
<para>Additionally, this option can set capabilities on a
|
||||
wrapper program that propagates those capabilities down to the
|
||||
wrapped, real program.</para>
|
||||
|
||||
<para>NOTE: cap_setpcap, which is required for the wrapper
|
||||
program to be able to raise caps into the Ambient set is NOT
|
||||
raised to the Ambient set so that the real program cannot
|
||||
modify its own capabilities!! This may be too restrictive for
|
||||
cases in which the real program needs cap_setpcap but it at
|
||||
least leans on the side security paranoid vs. too
|
||||
relaxed.</para>
|
||||
</note>
|
||||
This option effectively allows adding setuid/setgid bits, capabilities,
|
||||
changing file ownership and permissions of a program without directly
|
||||
modifying it. This works by creating a wrapper program under the
|
||||
<option>security.wrapperDir</option> directory, which is then added to
|
||||
the shell <literal>PATH</literal>.
|
||||
'';
|
||||
};
|
||||
|
||||
|
@ -151,13 +203,31 @@ in
|
|||
###### implementation
|
||||
config = {
|
||||
|
||||
security.wrappers = {
|
||||
# These are mount related wrappers that require the +s permission.
|
||||
fusermount.source = "${pkgs.fuse}/bin/fusermount";
|
||||
fusermount3.source = "${pkgs.fuse3}/bin/fusermount3";
|
||||
mount.source = "${lib.getBin pkgs.util-linux}/bin/mount";
|
||||
umount.source = "${lib.getBin pkgs.util-linux}/bin/umount";
|
||||
};
|
||||
assertions = lib.mapAttrsToList
|
||||
(name: opts:
|
||||
{ assertion = opts.setuid || opts.setgid -> opts.capabilities == "";
|
||||
message = ''
|
||||
The security.wrappers.${name} wrapper is not valid:
|
||||
setuid/setgid and capabilities are mutually exclusive.
|
||||
'';
|
||||
}
|
||||
) wrappers;
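
To make the new assertion concrete, here is a sketch of a wrapper the check above rejects, next to one it accepts; the wrapper names and source paths are hypothetical:

```nix
{
  # Rejected: setuid combined with capabilities trips the assertion.
  security.wrappers.example-bad = {
    setuid = true;
    owner = "root";
    group = "root";
    capabilities = "cap_net_raw+ep";
    source = "/run/current-system/sw/bin/example";  # illustrative path
  };

  # Accepted: a pure capability wrapper; owner and group are now mandatory.
  security.wrappers.example-ok = {
    owner = "root";
    group = "root";
    capabilities = "cap_net_raw+ep";
    source = "/run/current-system/sw/bin/example";  # illustrative path
  };
}
```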
|
||||
|
||||
security.wrappers =
|
||||
let
|
||||
mkSetuidRoot = source:
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
inherit source;
|
||||
};
|
||||
in
|
||||
{ # These are mount related wrappers that require the +s permission.
|
||||
fusermount = mkSetuidRoot "${pkgs.fuse}/bin/fusermount";
|
||||
fusermount3 = mkSetuidRoot "${pkgs.fuse3}/bin/fusermount3";
|
||||
mount = mkSetuidRoot "${lib.getBin pkgs.util-linux}/bin/mount";
|
||||
umount = mkSetuidRoot "${lib.getBin pkgs.util-linux}/bin/umount";
|
||||
};
|
||||
|
||||
boot.specialFileSystems.${parentWrapperDir} = {
|
||||
fsType = "tmpfs";
|
||||
|
@ -179,19 +249,15 @@ in
|
|||
]}"
|
||||
'';
|
||||
|
||||
###### setcap activation script
|
||||
###### wrappers activation script
|
||||
system.activationScripts.wrappers =
|
||||
lib.stringAfter [ "specialfs" "users" ]
|
||||
''
|
||||
# Look in the system path and in the default profile for
|
||||
# programs to be wrapped.
|
||||
WRAPPER_PATH=${config.system.path}/bin:${config.system.path}/sbin
|
||||
|
||||
chmod 755 "${parentWrapperDir}"
|
||||
|
||||
# We want to place the tmpdirs for the wrappers to the parent dir.
|
||||
wrapperDir=$(mktemp --directory --tmpdir="${parentWrapperDir}" wrappers.XXXXXXXXXX)
|
||||
chmod a+rx $wrapperDir
|
||||
chmod a+rx "$wrapperDir"
|
||||
|
||||
${lib.concatStringsSep "\n" mkWrappedPrograms}
|
||||
|
||||
|
@ -199,16 +265,44 @@ in
|
|||
# Atomically replace the symlink
|
||||
# See https://axialcorps.com/2013/07/03/atomically-replacing-files-and-directories/
|
||||
old=$(readlink -f ${wrapperDir})
|
||||
if [ -e ${wrapperDir}-tmp ]; then
|
||||
rm --force --recursive ${wrapperDir}-tmp
|
||||
if [ -e "${wrapperDir}-tmp" ]; then
|
||||
rm --force --recursive "${wrapperDir}-tmp"
|
||||
fi
|
||||
ln --symbolic --force --no-dereference $wrapperDir ${wrapperDir}-tmp
|
||||
mv --no-target-directory ${wrapperDir}-tmp ${wrapperDir}
|
||||
rm --force --recursive $old
|
||||
ln --symbolic --force --no-dereference "$wrapperDir" "${wrapperDir}-tmp"
|
||||
mv --no-target-directory "${wrapperDir}-tmp" "${wrapperDir}"
|
||||
rm --force --recursive "$old"
|
||||
else
|
||||
# For initial setup
|
||||
ln --symbolic $wrapperDir ${wrapperDir}
|
||||
ln --symbolic "$wrapperDir" "${wrapperDir}"
|
||||
fi
|
||||
'';
|
||||
|
||||
###### wrappers consistency checks
|
||||
system.extraDependencies = lib.singleton (pkgs.runCommandLocal
|
||||
"ensure-all-wrappers-paths-exist" { }
|
||||
''
|
||||
# make sure we produce output
|
||||
mkdir -p $out
|
||||
|
||||
echo -n "Checking that Nix store paths of all wrapped programs exist... "
|
||||
|
||||
declare -A wrappers
|
||||
${lib.concatStringsSep "\n" (lib.mapAttrsToList (n: v:
|
||||
"wrappers['${n}']='${v.source}'") wrappers)}
|
||||
|
||||
for name in "''${!wrappers[@]}"; do
|
||||
path="''${wrappers[$name]}"
|
||||
if [[ "$path" =~ /nix/store ]] && [ ! -e "$path" ]; then
|
||||
test -t 1 && echo -ne '\033[1;31m'
|
||||
echo "FAIL"
|
||||
echo "The path $path does not exist!"
|
||||
echo 'Please, check the value of `security.wrappers."'$name'".source`.'
|
||||
test -t 1 && echo -ne '\033[0m'
|
||||
exit 1
|
||||
fi
|
||||
done
|
||||
|
||||
echo "OK"
|
||||
'');
|
||||
};
|
||||
}
|
||||
|
|
|
@ -5,28 +5,33 @@ with lib;
|
|||
let
|
||||
cfg = config.services.kubernetes;
|
||||
|
||||
defaultContainerdConfigFile = pkgs.writeText "containerd.toml" ''
|
||||
version = 2
|
||||
root = "/var/lib/containerd"
|
||||
state = "/run/containerd"
|
||||
oom_score = 0
|
||||
defaultContainerdSettings = {
|
||||
version = 2;
|
||||
root = "/var/lib/containerd";
|
||||
state = "/run/containerd";
|
||||
oom_score = 0;
|
||||
|
||||
[grpc]
|
||||
address = "/run/containerd/containerd.sock"
|
||||
grpc = {
|
||||
address = "/run/containerd/containerd.sock";
|
||||
};
|
||||
|
||||
[plugins."io.containerd.grpc.v1.cri"]
|
||||
sandbox_image = "pause:latest"
|
||||
plugins."io.containerd.grpc.v1.cri" = {
|
||||
sandbox_image = "pause:latest";
|
||||
|
||||
[plugins."io.containerd.grpc.v1.cri".cni]
|
||||
bin_dir = "/opt/cni/bin"
|
||||
max_conf_num = 0
|
||||
cni = {
|
||||
bin_dir = "/opt/cni/bin";
|
||||
max_conf_num = 0;
|
||||
};
|
||||
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
|
||||
runtime_type = "io.containerd.runc.v2"
|
||||
containerd.runtimes.runc = {
|
||||
runtime_type = "io.containerd.runc.v2";
|
||||
};
|
||||
|
||||
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes."io.containerd.runc.v2".options]
|
||||
SystemdCgroup = true
|
||||
'';
|
||||
containerd.runtimes."io.containerd.runc.v2".options = {
|
||||
SystemdCgroup = true;
|
||||
};
|
||||
};
|
||||
};
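
Because the default is now a plain attribute set handed to `virtualisation.containerd.settings` (see the hunk below), a configuration can override individual keys instead of replacing a whole TOML file. A sketch, assuming the usual merge behaviour of formats-based `settings` options; the image name is illustrative:

```nix
{
  virtualisation.containerd.settings = {
    # Overrides a single nested key; the defaults above stay in effect
    # under the usual attrset merging of settings-style options.
    plugins."io.containerd.grpc.v1.cri".sandbox_image = "registry.example/pause:3.5";
  };
}
```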
|
||||
|
||||
mkKubeConfig = name: conf: pkgs.writeText "${name}-kubeconfig" (builtins.toJSON {
|
||||
apiVersion = "v1";
|
||||
|
@ -248,7 +253,7 @@ in {
|
|||
(mkIf cfg.kubelet.enable {
|
||||
virtualisation.containerd = {
|
||||
enable = mkDefault true;
|
||||
configFile = mkDefault defaultContainerdConfigFile;
|
||||
settings = mkDefault defaultContainerdSettings;
|
||||
};
|
||||
})
|
||||
|
||||
|
|
162
third_party/nixpkgs/nixos/modules/services/cluster/spark/default.nix
vendored
Normal file
|
@ -0,0 +1,162 @@
|
|||
{config, pkgs, lib, ...}:
|
||||
let
|
||||
cfg = config.services.spark;
|
||||
in
|
||||
with lib;
|
||||
{
|
||||
options = {
|
||||
services.spark = {
|
||||
master = {
|
||||
enable = mkEnableOption "Spark master service";
|
||||
bind = mkOption {
|
||||
type = types.str;
|
||||
description = "Address the spark master binds to.";
|
||||
default = "127.0.0.1";
|
||||
example = "0.0.0.0";
|
||||
};
|
||||
restartIfChanged = mkOption {
|
||||
type = types.bool;
|
||||
description = ''
|
||||
Automatically restart master service on config change.
|
||||
This can be set to false to defer restarts on clusters running critical applications.
|
||||
Please consider the security implications of inadvertently running an older version,
|
||||
and the possibility of unexpected behavior caused by inconsistent versions across a cluster when disabling this option.
|
||||
'';
|
||||
default = true;
|
||||
};
|
||||
extraEnvironment = mkOption {
|
||||
type = types.attrsOf types.str;
|
||||
description = "Extra environment variables to pass to spark master. See spark-standalone documentation.";
|
||||
default = {};
|
||||
example = {
|
||||
SPARK_MASTER_WEBUI_PORT = 8181;
|
||||
SPARK_MASTER_OPTS = "-Dspark.deploy.defaultCores=5";
|
||||
};
|
||||
};
|
||||
};
|
||||
worker = {
|
||||
enable = mkEnableOption "Spark worker service";
|
||||
workDir = mkOption {
|
||||
type = types.path;
|
||||
description = "Spark worker work dir.";
|
||||
default = "/var/lib/spark";
|
||||
};
|
||||
master = mkOption {
|
||||
type = types.str;
|
||||
description = "Address of the spark master.";
|
||||
default = "127.0.0.1:7077";
|
||||
};
|
||||
restartIfChanged = mkOption {
|
||||
type = types.bool;
|
||||
description = ''
|
||||
Automatically restart worker service on config change.
|
||||
This can be set to false to defer restarts on clusters running critical applications.
|
||||
Please consider the security implications of inadvertently running an older version,
|
||||
and the possibility of unexpected behavior caused by inconsistent versions across a cluster when disabling this option.
|
||||
'';
|
||||
default = true;
|
||||
};
|
||||
extraEnvironment = mkOption {
|
||||
type = types.attrsOf types.str;
|
||||
description = "Extra environment variables to pass to spark worker.";
|
||||
default = {};
|
||||
example = {
|
||||
SPARK_WORKER_CORES = 5;
|
||||
SPARK_WORKER_MEMORY = "2g";
|
||||
};
|
||||
};
|
||||
};
|
||||
confDir = mkOption {
|
||||
type = types.path;
|
||||
description = "Spark configuration directory. Spark will use the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc) from this directory.";
|
||||
default = "${cfg.package}/lib/${cfg.package.untarDir}/conf";
|
||||
defaultText = literalExample "\${cfg.package}/lib/\${cfg.package.untarDir}/conf";
|
||||
};
|
||||
logDir = mkOption {
|
||||
type = types.path;
|
||||
description = "Spark log directory.";
|
||||
default = "/var/log/spark";
|
||||
};
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
description = "Spark package.";
|
||||
default = pkgs.spark;
|
||||
defaultText = "pkgs.spark";
|
||||
example = literalExample ''pkgs.spark.overrideAttrs (super: rec {
|
||||
pname = "spark";
|
||||
version = "2.4.4";
|
||||
|
||||
src = pkgs.fetchzip {
|
||||
url = "mirror://apache/spark/"''${pname}-''${version}/''${pname}-''${version}-bin-without-hadoop.tgz";
|
||||
sha256 = "1a9w5k0207fysgpxx6db3a00fs5hdc2ncx99x4ccy2s0v5ndc66g";
|
||||
};
|
||||
})'';
|
||||
};
|
||||
};
|
||||
};
|
||||
config = lib.mkIf (cfg.worker.enable || cfg.master.enable) {
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
systemd = {
|
||||
services = {
|
||||
spark-master = lib.mkIf cfg.master.enable {
|
||||
path = with pkgs; [ procps openssh nettools ];
|
||||
description = "spark master service.";
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
restartIfChanged = cfg.master.restartIfChanged;
|
||||
environment = cfg.master.extraEnvironment // {
|
||||
SPARK_MASTER_HOST = cfg.master.bind;
|
||||
SPARK_CONF_DIR = cfg.confDir;
|
||||
SPARK_LOG_DIR = cfg.logDir;
|
||||
};
|
||||
serviceConfig = {
|
||||
Type = "forking";
|
||||
User = "spark";
|
||||
Group = "spark";
|
||||
WorkingDirectory = "${cfg.package}/lib/${cfg.package.untarDir}";
|
||||
ExecStart = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/start-master.sh";
|
||||
ExecStop = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/stop-master.sh";
|
||||
TimeoutSec = 300;
|
||||
StartLimitBurst=10;
|
||||
Restart = "always";
|
||||
};
|
||||
};
|
||||
spark-worker = lib.mkIf cfg.worker.enable {
|
||||
path = with pkgs; [ procps openssh nettools rsync ];
|
||||
description = "spark master service.";
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
restartIfChanged = cfg.worker.restartIfChanged;
|
||||
environment = cfg.worker.extraEnvironment // {
|
||||
SPARK_MASTER = cfg.worker.master;
|
||||
SPARK_CONF_DIR = cfg.confDir;
|
||||
SPARK_LOG_DIR = cfg.logDir;
|
||||
SPARK_WORKER_DIR = cfg.worker.workDir;
|
||||
};
|
||||
serviceConfig = {
|
||||
Type = "forking";
|
||||
User = "spark";
|
||||
WorkingDirectory = "${cfg.package}/lib/${cfg.package.untarDir}";
|
||||
ExecStart = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/start-worker.sh spark://${cfg.worker.master}";
|
||||
ExecStop = "${cfg.package}/lib/${cfg.package.untarDir}/sbin/stop-worker.sh";
|
||||
TimeoutSec = 300;
|
||||
StartLimitBurst=10;
|
||||
Restart = "always";
|
||||
};
|
||||
};
|
||||
};
|
||||
tmpfiles.rules = [
|
||||
"d '${cfg.worker.workDir}' - spark spark - -"
|
||||
"d '${cfg.logDir}' - spark spark - -"
|
||||
];
|
||||
};
|
||||
users = {
|
||||
users.spark = {
|
||||
description = "spark user.";
|
||||
group = "spark";
|
||||
isSystemUser = true;
|
||||
};
|
||||
groups.spark = { };
|
||||
};
|
||||
};
|
||||
}
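
A usage sketch for a single-node setup using only the options declared above; the addresses mirror the option defaults:

```nix
{
  services.spark = {
    master = {
      enable = true;
      bind = "127.0.0.1";
    };
    worker = {
      enable = true;
      master = "127.0.0.1:7077";
    };
  };
}
```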
|
56
third_party/nixpkgs/nixos/modules/services/desktops/cpupower-gui.nix
vendored
Normal file
|
@ -0,0 +1,56 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.services.cpupower-gui;
|
||||
in {
|
||||
options = {
|
||||
services.cpupower-gui = {
|
||||
enable = mkOption {
|
||||
type = lib.types.bool;
|
||||
default = false;
|
||||
example = true;
|
||||
description = ''
|
||||
Enables dbus/systemd service needed by cpupower-gui.
|
||||
These services are responsible for retrieving and modifying cpu power
|
||||
saving settings.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ pkgs.cpupower-gui ];
|
||||
services.dbus.packages = [ pkgs.cpupower-gui ];
|
||||
systemd.user = {
|
||||
services.cpupower-gui-user = {
|
||||
description = "Apply cpupower-gui config at user login";
|
||||
wantedBy = [ "graphical-session.target" ];
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
ExecStart = "${pkgs.cpupower-gui}/bin/cpupower-gui config";
|
||||
};
|
||||
};
|
||||
};
|
||||
systemd.services = {
|
||||
cpupower-gui = {
|
||||
description = "Apply cpupower-gui config at boot";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
ExecStart = "${pkgs.cpupower-gui}/bin/cpupower-gui config";
|
||||
};
|
||||
};
|
||||
cpupower-gui-helper = {
|
||||
description = "cpupower-gui system helper";
|
||||
aliases = [ "dbus-org.rnd2.cpupower_gui.helper.service" ];
|
||||
serviceConfig = {
|
||||
Type = "dbus";
|
||||
BusName = "org.rnd2.cpupower_gui.helper";
|
||||
ExecStart = "${pkgs.cpupower-gui}/lib/cpupower-gui/cpupower-gui-helper";
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
}
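
Enabling the service from a configuration is a one-liner, using the single option declared above:

```nix
{
  # Installs cpupower-gui and wires up the dbus helper and the
  # boot/login "apply config" units defined in this module.
  services.cpupower-gui.enable = true;
}
```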
|
|
@ -52,8 +52,10 @@ with lib;
|
|||
security.pam.services.login.enableGnomeKeyring = true;
|
||||
|
||||
security.wrappers.gnome-keyring-daemon = {
|
||||
source = "${pkgs.gnome.gnome-keyring}/bin/gnome-keyring-daemon";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_ipc_lock=ep";
|
||||
source = "${pkgs.gnome.gnome-keyring}/bin/gnome-keyring-daemon";
|
||||
};
|
||||
|
||||
};
|
||||
|
|
|
@ -9,7 +9,7 @@ let
|
|||
in
|
||||
{
|
||||
|
||||
meta.maintainers = pkgs.pantheon.maintainers;
|
||||
meta.maintainers = teams.pantheon.members;
|
||||
|
||||
###### interface
|
||||
|
||||
|
|
|
@ -99,7 +99,12 @@ in
|
|||
|
||||
systemd.defaultUnit = "graphical.target";
|
||||
|
||||
users.users.greeter.isSystemUser = true;
|
||||
users.users.greeter = {
|
||||
isSystemUser = true;
|
||||
group = "greeter";
|
||||
};
|
||||
|
||||
users.groups.greeter = {};
|
||||
};
|
||||
|
||||
meta.maintainers = with maintainers; [ queezle ];
|
||||
|
|
|
@ -149,12 +149,10 @@ in
|
|||
users.users = optionalAttrs (cfg.user == "tss") {
|
||||
tss = {
|
||||
group = "tss";
|
||||
uid = config.ids.uids.tss;
|
||||
isSystemUser = true;
|
||||
};
|
||||
};
|
||||
|
||||
users.groups = optionalAttrs (cfg.group == "tss") {
|
||||
tss.gid = config.ids.gids.tss;
|
||||
};
|
||||
users.groups = optionalAttrs (cfg.group == "tss") { tss = {}; };
|
||||
};
|
||||
}
|
||||
|
|
|
@ -215,12 +215,16 @@ in
|
|||
|
||||
users.users = optionalAttrs (cfg.user == "logcheck") {
|
||||
logcheck = {
|
||||
uid = config.ids.uids.logcheck;
|
||||
group = "logcheck";
|
||||
isSystemUser = true;
|
||||
shell = "/bin/sh";
|
||||
description = "Logcheck user account";
|
||||
extraGroups = cfg.extraGroups;
|
||||
};
|
||||
};
|
||||
users.groups = optionalAttrs (cfg.user == "logcheck") {
|
||||
logcheck = {};
|
||||
};
|
||||
|
||||
system.activationScripts.logcheck = ''
|
||||
mkdir -m 700 -p /var/{lib,lock}/logcheck
|
||||
|
|
|
@ -104,7 +104,12 @@ in
|
|||
gid = config.ids.gids.exim;
|
||||
};
|
||||
|
||||
security.wrappers.exim.source = "${cfg.package}/bin/exim";
|
||||
security.wrappers.exim =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${cfg.package}/bin/exim";
|
||||
};
|
||||
|
||||
systemd.services.exim = {
|
||||
description = "Exim Mail Daemon";
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
{ config, lib, ... }:
|
||||
{ config, options, lib, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
|
@ -11,6 +11,7 @@ with lib;
|
|||
services.mail = {
|
||||
|
||||
sendmailSetuidWrapper = mkOption {
|
||||
type = types.nullOr options.security.wrappers.type.nestedTypes.elemType;
|
||||
default = null;
|
||||
internal = true;
|
||||
description = ''
|
||||
|
|
|
@ -103,12 +103,15 @@ in {
|
|||
};
|
||||
|
||||
security.wrappers.smtpctl = {
|
||||
owner = "nobody";
|
||||
group = "smtpq";
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
source = "${cfg.package}/bin/smtpctl";
|
||||
};
|
||||
|
||||
services.mail.sendmailSetuidWrapper = mkIf cfg.setSendmail security.wrappers.smtpctl;
|
||||
services.mail.sendmailSetuidWrapper = mkIf cfg.setSendmail
|
||||
security.wrappers.smtpctl // { program = "sendmail"; };
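
The `//` here is plain Nix attribute-set update: the existing `smtpctl` wrapper definition is reused and only its `program` name is overridden, so the same setgid wrapper is also installed as `sendmail`. A standalone sketch of the operator, with values mirroring the wrapper above and an illustrative path:

```nix
let
  smtpctlWrapper = {
    setgid = true;
    owner = "nobody";
    group = "smtpq";
    source = "/path/to/smtpctl";   # illustrative path
  };
in
  smtpctlWrapper // { program = "sendmail"; }
  # => same attributes as smtpctlWrapper, plus program = "sendmail"
```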
|
||||
|
||||
systemd.tmpfiles.rules = [
|
||||
"d /var/spool/smtpd 711 root - - -"
|
||||
|
|
|
@ -673,6 +673,7 @@ in
|
|||
services.mail.sendmailSetuidWrapper = mkIf config.services.postfix.setSendmail {
|
||||
program = "sendmail";
|
||||
source = "${pkgs.postfix}/bin/sendmail";
|
||||
owner = "nobody";
|
||||
group = setgidGroup;
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
|
@ -681,6 +682,7 @@ in
|
|||
security.wrappers.mailq = {
|
||||
program = "mailq";
|
||||
source = "${pkgs.postfix}/bin/mailq";
|
||||
owner = "nobody";
|
||||
group = setgidGroup;
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
|
@ -689,6 +691,7 @@ in
|
|||
security.wrappers.postqueue = {
|
||||
program = "postqueue";
|
||||
source = "${pkgs.postfix}/bin/postqueue";
|
||||
owner = "nobody";
|
||||
group = setgidGroup;
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
|
@ -697,6 +700,7 @@ in
|
|||
security.wrappers.postdrop = {
|
||||
program = "postdrop";
|
||||
source = "${pkgs.postfix}/bin/postdrop";
|
||||
owner = "nobody";
|
||||
group = setgidGroup;
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
|
|
|
@ -86,7 +86,7 @@ in
|
|||
|
||||
config = mkOption {
|
||||
default = {};
|
||||
type = (types.either types.bool types.int);
|
||||
type = types.attrsOf (types.either types.bool types.int);
|
||||
description = "Additional config";
|
||||
example = {
|
||||
auto-fan = true;
|
||||
|
@ -110,10 +110,14 @@ in
|
|||
|
||||
users.users = optionalAttrs (cfg.user == "cgminer") {
|
||||
cgminer = {
|
||||
uid = config.ids.uids.cgminer;
|
||||
isSystemUser = true;
|
||||
group = "cgminer";
|
||||
description = "Cgminer user";
|
||||
};
|
||||
};
|
||||
users.groups = optionalAttrs (cfg.user == "cgminer") {
|
||||
cgminer = {};
|
||||
};
|
||||
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
|
||||
|
|
|
@ -202,8 +202,8 @@ in {
|
|||
config = mkIf cfg.enable {
|
||||
users.users.${cfg.user} = {
|
||||
description = "gammu-smsd user";
|
||||
uid = config.ids.uids.gammu-smsd;
|
||||
extraGroups = [ "${cfg.device.group}" ];
|
||||
isSystemUser = true;
|
||||
group = cfg.device.group;
|
||||
};
|
||||
|
||||
environment.systemPackages = with cfg.backend; [ gammuPackage ]
|
||||
|
|
|
@ -88,6 +88,7 @@ in
|
|||
|
||||
users.users.gpsd =
|
||||
{ inherit uid;
|
||||
group = "gpsd";
|
||||
description = "gpsd daemon user";
|
||||
home = "/var/empty";
|
||||
};
|
||||
|
|
|
@ -45,8 +45,10 @@ in
|
|||
environment.systemPackages = [ pkgs.mame ];
|
||||
|
||||
security.wrappers."${mame}" = {
|
||||
source = "${pkgs.mame}/bin/${mame}";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_net_admin,cap_net_raw+eip";
|
||||
source = "${pkgs.mame}/bin/${mame}";
|
||||
};
|
||||
|
||||
systemd.services.mame = {
|
||||
|
|
|
@ -187,7 +187,9 @@ in {
|
|||
|
||||
users.users.ripple-data-api =
|
||||
{ description = "Ripple data api user";
|
||||
uid = config.ids.uids.ripple-data-api;
|
||||
isSystemUser = true;
|
||||
group = "ripple-data-api";
|
||||
};
|
||||
users.groups.ripple-data-api = {};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -407,12 +407,14 @@ in
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
|
||||
users.users.rippled =
|
||||
{ description = "Ripple server user";
|
||||
uid = config.ids.uids.rippled;
|
||||
users.users.rippled = {
|
||||
description = "Ripple server user";
|
||||
isSystemUser = true;
|
||||
group = "rippled";
|
||||
home = cfg.databasePath;
|
||||
createHome = true;
|
||||
};
|
||||
users.groups.rippled = {};
|
||||
|
||||
systemd.services.rippled = {
|
||||
after = [ "network.target" ];
|
||||
|
|
|
@ -52,7 +52,12 @@ in
|
|||
wants = [ "network.target" ];
|
||||
};
|
||||
|
||||
security.wrappers.screen.source = "${pkgs.screen}/bin/screen";
|
||||
security.wrappers.screen =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.screen}/bin/screen";
|
||||
};
|
||||
};
|
||||
|
||||
meta.doc = ./weechat.xml;
|
||||
|
|
|
@ -50,8 +50,10 @@ in {
|
|||
};
|
||||
|
||||
users.users.heapster = {
|
||||
uid = config.ids.uids.heapster;
|
||||
isSystemUser = true;
|
||||
group = "heapster";
|
||||
description = "Heapster user";
|
||||
};
|
||||
users.groups.heapster = {};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -71,7 +71,12 @@ in
|
|||
|
||||
environment.systemPackages = [ pkgs.incron ];
|
||||
|
||||
security.wrappers.incrontab.source = "${pkgs.incron}/bin/incrontab";
|
||||
security.wrappers.incrontab =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.incron}/bin/incrontab";
|
||||
};
|
||||
|
||||
# incron won't read symlinks
|
||||
environment.etc."incron.d/system" = {
|
||||
|
|
|
@ -9,9 +9,9 @@ let
|
|||
mkdir -p $out/libexec/netdata/plugins.d
|
||||
ln -s /run/wrappers/bin/apps.plugin $out/libexec/netdata/plugins.d/apps.plugin
|
||||
ln -s /run/wrappers/bin/cgroup-network $out/libexec/netdata/plugins.d/cgroup-network
|
||||
ln -s /run/wrappers/bin/freeipmi.plugin $out/libexec/netdata/plugins.d/freeipmi.plugin
|
||||
ln -s /run/wrappers/bin/perf.plugin $out/libexec/netdata/plugins.d/perf.plugin
|
||||
ln -s /run/wrappers/bin/slabinfo.plugin $out/libexec/netdata/plugins.d/slabinfo.plugin
|
||||
ln -s /run/wrappers/bin/freeipmi.plugin $out/libexec/netdata/plugins.d/freeipmi.plugin
|
||||
'';
|
||||
|
||||
plugins = [
|
||||
|
@ -211,44 +211,47 @@ in {
|
|||
|
||||
systemd.enableCgroupAccounting = true;
|
||||
|
||||
security.wrappers."apps.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/apps.plugin.org";
|
||||
capabilities = "cap_dac_read_search,cap_sys_ptrace+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
security.wrappers = {
|
||||
"apps.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/apps.plugin.org";
|
||||
capabilities = "cap_dac_read_search,cap_sys_ptrace+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
|
||||
security.wrappers."cgroup-network" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/cgroup-network.org";
|
||||
capabilities = "cap_setuid+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
"cgroup-network" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/cgroup-network.org";
|
||||
capabilities = "cap_setuid+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
|
||||
security.wrappers."freeipmi.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/freeipmi.plugin.org";
|
||||
capabilities = "cap_dac_override,cap_fowner+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
"perf.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/perf.plugin.org";
|
||||
capabilities = "cap_sys_admin+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
|
||||
security.wrappers."perf.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/perf.plugin.org";
|
||||
capabilities = "cap_sys_admin+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
"slabinfo.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/slabinfo.plugin.org";
|
||||
capabilities = "cap_dac_override+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
|
||||
security.wrappers."slabinfo.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/slabinfo.plugin.org";
|
||||
capabilities = "cap_dac_override+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
} // optionalAttrs (cfg.package.withIpmi) {
|
||||
"freeipmi.plugin" = {
|
||||
source = "${cfg.package}/libexec/netdata/plugins.d/freeipmi.plugin.org";
|
||||
capabilities = "cap_dac_override,cap_fowner+ep";
|
||||
owner = cfg.user;
|
||||
group = cfg.group;
|
||||
permissions = "u+rx,g+x,o-rwx";
|
||||
};
|
||||
};
|
||||
|
||||
security.pam.loginLimits = [
|
||||
|
|
|
@ -262,7 +262,12 @@ in
|
|||
};
|
||||
|
||||
security.wrappers = {
|
||||
fping.source = "${pkgs.fping}/bin/fping";
|
||||
fping =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.fping}/bin/fping";
|
||||
};
|
||||
};
|
||||
|
||||
systemd.services.zabbix-proxy = {
|
||||
|
|
|
@ -217,6 +217,7 @@ in {
|
|||
home = "${dataDir}";
|
||||
createHome = true;
|
||||
isSystemUser = true;
|
||||
group = "dnscrypt-wrapper";
|
||||
};
|
||||
users.groups.dnscrypt-wrapper = { };
|
||||
|
||||
|
|
|
@ -164,7 +164,7 @@ in {
|
|||
path = [ pkgs.iptables ];
|
||||
preStart = optionalString (cfg.storageBackend == "etcd") ''
|
||||
echo "setting network configuration"
|
||||
until ${pkgs.etcdctl}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
|
||||
until ${pkgs.etcd}/bin/etcdctl set /coreos.com/network/config '${builtins.toJSON networkConfig}'
|
||||
do
|
||||
echo "setting network configuration, retry"
|
||||
sleep 1
|
||||
|
|
|
@@ -6,8 +6,6 @@ let

inherit (pkgs) nntp-proxy;

proxyUser = "nntp-proxy";

cfg = config.services.nntp-proxy;

configBool = b: if b then "TRUE" else "FALSE";

@@ -210,16 +208,18 @@ in

config = mkIf cfg.enable {

users.users.${proxyUser} =
{ uid = config.ids.uids.nntp-proxy;
description = "NNTP-Proxy daemon user";
};
users.users.nntp-proxy = {
isSystemUser = true;
group = "nntp-proxy";
description = "NNTP-Proxy daemon user";
};
users.groups.nntp-proxy = {};

systemd.services.nntp-proxy = {
description = "NNTP proxy";
after = [ "network.target" "nss-lookup.target" ];
wantedBy = [ "multi-user.target" ];
serviceConfig = { User="${proxyUser}"; };
serviceConfig = { User="nntp-proxy"; };
serviceConfig.ExecStart = "${nntp-proxy}/bin/nntp-proxy ${confFile}";
preStart = ''
if [ ! \( -f ${cfg.sslCert} -a -f ${cfg.sslKey} \) ]; then
@ -10,8 +10,6 @@ let
|
|||
|
||||
stateDir = "/var/lib/ntp";
|
||||
|
||||
ntpUser = "ntp";
|
||||
|
||||
configFile = pkgs.writeText "ntp.conf" ''
|
||||
driftfile ${stateDir}/ntp.drift
|
||||
|
||||
|
@ -27,7 +25,7 @@ let
|
|||
${cfg.extraConfig}
|
||||
'';
|
||||
|
||||
ntpFlags = "-c ${configFile} -u ${ntpUser}:nogroup ${toString cfg.extraFlags}";
|
||||
ntpFlags = "-c ${configFile} -u ntp:ntp ${toString cfg.extraFlags}";
|
||||
|
||||
in
|
||||
|
||||
|
@ -119,11 +117,13 @@ in
|
|||
|
||||
systemd.services.systemd-timedated.environment = { SYSTEMD_TIMEDATED_NTP_SERVICES = "ntpd.service"; };
|
||||
|
||||
users.users.${ntpUser} =
|
||||
{ uid = config.ids.uids.ntp;
|
||||
users.users.ntp =
|
||||
{ isSystemUser = true;
|
||||
group = "ntp";
|
||||
description = "NTP daemon user";
|
||||
home = stateDir;
|
||||
};
|
||||
users.groups.ntp = {};
|
||||
|
||||
systemd.services.ntpd =
|
||||
{ description = "NTP Daemon";
|
||||
|
@ -135,7 +135,7 @@ in
|
|||
preStart =
|
||||
''
|
||||
mkdir -m 0755 -p ${stateDir}
|
||||
chown ${ntpUser} ${stateDir}
|
||||
chown ntp ${stateDir}
|
||||
'';
|
||||
|
||||
serviceConfig = {
|
||||
|
|
|
@@ -61,10 +61,12 @@ in
environment.etc."ntpd.conf".text = configFile;

users.users.ntp = {
uid = config.ids.uids.ntp;
isSystemUser = true;
group = "ntp";
description = "OpenNTP daemon user";
home = "/var/empty";
};
users.groups.ntp = {};

systemd.services.openntpd = {
description = "OpenNTP Server";
@@ -72,8 +72,10 @@ in

users.users.rdnssd = {
description = "RDNSSD Daemon User";
uid = config.ids.uids.rdnssd;
isSystemUser = true;
group = "rdnssd";
};
users.groups.rdnssd = {};

};
@ -83,11 +83,13 @@ in {
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
users.users.shout = {
|
||||
uid = config.ids.uids.shout;
|
||||
isSystemUser = true;
|
||||
group = "shout";
|
||||
description = "Shout daemon user";
|
||||
home = shoutHome;
|
||||
createHome = true;
|
||||
};
|
||||
users.groups.shout = {};
|
||||
|
||||
systemd.services.shout = {
|
||||
description = "Shout web IRC client";
|
||||
|
|
|
@ -278,8 +278,12 @@ in
|
|||
}
|
||||
];
|
||||
security.wrappers = {
|
||||
fping.source = "${pkgs.fping}/bin/fping";
|
||||
fping6.source = "${pkgs.fping}/bin/fping6";
|
||||
fping =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${pkgs.fping}/bin/fping";
|
||||
};
|
||||
};
|
||||
environment.systemPackages = [ pkgs.fping ];
|
||||
users.users.${cfg.user} = {
|
||||
|
|
|
@ -59,10 +59,12 @@ with lib;
|
|||
|
||||
users.users = {
|
||||
toxvpn = {
|
||||
uid = config.ids.uids.toxvpn;
|
||||
isSystemUser = true;
|
||||
group = "toxvpn";
|
||||
home = "/var/lib/toxvpn";
|
||||
createHome = true;
|
||||
};
|
||||
};
|
||||
users.groups.toxvpn = {};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -29,8 +29,10 @@ in
|
|||
description = "Tvheadend Service user";
|
||||
home = "/var/lib/tvheadend";
|
||||
createHome = true;
|
||||
uid = config.ids.uids.tvheadend;
|
||||
isSystemUser = true;
|
||||
group = "tvheadend";
|
||||
};
|
||||
users.groups.tvheadend = {};
|
||||
|
||||
systemd.services.tvheadend = {
|
||||
description = "Tvheadend TV streaming server";
|
||||
|
|
|
@ -115,10 +115,12 @@ in
|
|||
config = mkIf cfg.enable {
|
||||
|
||||
users.users.unifi = {
|
||||
uid = config.ids.uids.unifi;
|
||||
isSystemUser = true;
|
||||
group = "unifi";
|
||||
description = "UniFi controller daemon user";
|
||||
home = "${stateDir}";
|
||||
};
|
||||
users.groups.unifi = {};
|
||||
|
||||
networking.firewall = mkIf cfg.openPorts {
|
||||
# https://help.ubnt.com/hc/en-us/articles/218506997
|
||||
|
|
|
@ -88,12 +88,14 @@ in {
|
|||
source = "${pkgs.x2goserver}/lib/x2go/libx2go-server-db-sqlite3-wrapper.pl";
|
||||
owner = "x2go";
|
||||
group = "x2go";
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
};
|
||||
security.wrappers.x2goprintWrapper = {
|
||||
source = "${pkgs.x2goserver}/bin/x2goprint";
|
||||
owner = "x2go";
|
||||
group = "x2go";
|
||||
setuid = false;
|
||||
setgid = true;
|
||||
};
|
||||
|
||||
|
|
|
@ -93,7 +93,12 @@ in
|
|||
|
||||
{ services.cron.enable = mkDefault (allFiles != []); }
|
||||
(mkIf (config.services.cron.enable) {
|
||||
security.wrappers.crontab.source = "${cronNixosPkg}/bin/crontab";
|
||||
security.wrappers.crontab =
|
||||
{ setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${cronNixosPkg}/bin/crontab";
|
||||
};
|
||||
environment.systemPackages = [ cronNixosPkg ];
|
||||
environment.etc.crontab =
|
||||
{ source = pkgs.runCommand "crontabs" { inherit allFiles; preferLocalBuild = true; }
|
||||
|
|
|
@ -136,10 +136,13 @@ in
|
|||
owner = "fcron";
|
||||
group = "fcron";
|
||||
setgid = true;
|
||||
setuid = false;
|
||||
};
|
||||
fcronsighup = {
|
||||
source = "${pkgs.fcron}/bin/fcronsighup";
|
||||
owner = "root";
|
||||
group = "fcron";
|
||||
setuid = true;
|
||||
};
|
||||
};
|
||||
systemd.services.fcron = {
|
||||
|
|
|
@@ -5,13 +5,13 @@ with lib;
let
cfg = config.services.elasticsearch;

es7 = builtins.compareVersions cfg.package.version "7" >= 0;

esConfig = ''
network.host: ${cfg.listenAddress}
cluster.name: ${cfg.cluster_name}
${lib.optionalString cfg.single_node ''
discovery.type: single-node
gateway.auto_import_dangling_indices: true
''}
${lib.optionalString cfg.single_node "discovery.type: single-node"}
${lib.optionalString (cfg.single_node && es7) "gateway.auto_import_dangling_indices: true"}

http.port: ${toString cfg.port}
transport.port: ${toString cfg.tcp_port}
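To make the effect of this hunk concrete, here is a hedged sketch (not part of the commit; `pkgs.elasticsearch7` is an assumed package attribute) of a single-node setup: `discovery.type: single-node` is always emitted for `single_node = true`, while `gateway.auto_import_dangling_indices` is now only emitted when the `es7` version check passes.

```nix
{ pkgs, ... }:
{
  services.elasticsearch = {
    enable = true;
    single_node = true;            # always renders "discovery.type: single-node"
    package = pkgs.elasticsearch7; # version >= 7, so the dangling-indices
                                   # gateway setting is rendered as well
  };
}
```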
129 third_party/nixpkgs/nixos/modules/services/search/meilisearch.nix vendored Normal file
@@ -0,0 +1,129 @@
{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.services.meilisearch;

in
{

  meta.maintainers = with maintainers; [ Br1ght0ne ];

  ###### interface

  options.services.meilisearch = {
    enable = mkEnableOption "MeiliSearch - a RESTful search API";

    package = mkOption {
      description = "The package to use for meilisearch. Use this if you require specific features to be enabled. The default package has no features.";
      default = pkgs.meilisearch;
      defaultText = "pkgs.meilisearch";
      type = types.package;
    };

    listenAddress = mkOption {
      description = "MeiliSearch listen address.";
      default = "127.0.0.1";
      type = types.str;
    };

    listenPort = mkOption {
      description = "MeiliSearch port to listen on.";
      default = 7700;
      type = types.port;
    };

    environment = mkOption {
      description = "Defines the running environment of MeiliSearch.";
      default = "development";
      type = types.enum [ "development" "production" ];
    };

    # TODO change this to LoadCredentials once possible
    masterKeyEnvironmentFile = mkOption {
      description = ''
        Path to file which contains the master key.
        By doing so, all routes will be protected and will require a key to be accessed.
        If no master key is provided, all routes can be accessed without requiring any key.
        The format is the following:
        MEILI_MASTER_KEY=my_secret_key
      '';
      default = null;
      type = with types; nullOr path;
    };

    noAnalytics = mkOption {
      description = ''
        Deactivates analytics.
        Analytics allow MeiliSearch to know how many users are using MeiliSearch,
        which versions and which platforms are used.
        This process is entirely anonymous.
      '';
      default = true;
      type = types.bool;
    };

    logLevel = mkOption {
      description = ''
        Defines how much detail should be present in MeiliSearch's logs.
        MeiliSearch currently supports four log levels, listed in order of increasing verbosity:
        - 'ERROR': only log unexpected events indicating MeiliSearch is not functioning as expected
        - 'WARN:' log all unexpected events, regardless of their severity
        - 'INFO:' log all events. This is the default value
        - 'DEBUG': log all events and including detailed information on MeiliSearch's internal processes.
          Useful when diagnosing issues and debugging
      '';
      default = "INFO";
      type = types.str;
    };

    maxIndexSize = mkOption {
      description = ''
        Sets the maximum size of the index.
        Value must be given in bytes or explicitly stating a base unit.
        For example, the default value can be written as 107374182400, '107.7Gb', or '107374 Mb'.
        Default is 100 GiB
      '';
      default = "107374182400";
      type = types.str;
    };

    payloadSizeLimit = mkOption {
      description = ''
        Sets the maximum size of accepted JSON payloads.
        Value must be given in bytes or explicitly stating a base unit.
        For example, the default value can be written as 107374182400, '107.7Gb', or '107374 Mb'.
        Default is ~ 100 MB
      '';
      default = "104857600";
      type = types.str;
    };

  };

  ###### implementation

  config = mkIf cfg.enable {
    systemd.services.meilisearch = {
      description = "MeiliSearch daemon";
      wantedBy = [ "multi-user.target" ];
      after = [ "network.target" ];
      environment = {
        MEILI_DB_PATH = "/var/lib/meilisearch";
        MEILI_HTTP_ADDR = "${cfg.listenAddress}:${toString cfg.listenPort}";
        MEILI_NO_ANALYTICS = toString cfg.noAnalytics;
        MEILI_ENV = cfg.environment;
        MEILI_DUMPS_DIR = "/var/lib/meilisearch/dumps";
        MEILI_LOG_LEVEL = cfg.logLevel;
        MEILI_MAX_INDEX_SIZE = cfg.maxIndexSize;
      };
      serviceConfig = {
        ExecStart = "${cfg.package}/bin/meilisearch";
        DynamicUser = true;
        StateDirectory = "meilisearch";
        EnvironmentFile = mkIf (cfg.masterKeyEnvironmentFile != null) cfg.masterKeyEnvironmentFile;
      };
    };
  };
}
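A minimal, hedged sketch of how the new module above could be enabled from a NixOS configuration (not part of the commit; the secrets path is hypothetical and must contain a `MEILI_MASTER_KEY=` line as documented in the option):

```nix
{
  services.meilisearch = {
    enable = true;
    environment = "production";
    listenAddress = "127.0.0.1";
    listenPort = 7700;
    # Hypothetical path to a file containing MEILI_MASTER_KEY=<secret>.
    masterKeyEnvironmentFile = "/run/keys/meilisearch-master-key.env";
  };
}
```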
24 third_party/nixpkgs/nixos/modules/services/security/opensnitch.nix vendored Normal file
@@ -0,0 +1,24 @@
{ config, lib, pkgs, ... }:

with lib;

let
  name = "opensnitch";
  cfg = config.services.opensnitch;
in {
  options = {
    services.opensnitch = {
      enable = mkEnableOption "Opensnitch application firewall";
    };
  };

  config = mkIf cfg.enable {

    systemd = {
      packages = [ pkgs.opensnitch ];
      services.opensnitchd.wantedBy = [ "multi-user.target" ];
    };

  };
}
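As a hedged usage sketch (not part of the commit), enabling the daemon is a one-liner; adding a desktop front-end is an assumption on top of this module and is shown here only for illustration:

```nix
{ pkgs, ... }:
{
  services.opensnitch.enable = true;  # starts opensnitchd via the packaged unit

  # Assumed, optional front-end; not managed by the module above.
  environment.systemPackages = [ pkgs.opensnitch-ui ];
}
```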
@@ -38,9 +38,6 @@ in
setuid wrapper to allow any user to start physlock as root, which
is a minor security risk. Call the physlock binary to use this instead
of using the systemd service.

Note that you might need to relog to have the correct binary in your
PATH upon changing this option.
'';
};

@@ -129,7 +126,12 @@ in

(mkIf cfg.allowAnyUser {

security.wrappers.physlock = { source = "${pkgs.physlock}/bin/physlock"; user = "root"; };
security.wrappers.physlock =
{ setuid = true;
owner = "root";
group = "root";
source = "${pkgs.physlock}/bin/physlock";
};

})
]);
@@ -27,7 +27,7 @@ in
{
# No documentation about correct triggers, so guessing at them.

config = mkIf (cfg.enable && kerberos == pkgs.heimdalFull) {
config = mkIf (cfg.enable && kerberos == pkgs.heimdal) {
systemd.services.kadmind = {
description = "Kerberos Administration Daemon";
wantedBy = [ "multi-user.target" ];
@@ -37,7 +37,9 @@ in {
users.users.localtimed = {
description = "localtime daemon";
isSystemUser = true;
group = "localtimed";
};
users.groups.localtimed = {};

systemd.services.localtime = {
wantedBy = [ "multi-user.target" ];
@@ -44,8 +44,10 @@ in

security.wrappers = mkIf cfg.enableSysAdminCapability {
replay-sorcery = {
source = "${pkgs.replay-sorcery}/bin/replay-sorcery";
owner = "root";
group = "root";
capabilities = "cap_sys_admin+ep";
source = "${pkgs.replay-sorcery}/bin/replay-sorcery";
};
};

@ -1,16 +1,21 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
{ config, pkgs, lib, ... }:
|
||||
|
||||
let
|
||||
inherit (lib) mkDefault mkEnableOption mkForce mkIf mkMerge mkOption types maintainers recursiveUpdate;
|
||||
inherit (lib) any attrValues concatMapStrings concatMapStringsSep flatten literalExample;
|
||||
inherit (lib) filterAttrs mapAttrs mapAttrs' mapAttrsToList nameValuePair optional optionalAttrs optionalString;
|
||||
|
||||
inherit (lib) mkEnableOption mkForce mkIf mkMerge mkOption optionalAttrs recursiveUpdate types maintainers;
|
||||
inherit (lib) concatMapStringsSep flatten mapAttrs mapAttrs' mapAttrsToList nameValuePair concatMapStringSep;
|
||||
|
||||
eachSite = config.services.dokuwiki;
|
||||
|
||||
cfg = migrateOldAttrs config.services.dokuwiki;
|
||||
eachSite = cfg.sites;
|
||||
user = "dokuwiki";
|
||||
group = config.services.nginx.group;
|
||||
webserver = config.services.${cfg.webserver};
|
||||
stateDir = hostName: "/var/lib/dokuwiki/${hostName}/data";
|
||||
|
||||
dokuwikiAclAuthConfig = cfg: pkgs.writeText "acl.auth.php" ''
|
||||
# Migrate config.services.dokuwiki.<hostName> to config.services.dokuwiki.sites.<hostName>
|
||||
oldSites = filterAttrs (o: _: o != "sites" && o != "webserver");
|
||||
migrateOldAttrs = cfg: cfg // { sites = cfg.sites // oldSites cfg; };
|
||||
|
||||
dokuwikiAclAuthConfig = hostName: cfg: pkgs.writeText "acl.auth-${hostName}.php" ''
|
||||
# acl.auth.php
|
||||
# <?php exit()?>
|
||||
#
|
||||
|
@ -19,7 +24,7 @@ let
|
|||
${toString cfg.acl}
|
||||
'';
|
||||
|
||||
dokuwikiLocalConfig = cfg: pkgs.writeText "local.php" ''
|
||||
dokuwikiLocalConfig = hostName: cfg: pkgs.writeText "local-${hostName}.php" ''
|
||||
<?php
|
||||
$conf['savedir'] = '${cfg.stateDir}';
|
||||
$conf['superuser'] = '${toString cfg.superUser}';
|
||||
|
@ -28,11 +33,12 @@ let
|
|||
${toString cfg.extraConfig}
|
||||
'';
|
||||
|
||||
dokuwikiPluginsLocalConfig = cfg: pkgs.writeText "plugins.local.php" ''
|
||||
dokuwikiPluginsLocalConfig = hostName: cfg: pkgs.writeText "plugins.local-${hostName}.php" ''
|
||||
<?php
|
||||
${cfg.pluginsConfig}
|
||||
'';
|
||||
|
||||
|
||||
pkg = hostName: cfg: pkgs.stdenv.mkDerivation rec {
|
||||
pname = "dokuwiki-${hostName}";
|
||||
version = src.version;
|
||||
|
@ -43,13 +49,13 @@ let
|
|||
cp -r * $out/
|
||||
|
||||
# symlink the dokuwiki config
|
||||
ln -s ${dokuwikiLocalConfig cfg} $out/share/dokuwiki/local.php
|
||||
ln -s ${dokuwikiLocalConfig hostName cfg} $out/share/dokuwiki/local.php
|
||||
|
||||
# symlink plugins config
|
||||
ln -s ${dokuwikiPluginsLocalConfig cfg} $out/share/dokuwiki/plugins.local.php
|
||||
ln -s ${dokuwikiPluginsLocalConfig hostName cfg} $out/share/dokuwiki/plugins.local.php
|
||||
|
||||
# symlink acl
|
||||
ln -s ${dokuwikiAclAuthConfig cfg} $out/share/dokuwiki/acl.auth.php
|
||||
ln -s ${dokuwikiAclAuthConfig hostName cfg} $out/share/dokuwiki/acl.auth.php
|
||||
|
||||
# symlink additional plugin(s) and templates(s)
|
||||
${concatMapStringsSep "\n" (template: "ln -s ${template} $out/share/dokuwiki/lib/tpl/${template.name}") cfg.templates}
|
||||
|
@ -57,332 +63,385 @@ let
|
|||
'';
|
||||
};
|
||||
|
||||
siteOpts = { config, lib, name, ...}: {
|
||||
options = {
|
||||
enable = mkEnableOption "DokuWiki web application.";
|
||||
siteOpts = { config, lib, name, ... }:
|
||||
{
|
||||
options = {
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.dokuwiki;
|
||||
description = "Which DokuWiki package to use.";
|
||||
};
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.dokuwiki;
|
||||
description = "Which dokuwiki package to use.";
|
||||
};
|
||||
stateDir = mkOption {
|
||||
type = types.path;
|
||||
default = "/var/lib/dokuwiki/${name}/data";
|
||||
description = "Location of the DokuWiki state directory.";
|
||||
};
|
||||
|
||||
hostName = mkOption {
|
||||
type = types.str;
|
||||
default = "localhost";
|
||||
description = "FQDN for the instance.";
|
||||
};
|
||||
acl = mkOption {
|
||||
type = types.nullOr types.lines;
|
||||
default = null;
|
||||
example = "* @ALL 8";
|
||||
description = ''
|
||||
Access Control Lists: see <link xlink:href="https://www.dokuwiki.org/acl"/>
|
||||
Mutually exclusive with services.dokuwiki.aclFile
|
||||
Set this to a value other than null to take precedence over aclFile option.
|
||||
|
||||
stateDir = mkOption {
|
||||
type = types.path;
|
||||
default = "/var/lib/dokuwiki/${name}/data";
|
||||
description = "Location of the dokuwiki state directory.";
|
||||
};
|
||||
|
||||
acl = mkOption {
|
||||
type = types.nullOr types.lines;
|
||||
default = null;
|
||||
example = "* @ALL 8";
|
||||
description = ''
|
||||
Access Control Lists: see <link xlink:href="https://www.dokuwiki.org/acl"/>
|
||||
Mutually exclusive with services.dokuwiki.aclFile
|
||||
Set this to a value other than null to take precedence over aclFile option.
|
||||
|
||||
Warning: Consider using aclFile instead if you do not
|
||||
want to store the ACL in the world-readable Nix store.
|
||||
'';
|
||||
};
|
||||
|
||||
aclFile = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = if (config.aclUse && config.acl == null) then "/var/lib/dokuwiki/${name}/acl.auth.php" else null;
|
||||
description = ''
|
||||
Location of the dokuwiki acl rules. Mutually exclusive with services.dokuwiki.acl
|
||||
Mutually exclusive with services.dokuwiki.acl which is preferred.
|
||||
Consult documentation <link xlink:href="https://www.dokuwiki.org/acl"/> for further instructions.
|
||||
Example: <link xlink:href="https://github.com/splitbrain/dokuwiki/blob/master/conf/acl.auth.php.dist"/>
|
||||
'';
|
||||
example = "/var/lib/dokuwiki/${name}/acl.auth.php";
|
||||
};
|
||||
|
||||
aclUse = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Necessary for users to log in into the system.
|
||||
Also limits anonymous users. When disabled,
|
||||
everyone is able to create and edit content.
|
||||
'';
|
||||
};
|
||||
|
||||
pluginsConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = ''
|
||||
$plugins['authad'] = 0;
|
||||
$plugins['authldap'] = 0;
|
||||
$plugins['authmysql'] = 0;
|
||||
$plugins['authpgsql'] = 0;
|
||||
'';
|
||||
description = ''
|
||||
List of the dokuwiki (un)loaded plugins.
|
||||
'';
|
||||
};
|
||||
|
||||
superUser = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = "@admin";
|
||||
description = ''
|
||||
You can set either a username, a list of usernames (“admin1,admin2”),
|
||||
or the name of a group by prepending an @ char to the groupname
|
||||
Consult documentation <link xlink:href="https://www.dokuwiki.org/config:superuser"/> for further instructions.
|
||||
'';
|
||||
};
|
||||
|
||||
usersFile = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = if config.aclUse then "/var/lib/dokuwiki/${name}/users.auth.php" else null;
|
||||
description = ''
|
||||
Location of the dokuwiki users file. List of users. Format:
|
||||
login:passwordhash:Real Name:email:groups,comma,separated
|
||||
Create passwordHash easily by using:$ mkpasswd -5 password `pwgen 8 1`
|
||||
Example: <link xlink:href="https://github.com/splitbrain/dokuwiki/blob/master/conf/users.auth.php.dist"/>
|
||||
Warning: Consider using aclFile instead if you do not
|
||||
want to store the ACL in the world-readable Nix store.
|
||||
'';
|
||||
example = "/var/lib/dokuwiki/${name}/users.auth.php";
|
||||
};
|
||||
|
||||
disableActions = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = "";
|
||||
example = "search,register";
|
||||
description = ''
|
||||
Disable individual action modes. Refer to
|
||||
<link xlink:href="https://www.dokuwiki.org/config:action_modes"/>
|
||||
for details on supported values.
|
||||
'';
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = types.nullOr types.lines;
|
||||
default = null;
|
||||
example = ''
|
||||
$conf['title'] = 'My Wiki';
|
||||
$conf['userewrite'] = 1;
|
||||
'';
|
||||
description = ''
|
||||
DokuWiki configuration. Refer to
|
||||
<link xlink:href="https://www.dokuwiki.org/config"/>
|
||||
for details on supported values.
|
||||
'';
|
||||
};
|
||||
|
||||
plugins = mkOption {
|
||||
type = types.listOf types.path;
|
||||
default = [];
|
||||
description = ''
|
||||
List of path(s) to respective plugin(s) which are copied from the 'plugin' directory.
|
||||
<note><para>These plugins need to be packaged before use, see example.</para></note>
|
||||
'';
|
||||
example = ''
|
||||
# Let's package the icalevents plugin
|
||||
plugin-icalevents = pkgs.stdenv.mkDerivation {
|
||||
name = "icalevents";
|
||||
# Download the plugin from the dokuwiki site
|
||||
src = pkgs.fetchurl {
|
||||
url = "https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip";
|
||||
sha256 = "e40ed7dd6bbe7fe3363bbbecb4de481d5e42385b5a0f62f6a6ce6bf3a1f9dfa8";
|
||||
};
|
||||
sourceRoot = ".";
|
||||
# We need unzip to build this package
|
||||
nativeBuildInputs = [ pkgs.unzip ];
|
||||
# Installing simply means copying all files to the output directory
|
||||
installPhase = "mkdir -p $out; cp -R * $out/";
|
||||
};
|
||||
|
||||
# And then pass this theme to the plugin list like this:
|
||||
plugins = [ plugin-icalevents ];
|
||||
'';
|
||||
};
|
||||
|
||||
templates = mkOption {
|
||||
type = types.listOf types.path;
|
||||
default = [];
|
||||
description = ''
|
||||
List of path(s) to respective template(s) which are copied from the 'tpl' directory.
|
||||
<note><para>These templates need to be packaged before use, see example.</para></note>
|
||||
'';
|
||||
example = ''
|
||||
# Let's package the bootstrap3 theme
|
||||
template-bootstrap3 = pkgs.stdenv.mkDerivation {
|
||||
name = "bootstrap3";
|
||||
# Download the theme from the dokuwiki site
|
||||
src = pkgs.fetchurl {
|
||||
url = "https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip";
|
||||
sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6";
|
||||
};
|
||||
# We need unzip to build this package
|
||||
nativeBuildInputs = [ pkgs.unzip ];
|
||||
# Installing simply means copying all files to the output directory
|
||||
installPhase = "mkdir -p $out; cp -R * $out/";
|
||||
};
|
||||
|
||||
# And then pass this theme to the template list like this:
|
||||
templates = [ template-bootstrap3 ];
|
||||
'';
|
||||
};
|
||||
|
||||
poolConfig = mkOption {
|
||||
type = with types; attrsOf (oneOf [ str int bool ]);
|
||||
default = {
|
||||
"pm" = "dynamic";
|
||||
"pm.max_children" = 32;
|
||||
"pm.start_servers" = 2;
|
||||
"pm.min_spare_servers" = 2;
|
||||
"pm.max_spare_servers" = 4;
|
||||
"pm.max_requests" = 500;
|
||||
};
|
||||
description = ''
|
||||
Options for the dokuwiki PHP pool. See the documentation on <literal>php-fpm.conf</literal>
|
||||
for details on configuration directives.
|
||||
'';
|
||||
|
||||
aclFile = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = if (config.aclUse && config.acl == null) then "/var/lib/dokuwiki/${name}/acl.auth.php" else null;
|
||||
description = ''
|
||||
Location of the dokuwiki acl rules. Mutually exclusive with services.dokuwiki.acl
|
||||
Mutually exclusive with services.dokuwiki.acl which is preferred.
|
||||
Consult documentation <link xlink:href="https://www.dokuwiki.org/acl"/> for further instructions.
|
||||
Example: <link xlink:href="https://github.com/splitbrain/dokuwiki/blob/master/conf/acl.auth.php.dist"/>
|
||||
'';
|
||||
example = "/var/lib/dokuwiki/${name}/acl.auth.php";
|
||||
};
|
||||
|
||||
aclUse = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Necessary for users to log in into the system.
|
||||
Also limits anonymous users. When disabled,
|
||||
everyone is able to create and edit content.
|
||||
'';
|
||||
};
|
||||
|
||||
pluginsConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = ''
|
||||
$plugins['authad'] = 0;
|
||||
$plugins['authldap'] = 0;
|
||||
$plugins['authmysql'] = 0;
|
||||
$plugins['authpgsql'] = 0;
|
||||
'';
|
||||
description = ''
|
||||
List of the dokuwiki (un)loaded plugins.
|
||||
'';
|
||||
};
|
||||
|
||||
superUser = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = "@admin";
|
||||
description = ''
|
||||
You can set either a username, a list of usernames (“admin1,admin2”),
|
||||
or the name of a group by prepending an @ char to the groupname
|
||||
Consult documentation <link xlink:href="https://www.dokuwiki.org/config:superuser"/> for further instructions.
|
||||
'';
|
||||
};
|
||||
|
||||
usersFile = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = if config.aclUse then "/var/lib/dokuwiki/${name}/users.auth.php" else null;
|
||||
description = ''
|
||||
Location of the dokuwiki users file. List of users. Format:
|
||||
login:passwordhash:Real Name:email:groups,comma,separated
|
||||
Create passwordHash easily by using:$ mkpasswd -5 password `pwgen 8 1`
|
||||
Example: <link xlink:href="https://github.com/splitbrain/dokuwiki/blob/master/conf/users.auth.php.dist"/>
|
||||
'';
|
||||
example = "/var/lib/dokuwiki/${name}/users.auth.php";
|
||||
};
|
||||
|
||||
disableActions = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = "";
|
||||
example = "search,register";
|
||||
description = ''
|
||||
Disable individual action modes. Refer to
|
||||
<link xlink:href="https://www.dokuwiki.org/config:action_modes"/>
|
||||
for details on supported values.
|
||||
'';
|
||||
};
|
||||
|
||||
plugins = mkOption {
|
||||
type = types.listOf types.path;
|
||||
default = [];
|
||||
description = ''
|
||||
List of path(s) to respective plugin(s) which are copied from the 'plugin' directory.
|
||||
<note><para>These plugins need to be packaged before use, see example.</para></note>
|
||||
'';
|
||||
example = ''
|
||||
# Let's package the icalevents plugin
|
||||
plugin-icalevents = pkgs.stdenv.mkDerivation {
|
||||
name = "icalevents";
|
||||
# Download the plugin from the dokuwiki site
|
||||
src = pkgs.fetchurl {
|
||||
url = "https://github.com/real-or-random/dokuwiki-plugin-icalevents/releases/download/2017-06-16/dokuwiki-plugin-icalevents-2017-06-16.zip";
|
||||
sha256 = "e40ed7dd6bbe7fe3363bbbecb4de481d5e42385b5a0f62f6a6ce6bf3a1f9dfa8";
|
||||
};
|
||||
sourceRoot = ".";
|
||||
# We need unzip to build this package
|
||||
buildInputs = [ pkgs.unzip ];
|
||||
# Installing simply means copying all files to the output directory
|
||||
installPhase = "mkdir -p $out; cp -R * $out/";
|
||||
};
|
||||
|
||||
# And then pass this theme to the plugin list like this:
|
||||
plugins = [ plugin-icalevents ];
|
||||
'';
|
||||
};
|
||||
|
||||
templates = mkOption {
|
||||
type = types.listOf types.path;
|
||||
default = [];
|
||||
description = ''
|
||||
List of path(s) to respective template(s) which are copied from the 'tpl' directory.
|
||||
<note><para>These templates need to be packaged before use, see example.</para></note>
|
||||
'';
|
||||
example = ''
|
||||
# Let's package the bootstrap3 theme
|
||||
template-bootstrap3 = pkgs.stdenv.mkDerivation {
|
||||
name = "bootstrap3";
|
||||
# Download the theme from the dokuwiki site
|
||||
src = pkgs.fetchurl {
|
||||
url = "https://github.com/giterlizzi/dokuwiki-template-bootstrap3/archive/v2019-05-22.zip";
|
||||
sha256 = "4de5ff31d54dd61bbccaf092c9e74c1af3a4c53e07aa59f60457a8f00cfb23a6";
|
||||
};
|
||||
# We need unzip to build this package
|
||||
buildInputs = [ pkgs.unzip ];
|
||||
# Installing simply means copying all files to the output directory
|
||||
installPhase = "mkdir -p $out; cp -R * $out/";
|
||||
};
|
||||
|
||||
# And then pass this theme to the template list like this:
|
||||
templates = [ template-bootstrap3 ];
|
||||
'';
|
||||
};
|
||||
|
||||
poolConfig = mkOption {
|
||||
type = with types; attrsOf (oneOf [ str int bool ]);
|
||||
default = {
|
||||
"pm" = "dynamic";
|
||||
"pm.max_children" = 32;
|
||||
"pm.start_servers" = 2;
|
||||
"pm.min_spare_servers" = 2;
|
||||
"pm.max_spare_servers" = 4;
|
||||
"pm.max_requests" = 500;
|
||||
};
|
||||
description = ''
|
||||
Options for the DokuWiki PHP pool. See the documentation on <literal>php-fpm.conf</literal>
|
||||
for details on configuration directives.
|
||||
'';
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = types.nullOr types.lines;
|
||||
default = null;
|
||||
example = ''
|
||||
$conf['title'] = 'My Wiki';
|
||||
$conf['userewrite'] = 1;
|
||||
'';
|
||||
description = ''
|
||||
DokuWiki configuration. Refer to
|
||||
<link xlink:href="https://www.dokuwiki.org/config"/>
|
||||
for details on supported values.
|
||||
'';
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
nginx = mkOption {
|
||||
type = types.submodule (
|
||||
recursiveUpdate
|
||||
(import ../web-servers/nginx/vhost-options.nix { inherit config lib; }) {}
|
||||
);
|
||||
default = {};
|
||||
example = {
|
||||
serverAliases = [
|
||||
"wiki.\${config.networking.domain}"
|
||||
];
|
||||
# To enable encryption and let let's encrypt take care of certificate
|
||||
forceSSL = true;
|
||||
enableACME = true;
|
||||
};
|
||||
description = ''
|
||||
With this option, you can customize the nginx virtualHost settings.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
in
|
||||
{
|
||||
# interface
|
||||
options = {
|
||||
services.dokuwiki = mkOption {
|
||||
type = types.attrsOf (types.submodule siteOpts);
|
||||
type = types.submodule {
|
||||
# Used to support old interface
|
||||
freeformType = types.attrsOf (types.submodule siteOpts);
|
||||
|
||||
# New interface
|
||||
options.sites = mkOption {
|
||||
type = types.attrsOf (types.submodule siteOpts);
|
||||
default = {};
|
||||
description = "Specification of one or more DokuWiki sites to serve";
|
||||
};
|
||||
|
||||
options.webserver = mkOption {
|
||||
type = types.enum [ "nginx" "caddy" ];
|
||||
default = "nginx";
|
||||
description = ''
|
||||
Whether to use nginx or caddy for virtual host management.
|
||||
|
||||
Further nginx configuration can be done by adapting <literal>services.nginx.virtualHosts.<name></literal>.
|
||||
See <xref linkend="opt-services.nginx.virtualHosts"/> for further information.
|
||||
|
||||
Further apache2 configuration can be done by adapting <literal>services.httpd.virtualHosts.<name></literal>.
|
||||
See <xref linkend="opt-services.httpd.virtualHosts"/> for further information.
|
||||
'';
|
||||
};
|
||||
};
|
||||
default = {};
|
||||
description = "Sepcification of one or more dokuwiki sites to serve.";
|
||||
description = "DokuWiki configuration";
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
# implementation
|
||||
|
||||
config = mkIf (eachSite != {}) {
|
||||
|
||||
warnings = mapAttrsToList (hostName: cfg: mkIf (cfg.superUser == null) "Not setting services.dokuwiki.${hostName} superUser will impair your ability to administer DokuWiki") eachSite;
|
||||
config = mkIf (eachSite != {}) (mkMerge [{
|
||||
|
||||
assertions = flatten (mapAttrsToList (hostName: cfg:
|
||||
[{
|
||||
assertion = cfg.aclUse -> (cfg.acl != null || cfg.aclFile != null);
|
||||
message = "Either services.dokuwiki.${hostName}.acl or services.dokuwiki.${hostName}.aclFile is mandatory if aclUse true";
|
||||
message = "Either services.dokuwiki.sites.${hostName}.acl or services.dokuwiki.sites.${hostName}.aclFile is mandatory if aclUse true";
|
||||
}
|
||||
{
|
||||
assertion = cfg.usersFile != null -> cfg.aclUse != false;
|
||||
message = "services.dokuwiki.${hostName}.aclUse must must be true if usersFile is not null";
|
||||
message = "services.dokuwiki.sites.${hostName}.aclUse must must be true if usersFile is not null";
|
||||
}
|
||||
]) eachSite);
|
||||
|
||||
warnings = mapAttrsToList (hostName: _: ''services.dokuwiki."${hostName}" is deprecated use services.dokuwiki.sites."${hostName}"'') (oldSites cfg);
|
||||
|
||||
services.phpfpm.pools = mapAttrs' (hostName: cfg: (
|
||||
nameValuePair "dokuwiki-${hostName}" {
|
||||
inherit user;
|
||||
inherit group;
|
||||
group = webserver.group;
|
||||
|
||||
phpEnv = {
|
||||
DOKUWIKI_LOCAL_CONFIG = "${dokuwikiLocalConfig cfg}";
|
||||
DOKUWIKI_PLUGINS_LOCAL_CONFIG = "${dokuwikiPluginsLocalConfig cfg}";
|
||||
DOKUWIKI_LOCAL_CONFIG = "${dokuwikiLocalConfig hostName cfg}";
|
||||
DOKUWIKI_PLUGINS_LOCAL_CONFIG = "${dokuwikiPluginsLocalConfig hostName cfg}";
|
||||
} // optionalAttrs (cfg.usersFile != null) {
|
||||
DOKUWIKI_USERS_AUTH_CONFIG = "${cfg.usersFile}";
|
||||
} //optionalAttrs (cfg.aclUse) {
|
||||
DOKUWIKI_ACL_AUTH_CONFIG = if (cfg.acl != null) then "${dokuwikiAclAuthConfig cfg}" else "${toString cfg.aclFile}";
|
||||
DOKUWIKI_ACL_AUTH_CONFIG = if (cfg.acl != null) then "${dokuwikiAclAuthConfig hostName cfg}" else "${toString cfg.aclFile}";
|
||||
};
|
||||
|
||||
settings = {
|
||||
"listen.mode" = "0660";
|
||||
"listen.owner" = user;
|
||||
"listen.group" = group;
|
||||
"listen.owner" = webserver.user;
|
||||
"listen.group" = webserver.group;
|
||||
} // cfg.poolConfig;
|
||||
})) eachSite;
|
||||
}
|
||||
)) eachSite;
|
||||
|
||||
}
|
||||
|
||||
{
|
||||
systemd.tmpfiles.rules = flatten (mapAttrsToList (hostName: cfg: [
|
||||
"d ${stateDir hostName}/attic 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/cache 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/index 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/locks 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/media 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/media_attic 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/media_meta 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/meta 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/pages 0750 ${user} ${webserver.group} - -"
|
||||
"d ${stateDir hostName}/tmp 0750 ${user} ${webserver.group} - -"
|
||||
] ++ lib.optional (cfg.aclFile != null) "C ${cfg.aclFile} 0640 ${user} ${webserver.group} - ${pkg hostName cfg}/share/dokuwiki/conf/acl.auth.php.dist"
|
||||
++ lib.optional (cfg.usersFile != null) "C ${cfg.usersFile} 0640 ${user} ${webserver.group} - ${pkg hostName cfg}/share/dokuwiki/conf/users.auth.php.dist"
|
||||
) eachSite);
|
||||
|
||||
users.users.${user} = {
|
||||
group = webserver.group;
|
||||
isSystemUser = true;
|
||||
};
|
||||
}
|
||||
|
||||
(mkIf (cfg.webserver == "nginx") {
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
virtualHosts = mapAttrs (hostName: cfg: mkMerge [ cfg.nginx {
|
||||
root = mkForce "${pkg hostName cfg}/share/dokuwiki";
|
||||
extraConfig = lib.optionalString (cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME) "fastcgi_param HTTPS on;";
|
||||
virtualHosts = mapAttrs (hostName: cfg: {
|
||||
serverName = mkDefault hostName;
|
||||
root = "${pkg hostName cfg}/share/dokuwiki";
|
||||
|
||||
locations."~ /(conf/|bin/|inc/|install.php)" = {
|
||||
extraConfig = "deny all;";
|
||||
};
|
||||
locations = {
|
||||
"~ /(conf/|bin/|inc/|install.php)" = {
|
||||
extraConfig = "deny all;";
|
||||
};
|
||||
|
||||
locations."~ ^/data/" = {
|
||||
root = "${cfg.stateDir}";
|
||||
extraConfig = "internal;";
|
||||
};
|
||||
"~ ^/data/" = {
|
||||
root = "${stateDir hostName}";
|
||||
extraConfig = "internal;";
|
||||
};
|
||||
|
||||
locations."~ ^/lib.*\\.(js|css|gif|png|ico|jpg|jpeg)$" = {
|
||||
extraConfig = "expires 365d;";
|
||||
};
|
||||
"~ ^/lib.*\.(js|css|gif|png|ico|jpg|jpeg)$" = {
|
||||
extraConfig = "expires 365d;";
|
||||
};
|
||||
|
||||
locations."/" = {
|
||||
priority = 1;
|
||||
index = "doku.php";
|
||||
extraConfig = "try_files $uri $uri/ @dokuwiki;";
|
||||
};
|
||||
"/" = {
|
||||
priority = 1;
|
||||
index = "doku.php";
|
||||
extraConfig = ''try_files $uri $uri/ @dokuwiki;'';
|
||||
};
|
||||
|
||||
locations."@dokuwiki" = {
|
||||
extraConfig = ''
|
||||
"@dokuwiki" = {
|
||||
extraConfig = ''
|
||||
# rewrites "doku.php/" out of the URLs if you set the userwrite setting to .htaccess in dokuwiki config page
|
||||
rewrite ^/_media/(.*) /lib/exe/fetch.php?media=$1 last;
|
||||
rewrite ^/_detail/(.*) /lib/exe/detail.php?media=$1 last;
|
||||
rewrite ^/_export/([^/]+)/(.*) /doku.php?do=export_$1&id=$2 last;
|
||||
rewrite ^/(.*) /doku.php?id=$1&$args last;
|
||||
'';
|
||||
};
|
||||
'';
|
||||
};
|
||||
|
||||
locations."~ \\.php$" = {
|
||||
extraConfig = ''
|
||||
"~ \\.php$" = {
|
||||
extraConfig = ''
|
||||
try_files $uri $uri/ /doku.php;
|
||||
include ${pkgs.nginx}/conf/fastcgi_params;
|
||||
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
|
||||
fastcgi_param REDIRECT_STATUS 200;
|
||||
fastcgi_pass unix:${config.services.phpfpm.pools."dokuwiki-${hostName}".socket};
|
||||
${lib.optionalString (cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME) "fastcgi_param HTTPS on;"}
|
||||
'';
|
||||
'';
|
||||
};
|
||||
|
||||
};
|
||||
}]) eachSite;
|
||||
}) eachSite;
|
||||
};
|
||||
})
|
||||
|
||||
systemd.tmpfiles.rules = flatten (mapAttrsToList (hostName: cfg: [
|
||||
"d ${cfg.stateDir}/attic 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/cache 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/index 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/locks 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/media 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/media_attic 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/media_meta 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/meta 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/pages 0750 ${user} ${group} - -"
|
||||
"d ${cfg.stateDir}/tmp 0750 ${user} ${group} - -"
|
||||
] ++ lib.optional (cfg.aclFile != null) "C ${cfg.aclFile} 0640 ${user} ${group} - ${pkg hostName cfg}/share/dokuwiki/conf/acl.auth.php.dist"
|
||||
++ lib.optional (cfg.usersFile != null) "C ${cfg.usersFile} 0640 ${user} ${group} - ${pkg hostName cfg}/share/dokuwiki/conf/users.auth.php.dist"
|
||||
) eachSite);
|
||||
(mkIf (cfg.webserver == "caddy") {
|
||||
services.caddy = {
|
||||
enable = true;
|
||||
virtualHosts = mapAttrs' (hostName: cfg: (
|
||||
nameValuePair "http://${hostName}" {
|
||||
extraConfig = ''
|
||||
root * ${pkg hostName cfg}/share/dokuwiki
|
||||
file_server
|
||||
|
||||
users.users.${user} = {
|
||||
group = group;
|
||||
isSystemUser = true;
|
||||
encode zstd gzip
|
||||
php_fastcgi unix/${config.services.phpfpm.pools."dokuwiki-${hostName}".socket}
|
||||
|
||||
@restrict_files {
|
||||
path /data/* /conf/* /bin/* /inc/* /vendor/* /install.php
|
||||
}
|
||||
|
||||
respond @restrict_files 404
|
||||
|
||||
@allow_media {
|
||||
path_regexp path ^/_media/(.*)$
|
||||
}
|
||||
rewrite @allow_media /lib/exe/fetch.php?media=/{http.regexp.path.1}
|
||||
|
||||
@allow_detail {
|
||||
path /_detail*
|
||||
}
|
||||
rewrite @allow_detail /lib/exe/detail.php?media={path}
|
||||
|
||||
@allow_export {
|
||||
path /_export*
|
||||
path_regexp export /([^/]+)/(.*)
|
||||
}
|
||||
rewrite @allow_export /doku.php?do=export_{http.regexp.export.1}&id={http.regexp.export.2}
|
||||
|
||||
try_files {path} {path}/ /doku.php?id={path}&{query}
|
||||
'';
|
||||
}
|
||||
)) eachSite;
|
||||
};
|
||||
};
|
||||
})
|
||||
|
||||
meta.maintainers = with maintainers; [ _1000101 ];
|
||||
]);
|
||||
|
||||
meta.maintainers = with maintainers; [
|
||||
_1000101
|
||||
onny
|
||||
];
|
||||
}
|
||||
|
|
|
@@ -9,6 +9,13 @@ let
RAILS_ENV = "production";
NODE_ENV = "production";

# mastodon-web concurrency.
WEB_CONCURRENCY = toString cfg.webProcesses;
MAX_THREADS = toString cfg.webThreads;

# mastodon-streaming concurrency.
STREAMING_CLUSTER_NUM = toString cfg.streamingProcesses;

DB_USER = cfg.database.user;

REDIS_HOST = cfg.redis.host;

@@ -146,18 +153,41 @@ in {
type = lib.types.port;
default = 55000;
};
streamingProcesses = lib.mkOption {
description = ''
Processes used by the mastodon-streaming service.
Defaults to the number of CPU cores minus one.
'';
type = lib.types.nullOr lib.types.int;
default = null;
};

webPort = lib.mkOption {
description = "TCP port used by the mastodon-web service.";
type = lib.types.port;
default = 55001;
};
webProcesses = lib.mkOption {
description = "Processes used by the mastodon-web service.";
type = lib.types.int;
default = 2;
};
webThreads = lib.mkOption {
description = "Threads per process used by the mastodon-web service.";
type = lib.types.int;
default = 5;
};

sidekiqPort = lib.mkOption {
description = "TCP port used by the mastodon-sidekiq service";
description = "TCP port used by the mastodon-sidekiq service.";
type = lib.types.port;
default = 55002;
};
sidekiqThreads = lib.mkOption {
description = "Worker threads used by the mastodon-sidekiq service.";
type = lib.types.int;
default = 25;
};

vapidPublicKeyFile = lib.mkOption {
description = ''

@@ -524,9 +554,10 @@ in {
wantedBy = [ "multi-user.target" ];
environment = env // {
PORT = toString(cfg.sidekiqPort);
DB_POOL = toString cfg.sidekiqThreads;
};
serviceConfig = {
ExecStart = "${cfg.package}/bin/sidekiq -c 25 -r ${cfg.package}";
ExecStart = "${cfg.package}/bin/sidekiq -c ${toString cfg.sidekiqThreads} -r ${cfg.package}";
Restart = "always";
RestartSec = 20;
EnvironmentFile = "/var/lib/mastodon/.secrets_env";
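A hedged sketch of the new tuning options these hunks introduce (not part of the commit; it assumes an otherwise working `services.mastodon` setup and uses illustrative values):

```nix
{
  services.mastodon = {
    enable = true;
    # New knobs from this change, with the environment variables they map to.
    webProcesses = 2;        # WEB_CONCURRENCY
    webThreads = 5;          # MAX_THREADS
    sidekiqThreads = 25;     # passed to `sidekiq -c` and used for DB_POOL
    streamingProcesses = 3;  # STREAMING_CLUSTER_NUM
  };
}
```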
@@ -103,7 +103,11 @@ in

config = mkIf (cfg.instances != {}) {

users.users.zope2.uid = config.ids.uids.zope2;
users.users.zope2 = {
isSystemUser = true;
group = "zope2";
};
users.groups.zope2 = {};

systemd.services =
let
Some files were not shown because too many files have changed in this diff.