Project import generated by Copybara.

GitOrigin-RevId: 48037fd90426e44e4bf03e6479e88a11453b9b66
This commit is contained in:
Default email 2022-05-18 16:49:53 +02:00
parent 97d71c78a1
commit 2ce5db779a
2347 changed files with 43409 additions and 37943 deletions

View file

@ -192,8 +192,8 @@
/nixos/tests/knot.nix @mweinelt
# Dhall
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch @ehmry
/pkgs/development/dhall-modules @Gabriella439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriella439 @Profpatsch @ehmry
# Idris
/pkgs/development/idris-modules @Infinisil

View file

@ -0,0 +1,34 @@
---
name: Build failure
about: Create a report to help us improve
title: ''
labels: '0.kind: build failure'
assignees: ''
---
### Steps To Reproduce
Steps to reproduce the behavior:
1. build *X*
### Build log
```
log here if short otherwise a link to a gist
```
### Additional context
Add any other context about the problem here.
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

View file

@ -80,3 +80,49 @@ tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```
## `nixosTest` {#tester-nixosTest}
Run a NixOS VM network test using this evaluation of Nixpkgs.
NOTE: This function is primarily for external use. NixOS itself uses `make-test-python.nix` directly. Packages defined in Nixpkgs [reuse NixOS tests via `nixosTests`, plural](#ssec-nixos-tests-linking).
It is mostly equivalent to the function `import ./make-test-python.nix` from the
[NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests),
except that the current application of Nixpkgs (`pkgs`) will be used, instead of
letting NixOS invoke Nixpkgs anew.
If a test machine needs to set NixOS options under `nixpkgs`, it must set only the
`nixpkgs.pkgs` option.
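For instance, a minimal sketch of a node that respects this constraint (the attribute shown is the only `nixpkgs.*` option a node should set):
```nix
machine = { pkgs, ... }: {
  # Reuse the already-evaluated Nixpkgs instead of letting NixOS
  # evaluate its own copy; do not set other nixpkgs.* options here.
  nixpkgs.pkgs = pkgs;
};
```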
### Parameter
A [NixOS VM test network](https://nixos.org/nixos/manual/index.html#sec-nixos-tests), or path to it. Example:
```nix
{
name = "my-test";
nodes = {
machine1 = { lib, pkgs, nodes, ... }: {
environment.systemPackages = [ pkgs.hello ];
services.foo.enable = true;
};
# machine2 = ...;
};
testScript = ''
start_all()
machine1.wait_for_unit("foo.service")
machine1.succeed("hello | foo-send")
'';
}
```
### Result
A derivation that runs the VM test.
Notable attributes:
* `nodes`: the evaluated NixOS configurations. Useful for debugging and exploring the configuration.
* `driverInteractive`: a script that launches an interactive Python session in the context of the `testScript`.
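A minimal sketch of wiring this together, assuming the example network above is saved as `./test.nix` and the tester is reachable as `pkgs.testers.nixosTest` (the attribute names used here are illustrative):
```nix
{ pkgs ? import <nixpkgs> { } }:
rec {
  # Building this attribute runs the VM test.
  myTest = pkgs.testers.nixosTest ./test.nix;

  # Building this attribute yields a script for exploring the nodes
  # interactively in the context of the testScript.
  myTestDriver = myTest.driverInteractive;
}
```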

View file

@ -236,7 +236,7 @@ The `master` branch is the main development branch. It should only see non-break
### Staging branch {#submitting-changes-staging-branch}
The `staging` branch is a development branch where mass-rebuilds go. It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
The `staging` branch is a development branch where mass rebuilds go. Mass rebuilds are commits that cause rebuilds for many packages, roughly more than 500 (or around 1000 if the affected packages are 'light'). It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
### Staging-next branch {#submitting-changes-staging-next-branch}

View file

@ -8,19 +8,16 @@ The various tools available will be listed in the [tools-overview](#javascript-t
## Getting unstuck / finding code examples
If you find you are lacking inspiration for packing javascript applications, the links below might prove useful.
Searching online for prior art can be helpful if you are running into solved problems.
If you find you are lacking inspiration for packaging JavaScript applications, the links below might prove useful. Searching online for prior art can be helpful if you are running into solved problems.
### Github
- Searching Nix files for `mkYarnPackage`: <https://github.com/search?q=mkYarnPackage+language%3ANix&type=code>
- Searching just `flake.nix` files for `mkYarnPackage`: <https://github.com/search?q=mkYarnPackage+filename%3Aflake.nix&type=code>
### Gitlab
- Searching Nix files for `mkYarnPackage`: <https://gitlab.com/search?scope=blobs&search=mkYarnPackage+extension%3Anix>
- Searching just `flake.nix` files for `mkYarnPackage`: <https://gitlab.com/search?scope=blobs&search=mkYarnPackage+filename%3Aflake.nix>
## Tools overview {#javascript-tools-overview}
@ -35,109 +32,107 @@ It is often not documented which node version is used upstream, but if it is, tr
This can be a problem if upstream is using the latest and greatest and you are trying to use an earlier version of node. Some cryptic errors regarding V8 may appear.
An exception to this:
### Try to respect the package manager originally used by upstream (and use the upstream lock file) {#javascript-upstream-package-manager}
A lock file (package-lock.json, yarn.lock, ...) is supposed to make installations of node_modules reproducible for each tool.
Package manager guidelines recommend committing those lock files to the repository. If a particular lock file is present, it is a strong indication of which package manager is used upstream.
It's better to try to use a nix tool that understand the lock file. Using a different tool might give you hard to understand error because different packages have been installed. An example of problems that could arise can be found [here](https://github.com/NixOS/nixpkgs/pull/126629). Upstream uses npm, but this is an attempt to package it with yarn2nix (that uses yarn.lock)
It's better to use a Nix tool that understands the lock file. Using a different tool might give you hard-to-understand errors because different packages end up being installed. An example of the problems that can arise can be found [here](https://github.com/NixOS/nixpkgs/pull/126629): upstream uses NPM, but the PR attempts to package it with `yarn2nix` (which uses yarn.lock).
Using a different tool forces you to commit a lock file to the repository. Those files are fairly large, so when packaging for nixpkgs this approach does not scale well.
Exceptions to this rule are:
- when you encounter one of the bugs from a nix tool. In each of the tool specific instructions, known problems will be detailed. If you have a problem with a particular tool, then it's best to try another tool, even if this means you will have to recreate a lock file and commit it to nixpkgs. In general yarn2nix has less known problems and so a simple search in nixpkgs will reveal many yarn.lock files committed
- Some lock files contain particular version of a package that has been pulled off npm for some reason. In that case, you can recreate upstream lock (by removing the original and `npm install`, `yarn`, ...) and commit this to nixpkgs.
- The only tool that supports workspaces (a feature of npm that helps manage sub-directories with different package.json from a single top level package.json) is yarn2nix. If upstream has workspaces you should try yarn2nix.
- When you encounter one of the bugs in a Nix tool. Known problems are detailed in each of the tool-specific instructions. If you have a problem with a particular tool, it's best to try another tool, even if this means you will have to recreate a lock file and commit it to nixpkgs. In general `yarn2nix` has fewer known problems, and a simple search in nixpkgs will reveal many committed yarn.lock files.
- Some lock files contain a particular version of a package that has been pulled off NPM for some reason. In that case, you can recreate the upstream lock file (by removing the original and running `npm install`, `yarn`, ...) and commit this to nixpkgs.
- The only tool that supports workspaces (a feature of NPM that helps manage sub-directories with different package.json files from a single top-level package.json) is `yarn2nix`. If upstream has workspaces, you should try `yarn2nix`.
### Try to use upstream package.json {#javascript-upstream-package-json}
Exceptions to this rule are
Exceptions to this rule are:
- Sometimes the upstream repo assumes some dependencies be installed globally. In that case you can add them manually to the upstream package.json (`yarn add xxx` or `npm install xxx`, ...). Dependencies that are installed locally can be executed with `npx` for cli tools. (e.g. `npx postcss ...`, this is how you can call those dependencies in the phases).
- Sometimes there is a version conflict between some dependency requirements. In that case you can fix a version (by removing the `^`).
- Sometimes the script defined in the package.json does not work as is. Some scripts for example use cli tools that might not be available, or cd in directory with a different package.json (for workspaces notably). In that case, it's perfectly fine to look at what the particular script is doing and break this down in the phases. In the build script you can see `build:*` calling in turns several other build scripts like `build:ui` or `build:server`. If one of those fails, you can try to separate those into:
- Sometimes the upstream repo assumes some dependencies are installed globally. In that case you can add them manually to the upstream package.json (`yarn add xxx` or `npm install xxx`, ...). Dependencies that are installed locally can be executed with `npx` for CLI tools (e.g. `npx postcss ...`; this is how you can call those dependencies in the phases).
- Sometimes there is a version conflict between some dependency requirements. In that case you can fix a version by removing the `^`.
- Sometimes the script defined in the package.json does not work as is. Some scripts, for example, use CLI tools that might not be available, or cd into a directory with a different package.json (notably for workspaces). In that case, it's perfectly fine to look at what the particular script is doing and break it down in the phases. For example, a build script may call `build:*`, which in turn calls several other build scripts like `build:ui` or `build:server`. If one of those fails, you can try to separate them as follows:
```Shell
yarn build:ui
yarn build:server
# OR
npm run build:ui
npm run build:server
```
```sh
yarn build:ui
yarn build:server
# OR
npm run build:ui
npm run build:server
```
when you need to override a package.json. It's nice to use the one from the upstream src and do some explicit override. Here is an example.
When you need to override a package.json, it's nice to use the one from the upstream source and apply some explicit overrides. Here is an example:
```nix
patchedPackageJSON = final.runCommand "package.json" { } ''
  ${jq}/bin/jq '.version = "0.4.0" |
    .devDependencies."@jsdoc/cli" = "^0.2.5"
    ' ${sonar-src}/package.json > $out
'';
```
you will still need to commit the modified version of the lock files, but at least the overrides are explicit for everyone to see.
You will still need to commit the modified version of the lock files, but at least the overrides are explicit for everyone to see.
### Using node_modules directly {#javascript-using-node_modules}
each tool has an abstraction to just build the node_modules (dependencies) directory. you can always use the stdenv.mkDerivation with the node_modules to build the package (symlink the node_modules directory and then use the package build command). the node_modules abstraction can be also used to build some web framework frontends. For an example of this see how [plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix) is built. mkYarnModules to make the derivation containing node_modules. Then when building the frontend you can just symlink the node_modules directory
Each tool has an abstraction to build just the node_modules (dependencies) directory. You can always use `stdenv.mkDerivation` with that node_modules to build the package (symlink the node_modules directory and then run the package's build command). The node_modules abstraction can also be used to build some web framework frontends. For an example of this, see how [plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix) is built: `mkYarnModules` makes the derivation containing node_modules, and when building the frontend you can then just symlink that node_modules directory, as sketched below.
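A hedged sketch of that pattern, loosely modelled on the plausible example; the package name, file paths and the `build`/`dist` conventions are assumptions for illustration:
```nix
{ stdenv, mkYarnModules, nodejs }:

let
  # Build only the dependencies (node_modules) from the lock file.
  modules = mkYarnModules {
    pname = "my-frontend-deps";
    version = "1.0.0";
    packageJSON = ./package.json;
    yarnLock = ./yarn.lock;
  };
in
stdenv.mkDerivation {
  pname = "my-frontend";
  version = "1.0.0";
  src = ./.;
  nativeBuildInputs = [ nodejs ];

  buildPhase = ''
    # Symlink the pre-built dependencies, then run the package's own build command.
    ln -s ${modules}/node_modules node_modules
    npm run build
  '';

  installPhase = ''
    cp -r dist $out
  '';
}
```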
## Javascript packages inside nixpkgs {#javascript-packages-nixpkgs}
The `pkgs/development/node-packages` folder contains a generated collection of
[NPM packages](https://npmjs.com/) that can be installed with the Nix package
manager.
The [pkgs/development/node-packages](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages) folder contains a generated collection of [NPM packages](https://npmjs.com/) that can be installed with the Nix package manager.
As a rule of thumb, the package set should only provide _end user_ software
packages, such as command-line utilities. Libraries should only be added to the
package set if there is a non-NPM package that requires it.
As a rule of thumb, the package set should only provide _end user_ software packages, such as command-line utilities. Libraries should only be added to the package set if there is a non-NPM package that requires it.
When it is desired to use NPM libraries in a development project, use the
`node2nix` generator directly on the `package.json` configuration file of the
project.
When it is desired to use NPM libraries in a development project, use the `node2nix` generator directly on the `package.json` configuration file of the project.
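As a hedged sketch (the attribute names follow node2nix's README; the relative path is an assumption), the files node2nix generates next to the project's `package.json` can be consumed like this:
```nix
let
  pkgs = import <nixpkgs> { };
  # node2nix writes default.nix, node-env.nix and node-packages.nix
  # alongside the project's package.json.
  project = pkgs.callPackage ./default.nix { };
in
project.shell   # or project.package to build the project itself
```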
The package set provides support for the official stable Node.js versions.
The latest stable LTS release in `nodePackages`, as well as the latest stable
Current release in `nodePackages_latest`.
The package set provides support for the official stable Node.js versions: the latest stable LTS release is in `nodePackages`, and the latest stable Current release is in `nodePackages_latest`.
If your package uses native addons, you need to examine what kind of native
build system it uses. Here are some examples:
If your package uses native addons, you need to examine what kind of native build system it uses. Here are some examples:
- `node-gyp`
- `node-gyp-builder`
- `node-pre-gyp`
After you have identified the correct system, you need to override your package
expression while adding in build system as a build input. For example, `dat`
requires `node-gyp-build`, so [we override](https://github.com/NixOS/nixpkgs/blob/32f5e5da4a1b3f0595527f5195ac3a91451e9b56/pkgs/development/node-packages/default.nix#L37-L40) its expression in [`default.nix`](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/default.nix):
After you have identified the correct system, you need to override your package expression while adding the build system as a build input. For example, `dat` requires `node-gyp-build`, so we override its expression in [pkgs/development/node-packages/overrides.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/overrides.nix):
```nix
dat = super.dat.override {
buildInputs = [ self.node-gyp-build pkgs.libtool pkgs.autoconf pkgs.automake ];
meta.broken = since "12";
};
dat = prev.dat.override (oldAttrs: {
buildInputs = [ final.node-gyp-build pkgs.libtool pkgs.autoconf pkgs.automake ];
meta = oldAttrs.meta // { broken = since "12"; };
});
```
### Adding and Updating Javascript packages in nixpkgs
To add a package from NPM to nixpkgs:
1. Modify `pkgs/development/node-packages/node-packages.json` to add, update
or remove package entries to have it included in `nodePackages` and
`nodePackages_latest`.
2. Run the script: `./pkgs/development/node-packages/generate.sh`.
3. Build your new package to test your changes:
`cd /path/to/nixpkgs && nix-build -A nodePackages.<new-or-updated-package>`.
To build against the latest stable Current Node.js version (e.g. 14.x):
`nix-build -A nodePackages_latest.<new-or-updated-package>`
4. Add and commit all modified and generated files.
1. Modify [pkgs/development/node-packages/node-packages.json](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/node-packages.json) to add, update or remove package entries to have it included in `nodePackages` and `nodePackages_latest`.
2. Run the script:
For more information about the generation process, consult the
[README.md](https://github.com/svanderburg/node2nix) file of the `node2nix`
tool.
```sh
./pkgs/development/node-packages/generate.sh
```
3. Build your new package to test your changes:
```sh
nix-build -A nodePackages.<new-or-updated-package>
```
To build against the latest stable Current Node.js version (e.g. 18.x):
```sh
nix-build -A nodePackages_latest.<new-or-updated-package>
```
If the package doesn't build, you may need to add an override as explained above.
4. If the package's name doesn't match any of the executables it provides, add an entry in [pkgs/development/node-packages/main-programs.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/node-packages/main-programs.nix). This will be the case for all scoped packages, e.g., `@angular/cli`.
5. Add and commit all modified and generated files.
For more information about the generation process, consult the [README.md](https://github.com/svanderburg/node2nix) file of the `node2nix` tool.
To update NPM packages in nixpkgs, run the same `generate.sh` script:
@ -148,10 +143,11 @@ To update NPM packages in nixpkgs, run the same `generate.sh` script:
#### Git protocol error
Some packages may have Git dependencies from GitHub specified with `git://`.
GitHub has
[disabled unecrypted Git connections](https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git),
so you may see the following error when running the generate script:
`The unauthenticated git protocol on port 9418 is no longer supported`.
GitHub has [disabled unencrypted Git connections](https://github.blog/2021-09-01-improving-git-protocol-security-github/#no-more-unauthenticated-git), so you may see the following error when running the generate script:
```
The unauthenticated git protocol on port 9418 is no longer supported
```
Use the following Git configuration to resolve the issue:
@ -165,34 +161,33 @@ git config --global url."https://github.com/".insteadOf git://github.com/
#### Preparation {#javascript-node2nix-preparation}
you will need to generate a nix expression for the dependencies
You will need to generate a Nix expression for the dependencies. Don't forget the `-l package-lock.json` if there is a lock file. Most probably you will need the `--development` flag to include the `devDependencies`.
- don't forget the `-l package-lock.json` if there is a lock file
- Most probably you will need the `--development` to include the `devDependencies`
So the command will most likely be:
```sh
node2nix --development -l package-lock.json
```
so the command will most likely be
`node2nix --development -l package-lock.json`
[link to the doc in the repo](https://github.com/svanderburg/node2nix)
See `node2nix` [docs](https://github.com/svanderburg/node2nix) for more info.
#### Pitfalls {#javascript-node2nix-pitfalls}
- if upstream package.json does not have a "version" attribute, node2nix will crash. You will need to add it like shown in [the package.json section](#javascript-upstream-package-json)
- node2nix has some [bugs](https://github.com/svanderburg/node2nix/issues/238). related to working with lock files from npm distributed with nodejs-16_x
- node2nix does not like missing packages from npm. If you see something like `Cannot resolve version: vue-loader-v16@undefined` then you might want to try another tool. The package might have been pulled off of npm.
- If the upstream package.json does not have a "version" attribute, `node2nix` will crash. You will need to add it as shown in [the package.json section](#javascript-upstream-package-json).
- `node2nix` has some [bugs](https://github.com/svanderburg/node2nix/issues/238) related to working with lock files from NPM distributed with `nodejs-16_x`.
- `node2nix` does not like missing packages from NPM. If you see something like `Cannot resolve version: vue-loader-v16@undefined` then you might want to try another tool. The package might have been pulled off of NPM.
### yarn2nix {#javascript-yarn2nix}
#### Preparation {#javascript-yarn2nix-preparation}
you will need at least a yarn.lock and yarn.nix file
You will need at least a yarn.lock and yarn.nix file.
- generate a yarn.lock in upstream if it is not already there
- `yarn2nix > yarn.nix` will generate the dependencies in a nix format
- Generate a yarn.lock in upstream if it is not already there.
- `yarn2nix > yarn.nix` will generate the dependencies in a Nix format.
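Once both files exist, they are typically passed to `mkYarnPackage` (described next); a minimal sketch with illustrative names and paths:
```nix
mkYarnPackage {
  pname = "my-app";
  version = "1.0.0";
  src = ./.;
  packageJSON = ./package.json;
  yarnLock = ./yarn.lock;
  yarnNix = ./yarn.nix;
}
```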
#### mkYarnPackage {#javascript-yarn2nix-mkYarnPackage}
this will by default try to generate a binary. For package only generating static assets (Svelte, Vue, React...), you will need to explicitly override the build step with your instructions. It's important to use the `--offline` flag. For example if you script is `"build": "something"` in package.json use
This will by default try to generate a binary. For packages that only generate static assets (Svelte, Vue, React, ...), you will need to explicitly override the build step with your instructions. It's important to use the `--offline` flag. For example, if your script in package.json is `"build": "something"`, use:
```nix
buildPhase = ''
@ -200,14 +195,13 @@ buildPhase = ''
'';
```
The dist phase is also trying to build a binary, the only way to override it is with
The dist phase also tries to build a binary; the only way to override it is with:
```nix
distPhase = "true";
```
the configure phase can sometimes fail because it tries to be too clever.
One common override is
The configure phase can sometimes fail because it tries to be too clever. One common override is:
```nix
configurePhase = "ln -s $node_modules node_modules";
@ -215,13 +209,17 @@ configurePhase = "ln -s $node_modules node_modules";
#### mkYarnModules {#javascript-yarn2nix-mkYarnModules}
this will generate a derivation including the node_modules. If you have to build a derivation for an integrated web framework (rails, phoenix..), this is probably the easiest way. [Plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix#L39) offers a good example of how to do this.
This will generate a derivation including the node_modules. If you have to build a derivation for an integrated web framework (Rails, Phoenix, ...), this is probably the easiest way. [Plausible](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix#L39) offers a good example of how to do this.
#### Overriding dependency behavior
In the `mkYarnPackage` record, the property `pkgConfig` can be used to override packages when you encounter problems building them.
For instance, say your package is throwing errors when trying to invoke node-sass: `ENOENT: no such file or directory, scandir '/build/source/node_modules/node-sass/vendor'`
For instance, say your package is throwing errors when trying to invoke node-sass:
```
ENOENT: no such file or directory, scandir '/build/source/node_modules/node-sass/vendor'
```
To fix this we will specify different versions of build inputs to use, as well as some post install steps to get the software built the way we want:
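As a hedged illustration of the shape such an override takes (the build inputs and post-install commands below are assumptions for illustration, not taken from a real package):
```nix
mkYarnPackage rec {
  # ...
  pkgConfig = {
    node-sass = {
      # Extra inputs needed to compile the native parts of node-sass.
      buildInputs = [ libsass pkg-config python ];
      postInstall = ''
        LIBSASS_EXT=auto yarn --offline run build
        rm build/config.gypi
      '';
    };
  };
}
```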
@ -241,9 +239,8 @@ mkYarnPackage rec {
#### Pitfalls {#javascript-yarn2nix-pitfalls}
- if version is missing from upstream package.json, yarn will silently install nothing. In that case, you will need to override package.json as shown in the [package.json section](#javascript-upstream-package-json)
- having trouble with node-gyp? Try adding these lines to the `yarnPreBuild` steps:
- If the version is missing from the upstream package.json, yarn will silently install nothing. In that case, you will need to override package.json as shown in the [package.json section](#javascript-upstream-package-json).
- Having trouble with `node-gyp`? Try adding these lines to the `yarnPreBuild` steps:
```nix
yarnPreBuild = ''
@ -259,20 +256,20 @@ mkYarnPackage rec {
## Outside of nixpkgs {#javascript-outside-nixpkgs}
There are some other options available that can't be used inside nixpkgs. Those other options are written in nix. Importing them in nixpkgs will require moving the source code into nixpkgs. Using [Import From Derivation](https://nixos.wiki/wiki/Import_From_Derivation) is not allowed in hydra at present. If you are packaging something outside nixpkgs, those can be considered
There are some other options available that can't be used inside nixpkgs. Those other options are written in Nix. Importing them into nixpkgs would require moving their source code into nixpkgs. Using [Import From Derivation](https://nixos.wiki/wiki/Import_From_Derivation) is not allowed in Hydra at present. If you are packaging something outside nixpkgs, these can be considered:
### npmlock2nix {#javascript-npmlock2nix}
[npmlock2nix](https://github.com/nix-community/npmlock2nix) aims at building node_modules without code generation. It hasn't reached v1 yet, the api might be subject to change.
[npmlock2nix](https://github.com/nix-community/npmlock2nix) aims at building node_modules without code generation. It hasn't reached v1 yet, the API might be subject to change.
#### Pitfalls {#javascript-npmlock2nix-pitfalls}
- there are some [problems with npm v7](https://github.com/tweag/npmlock2nix/issues/45).
There are some [problems with npm v7](https://github.com/tweag/npmlock2nix/issues/45).
### nix-npm-buildpackage {#javascript-nix-npm-buildpackage}
[nix-npm-buildpackage](https://github.com/serokell/nix-npm-buildpackage) aims at building node_modules without code generation. It hasn't reached v1 yet, the api might change. It supports both package-lock.json and yarn.lock.
[nix-npm-buildpackage](https://github.com/serokell/nix-npm-buildpackage) aims at building node_modules without code generation. It hasn't reached v1 yet, the API might change. It supports both package-lock.json and yarn.lock.
#### Pitfalls {#javascript-nix-npm-buildpackage-pitfalls}
- there are some [problems with npm v7](https://github.com/serokell/nix-npm-buildpackage/issues/33).
There are some [problems with npm v7](https://github.com/serokell/nix-npm-buildpackage/issues/33).

View file

@ -288,7 +288,7 @@ self: super: {
ps: with ps; [
pyflakes
pytest
python-language-server
black
]
))

View file

@ -11,9 +11,6 @@ let
callLibs = file: import file { lib = self; };
in {
# interacting with flakes
flakes = callLibs ./flakes.nix;
# often used, or depending on very little
trivial = callLibs ./trivial.nix;
fixedPoints = callLibs ./fixed-points.nix;
@ -62,7 +59,6 @@ let
# linux kernel configuration
kernel = callLibs ./kernel.nix;
inherit (self.flakes) callLocklessFlake;
inherit (builtins) add addErrorContext attrNames concatLists
deepSeq elem elemAt filter genericClosure genList getAttr
hasAttr head isAttrs isBool isInt isList isString length

View file

@ -1,22 +0,0 @@
{ lib }:
rec {
/* imports a flake.nix without acknowledging its lock file, useful for
referencing subflakes from a parent flake. The second argument allows
specifying the inputs of this flake.
Example:
callLocklessFlake {
path = ./directoryContainingFlake;
inputs = { inherit nixpkgs; };
}
*/
callLocklessFlake = { path, inputs ? { } }:
let
self = { outPath = path; } //
((import (path + "/flake.nix")).outputs (inputs // { self = self; }));
in
self;
}

View file

@ -285,6 +285,11 @@ in mkLicense lset) ({
fullName = "DOC License";
};
drl10 = {
spdxId = "DRL-1.0";
fullName = "Detection Rule License 1.0";
};
eapl = {
fullName = "EPSON AVASYS PUBLIC LICENSE";
url = "https://avasys.jp/hp/menu000000700/hpg000000603.htm";

View file

@ -339,9 +339,10 @@ rec {
/* Translate a Nix value into a shell variable declaration, with proper escaping.
Supported value types are strings (mapped to regular variables), lists of strings
(mapped to Bash-style arrays) and attribute sets of strings (mapped to Bash-style
associative arrays). Note that "strings" include string-coercible values like paths.
The value can be a string (mapped to a regular variable), a list of strings
(mapped to a Bash-style array) or an attribute set of strings (mapped to a
Bash-style associative array). Note that "string" includes string-coercible
values like paths or derivations.
Strings are translated into POSIX sh-compatible code; lists and attribute sets
assume a shell that understands Bash syntax (e.g. Bash or ZSH).
@ -356,7 +357,7 @@ rec {
*/
toShellVar = name: value:
lib.throwIfNot (isValidPosixName name) "toShellVar: ${name} is not a valid shell variable name" (
if isAttrs value then
if isAttrs value && ! isCoercibleToString value then
"declare -A ${name}=(${
concatStringsSep " " (lib.mapAttrsToList (n: v:
"[${escapeShellArg n}]=${escapeShellArg v}"

View file

@ -1,8 +0,0 @@
{
outputs = { self, subflake, callLocklessFlake }: rec {
x = (callLocklessFlake {
path = subflake;
inputs = {};
}).subflakeOutput;
};
}

View file

@ -1,5 +0,0 @@
{
outputs = { self }: {
subflakeOutput = 1;
};
}

View file

@ -22,16 +22,6 @@ in
runTests {
# FLAKES
testCallLocklessFlake = {
expr = callLocklessFlake {
path = ./flakes/subflakeTest;
inputs = { subflake = ./flakes/subflakeTest/subflake; inherit callLocklessFlake; };
};
expected = { x = 1; outPath = ./flakes/subflakeTest; };
};
# TRIVIAL
testId = {
@ -269,6 +259,15 @@ runTests {
strings
possibly newlines
'';
drv = {
outPath = "/drv";
foo = "ignored attribute";
};
path = /path;
stringable = {
__toString = _: "hello toString";
bar = "ignored attribute";
};
}}
'';
expected = ''
@ -277,6 +276,9 @@ runTests {
declare -A assoc=(['with some']='strings
possibly newlines
')
drv='/drv'
path='/path'
stringable='hello toString'
'';
};

View file

@ -290,6 +290,8 @@ checkConfigOutput '^"a b"$' config.result ./functionTo/merging-list.nix
checkConfigError 'A definition for option .fun.\[function body\]. is not of type .string.. Definition values:\n\s*- In .*wrong-type.nix' config.result ./functionTo/wrong-type.nix
checkConfigOutput '^"b a"$' config.result ./functionTo/list-order.nix
checkConfigOutput '^"a c"$' config.result ./functionTo/merging-attrs.nix
checkConfigOutput '^"a bee"$' config.result ./functionTo/submodule-options.nix
checkConfigOutput '^"fun.\[function body\].a fun.\[function body\].b"$' config.optionsResult ./functionTo/submodule-options.nix
# moduleType
checkConfigOutput '^"a b"$' config.resultFoo ./declare-variants.nix ./define-variant.nix
@ -313,7 +315,7 @@ checkConfigOutput "bar" config.priorities ./raw.nix
## Option collision
checkConfigError \
'The option .set. in module .*/declare-set.nix. would be a parent of the following options, but its type .attribute set of signed integers. does not support nested options.\n\s*- option[(]s[)] with prefix .set.enable. in module .*/declare-enable-nested.nix.' \
'The option .set. in module .*/declare-set.nix. would be a parent of the following options, but its type .attribute set of signed integer. does not support nested options.\n\s*- option[(]s[)] with prefix .set.enable. in module .*/declare-enable-nested.nix.' \
config.set \
./declare-set.nix ./declare-enable-nested.nix

View file

@ -0,0 +1,61 @@
{ lib, config, options, ... }:
let
inherit (lib) types;
in
{
imports = [
# fun.<function-body>.a
({ ... }: {
options = {
fun = lib.mkOption {
type = types.functionTo (types.submodule {
options.a = lib.mkOption { default = "a"; };
});
};
};
})
# fun.<function-body>.b
({ ... }: {
options = {
fun = lib.mkOption {
type = types.functionTo (types.submodule {
options.b = lib.mkOption { default = "b"; };
});
};
};
})
];
options = {
result = lib.mkOption
{
type = types.str;
default = lib.concatStringsSep " " (lib.attrValues (config.fun (throw "shouldn't use input param")));
};
optionsResult = lib.mkOption
{
type = types.str;
default = lib.concatStringsSep " "
(lib.concatLists
(lib.mapAttrsToList
(k: v:
if k == "_module"
then [ ]
else [ (lib.showOption v.loc) ]
)
(
(options.fun.type.getSubOptions [ "fun" ])
)
)
);
};
};
config.fun = lib.mkMerge
[
(input: { b = "bee"; })
];
}

View file

@ -397,7 +397,7 @@ rec {
listOf = elemType: mkOptionType rec {
name = "listOf";
description = "list of ${elemType.description}s";
description = "list of ${elemType.description}";
check = isList;
merge = loc: defs:
map (x: x.value) (filter (x: x ? value) (concatLists (imap1 (n: def:
@ -426,7 +426,7 @@ rec {
attrsOf = elemType: mkOptionType rec {
name = "attrsOf";
description = "attribute set of ${elemType.description}s";
description = "attribute set of ${elemType.description}";
check = isAttrs;
merge = loc: defs:
mapAttrs (n: v: v.value) (filterAttrs (n: v: v ? value) (zipAttrsWith (name: defs:
@ -449,7 +449,7 @@ rec {
# error that it's not defined. Use only if conditional definitions don't make sense.
lazyAttrsOf = elemType: mkOptionType rec {
name = "lazyAttrsOf";
description = "lazy attribute set of ${elemType.description}s";
description = "lazy attribute set of ${elemType.description}";
check = isAttrs;
merge = loc: defs:
zipAttrsWith (name: defs:
@ -526,9 +526,11 @@ rec {
check = isFunction;
merge = loc: defs:
fnArgs: (mergeDefinitions (loc ++ [ "[function body]" ]) elemType (map (fn: { inherit (fn) file; value = fn.value fnArgs; }) defs)).mergedValue;
getSubOptions = elemType.getSubOptions;
getSubOptions = prefix: elemType.getSubOptions (prefix ++ [ "[function body]" ]);
getSubModules = elemType.getSubModules;
substSubModules = m: functionTo (elemType.substSubModules m);
functor = (defaultFunctor "functionTo") // { wrapped = elemType; };
nestedTypes.elemType = elemType;
};
# A submodule (like typed attribute set). See NixOS manual.

View file

@ -513,15 +513,26 @@
github = "alexnortung";
githubId = 1552267;
};
alexshpilkin = {
email = "ashpilkin@gmail.com";
github = "alexshpilkin";
githubId = 1010468;
keys = [{
longkeyid = "rsa4096/0x73E9AA114B3A894B";
fingerprint = "B595 D74D 6615 C010 469F 5A13 73E9 AA11 4B3A 894B";
}];
matrix = "@alexshpilkin:matrix.org";
name = "Alexander Shpilkin";
};
alexvorobiev = {
email = "alexander.vorobiev@gmail.com";
github = "alexvorobiev";
githubId = 782180;
name = "Alex Vorobiev";
};
alex-eyre = {
alexeyre = {
email = "A.Eyre@sms.ed.ac.uk";
github = "alex-eyre";
github = "alexeyre";
githubId = 38869148;
name = "Alex Eyre";
};
@ -811,6 +822,16 @@
githubId = 1771266;
name = "Vo Anh Duy";
};
Anillc = {
name = "Anillc";
email = "i@anillc.cn";
github = "Anillc";
githubId = 23411248;
keys = [{
longkeyid = "ed25519/0x0BE8A88F47B2145C";
fingerprint = "6141 1E4F FE10 CE7B 2E14 CD76 0BE8 A88F 47B2 145C";
}];
};
anirrudh = {
email = "anik597@gmail.com";
github = "anirrudh";
@ -972,6 +993,12 @@
githubId = 1118815;
name = "Vikram Narayanan";
};
armeenm = {
email = "mahdianarmeen@gmail.com";
github = "armeenm";
githubId = 29145250;
name = "Armeen Mahdian";
};
armijnhemel = {
email = "armijn@tjaldur.nl";
github = "armijnhemel";
@ -1517,6 +1544,12 @@
githubId = 410028;
name = "Tobias Bergkvist";
};
berryp = {
email = "berryphillips@gmail.com";
github = "berryp";
githubId = 19911;
name = "Berry Phillips";
};
betaboon = {
email = "betaboon@0x80.ninja";
github = "betaboon";
@ -1571,6 +1604,12 @@
githubId = 185443;
name = "Alexey Lebedeff";
};
binsky = {
email = "timo@binsky.org";
github = "binsky08";
githubId = 30630233;
name = "Timo Triebensky";
};
bjg = {
email = "bjg@gnu.org";
name = "Brian Gough";
@ -1914,6 +1953,12 @@
githubId = 7435854;
name = "Victor Calvert";
};
cameronfyfe = {
email = "cameron.j.fyfe@gmail.com";
github = "cameronfyfe";
githubId = 21013281;
name = "Cameron Fyfe";
};
cameronnemo = {
email = "cnemo@tutanota.com";
github = "cameronnemo";
@ -5023,6 +5068,12 @@
githubId = 222664;
name = "Matthew Leach";
};
hexchen = {
email = "nix@lilwit.ch";
github = "hexchen";
githubId = 41522204;
name = "hexchen";
};
hh = {
email = "hh@m-labs.hk";
github = "HarryMakes";
@ -5687,7 +5738,7 @@
githubId = 35612334;
};
jceb = {
name = "jceb";
name = "Jan Christoph Ebersbach";
email = "jceb@e-jc.de";
github = "jceb";
githubId = 101593;
@ -8760,10 +8811,10 @@
githubId = 5047140;
name = "Victor Collod";
};
musfay = {
email = "musfay@protonmail.com";
github = "musfay";
githubId = 33374965;
muscaln = {
email = "muscaln@protonmail.com";
github = "muscaln";
githubId = 96225281;
name = "Mustafa Çalışkan";
};
mupdt = {
@ -9003,6 +9054,12 @@
email = "nfjinjing@gmail.com";
name = "Jinjing Wang";
};
ngiger = {
email = "niklaus.giger@member.fsf.org";
github = "ngiger";
githubId = 265800;
name = "Niklaus Giger";
};
nh2 = {
email = "mail@nh2.me";
matrix = "@nh2:matrix.org";
@ -9527,8 +9584,8 @@
githubId = 14816024;
name = "oxalica";
keys = [{
longkeyid = "rsa4096/0xCED392DE0C483D00";
fingerprint = "5CB0 E9E5 D5D5 71F5 7F54 0FEA CED3 92DE 0C48 3D00";
longkeyid = "ed25519/0x7571654CF88E31C2";
fingerprint = "F90F FD6D 585C 2BA1 F13D E8A9 7571 654C F88E 31C2";
}];
};
oxij = {
@ -9650,6 +9707,12 @@
githubId = 14935550;
name = "Brad Pfannmuller";
};
parras = {
email = "c@philipp-arras.de";
github = "phiadaarr";
githubId = 33826198;
name = "Philipp Arras";
};
pashashocky = {
email = "pashashocky@gmail.com";
github = "pashashocky";
@ -12371,6 +12434,16 @@
githubId = 66133083;
name = "Tomas Bravo";
};
tchekda = {
email = "contact@tchekda.fr";
github = "tchekda";
githubId = 23559888;
keys = [{
longkeyid = "rsa4096/0xD0A007EDA4EADA0F";
fingerprint = "44CE A8DD 3B31 49CD 6246 9D8F D0A0 07ED A4EA DA0F";
}];
name = "David Tchekachev";
};
tckmn = {
email = "andy@tck.mn";
github = "tckmn";
@ -13976,6 +14049,13 @@
githubId = 6191421;
name = "Edward d'Albon";
};
zebreus = {
matrix = "@lennart:cicen.net";
email = "lennarteichhorn+nixpkgs@gmail.com";
github = "Zebreus";
githubId = 1557253;
name = "Lennart Eichhorn";
};
zef = {
email = "zef@zef.me";
name = "Zef Hemel";
@ -14478,4 +14558,16 @@
github = "bryanhonof";
githubId = 5932804;
};
bbenne10 = {
email = "Bryan.Bennett@protonmail.com";
matrix = "@bryan.bennett:matrix.org";
github = "bbenne10";
githubId = 687376;
name = "Bryan Bennett";
keys = [{
# compare with https://keybase.io/bbenne10
longkeyid = "rsa2048/0xEF90E3E98B8F5C0B";
fingerprint = "41EA 00B4 00F9 6970 1CB2 D3AF EF90 E3E9 8B8F 5C0B";
}];
};
}

View file

@ -1,6 +1,6 @@
#! /usr/bin/env nix-shell
#! nix-shell -p "haskellPackages.ghcWithPackages (p: [p.aeson p.req])"
#! nix-shell -p hydra-unstable
#! nix-shell -p hydra_unstable
#! nix-shell -i runhaskell
{-

View file

@ -6,6 +6,7 @@ basexx,https://github.com/teto/basexx.git,,,,,
binaryheap,https://github.com/Tieske/binaryheap.lua,,,,,vcunat
busted,,,,,,
cassowary,,,,,,marsam alerque
cldr,,,,,,alerque
compat53,,,,0.7-1,,vcunat
cosmo,,,,,,marsam
coxpcall,,,,1.17.0-1,,
@ -14,6 +15,7 @@ cyrussasl,https://github.com/JorjBauer/lua-cyrussasl.git,,,,,
digestif,https://github.com/astoff/digestif.git,,,0.2-1,lua5_3,
dkjson,,,,,,
fifo,,,,,,
fluent,,,,,,alerque
gitsigns.nvim,https://github.com/lewis6991/gitsigns.nvim.git,,,,lua5_1,
http,,,,0.3-0,,vcunat
inspect,,,,,,
@ -22,6 +24,9 @@ ldoc,https://github.com/stevedonovan/LDoc.git,,,,,
lgi,,,,,,
linenoise,https://github.com/hoelzro/lua-linenoise.git,,,,,
ljsyscall,,,,,lua5_1,lblasc
lmathx,,,,,lua5_3,alexshpilkin
lmpfrlib,,,,,lua5_3,alexshpilkin
loadkit,,,,,,alerque
lpeg,,,,,,vyp
lpeg_patterns,,,,,,
lpeglabel,,,,,,
@ -84,4 +89,5 @@ say,https://github.com/Olivine-Labs/say.git,,,,,
std._debug,https://github.com/lua-stdlib/_debug.git,,,,,
std.normalize,https://github.com/lua-stdlib/normalize.git,,,,,
stdlib,,,,41.2.2,,vyp
tl,,,,,,mephistophiles
vstruct,https://github.com/ToxicFrog/vstruct.git,,,,,


View file

@ -445,6 +445,19 @@ with lib.maintainers; {
enableFeatureFreezePing = true;
};
numtide = {
members = [
mic92
flokli
jfroche
tazjin
zimbatm
];
enableFeatureFreezePing = true;
scope = "Group registration for Numtide team members who collectively maintain packages.";
shortName = "Numtide team";
};
openstack = {
members = [
emilytrau

View file

@ -248,7 +248,7 @@ $ nix-env -p /nix/var/nix/profiles/system -f '&lt;nixpkgs/nixos&gt;' -I nixos-co
(since your Nix install was probably single user):
</para>
<programlisting>
$ sudo chown -R 0.0 /nix
$ sudo chown -R 0:0 /nix
</programlisting>
</listitem>
<listitem>

View file

@ -569,8 +569,9 @@
<listitem>
<para>
The NixOS VM test framework,
<literal>pkgs.nixosTest</literal>/<literal>make-test-python.nix</literal>,
now requires detaching commands such as
<literal>pkgs.nixosTest</literal>/<literal>make-test-python.nix</literal>
(<literal>pkgs.testers.nixosTest</literal> since 22.05), now
requires detaching commands such as
<literal>succeed(&quot;foo &amp;&quot;)</literal> and
<literal>succeed(&quot;foo | xclip -i&quot;)</literal> to
close stdout. This can be done with a redirect such as

View file

@ -238,6 +238,14 @@
<link xlink:href="options.html#opt-services.ergochat.enable">services.ergochat</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://snipeitapp.com">Snipe-IT</link>, a
free open source IT asset/license management system. Available
as
<link xlink:href="options.html#opt-services.snipe-it.enable">services.snipe-it</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/ngoduykhanh/PowerDNS-Admin">PowerDNS-Admin</link>,
@ -323,6 +331,14 @@
<link linkend="opt-services.tetrd.enable">services.tetrd</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://upterm.dev">uptermd</link>, an
open-source solution for sharing terminal sessions instantly
over the public internet via secure tunnels. Available at
<link linkend="opt-services.uptermd.enable">services.uptermd</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://github.com/mbrubeck/agate">agate</link>,
@ -455,6 +471,12 @@
<link xlink:href="options.html#opt-services.nifi.enable">services.nifi</link>.
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://kanidm.github.io/kanidm/stable/">kanidm</link>,
an identity management server written in Rust.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="sec-release-22.05-incompatibilities">
@ -1159,6 +1181,16 @@
migration guide</link> for more details.
</para>
</listitem>
<listitem>
<para>
<literal>teleport</literal> has been upgraded to major version
9. Please see upstream
<link xlink:href="https://goteleport.com/docs/setup/operations/upgrading/">upgrade
instructions</link> and
<link xlink:href="https://goteleport.com/docs/changelog/#900">release
notes</link>.
</para>
</listitem>
<listitem>
<para>
For <literal>pkgs.python3.pkgs.ipython</literal>, its direct
@ -1440,6 +1472,16 @@
has been removed.
</para>
</listitem>
<listitem>
<para>
<literal>pkgs.minetestclient_4</literal> and
<literal>pkgs.minetestserver_4</literal> have been removed, as
the last 4.x release was in 2018.
<literal>pkgs.minetestclient</literal> (equivalent to
<literal>pkgs.minetest</literal> ) and
<literal>pkgs.minetestserver</literal> can be used instead.
</para>
</listitem>
<listitem>
<para>
<literal>pkgs.noto-fonts-cjk</literal> is now deprecated in
@ -1845,6 +1887,37 @@
during the time when the timer was inactive.
</para>
</listitem>
<listitem>
<para>
Mastodon now uses <literal>services.redis.servers</literal> to
start a new redis server, instead of using a global redis
server. This improves compatibility with other services that
use redis.
</para>
<para>
Note that this will recreate the redis database, although
according to the
<link xlink:href="https://docs.joinmastodon.org/admin/backups/">Mastodon
docs</link>, this is almost harmless:
</para>
<blockquote>
<para>
Losing the Redis database is almost harmless: The only
irrecoverable data will be the contents of the Sidekiq
queues and scheduled retries of previously failed jobs. The
home and list feeds are stored in Redis, but can be
regenerated with tootctl.
</para>
</blockquote>
<para>
If you do want to save the redis database, you can use the
following commands:
</para>
<programlisting language="bash">
redis-cli save
cp /var/lib/redis/dump.rdb &quot;/var/lib/redis-mastodon/dump.rdb&quot;
</programlisting>
</listitem>
<listitem>
<para>
If you are using Wayland you can choose to use the Ozone
@ -2273,6 +2346,14 @@
package has been updated to 6.0.0 and now requires .NET 6.0.
</para>
</listitem>
<listitem>
<para>
The <literal>phpPackages.box</literal> package has been
updated from 2.7.5 to 3.16.0. See the
<link xlink:href="https://github.com/box-project/box/blob/master/UPGRADE.md#from-27-to-30">upgrade
guide</link> for more details.
</para>
</listitem>
<listitem>
<para>
The <literal>zrepl</literal> package has been updated from
@ -2371,6 +2452,14 @@
desktop environments as needed.
</para>
</listitem>
<listitem>
<para>
<literal>mercury</literal> was updated to 22.01.1, which has
some breaking changes
(<link xlink:href="https://dl.mercurylang.org/release/release-notes-22.01.html">Mercury
22.01 news</link>).
</para>
</listitem>
<listitem>
<para>
xfsprogs was update to version 5.15, which enables inobtcount
@ -2425,6 +2514,16 @@
enabled.
</para>
</listitem>
<listitem>
<para>
The Nextcloud module now allows setting the value of the
<literal>max-age</literal> directive of the
<literal>Strict-Transport-Security</literal> HTTP header,
which is now controlled by the
<literal>services.nextcloud.https</literal> option, rather
than <literal>services.nginx.recommendedHttpHeaders</literal>.
</para>
</listitem>
<listitem>
<para>
The <literal>spark3</literal> package has been updated from
@ -2465,6 +2564,21 @@
hosts.
</para>
</listitem>
<listitem>
<para>
The option
<link xlink:href="options.html#opt-networking.useDHCP">networking.useDHCP</link>
isn't deprecated anymore. When using
<link xlink:href="options.html#opt-networking.useNetworkd"><literal>systemd-networkd</literal></link>,
a generic <literal>.network</literal>-unit is added which
enables DHCP for each interface matching
<literal>en*</literal>, <literal>eth*</literal> or
<literal>wl*</literal> with priority 99 (which means that it
doesn't have any effect if such an interface is matched by a
<literal>.network</literal>-unit with a lower priority). In
case of scripted networking, no behavior was changed.
</para>
</listitem>
</itemizedlist>
</section>
</section>

View file

@ -177,7 +177,7 @@ The first steps to all these are the same:
was probably single user):
```ShellSession
$ sudo chown -R 0.0 /nix
$ sudo chown -R 0:0 /nix
```
1. Set up the `/etc/NIXOS` and `/etc/NIXOS_LUSTRATE` files:

View file

@ -166,7 +166,7 @@ In addition to numerous new and upgraded packages, this release has the followin
## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
- The NixOS VM test framework, `pkgs.nixosTest`/`make-test-python.nix`, now requires detaching commands such as `succeed("foo &")` and `succeed("foo | xclip -i")` to close stdout.
- The NixOS VM test framework, `pkgs.nixosTest`/`make-test-python.nix` (`pkgs.testers.nixosTest` since 22.05), now requires detaching commands such as `succeed("foo &")` and `succeed("foo | xclip -i")` to close stdout.
This can be done with a redirect such as `succeed("foo >&2 &")`. This breaking change was necessitated by a race condition causing tests to fail or hang.
It applies to all methods that invoke commands on the nodes, including `execute`, `succeed`, `fail`, `wait_until_succeeds`, `wait_until_fails`.

View file

@ -77,6 +77,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [ergochat](https://ergo.chat), a modern IRC with IRCv3 features. Available as [services.ergochat](options.html#opt-services.ergochat.enable).
- [Snipe-IT](https://snipeitapp.com), a free open source IT asset/license management system. Available as [services.snipe-it](options.html#opt-services.snipe-it.enable).
- [PowerDNS-Admin](https://github.com/ngoduykhanh/PowerDNS-Admin), a web interface for the PowerDNS server. Available at [services.powerdns-admin](options.html#opt-services.powerdns-admin.enable).
- [pgadmin4](https://github.com/postgres/pgadmin4), an admin interface for the PostgreSQL database. Available at [services.pgadmin](options.html#opt-services.pgadmin.enable).
@ -99,6 +101,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [tetrd](https://tetrd.app), share your internet connection from your device to your PC and vice versa through a USB cable. Available at [services.tetrd](#opt-services.tetrd.enable).
- [uptermd](https://upterm.dev), an open-source solution for sharing terminal sessions instantly over the public internet via secure tunnels. Available at [services.uptermd](#opt-services.uptermd.enable).
- [agate](https://github.com/mbrubeck/agate), a very simple server for the Gemini hypertext protocol. Available as [services.agate](options.html#opt-services.agate.enable).
- [ArchiSteamFarm](https://github.com/JustArchiNET/ArchiSteamFarm), a C# application with primary purpose of idling Steam cards from multiple accounts simultaneously. Available as [services.archisteamfarm](options.html#opt-services.archisteamfarm.enable).
@ -135,6 +139,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- [nifi](https://nifi.apache.org), an easy to use, powerful, and reliable system to process and distribute data. Available as [services.nifi](options.html#opt-services.nifi.enable).
- [kanidm](https://kanidm.github.io/kanidm/stable/), an identity management server written in Rust.
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
## Backward Incompatibilities {#sec-release-22.05-incompatibilities}
@ -486,6 +492,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- The `autorestic` package has been upgraded from 1.3.0 to 1.5.0 which introduces breaking changes in config file, check [their migration guide](https://autorestic.vercel.app/migration/1.4_1.5) for more details.
- `teleport` has been upgraded to major version 9. Please see upstream [upgrade instructions](https://goteleport.com/docs/setup/operations/upgrading/) and [release notes](https://goteleport.com/docs/changelog/#900).
- For `pkgs.python3.pkgs.ipython`, its direct dependency `pkgs.python3.pkgs.matplotlib-inline`
(which is really an adapter to integrate matplotlib in ipython if it is installed) does
not depend on `pkgs.python3.pkgs.matplotlib` anymore.
@ -569,6 +577,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- `pkgs.pgadmin` now refers to `pkgs.pgadmin4`. `pgadmin3` has been removed.
- `pkgs.minetestclient_4` and `pkgs.minetestserver_4` have been removed, as the last 4.x release was in 2018. `pkgs.minetestclient` (equivalent to `pkgs.minetest` ) and `pkgs.minetestserver` can be used instead.
- `pkgs.noto-fonts-cjk` is now deprecated in favor of `pkgs.noto-fonts-cjk-sans`
and `pkgs.noto-fonts-cjk-serif` because they each have different release
schedules. To maintain compatibility with prior releases of Nixpkgs,
@ -691,6 +701,20 @@ In addition to numerous new and upgraded packages, this release has the followin
By default auto-upgrade will now run immediately if it would have been triggered at least
once during the time when the timer was inactive.
- Mastodon now uses `services.redis.servers` to start a new redis server, instead of using a global redis server.
This improves compatibility with other services that use redis.
Note that this will recreate the redis database, although according to the [Mastodon docs](https://docs.joinmastodon.org/admin/backups/),
this is almost harmless:
> Losing the Redis database is almost harmless: The only irrecoverable data will be the contents of the Sidekiq queues and scheduled retries of previously failed jobs.
> The home and list feeds are stored in Redis, but can be regenerated with tootctl.
If you do want to save the redis database, you can use the following commands:
```bash
redis-cli save
cp /var/lib/redis/dump.rdb "/var/lib/redis-mastodon/dump.rdb"
```
- If you are using Wayland you can choose to use the Ozone Wayland support
in Chrome and several Electron apps by setting the environment variable
`NIXOS_OZONE_WL=1` (for example via
@ -821,6 +845,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- The `vscode-extensions.ionide.ionide-fsharp` package has been updated to 6.0.0 and now requires .NET 6.0.
- The `phpPackages.box` package has been updated from 2.7.5 to 3.16.0. See the [upgrade guide](https://github.com/box-project/box/blob/master/UPGRADE.md#from-27-to-30) for more details.
- The `zrepl` package has been updated from 0.4.0 to 0.5:
- The RPC protocol version was bumped; all zrepl daemons in a setup must be updated and restarted before replication can resume.
@ -850,6 +876,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- The polkit service, available at `security.polkit.enable`, is now disabled by default. It will automatically be enabled through services and desktop environments as needed.
- `mercury` was updated to 22.01.1, which has some breaking changes ([Mercury 22.01 news](https://dl.mercurylang.org/release/release-notes-22.01.html)).
- xfsprogs was update to version 5.15, which enables inobtcount and bigtime by default on filesystem creation. Support for these features was added in kernel 5.10 and deemed stable in kernel 5.15.
If you want to be able to mount XFS filesystems created with this release of xfsprogs on kernel releases older than 5.10, you need to format them with `mkfs.xfs -m bigtime=0 -m inobtcount=0`.
@ -864,6 +892,8 @@ In addition to numerous new and upgraded packages, this release has the followin
- The Nextcloud module now supports to create a Mysql database automatically
with `services.nextcloud.database.createLocally` enabled.
- The Nextcloud module now allows setting the value of the `max-age` directive of the `Strict-Transport-Security` HTTP header, which is now controlled by the `services.nextcloud.https` option, rather than `services.nginx.recommendedHttpHeaders`.
- The `spark3` package has been updated from 3.1.2 to 3.2.1 ([#160075](https://github.com/NixOS/nixpkgs/pull/160075)):
- Testing has been enabled for `aarch64-linux` in addition to `x86_64-linux`.
@ -875,4 +905,11 @@ In addition to numerous new and upgraded packages, this release has the followin
`true` starting with NixOS 22.11. Enable it explicitly if you need to control
Snapserver remotely or connect streaming clients from other hosts.
- The option [networking.useDHCP](options.html#opt-networking.useDHCP) isn't deprecated anymore.
When using [`systemd-networkd`](options.html#opt-networking.useNetworkd), a generic
`.network`-unit is added which enables DHCP for each interface matching `en*`, `eth*`
or `wl*` with priority 99 (which means that it doesn't have any effect if such an interface is matched
by a `.network`-unit with a lower priority). In case of scripted networking, no behavior
was changed.
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->

View file

@ -38,7 +38,7 @@ rec {
{ key = "no-revision";
# Make the revision metadata constant, in order to avoid needless retesting.
# The human version (e.g. 21.05-pre) is left as is, because it is useful
# for external modules that test with e.g. nixosTest and rely on that
# for external modules that test with e.g. testers.nixosTest and rely on that
# version number.
config.system.nixos.revision = mkForce "constant-nixos-revision";
}

View file

@ -20,7 +20,13 @@
<title>Configuration Options</title>
<variablelist xml:id="configuration-variable-list">
<xsl:for-each select="attrs">
<xsl:variable name="id" select="concat('opt-', str:replace(str:replace(str:replace(str:replace(attr[@name = 'name']/string/@value, '*', '_'), '&lt;', '_'), '>', '_'), ':', '_'))" />
<xsl:variable name="id" select="
concat('opt-',
translate(
attr[@name = 'name']/string/@value,
'*&lt; >[]:',
'_______'
))" />
<varlistentry>
<term xlink:href="#{$id}">
<xsl:attribute name="xml:id"><xsl:value-of select="$id"/></xsl:attribute>

View file

@ -119,6 +119,7 @@ rec {
passthru = passthru // {
inherit nodes;
};
meta.mainProgram = "nixos-test-driver";
}
''
mkdir -p $out/bin

View file

@ -26,12 +26,32 @@ var ${home_region:=eu-west-1}
var ${bucket:=nixos-amis}
var ${service_role_name:=vmimport}
var ${regions:=eu-west-1 eu-west-2 eu-west-3 eu-central-1 eu-north-1
us-east-1 us-east-2 us-west-1 us-west-2
# Output of the command:
# > aws ec2 describe-regions --all-regions --query "Regions[].{Name:RegionName}" --output text | sort
var ${regions:=
af-south-1
ap-east-1
ap-northeast-1
ap-northeast-2
ap-northeast-3
ap-south-1
ap-southeast-1
ap-southeast-2
ap-southeast-3
ca-central-1
ap-southeast-1 ap-southeast-2 ap-northeast-1 ap-northeast-2
ap-south-1 ap-east-1
sa-east-1}
eu-central-1
eu-north-1
eu-south-1
eu-west-1
eu-west-2
eu-west-3
me-south-1
sa-east-1
us-east-1
us-east-2
us-west-1
us-west-2
}
regions=($regions)

View file

@ -27,7 +27,7 @@ with lib;
networking.useDHCP = false;
networking.interfaces.eth0.useDHCP = true;
# As this is intended as a stadalone image, undo some of the minimal profile stuff
# As this is intended as a standalone image, undo some of the minimal profile stuff
documentation.enable = true;
documentation.nixos.enable = true;
environment.noXlibs = false;

View file

@ -83,7 +83,6 @@ in {
broadcom-bt-firmware
b43Firmware_5_1_138
b43Firmware_6_30_163_46
b43FirmwareCutter
xow_dongle-firmware
] ++ optionals pkgs.stdenv.hostPlatform.isx86 [
facetimehd-calibration

View file

@ -0,0 +1,21 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.hardware.keyboard.uhk;
in
{
options.hardware.keyboard.uhk = {
enable = mkEnableOption ''
non-root access to the firmware of UHK keyboards.
You need it when you want to flash a new firmware on the keyboard.
Access to the keyboard is granted to users in the "input" group.
You may want to install the uhk-agent package.
'';
};
config = mkIf cfg.enable {
services.udev.packages = [ pkgs.uhk-udev-rules ];
};
}

View file

@ -46,5 +46,5 @@ with lib;
done
'';
system.stateVersion = mkDefault "18.03";
system.stateVersion = lib.mkDefault lib.trivial.release;
}

View file

@ -369,10 +369,10 @@ let
${lib.optionalString (refindBinary != null) ''
# GRUB apparently cannot do "chainloader" operations on "CD".
if [ "\$root" != "cd0" ]; then
menuentry 'rEFInd' --class refind {
# Force root to be the FAT partition
# Otherwise it breaks rEFInd's boot
search --set=root --no-floppy --fs-uuid 1234-5678
menuentry 'rEFInd' --class refind {
chainloader (\$root)/EFI/boot/${refindBinary}
}
fi
@ -400,10 +400,8 @@ let
# dates (cp -p, touch, mcopy -m, faketime for label), IDs (mkfs.vfat -i)
''
mkdir ./contents && cd ./contents
cp -rp "${efiDir}"/EFI .
mkdir ./boot
cp -p "${config.boot.kernelPackages.kernel}/${config.system.boot.loader.kernelFile}" \
"${config.system.build.initialRamdisk}/${config.system.boot.loader.initrdFile}" ./boot/
mkdir -p ./EFI/boot
cp -rp "${efiDir}"/EFI/boot/{grub.cfg,*.efi} ./EFI/boot
# Rewrite dates for everything in the FS
find . -exec touch --date=2000-01-01 {} +
@ -421,11 +419,11 @@ let
faketime "2000-01-01 00:00:00" mkfs.vfat -i 12345678 -n EFIBOOT "$out"
# Force a fixed order in mcopy for better determinism, and avoid file globbing
for d in $(find EFI boot -type d | sort); do
for d in $(find EFI -type d | sort); do
faketime "2000-01-01 00:00:00" mmd -i "$out" "::/$d"
done
for f in $(find EFI boot -type f | sort); do
for f in $(find EFI -type f | sort); do
mcopy -pvm -i "$out" "$f" "::/$f"
done

View file

@ -581,17 +581,19 @@ ${\join "", (map { " $_\n" } (uniq @attrs))}}
EOF
sub generateNetworkingDhcpConfig {
# FIXME disable networking.useDHCP by default when switching to networkd.
my $config = <<EOF;
# The global useDHCP flag is deprecated, therefore explicitly set to false here.
# Per-interface useDHCP will be mandatory in the future, so this generated config
# replicates the default behaviour.
networking.useDHCP = lib.mkDefault false;
# Enables DHCP on each ethernet and wireless interface. In case of scripted networking
# (the default) this is the recommended approach. When using systemd-networkd it's
# still possible to use this option, but it's recommended to use it in conjunction
# with explicit per-interface declarations with `networking.interfaces.<interface>.useDHCP`.
networking.useDHCP = lib.mkDefault true;
EOF
foreach my $path (glob "/sys/class/net/*") {
my $dev = basename($path);
if ($dev ne "lo") {
$config .= " networking.interfaces.$dev.useDHCP = lib.mkDefault true;\n";
$config .= " # networking.interfaces.$dev.useDHCP = lib.mkDefault true;\n";
}
}

View file

@ -34,7 +34,7 @@ let
name = "nixos-generate-config";
src = ./nixos-generate-config.pl;
perl = "${pkgs.perl.withPackages (p: [ p.FileSlurp ])}/bin/perl";
detectvirt = "${pkgs.systemd}/bin/systemd-detect-virt";
detectvirt = "${config.systemd.package}/bin/systemd-detect-virt";
btrfs = "${pkgs.btrfs-progs}/bin/btrfs";
inherit (config.system.nixos-generate-config) configuration desktopConfiguration;
xserverEnabled = config.services.xserver.enable;
@ -177,6 +177,10 @@ in
# users.users.jane = {
# isNormalUser = true;
# extraGroups = [ "wheel" ]; # Enable sudo for the user.
# packages = with pkgs; [
# firefox
# thunderbird
# ];
# };
# List packages installed in system profile. To search, run:
@ -184,7 +188,6 @@ in
# environment.systemPackages = with pkgs; [
# vim # Do not forget to add an editor to edit configuration.nix! The Nano editor is also installed by default.
# wget
# firefox
# ];
# Some programs need SUID wrappers, can be configured further or are

View file

@ -250,7 +250,7 @@ in
};
warnings = optional (isMorPLocate && cfg.localuser != null)
"mlocate does not support the services.locate.localuser option; updatedb will run as root. (Silence with services.locate.localuser = null.)"
"mlocate and plocate do not support the services.locate.localuser option. updatedb will run as root. Silence this warning by setting services.locate.localuser = null."
++ optional (isFindutils && cfg.pruneNames != [ ])
"findutils locate does not support pruning by directory component"
++ optional (isFindutils && cfg.pruneBindMounts)

View file

@ -53,7 +53,9 @@ in {
# see: https://inbox.vuxu.org/mandoc-tech/20210906171231.GF83680@athene.usta.de/T/#e85f773c1781e3fef85562b2794f9cad7b2909a3c
extraSetup = lib.mkIf config.documentation.man.generateCaches ''
${makewhatis} -T utf8 ${
lib.concatMapStringsSep " " (path: "\"$out/${path}\"") cfg.manPath
lib.concatMapStringsSep " " (path:
"$out/" + lib.escapeShellArg path
) cfg.manPath
}
'';
};

View file

@ -146,6 +146,15 @@ in
"/etc/os-release".source = initrdRelease;
"/etc/initrd-release".source = initrdRelease;
};
# We have to use `warnings` because if we warned in the option's default,
# the warning would also be shown when building the manual, since the manual
# has to evaluate the default.
#
# TODO Remove this and drop the default of the option so people are forced to set it.
# Doing this also means fixing the comment in nixos/modules/testing/test-instrumentation.nix
warnings = lib.optional (options.system.stateVersion.highestPrio == (lib.mkOptionDefault { }).priority)
"system.stateVersion is not set, defaulting to ${config.system.stateVersion}. Read why this matters on https://nixos.org/manual/nixos/stable/options.html#opt-system.stateVersion.";
};
# uses version info nixpkgs, which requires a full nixpkgs path

View file

@ -57,6 +57,7 @@
./hardware/sensor/hddtemp.nix
./hardware/sensor/iio.nix
./hardware/keyboard/teck.nix
./hardware/keyboard/uhk.nix
./hardware/keyboard/zsa.nix
./hardware/ksm.nix
./hardware/ledger.nix
@ -196,7 +197,6 @@
./programs/partition-manager.nix
./programs/plotinus.nix
./programs/proxychains.nix
./programs/phosh.nix
./programs/qt5ct.nix
./programs/screen.nix
./programs/sedutil.nix
@ -505,6 +505,7 @@
./services/mail/postfixadmin.nix
./services/mail/postsrsd.nix
./services/mail/postgrey.nix
./services/mail/public-inbox.nix
./services/mail/spamassassin.nix
./services/mail/rspamd.nix
./services/mail/rss2email.nix
@ -936,6 +937,7 @@
./services/networking/unifi.nix
./services/video/unifi-video.nix
./services/video/rtsp-simple-server.nix
./services/networking/uptermd.nix
./services/networking/v2ray.nix
./services/networking/vsftpd.nix
./services/networking/wasabibackend.nix
@ -975,6 +977,7 @@
./services/security/hockeypuck.nix
./services/security/hologram-server.nix
./services/security/hologram-agent.nix
./services/security/kanidm.nix
./services/security/munge.nix
./services/security/nginx-sso.nix
./services/security/oauth2_proxy.nix
@ -1078,6 +1081,7 @@
./services/web-apps/trilium.nix
./services/web-apps/selfoss.nix
./services/web-apps/shiori.nix
./services/web-apps/snipe-it.nix
./services/web-apps/vikunja.nix
./services/web-apps/virtlyst.nix
./services/web-apps/wiki-js.nix

View file

@ -5,8 +5,6 @@
programs.nix-ld.enable = lib.mkEnableOption ''nix-ld, Documentation: <link xlink:href="https://github.com/Mic92/nix-ld"/>'';
};
config = lib.mkIf config.programs.nix-ld.enable {
systemd.tmpfiles.rules = [
"L+ ${pkgs.nix-ld.ldPath} - - - - ${pkgs.nix-ld}/libexec/nix-ld"
];
systemd.tmpfiles.packages = [ pkgs.nix-ld ];
};
}

View file

@ -626,7 +626,7 @@ let
session optional ${pkgs.otpw}/lib/security/pam_otpw.so
'' +
optionalString cfg.startSession ''
session optional ${pkgs.systemd}/lib/security/pam_systemd.so
session optional ${config.systemd.package}/lib/security/pam_systemd.so
'' +
optionalString cfg.forwardXAuth ''
session optional pam_xauth.so xauthpath=${pkgs.xorg.xauth}/bin/xauth systemuser=99
@ -1242,7 +1242,7 @@ in
mr ${pkgs.gnome3.gnome-keyring}/lib/security/pam_gnome_keyring.so,
'' +
optionalString (isEnabled (cfg: cfg.startSession)) ''
mr ${pkgs.systemd}/lib/security/pam_systemd.so,
mr ${config.systemd.package}/lib/security/pam_systemd.so,
'' +
optionalString (isEnabled (cfg: cfg.enableAppArmor)
&& config.security.apparmor.enable) ''

View file

@ -98,7 +98,7 @@ let
# Prevent races
chmod 0000 "$wrapperDir/${program}"
chown ${owner}.${group} "$wrapperDir/${program}"
chown ${owner}:${group} "$wrapperDir/${program}"
# Set desired capabilities on the file plus cap_setpcap so
# the wrapper program can elevate the capabilities set on
@ -126,7 +126,7 @@ let
# Prevent races
chmod 0000 "$wrapperDir/${program}"
chown ${owner}.${group} "$wrapperDir/${program}"
chown ${owner}:${group} "$wrapperDir/${program}"
chmod "u${if setuid then "+" else "-"}s,g${if setgid then "+" else "-"}s,${permissions}" "$wrapperDir/${program}"
'';

View file

@ -2,12 +2,12 @@
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdnoreturn.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/xattr.h>
#include <fcntl.h>
#include <dirent.h>
#include <assert.h>
#include <errno.h>
#include <linux/capability.h>
#include <sys/prctl.h>
@ -16,10 +16,7 @@
#include <syscall.h>
#include <byteswap.h>
// Make sure assertions are not compiled out, we use them to codify
// invariants about this program and we want it to fail fast and
// loudly if they are violated.
#undef NDEBUG
#define ASSERT(expr) ((expr) ? (void) 0 : assert_failure(#expr))
extern char **environ;
@ -38,6 +35,12 @@ static char *wrapper_debug = "WRAPPER_DEBUG";
#define LE32_TO_H(x) (x)
#endif
static noreturn void assert_failure(const char *assertion) {
fprintf(stderr, "Assertion `%s` in NixOS's wrapper.c failed.\n", assertion);
fflush(stderr);
abort();
}
int get_last_cap(unsigned *last_cap) {
FILE* file = fopen("/proc/sys/kernel/cap_last_cap", "r");
if (file == NULL) {
@ -167,6 +170,7 @@ int readlink_malloc(const char *p, char **ret) {
}
int main(int argc, char **argv) {
ASSERT(argc >= 1);
char *self_path = NULL;
int self_path_size = readlink_malloc("/proc/self/exe", &self_path);
if (self_path_size < 0) {
@ -181,36 +185,36 @@ int main(int argc, char **argv) {
int len = strlen(wrapper_dir);
if (len > 0 && '/' == wrapper_dir[len - 1])
--len;
assert(!strncmp(self_path, wrapper_dir, len));
assert('/' == wrapper_dir[0]);
assert('/' == self_path[len]);
ASSERT(!strncmp(self_path, wrapper_dir, len));
ASSERT('/' == wrapper_dir[0]);
ASSERT('/' == self_path[len]);
// Make *really* *really* sure that we were executed as
// `self_path', and not, say, as some other setuid program. That
// is, our effective uid/gid should match the uid/gid of
// `self_path'.
struct stat st;
assert(lstat(self_path, &st) != -1);
ASSERT(lstat(self_path, &st) != -1);
assert(!(st.st_mode & S_ISUID) || (st.st_uid == geteuid()));
assert(!(st.st_mode & S_ISGID) || (st.st_gid == getegid()));
ASSERT(!(st.st_mode & S_ISUID) || (st.st_uid == geteuid()));
ASSERT(!(st.st_mode & S_ISGID) || (st.st_gid == getegid()));
// And, of course, we shouldn't be writable.
assert(!(st.st_mode & (S_IWGRP | S_IWOTH)));
ASSERT(!(st.st_mode & (S_IWGRP | S_IWOTH)));
// Read the path of the real (wrapped) program from <self>.real.
char real_fn[PATH_MAX + 10];
int real_fn_size = snprintf(real_fn, sizeof(real_fn), "%s.real", self_path);
assert(real_fn_size < sizeof(real_fn));
ASSERT(real_fn_size < sizeof(real_fn));
int fd_self = open(real_fn, O_RDONLY);
assert(fd_self != -1);
ASSERT(fd_self != -1);
char source_prog[PATH_MAX];
len = read(fd_self, source_prog, PATH_MAX);
assert(len != -1);
assert(len < sizeof(source_prog));
assert(len > 0);
ASSERT(len != -1);
ASSERT(len < sizeof(source_prog));
ASSERT(len > 0);
source_prog[len] = 0;
close(fd_self);

View file

@ -112,7 +112,7 @@ in
services.mysql.ensureUsers = optional (config.services.mysql.enable && cfg.config.mysql_dump_host == "localhost") {
name = user;
ensurePermissions = { "*.*" = "SELECT, SHOW VIEW, TRIGGER, LOCK TABLES"; };
ensurePermissions = { "*.*" = "SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, EVENT"; };
};
};

View file

@ -4,7 +4,8 @@ with lib;
let
cfg = config.services.borgmatic;
cfgfile = pkgs.writeText "config.yaml" (builtins.toJSON cfg.settings);
settingsFormat = pkgs.formats.yaml { };
cfgfile = settingsFormat.generate "config.yaml" cfg.settings;
in {
options.services.borgmatic = {
enable = mkEnableOption "borgmatic";
@ -14,7 +15,7 @@ in {
See https://torsion.org/borgmatic/docs/reference/configuration/
'';
type = types.submodule {
freeformType = with lib.types; attrsOf anything;
freeformType = settingsFormat.type;
options.location = {
source_directories = mkOption {
type = types.listOf types.str;

View file

@ -99,8 +99,8 @@ in
package = mkOption {
type = types.package;
default = pkgs.hydra-unstable;
defaultText = literalExpression "pkgs.hydra-unstable";
default = pkgs.hydra_unstable;
defaultText = literalExpression "pkgs.hydra_unstable";
description = "The Hydra package.";
};
@ -300,17 +300,17 @@ in
};
preStart = ''
mkdir -p ${baseDir}
chown hydra.hydra ${baseDir}
chown hydra:hydra ${baseDir}
chmod 0750 ${baseDir}
ln -sf ${hydraConf} ${baseDir}/hydra.conf
mkdir -m 0700 -p ${baseDir}/www
chown hydra-www.hydra ${baseDir}/www
chown hydra-www:hydra ${baseDir}/www
mkdir -m 0700 -p ${baseDir}/queue-runner
mkdir -m 0750 -p ${baseDir}/build-logs
chown hydra-queue-runner.hydra ${baseDir}/queue-runner ${baseDir}/build-logs
chown hydra-queue-runner:hydra ${baseDir}/queue-runner ${baseDir}/build-logs
${optionalString haveLocalDB ''
if ! [ -e ${baseDir}/.db-created ]; then
@ -338,7 +338,7 @@ in
rmdir /nix/var/nix/gcroots/per-user/hydra-www/hydra-roots
fi
chown hydra.hydra ${cfg.gcRootsDir}
chown hydra:hydra ${cfg.gcRootsDir}
chmod 2775 ${cfg.gcRootsDir}
'';
serviceConfig.ExecStart = "${hydra-package}/bin/hydra-init";

View file

@ -61,6 +61,9 @@
{
"application.process.binary": "teams"
},
{
"application.process.binary": "teams-insiders"
},
{
"application.process.binary": "skypeforlinux"
}

View file

@ -87,6 +87,18 @@ in
a new map with default settings will be generated before starting the service.
'';
};
loadLatestSave = mkOption {
type = types.bool;
default = false;
description = ''
Load the latest savegame on startup. This overrides saveName: the latest
save will always be used, even if a saved game of the given name exists.
saveName still controls the 'canonical' name of the savegame.
Set this to true to have the server automatically reload a recent autosave after
a crash or desync.
'';
};
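# A minimal sketch (hypothetical host configuration, assuming the options
# above) of a server that always resumes from its most recent autosave;
# saveName still names the save the server writes to:
#
#   services.factorio = {
#     enable = true;
#     loadLatestSave = true;
#   };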
# TODO Add more individual settings as nixos-options?
# TODO XXX The server tries to copy a newly created config file over the old one
# on shutdown, but fails, because it's in the nix store. When is this needed?
@ -250,8 +262,9 @@ in
"--config=${cfg.configFile}"
"--port=${toString cfg.port}"
"--bind=${cfg.bind}"
"--start-server=${mkSavePath cfg.saveName}"
(optionalString (!cfg.loadLatestSave) "--start-server=${mkSavePath cfg.saveName}")
"--server-settings=${serverSettingsFile}"
(optionalString cfg.loadLatestSave "--start-server-load-latest")
(optionalString (cfg.mods != []) "--mod-directory=${modDir}")
(optionalString (cfg.admins != []) "--server-adminlist=${serverAdminsFile}")
];

View file

@ -28,6 +28,7 @@ in {
description = "Backlight Adjustment Service";
wantedBy = [ "multi-user.target" ];
serviceConfig.ExecStart = "${pkgs.illum}/bin/illum-d";
serviceConfig.Restart = "on-failure";
};
};

View file

@ -171,6 +171,11 @@ let
mv etc/udev/hwdb.bin $out
'';
compressFirmware = if config.boot.kernelPackages.kernelAtLeast "5.3" then
pkgs.compressFirmwareXz
else
id;
# Udev has a 512-character limit for ENV{PATH}, so create a symlink
# tree to work around this.
udevPath = pkgs.buildEnv {
@ -267,7 +272,7 @@ in
'';
apply = list: pkgs.buildEnv {
name = "firmware";
paths = list;
paths = map compressFirmware list;
pathsToLink = [ "/lib/firmware" ];
ignoreCollisions = true;
};

View file

@ -26,8 +26,7 @@ in
config = mkIf cfg.enable {
# TODO: Rename to .conf in upcoming release
environment.etc."usbrelayd.ini".text = ''
environment.etc."usbrelayd.conf".text = ''
[MQTT]
BROKER = ${cfg.broker}
CLIENTNAME = ${cfg.clientName}
@ -41,4 +40,8 @@ in
};
users.groups.usbrelay = { };
};
meta = {
maintainers = with lib.maintainers; [ wentasah ];
};
}

View file

@ -360,7 +360,14 @@ in {
};
config = mkIf cfg.enable {
networking.firewall.allowedTCPPorts = mkIf cfg.openFirewall [ cfg.port ];
assertions = [
{
assertion = cfg.openFirewall -> !isNull cfg.config;
message = "openFirewall can only be used with a declarative config";
}
];
networking.firewall.allowedTCPPorts = mkIf cfg.openFirewall [ cfg.config.http.server_port ];
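# A minimal sketch (hypothetical, assuming a declarative configuration;
# the port is only illustrative) that satisfies the assertion above:
#
#   services.home-assistant = {
#     enable = true;
#     openFirewall = true;
#     config.http.server_port = 8123;
#   };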
systemd.services.home-assistant = {
description = "Home Assistant";

View file

@ -109,7 +109,7 @@ in
'''
# Read from journal
pipe {
command => "''${pkgs.systemd}/bin/journalctl -f -o json"
command => "''${config.systemd.package}/bin/journalctl -f -o json"
type => "syslog" codec => json {}
}
'''

View file

@ -0,0 +1,579 @@
{ lib, pkgs, config, ... }:
with lib;
let
cfg = config.services.public-inbox;
stateDir = "/var/lib/public-inbox";
manref = name: vol: "<citerefentry><refentrytitle>${name}</refentrytitle><manvolnum>${toString vol}</manvolnum></citerefentry>";
gitIni = pkgs.formats.gitIni { listsAsDuplicateKeys = true; };
iniAtom = elemAt gitIni.type/*attrsOf*/.functor.wrapped/*attrsOf*/.functor.wrapped/*either*/.functor.wrapped 0;
useSpamAssassin = cfg.settings.publicinboxmda.spamcheck == "spamc" ||
cfg.settings.publicinboxwatch.spamcheck == "spamc";
publicInboxDaemonOptions = proto: defaultPort: {
args = mkOption {
type = with types; listOf str;
default = [];
description = "Command-line arguments to pass to ${manref "public-inbox-${proto}d" 1}.";
};
port = mkOption {
type = with types; nullOr (either str port);
default = defaultPort;
description = ''
Listening port.
Beware that public-inbox uses well-known port numbers to decide whether to enable TLS or not.
Set to null and use <code>systemd.sockets.public-inbox-${proto}d.listenStreams</code>
if you need a more advanced listening setup.
'';
};
cert = mkOption {
type = with types; nullOr str;
default = null;
example = "/path/to/fullchain.pem";
description = "Path to TLS certificate to use for connections to ${manref "public-inbox-${proto}d" 1}.";
};
key = mkOption {
type = with types; nullOr str;
default = null;
example = "/path/to/key.pem";
description = "Path to TLS key to use for connections to ${manref "public-inbox-${proto}d" 1}.";
};
};
serviceConfig = srv:
let proto = removeSuffix "d" srv;
needNetwork = builtins.hasAttr proto cfg && cfg.${proto}.port == null;
in {
serviceConfig = {
# Enable JIT-compiled C (via Inline::C)
Environment = [ "PERL_INLINE_DIRECTORY=/run/public-inbox-${srv}/perl-inline" ];
# NonBlocking is REQUIRED to avoid a race condition
# if running simultaneous services.
NonBlocking = true;
#LimitNOFILE = 30000;
User = config.users.users."public-inbox".name;
Group = config.users.groups."public-inbox".name;
RuntimeDirectory = [
"public-inbox-${srv}/perl-inline"
];
RuntimeDirectoryMode = "700";
# This is for BindPaths= and BindReadOnlyPaths=
# to allow traversal of directories they create inside RootDirectory=
UMask = "0066";
StateDirectory = ["public-inbox"];
StateDirectoryMode = "0750";
WorkingDirectory = stateDir;
BindReadOnlyPaths = [
"/etc"
"/run/systemd"
"${config.i18n.glibcLocales}"
] ++
mapAttrsToList (name: inbox: inbox.description) cfg.inboxes ++
# Without confinement the whole Nix store
# is made available to the service
optionals (!config.systemd.services."public-inbox-${srv}".confinement.enable) [
"${pkgs.dash}/bin/dash:/bin/sh"
builtins.storeDir
];
# The following options are only for optimizing:
# systemd-analyze security public-inbox-'*'
AmbientCapabilities = "";
CapabilityBoundingSet = "";
# ProtectClock= adds DeviceAllow=char-rtc r
DeviceAllow = "";
LockPersonality = true;
MemoryDenyWriteExecute = true;
NoNewPrivileges = true;
PrivateNetwork = mkDefault (!needNetwork);
ProcSubset = "pid";
ProtectClock = true;
ProtectHome = mkDefault true;
ProtectHostname = true;
ProtectKernelLogs = true;
ProtectProc = "invisible";
#ProtectSystem = "strict";
RemoveIPC = true;
RestrictAddressFamilies = [ "AF_UNIX" ] ++
optionals needNetwork [ "AF_INET" "AF_INET6" ];
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallFilter = [
"@system-service"
"~@aio" "~@chown" "~@keyring" "~@memlock" "~@resources"
# Not removing @setuid and @privileged because Inline::C needs them.
# Not removing @timer because git upload-pack needs it.
];
SystemCallArchitectures = "native";
# The following options are redundant when confinement is enabled
RootDirectory = "/var/empty";
TemporaryFileSystem = "/";
PrivateMounts = true;
MountAPIVFS = true;
PrivateDevices = true;
PrivateTmp = true;
PrivateUsers = true;
ProtectControlGroups = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
};
confinement = {
# Until we agree upon doing it directly here in NixOS
# https://github.com/NixOS/nixpkgs/pull/104457#issuecomment-1115768447
# let the user choose to enable the confinement with:
# systemd.services.public-inbox-httpd.confinement.enable = true;
# systemd.services.public-inbox-imapd.confinement.enable = true;
# systemd.services.public-inbox-init.confinement.enable = true;
# systemd.services.public-inbox-nntpd.confinement.enable = true;
#enable = true;
mode = "full-apivfs";
# Inline::C needs a /bin/sh, and dash is enough
binSh = "${pkgs.dash}/bin/dash";
packages = [
pkgs.iana-etc
(getLib pkgs.nss)
pkgs.tzdata
];
};
};
in
{
options.services.public-inbox = {
enable = mkEnableOption "the public-inbox mail archiver";
package = mkOption {
type = types.package;
default = pkgs.public-inbox;
defaultText = literalExpression "pkgs.public-inbox";
description = "public-inbox package to use.";
};
path = mkOption {
type = with types; listOf package;
default = [];
example = literalExpression "with pkgs; [ spamassassin ]";
description = ''
Additional packages to place in the path of public-inbox-mda,
public-inbox-watch, etc.
'';
};
inboxes = mkOption {
description = ''
Inboxes to configure, where attribute names are inbox names.
'';
default = {};
type = types.attrsOf (types.submodule ({name, ...}: {
freeformType = types.attrsOf iniAtom;
options.inboxdir = mkOption {
type = types.str;
default = "${stateDir}/inboxes/${name}";
description = "The absolute path to the directory which hosts the public-inbox.";
};
options.address = mkOption {
type = with types; listOf str;
example = "example-discuss@example.org";
description = "The email addresses of the public-inbox.";
};
options.url = mkOption {
type = with types; nullOr str;
default = null;
example = "https://example.org/lists/example-discuss";
description = "URL where this inbox can be accessed over HTTP.";
};
options.description = mkOption {
type = types.str;
example = "user/dev discussion of public-inbox itself";
description = "User-visible description for the repository.";
apply = pkgs.writeText "public-inbox-description-${name}";
};
options.newsgroup = mkOption {
type = with types; nullOr str;
default = null;
description = "NNTP group name for the inbox.";
};
options.watch = mkOption {
type = with types; listOf str;
default = [];
description = "Paths for ${manref "public-inbox-watch" 1} to monitor for new mail.";
example = [ "maildir:/path/to/test.example.com.git" ];
};
options.watchheader = mkOption {
type = with types; nullOr str;
default = null;
example = "List-Id:<test@example.com>";
description = ''
If specified, ${manref "public-inbox-watch" 1} will only process
mail containing a matching header.
'';
};
options.coderepo = mkOption {
type = (types.listOf (types.enum (attrNames cfg.settings.coderepo))) // {
description = "list of coderepo names";
};
default = [];
description = "Nicknames of a 'coderepo' section associated with the inbox.";
};
}));
};
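# A minimal sketch (hypothetical inbox, assuming the options above):
#
#   services.public-inbox.inboxes.meta = {
#     address = [ "meta@example.org" ];
#     url = "https://example.org/meta";
#     description = "Discussion about example.org infrastructure";
#     watch = [ "maildir:/var/lib/public-inbox/watch/meta" ];
#   };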
imap = {
enable = mkEnableOption "the public-inbox IMAP server";
} // publicInboxDaemonOptions "imap" 993;
http = {
enable = mkEnableOption "the public-inbox HTTP server";
mounts = mkOption {
type = with types; listOf str;
default = [ "/" ];
example = [ "/lists/archives" ];
description = ''
Root paths or URLs that public-inbox will be served on.
If domain parts are present, only requests to those
domains will be accepted.
'';
};
args = (publicInboxDaemonOptions "http" 80).args;
port = mkOption {
type = with types; nullOr (either str port);
default = 80;
example = "/run/public-inbox-httpd.sock";
description = ''
Listening port or systemd's ListenStream= entry
to be used as a reverse proxy, e.g. in nginx:
<code>locations."/inbox".proxyPass = "http://unix:''${config.services.public-inbox.http.port}:/inbox";</code>
Set to null and use <code>systemd.sockets.public-inbox-httpd.listenStreams</code>
if you need a more advanced listening setup.
'';
};
};
mda = {
enable = mkEnableOption "the public-inbox Mail Delivery Agent";
args = mkOption {
type = with types; listOf str;
default = [];
description = "Command-line arguments to pass to ${manref "public-inbox-mda" 1}.";
};
};
postfix.enable = mkEnableOption "the integration into Postfix";
nntp = {
enable = mkEnableOption "the public-inbox NNTP server";
} // publicInboxDaemonOptions "nntp" 563;
spamAssassinRules = mkOption {
type = with types; nullOr path;
default = "${cfg.package.sa_config}/user/.spamassassin/user_prefs";
defaultText = literalExpression "\${cfg.package.sa_config}/user/.spamassassin/user_prefs";
description = "SpamAssassin configuration specific to public-inbox.";
};
settings = mkOption {
description = ''
Settings for the <link xlink:href="https://public-inbox.org/public-inbox-config.html">public-inbox config file</link>.
'';
default = {};
type = types.submodule {
freeformType = gitIni.type;
options.publicinbox = mkOption {
default = {};
description = "public inboxes";
type = types.submodule {
freeformType = with types; /*inbox name*/attrsOf (/*inbox option name*/attrsOf /*inbox option value*/iniAtom);
options.css = mkOption {
type = with types; listOf str;
default = [];
description = "The local path name of a CSS file for the PSGI web interface.";
};
options.nntpserver = mkOption {
type = with types; listOf str;
default = [];
example = [ "nntp://news.public-inbox.org" "nntps://news.public-inbox.org" ];
description = "NNTP URLs to this public-inbox instance";
};
options.wwwlisting = mkOption {
type = with types; enum [ "all" "404" "match=domain" ];
default = "404";
description = ''
Controls which lists (if any) are listed when the root
public-inbox URL is accessed over HTTP.
'';
};
};
};
options.publicinboxmda.spamcheck = mkOption {
type = with types; enum [ "spamc" "none" ];
default = "none";
description = ''
If set to spamc, ${manref "public-inbox-watch" 1} will filter spam
using SpamAssassin.
'';
};
options.publicinboxwatch.spamcheck = mkOption {
type = with types; enum [ "spamc" "none" ];
default = "none";
description = ''
If set to spamc, ${manref "public-inbox-watch" 1} will filter spam
using SpamAssassin.
'';
};
options.publicinboxwatch.watchspam = mkOption {
type = with types; nullOr str;
default = null;
example = "maildir:/path/to/spam";
description = ''
If set, mail in this maildir will be trained as spam and
deleted from all watched inboxes
'';
};
options.coderepo = mkOption {
default = {};
description = "code repositories";
type = types.attrsOf (types.submodule {
freeformType = types.attrsOf iniAtom;
options.cgitUrl = mkOption {
type = types.str;
description = "URL of a cgit instance";
};
options.dir = mkOption {
type = types.str;
description = "Path to a git repository";
};
});
};
};
};
openFirewall = mkEnableOption "opening the firewall when using a port option";
};
config = mkIf cfg.enable {
assertions = [
{ assertion = config.services.spamassassin.enable || !useSpamAssassin;
message = ''
public-inbox is configured to use SpamAssassin, but
services.spamassassin.enable is false. If you don't need
spam checking, set `services.public-inbox.settings.publicinboxmda.spamcheck' and
`services.public-inbox.settings.publicinboxwatch.spamcheck' to null.
'';
}
{ assertion = cfg.path != [] || !useSpamAssassin;
message = ''
public-inbox is configured to use SpamAssassin, but there is
no spamc executable in services.public-inbox.path. If you
don't need spam checking, set
`services.public-inbox.settings.publicinboxmda.spamcheck' and
`services.public-inbox.settings.publicinboxwatch.spamcheck' to null.
'';
}
];
services.public-inbox.settings =
filterAttrsRecursive (n: v: v != null) {
publicinbox = mapAttrs (n: filterAttrs (n: v: n != "description")) cfg.inboxes;
};
users = {
users.public-inbox = {
home = stateDir;
group = "public-inbox";
isSystemUser = true;
};
groups.public-inbox = {};
};
networking.firewall = mkIf cfg.openFirewall
{ allowedTCPPorts = mkMerge
(map (proto: (mkIf (cfg.${proto}.enable && types.port.check cfg.${proto}.port) [ cfg.${proto}.port ]))
["imap" "http" "nntp"]);
};
services.postfix = mkIf (cfg.postfix.enable && cfg.mda.enable) {
# Not sure limiting to 1 is necessary, but better safe than sorry.
config.public-inbox_destination_recipient_limit = "1";
# Register the addresses as existing
virtual =
concatStringsSep "\n" (mapAttrsToList (_: inbox:
concatMapStringsSep "\n" (address:
"${address} ${address}"
) inbox.address
) cfg.inboxes);
# Deliver the addresses with the public-inbox transport
transport =
concatStringsSep "\n" (mapAttrsToList (_: inbox:
concatMapStringsSep "\n" (address:
"${address} public-inbox:${address}"
) inbox.address
) cfg.inboxes);
# The public-inbox transport
masterConfig.public-inbox = {
type = "unix";
privileged = true; # Required for user=
command = "pipe";
args = [
"flags=X" # Report as a final delivery
"user=${with config.users; users."public-inbox".name + ":" + groups."public-inbox".name}"
# Specifying a nexthop when using the transport
# (e.g. test public-inbox:test) makes it possible to
# receive mails with an extension (e.g. test+foo).
"argv=${pkgs.writeShellScript "public-inbox-transport" ''
export HOME="${stateDir}"
export ORIGINAL_RECIPIENT="''${2:-1}"
export PATH="${makeBinPath cfg.path}:$PATH"
exec ${cfg.package}/bin/public-inbox-mda ${escapeShellArgs cfg.mda.args}
''} \${original_recipient} \${nexthop}"
];
};
};
systemd.sockets = mkMerge (map (proto:
mkIf (cfg.${proto}.enable && cfg.${proto}.port != null)
{ "public-inbox-${proto}d" = {
listenStreams = [ (toString cfg.${proto}.port) ];
wantedBy = [ "sockets.target" ];
};
}
) [ "imap" "http" "nntp" ]);
systemd.services = mkMerge [
(mkIf cfg.imap.enable
{ public-inbox-imapd = mkMerge [(serviceConfig "imapd") {
after = [ "public-inbox-init.service" "public-inbox-watch.service" ];
requires = [ "public-inbox-init.service" ];
serviceConfig = {
ExecStart = escapeShellArgs (
[ "${cfg.package}/bin/public-inbox-imapd" ] ++
cfg.imap.args ++
optionals (cfg.imap.cert != null) [ "--cert" cfg.imap.cert ] ++
optionals (cfg.imap.key != null) [ "--key" cfg.imap.key ]
);
};
}];
})
(mkIf cfg.http.enable
{ public-inbox-httpd = mkMerge [(serviceConfig "httpd") {
after = [ "public-inbox-init.service" "public-inbox-watch.service" ];
requires = [ "public-inbox-init.service" ];
serviceConfig = {
ExecStart = escapeShellArgs (
[ "${cfg.package}/bin/public-inbox-httpd" ] ++
cfg.http.args ++
# See https://public-inbox.org/public-inbox.git/tree/examples/public-inbox.psgi
# for upstream's example.
[ (pkgs.writeText "public-inbox.psgi" ''
#!${cfg.package.fullperl} -w
use strict;
use warnings;
use Plack::Builder;
use PublicInbox::WWW;
my $www = PublicInbox::WWW->new;
$www->preload;
builder {
# If reached through a reverse proxy,
# make it transparent by resetting some HTTP headers
# used by public-inbox to generate URIs.
enable 'ReverseProxy';
# No need to send a response body if it's an HTTP HEAD request.
enable 'Head';
# Route according to configured domains and root paths.
${concatMapStrings (path: ''
mount q(${path}) => sub { $www->call(@_); };
'') cfg.http.mounts}
}
'') ]
);
};
}];
})
(mkIf cfg.nntp.enable
{ public-inbox-nntpd = mkMerge [(serviceConfig "nntpd") {
after = [ "public-inbox-init.service" "public-inbox-watch.service" ];
requires = [ "public-inbox-init.service" ];
serviceConfig = {
ExecStart = escapeShellArgs (
[ "${cfg.package}/bin/public-inbox-nntpd" ] ++
cfg.nntp.args ++
optionals (cfg.nntp.cert != null) [ "--cert" cfg.nntp.cert ] ++
optionals (cfg.nntp.key != null) [ "--key" cfg.nntp.key ]
);
};
}];
})
(mkIf (any (inbox: inbox.watch != []) (attrValues cfg.inboxes)
|| cfg.settings.publicinboxwatch.watchspam != null)
{ public-inbox-watch = mkMerge [(serviceConfig "watch") {
inherit (cfg) path;
wants = [ "public-inbox-init.service" ];
requires = [ "public-inbox-init.service" ] ++
optional (cfg.settings.publicinboxwatch.spamcheck == "spamc") "spamassassin.service";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/public-inbox-watch";
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
};
}];
})
({ public-inbox-init = let
PI_CONFIG = gitIni.generate "public-inbox.ini"
(filterAttrsRecursive (n: v: v != null) cfg.settings);
in mkMerge [(serviceConfig "init") {
wantedBy = [ "multi-user.target" ];
restartIfChanged = true;
restartTriggers = [ PI_CONFIG ];
script = ''
set -ux
install -D -p ${PI_CONFIG} ${stateDir}/.public-inbox/config
'' + optionalString useSpamAssassin ''
install -m 0700 -o spamd -d ${stateDir}/.spamassassin
${optionalString (cfg.spamAssassinRules != null) ''
ln -sf ${cfg.spamAssassinRules} ${stateDir}/.spamassassin/user_prefs
''}
'' + concatStrings (mapAttrsToList (name: inbox: ''
if [ ! -e ${stateDir}/inboxes/${escapeShellArg name} ]; then
# public-inbox-init creates an inbox and adds it to a config file.
# It tries to atomically write the config file by creating
# another file in the same directory, and renaming it.
# This has the sad consequence that we can't use
# /dev/null, or it would try to create a file in /dev.
conf_dir="$(mktemp -d)"
PI_CONFIG=$conf_dir/conf \
${cfg.package}/bin/public-inbox-init -V2 \
${escapeShellArgs ([ name "${stateDir}/inboxes/${name}" inbox.url ] ++ inbox.address)}
rm -rf $conf_dir
fi
ln -sf ${inbox.description} \
${stateDir}/inboxes/${escapeShellArg name}/description
export GIT_DIR=${stateDir}/inboxes/${escapeShellArg name}/all.git
if test -d "$GIT_DIR"; then
# Config is inherited by each epoch repository,
# so just needs to be set for all.git.
${pkgs.git}/bin/git config core.sharedRepository 0640
fi
'') cfg.inboxes
) + ''
shopt -s nullglob
for inbox in ${stateDir}/inboxes/*/; do
# This should be idempotent, but only do it for new
# inboxes anyway because it's only needed once, and could
# be slow for large pre-existing inboxes.
ls -1 "$inbox" | grep -q '^xap' ||
${cfg.package}/bin/public-inbox-index "$inbox"
done
'';
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
StateDirectory = [
"public-inbox/.public-inbox"
"public-inbox/.public-inbox/emergency"
"public-inbox/inboxes"
];
};
}];
})
];
environment.systemPackages = with pkgs; [ cfg.package ];
};
meta.maintainers = with lib.maintainers; [ julm qyliss ];
}

View file

@ -135,7 +135,7 @@ in
User = "spamd";
Group = "spamd";
StateDirectory = "spamassassin";
ExecStartPost = "+${pkgs.systemd}/bin/systemctl -q --no-block try-reload-or-restart spamd.service";
ExecStartPost = "+${config.systemd.package}/bin/systemctl -q --no-block try-reload-or-restart spamd.service";
};
script = ''

View file

@ -296,6 +296,7 @@ in {
default = if lib.versionAtLeast config.system.stateVersion "22.05"
then "${cfg.dataDir}/media_store"
else "${cfg.dataDir}/media";
defaultText = "${cfg.dataDir}/media_store when system.stateVersion is at least 22.05, otherwise ${cfg.dataDir}/media";
description = ''
Directory where uploaded images and attachments are stored.
'';

View file

@ -222,6 +222,13 @@ in
for available options with which to populate settings.
'';
};
openRegistration = lib.mkOption {
type = lib.types.bool;
default = false;
description = ''
Allow open registration without secondary verification (reCAPTCHA).
'';
};
};
config = lib.mkIf cfg.enable {
@ -263,6 +270,8 @@ in
"--https-bind-address :${builtins.toString cfg.httpsPort}"
"--tls-cert ${cfg.tlsCert}"
"--tls-key ${cfg.tlsKey}"
] ++ lib.optionals cfg.openRegistration [
"--really-enable-open-registration"
]);
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
Restart = "on-failure";

View file

@ -204,7 +204,7 @@ in
NoNewPrivileges = true;
LockPersonality = true;
RestrictRealtime = true;
SystemCallFilter = ["@system-service" "~@priviledged" "@chown"];
SystemCallFilter = ["@system-service" "~@privileged" "@chown"];
SystemCallArchitectures = "native";
RestrictAddressFamilies = "AF_INET AF_INET6";
};

View file

@ -31,7 +31,7 @@ in {
settings = mkOption {
default = {};
description = "The INI configuration for Mbpfan.";
description = "INI configuration for Mbpfan.";
type = types.submodule {
freeformType = settingsFormat.type;
@ -39,32 +39,26 @@ in {
type = types.nullOr types.int;
default = 2000;
description = ''
The minimum fan speed. Setting to null enables automatic detection.
Check minimum fan limits with "cat /sys/devices/platform/applesmc.768/fan*_min".
'';
};
options.general.max_fan1_speed = mkOption {
type = types.nullOr types.int;
default = 6199;
description = ''
The maximum fan speed. Setting to null enables automatic detection.
Check maximum fan limits with "cat /sys/devices/platform/applesmc.768/fan*_max".
You can check minimum and maximum fan limits with
"cat /sys/devices/platform/applesmc.768/fan*_min" and
"cat /sys/devices/platform/applesmc.768/fan*_max" respectively.
Setting to null implies using default value from applesmc.
'';
};
options.general.low_temp = mkOption {
type = types.int;
default = 55;
description = "Temperature below which fan speed will be at minimum. Try ranges 55-63.";
description = "If temperature is below this, fans will run at minimum speed.";
};
options.general.high_temp = mkOption {
type = types.int;
default = 58;
description = "Fan will increase speed when higher than this temperature. Try ranges 58-66.";
description = "If temperature is above this, fan speed will gradually increase.";
};
options.general.max_temp = mkOption {
type = types.int;
default = 86;
description = "Fan will run at full speed above this temperature. Do not set it > 90.";
description = "If temperature is above this, fans will run at maximum speed.";
};
options.general.polling_interval = mkOption {
type = types.int;

View file

@ -277,7 +277,7 @@ in
Add settings here to override NixOS module generated settings.
Check the official repository for the available settings:
https://github.com/zedeus/nitter/blob/master/nitter.conf
https://github.com/zedeus/nitter/blob/master/nitter.example.conf
'';
};

View file

@ -3,7 +3,11 @@
with lib;
let
json = pkgs.formats.json { };
cfg = config.services.prometheus;
checkConfigEnabled =
(lib.isBool cfg.checkConfig && cfg.checkConfig)
|| cfg.checkConfig == "syntax-only";
workingDir = "/var/lib/" + cfg.stateDir;
@ -26,7 +30,7 @@ let
# a wrapper that verifies that the configuration is valid
promtoolCheck = what: name: file:
if cfg.checkConfig then
if checkConfigEnabled then
pkgs.runCommandLocal
"${name}-${replaceStrings [" "] [""] what}-checked"
{ buildInputs = [ cfg.package ]; } ''
@ -34,13 +38,7 @@ let
promtool ${what} $out
'' else file;
# Pretty-print JSON to a file
writePrettyJSON = name: x:
pkgs.runCommandLocal name { } ''
echo '${builtins.toJSON x}' | ${pkgs.jq}/bin/jq . > $out
'';
generatedPrometheusYml = writePrettyJSON "prometheus.yml" promConfig;
generatedPrometheusYml = json.generate "prometheus.yml" promConfig;
# This becomes the main config file for Prometheus
promConfig = {
@ -63,7 +61,7 @@ let
pkgs.writeText "prometheus.yml" cfg.configText
else generatedPrometheusYml;
in
promtoolCheck "check config" "prometheus.yml" yml;
promtoolCheck "check config ${lib.optionalString (cfg.checkConfig == "syntax-only") "--syntax-only"}" "prometheus.yml" yml;
cmdlineArgs = cfg.extraFlags ++ [
"--storage.tsdb.path=${workingDir}/data/"
@ -1731,16 +1729,20 @@ in
};
checkConfig = mkOption {
type = types.bool;
type = with types; either bool (enum [ "syntax-only" ]);
default = true;
example = "syntax-only";
description = ''
Check configuration with <literal>promtool
check</literal>. The call to <literal>promtool</literal> is
subject to sandboxing by Nix. When credentials are stored in
external files (<literal>password_file</literal>,
<literal>bearer_token_file</literal>, etc), they will not be
visible to <literal>promtool</literal> and it will report
errors, despite a correct configuration.
subject to sandboxing by Nix.
If you use credentials stored in external files
(<literal>password_file</literal>, <literal>bearer_token_file</literal>, etc),
they will not be visible to <literal>promtool</literal>
and it will report errors, despite a correct configuration.
To resolve this, you may set this option to <literal>"syntax-only"</literal>
in order to only syntax check the Prometheus configuration.
'';
};
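# A minimal sketch (assuming scrape configs reference secrets through
# password_file or bearer_token_file, which the Nix sandbox cannot read):
#
#   services.prometheus.checkConfig = "syntax-only";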

View file

@ -181,7 +181,7 @@ with lib;
};
verbose = mkOption {
default = true;
default = false;
type = bool;
description = ''
Print verbose information.

View file

@ -36,11 +36,11 @@ config = mkIf cfg.enable {
preStart = ''
if [ ! -d ${cfg.settingsDir} ] ; then
mkdir -m 0750 -p ${cfg.settingsDir}
chown -R gateone.gateone ${cfg.settingsDir}
chown -R gateone:gateone ${cfg.settingsDir}
fi
if [ ! -d ${cfg.pidDir} ] ; then
mkdir -m 0750 -p ${cfg.pidDir}
chown -R gateone.gateone ${cfg.pidDir}
chown -R gateone:gateone ${cfg.pidDir}
fi
'';
#unitConfig.RequiresMountsFor = "${cfg.settingsDir}";

View file

@ -98,7 +98,7 @@ serverinfo {
*
* openssl genrsa -out rsa.key 2048
* openssl rsa -in rsa.key -pubout -out rsa.pub
* chown <ircd-user>.<ircd.group> rsa.key rsa.pub
* chown <ircd-user>:<ircd.group> rsa.key rsa.pub
* chmod 0600 rsa.key
* chmod 0644 rsa.pub
*/

View file

@ -1,7 +1,6 @@
{ config, options, lib, pkgs, stdenv, ... }:
let
cfg = config.services.pleroma;
cookieFile = "/var/lib/pleroma/.cookie";
in {
options = {
services.pleroma = with lib; {
@ -9,7 +8,7 @@ in {
package = mkOption {
type = types.package;
default = pkgs.pleroma.override { inherit cookieFile; };
default = pkgs.pleroma;
defaultText = literalExpression "pkgs.pleroma";
description = "Pleroma package to use.";
};
@ -101,6 +100,7 @@ in {
after = [ "network-online.target" "postgresql.service" ];
wantedBy = [ "multi-user.target" ];
restartTriggers = [ config.environment.etc."/pleroma/config.exs".source ];
environment.RELEASE_COOKIE = "/var/lib/pleroma/.cookie";
serviceConfig = {
User = cfg.user;
Group = cfg.group;
@ -118,10 +118,10 @@ in {
# Better be safe than sorry migration-wise.
ExecStartPre =
let preScript = pkgs.writers.writeBashBin "pleromaStartPre" ''
if [ ! -f "${cookieFile}" ] || [ ! -s "${cookieFile}" ]
if [ ! -f /var/lib/pleroma/.cookie ]
then
echo "Creating cookie file"
dd if=/dev/urandom bs=1 count=16 | ${pkgs.hexdump}/bin/hexdump -e '16/1 "%02x"' > "${cookieFile}"
dd if=/dev/urandom bs=1 count=16 | hexdump -e '16/1 "%02x"' > /var/lib/pleroma/.cookie
fi
${cfg.package}/bin/pleroma_ctl migrate
'';

View file

@ -108,7 +108,7 @@ with lib;
#username pptpd password *
EOF
chown root.root "$secrets"
chown root:root "$secrets"
chmod 600 "$secrets"
'';

View file

@ -82,7 +82,7 @@ in
serviceConfig.Type = "forking";
preStart = ''
mkdir -m 0755 -p ${stateDir}
chown ${prayerUser}.${prayerGroup} ${stateDir}
chown ${prayerUser}:${prayerGroup} ${stateDir}
'';
script = "${prayer}/sbin/prayer --config-file=${prayerCfg}";
};

View file

@ -164,7 +164,7 @@ in {
StateDirectoryMode = "0750";
# Hardening
CapabilityBoundingSet = [ "" ];
DeviceAllow = [ "/dev/stdin" ];
DeviceAllow = [ "/dev/stdin" "/dev/urandom" ];
DevicePolicy = "strict";
IPAddressAllow = mkIf bindLocalhost "localhost";
IPAddressDeny = mkIf bindLocalhost "any";

View file

@ -293,6 +293,7 @@ in
kexAlgorithms = mkOption {
type = types.listOf types.str;
default = [
"sntrup761x25519-sha512@openssh.com"
"curve25519-sha256"
"curve25519-sha256@libssh.org"
"diffie-hellman-group-exchange-sha256"
@ -301,7 +302,7 @@ in
Allowed key exchange algorithms
</para>
<para>
Defaults to recommended settings from both
Uses the lower bound recommended in both
<link xlink:href="https://stribika.github.io/2015/01/04/secure-secure-shell.html" />
and
<link xlink:href="https://infosec.mozilla.org/guidelines/openssh#modern-openssh-67" />

View file

@ -226,10 +226,10 @@ in
ACTION=="add", SUBSYSTEM=="net", ENV{INTERFACE}=="${i}", TAG+="systemd", ENV{SYSTEMD_WANTS}+="supplicant-${replaceChars [" "] ["-"] iface}.service", TAG+="SUPPLICANT_ASSIGNED"''))}
${optionalString (hasAttr "WLAN" cfg) ''
ACTION=="add", SUBSYSTEM=="net", ENV{DEVTYPE}=="wlan", TAG!="SUPPLICANT_ASSIGNED", TAG+="systemd", PROGRAM="${pkgs.systemd}/bin/systemd-escape -p %E{INTERFACE}", ENV{SYSTEMD_WANTS}+="supplicant-wlan@$result.service"
ACTION=="add", SUBSYSTEM=="net", ENV{DEVTYPE}=="wlan", TAG!="SUPPLICANT_ASSIGNED", TAG+="systemd", PROGRAM="/run/current-system/systemd/bin/systemd-escape -p %E{INTERFACE}", ENV{SYSTEMD_WANTS}+="supplicant-wlan@$result.service"
''}
${optionalString (hasAttr "LAN" cfg) ''
ACTION=="add", SUBSYSTEM=="net", ENV{DEVTYPE}=="lan", TAG!="SUPPLICANT_ASSIGNED", TAG+="systemd", PROGRAM="${pkgs.systemd}/bin/systemd-escape -p %E{INTERFACE}", ENV{SYSTEMD_WANTS}+="supplicant-lan@$result.service"
ACTION=="add", SUBSYSTEM=="net", ENV{DEVTYPE}=="lan", TAG!="SUPPLICANT_ASSIGNED", TAG+="systemd", PROGRAM="/run/current-system/systemd/bin/systemd-escape -p %E{INTERFACE}", ENV{SYSTEMD_WANTS}+="supplicant-lan@$result.service"
''}
'';
})];

View file

@ -2,9 +2,13 @@
with lib;
let cfg = config.services.tailscale;
let
cfg = config.services.tailscale;
firewallOn = config.networking.firewall.enable;
rpfMode = config.networking.firewall.checkReversePath;
rpfIsStrict = rpfMode == true || rpfMode == "strict";
in {
meta.maintainers = with maintainers; [ danderson mbaillie ];
meta.maintainers = with maintainers; [ danderson mbaillie twitchyliquid64 ];
options.services.tailscale = {
enable = mkEnableOption "Tailscale client daemon";
@ -36,17 +40,34 @@ in {
};
config = mkIf cfg.enable {
warnings = optional (firewallOn && rpfIsStrict) "Strict reverse path filtering breaks Tailscale exit node use and some subnet routing setups. Consider setting `networking.firewall.checkReversePath` = 'loose'";
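# A minimal sketch (assuming this host acts as an exit node or subnet
# router) that relaxes reverse path filtering instead of disabling the
# firewall:
#
#   networking.firewall.checkReversePath = "loose";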
environment.systemPackages = [ cfg.package ]; # for the CLI
systemd.packages = [ cfg.package ];
systemd.services.tailscaled = {
wantedBy = [ "multi-user.target" ];
path = [ pkgs.openresolv pkgs.procps ];
path = [
pkgs.openresolv # for configuring DNS in some configs
pkgs.procps # for collecting running services (opt-in feature)
pkgs.glibc # for `getent` to look up user shells
];
serviceConfig.Environment = [
"PORT=${toString cfg.port}"
''"FLAGS=--tun ${lib.escapeShellArg cfg.interfaceName}"''
] ++ (lib.optionals (cfg.permitCertUid != null) [
"TS_PERMIT_CERT_UID=${cfg.permitCertUid}"
]);
# Restart tailscaled with a single `systemctl restart` at the
# end of activation, rather than a `stop` followed by a later
# `start`. Activation over Tailscale can hang for tens of
# seconds in the stop+start setup, if the activation script has
# a significant delay between the stop and start phases
# (e.g. script blocked on another unit with a slow shutdown).
#
# Tailscale is aware of the correctness tradeoff involved, and
# already makes its upstream systemd unit robust against unit
# version mismatches on restart for compatibility with other
# linux distros.
stopIfChanged = false;
};
};
}

View file

@ -0,0 +1,106 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.uptermd;
in
{
options = {
services.uptermd = {
enable = mkEnableOption "uptermd";
openFirewall = mkOption {
type = types.bool;
default = false;
description = ''
Whether to open the firewall for the port in <option>services.uptermd.port</option>.
'';
};
port = mkOption {
type = types.port;
default = 2222;
description = ''
Port the server will listen on.
'';
};
listenAddress = mkOption {
type = types.str;
default = "[::]";
example = "127.0.0.1";
description = ''
Address the server will listen on.
'';
};
hostKey = mkOption {
type = types.nullOr types.path;
default = null;
example = "/run/keys/upterm_host_ed25519_key";
description = ''
Path to SSH host key. If not defined, an ed25519 keypair is generated automatically.
'';
};
extraFlags = mkOption {
type = types.listOf types.str;
default = [];
example = [ "--debug" ];
description = ''
Extra flags passed to the uptermd command.
'';
};
};
};
config = mkIf cfg.enable {
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ cfg.port ];
};
systemd.services.uptermd = {
description = "Upterm Daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [ pkgs.openssh ];
preStart = mkIf (cfg.hostKey == null) ''
if ! [ -f ssh_host_ed25519_key ]; then
ssh-keygen \
-t ed25519 \
-f ssh_host_ed25519_key \
-N ""
fi
'';
serviceConfig = {
StateDirectory = "uptermd";
WorkingDirectory = "/var/lib/uptermd";
ExecStart = "${pkgs.upterm}/bin/uptermd --ssh-addr ${cfg.listenAddress}:${toString cfg.port} --private-key ${if cfg.hostKey == null then "ssh_host_ed25519_key" else cfg.hostKey} ${concatStringsSep " " cfg.extraFlags}";
# Hardening
AmbientCapabilities = mkIf (cfg.port < 1024) [ "CAP_NET_BIND_SERVICE" ];
CapabilityBoundingSet = mkIf (cfg.port < 1024) [ "CAP_NET_BIND_SERVICE" ];
PrivateUsers = cfg.port >= 1024;
LockPersonality = true;
MemoryDenyWriteExecute = true;
PrivateDevices = true;
ProtectClock = true;
ProtectControlGroups = true;
ProtectHome = true;
ProtectHostname = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
RestrictNamespaces = true;
RestrictRealtime = true;
SystemCallArchitectures = "native";
SystemCallFilter = "@system-service";
};
};
};
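# A minimal sketch (hypothetical host configuration, using the defaults
# above) that enables the daemon and opens its port:
#
#   services.uptermd = {
#     enable = true;
#     openFirewall = true;
#   };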
}

View file

@ -211,7 +211,7 @@ let
postUp =
optional (values.privateKeyFile != null) "wg set ${name} private-key <(cat ${values.privateKeyFile})" ++
(concatMap (peer: optional (peer.presharedKeyFile != null) "wg set ${name} peer ${peer.publicKey} preshared-key <(cat ${peer.presharedKeyFile})") values.peers) ++
optional (values.postUp != null) values.postUp;
optional (values.postUp != "") values.postUp;
postUpFile = if postUp != [] then writeScriptFile "postUp.sh" (concatMapStringsSep "\n" (line: line) postUp) else null;
preDownFile = if values.preDown != "" then writeScriptFile "preDown.sh" values.preDown else null;
postDownFile = if values.postDown != "" then writeScriptFile "postDown.sh" values.postDown else null;

View file

@ -301,8 +301,9 @@ let
{
description = "WireGuard Peer - ${interfaceName} - ${peer.publicKey}";
requires = [ "wireguard-${interfaceName}.service" ];
after = [ "wireguard-${interfaceName}.service" ];
wantedBy = [ "multi-user.target" "wireguard-${interfaceName}.service" ];
wants = [ "network-online.target" ];
after = [ "wireguard-${interfaceName}.service" "network-online.target" ];
wantedBy = [ "wireguard-${interfaceName}.service" ];
environment.DEVICE = interfaceName;
environment.WG_ENDPOINT_RESOLUTION_RETRIES = "infinity";
path = with pkgs; [ iproute2 wireguard-tools ];
@ -379,8 +380,9 @@ let
nameValuePair "wireguard-${name}"
{
description = "WireGuard Tunnel - ${name}";
requires = [ "network-online.target" ];
after = [ "network.target" "network-online.target" ];
after = [ "network-pre.target" ];
wants = [ "network.target" ];
before = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
environment.DEVICE = name;
path = with pkgs; [ kmod iproute2 wireguard-tools ];

View file

@ -116,18 +116,18 @@ with lib;
#username xl2tpd password *
EOF
chown root.root ppp/chap-secrets
chown root:root ppp/chap-secrets
chmod 600 ppp/chap-secrets
# The documentation says this file should be present, but doesn't explain why; things work even if it is missing:
[ -f l2tp-secrets ] || (echo -n "* * "; ${pkgs.apg}/bin/apg -n 1 -m 32 -x 32 -a 1 -M LCN) > l2tp-secrets
chown root.root l2tp-secrets
chown root:root l2tp-secrets
chmod 600 l2tp-secrets
popd > /dev/null
mkdir -p /run/xl2tpd
chown root.root /run/xl2tpd
chown root:root /run/xl2tpd
chmod 700 /run/xl2tpd
'';

View file

@ -0,0 +1,345 @@
{ config, lib, options, pkgs, ... }:
let
cfg = config.services.kanidm;
settingsFormat = pkgs.formats.toml { };
# Remove null values, so we can document optional values that don't end up in the generated TOML file.
filterConfig = lib.converge (lib.filterAttrsRecursive (_: v: v != null));
serverConfigFile = settingsFormat.generate "server.toml" (filterConfig cfg.serverSettings);
clientConfigFile = settingsFormat.generate "kanidm-config.toml" (filterConfig cfg.clientSettings);
unixConfigFile = settingsFormat.generate "kanidm-unixd.toml" (filterConfig cfg.unixSettings);
defaultServiceConfig = {
BindReadOnlyPaths = [
"/nix/store"
"-/etc/resolv.conf"
"-/etc/nsswitch.conf"
"-/etc/hosts"
"-/etc/localtime"
];
CapabilityBoundingSet = "";
# ProtectClock= adds DeviceAllow=char-rtc r
DeviceAllow = "";
# Implies ProtectSystem=strict, which re-mounts all paths
# DynamicUser = true;
LockPersonality = true;
MemoryDenyWriteExecute = true;
NoNewPrivileges = true;
PrivateDevices = true;
PrivateMounts = true;
PrivateNetwork = true;
PrivateTmp = true;
PrivateUsers = true;
ProcSubset = "pid";
ProtectClock = true;
ProtectHome = true;
ProtectHostname = true;
# Would re-mount paths ignored by temporary root
#ProtectSystem = "strict";
ProtectControlGroups = true;
ProtectKernelLogs = true;
ProtectKernelModules = true;
ProtectKernelTunables = true;
ProtectProc = "invisible";
RestrictAddressFamilies = [ ];
RestrictNamespaces = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
SystemCallArchitectures = "native";
SystemCallFilter = [ "@system-service" "~@privileged @resources @setuid @keyring" ];
# Does not work well with the temporary root
#UMask = "0066";
};
in
{
options.services.kanidm = {
enableClient = lib.mkEnableOption "the Kanidm client";
enableServer = lib.mkEnableOption "the Kanidm server";
enablePam = lib.mkEnableOption "the Kanidm PAM and NSS integration.";
serverSettings = lib.mkOption {
type = lib.types.submodule {
freeformType = settingsFormat.type;
options = {
bindaddress = lib.mkOption {
description = "Address/port combination the webserver binds to.";
example = "[::1]:8443";
type = lib.types.str;
};
# Should be optional but toml does not accept null
ldapbindaddress = lib.mkOption {
description = ''
Address and port the LDAP server is bound to. Setting this to <literal>null</literal> disables the LDAP interface.
'';
example = "[::1]:636";
default = null;
type = lib.types.nullOr lib.types.str;
};
origin = lib.mkOption {
description = "The origin of your Kanidm instance. Must have https as protocol.";
example = "https://idm.example.org";
type = lib.types.strMatching "^https://.*";
};
domain = lib.mkOption {
description = ''
The <literal>domain</literal> that Kanidm manages. Must be below or equal to the domain
specified in <literal>serverSettings.origin</literal>.
This can be left at <literal>null</literal> only if your instance has the role <literal>ReadOnlyReplica</literal>.
While it is possible to change the domain later on, it requires extra steps!
Please consider the warnings and execute the steps described
<link xlink:href="https://kanidm.github.io/kanidm/stable/administrivia.html#rename-the-domain">in the documentation</link>.
'';
example = "example.org";
default = null;
type = lib.types.nullOr lib.types.str;
};
db_path = lib.mkOption {
description = "Path to Kanidm database.";
default = "/var/lib/kanidm/kanidm.db";
readOnly = true;
type = lib.types.path;
};
log_level = lib.mkOption {
description = "Log level of the server.";
default = "default";
type = lib.types.enum [ "default" "verbose" "perfbasic" "perffull" ];
};
role = lib.mkOption {
description = "The role of this server. This affects the replication relationship and thereby available features.";
default = "WriteReplica";
type = lib.types.enum [ "WriteReplica" "WriteReplicaNoUI" "ReadOnlyReplica" ];
};
};
};
default = { };
description = ''
Settings for Kanidm, see
<link xlink:href="https://github.com/kanidm/kanidm/blob/master/kanidm_book/src/server_configuration.md">the documentation</link>
and <link xlink:href="https://github.com/kanidm/kanidm/blob/master/examples/server.toml">example configuration</link>
for possible values.
'';
};
clientSettings = lib.mkOption {
type = lib.types.submodule {
freeformType = settingsFormat.type;
options.uri = lib.mkOption {
description = "Address of the Kanidm server.";
example = "http://127.0.0.1:8080";
type = lib.types.str;
};
};
description = ''
Configure Kanidm clients, needed for the PAM daemon. See
<link xlink:href="https://github.com/kanidm/kanidm/blob/master/kanidm_book/src/client_tools.md#kanidm-configuration">the documentation</link>
and <link xlink:href="https://github.com/kanidm/kanidm/blob/master/examples/config">example configuration</link>
for possible values.
'';
};
unixSettings = lib.mkOption {
type = lib.types.submodule {
freeformType = settingsFormat.type;
options.pam_allowed_login_groups = lib.mkOption {
description = "Kanidm groups that are allowed to login using PAM.";
example = "my_pam_group";
type = lib.types.listOf lib.types.str;
};
};
description = ''
Configure Kanidm unix daemon.
See <link xlink:href="https://github.com/kanidm/kanidm/blob/master/kanidm_book/src/pam_and_nsswitch.md#the-unix-daemon">the documentation</link>
and <link xlink:href="https://github.com/kanidm/kanidm/blob/master/examples/unixd">example configuration</link>
for possible values.
'';
};
};
config = lib.mkIf (cfg.enableClient || cfg.enableServer || cfg.enablePam) {
assertions =
[
{
assertion = !cfg.enableServer || ((cfg.serverSettings.tls_chain or null) == null) || (!lib.isStorePath cfg.serverSettings.tls_chain);
message = ''
<option>services.kanidm.serverSettings.tls_chain</option> points to
a file in the Nix store. You should use a quoted absolute path to
prevent this.
'';
}
{
assertion = !cfg.enableServer || ((cfg.serverSettings.tls_key or null) == null) || (!lib.isStorePath cfg.serverSettings.tls_key);
message = ''
<option>services.kanidm.serverSettings.tls_key</option> points to
a file in the Nix store. You should use a quoted absolute path to
prevent this.
'';
}
{
assertion = !cfg.enableClient || options.services.kanidm.clientSettings.isDefined;
message = ''
<option>services.kanidm.clientSettings</option> needs to be configured
if the client is enabled.
'';
}
{
assertion = !cfg.enablePam || options.services.kanidm.clientSettings.isDefined;
message = ''
<option>services.kanidm.clientSettings</option> needs to be configured
for the PAM daemon to connect to the Kanidm server.
'';
}
{
assertion = !cfg.enableServer || (cfg.serverSettings.domain == null
-> cfg.serverSettings.role == "WriteReplica" || cfg.serverSettings.role == "WriteReplicaNoUI");
message = ''
<option>services.kanidm.serverSettings.domain</option> can only be set if this instance
is not a ReadOnlyReplica. Otherwise the db would inherit it from
the instance it follows.
'';
}
];
environment.systemPackages = lib.mkIf cfg.enableClient [ pkgs.kanidm ];
systemd.services.kanidm = lib.mkIf cfg.enableServer {
description = "kanidm identity management daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
serviceConfig = defaultServiceConfig // {
StateDirectory = "kanidm";
StateDirectoryMode = "0700";
ExecStart = "${pkgs.kanidm}/bin/kanidmd server -c ${serverConfigFile}";
User = "kanidm";
Group = "kanidm";
AmbientCapabilities = [ "CAP_NET_BIND_SERVICE" ];
CapabilityBoundingSet = [ "CAP_NET_BIND_SERVICE" ];
# This would otherwise override the CAP_NET_BIND_SERVICE capability.
PrivateUsers = false;
# Port needs to be exposed to the host network
PrivateNetwork = false;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" ];
TemporaryFileSystem = "/:ro";
};
environment.RUST_LOG = "info";
};
systemd.services.kanidm-unixd = lib.mkIf cfg.enablePam {
description = "Kanidm PAM daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
restartTriggers = [ unixConfigFile clientConfigFile ];
serviceConfig = defaultServiceConfig // {
CacheDirectory = "kanidm-unixd";
CacheDirectoryMode = "0700";
RuntimeDirectory = "kanidm-unixd";
ExecStart = "${pkgs.kanidm}/bin/kanidm_unixd";
User = "kanidm-unixd";
Group = "kanidm-unixd";
BindReadOnlyPaths = [
"/nix/store"
"-/etc/resolv.conf"
"-/etc/nsswitch.conf"
"-/etc/hosts"
"-/etc/localtime"
"-/etc/kanidm"
"-/etc/static/kanidm"
];
BindPaths = [
# To create the socket
"/run/kanidm-unixd:/var/run/kanidm-unixd"
];
# Needs to connect to kanidmd
PrivateNetwork = false;
RestrictAddressFamilies = [ "AF_INET" "AF_INET6" "AF_UNIX" ];
TemporaryFileSystem = "/:ro";
};
environment.RUST_LOG = "info";
};
systemd.services.kanidm-unixd-tasks = lib.mkIf cfg.enablePam {
description = "Kanidm PAM home management daemon";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "kanidm-unixd.service" ];
partOf = [ "kanidm-unixd.service" ];
restartTriggers = [ unixConfigFile clientConfigFile ];
serviceConfig = {
ExecStart = "${pkgs.kanidm}/bin/kanidm_unixd_tasks";
BindReadOnlyPaths = [
"/nix/store"
"-/etc/resolv.conf"
"-/etc/nsswitch.conf"
"-/etc/hosts"
"-/etc/localtime"
"-/etc/kanidm"
"-/etc/static/kanidm"
];
BindPaths = [
# To manage home directories
"/home"
# To connect to kanidm-unixd
"/run/kanidm-unixd:/var/run/kanidm-unixd"
];
# CAP_DAC_OVERRIDE is needed to ignore ownership of unixd socket
CapabilityBoundingSet = [ "CAP_CHOWN" "CAP_FOWNER" "CAP_DAC_OVERRIDE" "CAP_DAC_READ_SEARCH" ];
IPAddressDeny = "any";
# Need access to users
PrivateUsers = false;
# Need access to home directories
ProtectHome = false;
RestrictAddressFamilies = [ "AF_UNIX" ];
TemporaryFileSystem = "/:ro";
};
environment.RUST_LOG = "info";
};
# These paths are hardcoded
environment.etc = lib.mkMerge [
(lib.mkIf options.services.kanidm.clientSettings.isDefined {
"kanidm/config".source = clientConfigFile;
})
(lib.mkIf cfg.enablePam {
"kanidm/unixd".source = unixConfigFile;
})
];
system.nssModules = lib.mkIf cfg.enablePam [ pkgs.kanidm ];
system.nssDatabases.group = lib.optional cfg.enablePam "kanidm";
system.nssDatabases.passwd = lib.optional cfg.enablePam "kanidm";
users.groups = lib.mkMerge [
(lib.mkIf cfg.enableServer {
kanidm = { };
})
(lib.mkIf cfg.enablePam {
kanidm-unixd = { };
})
];
users.users = lib.mkMerge [
(lib.mkIf cfg.enableServer {
kanidm = {
description = "Kanidm server";
isSystemUser = true;
group = "kanidm";
packages = with pkgs; [ kanidm ];
};
})
(lib.mkIf cfg.enablePam {
kanidm-unixd = {
description = "Kanidm PAM daemon";
isSystemUser = true;
group = "kanidm-unixd";
};
})
];
};
meta.maintainers = with lib.maintainers; [ erictapen Flakebi ];
meta.buildDocsInSandbox = false;
}
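
For illustration only (not part of this change): a host importing the module above might enable the server plus the PAM/NSS integration roughly as follows. All option paths come from the definitions above; the domain, TLS paths and group name are assumptions, and tls_chain/tls_key are freeform server.toml keys referenced only by the assertions above.

# Hypothetical consumer configuration (a sketch, assuming the default WriteReplica role):
{
  services.kanidm = {
    enableServer = true;
    serverSettings = {
      origin = "https://idm.example.org";
      domain = "example.org";
      bindaddress = "[::1]:8443";
      ldapbindaddress = "[::1]:636";
      # Quoted absolute paths, not Nix store paths (see the assertions above).
      tls_chain = "/var/lib/kanidm/chain.pem";
      tls_key = "/var/lib/kanidm/key.pem";
    };
    enablePam = true;
    clientSettings.uri = "https://idm.example.org";
    unixSettings.pam_allowed_login_groups = [ "posix_logins" ];
  };
}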

View file

@@ -17,7 +17,7 @@ let
else "sshg-fw-ipset";
in pkgs.writeText "sshguard.conf" ''
BACKEND="${pkgs.sshguard}/libexec/${backend}"
LOGREADER="LANG=C ${pkgs.systemd}/bin/journalctl ${args}"
LOGREADER="LANG=C ${config.systemd.package}/bin/journalctl ${args}"
'';
in {

View file

@@ -88,7 +88,7 @@ in {
account required pam_unix.so
session required pam_unix.so
session required pam_env.so conffile=/etc/pam/environment readenv=0
session required ${pkgs.systemd}/lib/security/pam_systemd.so
session required ${config.systemd.package}/lib/security/pam_systemd.so
'';
hardware.opengl.enable = mkDefault true;

View file

@@ -189,6 +189,8 @@ in
User = cfg.user;
Group = cfg.group;
PrivateTmp = true;
Restart = "on-failure";
RestartSec = "10";
ExecStart = "${pkg}/bin/start-confluence.sh -fg";
ExecStop = "${pkg}/bin/stop-confluence.sh";
};

View file

@@ -157,6 +157,8 @@ in
User = cfg.user;
Group = cfg.group;
PrivateTmp = true;
Restart = "on-failure";
RestartSec = "10";
ExecStart = "${pkg}/start_crowd.sh -fg";
};
};

View file

@@ -197,6 +197,8 @@ in
User = cfg.user;
Group = cfg.group;
PrivateTmp = true;
Restart = "on-failure";
RestartSec = "10";
ExecStart = "${pkg}/bin/start-jira.sh -fg";
ExecStop = "${pkg}/bin/stop-jira.sh";
};

View file

@@ -1023,6 +1023,7 @@ in
'';
serviceConfig = {
WorkingDirectory = cfg.workDir;
StateDirectory = [ cfg.workDir cfg.configuration.uploadsPath ];
ExecStart = "${cfg.package}/bin/hedgedoc";
EnvironmentFile = mkIf (cfg.environmentFile != null) [ cfg.environmentFile ];
Environment = [

View file

@@ -294,7 +294,7 @@ in {
port = lib.mkOption {
description = "Redis port.";
type = lib.types.port;
default = 6379;
default = 31637;
};
};
@@ -605,8 +605,10 @@ in {
enable = true;
hostname = lib.mkDefault "${cfg.localDomain}";
};
services.redis = lib.mkIf (cfg.redis.createLocally && cfg.redis.host == "127.0.0.1") {
services.redis.servers.mastodon = lib.mkIf (cfg.redis.createLocally && cfg.redis.host == "127.0.0.1") {
enable = true;
port = cfg.redis.port;
bind = "127.0.0.1";
};
services.postgresql = lib.mkIf databaseActuallyCreateLocally {
enable = true;
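
For context, a sketch of how the new default is consumed (not part of this change; it assumes the pre-existing services.mastodon options referenced above): with redis.createLocally enabled and the host left at 127.0.0.1, the module now provisions a dedicated services.redis.servers.mastodon instance on the port defaulted above instead of reusing the global Redis server.

# Illustrative host configuration:
{
  services.mastodon = {
    enable = true;
    localDomain = "social.example.org";
    redis.createLocally = true;  # creates services.redis.servers.mastodon on port 31637
  };
}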

View file

@@ -153,11 +153,11 @@ in {
package = mkOption {
type = types.package;
description = "Which package to use for the Nextcloud instance.";
relatedPackages = [ "nextcloud22" "nextcloud23" ];
relatedPackages = [ "nextcloud22" "nextcloud23" "nextcloud24" ];
};
phpPackage = mkOption {
type = types.package;
relatedPackages = [ "php74" "php80" ];
relatedPackages = [ "php74" "php80" "php81" ];
defaultText = "pkgs.php";
description = ''
PHP package to use for Nextcloud.
@@ -546,16 +546,29 @@ in {
'';
};
nginx.recommendedHttpHeaders = mkOption {
nginx = {
recommendedHttpHeaders = mkOption {
type = types.bool;
default = true;
description = "Enable additional recommended HTTP response headers";
};
hstsMaxAge = mkOption {
type = types.ints.positive;
default = 15552000;
description = ''
Value for the <code>max-age</code> directive of the HTTP
<code>Strict-Transport-Security</code> header.
See section 6.1.1 of IETF RFC 6797 for detailed information on this
directive and header.
'';
};
};
};
config = mkIf cfg.enable (mkMerge [
{ warnings = let
latest = 23;
latest = 24;
upgradeWarning = major: nixos:
''
A legacy Nextcloud install (from before NixOS ${nixos}) may be installed.
@@ -591,6 +604,7 @@ in {
++ (optional (versionOlder cfg.package.version "21") (upgradeWarning 20 "21.05"))
++ (optional (versionOlder cfg.package.version "22") (upgradeWarning 21 "21.11"))
++ (optional (versionOlder cfg.package.version "23") (upgradeWarning 22 "22.05"))
++ (optional (versionOlder cfg.package.version "24") (upgradeWarning 23 "22.05"))
++ (optional isUnsupportedMariadb ''
You seem to be using MariaDB at an unsupported version (i.e. at least 10.6)!
Please note that this isn't supported officially by Nextcloud. You can either
@@ -613,14 +627,15 @@ in {
''
else if versionOlder stateVersion "21.11" then nextcloud21
else if versionOlder stateVersion "22.05" then nextcloud22
else nextcloud23
else nextcloud24
);
services.nextcloud.datadir = mkOptionDefault config.services.nextcloud.home;
services.nextcloud.phpPackage =
if versionOlder cfg.package.version "21" then pkgs.php74
else pkgs.php80;
else if versionOlder cfg.package.version "24" then pkgs.php80
else pkgs.php81;
}
{ assertions = [
@@ -702,7 +717,7 @@ in {
'skeletondirectory' => '${cfg.skeletonDirectory}',
${optionalString cfg.caching.apcu "'memcache.local' => '\\OC\\Memcache\\APCu',"}
'log_type' => 'syslog',
'log_level' => '${builtins.toString cfg.logLevel}',
'loglevel' => '${builtins.toString cfg.logLevel}',
${optionalString (c.overwriteProtocol != null) "'overwriteprotocol' => '${c.overwriteProtocol}',"}
${optionalString (c.dbname != null) "'dbname' => '${c.dbname}',"}
${optionalString (c.dbhost != null) "'dbhost' => '${c.dbhost}',"}
@@ -871,7 +886,7 @@ in {
# FIXME(@Ma27) Nextcloud isn't compatible with mariadb 10.6,
# this is a workaround.
# See https://help.nextcloud.com/t/update-to-next-cloud-21-0-2-has-get-an-error/117028/22
settings = {
settings = mkIf (versionOlder cfg.package.version "24") {
mysqld = {
innodb_read_only_compressed = 0;
};
@@ -983,7 +998,9 @@ in {
add_header X-Permitted-Cross-Domain-Policies none;
add_header X-Frame-Options sameorigin;
add_header Referrer-Policy no-referrer;
add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
''}
${optionalString (cfg.https) ''
add_header Strict-Transport-Security "max-age=${toString cfg.nginx.hstsMaxAge}; includeSubDomains" always;
''}
client_max_body_size ${cfg.maxUploadSize};
fastcgi_buffers 64 4K;
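
As a brief, non-authoritative sketch of the knobs touched above (the nextcloud24/php81 defaults and the new nginx.hstsMaxAge option); hostName and https are pre-existing Nextcloud options assumed here:

{ pkgs, ... }:
{
  services.nextcloud = {
    enable = true;
    package = pkgs.nextcloud24;      # the new default for state versions >= 22.05
    hostName = "cloud.example.org";
    https = true;
    nginx.recommendedHttpHeaders = true;
    nginx.hstsMaxAge = 31536000;     # one year instead of the 15552000-second default
  };
}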

View file

@@ -11,7 +11,7 @@
desktop client is packaged at <literal>pkgs.nextcloud-client</literal>.
</para>
<para>
The current default by NixOS is <package>nextcloud23</package> which is also the latest
The current default by NixOS is <package>nextcloud24</package> which is also the latest
major version available.
</para>
<section xml:id="module-services-nextcloud-basic-usage">

View file

@@ -294,7 +294,7 @@ in
ln -sf "${cfg.dataDir}/client/img" "${runDir}/client/img"
chmod g+w "${runDir}/tmp/cache"
chown -R "${cfg.user}"."${cfg.group}" "${runDir}"
chown -R "${cfg.user}":"${cfg.group}" "${runDir}"
mkdir -m 0750 -p "${cfg.dataDir}"
@@ -302,9 +302,9 @@ in
mkdir -m 0750 -p "${cfg.dataDir}/client/img"
cp -r "${pkgs.restya-board}/media/"* "${cfg.dataDir}/media"
cp -r "${pkgs.restya-board}/client/img/"* "${cfg.dataDir}/client/img"
chown "${cfg.user}"."${cfg.group}" "${cfg.dataDir}"
chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/media"
chown -R "${cfg.user}"."${cfg.group}" "${cfg.dataDir}/client/img"
chown "${cfg.user}":"${cfg.group}" "${cfg.dataDir}"
chown -R "${cfg.user}":"${cfg.group}" "${cfg.dataDir}/media"
chown -R "${cfg.user}":"${cfg.group}" "${cfg.dataDir}/client/img"
${optionalString (cfg.database.host == null) ''
if ! [ -e "${cfg.dataDir}/.db-initialized" ]; then

View file

@@ -0,0 +1,493 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.snipe-it;
snipe-it = pkgs.snipe-it.override {
dataDir = cfg.dataDir;
};
db = cfg.database;
mail = cfg.mail;
user = cfg.user;
group = cfg.group;
tlsEnabled = cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME;
# shell script for local administration
artisan = pkgs.writeScriptBin "snipe-it" ''
#! ${pkgs.runtimeShell}
cd ${snipe-it}
sudo=exec
if [[ "$USER" != ${user} ]]; then
sudo='exec /run/wrappers/bin/sudo -u ${user}'
fi
$sudo ${pkgs.php}/bin/php artisan $*
'';
in {
options.services.snipe-it = {
enable = mkEnableOption "A free open source IT asset/license management system";
user = mkOption {
default = "snipeit";
description = "User snipe-it runs as.";
type = types.str;
};
group = mkOption {
default = "snipeit";
description = "Group snipe-it runs as.";
type = types.str;
};
appKeyFile = mkOption {
description = ''
A file containing the Laravel APP_KEY - a 32 character long,
base64 encoded key used for encryption where needed. Can be
generated with <code>head -c 32 /dev/urandom | base64</code>.
'';
example = "/run/keys/snipe-it/appkey";
type = types.path;
};
hostName = lib.mkOption {
type = lib.types.str;
default = if config.networking.domain != null then
config.networking.fqdn
else
config.networking.hostName;
defaultText = lib.literalExpression "config.networking.fqdn";
example = "snipe-it.example.com";
description = ''
The hostname to serve Snipe-IT on.
'';
};
appURL = mkOption {
description = ''
The root URL that you want to host Snipe-IT on. All URLs in Snipe-IT will be generated using this value.
If you change this in the future you may need to run a command to update stored URLs in the database.
Command example: <code>snipe-it snipe-it:update-url https://old.example.com https://new.example.com</code>
'';
default = "http${lib.optionalString tlsEnabled "s"}://${cfg.hostName}";
defaultText = ''
http''${lib.optionalString tlsEnabled "s"}://''${cfg.hostName}
'';
example = "https://example.com";
type = types.str;
};
dataDir = mkOption {
description = "snipe-it data directory";
default = "/var/lib/snipe-it";
type = types.path;
};
database = {
host = mkOption {
type = types.str;
default = "localhost";
description = "Database host address.";
};
port = mkOption {
type = types.port;
default = 3306;
description = "Database host port.";
};
name = mkOption {
type = types.str;
default = "snipeit";
description = "Database name.";
};
user = mkOption {
type = types.str;
default = user;
defaultText = literalExpression "user";
description = "Database username.";
};
passwordFile = mkOption {
type = with types; nullOr path;
default = null;
example = "/run/keys/snipe-it/dbpassword";
description = ''
A file containing the password corresponding to
<option>database.user</option>.
'';
};
createLocally = mkOption {
type = types.bool;
default = false;
description = "Create the database and database user locally.";
};
};
mail = {
driver = mkOption {
type = types.enum [ "smtp" "sendmail" ];
default = "smtp";
description = "Mail driver to use.";
};
host = mkOption {
type = types.str;
default = "localhost";
description = "Mail host address.";
};
port = mkOption {
type = types.port;
default = 1025;
description = "Mail host port.";
};
encryption = mkOption {
type = with types; nullOr (enum [ "tls" "ssl" ]);
default = null;
description = "SMTP encryption mechanism to use.";
};
user = mkOption {
type = with types; nullOr str;
default = null;
example = "snipeit";
description = "Mail username.";
};
passwordFile = mkOption {
type = with types; nullOr path;
default = null;
example = "/run/keys/snipe-it/mailpassword";
description = ''
A file containing the password corresponding to
<option>mail.user</option>.
'';
};
backupNotificationAddress = mkOption {
type = types.str;
default = "backup@example.com";
description = "Email Address to send Backup Notifications to.";
};
from = {
name = mkOption {
type = types.str;
default = "Snipe-IT Asset Management";
description = "Mail \"from\" name.";
};
address = mkOption {
type = types.str;
default = "mail@example.com";
description = "Mail \"from\" address.";
};
};
replyTo = {
name = mkOption {
type = types.str;
default = "Snipe-IT Asset Management";
description = "Mail \"reply-to\" name.";
};
address = mkOption {
type = types.str;
default = "mail@example.com";
description = "Mail \"reply-to\" address.";
};
};
};
maxUploadSize = mkOption {
type = types.str;
default = "18M";
example = "1G";
description = "The maximum size for uploads (e.g. images).";
};
poolConfig = mkOption {
type = with types; attrsOf (oneOf [ str int bool ]);
default = {
"pm" = "dynamic";
"pm.max_children" = 32;
"pm.start_servers" = 2;
"pm.min_spare_servers" = 2;
"pm.max_spare_servers" = 4;
"pm.max_requests" = 500;
};
description = ''
Options for the snipe-it PHP pool. See the documentation on <literal>php-fpm.conf</literal>
for details on configuration directives.
'';
};
nginx = mkOption {
type = types.submodule (
recursiveUpdate
(import ../web-servers/nginx/vhost-options.nix { inherit config lib; }) {}
);
default = {};
example = literalExpression ''
{
serverAliases = [
"snipe-it.''${config.networking.domain}"
];
# To enable encryption and let Let's Encrypt take care of the certificate
forceSSL = true;
enableACME = true;
}
'';
description = ''
With this option, you can customize the nginx virtualHost settings.
'';
};
config = mkOption {
type = with types;
attrsOf
(nullOr
(either
(oneOf [
bool
int
port
path
str
])
(submodule {
options = {
_secret = mkOption {
type = nullOr (oneOf [ str path ]);
description = ''
The path to a file containing the value the
option should be set to in the final
configuration file.
'';
};
};
})));
default = {};
example = literalExpression ''
{
ALLOWED_IFRAME_HOSTS = "https://example.com";
WKHTMLTOPDF = "''${pkgs.wkhtmltopdf}/bin/wkhtmltopdf";
AUTH_METHOD = "oidc";
OIDC_NAME = "MyLogin";
OIDC_DISPLAY_NAME_CLAIMS = "name";
OIDC_CLIENT_ID = "snipe-it";
OIDC_CLIENT_SECRET = { _secret = "/run/keys/oidc_secret"; };
OIDC_ISSUER = "https://keycloak.example.com/auth/realms/My%20Realm";
OIDC_ISSUER_DISCOVER = true;
}
'';
description = ''
Snipe-IT configuration options to set in the
<filename>.env</filename> file.
Refer to <link xlink:href="https://snipe-it.readme.io/docs/configuration"/>
for details on supported values.
Settings containing secret data should be set to an attribute
set containing the attribute <literal>_secret</literal> - a
string pointing to a file containing the value the option
should be set to. See the example to get a better picture of
this: in the resulting <filename>.env</filename> file, the
<literal>OIDC_CLIENT_SECRET</literal> key will be set to the
contents of the <filename>/run/keys/oidc_secret</filename>
file.
'';
};
};
config = mkIf cfg.enable {
assertions = [
{ assertion = db.createLocally -> db.user == user;
message = "services.snipe-it.database.user must be set to ${user} if services.snipe-it.database.createLocally is set true.";
}
{ assertion = db.createLocally -> db.passwordFile == null;
message = "services.snipe-it.database.passwordFile cannot be specified if services.snipe-it.database.createLocally is set to true.";
}
];
environment.systemPackages = [ artisan ];
services.snipe-it.config = {
APP_ENV = "production";
APP_KEY._secret = cfg.appKeyFile;
APP_URL = cfg.appURL;
DB_HOST = db.host;
DB_PORT = db.port;
DB_DATABASE = db.name;
DB_USERNAME = db.user;
DB_PASSWORD._secret = db.passwordFile;
MAIL_DRIVER = mail.driver;
MAIL_FROM_NAME = mail.from.name;
MAIL_FROM_ADDR = mail.from.address;
MAIL_REPLYTO_NAME = mail.from.name;
MAIL_REPLYTO_ADDR = mail.from.address;
MAIL_BACKUP_NOTIFICATION_ADDRESS = mail.backupNotificationAddress;
MAIL_HOST = mail.host;
MAIL_PORT = mail.port;
MAIL_USERNAME = mail.user;
MAIL_ENCRYPTION = mail.encryption;
MAIL_PASSWORD._secret = mail.passwordFile;
APP_SERVICES_CACHE = "/run/snipe-it/cache/services.php";
APP_PACKAGES_CACHE = "/run/snipe-it/cache/packages.php";
APP_CONFIG_CACHE = "/run/snipe-it/cache/config.php";
APP_ROUTES_CACHE = "/run/snipe-it/cache/routes-v7.php";
APP_EVENTS_CACHE = "/run/snipe-it/cache/events.php";
SESSION_SECURE_COOKIE = tlsEnabled;
};
services.mysql = mkIf db.createLocally {
enable = true;
package = mkDefault pkgs.mariadb;
ensureDatabases = [ db.name ];
ensureUsers = [
{ name = db.user;
ensurePermissions = { "${db.name}.*" = "ALL PRIVILEGES"; };
}
];
};
services.phpfpm.pools.snipe-it = {
inherit user group;
phpPackage = pkgs.php74;
phpOptions = ''
post_max_size = ${cfg.maxUploadSize}
upload_max_filesize = ${cfg.maxUploadSize}
'';
settings = {
"listen.mode" = "0660";
"listen.owner" = user;
"listen.group" = group;
} // cfg.poolConfig;
};
services.nginx = {
enable = mkDefault true;
virtualHosts."${cfg.hostName}" = mkMerge [ cfg.nginx {
root = mkForce "${snipe-it}/public";
extraConfig = optionalString (cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME) "fastcgi_param HTTPS on;";
locations = {
"/" = {
index = "index.php";
extraConfig = ''try_files $uri $uri/ /index.php?$query_string;'';
};
"~ \.php$" = {
extraConfig = ''
try_files $uri $uri/ /index.php?$query_string;
include ${config.services.nginx.package}/conf/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param REDIRECT_STATUS 200;
fastcgi_pass unix:${config.services.phpfpm.pools."snipe-it".socket};
${optionalString (cfg.nginx.addSSL || cfg.nginx.forceSSL || cfg.nginx.onlySSL || cfg.nginx.enableACME) "fastcgi_param HTTPS on;"}
'';
};
"~ \.(js|css|gif|png|ico|jpg|jpeg)$" = {
extraConfig = "expires 365d;";
};
};
}];
};
systemd.services.snipe-it-setup = {
description = "Preparation tasks for snipe-it";
before = [ "phpfpm-snipe-it.service" ];
after = optional db.createLocally "mysql.service";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
User = user;
WorkingDirectory = snipe-it;
RuntimeDirectory = "snipe-it/cache";
RuntimeDirectoryMode = 0700;
};
path = [ pkgs.replace-secret ];
script =
let
isSecret = v: isAttrs v && v ? _secret && (isString v._secret || builtins.isPath v._secret);
snipeITEnvVars = lib.generators.toKeyValue {
mkKeyValue = lib.flip lib.generators.mkKeyValueDefault "=" {
mkValueString = v: with builtins;
if isInt v then toString v
else if isString v then "\"${v}\""
else if true == v then "true"
else if false == v then "false"
else if isSecret v then
if (isString v._secret) then
hashString "sha256" v._secret
else
hashString "sha256" (builtins.readFile v._secret)
else throw "unsupported type ${typeOf v}: ${(lib.generators.toPretty {}) v}";
};
};
secretPaths = lib.mapAttrsToList (_: v: v._secret) (lib.filterAttrs (_: isSecret) cfg.config);
mkSecretReplacement = file: ''
replace-secret ${escapeShellArgs [
(
if (isString file) then
builtins.hashString "sha256" file
else
builtins.hashString "sha256" (builtins.readFile file)
)
file
"${cfg.dataDir}/.env"
]}
'';
secretReplacements = lib.concatMapStrings mkSecretReplacement secretPaths;
filteredConfig = lib.converge (lib.filterAttrsRecursive (_: v: ! elem v [ {} null ])) cfg.config;
snipeITEnv = pkgs.writeText "snipeIT.env" (snipeITEnvVars filteredConfig);
in ''
# error handling
set -euo pipefail
# set permissions
umask 077
# create .env file
install -T -m 0600 -o ${user} ${snipeITEnv} "${cfg.dataDir}/.env"
# replace secrets
${secretReplacements}
# prepend `base64:` if it does not exist in APP_KEY
if ! grep 'APP_KEY=base64:' "${cfg.dataDir}/.env" >/dev/null; then
sed -i 's/APP_KEY=/APP_KEY=base64:/' "${cfg.dataDir}/.env"
fi
# purge cache
rm "${cfg.dataDir}"/bootstrap/cache/*.php || true
# migrate db
${pkgs.php}/bin/php artisan migrate --force
'';
};
systemd.tmpfiles.rules = [
"d ${cfg.dataDir} 0710 ${user} ${group} - -"
"d ${cfg.dataDir}/bootstrap 0750 ${user} ${group} - -"
"d ${cfg.dataDir}/bootstrap/cache 0750 ${user} ${group} - -"
"d ${cfg.dataDir}/public 0750 ${user} ${group} - -"
"d ${cfg.dataDir}/public/uploads 0750 ${user} ${group} - -"
"d ${cfg.dataDir}/storage 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/app 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/fonts 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/framework 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/framework/cache 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/framework/sessions 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/framework/views 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/logs 0700 ${user} ${group} - -"
"d ${cfg.dataDir}/storage/uploads 0700 ${user} ${group} - -"
];
users = {
users = mkIf (user == "snipeit") {
snipeit = {
inherit group;
isSystemUser = true;
};
"${config.services.nginx.user}".extraGroups = [ group ];
};
groups = mkIf (group == "snipeit") {
snipeit = {};
};
};
};
meta.maintainers = with maintainers; [ yayayayaka ];
}
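
A usage sketch for the module above (illustrative values; every option path used here is defined in this file):

{
  services.snipe-it = {
    enable = true;
    hostName = "assets.example.org";
    appKeyFile = "/run/keys/snipe-it/appkey";
    database.createLocally = true;
    mail.from.address = "snipe-it@example.org";
    nginx = {
      forceSSL = true;
      enableACME = true;
    };
  };
}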

View file

@@ -18,7 +18,7 @@ in
# determines the default: later modules (if enabled) are preferred.
# E.g., if Plasma 5 is enabled, it supersedes xterm.
imports = [
./none.nix ./xterm.nix ./xfce.nix ./plasma5.nix ./lumina.nix
./none.nix ./xterm.nix ./phosh.nix ./xfce.nix ./plasma5.nix ./lumina.nix
./lxqt.nix ./enlightenment.nix ./gnome.nix ./retroarch.nix ./kodi.nix
./mate.nix ./pantheon.nix ./surf-display.nix ./cde.nix
./cinnamon.nix
@@ -72,7 +72,7 @@ in
apply = map (d: d // {
manage = "desktop";
start = d.start
+ optionalString (needBGCond d) ''
+ optionalString (needBGCond d) ''\n\n
if [ -e $HOME/.background-image ]; then
${pkgs.feh}/bin/feh --bg-${cfg.wallpaper.mode} ${optionalString cfg.wallpaper.combineScreens "--no-xinerama"} $HOME/.background-image
fi

View file

@@ -3,7 +3,7 @@
with lib;
let
cfg = config.programs.phosh;
cfg = config.services.xserver.desktopManager.phosh;
# Based on https://source.puri.sm/Librem5/librem5-base/-/blob/4596c1056dd75ac7f043aede07887990fd46f572/default/sm.puri.OSK0.desktop
oskItem = pkgs.makeDesktopItem {
@@ -118,12 +118,39 @@ let
[cursor]
theme = ${phoc.cursorTheme}
'';
in {
in
{
options = {
programs.phosh = {
enable = mkEnableOption ''
Whether to enable, Phosh, related packages and default configurations.
services.xserver.desktopManager.phosh = {
enable = mkOption {
type = types.bool;
default = false;
description = "Enable the Phone Shell.";
};
package = mkOption {
type = types.package;
default = pkgs.phosh;
defaultText = literalExpression "pkgs.phosh";
example = literalExpression "pkgs.phosh";
description = ''
Package that should be used for Phosh.
'';
};
user = mkOption {
description = "The user to run the Phosh service.";
type = types.str;
example = "alice";
};
group = mkOption {
description = "The group to run the Phosh service.";
type = types.str;
example = "users";
};
phocConfig = mkOption {
description = ''
Configurations for the Phoc compositor.
@@ -135,14 +162,42 @@ in {
};
config = mkIf cfg.enable {
systemd.defaultUnit = "graphical.target";
# Inspired by https://gitlab.gnome.org/World/Phosh/phosh/-/blob/main/data/phosh.service
systemd.services.phosh = {
wantedBy = [ "graphical.target" ];
serviceConfig = {
ExecStart = "${cfg.package}/bin/phosh";
User = cfg.user;
Group = cfg.group;
PAMName = "login";
WorkingDirectory = "~";
Restart = "always";
TTYPath = "/dev/tty7";
TTYReset = "yes";
TTYVHangup = "yes";
TTYVTDisallocate = "yes";
# Fail to start if not controlling the tty.
StandardInput = "tty-fail";
StandardOutput = "journal";
StandardError = "journal";
# Log this user with utmp, letting it show up with commands 'w' and 'who'.
UtmpIdentifier = "tty7";
UtmpMode = "user";
};
};
environment.systemPackages = [
pkgs.phoc
pkgs.phosh
cfg.package
pkgs.squeekboard
oskItem
];
systemd.packages = [ pkgs.phosh ];
systemd.packages = [ cfg.package ];
programs.feedbackd.enable = true;
@@ -152,7 +207,7 @@ in {
services.gnome.core-shell.enable = true;
services.gnome.core-os-services.enable = true;
services.xserver.displayManager.sessionPackages = [ pkgs.phosh ];
services.xserver.displayManager.sessionPackages = [ cfg.package ];
environment.etc."phosh/phoc.ini".source =
if builtins.isPath cfg.phocConfig then cfg.phocConfig

View file

@@ -140,8 +140,13 @@ in
environment = {
GDM_X_SERVER_EXTRA_ARGS = toString
(filter (arg: arg != "-terminate") cfg.xserverArgs);
# GDM is needed for gnome-login.session
XDG_DATA_DIRS = "${gdm}/share:${cfg.sessionData.desktops}/share:${pkgs.gnome.gnome-control-center}/share";
XDG_DATA_DIRS = lib.makeSearchPath "share" [
gdm # for gnome-login.session
cfg.sessionData.desktops
pkgs.gnome.gnome-control-center # for accessibility icon
pkgs.gnome.adwaita-icon-theme
pkgs.hicolor-icon-theme # empty icon theme as a base
];
} // optionalAttrs (xSessionWrapper != null) {
# Make GDM use this wrapper before running the session, which runs the
# configured setupCommands. This relies on a patched GDM which supports
@@ -298,7 +303,7 @@ in
session required pam_succeed_if.so audit quiet_success user = gdm
session required pam_env.so conffile=/etc/pam/environment readenv=0
session optional ${pkgs.systemd}/lib/security/pam_systemd.so
session optional ${config.systemd.package}/lib/security/pam_systemd.so
session optional pam_keyinit.so force revoke
session optional pam_permit.so
'';

View file

@@ -287,7 +287,7 @@ in
session required pam_succeed_if.so audit quiet_success user = lightdm
session required pam_env.so conffile=/etc/pam/environment readenv=0
session optional ${pkgs.systemd}/lib/security/pam_systemd.so
session optional ${config.systemd.package}/lib/security/pam_systemd.so
session optional pam_keyinit.so force revoke
session optional pam_permit.so
'';

View file

@@ -231,7 +231,7 @@ in
session required pam_succeed_if.so audit quiet_success user = sddm
session required pam_env.so conffile=/etc/pam/environment readenv=0
session optional ${pkgs.systemd}/lib/security/pam_systemd.so
session optional ${config.systemd.package}/lib/security/pam_systemd.so
session optional pam_keyinit.so force revoke
session optional pam_permit.so
'';

View file

@@ -273,9 +273,6 @@ in
boot.kernelModules = [ "loop" "atkbd" ];
# The Linux kernel >= 2.6.27 provides firmware.
hardware.firmware = [ kernel ];
# Create /etc/modules-load.d/nixos.conf, which is read by
# systemd-modules-load.service to load required kernel modules.
environment.etc =

View file

@@ -52,7 +52,7 @@ with lib;
'';
environment.etc."modprobe.d/debian.conf".source = pkgs.kmod-debian-aliases;
environment.etc."modprobe.d/systemd.conf".source = "${pkgs.systemd}/lib/modprobe.d/systemd.conf";
environment.etc."modprobe.d/systemd.conf".source = "${config.systemd.package}/lib/modprobe.d/systemd.conf";
environment.systemPackages = [ pkgs.kmod ];

View file

@@ -779,6 +779,7 @@ let
"RouteDenyList"
"RouteAllowList"
"DHCPv6Client"
"RouteMetric"
])
(assertValueOneOf "UseDNS" boolValues)
(assertValueOneOf "UseDomains" (boolValues ++ ["route"]))
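
Assuming the key list above belongs to the [IPv6AcceptRA] section checks (its neighbours RouteAllowList and DHCPv6Client are keys of that section), the newly accepted RouteMetric key could be set roughly like this; the interface name and metric are illustrative:

{
  systemd.network.enable = true;
  systemd.network.networks."40-wan" = {
    matchConfig.Name = "enp1s0";
    networkConfig.DHCP = "yes";
    ipv6AcceptRAConfig.RouteMetric = 512;  # de-prioritise routes learned from this uplink's RAs
  };
}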

View file

@@ -4,7 +4,10 @@ with lib;
let
inherit (pkgs) plymouth nixos-icons;
inherit (pkgs) nixos-icons;
plymouth = pkgs.plymouth.override {
systemd = config.boot.initrd.systemd.package;
};
cfg = config.boot.plymouth;
opt = options.boot.plymouth;
@@ -143,7 +146,88 @@ in
systemd.services.systemd-ask-password-plymouth.wantedBy = [ "multi-user.target" ];
systemd.paths.systemd-ask-password-plymouth.wantedBy = [ "multi-user.target" ];
boot.initrd.extraUtilsCommands = ''
boot.initrd.systemd = {
extraBin.plymouth = "${plymouth}/bin/plymouth"; # for the recovery shell
storePaths = [
"${lib.getBin config.boot.initrd.systemd.package}/bin/systemd-tty-ask-password-agent"
"${plymouth}/bin/plymouthd"
"${plymouth}/sbin/plymouthd"
];
packages = [ plymouth ]; # systemd units
contents = {
# Files
"/etc/plymouth/plymouthd.conf".source = configFile;
"/etc/plymouth/plymouthd.defaults".source = "${plymouth}/share/plymouth/plymouthd.defaults";
"/etc/plymouth/logo.png".source = cfg.logo;
# Directories
"/etc/plymouth/plugins".source = pkgs.runCommand "plymouth-initrd-plugins" {} ''
# Check if the actual requested theme is here
if [[ ! -d ${themesEnv}/share/plymouth/themes/${cfg.theme} ]]; then
echo "The requested theme: ${cfg.theme} is not provided by any of the packages in boot.plymouth.themePackages"
exit 1
fi
moduleName="$(sed -n 's,ModuleName *= *,,p' ${themesEnv}/share/plymouth/themes/${cfg.theme}/${cfg.theme}.plymouth)"
mkdir -p $out/renderers
# module might come from a theme
cp ${themesEnv}/lib/plymouth/{text,details,label,$moduleName}.so $out
cp ${plymouth}/lib/plymouth/renderers/{drm,frame-buffer}.so $out/renderers
'';
"/etc/plymouth/themes".source = pkgs.runCommand "plymouth-initrd-themes" {} ''
# Check if the actual requested theme is here
if [[ ! -d ${themesEnv}/share/plymouth/themes/${cfg.theme} ]]; then
echo "The requested theme: ${cfg.theme} is not provided by any of the packages in boot.plymouth.themePackages"
exit 1
fi
mkdir $out
cp -r ${themesEnv}/share/plymouth/themes/${cfg.theme} $out
# Copy more themes if the theme depends on others
for theme in $(grep -hRo '/etc/plymouth/themes/.*$' ${themesEnv} | xargs -n1 basename); do
if [[ -d "${themesEnv}/share/plymouth/themes/$theme" ]]; then
cp -r "${themesEnv}/share/plymouth/themes/$theme" $out
fi
done
'';
# Fonts
"/etc/plymouth/fonts".source = pkgs.runCommand "plymouth-initrd-fonts" {} ''
mkdir -p $out
cp ${cfg.font} $out
'';
"/etc/fonts/fonts.conf".text = ''
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "urn:fontconfig:fonts.dtd">
<fontconfig>
<dir>/etc/plymouth/fonts</dir>
</fontconfig>
'';
};
# Properly enable units. These are the units that Arch Linux copies
services = {
plymouth-halt.wantedBy = [ "halt.target" ];
plymouth-kexec.wantedBy = [ "kexec.target" ];
plymouth-poweroff.wantedBy = [ "poweroff.target" ];
plymouth-quit-wait.wantedBy = [ "multi-user.target" ];
plymouth-quit.wantedBy = [ "multi-user.target" ];
plymouth-read-write.wantedBy = [ "sysinit.target" ];
plymouth-reboot.wantedBy = [ "reboot.target" ];
plymouth-start.wantedBy = [ "initrd-switch-root.target" "sysinit.target" ];
plymouth-switch-root-initramfs.wantedBy = [ "halt.target" "kexec.target" "plymouth-switch-root-initramfs.service" "poweroff.target" "reboot.target" ];
plymouth-switch-root.wantedBy = [ "initrd-switch-root.target" ];
};
};
# Insert required udev rules. We take stage 2 systemd because the udev
# rules are only generated when building with logind.
boot.initrd.services.udev.packages = [ (pkgs.runCommand "initrd-plymouth-udev-rules" {} ''
mkdir -p $out/etc/udev/rules.d
cp ${config.systemd.package.out}/lib/udev/rules.d/{70-uaccess,71-seat}.rules $out/etc/udev/rules.d
sed -i '/loginctl/d' $out/etc/udev/rules.d/71-seat.rules
'') ];
boot.initrd.extraUtilsCommands = lib.mkIf (!config.boot.initrd.systemd.enable) ''
copy_bin_and_libs ${plymouth}/bin/plymouth
copy_bin_and_libs ${plymouth}/bin/plymouthd
@@ -198,18 +282,18 @@ in
EOF
'';
boot.initrd.extraUtilsCommandsTest = ''
boot.initrd.extraUtilsCommandsTest = mkIf (!config.boot.initrd.systemd.enable) ''
$out/bin/plymouthd --help >/dev/null
$out/bin/plymouth --help >/dev/null
'';
boot.initrd.extraUdevRulesCommands = ''
boot.initrd.extraUdevRulesCommands = mkIf (!config.boot.initrd.systemd.enable) ''
cp ${config.systemd.package}/lib/udev/rules.d/{70-uaccess,71-seat}.rules $out
sed -i '/loginctl/d' $out/71-seat.rules
'';
# We use `mkAfter` to ensure that LUKS password prompt would be shown earlier than the splash screen.
boot.initrd.preLVMCommands = mkAfter ''
boot.initrd.preLVMCommands = mkIf (!config.boot.initrd.systemd.enable) (mkAfter ''
mkdir -p /etc/plymouth
mkdir -p /run/plymouth
ln -s ${configFile} /etc/plymouth/plymouthd.conf
@ -221,16 +305,16 @@ in
plymouthd --mode=boot --pid-file=/run/plymouth/pid --attach-to-session
plymouth show-splash
'';
'');
boot.initrd.postMountCommands = ''
boot.initrd.postMountCommands = mkIf (!config.boot.initrd.systemd.enable) ''
plymouth update-root-fs --new-root-dir="$targetRoot"
'';
# `mkBefore` to ensure that any custom prompts would be visible.
boot.initrd.preFailCommands = mkBefore ''
boot.initrd.preFailCommands = mkIf (!config.boot.initrd.systemd.enable) (mkBefore ''
plymouth quit --wait
'';
'');
};
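
A minimal sketch of how the two code paths above are selected (both options exist in NixOS; this is not part of the change itself):

{
  boot.plymouth.enable = true;
  # With the systemd-based initrd enabled, the units and /etc contents declared
  # above are used; otherwise the extraUtilsCommands fallback path still applies.
  boot.initrd.systemd.enable = true;
}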

View file

@@ -16,7 +16,7 @@ let
"LimitNOFILE" "LimitAS" "LimitNPROC" "LimitMEMLOCK" "LimitLOCKS"
"LimitSIGPENDING" "LimitMSGQUEUE" "LimitNICE" "LimitRTPRIO" "LimitRTTIME"
"OOMScoreAdjust" "CPUAffinity" "Hostname" "ResolvConf" "Timezone"
"LinkJournal"
"LinkJournal" "Ephemeral" "AmbientCapability"
])
(assertValueOneOf "Boot" boolValues)
(assertValueOneOf "ProcessTwo" boolValues)
@@ -26,11 +26,13 @@ let
checkFiles = checkUnitConfig "Files" [
(assertOnlyFields [
"ReadOnly" "Volatile" "Bind" "BindReadOnly" "TemporaryFileSystem"
"Overlay" "OverlayReadOnly" "PrivateUsersChown"
"Overlay" "OverlayReadOnly" "PrivateUsersChown" "BindUser"
"Inaccessible" "PrivateUserOwnership"
])
(assertValueOneOf "ReadOnly" boolValues)
(assertValueOneOf "Volatile" (boolValues ++ [ "state" ]))
(assertValueOneOf "PrivateUsersChown" boolValues)
(assertValueOneOf "PrivateUserOwnership" [ "off" "chown" "map" "auto" ])
];
checkNetwork = checkUnitConfig "Network" [
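
For illustration, the newly accepted keys might be exercised from a host configuration along these lines (container name and values are assumptions; execConfig and filesConfig are the pre-existing NixOS option names for the [Exec] and [Files] sections):

{
  systemd.nspawn."demo" = {
    execConfig = {
      Ephemeral = true;                          # newly whitelisted above
      AmbientCapability = "CAP_NET_BIND_SERVICE";
    };
    filesConfig = {
      BindUser = "alice";                        # newly whitelisted above
      PrivateUserOwnership = "auto";
    };
  };
}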

View file

@@ -190,7 +190,7 @@ in {
nixos-rebuild = "${config.system.build.nixos-rebuild}/bin/nixos-rebuild";
date = "${pkgs.coreutils}/bin/date";
readlink = "${pkgs.coreutils}/bin/readlink";
shutdown = "${pkgs.systemd}/bin/shutdown";
shutdown = "${config.systemd.package}/bin/shutdown";
upgradeFlag = optional (cfg.channel == null) "--upgrade";
in if cfg.allowReboot then ''
${nixos-rebuild} boot ${toString (cfg.flags ++ upgradeFlag)}
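
For context, a sketch of a configuration that exercises this branch (both options are pre-existing; values illustrative):

{
  system.autoUpgrade = {
    enable = true;
    allowReboot = true;  # reboots via the shutdown binary resolved above when required
  };
}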

Some files were not shown because too many files have changed in this diff.