Project import generated by Copybara.

GitOrigin-RevId: d9dba88d08a9cdf483c3d45f0d7220cf97a4ce64
Default email 2021-01-05 19:05:55 +02:00
parent 07c381ccb7
commit ffc78d3539
1033 changed files with 53722 additions and 20846 deletions

View file

@@ -0,0 +1,28 @@
name: "Build NixOS manual"
on:
pull_request_target:
branches:
- master
paths:
- 'nixos/**'
jobs:
nixos:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v12
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v8
with:
# This cache is for the nixos/nixpkgs manual builds and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building NixOS manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true nixos/release.nix -A manual.x86_64-linux

View file

@@ -0,0 +1,28 @@
name: "Build Nixpkgs manual"
on:
pull_request_target:
branches:
- master
paths:
- 'doc/**'
jobs:
nixpkgs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v12
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v8
with:
# This cache is for the nixos/nixpkgs manual builds and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building Nixpkgs manual
run: NIX_PATH=nixpkgs=$(pwd) nix-build --option restrict-eval true pkgs/top-level/release.nix -A manual

View file

@@ -11,6 +11,10 @@ jobs:
    runs-on: ubuntu-latest
    if: github.repository_owner == 'NixOS' && github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase-staging')
    steps:
+      - uses: peter-evans/create-or-update-comment@v1
+        with:
+          comment-id: ${{ github.event.comment.id }}
+          reactions: eyes
      - uses: scherermichael-oss/action-has-permission@1.0.6
        id: check-write-access
        with:

View file

@@ -1,4 +1,4 @@
-Copyright (c) 2003-2020 Eelco Dolstra and the Nixpkgs/NixOS contributors
+Copyright (c) 2003-2021 Eelco Dolstra and the Nixpkgs/NixOS contributors

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the

View file

@@ -1,4 +1,4 @@
-# Cataclysm: Dark Days Ahead
+# Cataclysm: Dark Days Ahead {#cataclysm-dark-days-ahead}

## How to install Cataclysm DDA

View file

@@ -37,7 +37,7 @@ This works just like `runCommand`. The only difference is that it also provides
Variant of `runCommand` that forces the derivation to be built locally, it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network roundtrip and can speed up a build.

-::: {.note}
+::: note
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g. just copying some files to a different location or adding symlinks), because there the `system` is usually the same as `builtins.currentSystem`.
:::
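To make the note above concrete, here is a minimal sketch of a `runCommandLocal` use case; the derivation name `hello-symlink` and the linked `hello` package are only illustrative:

```nix
# A trivial derivation that only creates a symlink. Building it locally is
# cheaper than a network round trip for a substitute, so runCommandLocal fits.
runCommandLocal "hello-symlink" { } ''
  mkdir -p $out/bin
  ln -s ${hello}/bin/hello $out/bin/hello
''
```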

View file

@ -1,9 +1,4 @@
--- # Agda {#agda}
title: Agda
author: Alex Rice (alexarice)
date: 2020-01-06
---
# Agda
## How to use Agda ## How to use Agda

View file

@ -1,9 +1,4 @@
--- # Android {#android}
title: Android
author: Sander van der Burg
date: 2018-11-18
---
# Android
The Android build environment provides three major features and a number of The Android build environment provides three major features and a number of
supporting features. supporting features.

View file

@@ -1,4 +1,4 @@
-# Crystal
+# Crystal {#crystal}

## Building a Crystal package

View file

@@ -1,4 +1,4 @@
-# Emscripten
+# Emscripten {#emscripten}

[Emscripten](https://github.com/kripken/emscripten): An LLVM-to-JavaScript Compiler

View file

@ -1,10 +1,4 @@
--- # Haskell {#haskell}
title: User's Guide for Haskell in Nixpkgs
author: Peter Simons
date: 2015-06-01
---
# Haskell
The documentation for the Haskell infrastructure is published at The documentation for the Haskell infrastructure is published at
<https://haskell4nix.readthedocs.io/>. The source code for that <https://haskell4nix.readthedocs.io/>. The source code for that

View file

@@ -1,4 +1,4 @@
-# Idris
+# Idris {#idris}

## Installing Idris

View file

@ -1,9 +1,4 @@
--- # iOS {#ios}
title: iOS
author: Sander van der Burg
date: 2019-11-10
---
# iOS
This component is basically a wrapper/workaround that makes it possible to This component is basically a wrapper/workaround that makes it possible to
expose an Xcode installation as a Nix package by means of symlinking to the expose an Xcode installation as a Nix package by means of symlinking to the

View file

@ -1,10 +1,4 @@
--- # User's Guide to Lua Infrastructure {#users-guide-to-lua-infrastructure}
title: Lua
author: Matthieu Coudron
date: 2019-02-05
---
# User's Guide to Lua Infrastructure
## Using Lua ## Using Lua

View file

@ -1,10 +1,4 @@
--- # Maven {#maven}
title: Maven
author: Farid Zakaria
date: 2020-10-15
---
# Maven
Maven is a well-known build tool for the Java ecosystem however it has some challenges when integrating into the Nix build system. Maven is a well-known build tool for the Java ecosystem however it has some challenges when integrating into the Nix build system.

View file

@ -1,5 +1,5 @@
Node.js # Node.js {#node.js}
=======
The `pkgs/development/node-packages` folder contains a generated collection of The `pkgs/development/node-packages` folder contains a generated collection of
[NPM packages](https://npmjs.com/) that can be installed with the Nix package [NPM packages](https://npmjs.com/) that can be installed with the Nix package
manager. manager.

View file

@@ -1,4 +1,4 @@
-# Python
+# Python {#python}

## User Guide

View file

@@ -1,5 +1,4 @@
-R
-=
+# R {#r}

## Installation

View file

@@ -1,10 +1,4 @@
----
-title: Rust
-author: Matthias Beyer
-date: 2017-03-05
----
-# Rust
+# Rust {#rust}

To install the rust compiler and cargo put
@@ -27,16 +21,16 @@ Rust applications are packaged by using the `buildRustPackage` helper from `rust
```
rustPlatform.buildRustPackage rec {
  pname = "ripgrep";
-  version = "11.0.2";
+  version = "12.1.1";

  src = fetchFromGitHub {
    owner = "BurntSushi";
    repo = pname;
    rev = version;
-    sha256 = "1iga3320mgi7m853la55xip514a3chqsdi1a1rwv25lr9b1p7vd3";
+    sha256 = "1hqps7l5qrjh9f914r5i6kmcz6f1yb951nv4lby0cjnp5l253kps";
  };

-  cargoSha256 = "17ldqr3asrdcsh4l29m3b5r37r5d0b3npq1lrgjmxb6vlx6a36qh";
+  cargoSha256 = "03wf9r2csi6jpa7v5sw5lpxkrk4wfzwmzx7k3991q3bdjzcwnnwp";

  meta = with stdenv.lib; {
    description = "A fast line-oriented regex search tool, similar to ag and ack";
@@ -47,10 +41,31 @@ rustPlatform.buildRustPackage rec {
}
```

-`buildRustPackage` requires a `cargoSha256` attribute which is computed over
-all crate sources of this package. Currently it is obtained by inserting a
-fake checksum into the expression and building the package once. The correct
-checksum can then be taken from the failed build.
+`buildRustPackage` requires either the `cargoSha256` or the
+`cargoHash` attribute which is computed over all crate sources of this
+package. `cargoSha256` is used for traditional Nix SHA-256 hashes,
+such as the one in the example above. `cargoHash` should instead be
+used for [SRI](https://www.w3.org/TR/SRI/) hashes. For example:
+
+```
+cargoHash = "sha256-l1vL2ZdtDRxSGvP0X/l3nMw8+6WF67KPutJEzUROjg8=";
+```
+
+Both types of hashes are permitted when contributing to nixpkgs. The
+Cargo hash is obtained by inserting a fake checksum into the
+expression and building the package once. The correct checksum can
+then be taken from the failed build. A fake hash can be used for
+`cargoSha256` as follows:
+
+```
+cargoSha256 = stdenv.lib.fakeSha256;
+```
+
+For `cargoHash` you can use:
+
+```
+cargoHash = stdenv.lib.fakeHash;
+```

Per the instructions in the [Cargo Book](https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html)
best practices guide, Rust applications should always commit the `Cargo.lock`
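Put together, the fake-hash workflow described above looks roughly like this (a sketch: the package name, owner, and version are hypothetical; only the `fakeSha256` placeholder and the build-once-then-copy step are from the text):

```nix
rustPlatform.buildRustPackage rec {
  pname = "my-tool";   # hypothetical package
  version = "1.0.0";

  src = fetchFromGitHub {
    owner = "example";
    repo = pname;
    rev = version;
    sha256 = stdenv.lib.fakeSha256;  # step 1: also fake the source hash if unknown
  };

  # Step 1: build once with the fake vendor hash; the build fails with a
  # "hash mismatch" error that prints the correct value.
  # Step 2: replace fakeSha256 with the printed hash and rebuild.
  cargoSha256 = stdenv.lib.fakeSha256;
}
```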

View file

@@ -1,4 +1,3 @@
# TeX Live {#sec-language-texlive}

Since release 15.09 there is a new TeX Live packaging that lives entirely under attribute `texlive`.

View file

@ -1,9 +1,4 @@
--- # Titanium {#titanium}
title: Titanium
author: Sander van der Burg
date: 2018-11-18
---
# Titanium
The Nixpkgs repository contains facilities to deploy a variety of versions of The Nixpkgs repository contains facilities to deploy a variety of versions of
the [Titanium SDK](https://www.appcelerator.com) versions, a cross-platform the [Titanium SDK](https://www.appcelerator.com) versions, a cross-platform

View file

@ -1,9 +1,4 @@
--- # Vim {#vim}
title: User's Guide for Vim in Nixpkgs
author: Marc Weber
date: 2016-06-25
---
# Vim
Both Neovim and Vim can be configured to include your favorite plugins Both Neovim and Vim can be configured to include your favorite plugins
and additional libraries. and additional libraries.

View file

@ -1,10 +1,4 @@
--- # Preface {#preface}
title: Preface
author: Frederik Rietdijk
date: 2015-11-25
---
# Preface
The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the
[Nix package manager](https://nixos.org/nix/), released under a [Nix package manager](https://nixos.org/nix/), released under a

View file

@@ -817,14 +817,54 @@ lib.mapAttrs (n: v: v // { shortName = n; }) {
  };
} // {
  # TODO: remove legacy aliases
-  agpl3 = lib.licenses.agpl3Only;
-  fdl11 = lib.licenses.fdl11Only;
-  fdl12 = lib.licenses.fdl12Only;
-  fdl13 = lib.licenses.fdl13Only;
-  gpl1 = lib.licenses.gpl1Only;
-  gpl2 = lib.licenses.gpl2Only;
-  gpl3 = lib.licenses.gpl3Only;
-  lgpl2 = lib.licenses.lgpl2Only;
-  lgpl21 = lib.licenses.lgpl21Only;
-  lgpl3 = lib.licenses.lgpl3Only;
+  agpl3 = spdx {
+    spdxId = "AGPL-3.0";
+    fullName = "GNU Affero General Public License v3.0";
+    deprecated = true;
+  };
+  fdl11 = spdx {
+    spdxId = "GFDL-1.1";
+    fullName = "GNU Free Documentation License v1.1";
+    deprecated = true;
+  };
+  fdl12 = spdx {
+    spdxId = "GFDL-1.2";
+    fullName = "GNU Free Documentation License v1.2";
+    deprecated = true;
+  };
+  fdl13 = spdx {
+    spdxId = "GFDL-1.3";
+    fullName = "GNU Free Documentation License v1.3";
+    deprecated = true;
+  };
+  gpl1 = spdx {
+    spdxId = "GPL-1.0";
+    fullName = "GNU General Public License v1.0";
+    deprecated = true;
+  };
+  gpl2 = spdx {
+    spdxId = "GPL-2.0";
+    fullName = "GNU General Public License v2.0";
+    deprecated = true;
+  };
+  gpl3 = spdx {
+    spdxId = "GPL-3.0";
+    fullName = "GNU General Public License v3.0";
+    deprecated = true;
+  };
+  lgpl2 = spdx {
+    spdxId = "LGPL-2.0";
+    fullName = "GNU Library General Public License v2";
+    deprecated = true;
+  };
+  lgpl21 = spdx {
+    spdxId = "LGPL-2.1";
+    fullName = "GNU Lesser General Public License v2.1";
+    deprecated = true;
+  };
+  lgpl3 = spdx {
+    spdxId = "LGPL-3.0";
+    fullName = "GNU Lesser General Public License v3.0";
+    deprecated = true;
+  };
}

View file

@@ -124,6 +124,8 @@ rec {
      then "${qemu-user}/bin/qemu-${final.qemuArch}"
      else if final.isWasi
      then "${pkgs.wasmtime}/bin/wasmtime"
+     else if final.isMmix
+     then "${pkgs.mmixware}/bin/mmix"
      else throw "Don't know how to run ${final.config} executables.";
} // mapAttrs (n: v: v final.parsed) inspect.predicates

View file

@@ -490,8 +490,9 @@ rec {
  # ARM
  else if platform.isAarch32 then let
-    version = platform.parsed.cpu.version or "";
-  in if lib.versionOlder version "6" then sheevaplug
+    version = platform.parsed.cpu.version or null;
+  in if version == null then pcBase
+    else if lib.versionOlder version "6" then sheevaplug
    else if lib.versionOlder version "7" then raspberrypi
    else armv7l-hf-multiplatform
  else if platform.isAarch64 then aarch64-multiplatform

View file

@@ -70,6 +70,18 @@
    githubId = 7414843;
    name = "Nicholas von Klitzing";
  };
+  _3noch = {
+    email = "eacameron@gmail.com";
+    github = "3noch";
+    githubId = 882455;
+    name = "Elliot Cameron";
+  };
+  _6AA4FD = {
+    email = "f6442954@gmail.com";
+    github = "6AA4FD";
+    githubId = 12578560;
+    name = "Quinn Bohner";
+  };
  a1russell = {
    email = "adamlr6+pub@gmail.com";
    github = "a1russell";
@@ -2867,6 +2879,12 @@
    githubId = 30512529;
    name = "Evils";
  };
+  ewok = {
+    email = "ewok@ewok.ru";
+    github = "ewok";
+    githubId = 454695;
+    name = "Artur Taranchiev";
+  };
  exfalso = {
    email = "0slemi0@gmail.com";
    github = "exfalso";
@@ -3607,6 +3625,12 @@
    email = "t@larkery.com";
    name = "Tom Hinton";
  };
+  hirenashah = {
+    email = "hiren@hiren.io";
+    github = "hirenashah";
+    githubId = 19825977;
+    name = "Hiren Shah";
+  };
  hjones2199 = {
    email = "hjones2199@gmail.com";
    github = "hjones2199";
@@ -3687,6 +3711,12 @@
    githubId = 2789926;
    name = "Imran Hossain";
  };
+  iammrinal0 = {
+    email = "nixpkgs@mrinalpurohit.in";
+    github = "iammrinal0";
+    githubId = 890062;
+    name = "Mrinal";
+  };
  iand675 = {
    email = "ian@iankduncan.com";
    github = "iand675";
@@ -3917,6 +3947,12 @@
    githubId = 2179419;
    name = "Arseniy Seroka";
  };
+  jakeisnt = {
+    name = "Jacob Chvatal";
+    email = "jake@isnt.online";
+    github = "jakeisnt";
+    githubId = 29869612;
+  };
  jakelogemann = {
    email = "jake.logemann@gmail.com";
    github = "jakelogemann";
@@ -5919,6 +5955,12 @@
    githubId = 1001112;
    name = "Marcin Janczyk";
  };
+  mjlbach = {
+    email = "m.j.lbach@gmail.com";
+    github = "mjlbach";
+    githubId = 13316262;
+    name = "Michael Lingelbach";
+  };
  mjp = {
    email = "mike@mythik.co.uk";
    github = "MikePlayle";
@@ -6429,6 +6471,16 @@
    githubId = 1219785;
    name = "Félix Baylac-Jacqué";
  };
+  ninjin = {
+    email = "pontus@stenetorp.se";
+    github = "ninjin";
+    githubId = 354934;
+    name = "Pontus Stenetorp";
+    keys = [{
+      longkeyid = "rsa4096/0xD430287500E6483C";
+      fingerprint = "0966 2F9F 3FDA C22B C22E 4CE1 D430 2875 00E6 483C";
+    }];
+  };
  nioncode = {
    email = "nioncode+github@gmail.com";
    github = "nioncode";
@@ -6677,6 +6729,12 @@
    githubId = 111265;
    name = "Ozan Sener";
  };
+  otavio = {
+    email = "otavio.salvador@ossystems.com.br";
+    github = "otavio";
+    githubId = 25278;
+    name = "Otavio Salvador";
+  };
  otwieracz = {
    email = "slawek@otwiera.cz";
    github = "otwieracz";
@@ -9374,6 +9432,12 @@
    fingerprint = "4D23 ECDF 880D CADF 5ECA 4458 874B D6F9 16FA A742";
  }];
  };
+  vel = {
+    email = "llathasa@outlook.com";
+    github = "llathasa-veleth";
+    githubId = 61933599;
+    name = "vel";
+  };
  velovix = {
    email = "xaviosx@gmail.com";
    github = "velovix";

View file

@@ -14,13 +14,12 @@ fi
tmp=$(mktemp -d)
pushd $tmp >/dev/null
-wget -nH -r -c --no-parent "${WGET_ARGS[@]}" -A '*.tar.xz.sha256' -A '*.mirrorlist' >/dev/null
+wget -nH -r -c --no-parent "${WGET_ARGS[@]}" >/dev/null
+find -type f -name '*.mirrorlist' -delete

csv=$(mktemp)
find . -type f | while read src; do
    # Sanitize file name
-    filename=$(gawk '{ print $2 }' "$src" | tr '@' '_')
+    filename=$(basename "$src" | tr '@' '_')
    nameVersion="${filename%.tar.*}"
    name=$(echo "$nameVersion" | sed -e 's,-[[:digit:]].*,,' | sed -e 's,-opensource-src$,,' | sed -e 's,-everywhere-src$,,')
    version=$(echo "$nameVersion" | sed -e 's,^\([[:alpha:]][[:alnum:]]*-\)\+,,')
@@ -40,8 +39,8 @@ gawk -F , "{ print \$1 }" $csv | sort | uniq | while read name; do
    latestVersion=$(echo "$versions" | sort -rV | head -n 1)
    src=$(gawk -F , "/^$name,$latestVersion,/ { print \$3 }" $csv)
    filename=$(gawk -F , "/^$name,$latestVersion,/ { print \$4 }" $csv)
-    url="$(dirname "${src:2}")/$filename"
-    sha256=$(gawk '{ print $1 }' "$src")
+    url="${src:2}"
+    sha256=$(nix-hash --type sha256 --base32 --flat "$src")
    cat >>"$SRCS" <<EOF
    $name = {
        version = "$latestVersion";

View file

@@ -7,7 +7,7 @@
  <para>
    A profile with most (vanilla) hardening options enabled by default,
-   potentially at the cost of features and performance.
+   potentially at the cost of stability, features and performance.
  </para>

  <para>
@@ -21,4 +21,12 @@
    xlink:href="https://github.com/nixos/nixpkgs/tree/master/nixos/modules/profiles/hardened.nix">
    profile source</literal> for further detail on which settings are altered.
  </para>
+  <warning>
+    <para>
+      This profile enables options that are known to affect system
+      stability. If you experience any stability issues when using the
+      profile, try disabling it. If you report an issue and use this
+      profile, always mention that you do.
+    </para>
+  </warning>
</section>

View file

@@ -168,6 +168,14 @@
      <literal>/var/lib/powerdns</literal> to <literal>/run/pdns</literal>.
    </para>
  </listitem>
+  <listitem>
+    <para>
+      xfsprogs was updated from 4.19 to 5.10. It now enables reflink support by default on filesystem creation.
+      Support for reflinks was added with an experimental status to kernel 4.9 and deemed stable in kernel 4.16.
+      If you want to be able to mount XFS filesystems created with this release of xfsprogs on kernel releases older than those, you need to format them
+      with <literal>mkfs.xfs -m reflink=0</literal>.
+    </para>
+  </listitem>
  <listitem>
    <para>
      <package>btc1</package> has been abandoned upstream, and removed.
@@ -278,6 +286,16 @@
      <xref linkend="opt-services.privoxy.enableTor" /> = true;
    </programlisting>
  </listitem>
+  <listitem>
+    <para>
+      The <literal>services.tor</literal> module has a new exhaustively typed <xref linkend="opt-services.tor.settings" /> option following RFC 0042; backward compatibility with old options has been preserved when aliasing was possible.
+      The corresponding systemd service has been hardened,
+      but there is a chance that the service still requires more permissions,
+      so please report any related trouble on the bugtracker.
+      Onion services v3 are now supported in <xref linkend="opt-services.tor.relay.onionServices" />.
+      A new <xref linkend="opt-services.tor.openFirewall" /> option has been introduced for allowing connections on all the TCP ports configured.
+    </para>
+  </listitem>
  <listitem>
    <para>
      The options <literal>services.slurm.dbdserver.storagePass</literal>
@@ -287,6 +305,12 @@
      This avoids that the password gets exposed in the nix store.
    </para>
  </listitem>
+  <listitem>
+    <para>
+      The <literal>wafHook</literal> hook does not wrap Python anymore.
+      Packages depending on <literal>wafHook</literal> need to include any Python into their <literal>nativeBuildInputs</literal>.
+    </para>
+  </listitem>
  <listitem>
    <para>
      Starting with version 1.7.0, the project formerly named <literal>CodiMD</literal>
@@ -295,6 +319,40 @@
      Based on <xref linkend="opt-system.stateVersion" />, existing installations will continue to work.
    </para>
  </listitem>
+  <listitem>
+    <para>
+      <package>fish-foreign-env</package> is now an alias for the
+      <package>fishPlugins.foreign-env</package> package, in which the fish
+      functions have been relocated to the
+      <literal>vendor_functions.d</literal> directory to be loaded automatically.
+    </para>
+  </listitem>
+  <listitem>
+    <para>
+      The prometheus json exporter is now managed by the prometheus community. Together with additional features
+      some backwards incompatibilities were introduced.
+      Most importantly the exporter no longer accepts a fixed command-line parameter to specify the URL of the
+      endpoint serving JSON. It now expects this URL to be passed as a URL parameter, when scraping the exporter's
+      <literal>/probe</literal> endpoint.
+      In the prometheus scrape configuration the scrape target might look like this:
+      <programlisting>
+      http://some.json-exporter.host:7979/probe?target=https://example.com/some/json/endpoint
+      </programlisting>
+    </para>
+    <para>
+      Existing configuration for the exporter needs to be updated, but can partially be re-used.
+      Documentation is available in the upstream repository and a small example for NixOS is available
+      in the corresponding NixOS test.
+    </para>
+    <para>
+      These changes also affect <xref linkend="opt-services.prometheus.exporters.rspamd.enable" />, which is
+      just a preconfigured instance of the json exporter.
+    </para>
+    <para>
+      For more information, take a look at the <link xlink:href="https://github.com/prometheus-community/json_exporter">
+      official documentation</link> of the json_exporter.
+    </para>
+  </listitem>
</itemizedlist>
</section>

View file

@@ -145,7 +145,8 @@ in
      '';

    systemd.services.systemd-vconsole-setup =
-      { before = [ "display-manager.service" ];
+      {
+        before = optional config.services.xserver.enable "display-manager.service";
        after = [ "systemd-udev-settle.service" ];
        restartTriggers = [ vconsoleConf consoleEnv ];
      };

View file

@@ -227,6 +227,15 @@ foreach my $u (@{$spec->{users}}) {
        $u->{hashedPassword} = hashPassword($u->{password});
    }

+    if (!defined $u->{shell}) {
+        if (defined $existing) {
+            $u->{shell} = $existing->{shell};
+        } else {
+            warn "warning: no declarative or previous shell for $name, setting shell to nologin\n";
+            $u->{shell} = "/run/current-system/sw/bin/nologin";
+        }
+    }
+
    $u->{fakePassword} = $existing->{fakePassword} // "x";
    $usersOut{$name} = $u;

View file

@@ -153,7 +153,7 @@ let
    };

    shell = mkOption {
-      type = types.either types.shellPackage types.path;
+      type = types.nullOr (types.either types.shellPackage types.path);
      default = pkgs.shadow;
      defaultText = "pkgs.shadow";
      example = literalExample "pkgs.bashInteractive";

View file

@@ -0,0 +1,67 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.hardware.opentabletdriver;
in
{
options = {
hardware.opentabletdriver = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Enable OpenTabletDriver udev rules, user service and blacklist kernel
modules known to conflict with OpenTabletDriver.
'';
};
blacklistedKernelModules = mkOption {
type = types.listOf types.str;
default = [ "hid-uclogic" "wacom" ];
description = ''
Blacklist of kernel modules known to conflict with OpenTabletDriver.
'';
};
package = mkOption {
type = types.package;
default = pkgs.opentabletdriver;
defaultText = "pkgs.opentabletdriver";
description = ''
OpenTabletDriver derivation to use.
'';
};
daemon = {
enable = mkOption {
default = true;
type = types.bool;
description = ''
Whether to start OpenTabletDriver daemon as a systemd user service.
'';
};
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
services.udev.packages = [ cfg.package ];
boot.blacklistedKernelModules = cfg.blacklistedKernelModules;
systemd.user.services.opentabletdriver = with pkgs; mkIf cfg.daemon.enable {
description = "Open source, cross-platform, user-mode tablet driver";
wantedBy = [ "graphical-session.target" ];
partOf = [ "graphical-session.target" ];
serviceConfig = {
Type = "simple";
ExecStart = "${cfg.package}/bin/otd-daemon -c ${cfg.package}/lib/OpenTabletDriver/Configurations";
Restart = "on-failure";
};
};
};
}
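For reference, enabling the new module from a system configuration might look like the following sketch; overriding the blacklist is optional and the value shown is only illustrative:

```nix
{
  # Enables the udev rules, the user service, and the module blacklist.
  hardware.opentabletdriver.enable = true;

  # Optional: trim the default blacklist, e.g. if the wacom module is needed.
  hardware.opentabletdriver.blacklistedKernelModules = [ "hid-uclogic" ];
}
```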

View file

@@ -17,8 +17,7 @@
  # The serial ports listed here are:
  # - ttyS0: for Tegra (Jetson TX1)
  # - ttyAMA0: for QEMU's -machine virt
-  # Also increase the amount of CMA to ensure the virtual console on the RPi3 works.
-  boot.kernelParams = ["cma=32M" "console=ttyS0,115200n8" "console=ttyAMA0,115200n8" "console=tty0"];
+  boot.kernelParams = ["console=ttyS0,115200n8" "console=ttyAMA0,115200n8" "console=tty0"];

  boot.initrd.availableKernelModules = [
    # Allows early (earlier) modesetting for the Raspberry Pi
@@ -30,13 +29,25 @@
  sdImage = {
    populateFirmwareCommands = let
      configTxt = pkgs.writeText "config.txt" ''
+        [pi3]
        kernel=u-boot-rpi3.bin
+
+        [pi4]
+        kernel=u-boot-rpi4.bin
+        enable_gic=1
+        armstub=armstub8-gic.bin
+
+        # Otherwise the resolution will be weird in most cases, compared to
+        # what the pi3 firmware does by default.
+        disable_overscan=1
+
+        [all]
        # Boot in 64-bit mode.
        arm_64bit=1

-        # U-Boot used to need this to work, regardless of whether UART is actually used or not.
-        # TODO: check when/if this can be removed.
+        # U-Boot needs this to work, regardless of whether UART is actually used or not.
+        # Look in arch/arm/mach-bcm283x/Kconfig in the U-Boot tree to see if this is still
+        # a requirement in the future.
        enable_uart=1

        # Prevent the firmware from smashing the framebuffer setup done by the mainline kernel
@@ -45,8 +56,17 @@
      '';
    in ''
      (cd ${pkgs.raspberrypifw}/share/raspberrypi/boot && cp bootcode.bin fixup*.dat start*.elf $NIX_BUILD_TOP/firmware/)
-      cp ${pkgs.ubootRaspberryPi3_64bit}/u-boot.bin firmware/u-boot-rpi3.bin
+
+      # Add the config
      cp ${configTxt} firmware/config.txt
+
+      # Add pi3 specific files
+      cp ${pkgs.ubootRaspberryPi3_64bit}/u-boot.bin firmware/u-boot-rpi3.bin
+
+      # Add pi4 specific files
+      cp ${pkgs.ubootRaspberryPi4_64bit}/u-boot.bin firmware/u-boot-rpi4.bin
+      cp ${pkgs.raspberrypi-armstubs}/armstub8-gic.bin firmware/armstub8-gic.bin
+      cp ${pkgs.raspberrypifw}/share/raspberrypi/boot/bcm2711-rpi-4-b.dtb firmware/
    '';
    populateRootCommands = ''
      mkdir -p ./files/boot

View file

@@ -3,36 +3,6 @@
{ config, lib, pkgs, ... }:

{
-  imports = [
-    ../../profiles/base.nix
-    ../../profiles/installation-device.nix
-    ./sd-image.nix
-  ];
+  imports = [ ./sd-image-aarch64.nix ];

-  boot.loader.grub.enable = false;
-  boot.loader.raspberryPi.enable = true;
-  boot.loader.raspberryPi.version = 4;
  boot.kernelPackages = pkgs.linuxPackages_rpi4;

-  boot.consoleLogLevel = lib.mkDefault 7;
-
-  sdImage = {
-    firmwareSize = 128;
-    firmwarePartitionName = "NIXOS_BOOT";
-    # This is a hack to avoid replicating config.txt from boot.loader.raspberryPi
-    populateFirmwareCommands =
-      "${config.system.build.installBootLoader} ${config.system.build.toplevel} -d ./firmware";
-    # As the boot process is done entirely in the firmware partition.
-    populateRootCommands = "";
-  };
-
-  fileSystems."/boot/firmware" = {
-    # This effectively "renames" the attrsOf entry set in sd-image.nix
-    mountPoint = "/boot";
-    neededForBoot = true;
-  };
-
-  # the installation media is also the installation target,
-  # so we don't want to provide the installation configuration.nix.
-  installer.cloneConfig = false;
}

View file

@@ -104,7 +104,7 @@ in
    '';

  # Some more help text.
-  services.mingetty.helpLine =
+  services.getty.helpLine =
    ''
      Log in as "root" with an empty password. ${

View file

@@ -122,7 +122,7 @@ in
    device = "/dev/something";
  };

-  services.mingetty = {
+  services.getty = {
    # Some more help text.
    helpLine = ''
      Log in as "root" with an empty password. ${

View file

@@ -69,6 +69,9 @@ mount --rbind /sys "$mountPoint/sys"
# Run the activation script. Set $LOCALE_ARCHIVE to suppress some Perl locale warnings.
LOCALE_ARCHIVE="$system/sw/lib/locale/locale-archive" chroot "$mountPoint" "$system/activate" 1>&2 || true

+# Create /tmp
+chroot "$mountPoint" systemd-tmpfiles --create --remove --exclude-prefix=/dev 1>&2 || true
+
)

exec chroot "$mountPoint" "${command[@]}"

View file

@ -261,7 +261,7 @@ in
++ optionals cfg.doc.enable ([ manual.manualHTML nixos-help ] ++ optionals cfg.doc.enable ([ manual.manualHTML nixos-help ]
++ optionals config.services.xserver.enable [ pkgs.nixos-icons ]); ++ optionals config.services.xserver.enable [ pkgs.nixos-icons ]);
services.mingetty.helpLine = mkIf cfg.doc.enable ( services.getty.helpLine = mkIf cfg.doc.enable (
"\nRun 'nixos-help' for the NixOS manual." "\nRun 'nixos-help' for the NixOS manual."
); );
}) })

View file

@ -66,6 +66,7 @@
./hardware/tuxedo-keyboard.nix ./hardware/tuxedo-keyboard.nix
./hardware/usb-wwan.nix ./hardware/usb-wwan.nix
./hardware/onlykey.nix ./hardware/onlykey.nix
./hardware/opentabletdriver.nix
./hardware/wooting.nix ./hardware/wooting.nix
./hardware/uinput.nix ./hardware/uinput.nix
./hardware/video/amdgpu.nix ./hardware/video/amdgpu.nix
@ -141,6 +142,7 @@
./programs/light.nix ./programs/light.nix
./programs/mosh.nix ./programs/mosh.nix
./programs/mininet.nix ./programs/mininet.nix
./programs/msmtp.nix
./programs/mtr.nix ./programs/mtr.nix
./programs/nano.nix ./programs/nano.nix
./programs/neovim.nix ./programs/neovim.nix
@ -538,6 +540,7 @@
./services/monitoring/do-agent.nix ./services/monitoring/do-agent.nix
./services/monitoring/fusion-inventory.nix ./services/monitoring/fusion-inventory.nix
./services/monitoring/grafana.nix ./services/monitoring/grafana.nix
./services/monitoring/grafana-image-renderer.nix
./services/monitoring/grafana-reporter.nix ./services/monitoring/grafana-reporter.nix
./services/monitoring/graphite.nix ./services/monitoring/graphite.nix
./services/monitoring/hdaps.nix ./services/monitoring/hdaps.nix
@ -743,6 +746,7 @@
./services/networking/skydns.nix ./services/networking/skydns.nix
./services/networking/shadowsocks.nix ./services/networking/shadowsocks.nix
./services/networking/shairport-sync.nix ./services/networking/shairport-sync.nix
./services/networking/shellhub-agent.nix
./services/networking/shorewall.nix ./services/networking/shorewall.nix
./services/networking/shorewall6.nix ./services/networking/shorewall6.nix
./services/networking/shout.nix ./services/networking/shout.nix
@ -848,7 +852,7 @@
./services/torrent/peerflix.nix ./services/torrent/peerflix.nix
./services/torrent/rtorrent.nix ./services/torrent/rtorrent.nix
./services/torrent/transmission.nix ./services/torrent/transmission.nix
./services/ttys/agetty.nix ./services/ttys/getty.nix
./services/ttys/gpm.nix ./services/ttys/gpm.nix
./services/ttys/kmscon.nix ./services/ttys/kmscon.nix
./services/wayland/cage.nix ./services/wayland/cage.nix

View file

@ -1,5 +1,10 @@
# A profile with most (vanilla) hardening options enabled by default, # A profile with most (vanilla) hardening options enabled by default,
# potentially at the cost of features and performance. # potentially at the cost of stability, features and performance.
#
# This profile enables options that are known to affect system
# stability. If you experience any stability issues when using the
# profile, try disabling it. If you report an issue and use this
# profile, always mention that you do.
{ config, lib, pkgs, ... }: { config, lib, pkgs, ... }:

View file

@ -45,10 +45,10 @@ with lib;
}; };
# Automatically log in at the virtual consoles. # Automatically log in at the virtual consoles.
services.mingetty.autologinUser = "nixos"; services.getty.autologinUser = "nixos";
# Some more help text. # Some more help text.
services.mingetty.helpLine = '' services.getty.helpLine = ''
The "nixos" and "root" accounts have empty passwords. The "nixos" and "root" accounts have empty passwords.
An ssh daemon is running. You then must set a password An ssh daemon is running. You then must set a password

View file

@ -27,8 +27,8 @@ if (!defined $res || scalar @$res == 0) {
my $package = @$res[0]->{package}; my $package = @$res[0]->{package};
if ($ENV{"NIX_AUTO_INSTALL"} // "") { if ($ENV{"NIX_AUTO_INSTALL"} // "") {
print STDERR <<EOF; print STDERR <<EOF;
The program $program is currently not installed. It is provided by The program '$program' is currently not installed. It is provided by
the package $package, which I will now install for you. the package '$package', which I will now install for you.
EOF EOF
; ;
exit 126 if system("nix-env", "-iA", "nixos.$package") == 0; exit 126 if system("nix-env", "-iA", "nixos.$package") == 0;
@ -36,16 +36,17 @@ EOF
exec("nix-shell", "-p", $package, "--run", shell_quote("exec", @ARGV)); exec("nix-shell", "-p", $package, "--run", shell_quote("exec", @ARGV));
} else { } else {
print STDERR <<EOF; print STDERR <<EOF;
The program $program is currently not installed. You can install it by typing: The program '$program' is not in your PATH. You can make it available in an
nix-env -iA nixos.$package ephemeral shell by typing:
nix-shell -p $package
EOF EOF
} }
} else { } else {
print STDERR <<EOF; print STDERR <<EOF;
The program $program is currently not installed. It is provided by The program '$program' is not in your PATH. It is provided by several packages.
several packages. You can install it by typing one of the following: You can make it available in an ephemeral shell by typing one of the following:
EOF EOF
print STDERR " nix-env -iA nixos.$_->{package}\n" foreach @$res; print STDERR " nix-shell -p $_->{package}\n" foreach @$res;
} }
exit 127; exit 127;

View file

@ -112,7 +112,7 @@ in
environment.etc."fish/nixos-env-preinit.fish".text = '' environment.etc."fish/nixos-env-preinit.fish".text = ''
# This happens before $__fish_datadir/config.fish sets fish_function_path, so it is currently # This happens before $__fish_datadir/config.fish sets fish_function_path, so it is currently
# unset. We set it and then completely erase it, leaving its configuration to $__fish_datadir/config.fish # unset. We set it and then completely erase it, leaving its configuration to $__fish_datadir/config.fish
set fish_function_path ${pkgs.fish-foreign-env}/share/fish-foreign-env/functions $__fish_datadir/functions set fish_function_path ${pkgs.fishPlugins.foreign-env}/share/fish/vendor_functions.d $__fish_datadir/functions
# source the NixOS environment config # source the NixOS environment config
if [ -z "$__NIXOS_SET_ENVIRONMENT_DONE" ] if [ -z "$__NIXOS_SET_ENVIRONMENT_DONE" ]
@ -128,7 +128,7 @@ in
# if we haven't sourced the general config, do it # if we haven't sourced the general config, do it
if not set -q __fish_nixos_general_config_sourced if not set -q __fish_nixos_general_config_sourced
set fish_function_path ${pkgs.fish-foreign-env}/share/fish-foreign-env/functions $fish_function_path set --prepend fish_function_path ${pkgs.fishPlugins.foreign-env}/share/fish/vendor_functions.d
fenv source /etc/fish/foreign-env/shellInit > /dev/null fenv source /etc/fish/foreign-env/shellInit > /dev/null
set -e fish_function_path[1] set -e fish_function_path[1]
@ -142,7 +142,7 @@ in
# if we haven't sourced the login config, do it # if we haven't sourced the login config, do it
status --is-login; and not set -q __fish_nixos_login_config_sourced status --is-login; and not set -q __fish_nixos_login_config_sourced
and begin and begin
set fish_function_path ${pkgs.fish-foreign-env}/share/fish-foreign-env/functions $fish_function_path set --prepend fish_function_path ${pkgs.fishPlugins.foreign-env}/share/fish/vendor_functions.d
fenv source /etc/fish/foreign-env/loginShellInit > /dev/null fenv source /etc/fish/foreign-env/loginShellInit > /dev/null
set -e fish_function_path[1] set -e fish_function_path[1]
@ -158,7 +158,7 @@ in
and begin and begin
${fishAliases} ${fishAliases}
set fish_function_path ${pkgs.fish-foreign-env}/share/fish-foreign-env/functions $fish_function_path set --prepend fish_function_path ${pkgs.fishPlugins.foreign-env}/share/fish/vendor_functions.d
fenv source /etc/fish/foreign-env/interactiveShellInit > /dev/null fenv source /etc/fish/foreign-env/interactiveShellInit > /dev/null
set -e fish_function_path[1] set -e fish_function_path[1]

View file

@ -0,0 +1,104 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.programs.msmtp;
in {
meta.maintainers = with maintainers; [ pacien ];
options = {
programs.msmtp = {
enable = mkEnableOption "msmtp - an SMTP client";
setSendmail = mkOption {
type = types.bool;
default = true;
description = ''
Whether to set the system sendmail to msmtp's.
'';
};
defaults = mkOption {
type = types.attrs;
default = {};
example = {
aliases = "/etc/aliases";
port = 587;
tls = true;
};
description = ''
Default values applied to all accounts.
See msmtp(1) for the available options.
'';
};
accounts = mkOption {
type = with types; attrsOf attrs;
default = {};
example = {
"default" = {
host = "smtp.example";
auth = true;
user = "someone";
passwordeval = "cat /secrets/password.txt";
};
};
description = ''
Named accounts and their respective configurations.
The special name "default" allows a default account to be defined.
See msmtp(1) for the available options.
Use `programs.msmtp.extraConfig` instead of this attribute set-based
option if ordered account inheritance is needed.
It is advised to use the `passwordeval` setting to read the password
from a secret file to avoid having it written in the world-readable
nix store. The password file must end with a newline (`\n`).
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Extra lines to add to the msmtp configuration verbatim.
See msmtp(1) for the syntax and available options.
'';
};
};
};
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.msmtp ];
services.mail.sendmailSetuidWrapper = mkIf cfg.setSendmail {
program = "sendmail";
source = "${pkgs.msmtp}/bin/sendmail";
setuid = false;
setgid = false;
};
environment.etc."msmtprc".text = let
mkValueString = v:
if v == true then "on"
else if v == false then "off"
else generators.mkValueStringDefault {} v;
mkKeyValueString = k: v: "${k} ${mkValueString v}";
mkInnerSectionString =
attrs: concatStringsSep "\n" (mapAttrsToList mkKeyValueString attrs);
mkAccountString = name: attrs: ''
account ${name}
${mkInnerSectionString attrs}
'';
in ''
defaults
${mkInnerSectionString cfg.defaults}
${concatStringsSep "\n" (mapAttrsToList mkAccountString cfg.accounts)}
${cfg.extraConfig}
'';
};
}
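For reference, a minimal configuration using this new module might look like the following sketch (the host, user, and secret path are illustrative placeholders):

```nix
{
  programs.msmtp = {
    enable = true;
    # Applied to every account; see msmtp(1) for available settings.
    defaults = {
      aliases = "/etc/aliases";
      port = 587;
      tls = true;
    };
    accounts.default = {
      host = "smtp.example.org";  # placeholder
      auth = true;
      user = "someone";           # placeholder
      # Read the password from a file outside the world-readable Nix store;
      # per the option docs, the file must end with a newline.
      passwordeval = "cat /secrets/smtp-password.txt";
    };
  };
}
```

With `setSendmail` left at its default of `true`, programs invoking the system sendmail will hand mail to msmtp.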

View file

@ -162,15 +162,16 @@ in
(mkIf (cfg.authPassFile != null) { AuthPassFile = cfg.authPassFile; }) (mkIf (cfg.authPassFile != null) { AuthPassFile = cfg.authPassFile; })
]; ];
environment.etc."ssmtp/ssmtp.conf".source = # careful here: ssmtp REQUIRES all config lines to end with a newline char!
let environment.etc."ssmtp/ssmtp.conf".text = with generators; toKeyValue {
toStr = value: mkKeyValue = mkKeyValueDefault {
mkValueString = value:
if value == true then "YES" if value == true then "YES"
else if value == false then "NO" else if value == false then "NO"
else builtins.toString value else mkValueStringDefault {} value
; ;
in } "=";
pkgs.writeText "ssmtp.conf" (concatStringsSep "\n" (mapAttrsToList (key: value: "${key}=${toStr value}") cfg.settings)); } cfg.settings;
environment.systemPackages = [pkgs.ssmtp]; environment.systemPackages = [pkgs.ssmtp];
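To make the refactoring above concrete, here is a standalone sketch of the same generators pipeline; the point of the change is that `toKeyValue` terminates every line, including the last, with a newline, which ssmtp requires:

```nix
let
  lib = import <nixpkgs/lib>;
  inherit (lib) generators;
  toConf = generators.toKeyValue {
    mkKeyValue = generators.mkKeyValueDefault {
      mkValueString = v:
        if v == true then "YES"
        else if v == false then "NO"
        else generators.mkValueStringDefault {} v;
    } "=";
  };
in
  # Attribute names are emitted in sorted order, booleans as YES/NO,
  # and each key=value line ends with "\n".
  toConf { Root = "postmaster"; UseTLS = true; UseSTARTTLS = false; }
```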

View file

@ -394,7 +394,7 @@ let
${optionalString cfg.requireWheel ${optionalString cfg.requireWheel
"auth required pam_wheel.so use_uid"} "auth required pam_wheel.so use_uid"}
${optionalString cfg.logFailures ${optionalString cfg.logFailures
"auth required pam_tally.so"} "auth required pam_faillock.so"}
${optionalString (config.security.pam.enableSSHAgentAuth && cfg.sshAgentAuth) ${optionalString (config.security.pam.enableSSHAgentAuth && cfg.sshAgentAuth)
"auth sufficient ${pkgs.pam_ssh_agent_auth}/libexec/pam_ssh_agent_auth.so file=${lib.concatStringsSep ":" config.services.openssh.authorizedKeysFiles}"} "auth sufficient ${pkgs.pam_ssh_agent_auth}/libexec/pam_ssh_agent_auth.so file=${lib.concatStringsSep ":" config.services.openssh.authorizedKeysFiles}"}
${optionalString cfg.fprintAuth ${optionalString cfg.fprintAuth

View file

@ -5,7 +5,7 @@ with lib;
let let
dataDir = "/var/lib/matrix-appservice-discord"; dataDir = "/var/lib/matrix-appservice-discord";
registrationFile = "${dataDir}/discord-registration.yaml"; registrationFile = "${dataDir}/discord-registration.yaml";
appDir = "${pkgs.matrix-appservice-discord}/lib/node_modules/matrix-appservice-discord"; appDir = "${pkgs.matrix-appservice-discord}/${pkgs.matrix-appservice-discord.passthru.nodeAppDir}";
cfg = config.services.matrix-appservice-discord; cfg = config.services.matrix-appservice-discord;
# TODO: switch to configGen.json once RFC42 is implemented # TODO: switch to configGen.json once RFC42 is implemented
settingsFile = pkgs.writeText "matrix-appservice-discord-settings.json" (builtins.toJSON cfg.settings); settingsFile = pkgs.writeText "matrix-appservice-discord-settings.json" (builtins.toJSON cfg.settings);
@ -22,12 +22,6 @@ in {
default = { default = {
database = { database = {
filename = "${dataDir}/discord.db"; filename = "${dataDir}/discord.db";
# TODO: remove those old config keys once the following issues are solved:
# * https://github.com/Half-Shot/matrix-appservice-discord/issues/490
# * https://github.com/Half-Shot/matrix-appservice-discord/issues/498
userStorePath = "${dataDir}/user-store.db";
roomStorePath = "${dataDir}/room-store.db";
}; };
# empty values necessary for registration file generation # empty values necessary for registration file generation

View file

@ -0,0 +1,150 @@
{ lib, pkgs, config, ... }:
with lib;
let
cfg = config.services.grafana-image-renderer;
format = pkgs.formats.json { };
configFile = format.generate "grafana-image-renderer-config.json" cfg.settings;
in {
options.services.grafana-image-renderer = {
enable = mkEnableOption "grafana-image-renderer";
chromium = mkOption {
type = types.package;
description = ''
The Chromium package to use for image rendering.
'';
};
verbose = mkEnableOption "verbosity for the service";
provisionGrafana = mkEnableOption "Grafana configuration for grafana-image-renderer";
settings = mkOption {
type = types.submodule {
freeformType = format.type;
options = {
service = {
port = mkOption {
type = types.port;
default = 8081;
description = ''
The TCP port to use for the rendering server.
'';
};
logging.level = mkOption {
type = types.enum [ "error" "warning" "info" "debug" ];
default = "info";
description = ''
The log level of the <filename>grafana-image-renderer.service</filename> unit.
'';
};
};
rendering = {
width = mkOption {
default = 1000;
type = types.ints.positive;
description = ''
Width of the PNG used to display the alerting graph.
'';
};
height = mkOption {
default = 500;
type = types.ints.positive;
description = ''
Height of the PNG used to display the alerting graph.
'';
};
mode = mkOption {
default = "default";
type = types.enum [ "default" "reusable" "clustered" ];
description = ''
Rendering mode of <package>grafana-image-renderer</package>:
<itemizedlist>
<listitem><para><literal>default:</literal> Creates one browser instance
per rendering request.</para></listitem>
<listitem><para><literal>reusable:</literal> One browser instance
will be started and reused for each rendering request.</para></listitem>
<listitem><para><literal>clustered:</literal> allows precise control
over how many browser instances are used. The values
for that mode can be declared in <literal>rendering.clustering</literal>.
</para></listitem>
</itemizedlist>
'';
};
args = mkOption {
type = types.listOf types.str;
default = [ "--no-sandbox" ];
description = ''
List of CLI flags passed to <package>chromium</package>.
'';
};
};
};
};
default = {};
description = ''
Configuration attributes for <package>grafana-image-renderer</package>.
See <link xlink:href="https://github.com/grafana/grafana-image-renderer/blob/ce1f81438e5f69c7fd7c73ce08bab624c4c92e25/default.json" />
for supported values.
'';
};
};
config = mkIf cfg.enable {
assertions = [
{ assertion = cfg.provisionGrafana -> config.services.grafana.enable;
message = ''
To provision a Grafana instance to use grafana-image-renderer,
`services.grafana.enable` must be set to `true`!
'';
}
];
services.grafana.extraOptions = mkIf cfg.provisionGrafana {
RENDERING_SERVER_URL = "http://localhost:${toString cfg.settings.service.port}/render";
RENDERING_CALLBACK_URL = "http://localhost:${toString config.services.grafana.port}";
};
services.grafana-image-renderer.chromium = mkDefault pkgs.chromium;
services.grafana-image-renderer.settings = {
rendering = mapAttrs (const mkDefault) {
chromeBin = "${cfg.chromium}/bin/chromium";
verboseLogging = cfg.verbose;
timezone = config.time.timeZone;
};
services = {
logging.level = mkIf cfg.verbose (mkDefault "debug");
metrics.enabled = mkDefault false;
};
};
systemd.services.grafana-image-renderer = {
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
description = "A Grafana backend plugin that handles rendering of panels & dashboards to PNGs using a headless browser (Chromium/Chrome)";
environment = {
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD = "true";
};
serviceConfig = {
DynamicUser = true;
PrivateTmp = true;
ExecStart = "${pkgs.grafana-image-renderer}/bin/grafana-image-renderer server --config=${configFile}";
Restart = "always";
};
};
};
meta.maintainers = with maintainers; [ ma27 ];
}
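As a sketch, enabling the new service together with a provisioned Grafana instance could look like this (the rendering mode is an illustrative choice):

```nix
{
  services.grafana.enable = true;
  services.grafana-image-renderer = {
    enable = true;
    # Wires RENDERING_SERVER_URL / RENDERING_CALLBACK_URL into Grafana;
    # requires services.grafana.enable = true (enforced by the assertion).
    provisionGrafana = true;
    settings.rendering.mode = "reusable";  # reuse one browser instance
  };
}
```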

View file

@ -5,10 +5,11 @@ with lib;
let let
cfg = config.services.grafana; cfg = config.services.grafana;
opt = options.services.grafana; opt = options.services.grafana;
declarativePlugins = pkgs.linkFarm "grafana-plugins" (builtins.map (pkg: { name = pkg.pname; path = pkg; }) cfg.declarativePlugins);
envOptions = { envOptions = {
PATHS_DATA = cfg.dataDir; PATHS_DATA = cfg.dataDir;
PATHS_PLUGINS = "${cfg.dataDir}/plugins"; PATHS_PLUGINS = if builtins.isNull cfg.declarativePlugins then "${cfg.dataDir}/plugins" else declarativePlugins;
PATHS_LOGS = "${cfg.dataDir}/log"; PATHS_LOGS = "${cfg.dataDir}/log";
SERVER_PROTOCOL = cfg.protocol; SERVER_PROTOCOL = cfg.protocol;
@ -260,6 +261,12 @@ in {
defaultText = "pkgs.grafana"; defaultText = "pkgs.grafana";
type = types.package; type = types.package;
}; };
declarativePlugins = mkOption {
type = with types; nullOr (listOf path);
default = null;
description = "If non-null, then a list of packages containing Grafana plugins to install. If set, plugins cannot be manually installed.";
example = literalExample "with pkgs.grafanaPlugins; [ grafana-piechart-panel ]";
};
dataDir = mkOption { dataDir = mkOption {
description = "Data directory."; description = "Data directory.";

View file

@ -112,17 +112,21 @@ let
http://tools.ietf.org/html/rfc4366#section-3.1 http://tools.ietf.org/html/rfc4366#section-3.1
''; '';
}; };
remote_timeout = mkDefOpt types.str "30s" ''
Timeout for requests to the remote write endpoint.
'';
relabel_configs = mkOpt (types.listOf promTypes.relabel_config) ''
List of remote write relabel configurations.
List of relabel configurations.
'';
name = mkOpt types.string '' name = mkOpt types.string ''
Name of the remote write config, which if specified must be unique among remote write configs. Name of the remote read config, which if specified must be unique among remote read configs.
The name will be used in metrics and logging in place of a generated value to help users distinguish between The name will be used in metrics and logging in place of a generated value to help users distinguish between
remote write configs. remote read configs.
'';
required_matchers = mkOpt (types.attrsOf types.str) ''
An optional list of equality matchers which have to be
present in a selector to query the remote read endpoint.
'';
remote_timeout = mkOpt types.str ''
Timeout for requests to the remote read endpoint.
'';
read_recent = mkOpt types.bool ''
Whether reads should be made for queries for time ranges that
the local storage should have complete data for.
''; '';
basic_auth = mkOpt (types.submodule { basic_auth = mkOpt (types.submodule {
options = { options = {
@ -136,30 +140,22 @@ let
password_file = mkOpt types.str "HTTP password file"; password_file = mkOpt types.str "HTTP password file";
}; };
}) '' }) ''
Sets the `Authorization` header on every remote write request with the Sets the `Authorization` header on every remote read request with the
configured username and password. configured username and password.
password and password_file are mutually exclusive. password and password_file are mutually exclusive.
''; '';
bearer_token = mkOpt types.str '' bearer_token = mkOpt types.str ''
Sets the `Authorization` header on every remote write request with Sets the `Authorization` header on every remote read request with
the configured bearer token. It is mutually exclusive with `bearer_token_file`. the configured bearer token. It is mutually exclusive with `bearer_token_file`.
''; '';
bearer_token_file = mkOpt types.str '' bearer_token_file = mkOpt types.str ''
Sets the `Authorization` header on every remote write request with the bearer token Sets the `Authorization` header on every remote read request with the bearer token
read from the configured file. It is mutually exclusive with `bearer_token`. read from the configured file. It is mutually exclusive with `bearer_token`.
''; '';
tls_config = mkOpt promTypes.tls_config '' tls_config = mkOpt promTypes.tls_config ''
Configures the remote write request's TLS settings. Configures the remote read request's TLS settings.
''; '';
proxy_url = mkOpt types.str "Optional Proxy URL."; proxy_url = mkOpt types.str "Optional Proxy URL.";
metadata_config = {
send = mkDefOpt types.bool "true" ''
Whether metric metadata is sent to remote storage or not.
'';
send_interval = mkDefOpt types.str "1m" ''
How frequently metric metadata is sent to remote storage.
'';
};
}; };
}; };
@ -172,13 +168,12 @@ let
http://tools.ietf.org/html/rfc4366#section-3.1 http://tools.ietf.org/html/rfc4366#section-3.1
''; '';
}; };
remote_timeout = mkDefOpt types.str "30s" '' remote_timeout = mkOpt types.str ''
Timeout for requests to the remote write endpoint. Timeout for requests to the remote write endpoint.
''; '';
relabel_configs = mkOpt (types.listOf promTypes.relabel_config) '' write_relabel_configs = mkOpt (types.listOf promTypes.relabel_config) ''
List of remote write relabel configurations. List of remote write relabel configurations.
List of relabel configurations. '';
'';
name = mkOpt types.string '' name = mkOpt types.string ''
Name of the remote write config, which if specified must be unique among remote write configs. Name of the remote write config, which if specified must be unique among remote write configs.
The name will be used in metrics and logging in place of a generated value to help users distinguish between The name will be used in metrics and logging in place of a generated value to help users distinguish between
@ -212,14 +207,50 @@ let
Configures the remote write request's TLS settings. Configures the remote write request's TLS settings.
''; '';
proxy_url = mkOpt types.str "Optional Proxy URL."; proxy_url = mkOpt types.str "Optional Proxy URL.";
metadata_config = { queue_config = mkOpt (types.submodule {
send = mkDefOpt types.bool "true" '' options = {
Whether metric metadata is sent to remote storage or not. capacity = mkOpt types.int ''
''; Number of samples to buffer per shard before we block reading of more
send_interval = mkDefOpt types.str "1m" '' samples from the WAL. It is recommended to have enough capacity in each
How frequently metric metadata is sent to remote storage. shard to buffer several requests to keep throughput up while processing
''; occasional slow remote requests.
}; '';
max_shards = mkOpt types.int ''
Maximum number of shards, i.e. amount of concurrency.
'';
min_shards = mkOpt types.int ''
Minimum number of shards, i.e. amount of concurrency.
'';
max_samples_per_send = mkOpt types.int ''
Maximum number of samples per send.
'';
batch_send_deadline = mkOpt types.str ''
Maximum time a sample will wait in buffer.
'';
min_backoff = mkOpt types.str ''
Initial retry delay. Gets doubled for every retry.
'';
max_backoff = mkOpt types.str ''
Maximum retry delay.
'';
};
}) ''
Configures the queue used to write to remote storage.
'';
metadata_config = mkOpt (types.submodule {
options = {
send = mkOpt types.bool ''
Whether metric metadata is sent to remote storage or not.
'';
send_interval = mkOpt types.str ''
How frequently metric metadata is sent to remote storage.
'';
};
}) ''
Configures the sending of series metadata to remote storage.
Metadata configuration is subject to change at any point
or be removed in future releases.
'';
}; };
}; };
@ -554,10 +585,10 @@ let
regular expression matches. regular expression matches.
''; '';
action = mkDefOpt (types.enum ["replace" "keep" "drop"]) "replace" '' action =
mkDefOpt (types.enum ["replace" "keep" "drop" "hashmod" "labelmap" "labeldrop" "labelkeep"]) "replace" ''
Action to perform based on regex matching. Action to perform based on regex matching.
''; '';
}; };
}; };
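Assuming these types back the `services.prometheus.remoteWrite` option (an assumption; check the module that consumes `promTypes`), the new `queue_config` could be tuned like so, with illustrative numbers:

```nix
{
  services.prometheus.remoteWrite = [{
    url = "http://remote-storage.example:8428/api/v1/write";  # placeholder
    queue_config = {
      capacity = 10000;        # samples buffered per shard
      max_shards = 10;
      max_samples_per_send = 2000;
      batch_send_deadline = "5s";
      min_backoff = "30ms";    # doubled on each retry
      max_backoff = "5s";
    };
  }];
}
```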

View file

@ -23,6 +23,7 @@ let
exporterOpts = genAttrs [ exporterOpts = genAttrs [
"apcupsd" "apcupsd"
"bind" "bind"
"bird"
"blackbox" "blackbox"
"collectd" "collectd"
"dnsmasq" "dnsmasq"
@ -235,8 +236,6 @@ in
services.prometheus.exporters.minio.minioAddress = mkDefault "http://localhost:9000"; services.prometheus.exporters.minio.minioAddress = mkDefault "http://localhost:9000";
services.prometheus.exporters.minio.minioAccessKey = mkDefault config.services.minio.accessKey; services.prometheus.exporters.minio.minioAccessKey = mkDefault config.services.minio.accessKey;
services.prometheus.exporters.minio.minioAccessSecret = mkDefault config.services.minio.secretKey; services.prometheus.exporters.minio.minioAccessSecret = mkDefault config.services.minio.secretKey;
})] ++ [(mkIf config.services.rspamd.enable {
services.prometheus.exporters.rspamd.url = mkDefault "http://localhost:11334/stat";
})] ++ [(mkIf config.services.prometheus.exporters.rtl_433.enable { })] ++ [(mkIf config.services.prometheus.exporters.rtl_433.enable {
hardware.rtl-sdr.enable = mkDefault true; hardware.rtl-sdr.enable = mkDefault true;
})] ++ [(mkIf config.services.nginx.enable { })] ++ [(mkIf config.services.nginx.enable {

View file

@ -0,0 +1,46 @@
{ config, lib, pkgs, options }:
with lib;
let
cfg = config.services.prometheus.exporters.bird;
in
{
port = 9324;
extraOpts = {
birdVersion = mkOption {
type = types.enum [ 1 2 ];
default = 2;
description = ''
Specifies whether BIRD1 or BIRD2 is in use.
'';
};
birdSocket = mkOption {
type = types.path;
default = "/var/run/bird.ctl";
description = ''
Path to BIRD2 (or BIRD1 v4) socket.
'';
};
newMetricFormat = mkOption {
type = types.bool;
default = true;
description = ''
Enable the new more-generic metric format.
'';
};
};
serviceOpts = {
serviceConfig = {
SupplementaryGroups = singleton (if cfg.birdVersion == 1 then "bird" else "bird2");
ExecStart = ''
${pkgs.prometheus-bird-exporter}/bin/bird_exporter \
-web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
-bird.socket ${cfg.birdSocket} \
-bird.v2=${if cfg.birdVersion == 2 then "true" else "false"} \
-format.new=${if cfg.newMetricFormat then "true" else "false"} \
${concatStringsSep " \\\n " cfg.extraFlags}
'';
};
};
}
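For illustration, enabling the new exporter against a BIRD2 daemon might look like this (assuming the usual `services.prometheus.exporters.*` plumbing):

```nix
{
  services.bird2.enable = true;
  services.prometheus.exporters.bird = {
    enable = true;
    birdVersion = 2;                   # adds the unit to the "bird2" group
    birdSocket = "/var/run/bird.ctl";  # module default, shown for clarity
  };
}
```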

View file

@ -8,28 +8,36 @@ in
{ {
port = 7979; port = 7979;
extraOpts = { extraOpts = {
url = mkOption {
type = types.str;
description = ''
URL to scrape JSON from.
'';
};
configFile = mkOption { configFile = mkOption {
type = types.path; type = types.path;
description = '' description = ''
Path to configuration file. Path to configuration file.
''; '';
}; };
listenAddress = {}; # not used
}; };
serviceOpts = { serviceOpts = {
serviceConfig = { serviceConfig = {
ExecStart = '' ExecStart = ''
${pkgs.prometheus-json-exporter}/bin/prometheus-json-exporter \ ${pkgs.prometheus-json-exporter}/bin/json_exporter \
--port ${toString cfg.port} \ --config.file ${escapeShellArg cfg.configFile} \
${cfg.url} ${escapeShellArg cfg.configFile} \ --web.listen-address="${cfg.listenAddress}:${toString cfg.port}" \
${concatStringsSep " \\\n " cfg.extraFlags} ${concatStringsSep " \\\n " cfg.extraFlags}
''; '';
}; };
}; };
imports = [
(mkRemovedOptionModule [ "url" ] ''
This option was removed. The URL of the endpoint serving JSON
must now be provided to the exporter by prometheus via the url
parameter `target'.
In prometheus a scrape URL would look like this:
http://some.json-exporter.host:7979/probe?target=https://example.com/some/json/endpoint
For more information, take a look at the official documentation
(https://github.com/prometheus-community/json_exporter) of the json_exporter.
'')
({ options.warnings = options.warnings; options.assertions = options.assertions; })
];
} }
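Following the removal notice above, the scrape target now travels in the `target` URL parameter. A sketch of a matching Prometheus scrape configuration (host names are placeholders; production setups typically also add relabeling so the `instance` label reflects the probed endpoint):

```nix
{
  services.prometheus.scrapeConfigs = [{
    job_name = "json";
    metrics_path = "/probe";
    params.target = [ "https://example.com/some/json/endpoint" ];
    static_configs = [{
      targets = [ "some.json-exporter.host:7979" ];
    }];
  }];
}
```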

View file

@ -10,64 +10,55 @@ let
echo '${builtins.toJSON conf}' | ${pkgs.buildPackages.jq}/bin/jq '.' > $out echo '${builtins.toJSON conf}' | ${pkgs.buildPackages.jq}/bin/jq '.' > $out
''; '';
generateConfig = extraLabels: (map (path: { generateConfig = extraLabels: {
name = "rspamd_${replaceStrings [ "." " " ] [ "_" "_" ] path}"; metrics = (map (path: {
path = "$.${path}"; name = "rspamd_${replaceStrings [ "." " " ] [ "_" "_" ] path}";
labels = extraLabels; path = "$.${path}";
}) [ labels = extraLabels;
"actions.'add header'" }) [
"actions.'no action'" "actions.'add header'"
"actions.'rewrite subject'" "actions.'no action'"
"actions.'soft reject'" "actions.'rewrite subject'"
"actions.greylist" "actions.'soft reject'"
"actions.reject" "actions.greylist"
"bytes_allocated" "actions.reject"
"chunks_allocated" "bytes_allocated"
"chunks_freed" "chunks_allocated"
"chunks_oversized" "chunks_freed"
"connections" "chunks_oversized"
"control_connections" "connections"
"ham_count" "control_connections"
"learned" "ham_count"
"pools_allocated" "learned"
"pools_freed" "pools_allocated"
"read_only" "pools_freed"
"scanned" "read_only"
"shared_chunks_allocated" "scanned"
"spam_count" "shared_chunks_allocated"
"total_learns" "spam_count"
]) ++ [{ "total_learns"
name = "rspamd_statfiles"; ]) ++ [{
type = "object"; name = "rspamd_statfiles";
path = "$.statfiles[*]"; type = "object";
labels = recursiveUpdate { path = "$.statfiles[*]";
symbol = "$.symbol"; labels = recursiveUpdate {
type = "$.type"; symbol = "$.symbol";
} extraLabels; type = "$.type";
values = { } extraLabels;
revision = "$.revision"; values = {
size = "$.size"; revision = "$.revision";
total = "$.total"; size = "$.size";
used = "$.used"; total = "$.total";
languages = "$.languages"; used = "$.used";
users = "$.users"; languages = "$.languages";
}; users = "$.users";
}]; };
}];
};
in in
{ {
port = 7980; port = 7980;
extraOpts = { extraOpts = {
listenAddress = {}; # not used
url = mkOption {
type = types.str;
description = ''
URL to the rspamd metrics endpoint.
Defaults to http://localhost:11334/stat when
<option>services.rspamd.enable</option> is true.
'';
};
extraLabels = mkOption { extraLabels = mkOption {
type = types.attrsOf types.str; type = types.attrsOf types.str;
default = { default = {
@ -84,9 +75,25 @@ in
}; };
}; };
serviceOpts.serviceConfig.ExecStart = '' serviceOpts.serviceConfig.ExecStart = ''
${pkgs.prometheus-json-exporter}/bin/prometheus-json-exporter \ ${pkgs.prometheus-json-exporter}/bin/json_exporter \
--port ${toString cfg.port} \ --config.file ${prettyJSON (generateConfig cfg.extraLabels)} \
${cfg.url} ${prettyJSON (generateConfig cfg.extraLabels)} \ --web.listen-address "${cfg.listenAddress}:${toString cfg.port}" \
${concatStringsSep " \\\n " cfg.extraFlags} ${concatStringsSep " \\\n " cfg.extraFlags}
''; '';
imports = [
(mkRemovedOptionModule [ "url" ] ''
This option was removed. The URL of the rspamd metrics endpoint
must now be provided to the exporter by prometheus via the url
parameter `target'.
In prometheus a scrape URL would look like this:
http://some.rspamd-exporter.host:7980/probe?target=http://some.rspamd.host:11334/stat
For more information, take a look at the official documentation
(https://github.com/prometheus-community/json_exporter) of the json_exporter.
'')
({ options.warnings = options.warnings; options.assertions = options.assertions; })
];
} }
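For context, the probe-style scraping described in the removal notice above could be wired up on the Prometheus side roughly like this (a sketch using the NixOS `services.prometheus.scrapeConfigs` option; the host names are the placeholders from the notice, not real endpoints):

```nix
# Hypothetical scrape job matching the new probe-style rspamd exporter.
services.prometheus.scrapeConfigs = [{
  job_name = "rspamd";
  metrics_path = "/probe";
  # The rspamd stat endpoint is passed as the `target` URL parameter.
  params.target = [ "http://some.rspamd.host:11334/stat" ];
  static_configs = [{
    targets = [ "some.rspamd-exporter.host:7980" ];
  }];
}];
```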

View file

@@ -4,13 +4,7 @@ with lib;
 let
   cfg = config.services.corerad;
-  writeTOML = name: x:
-    pkgs.runCommandNoCCLocal name {
-      passAsFile = ["config"];
-      config = builtins.toJSON x;
-      buildInputs = [ pkgs.go-toml ];
-    } "jsontoml < $configPath > $out";
+  settingsFormat = pkgs.formats.toml {};
 in {
   meta.maintainers = with maintainers; [ mdlayher ];
@@ -19,7 +13,7 @@ in {
     enable = mkEnableOption "CoreRAD IPv6 NDP RA daemon";
     settings = mkOption {
-      type = types.uniq types.attrs;
+      type = settingsFormat.type;
       example = literalExample ''
         {
           interfaces = [
@@ -64,7 +58,7 @@ in {
   config = mkIf cfg.enable {
     # Prefer the config file over settings if both are set.
-    services.corerad.configFile = mkDefault (writeTOML "corerad.toml" cfg.settings);
+    services.corerad.configFile = mkDefault (settingsFormat.generate "corerad.toml" cfg.settings);
     systemd.services.corerad = {
       description = "CoreRAD IPv6 NDP RA daemon";

View file

@@ -16,7 +16,7 @@ let
     ${concatMapStrings (f: "actionsfile ${f}\n") cfg.actionsFiles}
     ${concatMapStrings (f: "filterfile ${f}\n") cfg.filterFiles}
   '' + optionalString cfg.enableTor ''
-    forward-socks4a / ${config.services.tor.client.socksListenAddressFaster} .
+    forward-socks5t / 127.0.0.1:9063 .
     toggle 1
     enable-remote-toggle 0
     enable-edit-actions 0
@@ -123,6 +123,11 @@ in
       serviceConfig.ProtectSystem = "full";
     };
+    services.tor.settings.SOCKSPort = mkIf cfg.enableTor [
+      # Route HTTP traffic over a faster port (without IsolateDestAddr).
+      { addr = "127.0.0.1"; port = 9063; IsolateDestAddr = false; }
+    ];
   };
   meta.maintainers = with lib.maintainers; [ rnhmjoj ];

View file

@@ -0,0 +1,91 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.shellhub-agent;
in {
###### interface
options = {
services.shellhub-agent = {
enable = mkOption {
type = types.bool;
default = false;
description = ''
Whether to enable the ShellHub Agent daemon, which allows
secure remote logins.
'';
};
package = mkOption {
type = types.package;
default = pkgs.shellhub-agent;
defaultText = "pkgs.shellhub-agent";
description = ''
Which ShellHub Agent package to use.
'';
};
tenantId = mkOption {
type = types.str;
example = "ba0a880c-2ada-11eb-a35e-17266ef329d6";
description = ''
The tenant ID to use when connecting to the ShellHub
Gateway.
'';
};
server = mkOption {
type = types.str;
default = "https://cloud.shellhub.io";
description = ''
Server address of ShellHub Gateway to connect.
'';
};
privateKey = mkOption {
type = types.path;
default = "/var/lib/shellhub-agent/private.key";
description = ''
Location where to store the ShellHub Agent private
key.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
systemd.services.shellhub-agent = {
description = "ShellHub Agent";
wantedBy = [ "multi-user.target" ];
requires = [ "local-fs.target" ];
wants = [ "network-online.target" ];
after = [
"local-fs.target"
"network.target"
"network-online.target"
"time-sync.target"
];
environment.SERVER_ADDRESS = cfg.server;
environment.PRIVATE_KEY = cfg.privateKey;
environment.TENANT_ID = cfg.tenantId;
serviceConfig = {
# The service starts sessions for different users.
User = "root";
Restart = "on-failure";
ExecStart = "${cfg.package}/bin/agent";
};
};
environment.systemPackages = [ cfg.package ];
};
}
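A minimal configuration enabling the new shellhub-agent module might look like this (the tenant ID is the placeholder value from the option's own example; `server` and `privateKey` keep their defaults):

```nix
services.shellhub-agent = {
  enable = true;
  # Placeholder tenant ID from the module's example, not a real tenant.
  tenantId = "ba0a880c-2ada-11eb-a35e-17266ef329d6";
};
```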

File diff suppressed because it is too large

View file

@@ -4,7 +4,7 @@ with lib;
 let
-  autologinArg = optionalString (config.services.mingetty.autologinUser != null) "--autologin ${config.services.mingetty.autologinUser}";
+  autologinArg = optionalString (config.services.getty.autologinUser != null) "--autologin ${config.services.getty.autologinUser}";
   gettyCmd = extraArgs: "@${pkgs.util-linux}/sbin/agetty agetty --login-program ${pkgs.shadow}/bin/login ${autologinArg} ${extraArgs}";
 in
@@ -13,9 +13,13 @@ in
   ###### interface
+  imports = [
+    (mkRenamedOptionModule [ "services" "mingetty" ] [ "services" "getty" ])
+  ];
   options = {
-    services.mingetty = {
+    services.getty = {
       autologinUser = mkOption {
         type = types.nullOr types.str;
@@ -29,7 +33,7 @@ in
       greetingLine = mkOption {
         type = types.str;
         description = ''
-          Welcome line printed by mingetty.
+          Welcome line printed by agetty.
           The default shows current NixOS version label, machine type and tty.
         '';
       };
@@ -38,7 +42,7 @@ in
         type = types.lines;
         default = "";
         description = ''
-          Help line printed by mingetty below the welcome line.
+          Help line printed by agetty below the welcome line.
           Used by the installation CD to give some hints on
           how to proceed.
         '';
@@ -65,7 +69,7 @@ in
   config = {
     # Note: this is set here rather than up there so that changing
     # nixos.label would not rebuild manual pages
-    services.mingetty.greetingLine = mkDefault ''<<< Welcome to NixOS ${config.system.nixos.label} (\m) - \l >>>'';
+    services.getty.greetingLine = mkDefault ''<<< Welcome to NixOS ${config.system.nixos.label} (\m) - \l >>>'';
     systemd.services."getty@" =
       { serviceConfig.ExecStart = [
@@ -76,7 +80,7 @@ in
       };
     systemd.services."serial-getty@" =
-      let speeds = concatStringsSep "," (map toString config.services.mingetty.serialSpeed); in
+      let speeds = concatStringsSep "," (map toString config.services.getty.serialSpeed); in
       { serviceConfig.ExecStart = [
           "" # override upstream default with an empty ExecStart
           (gettyCmd "%I ${speeds} $TERM")
@@ -106,8 +110,8 @@ in
     { # Friendly greeting on the virtual consoles.
       source = pkgs.writeText "issue" ''
-        ${config.services.mingetty.greetingLine}
-        ${config.services.mingetty.helpLine}
+        ${config.services.getty.greetingLine}
+        ${config.services.getty.helpLine}
       '';
     };

View file

@@ -27,6 +27,33 @@ let
   ) cfg.virtualHosts;
   enableIPv6 = config.networking.enableIPv6;
+  defaultFastcgiParams = {
+    SCRIPT_FILENAME   = "$document_root$fastcgi_script_name";
+    QUERY_STRING      = "$query_string";
+    REQUEST_METHOD    = "$request_method";
+    CONTENT_TYPE      = "$content_type";
+    CONTENT_LENGTH    = "$content_length";
+    SCRIPT_NAME       = "$fastcgi_script_name";
+    REQUEST_URI       = "$request_uri";
+    DOCUMENT_URI      = "$document_uri";
+    DOCUMENT_ROOT     = "$document_root";
+    SERVER_PROTOCOL   = "$server_protocol";
+    REQUEST_SCHEME    = "$scheme";
+    HTTPS             = "$https if_not_empty";
+    GATEWAY_INTERFACE = "CGI/1.1";
+    SERVER_SOFTWARE   = "nginx/$nginx_version";
+    REMOTE_ADDR       = "$remote_addr";
+    REMOTE_PORT       = "$remote_port";
+    SERVER_ADDR       = "$server_addr";
+    SERVER_PORT       = "$server_port";
+    SERVER_NAME       = "$server_name";
+    REDIRECT_STATUS   = "200";
+  };
   recommendedProxyConfig = pkgs.writeText "nginx-recommended-proxy-headers.conf" ''
     proxy_set_header        Host $host;
     proxy_set_header        X-Real-IP $remote_addr;
@@ -283,6 +310,10 @@ let
           proxy_set_header        Upgrade $http_upgrade;
           proxy_set_header        Connection $connection_upgrade;
         ''}
+        ${concatStringsSep "\n"
+          (mapAttrsToList (n: v: ''fastcgi_param ${n} "${v}";'')
+            (optionalAttrs (config.fastcgiParams != {})
+              (defaultFastcgiParams // config.fastcgiParams)))}
         ${optionalString (config.index != null) "index ${config.index};"}
         ${optionalString (config.tryFiles != null) "try_files ${config.tryFiles};"}
         ${optionalString (config.root != null) "root ${config.root};"}

View file

@@ -101,6 +101,16 @@ with lib;
       '';
     };
+    fastcgiParams = mkOption {
+      type = types.attrsOf types.str;
+      default = {};
+      description = ''
+        FastCGI parameters to override.  Unlike in the Nginx
+        configuration file, overriding only some default parameters
+        won't unset the default values for other parameters.
+      '';
+    };
     extraConfig = mkOption {
       type = types.lines;
       default = "";

View file

@@ -8,8 +8,7 @@ let
   cfg = xcfg.desktopManager.plasma5;
   inherit (pkgs) kdeApplications kdeFrameworks plasma5;
-  libsForQt5 = pkgs.libsForQt514;
-  qt5 = pkgs.qt514;
+  inherit (pkgs) qt5 libsForQt5;
   inherit (pkgs) writeText;
   pulseaudio = config.hardware.pulseaudio;

View file

@@ -20,20 +20,13 @@ def copy_if_not_exists(source, dest):
     if not os.path.exists(dest):
         shutil.copyfile(source, dest)
-def generation_dir(profile, generation):
+def system_dir(profile, generation):
     if profile:
         return "/nix/var/nix/profiles/system-profiles/%s-%d-link" % (profile, generation)
     else:
         return "/nix/var/nix/profiles/system-%d-link" % (generation)
-def system_dir(profile, generation, specialisation):
-    d = generation_dir(profile, generation)
-    if specialisation:
-        return os.path.join(d, "specialisation", specialisation)
-    else:
-        return d
-BOOT_ENTRY = """title NixOS{profile}{specialisation}
+BOOT_ENTRY = """title NixOS{profile}
 version Generation {generation} {description}
 linux {kernel}
 initrd {initrd}
@@ -49,26 +42,24 @@ MEMTEST_BOOT_ENTRY = """title MemTest86
 efi /efi/memtest86/BOOTX64.efi
 """
-def generation_conf_filename(profile, generation, specialisation):
-    profile_part = f"-{profile}" if profile else ""
-    specialisation_part = f"-specialisation-{specialisation}" if specialisation else ""
-    return f"nixos{profile_part}{specialisation_part}-generation-{generation}.conf"
-def write_loader_conf(profile, generation, specialisation):
+def write_loader_conf(profile, generation):
     with open("@efiSysMountPoint@/loader/loader.conf.tmp", 'w') as f:
         if "@timeout@" != "":
            f.write("timeout @timeout@\n")
-        f.write("default %s\n" % generation_conf_filename(profile, generation, specialisation))
+        if profile:
+            f.write("default nixos-%s-generation-%d.conf\n" % (profile, generation))
+        else:
+            f.write("default nixos-generation-%d.conf\n" % (generation))
         if not @editor@:
            f.write("editor 0\n");
        f.write("console-mode @consoleMode@\n");
    os.rename("@efiSysMountPoint@/loader/loader.conf.tmp", "@efiSysMountPoint@/loader/loader.conf")
-def profile_path(profile, generation, specialisation, name):
-    return os.readlink("%s/%s" % (system_dir(profile, generation, specialisation), name))
+def profile_path(profile, generation, name):
+    return os.readlink("%s/%s" % (system_dir(profile, generation), name))
-def copy_from_profile(profile, generation, specialisation, name, dry_run=False):
-    store_file_path = profile_path(profile, generation, specialisation, name)
+def copy_from_profile(profile, generation, name, dry_run=False):
+    store_file_path = profile_path(profile, generation, name)
     suffix = os.path.basename(store_file_path)
     store_dir = os.path.basename(os.path.dirname(store_file_path))
     efi_file_path = "/efi/nixos/%s-%s.efi" % (store_dir, suffix)
@@ -96,17 +87,19 @@ def describe_generation(generation_dir):
     return description
-def write_entry(profile, generation, specialisation, machine_id):
-    kernel = copy_from_profile(profile, generation, specialisation, "kernel")
-    initrd = copy_from_profile(profile, generation, specialisation, "initrd")
+def write_entry(profile, generation, machine_id):
+    kernel = copy_from_profile(profile, generation, "kernel")
+    initrd = copy_from_profile(profile, generation, "initrd")
     try:
-        append_initrd_secrets = profile_path(profile, generation, specialisation, "append-initrd-secrets")
+        append_initrd_secrets = profile_path(profile, generation, "append-initrd-secrets")
         subprocess.check_call([append_initrd_secrets, "@efiSysMountPoint@%s" % (initrd)])
     except FileNotFoundError:
         pass
-    entry_file = "@efiSysMountPoint@/loader/entries/%s" % (
-        generation_conf_filename(profile, generation, specialisation))
-    generation_dir = os.readlink(system_dir(profile, generation, specialisation))
+    if profile:
+        entry_file = "@efiSysMountPoint@/loader/entries/nixos-%s-generation-%d.conf" % (profile, generation)
+    else:
+        entry_file = "@efiSysMountPoint@/loader/entries/nixos-generation-%d.conf" % (generation)
+    generation_dir = os.readlink(system_dir(profile, generation))
     tmp_path = "%s.tmp" % (entry_file)
     kernel_params = "systemConfig=%s init=%s/init " % (generation_dir, generation_dir)
@@ -114,7 +107,6 @@ def write_entry(profile, generation, specialisation, machine_id):
         kernel_params = kernel_params + params_file.read()
     with open(tmp_path, 'w') as f:
         f.write(BOOT_ENTRY.format(profile=" [" + profile + "]" if profile else "",
-                    specialisation=" (%s)" % specialisation if specialisation else "",
                     generation=generation,
                     kernel=kernel,
                     initrd=initrd,
@@ -143,14 +135,7 @@ def get_generations(profile=None):
     gen_lines.pop()
     configurationLimit = @configurationLimit@
-    return [ (profile, int(line.split()[0]), None) for line in gen_lines ][-configurationLimit:]
+    return [ (profile, int(line.split()[0])) for line in gen_lines ][-configurationLimit:]
-def get_specialisations(profile, generation, _):
-    specialisations_dir = os.path.join(
-            system_dir(profile, generation, None), "specialisation")
-    if not os.path.exists(specialisations_dir):
-        return []
-    return [(profile, generation, spec) for spec in os.listdir(specialisations_dir)]
 def remove_old_entries(gens):
     rex_profile = re.compile("^@efiSysMountPoint@/loader/entries/nixos-(.*)-generation-.*\.conf$")
@@ -238,8 +223,6 @@ def main():
     remove_old_entries(gens)
     for gen in gens:
         write_entry(*gen, machine_id)
-        for specialisation in get_specialisations(*gen):
-            write_entry(*specialisation, machine_id)
         if os.readlink(system_dir(*gen)) == args.default_config:
             write_loader_conf(*gen)

View file

@@ -7,7 +7,7 @@ let
     echo "attempting to fetch configuration from EC2 user data..."
     export HOME=/root
-    export PATH=${pkgs.lib.makeBinPath [ config.nix.package pkgs.systemd pkgs.gnugrep pkgs.git pkgs.gnutar pkgs.gzip pkgs.gnused config.system.build.nixos-rebuild]}:$PATH
+    export PATH=${pkgs.lib.makeBinPath [ config.nix.package pkgs.systemd pkgs.gnugrep pkgs.git pkgs.gnutar pkgs.gzip pkgs.gnused pkgs.xz config.system.build.nixos-rebuild]}:$PATH
     export NIX_PATH=nixpkgs=/nix/var/nix/profiles/per-user/root/channels/nixos:nixos-config=/etc/nixos/configuration.nix:/nix/var/nix/profiles/per-user/root/channels
     userData=/etc/ec2-metadata/user-data

View file

@@ -11,7 +11,7 @@ with lib;
   users.users.root.initialHashedPassword = mkOverride 150 "";
   # Some more help text.
-  services.mingetty.helpLine =
+  services.getty.helpLine =
     ''
       Log in as "root" with an empty password.

View file

@@ -56,10 +56,10 @@ let
             ip -6 route add $HOST_ADDRESS6 dev eth0
             ip -6 route add default via $HOST_ADDRESS6
           fi
-          ${concatStringsSep "\n" (mapAttrsToList renderExtraVeth cfg.extraVeths)}
         fi
+        ${concatStringsSep "\n" (mapAttrsToList renderExtraVeth cfg.extraVeths)}
         # Start the regular stage 1 script.
         exec "$1"
       ''
@@ -223,8 +223,8 @@ let
           ${ipcall cfg "ip route" "$LOCAL_ADDRESS" "localAddress"}
           ${ipcall cfg "ip -6 route" "$LOCAL_ADDRESS6" "localAddress6"}
         fi
-        ${concatStringsSep "\n" (mapAttrsToList renderExtraVeth cfg.extraVeths)}
       fi
+      ${concatStringsSep "\n" (mapAttrsToList renderExtraVeth cfg.extraVeths)}
     ''
   );

View file

@@ -176,10 +176,10 @@ let
       description = ''
         Define which other containers this one depends on. They will be added to both After and Requires for the unit.
-        Use the same name as the attribute under <literal>virtualisation.oci-containers</literal>.
+        Use the same name as the attribute under <literal>virtualisation.oci-containers.containers</literal>.
       '';
       example = literalExample ''
-        virtualisation.oci-containers = {
+        virtualisation.oci-containers.containers = {
           node1 = {};
           node2 = {
             dependsOn = [ "node1" ];

View file

@@ -158,6 +158,7 @@ in
   home-assistant = handleTest ./home-assistant.nix {};
   hostname = handleTest ./hostname.nix {};
   hound = handleTest ./hound.nix {};
+  hub = handleTest ./git/hub.nix {};
   hydra = handleTest ./hydra {};
   i3wm = handleTest ./i3wm.nix {};
   icingaweb2 = handleTest ./icingaweb2.nix {};

View file

@@ -247,5 +247,12 @@ import ./make-test-python.nix ({ pkgs, ... }: {
             ).strip()
             == "${if pkgs.system == "aarch64-linux" then "amd64" else "arm64"}"
         )
+    with subtest("buildLayeredImage doesn't dereference /nix/store symlink layers"):
+        docker.succeed(
+            "docker load --input='${examples.layeredStoreSymlink}'",
+            "docker run --rm ${examples.layeredStoreSymlink.imageName} bash -c 'test -L ${examples.layeredStoreSymlink.passthru.symlink}'",
+            "docker rmi ${examples.layeredStoreSymlink.imageName}",
+        )
   '';
 })

View file

@@ -4,8 +4,11 @@ import ./make-test-python.nix {
   machine = { pkgs, ... }: {
     imports = [ common/user-account.nix ];
     services.postfix.enable = true;
-    services.dovecot2.enable = true;
-    services.dovecot2.protocols = [ "imap" "pop3" ];
+    services.dovecot2 = {
+      enable = true;
+      protocols = [ "imap" "pop3" ];
+      modules = [ pkgs.dovecot_pigeonhole ];
+    };
     environment.systemPackages = let
       sendTestMail = pkgs.writeScriptBin "send-testmail" ''
         #!${pkgs.runtimeShell}

View file

@@ -0,0 +1,17 @@
import ../make-test-python.nix ({ pkgs, ...} : {
name = "hub";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ nequissimus ];
};
nodes.hub = { pkgs, ... }:
{
environment.systemPackages = [ pkgs.gitAndTools.hub ];
};
testScript =
''
assert "git version ${pkgs.git.version}\nhub version ${pkgs.gitAndTools.hub.version}\n" in hub.succeed("hub version")
assert "These GitHub commands are provided by hub" in hub.succeed("hub help")
'';
})

View file

@@ -17,6 +17,10 @@ let
   };
   extraNodeConfs = {
+    declarativePlugins = {
+      services.grafana.declarativePlugins = [ pkgs.grafanaPlugins.grafana-clock-panel ];
+    };
     postgresql = {
       services.grafana.database = {
         host = "127.0.0.1:5432";
@@ -52,7 +56,7 @@ let
     nameValuePair dbName (mkMerge [
       baseGrafanaConf
       (extraNodeConfs.${dbName} or {})
-    ])) [ "sqlite" "postgresql" "mysql" ]);
+    ])) [ "sqlite" "declarativePlugins" "postgresql" "mysql" ]);
 in {
   name = "grafana";
@@ -66,6 +70,14 @@ in {
   testScript = ''
     start_all()
+    with subtest("Declarative plugins installed"):
+        declarativePlugins.wait_for_unit("grafana.service")
+        declarativePlugins.wait_for_open_port(3000)
+        declarativePlugins.succeed(
+            "curl -sSfN -u testadmin:snakeoilpwd http://127.0.0.1:3000/api/plugins | grep -q grafana-clock-panel"
+        )
+        declarativePlugins.shutdown()
     with subtest("Successful API query as admin user with sqlite db"):
         sqlite.wait_for_unit("grafana.service")
         sqlite.wait_for_open_port(3000)

View file

@@ -50,7 +50,7 @@ import ./make-test-python.nix ({ pkgs, latestKernel ? false, ... }:
     with subtest("Virtual console logout"):
         machine.send_chars("exit\n")
         machine.wait_until_fails("pgrep -u alice bash")
-        machine.screenshot("mingetty")
+        machine.screenshot("getty")
     with subtest("Check whether ctrl-alt-delete works"):
         machine.send_key("ctrl-alt-delete")

View file

@@ -1,11 +1,19 @@
+{ system ? builtins.currentSystem,
+  config ? {},
+  pkgs ? import ../.. { inherit system config; }
+}:
+with import ../lib/testing-python.nix { inherit system pkgs; };
 let
+  lib = pkgs.lib;
   # Makes a test for a PostgreSQL package, given by name and looked up from `pkgs`.
   makePostgresqlWalReceiverTest = postgresqlPackage:
   {
     name = postgresqlPackage;
     value =
-      import ./make-test-python.nix ({ pkgs, lib, ... }: let
+      let
         pkg = pkgs."${postgresqlPackage}";
         postgresqlDataDir = "/var/lib/postgresql/${pkg.psqlSchema}";
         replicationUser = "wal_receiver_user";
@@ -19,7 +27,7 @@ let
         then pkgs.writeTextDir "recovery.signal" ""
         else pkgs.writeTextDir "recovery.conf" "restore_command = 'cp ${walBackupDir}/%f %p'";
-      in {
+      in makeTest {
         name = "postgresql-wal-receiver-${postgresqlPackage}";
         meta.maintainers = with lib.maintainers; [ pacien ];
@@ -104,7 +112,7 @@ let
           "test $(sudo -u postgres psql --pset='pager=off' --tuples-only --command='select count(distinct val) from dummy;') -eq 100"
         )
       '';
-      });
+      };
     };
   # Maps the generic function over all attributes of PostgreSQL packages

View file

@@ -96,6 +96,31 @@ let
     '';
   };
+  bird = {
+    exporterConfig = {
+      enable = true;
+    };
+    metricProvider = {
+      services.bird2.enable = true;
+      services.bird2.config = ''
+        protocol kernel MyObviousTestString {
+          ipv4 {
+            import all;
+            export none;
+          };
+        }
+        protocol device {
+        }
+      '';
+    };
+    exporterTest = ''
+      wait_for_unit("prometheus-bird-exporter.service")
+      wait_for_open_port(9324)
+      succeed("curl -sSf http://localhost:9324/metrics | grep -q 'MyObviousTestString'")
+    '';
+  };
   blackbox = {
     exporterConfig = {
       enable = true;
@@ -197,10 +222,11 @@ let
     exporterConfig = {
       enable = true;
       url = "http://localhost";
-      configFile = pkgs.writeText "json-exporter-conf.json" (builtins.toJSON [{
-        name = "json_test_metric";
-        path = "$.test";
-      }]);
+      configFile = pkgs.writeText "json-exporter-conf.json" (builtins.toJSON {
+        metrics = [
+          { name = "json_test_metric"; path = "$.test"; }
+        ];
+      });
     };
     metricProvider = {
       systemd.services.prometheus-json-exporter.after = [ "nginx.service" ];
@@ -216,7 +242,9 @@ let
       wait_for_open_port(80)
       wait_for_unit("prometheus-json-exporter.service")
       wait_for_open_port(7979)
-      succeed("curl -sSf localhost:7979/metrics | grep -q 'json_test_metric 1'")
+      succeed(
+          "curl -sSf 'localhost:7979/probe?target=http://localhost' | grep -q 'json_test_metric 1'"
+      )
     '';
   };
@@ -634,7 +662,7 @@ let
       wait_for_open_port(11334)
       wait_for_open_port(7980)
       wait_until_succeeds(
-          "curl -sSf localhost:7980/metrics | grep -q 'rspamd_scanned{host=\"rspamd\"} 0'"
+          "curl -sSf 'localhost:7980/probe?target=http://localhost:11334/stat' | grep -q 'rspamd_scanned{host=\"rspamd\"} 0'"
       )
     '';
   };

View file

@@ -2,6 +2,7 @@ let
   password1 = "foobar";
   password2 = "helloworld";
   password3 = "bazqux";
+  password4 = "asdf123";
 in import ./make-test-python.nix ({ pkgs, ... }: {
   name = "shadow";
   meta = with pkgs.stdenv.lib.maintainers; { maintainers = [ nequissimus ]; };
@@ -19,6 +20,10 @@ in import ./make-test-python.nix ({ pkgs, ... }: {
         password = password2;
         shell = pkgs.shadow;
       };
+      users.ash = {
+        password = password4;
+        shell = pkgs.bash;
+      };
     };
   };
@@ -41,6 +46,15 @@ in import ./make-test-python.nix ({ pkgs, ... }: {
         shadow.wait_for_file("/tmp/1")
         assert "emma" in shadow.succeed("cat /tmp/1")
+    with subtest("Switch user"):
+        shadow.send_chars("su - ash\n")
+        shadow.sleep(2)
+        shadow.send_chars("${password4}\n")
+        shadow.sleep(2)
+        shadow.send_chars("whoami > /tmp/3\n")
+        shadow.wait_for_file("/tmp/3")
+        assert "ash" in shadow.succeed("cat /tmp/3")
     with subtest("Change password"):
         shadow.send_key("alt-f3")
         shadow.wait_until_succeeds(f"[ $(fgconsole) = 3 ]")

View file

@@ -39,29 +39,6 @@ in
     '';
   };
-  # Check that specialisations create corresponding boot entries.
-  specialisation = makeTest {
-    name = "systemd-boot-specialisation";
-    meta.maintainers = with pkgs.stdenv.lib.maintainers; [ lukegb ];
-    machine = { pkgs, lib, ... }: {
-      imports = [ common ];
-      specialisation.something.configuration = {};
-    };
-    testScript = ''
-      machine.start()
-      machine.wait_for_unit("multi-user.target")
-      machine.succeed(
-          "test -e /boot/loader/entries/nixos-specialisation-something-generation-1.conf"
-      )
-      machine.succeed(
-          "grep -q 'title NixOS (something)' /boot/loader/entries/nixos-specialisation-something-generation-1.conf"
-      )
-    '';
-  };
   # Boot without having created an EFI entry--instead using default "/EFI/BOOT/BOOTX64.EFI"
   fallback = makeTest {
     name = "systemd-boot-fallback";

View file

@@ -17,7 +17,7 @@ rec {
     environment.systemPackages = with pkgs; [ netcat ];
     services.tor.enable = true;
     services.tor.client.enable = true;
-    services.tor.controlPort = 9051;
+    services.tor.settings.ControlPort = 9051;
   };
   testScript = ''

View file

@@ -1,5 +1,5 @@
 { stdenv, fetchFromGitHub, cairo, fftw, gtkmm2, lv2, lvtk, pkgconfig
-, wafHook }:
+, wafHook, python3 }:
 stdenv.mkDerivation rec {
   pname = "ams-lv2";
@@ -12,7 +12,7 @@ stdenv.mkDerivation rec {
     sha256 = "1lz2mvk4gqsyf92yxd3aaldx0d0qi28h4rnnvsaz4ls0ccqm80nk";
   };
-  nativeBuildInputs = [ pkgconfig wafHook ];
+  nativeBuildInputs = [ pkgconfig wafHook python3 ];
   buildInputs = [ cairo fftw gtkmm2 lv2 lvtk ];
   meta = with stdenv.lib; {

View file

@@ -1,8 +1,8 @@
 { stdenv, fetchzip, wxGTK30, pkgconfig, file, gettext,
   libvorbis, libmad, libjack2, lv2, lilv, serd, sord, sratom, suil, alsaLib, libsndfile, soxr, flac, lame,
   expat, libid3tag, ffmpeg_3, soundtouch /*, portaudio - given up fighting their portaudio.patch */
-, autoconf, automake, libtool
+, cmake
 }:
 with stdenv.lib;
@@ -15,16 +15,8 @@ stdenv.mkDerivation rec {
     sha256 = "1xk0piv72d2xd3p7igr916fhcbrm76fhjr418k1rlqdzzg1hfljn";
   };
-  preConfigure = /* we prefer system-wide libs */ ''
-    autoreconf -vi # use system libraries
-    # we will get a (possibly harmless) warning during configure without this
-    substituteInPlace configure \
-      --replace /usr/bin/file ${file}/bin/file
-  '';
-  configureFlags = [
-    "--with-libsamplerate"
+  cmakeFlags = [
+    "-DCMAKE_BUILD_TYPE=Release"
   ];
   # audacity only looks for lame and ffmpeg at runtime, so we need to link them in manually
@@ -43,15 +35,13 @@ stdenv.mkDerivation rec {
     "-lswscale"
   ];
-  nativeBuildInputs = [ pkgconfig autoconf automake libtool ];
+  nativeBuildInputs = [ pkgconfig cmake ];
   buildInputs = [
     file gettext wxGTK30 expat alsaLib
     libsndfile soxr libid3tag libjack2 lv2 lilv serd sord sratom suil wxGTK30.gtk
     ffmpeg_3 libmad lame libvorbis flac soundtouch
   ]; #ToDo: detach sbsms
-  enableParallelBuilding = true;
   dontDisableStatic = true;
   doCheck = false; # Test fails

View file

@@ -1,23 +1,77 @@
-{ fetchurl, bitwig-studio1, pulseaudio, libjack2, xorg }:
-
-bitwig-studio1.overrideAttrs (oldAttrs: rec {
-  name = "bitwig-studio-${version}";
-  version = "3.2.8";
-
-  src = fetchurl {
-    url = "https://downloads.bitwig.com/stable/${version}/bitwig-studio-${version}.deb";
-    sha256 = "18ldgmnv7bigb4mch888kjpf4abalpiwmlhwd7rjb9qf6p72fhpj";
-  };
-
-  buildInputs = oldAttrs.buildInputs ++ [ xorg.libXtst ];
-  runtimeDependencies = [ pulseaudio libjack2 ];
-
-  installPhase = ''
-    ${oldAttrs.installPhase}
-
-    # recover commercial jre
-    rm -f $out/libexec/lib/jre
-    cp -r opt/bitwig-studio/lib/jre $out/libexec/lib
-  '';
-})
+{ stdenv, fetchurl, alsaLib, cairo, dpkg, freetype
+, gdk-pixbuf, glib, gtk3, lib, xorg
+, libglvnd, libjack2, ffmpeg_3
+, libxkbcommon, xdg_utils, zlib, pulseaudio
+, wrapGAppsHook, makeWrapper }:
+
+stdenv.mkDerivation rec {
+  pname = "bitwig-studio";
+  version = "3.3.1";
+
+  src = fetchurl {
+    url = "https://downloads.bitwig.com/stable/${version}/${pname}-${version}.deb";
+    sha256 = "0f7xysk0cl48q7i28m25hasmrp30grgm3kah0s7xmkjgm33887pi";
+  };
+
+  nativeBuildInputs = [ dpkg makeWrapper wrapGAppsHook ];
+
+  unpackCmd = ''
+    mkdir -p root
+    dpkg-deb -x $curSrc root
+  '';
+
+  dontBuild = true;
+  dontWrapGApps = true; # we only want $gappsWrapperArgs here
+
+  buildInputs = with xorg; [
+    alsaLib cairo freetype gdk-pixbuf glib gtk3 libxcb xcbutil xcbutilwm zlib libXtst libxkbcommon pulseaudio libjack2 libX11 libglvnd libXcursor stdenv.cc.cc.lib
+  ];
+
+  binPath = lib.makeBinPath [
+    xdg_utils ffmpeg_3
+  ];
+
+  ldLibraryPath = lib.strings.makeLibraryPath buildInputs;
+
+  installPhase = ''
+    mkdir -p $out/bin
+    cp -r opt/bitwig-studio $out/libexec
+    ln -s $out/libexec/bitwig-studio $out/bin/bitwig-studio
+    cp -r usr/share $out/share
+    substitute usr/share/applications/bitwig-studio.desktop \
+      $out/share/applications/bitwig-studio.desktop \
+      --replace /usr/bin/bitwig-studio $out/bin/bitwig-studio
+  '';
+
+  postFixup = ''
+    # patchelf fails to set rpath on BitwigStudioEngine, so we use
+    # the LD_LIBRARY_PATH way
+    find $out -type f -executable \
+      -not -name '*.so.*' \
+      -not -name '*.so' \
+      -not -name '*.jar' \
+      -not -path '*/resources/*' | \
+    while IFS= read -r f ; do
+      patchelf --set-interpreter "${stdenv.cc.bintools.dynamicLinker}" $f
+      wrapProgram $f \
+        "''${gappsWrapperArgs[@]}" \
+        --prefix PATH : "${binPath}" \
+        --prefix LD_LIBRARY_PATH : "${ldLibraryPath}"
+    done
+  '';
+
+  meta = with stdenv.lib; {
+    description = "A digital audio workstation";
+    longDescription = ''
+      Bitwig Studio is a multi-platform music-creation system for
+      production, performance and DJing, with a focus on flexible
+      editing tools and a super-fast workflow.
+    '';
+    homepage = "https://www.bitwig.com/";
+    license = licenses.unfree;
+    platforms = [ "x86_64-linux" ];
+    maintainers = with maintainers; [ bfortz michalrus mrVanDalo ];
+  };
+}
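The `postFixup` hook above selects binaries with a `find`/`while read` pipeline before patching and wrapping them. A minimal sketch of just that filter, run on plain placeholder files (`patchelf` and `wrapProgram` are nixpkgs tools and are replaced here by an `echo`; all file names are made up for the demo):

```shell
# Recreate the kind of layout the find expression distinguishes between.
mkdir -p demo-root/resources
touch demo-root/engine demo-root/libfoo.so demo-root/app.jar demo-root/resources/helper
chmod +x demo-root/engine demo-root/libfoo.so demo-root/app.jar demo-root/resources/helper

# Same filter as the postFixup above: executables only, skipping shared
# objects, jars, and anything under a resources/ directory.
find demo-root -type f -executable \
  -not -name '*.so.*' \
  -not -name '*.so' \
  -not -name '*.jar' \
  -not -path '*/resources/*' | \
while IFS= read -r f ; do
  echo "would wrap: $f"   # the real hook runs patchelf + wrapProgram here
done
# prints: would wrap: demo-root/engine
```

Note that `-executable` is a GNU find test, which is fine here since the package is Linux-only.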

View file

@@ -1,22 +0,0 @@
-{ stdenv, fetchurl, ffmpeg, sox }:
-
-stdenv.mkDerivation rec {
-  pname = "bs1770gain";
-  version = "0.5.2";
-
-  src = fetchurl {
-    url = "mirror://sourceforge/bs1770gain/${pname}-${version}.tar.gz";
-    sha256 = "1p6yz5q7czyf9ard65sp4kawdlkg40cfscr3b24znymmhs3p7rbk";
-  };
-
-  buildInputs = [ ffmpeg sox ];
-
-  NIX_CFLAGS_COMPILE = "-Wno-error";
-
-  meta = with stdenv.lib; {
-    description = "A audio/video loudness scanner implementing ITU-R BS.1770";
-    license = licenses.gpl2Plus;
-    homepage = "http://bs1770gain.sourceforge.net/";
-    platforms = platforms.all;
-  };
-}

View file

@@ -1,12 +1,12 @@
 { stdenv, fetchurl, cmake }:

 stdenv.mkDerivation rec {
-  version = "0.6.1";
+  version = "0.6.3";
   pname = "game-music-emu";

   src = fetchurl {
-    url = "https://bitbucket.org/mpyne/game-music-emu/downloads/${pname}-${version}.tar.bz2";
-    sha256 = "08fk7zddpn7v93d0fa7fcypx7hvgwx9b5psj9l6m8b87k2hbw4fw";
+    url = "https://bitbucket.org/mpyne/game-music-emu/downloads/${pname}-${version}.tar.xz";
+    sha256 = "07857vdkak306d9s5g6fhmjyxk7vijzjhkmqb15s7ihfxx9lx8xb";
   };

   buildInputs = [ cmake ];
@@ -16,6 +16,6 @@ stdenv.mkDerivation rec {
     description = "A collection of video game music file emulators";
     license = licenses.lgpl21Plus;
     platforms = platforms.all;
-    maintainers = [ ];
+    maintainers = with maintainers; [ luc65r ];
   };
 }

View file

@@ -3,13 +3,13 @@
 stdenv.mkDerivation rec {
   pname = "geonkick";
-  version = "2.5.1";
+  version = "2.6.1";

   src = fetchFromGitLab {
     owner = "iurie-sw";
     repo = pname;
     rev = "v${version}";
-    sha256 = "14svwrxqw15j6wjy3x8s28yyrafa31bm7d1ns5h6gvpndccwc1kw";
+    sha256 = "1l647j11pb9lkknnh4q99mmfcvr644b02lfcdjh98z60vqm1s54c";
   };

   nativeBuildInputs = [ cmake pkg-config ];

View file

@@ -1,4 +1,4 @@
-{ stdenv, fetchurl, fftwSinglePrec, lv2, pkgconfig, wafHook }:
+{ stdenv, fetchurl, fftwSinglePrec, lv2, pkgconfig, wafHook, python3 }:

 stdenv.mkDerivation rec {
   pname = "mda-lv2";
@@ -9,7 +9,7 @@ stdenv.mkDerivation rec {
     sha256 = "1a3cv6w5xby9yn11j695rbh3c4ih7rxfxmkca9s1324ljphh06m8";
   };

-  nativeBuildInputs = [ pkgconfig wafHook ];
+  nativeBuildInputs = [ pkgconfig wafHook python3 ];
   buildInputs = [ fftwSinglePrec lv2 ];

   meta = with stdenv.lib; {

View file

@@ -2,14 +2,14 @@
 , SDL2, alsaLib, libjack2, lhasa, perl, rtmidi, zlib, zziplib }:

 stdenv.mkDerivation rec {
-  version = "1.02.00";
+  version = "1.03.00";
   pname = "milkytracker";

   src = fetchFromGitHub {
     owner = "milkytracker";
     repo = "MilkyTracker";
     rev = "v${version}";
-    sha256 = "05a6d7l98k9i82dwrgi855dnccm3f2lkb144gi244vhk1156n0ca";
+    sha256 = "025fj34gq2kmkpwcswcyx7wdxb89vm944dh685zi4bxx0hz16vvk";
   };

   nativeBuildInputs = [ cmake pkgconfig makeWrapper ];

View file

@@ -14,9 +14,9 @@ let
     mopidy-gmusic = callPackage ./gmusic.nix { };
+    mopidy-iris = callPackage ./iris.nix { };
     mopidy-local = callPackage ./local.nix { };
-    mopidy-spotify = callPackage ./spotify.nix { };
     mopidy-moped = callPackage ./moped.nix { };
@@ -26,20 +26,21 @@ let
     mopidy-mpris = callPackage ./mpris.nix { };
+    mopidy-musicbox-webclient = callPackage ./musicbox-webclient.nix { };
+    mopidy-scrobbler = callPackage ./scrobbler.nix { };
     mopidy-somafm = callPackage ./somafm.nix { };
-    mopidy-spotify-tunigo = callPackage ./spotify-tunigo.nix { };
-    mopidy-youtube = callPackage ./youtube.nix { };
     mopidy-soundcloud = callPackage ./soundcloud.nix { };
-    mopidy-musicbox-webclient = callPackage ./musicbox-webclient.nix { };
-    mopidy-iris = callPackage ./iris.nix { };
+    mopidy-spotify = callPackage ./spotify.nix { };
+    mopidy-spotify-tunigo = callPackage ./spotify-tunigo.nix { };
     mopidy-tunein = callPackage ./tunein.nix { };
+    mopidy-youtube = callPackage ./youtube.nix { };
   };

 in self

View file

@@ -38,10 +38,6 @@ pythonPackages.buildPythonApplication rec {
   # There are no tests
   doCheck = false;

-  preFixup = ''
-    gappsWrapperArgs+=(--prefix GST_PLUGIN_SYSTEM_PATH : "$GST_PLUGIN_SYSTEM_PATH")
-  '';
-
   meta = with stdenv.lib; {
     homepage = "https://www.mopidy.com/";
     description = ''

View file

@@ -0,0 +1,24 @@
+{ stdenv, python3Packages, mopidy }:
+
+python3Packages.buildPythonApplication rec {
+  pname = "Mopidy-Scrobbler";
+  version = "2.0.1";
+
+  src = python3Packages.fetchPypi {
+    inherit pname version;
+    sha256 = "11vxgax4xgkggnq4fr1rh2rcvzspkkimck5p3h4phdj3qpnj0680";
+  };
+
+  propagatedBuildInputs = with python3Packages; [ mopidy pylast ];
+
+  # no tests implemented
+  doCheck = false;
+  pythonImportsCheck = [ "mopidy_scrobbler" ];
+
+  meta = with stdenv.lib; {
+    homepage = "https://github.com/mopidy/mopidy-scrobbler";
+    description = "Mopidy extension for scrobbling played tracks to Last.fm.";
+    license = licenses.asl20;
+    maintainers = with maintainers; [ jakeisnt ];
+  };
+}

View file

@@ -7,7 +7,7 @@ rustPlatform.buildRustPackage rec {
   src = fetchFromGitHub {
     owner = "betta-cyber";
     repo = "netease-music-tui";
-    rev = "${version}";
+    rev = version;
     sha256 = "0m5b3q493d32kxznm4apn56216l07b1c49km236i03mpfvdw7m1f";
   };

View file

@@ -0,0 +1,27 @@
+{ stdenv, fetchFromGitHub, meson, pkg-config, ninja, liblo, libjack2, fltk }:
+
+stdenv.mkDerivation rec {
+  pname = "new-session-manager";
+  version = "1.4.0";
+
+  src = fetchFromGitHub {
+    owner = "linuxaudio";
+    repo = "new-session-manager";
+    rev = "v${version}";
+    sha256 = "PqOv4tx3NLxL2+GWIUVgL72EQYMyDPIMrAkyby3TZ+0=";
+  };
+
+  nativeBuildInputs = [ meson pkg-config ninja ];
+  buildInputs = [ liblo libjack2 fltk ];
+
+  hardeningDisable = [ "format" ];
+
+  meta = with stdenv.lib; {
+    homepage = "https://linuxaudio.github.io/new-session-manager/";
+    description = "A session manager designed for audio applications.";
+    maintainers = [ maintainers._6AA4FD ];
+    license = licenses.gpl3Plus;
+    platforms = [ "x86_64-linux" ];
+  };
+}

View file

@@ -1,24 +1,49 @@
-{ stdenv, lib, cmake, pkgconfig, libogg, fetchFromGitHub, libiconv }:
+{ stdenv, fetchFromGitHub, fetchpatch, cmake, pkg-config, libiconv, libogg
+, ffmpeg, glibcLocales, perl, perlPackages }:

 stdenv.mkDerivation rec {
   pname = "opustags";
-  version = "1.4.0";
+  version = "1.5.1";

   src = fetchFromGitHub {
     owner = "fmang";
     repo = "opustags";
     rev = version;
-    sha256 = "1y0czl72paawy342ff9ickaamkih43k59yfcdw7bnddypyfa7nbg";
+    sha256 = "1dicv4s395b9gb4jpr0rnxdq9azr45pid62q3x08lb7cvyq3yxbh";
   };

+  patches = [
+    # Fix building on darwin
+    (fetchpatch {
+      url = "https://github.com/fmang/opustags/commit/64fc6f8f6d20e034892e89abff0236c85cae98dc.patch";
+      sha256 = "1djifzqhf1w51gbpqbndsh3gnl9iizp6hppxx8x2a92i9ns22zpg";
+    })
+    (fetchpatch {
+      url = "https://github.com/fmang/opustags/commit/f98208c1a1d10c15f98b127bbfdf88a7b15b08dc.patch";
+      sha256 = "1h3v0r336fca0y8zq1vl2wr8gaqs3vvrrckx7pvji4k1jpiqvp38";
+    })
+  ];
+
   buildInputs = [ libogg ];
-  nativeBuildInputs = [ cmake pkgconfig ] ++ lib.optional stdenv.isDarwin libiconv;
+  nativeBuildInputs = [ cmake pkg-config ] ++ stdenv.lib.optional stdenv.isDarwin libiconv;

-  meta = with lib; {
+  doCheck = true;
+  checkInputs = [ ffmpeg glibcLocales perl ] ++ (with perlPackages; [ ListMoreUtils ]);
+  checkPhase = ''
+    export LANG="en_US.UTF-8"
+    export LC_ALL="en_US.UTF-8"
+    make check
+  '';
+
+  meta = with stdenv.lib; {
     homepage = "https://github.com/fmang/opustags";
     description = "Ogg Opus tags editor";
     platforms = platforms.all;
-    maintainers = [ maintainers.kmein ];
+    broken = stdenv.isDarwin;
+    maintainers = with maintainers; [ kmein SuperSandro2000 ];
     license = licenses.bsd3;
   };
 }

View file

@@ -0,0 +1,27 @@
+{ stdenv, fetchFromGitHub, lv2 }:
+
+stdenv.mkDerivation rec {
+  version = "v1.1.3";
+  pname = "plujain-ramp";
+
+  src = fetchFromGitHub {
+    owner = "Houston4444";
+    repo = "plujain-ramp";
+    rev = "1bc1fed211e140c7330d6035122234afe78e5257";
+    sha256 = "1k7qpr8c15d623c4zqxwdklp98amildh03cqsnqq5ia9ba8z3016";
+  };
+
+  buildInputs = [
+    lv2
+  ];
+
+  installFlags = [ "INSTALL_PATH=$(out)/lib/lv2" ];
+
+  meta = with stdenv.lib; {
+    description = "A mono rhythmic tremolo LV2 Audio Plugin";
+    homepage = "https://github.com/Houston4444/plujain-ramp";
+    license = licenses.gpl2Only;
+    platforms = platforms.linux;
+    maintainers = [ maintainers.hirenashah ];
+  };
+}
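The `installFlags = [ "INSTALL_PATH=$(out)/lib/lv2" ]` idiom above works because the flag reaches `make` with a literal `$(out)`: shell variable expansion does not re-run command substitution on a variable's contents, and make imports the builder's exported `out` environment variable as a make variable. A sketch of that mechanism with a throwaway Makefile (file and directory names are made up for the demo):

```shell
# A minimal Makefile with an overridable install location, standing in for
# the plugin's real Makefile.
printf 'install:\n\tmkdir -p $(INSTALL_PATH)\n\ttouch $(INSTALL_PATH)/ramp.so\n' > Makefile.demo

# Single quotes keep $(out) literal, just as stdenv's flag handling does;
# make then expands $(out) from the environment variable `out`.
installFlags='INSTALL_PATH=$(out)/lib/lv2'
out="$PWD/demo-out" make -f Makefile.demo install $installFlags

ls demo-out/lib/lv2/ramp.so
```

Had the flag been written with double quotes in an interactive shell, `$(out)` would instead be command-substituted by the shell before make ever saw it.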

View file

@@ -8,13 +8,13 @@
 stdenv.mkDerivation rec {
   pname = "pt2-clone";
-  version = "1.27";
+  version = "1.28";

   src = fetchFromGitHub {
     owner = "8bitbubsy";
     repo = "pt2-clone";
     rev = "v${version}";
-    sha256 = "1hg36pfzgdbhd5bkzi3cpn6v39q8xis2jk7w6qm615r587393pwd";
+    sha256 = "1c2x43f46l7556kl9y9qign0g6ywdkh7ywkzv6c9y63n68ph20x2";
   };

   nativeBuildInputs = [ cmake ];

View file

@@ -1,26 +1,26 @@
-{ stdenv, fetchFromGitHub,
-  automake, pkgconfig, lv2, fftw, cmake, xorg, libjack2, libsamplerate, libsndfile
-}:
+{ stdenv, fetchFromGitHub, pkg-config, lv2, fftw, cmake, libXpm
+, libXft, libjack2, libsamplerate, libsndfile }:

 stdenv.mkDerivation rec {
-  repo = "rkrlv2";
-  name = "${repo}-b2.0";
+  pname = "rkrlv2";
+  version = "beta_3";

   src = fetchFromGitHub {
     owner = "ssj71";
-    inherit repo;
-    rev = "beta_2";
-    sha256 = "128jcilbrd1l65c01w2bazsb21x78mng0jjkhi3x9crf1n9qbh2m";
+    repo = pname;
+    rev = version;
+    sha256 = "WjpPNUEYw4aGrh57J+7kkxKFXgCJWNaWAmueFbNUJJo=";
   };

-  nativeBuildInputs = [ pkgconfig ];
-  buildInputs = with xorg; [ automake lv2 fftw cmake libXpm libjack2 libsamplerate libsndfile libXft ];
+  nativeBuildInputs = [ cmake pkg-config ];
+  buildInputs = [ libXft libXpm lv2 fftw libjack2 libsamplerate libsndfile ];

-  meta = {
+  meta = with stdenv.lib; {
     description = "Rakarrak effects ported to LV2";
     homepage = "https://github.com/ssj71/rkrlv2";
-    license = stdenv.lib.licenses.gpl3;
-    maintainers = [ stdenv.lib.maintainers.joelmo ];
-    platforms = stdenv.lib.platforms.linux;
+    license = licenses.gpl2Only;
+    maintainers = [ maintainers.joelmo ];
+    platforms = platforms.unix;
+    broken = stdenv.isAarch64; # g++: error: unrecognized command line option '-mfpmath=sse'
   };
 }

View file

@@ -1,4 +1,4 @@
-{ stdenv, makeWrapper, fetchFromBitbucket, fetchFromGitHub, pkgconfig
+{ stdenv, makeWrapper, fetchzip, fetchFromGitHub, pkgconfig
 , alsaLib, curl, glew, glfw, gtk2-x11, jansson, libjack2, libXext, libXi
 , libzip, rtaudio, rtmidi, speex, libsamplerate }:
@@ -7,10 +7,8 @@ let
   # Others are downloaded with `make deps`. Due to previous issues with the
   # `glfw` submodule (see above) and because we can not access the network when
   # building in a sandbox, we fetch the dependency source manually.
-  pfft-source = fetchFromBitbucket {
-    owner = "jpommier";
-    repo = "pffft";
-    rev = "74d7261be17cf659d5930d4830609406bd7553e3";
+  pfft-source = fetchzip {
+    url = "https://vcvrack.com/downloads/dep/pffft.zip";
     sha256 = "084csgqa6f1a270bhybjayrh3mpyi2jimc87qkdgsqcp8ycsx1l1";
   };

   nanovg-source = fetchFromGitHub {

View file

@@ -28,13 +28,13 @@ in
 stdenv.mkDerivation rec {
   pname = "monero-gui";
-  version = "0.17.1.7";
+  version = "0.17.1.8";

   src = fetchFromGitHub {
     owner = "monero-project";
     repo = "monero-gui";
     rev = "v${version}";
-    sha256 = "1dd2ddkxh9ynxnscysl46hj4dm063h1v13fnyah69am26qzzbby4";
+    sha256 = "13cjrfdkr7c2ff8j2rg8hvhlc00af38vcs67wlx2109i2baq4pp3";
   };

   nativeBuildInputs = [
View file

@@ -17,13 +17,13 @@ assert trezorSupport -> all (x: x!=null) [ libusb1 protobuf python3 ];
 stdenv.mkDerivation rec {
   pname = "monero";
-  version = "0.17.1.7";
+  version = "0.17.1.8";

   src = fetchFromGitHub {
     owner = "monero-project";
     repo = "monero";
     rev = "v${version}";
-    sha256 = "1fdw4i4rw87yz3hz4yc1gdw0gr2mmf9038xaw2l4rrk5y50phjp4";
+    sha256 = "10blazbk1602slx3wrmw4jfgkdry55iclrhm5drdficc5v3h735g";
     fetchSubmodules = true;
   };

Some files were not shown because too many files have changed in this diff.