Project import generated by Copybara.

GitOrigin-RevId: a52e974cff8fb80c427e0d55c01b3b8c770ccec4
This commit is contained in:
Default email 2020-11-06 01:33:48 +01:00
parent 07b76f5cf9
commit eefb417d14
563 changed files with 16580 additions and 11943 deletions

View file

@ -176,6 +176,10 @@
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
# Neovim
/pkgs/applications/editors/neovim @jonringer
/pkgs/applications/editors/neovim @teto
# VimPlugins
/pkgs/misc/vim-plugins @jonringer @softinio
@ -202,6 +206,12 @@
/nixos/tests/cri-o.nix @NixOS/podman @zowoq
/nixos/tests/podman.nix @NixOS/podman @zowoq
# Docker tools
/pkgs/build-support/docker @roberth
/nixos/tests/docker-tools-overlay.nix @roberth
/nixos/tests/docker-tools.nix @roberth
/doc/builders/images/dockertools.xml @roberth
# Blockchains
/pkgs/applications/blockchains @mmahut

View file

@ -8,7 +8,7 @@
</p>
[Nixpkgs](https://github.com/nixos/nixpkgs) is a collection of over
40,000 software packages that can be installed with the
60,000 software packages that can be installed with the
[Nix](https://nixos.org/nix/) package manager. It also implements
[NixOS](https://nixos.org/nixos/), a purely-functional Linux distribution.

View file

@ -19,6 +19,7 @@
<xi:include href="ios.section.xml" />
<xi:include href="java.xml" />
<xi:include href="lua.section.xml" />
<xi:include href="maven.section.xml" />
<xi:include href="node.section.xml" />
<xi:include href="ocaml.xml" />
<xi:include href="perl.xml" />

View file

@ -32,7 +32,7 @@ nativeBuildInputs = [ jdk ];
</para>
<para>
If your Java package provides a program, you need to generate a wrapper script to run it using the OpenJRE. You can use <literal>makeWrapper</literal> for this:
If your Java package provides a program, you need to generate a wrapper script to run it using a JRE. You can use <literal>makeWrapper</literal> for this:
<programlisting>
nativeBuildInputs = [ makeWrapper ];
@ -43,7 +43,21 @@ installPhase =
--add-flags "-cp $out/share/java/foo.jar org.foo.Main"
'';
</programlisting>
Note the use of <literal>jre</literal>, which is the part of the OpenJDK package that contains the Java Runtime Environment. By using <literal>${jre}/bin/java</literal> instead of <literal>${jdk}/bin/java</literal>, you prevent your package from depending on the JDK at runtime.
Since the introduction of the Java Platform Module System in Java 9, Java distributions typically no longer ship with a general-purpose JRE: instead, they allow generating a JRE with only the modules required for your application(s). Because we can't predict what modules will be needed on a general-purpose system, the default <package>jre</package> package is the full JDK. When building a minimal system/image, you can override the <literal>modules</literal> parameter on <literal>jre_minimal</literal> to build a JRE with only the modules relevant for you:
<programlisting>
let
my_jre = pkgs.jre_minimal.override {
modules = [
# The modules used by 'something' and 'other' combined:
"java.base"
"java.logging"
];
};
something = (pkgs.something.override { jre = my_jre; });
other = (pkgs.other.override { jre = my_jre; });
in
...
</programlisting>
</para>
<para>

View file

@ -0,0 +1,354 @@
---
title: Maven
author: Farid Zakaria
date: 2020-10-15
---
# Maven
Maven is a well-known build tool for the Java ecosystem; however, it presents some challenges when integrating into the Nix build system.
The following provides a list of common patterns for packaging a Maven project (or any JVM language that can export to Maven) as a Nix package.
For the purposes of this example, let's consider a very basic Maven project with the following `pom.xml`, which has a single dependency on [emoji-java](https://github.com/vdurmont/emoji-java).
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>io.github.fzakaria</groupId>
<artifactId>maven-demo</artifactId>
<version>1.0</version>
<packaging>jar</packaging>
<name>NixOS Maven Demo</name>
<dependencies>
<dependency>
<groupId>com.vdurmont</groupId>
<artifactId>emoji-java</artifactId>
<version>5.1.1</version>
</dependency>
</dependencies>
</project>
```
Our main class file will be very simple:
```java
import com.vdurmont.emoji.EmojiParser;
public class Main {
public static void main(String[] args) {
String str = "NixOS :grinning: is super cool :smiley:!";
String result = EmojiParser.parseToUnicode(str);
System.out.println(result);
}
}
```
You can find this demo project at https://github.com/fzakaria/nixos-maven-example
## Solving for dependencies
### buildMaven with NixOS/mvn2nix-maven-plugin
> ⚠️ Although `buildMaven` is the "blessed" way within nixpkgs, as of 2020, it hasn't seen much activity in quite a while.
`buildMaven` is an alternative method that follows patterns similar to those of other language ecosystems by generating a lock file. It relies on the Maven plugin [mvn2nix-maven-plugin](https://github.com/NixOS/mvn2nix-maven-plugin).
First, you generate a `project-info.json` file using the Maven plugin.
> This should be executed in the project's source repository, or pointed at the `pom.xml` to use.
```bash
# run this step within the project's source repository
mvn org.nixos.mvn2nix:mvn2nix-maven-plugin:mvn2nix
cat project-info.json | jq | head
{
"project": {
"artifactId": "maven-demo",
"groupId": "org.nixos",
"version": "1.0",
"classifier": "",
"extension": "jar",
"dependencies": [
{
"artifactId": "maven-resources-plugin",
```
This file is then given to the `buildMaven` function, and it returns two attributes.
**`repo`**:
A Maven repository that is a symlink farm of all the dependencies found in the `project-info.json`.
**`build`**:
A simple derivation that runs through `mvn compile` & `mvn package` to build the JAR. You may use this as inspiration for more complicated derivations; a small sketch follows the repository example below.
Here is an [example](https://github.com/fzakaria/nixos-maven-example/blob/main/build-maven-repository.nix) of building the Maven repository:
```nix
{ pkgs ? import <nixpkgs> { } }:
with pkgs;
(buildMaven ./project-info.json).repo
```
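The `build` attribute can be used in the same way; a minimal sketch, assuming the same `project-info.json` as above:
```nix
{ pkgs ? import <nixpkgs> { } }:
with pkgs;
# `build` is a simple derivation that runs `mvn compile` & `mvn package`
(buildMaven ./project-info.json).build
```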
The benefit over the _double invocation_ approach, as we will see below, is that the _/nix/store_ entry is a _linkFarm_ of every package, so changes to your dependency set don't require downloading everything from scratch.
```bash
tree $(nix-build --no-out-link build-maven-repository.nix) | head
/nix/store/g87va52nkc8jzbmi1aqdcf2f109r4dvn-maven-repository
├── antlr
│   └── antlr
│   └── 2.7.2
│   ├── antlr-2.7.2.jar -> /nix/store/d027c8f2cnmj5yrynpbq2s6wmc9cb559-antlr-2.7.2.jar
│   └── antlr-2.7.2.pom -> /nix/store/mv42fc5gizl8h5g5vpywz1nfiynmzgp2-antlr-2.7.2.pom
├── avalon-framework
│   └── avalon-framework
│   └── 4.1.3
│   ├── avalon-framework-4.1.3.jar -> /nix/store/iv5fp3955w3nq28ff9xfz86wvxbiw6n9-avalon-framework-4.1.3.jar
```
### Double Invocation
> ⚠️ This pattern is the simplest but may cause unnecessary rebuilds due to the output hash changing.
The double invocation is a _simple_ way to get around the problem that `nix-build` may be sandboxed and have no Internet connectivity.
It treats the entire Maven repository as a single source to be downloaded, relying on Maven's dependency resolution to satisfy the output hash. This is similar to fetchers like `fetchgit`, except it has to run a Maven build to determine what to download.
The first step will be to build the Maven project as a fixed-output derivation in order to collect the Maven repository -- below is an [example](https://github.com/fzakaria/nixos-maven-example/blob/main/double-invocation-repository.nix).
> Traditionally the Maven repository is at `~/.m2/repository`. We will override this to be the `$out` directory.
```nix
{ stdenv, maven }:
stdenv.mkDerivation {
name = "maven-repository";
buildInputs = [ maven ];
src = ./.; # or fetchFromGitHub, cleanSourceWith, etc
buildPhase = ''
mvn package -Dmaven.repo.local=$out
'';
# keep only *.{pom,jar,sha1,nbm} and delete all ephemeral files with lastModified timestamps inside
installPhase = ''
find $out -type f \
\( -name \*.lastUpdated -or \
-name resolver-status.properties -or \
-name _remote.repositories \) \
-delete
'';
# don't do any fixup
dontFixup = true;
outputHashAlgo = "sha256";
outputHashMode = "recursive";
# replace this with the correct SHA256
outputHash = stdenv.lib.fakeSha256;
}
```
The build will fail and tell you the expected `outputHash` to use. Once you've set the hash, the build will produce a `/nix/store` entry whose contents are the full Maven repository.
> Some additional files are deleted that could otherwise cause the output hash to change on subsequent runs.
```bash
tree $(nix-build --no-out-link double-invocation-repository.nix) | head
/nix/store/8kicxzp98j68xyi9gl6jda67hp3c54fq-maven-repository
├── backport-util-concurrent
│   └── backport-util-concurrent
│   └── 3.1
│   ├── backport-util-concurrent-3.1.pom
│   └── backport-util-concurrent-3.1.pom.sha1
├── classworlds
│   └── classworlds
│   ├── 1.1
│   │   ├── classworlds-1.1.jar
```
If your package uses _SNAPSHOT_ dependencies or _version ranges_, there is a strong likelihood that your output hash will change over time, since the resolved dependencies may change. Hence this method is less recommended than using `buildMaven`.
## Building a JAR
Regardless of which strategy is chosen above, the step to build the derivation is the same.
```nix
{ stdenv, lib, maven, callPackage }:
# pick a repository derivation, here we will use buildMaven
let repository = callPackage ./build-maven-repository.nix { };
in stdenv.mkDerivation rec {
pname = "maven-demo";
version = "1.0";
src = builtins.fetchTarball "https://github.com/fzakaria/nixos-maven-example/archive/main.tar.gz";
buildInputs = [ maven ];
buildPhase = ''
echo "Using repository ${repository}"
mvn --offline -Dmaven.repo.local=${repository} package;
'';
installPhase = ''
install -Dm644 target/${pname}-${version}.jar $out/share/java/${pname}-${version}.jar
'';
}
```
> We place the library in `$out/share/java` since the JDK package has a _stdenv setup hook_ that adds any JARs in the `share/java` directories of the build inputs to the CLASSPATH environment variable.
```bash
tree $(nix-build --no-out-link build-jar.nix)
/nix/store/7jw3xdfagkc2vw8wrsdv68qpsnrxgvky-maven-demo-1.0
└── share
└── java
└── maven-demo-1.0.jar
2 directories, 1 file
```
## Runnable JAR
The previous example builds a JAR file, but that's not a file one can run directly.
You need to run it with `java -jar $out/share/java/output.jar` and make sure to provide the required dependencies on the classpath (a rough sketch of such a manual invocation follows below).
The following explains how to use `makeWrapper` in order to make the derivation produce an executable that will run the JAR file you created.
We will use the same repository we built above (either _double invocation_ or _buildMaven_) to set up a CLASSPATH for our JAR.
The following two methods are more suited to Nix than building an [UberJar](https://imagej.net/Uber-JAR), which may be the more traditional approach.
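For illustration only, a manual invocation (outside of Nix) might look like the following sketch; the repository path and JAR name are hypothetical and taken from the examples above:
```bash
# hypothetical manual run: every dependency JAR from the repository must be on the classpath
java -cp "$(find ./repository -name '*.jar' | tr '\n' ':')./maven-demo-1.0.jar" Main
```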
### CLASSPATH
> This is ideal if you are providing a derivation for _nixpkgs_ and don't want to patch the project's `pom.xml`.
We will read the Maven repository and flatten it to a single list. This list will then be concatenated with the _CLASSPATH_ separator to create the full classpath.
We make sure to provide this classpath to the `makeWrapper`.
```nix
{ stdenv, lib, maven, callPackage, makeWrapper, jre }:
let
repository = callPackage ./build-maven-repository.nix { };
in stdenv.mkDerivation rec {
pname = "maven-demo";
version = "1.0";
src = builtins.fetchTarball
"https://github.com/fzakaria/nixos-maven-example/archive/main.tar.gz";
buildInputs = [ maven makeWrapper ];
buildPhase = ''
echo "Using repository ${repository}"
mvn --offline -Dmaven.repo.local=${repository} package;
'';
installPhase = ''
mkdir -p $out/bin
classpath=$(find ${repository} -name "*.jar" -printf ':%h/%f');
install -Dm644 target/${pname}-${version}.jar $out/share/java/${pname}-${version}.jar
# create a wrapper that will automatically set the classpath
# this should be the paths from the dependency derivation
makeWrapper ${jre}/bin/java $out/bin/${pname} \
--add-flags "-classpath $out/share/java/${pname}-${version}.jar:''${classpath#:}" \
--add-flags "Main"
'';
}
```
### MANIFEST file via Maven Plugin
> This is ideal if you are the project owner and want to change your `pom.xml` to set the CLASSPATH within it.
Augment the `pom.xml` to create a JAR with the following manifest:
```xml
<build>
<plugins>
<plugin>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<classpathPrefix>../../repository/</classpathPrefix>
<classpathLayoutType>repository</classpathLayoutType>
<mainClass>Main</mainClass>
</manifest>
<manifestEntries>
<Class-Path>.</Class-Path>
</manifestEntries>
</archive>
</configuration>
</plugin>
</plugins>
</build>
```
The above plugin instructs the JAR to look for the necessary dependencies in the relative `../../repository/` folder. The layout of that folder is also in the _maven repository_ style.
```bash
unzip -q -c $(nix-build --no-out-link runnable-jar.nix)/share/java/maven-demo-1.0.jar META-INF/MANIFEST.MF
Manifest-Version: 1.0
Archiver-Version: Plexus Archiver
Built-By: nixbld
Class-Path: . ../../repository/com/vdurmont/emoji-java/5.1.1/emoji-jav
a-5.1.1.jar ../../repository/org/json/json/20170516/json-20170516.jar
Created-By: Apache Maven 3.6.3
Build-Jdk: 1.8.0_265
Main-Class: Main
```
We will modify the derivation above to add a symlink to our repository so that it's accessible to our JAR during the `installPhase`.
```nix
{ stdenv, lib, maven, callPackage, makeWrapper, jre }:
# pick a repository derivation, here we will use buildMaven
let repository = callPackage ./build-maven-repository.nix { };
in stdenv.mkDerivation rec {
pname = "maven-demo";
version = "1.0";
src = builtins.fetchTarball
"https://github.com/fzakaria/nixos-maven-example/archive/main.tar.gz";
buildInputs = [ maven makeWrapper ];
buildPhase = ''
echo "Using repository ${repository}"
mvn --offline -Dmaven.repo.local=${repository} package;
'';
installPhase = ''
mkdir -p $out/bin
# create a symbolic link for the repository directory
ln -s ${repository} $out/repository
install -Dm644 target/${pname}-${version}.jar $out/share/java/${pname}-${version}.jar
# create a wrapper that will automatically set the classpath
# this should be the paths from the dependency derivation
makeWrapper ${jre}/bin/java $out/bin/${pname} \
--add-flags "-jar $out/share/java/${pname}-${version}.jar"
'';
}
```
> Our script produces a dependency on `jre` rather than `jdk` to restrict the runtime closure necessary to run the application.
This will give you an executable shell-script that launches your JAR with all the dependencies available.
```bash
tree $(nix-build --no-out-link runnable-jar.nix)
/nix/store/8d4c3ibw8ynsn01ibhyqmc1zhzz75s26-maven-demo-1.0
├── bin
│   └── maven-demo
├── repository -> /nix/store/g87va52nkc8jzbmi1aqdcf2f109r4dvn-maven-repository
└── share
└── java
└── maven-demo-1.0.jar
$(nix-build --no-out-link --option tarball-ttl 1 runnable-jar.nix)/bin/maven-demo
NixOS 😀 is super cool 😃!
```

View file

@ -16,9 +16,9 @@ cargo
into the `environment.systemPackages` or bring them into
scope with `nix-shell -p rustc cargo`.
For daily builds (beta and nightly) use either rustup from
nixpkgs or use the [Rust nightlies
overlay](#using-the-rust-nightlies-overlay).
For other versions such as daily builds (beta and nightly),
use either `rustup` from nixpkgs (which will manage the Rust installation in your home directory),
or Mozilla's [Rust nightlies overlay](#using-the-rust-nightlies-overlay).
## Compiling Rust applications with Cargo
@ -478,8 +478,15 @@ Mozilla provides an overlay for nixpkgs to bring a nightly version of Rust into
This overlay can _also_ be used to install recent unstable or stable versions
of Rust, if desired.
To use this overlay, clone
[nixpkgs-mozilla](https://github.com/mozilla/nixpkgs-mozilla),
### Rust overlay installation
You can use this overlay by either changing your local nixpkgs configuration,
or by adding the overlay declaratively in a nix expression, e.g. in `configuration.nix`.
For more information see [the manual on installing overlays](#sec-overlays-install).
#### Imperative rust overlay installation
Clone [nixpkgs-mozilla](https://github.com/mozilla/nixpkgs-mozilla),
and create a symbolic link to the file
[rust-overlay.nix](https://github.com/mozilla/nixpkgs-mozilla/blob/master/rust-overlay.nix)
in the `~/.config/nixpkgs/overlays` directory.
@ -488,7 +495,34 @@ in the `~/.config/nixpkgs/overlays` directory.
$ mkdir -p ~/.config/nixpkgs/overlays
$ ln -s $(pwd)/nixpkgs-mozilla/rust-overlay.nix ~/.config/nixpkgs/overlays/rust-overlay.nix
The latest version can be installed with the following command:
### Declarative rust overlay installation
Add the following to your `configuration.nix`, `home-configuration.nix`, `shell.nix`, or similar:
```
nixpkgs = {
overlays = [
(import (builtins.fetchTarball https://github.com/mozilla/nixpkgs-mozilla/archive/master.tar.gz))
# Further overlays go here
];
};
```
Note that this will fetch the latest overlay version when rebuilding your system.
### Rust overlay usage
The overlay contains attribute sets corresponding to different versions of the rust toolchain, such as:
* `latest.rustChannels.stable`
* `latest.rustChannels.nightly`
* a function `rustChannelOf`, called as `(rustChannelOf { date = "2018-04-11"; channel = "nightly"; })`, or...
* `(nixpkgs.rustChannelOf { rustToolchain = ./rust-toolchain; })` if you have a local `rust-toolchain` file (see https://github.com/mozilla/nixpkgs-mozilla#using-in-nix-expressions for an example)
Each of these contains packages such as `rust`, which contains your usual Rust development tools with the respective toolchain chosen.
For example, you might want to add `latest.rustChannels.stable.rust` to the list of packages in your configuration.
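For instance, a sketch for `configuration.nix`, assuming the overlay from above is already installed:
```nix
{ pkgs, ... }: {
  environment.systemPackages = [
    # rustc, cargo and friends from the latest stable channel
    pkgs.latest.rustChannels.stable.rust
  ];
}
```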
Imperatively, the latest stable version can be installed with the following command:
$ nix-env -Ai nixos.latest.rustChannels.stable.rust

View file

@ -2070,7 +2070,7 @@ nativeBuildInputs = [ breakpointHook ];
The <literal>installManPage</literal> function takes one or more paths to manpages to install. The manpages must have a section suffix, and may optionally be compressed (with <literal>.gz</literal> suffix). This function will place them into the correct directory.
</para>
<para>
The <literal>installShellCompletion</literal> function takes one or more paths to shell completion files. By default it will autodetect the shell type from the completion file extension, but you may also specify it by passing one of <literal>--bash</literal>, <literal>--fish</literal>, or <literal>--zsh</literal>. These flags apply to all paths listed after them (up until another shell flag is given). Each path may also have a custom installation name provided by providing a flag <literal>--name NAME</literal> before the path. If this flag is not provided, zsh completions will be renamed automatically such that <literal>foobar.zsh</literal> becomes <literal>_foobar</literal>.
The <literal>installShellCompletion</literal> function takes one or more paths to shell completion files. By default it will autodetect the shell type from the completion file extension, but you may also specify it by passing one of <literal>--bash</literal>, <literal>--fish</literal>, or <literal>--zsh</literal>. These flags apply to all paths listed after them (up until another shell flag is given). Each path may also have a custom installation name specified by passing the flag <literal>--name NAME</literal> before the path. If this flag is not provided, zsh completions will be renamed automatically such that <literal>foobar.zsh</literal> becomes <literal>_foobar</literal>. A root name may be provided for all paths using the flag <literal>--cmd NAME</literal>; this synthesizes the appropriate name depending on the shell (e.g. <literal>--cmd foo</literal> will synthesize the name <literal>foo.bash</literal> for bash and <literal>_foo</literal> for zsh). The path may also be a fifo or named fd (such as produced by <literal>&lt;(cmd)</literal>), in which case the shell and name must be provided.
<programlisting>
nativeBuildInputs = [ installShellFiles ];
postInstall = ''
@ -2081,6 +2081,11 @@ postInstall = ''
installShellCompletion --zsh --name _foobar share/completions.zsh
# implicit behavior
installShellCompletion share/completions/foobar.{bash,fish,zsh}
# using named fd
installShellCompletion --cmd foobar \
--bash &lt;($out/bin/foobar --bash-completion) \
--fish &lt;($out/bin/foobar --fish-completion) \
--zsh &lt;($out/bin/foobar --zsh-completion)
'';
</programlisting>
</para>

View file

@ -4007,6 +4007,12 @@
githubId = 2502736;
name = "James Hillyerd";
};
jiehong = {
email = "nixos@majiehong.com";
github = "Jiehong";
githubId = 1061229;
name = "Jiehong Ma";
};
jirkamarsik = {
email = "jiri.marsik89@gmail.com";
github = "jirkamarsik";
@ -4278,6 +4284,12 @@
githubId = 16374374;
name = "Joshua Campbell";
};
jshholland = {
email = "josh@inv.alid.pw";
github = "jshholland";
githubId = 107689;
name = "Josh Holland";
};
jtcoolen = {
email = "jtcoolen@pm.me";
name = "Julien Coolen";
@ -4816,6 +4828,12 @@
githubId = 20250323;
name = "Lucio Delelis";
};
ldenefle = {
email = "ldenefle@gmail.com";
github = "ldenefle";
githubId = 20558127;
name = "Lucas Denefle";
};
ldesgoui = {
email = "ldesgoui@gmail.com";
github = "ldesgoui";
@ -5268,6 +5286,12 @@
githubId = 1238350;
name = "Matthias Herrmann";
};
majesticmullet = {
email = "hoccthomas@gmail.com.au";
github = "MajesticMullet";
githubId = 31056089;
name = "Tom Ho";
};
makefu = {
email = "makefu@syntax-fehler.de";
github = "makefu";
@ -5520,6 +5544,12 @@
fingerprint = "D709 03C8 0BE9 ACDC 14F0 3BFB 77BF E531 397E DE94";
}];
};
meatcar = {
email = "nixpkgs@denys.me";
github = "meatcar";
githubId = 191622;
name = "Denys Pavlov";
};
meditans = {
email = "meditans@gmail.com";
github = "meditans";
@ -6439,6 +6469,12 @@
githubId = 167209;
name = "Masanori Ogino";
};
omgbebebe = {
email = "omgbebebe@gmail.com";
github = "omgbebebe";
githubId = 588167;
name = "Sergey Bubnov";
};
omnipotententity = {
email = "omnipotententity@gmail.com";
github = "omnipotententity";
@ -7065,6 +7101,12 @@
fingerprint = "7573 56D7 79BB B888 773E 415E 736C CDF9 EF51 BD97";
}];
};
r-burns = {
email = "rtburns@protonmail.com";
github = "r-burns";
githubId = 52847440;
name = "Ryan Burns";
};
raboof = {
email = "arnout@bzzt.net";
github = "raboof";

View file

@ -6,7 +6,7 @@
<title>Service Management</title>
<para>
In NixOS, all system services are started and monitored using the systemd
program. Systemd is the “init” process of the system (i.e. PID 1), the
program. systemd is the “init” process of the system (i.e. PID 1), the
parent of all other processes. It manages a set of so-called “units”,
which can be things like system services (programs), but also mount points,
swap files, devices, targets (groups of units) and more. Units can have
@ -16,10 +16,17 @@
dependencies of this unit cause all system services to be started, file
systems to be mounted, swap files to be activated, and so on.
</para>
<section xml:id="sect-nixos-systemd-general">
<title>Interacting with a running systemd</title>
<para>
The command <command>systemctl</command> is the main way to interact with
<command>systemd</command>. Without any arguments, it shows the status of
active units:
<command>systemd</command>. The following paragraphs demonstrate ways to
interact with any OS running systemd as its init system; NixOS is no
exception. The <link xlink:href="#sect-nixos-systemd-nixos">next section
</link> explains NixOS-specific things worth knowing.
</para>
<para>
Without any arguments, <command>systemctl</command> shows the status of active units:
<screen>
<prompt>$ </prompt>systemctl
-.mount loaded active mounted /
@ -66,7 +73,68 @@ Jan 07 15:55:57 hagbard systemd[1]: Started PostgreSQL Server.
starting or stopping (or has failed). Starting a unit will cause the
dependencies of that unit to be started as well (if necessary).
</para>
<!-- - cgroups: each service and user session is a cgroup
<!-- TODO: document cgroups, draft:
each service and user session is a cgroup
- cgroup resource management -->
</section>
<section xml:id="sect-nixos-systemd-nixos">
<title>systemd in NixOS</title>
<para>
Packages in Nixpkgs sometimes provide systemd units with them, usually in
e.g. <literal>#pkg-out#/lib/systemd/</literal>. Putting such a package in
<literal>environment.systemPackages</literal> doesn't make the service
available to users or the system.
</para>
<para>
In order to enable a systemd <emphasis>system</emphasis> service using the
unit files provided by an upstream package, use (e.g.):
<programlisting>
<xref linkend="opt-systemd.packages"/> = [ pkgs.packagekit ];
</programlisting>
</para>
<para>
Usually NixOS modules written by the community do the above, plus take care of
other details. If a module was written for a service you are interested in,
you probably only need to set
<literal>services.#name#.enable = true;</literal>. These services are defined
in Nixpkgs'
<link xlink:href="https://github.com/NixOS/nixpkgs/tree/master/nixos/modules">
<literal>nixos/modules/</literal> directory </link>. If the service is
simple enough, the above method should work and will start the service on boot.
</para>
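<para>
For illustration, assuming a module exists for the service in question (here
<literal>syncthing</literal> is used purely as an example):
<programlisting>
services.syncthing.enable = true;
</programlisting>
</para>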
<para>
<emphasis>User</emphasis> systemd services, on the other hand, should be
treated differently. Given a package that has a systemd unit file at
<literal>#pkg-out#/lib/systemd/user/</literal>, using
<xref linkend="opt-systemd.packages"/> will allow you to start the service via
<literal>systemctl --user start</literal>, but it won't start automatically on login.
<!-- TODO: Document why systemd.packages doesn't work for user services or fix this.
https://github.com/NixOS/nixpkgs/blob/2cd6594a8710a801038af2b72348658f732ce84a/nixos/modules/system/boot/systemd-lib.nix#L177-L198
This has been talked over at https://discourse.nixos.org/t/how-to-enable-upstream-systemd-user-services-declaratively/7649/5
-->
However, you can imperatively enable it by adding the package's attribute to
<link linkend="opt-environment.systemPackages">
<literal>systemd.packages</literal></link> and then do this (e.g.):
<screen>
<prompt>$ </prompt>mkdir -p ~/.config/systemd/user/default.target.wants
<prompt>$ </prompt>ln -s /run/current-system/sw/lib/systemd/user/syncthing.service ~/.config/systemd/user/default.target.wants/
<prompt>$ </prompt>systemctl --user daemon-reload
<prompt>$ </prompt>systemctl --user enable syncthing.service
</screen>
If you are interested in a timer file, use <literal>timers.target.wants</literal>
instead of <literal>default.target.wants</literal> in the first and second commands.
</para>
<para>
Using <literal>systemctl --user enable syncthing.service</literal> instead of
the above will work, but it will use the absolute path of
<literal>syncthing.service</literal> for the symlink, and this path is in
<literal>/nix/store/.../lib/systemd/user/</literal>. Hence
<link xlink:href="#sec-nix-gc">garbage collection</link> will remove that file
and you will wind up with a broken symlink in your systemd configuration, which
in turn means the service or timer will not start on login.
</para>
</section>
</chapter>

View file

@ -39,7 +39,19 @@
<itemizedlist>
<listitem>
<para />
<para>
<link xlink:href="https://www.keycloak.org/">Keycloak</link>,
an open source identity and access management server with
support for <link
xlink:href="https://openid.net/connect/">OpenID Connect</link>,
<link xlink:href="https://oauth.net/2/">OAuth 2.0</link> and
<link xlink:href="https://en.wikipedia.org/wiki/SAML_2.0">SAML
2.0</link>.
</para>
<para>
See the <link linkend="module-services-keycloak">Keycloak
section of the NixOS manual</link> for more information.
</para>
</listitem>
</itemizedlist>
@ -109,6 +121,21 @@
<literal>/var/lib/powerdns</literal> to <literal>/run/pdns</literal>.
</para>
</listitem>
<listitem>
<para>
<package>btc1</package> has been abandoned upstream, and removed.
</para>
</listitem>
<listitem>
<para>
The <package>riak-cs</package> package has been removed, along with the <varname>services.riak-cs</varname> module.
</para>
</listitem>
<listitem>
<para>
The <package>stanchion</package> package has been removed, along with the <varname>services.stanchion</varname> module.
</para>
</listitem>
</itemizedlist>
</section>

View file

@ -183,6 +183,11 @@ sub pciCheck {
push @imports, "(modulesPath + \"/hardware/network/broadcom-43xx.nix\")";
}
# In case this is a virtio scsi device, we need to explicitly make this available.
if ($vendor eq "0x1af4" && $device eq "0x1004") {
push @initrdAvailableKernelModules, "virtio_scsi";
}
# Can't rely on $module here, since the module may not be loaded
# due to missing firmware. Ideally we would check modules.pcimap
# here.

View file

@ -290,8 +290,8 @@ in
hound = 259;
leaps = 260;
ipfs = 261;
stanchion = 262;
riak-cs = 263;
# stanchion = 262; # unused, removed 2020-10-14
# riak-cs = 263; # unused, removed 2020-10-14
infinoted = 264;
sickbeard = 265;
headphones = 266;
@ -593,8 +593,8 @@ in
hound = 259;
leaps = 260;
ipfs = 261;
stanchion = 262;
riak-cs = 263;
# stanchion = 262; # unused, removed 2020-10-14
# riak-cs = 263; # unused, removed 2020-10-14
infinoted = 264;
sickbeard = 265;
headphones = 266;

View file

@ -296,8 +296,6 @@
./services/databases/postgresql.nix
./services/databases/redis.nix
./services/databases/riak.nix
./services/databases/riak-cs.nix
./services/databases/stanchion.nix
./services/databases/victoriametrics.nix
./services/databases/virtuoso.nix
./services/desktops/accountsservice.nix
@ -394,6 +392,7 @@
./services/logging/logcheck.nix
./services/logging/logrotate.nix
./services/logging/logstash.nix
./services/logging/promtail.nix
./services/logging/rsyslogd.nix
./services/logging/syslog-ng.nix
./services/logging/syslogd.nix
@ -403,7 +402,6 @@
./services/mail/dovecot.nix
./services/mail/dspam.nix
./services/mail/exim.nix
./services/mail/freepops.nix
./services/mail/mail.nix
./services/mail/mailcatcher.nix
./services/mail/mailhog.nix
@ -682,6 +680,7 @@
./services/networking/murmur.nix
./services/networking/mxisd.nix
./services/networking/namecoind.nix
./services/networking/nar-serve.nix
./services/networking/nat.nix
./services/networking/ndppd.nix
./services/networking/networkmanager.nix
@ -865,6 +864,7 @@
./services/web-apps/ihatemoney
./services/web-apps/jirafeau.nix
./services/web-apps/jitsi-meet.nix
./services/web-apps/keycloak.nix
./services/web-apps/limesurvey.nix
./services/web-apps/mattermost.nix
./services/web-apps/mediawiki.nix

View file

@ -1,202 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.riak-cs;
in
{
###### interface
options = {
services.riak-cs = {
enable = mkEnableOption "riak-cs";
package = mkOption {
type = types.package;
default = pkgs.riak-cs;
defaultText = "pkgs.riak-cs";
example = literalExample "pkgs.riak-cs";
description = ''
Riak package to use.
'';
};
nodeName = mkOption {
type = types.str;
default = "riak-cs@127.0.0.1";
description = ''
Name of the Erlang node.
'';
};
anonymousUserCreation = mkOption {
type = types.bool;
default = false;
description = ''
Anonymous user creation.
'';
};
riakHost = mkOption {
type = types.str;
default = "127.0.0.1:8087";
description = ''
Name of riak hosting service.
'';
};
listener = mkOption {
type = types.str;
default = "127.0.0.1:8080";
description = ''
Name of Riak CS listening service.
'';
};
stanchionHost = mkOption {
type = types.str;
default = "127.0.0.1:8085";
description = ''
Name of stanchion hosting service.
'';
};
stanchionSsl = mkOption {
type = types.bool;
default = true;
description = ''
Tell stanchion to use SSL.
'';
};
distributedCookie = mkOption {
type = types.str;
default = "riak";
description = ''
Cookie for distributed node communication. All nodes in the
same cluster should use the same cookie or they will not be able to
communicate.
'';
};
dataDir = mkOption {
type = types.path;
default = "/var/db/riak-cs";
description = ''
Data directory for Riak CS.
'';
};
logDir = mkOption {
type = types.path;
default = "/var/log/riak-cs";
description = ''
Log directory for Riak CS.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Additional text to be appended to <filename>riak-cs.conf</filename>.
'';
};
extraAdvancedConfig = mkOption {
type = types.lines;
default = "";
description = ''
Additional text to be appended to <filename>advanced.config</filename>.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
environment.etc."riak-cs/riak-cs.conf".text = ''
nodename = ${cfg.nodeName}
distributed_cookie = ${cfg.distributedCookie}
platform_log_dir = ${cfg.logDir}
riak_host = ${cfg.riakHost}
listener = ${cfg.listener}
stanchion_host = ${cfg.stanchionHost}
anonymous_user_creation = ${if cfg.anonymousUserCreation then "on" else "off"}
${cfg.extraConfig}
'';
environment.etc."riak-cs/advanced.config".text = ''
${cfg.extraAdvancedConfig}
'';
users.users.riak-cs = {
name = "riak-cs";
uid = config.ids.uids.riak-cs;
group = "riak";
description = "Riak CS server user";
};
systemd.services.riak-cs = {
description = "Riak CS Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [
pkgs.utillinux # for `logger`
pkgs.bash
];
environment.HOME = "${cfg.dataDir}";
environment.RIAK_CS_DATA_DIR = "${cfg.dataDir}";
environment.RIAK_CS_LOG_DIR = "${cfg.logDir}";
environment.RIAK_CS_ETC_DIR = "/etc/riak";
preStart = ''
if ! test -e ${cfg.logDir}; then
mkdir -m 0755 -p ${cfg.logDir}
chown -R riak-cs ${cfg.logDir}
fi
if ! test -e ${cfg.dataDir}; then
mkdir -m 0700 -p ${cfg.dataDir}
chown -R riak-cs ${cfg.dataDir}
fi
'';
serviceConfig = {
ExecStart = "${cfg.package}/bin/riak-cs console";
ExecStop = "${cfg.package}/bin/riak-cs stop";
StandardInput = "tty";
User = "riak-cs";
Group = "riak-cs";
PermissionsStartOnly = true;
# Give Riak a decent amount of time to clean up.
TimeoutStopSec = 120;
LimitNOFILE = 65536;
};
unitConfig.RequiresMountsFor = [
"${cfg.dataDir}"
"${cfg.logDir}"
"/etc/riak"
];
};
};
}

View file

@ -1,194 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.stanchion;
in
{
###### interface
options = {
services.stanchion = {
enable = mkEnableOption "stanchion";
package = mkOption {
type = types.package;
default = pkgs.stanchion;
defaultText = "pkgs.stanchion";
example = literalExample "pkgs.stanchion";
description = ''
Stanchion package to use.
'';
};
nodeName = mkOption {
type = types.str;
default = "stanchion@127.0.0.1";
description = ''
Name of the Erlang node.
'';
};
adminKey = mkOption {
type = types.str;
default = "";
description = ''
Name of admin user.
'';
};
adminSecret = mkOption {
type = types.str;
default = "";
description = ''
Name of admin secret
'';
};
riakHost = mkOption {
type = types.str;
default = "127.0.0.1:8087";
description = ''
Name of riak hosting service.
'';
};
listener = mkOption {
type = types.str;
default = "127.0.0.1:8085";
description = ''
Name of Riak CS listening service.
'';
};
stanchionHost = mkOption {
type = types.str;
default = "127.0.0.1:8085";
description = ''
Name of stanchion hosting service.
'';
};
distributedCookie = mkOption {
type = types.str;
default = "riak";
description = ''
Cookie for distributed node communication. All nodes in the
same cluster should use the same cookie or they will not be able to
communicate.
'';
};
dataDir = mkOption {
type = types.path;
default = "/var/db/stanchion";
description = ''
Data directory for Stanchion.
'';
};
logDir = mkOption {
type = types.path;
default = "/var/log/stanchion";
description = ''
Log directory for Stanchion.
'';
};
extraConfig = mkOption {
type = types.lines;
default = "";
description = ''
Additional text to be appended to <filename>stanchion.conf</filename>.
'';
};
};
};
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ cfg.package ];
environment.etc."stanchion/advanced.config".text = ''
[{stanchion, []}].
'';
environment.etc."stanchion/stanchion.conf".text = ''
listener = ${cfg.listener}
riak_host = ${cfg.riakHost}
${optionalString (cfg.adminKey == "") "#"} admin.key=${optionalString (cfg.adminKey != "") cfg.adminKey}
${optionalString (cfg.adminSecret == "") "#"} admin.secret=${optionalString (cfg.adminSecret != "") cfg.adminSecret}
platform_bin_dir = ${pkgs.stanchion}/bin
platform_data_dir = ${cfg.dataDir}
platform_etc_dir = /etc/stanchion
platform_lib_dir = ${pkgs.stanchion}/lib
platform_log_dir = ${cfg.logDir}
nodename = ${cfg.nodeName}
distributed_cookie = ${cfg.distributedCookie}
${cfg.extraConfig}
'';
users.users.stanchion = {
name = "stanchion";
uid = config.ids.uids.stanchion;
group = "stanchion";
description = "Stanchion server user";
};
users.groups.stanchion.gid = config.ids.gids.stanchion;
systemd.tmpfiles.rules = [
"d '${cfg.logDir}' - stanchion stanchion --"
"d '${cfg.dataDir}' 0700 stanchion stanchion --"
];
systemd.services.stanchion = {
description = "Stanchion Server";
wantedBy = [ "multi-user.target" ];
after = [ "network.target" ];
path = [
pkgs.utillinux # for `logger`
pkgs.bash
];
environment.HOME = "${cfg.dataDir}";
environment.STANCHION_DATA_DIR = "${cfg.dataDir}";
environment.STANCHION_LOG_DIR = "${cfg.logDir}";
environment.STANCHION_ETC_DIR = "/etc/stanchion";
serviceConfig = {
ExecStart = "${cfg.package}/bin/stanchion console";
ExecStop = "${cfg.package}/bin/stanchion stop";
StandardInput = "tty";
User = "stanchion";
Group = "stanchion";
# Give Stanchion a decent amount of time to clean up.
TimeoutStopSec = 120;
LimitNOFILE = 65536;
};
unitConfig.RequiresMountsFor = [
"${cfg.dataDir}"
"${cfg.logDir}"
"/etc/stanchion"
];
};
};
}

View file

@ -87,6 +87,8 @@ in {
bluetooth = {
wantedBy = [ "bluetooth.target" ];
aliases = [ "dbus-org.bluez.service" ];
# restarting can leave people without a mouse/keyboard
unitConfig.X-RestartIfChanged = false;
};
};

View file

@ -0,0 +1,95 @@
{ config, lib, pkgs, ... }: with lib;
let
cfg = config.services.promtail;
prettyJSON = conf: pkgs.runCommandLocal "promtail-config.json" {} ''
echo '${builtins.toJSON conf}' | ${pkgs.buildPackages.jq}/bin/jq 'del(._module)' > $out
'';
in {
options.services.promtail = with types; {
enable = mkEnableOption "the Promtail ingresser";
configuration = mkOption {
type = with lib.types; let
valueType = nullOr (oneOf [
bool
int
float
str
(lazyAttrsOf valueType)
(listOf valueType)
]) // {
description = "JSON value";
emptyValue.value = {};
deprecationMessage = null;
};
in valueType;
description = ''
Specify the configuration for Promtail in Nix.
'';
};
extraFlags = mkOption {
type = listOf str;
default = [];
example = [ "--server.http-listen-port=3101" ];
description = ''
Specify a list of additional command line flags,
which get escaped and are then passed to Promtail.
'';
};
};
config = mkIf cfg.enable {
services.promtail.configuration.positions.filename = mkDefault "/var/cache/promtail/positions.yaml";
systemd.services.promtail = {
description = "Promtail log ingress";
wantedBy = [ "multi-user.target" ];
stopIfChanged = false;
serviceConfig = {
Restart = "on-failure";
ExecStart = "${pkgs.grafana-loki}/bin/promtail -config.file=${prettyJSON cfg.configuration} ${escapeShellArgs cfg.extraFlags}";
ProtectSystem = "strict";
ProtectHome = true;
PrivateTmp = true;
PrivateDevices = true;
ProtectKernelTunables = true;
ProtectControlGroups = true;
RestrictSUIDSGID = true;
PrivateMounts = true;
CacheDirectory = "promtail";
User = "promtail";
Group = "promtail";
CapabilityBoundingSet = "";
NoNewPrivileges = true;
ProtectKernelModules = true;
SystemCallArchitectures = "native";
ProtectKernelLogs = true;
ProtectClock = true;
LockPersonality = true;
ProtectHostname = true;
RestrictRealtime = true;
MemoryDenyWriteExecute = true;
PrivateUsers = true;
} // (optionalAttrs (!pkgs.stdenv.isAarch64) { # FIXME: figure out why this breaks on aarch64
SystemCallFilter = "@system-service";
});
};
users.groups.promtail = {};
users.users.promtail = {
description = "Promtail service user";
isSystemUser = true;
group = "promtail";
};
};
}

View file

@ -1,89 +0,0 @@
{ config, lib, pkgs, ... }:
with lib;
let
cfg = config.services.mail.freepopsd;
in
{
options = {
services.mail.freepopsd = {
enable = mkOption {
default = false;
type = with types; bool;
description = ''
Enables Freepops, a POP3 webmail wrapper.
'';
};
port = mkOption {
default = 2000;
type = with types; uniq int;
description = ''
Port on which the pop server will listen.
'';
};
threads = mkOption {
default = 5;
type = with types; uniq int;
description = ''
Max simultaneous connections.
'';
};
bind = mkOption {
default = "0.0.0.0";
type = types.str;
description = ''
Bind over an IPv4 address instead of any.
'';
};
logFile = mkOption {
default = "/var/log/freepopsd";
example = "syslog";
type = types.str;
description = ''
Filename of the log file or syslog to rely on the logging daemon.
'';
};
suid = {
user = mkOption {
default = "nobody";
type = types.str;
description = ''
User name under which freepopsd will be after binding the port.
'';
};
group = mkOption {
default = "nogroup";
type = types.str;
description = ''
Group under which freepopsd will be after binding the port.
'';
};
};
};
};
config = mkIf cfg.enable {
systemd.services.freepopsd = {
description = "Freepopsd (webmail over POP3)";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
script = ''
${pkgs.freepops}/bin/freepopsd \
-p ${toString cfg.port} \
-t ${toString cfg.threads} \
-b ${cfg.bind} \
-vv -l ${cfg.logFile} \
-s ${cfg.suid.user}.${cfg.suid.group}
'';
};
};
}

View file

@ -45,6 +45,7 @@ let
"rspamd"
"rtl_433"
"snmp"
"sql"
"surfboard"
"tor"
"unifi"
@ -218,6 +219,14 @@ in
Please specify either 'services.prometheus.exporters.mail.configuration'
or 'services.prometheus.exporters.mail.configFile'.
'';
} {
assertion = cfg.sql.enable -> (
(cfg.sql.configFile == null) != (cfg.sql.configuration == null)
);
message = ''
Please specify either 'services.prometheus.exporters.sql.configuration' or
'services.prometheus.exporters.sql.configFile'
'';
} ];
}] ++ [(mkIf config.services.minio.enable {
services.prometheus.exporters.minio.minioAddress = mkDefault "http://localhost:9000";

View file

@ -0,0 +1,104 @@
{ config, lib, pkgs, options }:
with lib;
let
cfg = config.services.prometheus.exporters.sql;
cfgOptions = {
options = with types; {
jobs = mkOption {
type = attrsOf (submodule jobOptions);
default = { };
description = "An attrset of metrics scraping jobs to run.";
};
};
};
jobOptions = {
options = with types; {
interval = mkOption {
type = str;
description = ''
How often to run this job, specified in
<link xlink:href="https://golang.org/pkg/time/#ParseDuration">Go duration</link> format.
'';
};
connections = mkOption {
type = listOf str;
description = "A list of connection strings of the SQL servers to scrape metrics from.";
};
startupSql = mkOption {
type = listOf str;
default = [];
description = "A list of SQL statements to execute once after making a connection.";
};
queries = mkOption {
type = attrsOf (submodule queryOptions);
description = "SQL queries to run.";
};
};
};
queryOptions = {
options = with types; {
help = mkOption {
type = nullOr str;
default = null;
description = "A human-readable description of this metric.";
};
labels = mkOption {
type = listOf str;
default = [ ];
description = "A set of columns that will be used as Prometheus labels.";
};
query = mkOption {
type = str;
description = "The SQL query to run.";
};
values = mkOption {
type = listOf str;
description = "A set of columns that will be used as values of this metric.";
};
};
};
configFile =
if cfg.configFile != null
then cfg.configFile
else
let
nameInline = mapAttrsToList (k: v: v // { name = k; });
renameStartupSql = j: removeAttrs (j // { startup_sql = j.startupSql; }) [ "startupSql" ];
configuration = {
jobs = map renameStartupSql
(nameInline (mapAttrs (k: v: (v // { queries = nameInline v.queries; })) cfg.configuration.jobs));
};
in
builtins.toFile "config.yaml" (builtins.toJSON configuration);
in
{
extraOpts = {
configFile = mkOption {
type = with types; nullOr path;
default = null;
description = ''
Path to configuration file.
'';
};
configuration = mkOption {
type = with types; nullOr (submodule cfgOptions);
default = null;
description = ''
Exporter configuration as nix attribute set. Mutually exclusive with 'configFile' option.
'';
};
};
port = 9237;
serviceOpts = {
serviceConfig = {
ExecStart = ''
${pkgs.prometheus-sql-exporter}/bin/sql_exporter \
-web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
-config.file ${configFile} \
${concatStringsSep " \\\n " cfg.extraFlags}
'';
};
};
}

View file

@ -44,6 +44,13 @@ in {
enable = mkEnableOption "Interplanetary File System (WARNING: may cause severe network degradation)";
package = mkOption {
type = types.package;
default = pkgs.ipfs;
defaultText = "pkgs.ipfs";
description = "Which IPFS package to use.";
};
user = mkOption {
type = types.str;
default = "ipfs";
@ -176,7 +183,7 @@ in {
###### implementation
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.ipfs ];
environment.systemPackages = [ cfg.package ];
environment.variables.IPFS_PATH = cfg.dataDir;
programs.fuse = mkIf cfg.autoMount {
@ -207,14 +214,14 @@ in {
"d '${cfg.ipnsMountDir}' - ${cfg.user} ${cfg.group} - -"
];
systemd.packages = [ pkgs.ipfs ];
systemd.packages = [ cfg.package ];
systemd.services.ipfs-init = {
description = "IPFS Initializer";
environment.IPFS_PATH = cfg.dataDir;
path = [ pkgs.ipfs ];
path = [ cfg.package ];
script = ''
if [[ ! -f ${cfg.dataDir}/config ]]; then
@ -239,7 +246,7 @@ in {
};
systemd.services.ipfs = {
path = [ "/run/wrappers" pkgs.ipfs ];
path = [ "/run/wrappers" cfg.package ];
environment.IPFS_PATH = cfg.dataDir;
wants = [ "ipfs-init.service" ];
@ -267,7 +274,7 @@ in {
cfg.extraConfig))
);
serviceConfig = {
ExecStart = ["" "${pkgs.ipfs}/bin/ipfs daemon ${ipfsFlags}"];
ExecStart = ["" "${cfg.package}/bin/ipfs daemon ${ipfsFlags}"];
User = cfg.user;
Group = cfg.group;
} // optionalAttrs (cfg.serviceFdlimit != null) { LimitNOFILE = cfg.serviceFdlimit; };

View file

@ -69,6 +69,11 @@ let
if-carrier-up = "";
}.${cfg.wait}}
${optionalString (config.networking.enableIPv6 == false) ''
# Don't solicit or accept IPv6 Router Advertisements and DHCPv6 if IPv6 is disabled
noipv6
''}
${cfg.extraConfig}
'';

View file

@ -0,0 +1,55 @@
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.nar-serve;
in
{
meta = {
maintainers = [ maintainers.rizary ];
};
options = {
services.nar-serve = {
enable = mkEnableOption "Serve NAR file contents via HTTP";
port = mkOption {
type = types.int;
default = 8383;
description = ''
Port number on which nar-serve will listen.
'';
};
cacheURL = mkOption {
type = types.str;
default = "https://cache.nixos.org/";
description = ''
Binary cache URL to connect to.
The URL format is compatible with the nix remote url style, such as:
- http://, https:// for binary caches via HTTP or HTTPS
- s3:// for binary caches stored in Amazon S3
- gs:// for binary caches stored in Google Cloud Storage
'';
};
};
};
config = mkIf cfg.enable {
systemd.services.nar-serve = {
description = "NAR server";
after = [ "network.target" ];
wantedBy = [ "multi-user.target" ];
environment.PORT = toString cfg.port;
environment.NAR_CACHE_URL = cfg.cacheURL;
serviceConfig = {
Restart = "always";
RestartSec = "5s";
ExecStart = "${pkgs.nar-serve}/bin/nar-serve";
DynamicUser = true;
};
};
};
}

View file

@ -6,6 +6,7 @@ let
cfg = config.services.chrony;
stateDir = "/var/lib/chrony";
driftFile = "${stateDir}/chrony.drift";
keyFile = "${stateDir}/chrony.keys";
configFile = pkgs.writeText "chrony.conf" ''
@ -16,7 +17,7 @@ let
"initstepslew ${toString cfg.initstepslew.threshold} ${concatStringsSep " " cfg.servers}"
}
driftfile ${stateDir}/chrony.drift
driftfile ${driftFile}
keyfile ${keyFile}
${optionalString (!config.time.hardwareClockInLocalTime) "rtconutc"}
@ -95,6 +96,7 @@ in
systemd.tmpfiles.rules = [
"d ${stateDir} 0755 chrony chrony - -"
"f ${driftFile} 0640 chrony chrony -"
"f ${keyFile} 0640 chrony chrony -"
];

View file

@ -16,8 +16,12 @@ let
serverConfig = {
options = {
accept = mkOption {
type = types.int;
description = "On which port stunnel should listen for incoming TLS connections.";
type = types.either types.str types.int;
description = ''
On which [host:]port stunnel should listen for incoming TLS connections.
Note that unlike other software, stunnel requires no brackets around IPv6 addresses;
for example, to listen on all IPv6 addresses on port 1234 one would use ':::1234'.
'';
};
connect = mkOption {
@ -129,7 +133,6 @@ in
type = with types; attrsOf (submodule serverConfig);
example = {
fancyWebserver = {
enable = true;
accept = 443;
connect = 8080;
cert = "/path/to/pem/file";

View file

@ -18,30 +18,10 @@ in {
config = mkIf cfg.enable {
environment.systemPackages = [ pkgs.tailscale ]; # for the CLI
systemd.services.tailscale = {
description = "Tailscale client daemon";
after = [ "network-pre.target" ];
wants = [ "network-pre.target" ];
systemd.packages = [ pkgs.tailscale ];
systemd.services.tailscaled = {
wantedBy = [ "multi-user.target" ];
startLimitIntervalSec = 0;
serviceConfig = {
ExecStart =
"${pkgs.tailscale}/bin/tailscaled --port ${toString cfg.port}";
RuntimeDirectory = "tailscale";
RuntimeDirectoryMode = 755;
StateDirectory = "tailscale";
StateDirectoryMode = 750;
CacheDirectory = "tailscale";
CacheDirectoryMode = 750;
Restart = "on-failure";
};
serviceConfig.Environment = "PORT=${toString cfg.port}";
};
};
}

View file

@ -236,6 +236,7 @@ in
# an AppArmor profile is provided to get a confinement based upon paths and rights.
builtins.storeDir
"/etc"
"/run"
] ++
optional (cfg.settings.script-torrent-done-enabled &&
cfg.settings.script-torrent-done-filename != "")
@ -408,6 +409,7 @@ in
#r @{PROC}/@{pid}/environ,
r @{PROC}/@{pid}/mounts,
rwk /tmp/tr_session_id_*,
r /run/systemd/resolve/stub-resolv.conf,
r ${pkgs.openssl.out}/etc/**,
r ${config.systemd.services.transmission.environment.CURL_CA_BUNDLE},

View file

@ -1,31 +0,0 @@
#!/usr/bin/env -S nix-build --no-out-link
# Script to generate default streaming configurations for EPGStation. There's
# no need to run this script directly since generate.sh in the EPGStation
# package directory would run this script for you.
#
# Usage: ./generate | xargs cat > streaming.json
{ pkgs ? (import ../../../../.. {}) }:
let
sampleConfigPath = "${pkgs.epgstation.src}/config/config.sample.json";
sampleConfig = builtins.fromJSON (builtins.readFile sampleConfigPath);
streamingConfig = {
inherit (sampleConfig)
mpegTsStreaming
mpegTsViewer
liveHLS
liveMP4
liveWebM
recordedDownloader
recordedStreaming
recordedViewer
recordedHLS;
};
in
pkgs.runCommand "streaming.json" { nativeBuildInputs = [ pkgs.jq ]; } ''
jq . <<<'${builtins.toJSON streamingConfig}' > $out
''
# vim:set ft=nix:

View file

@ -1,119 +1,119 @@
{
"liveHLS": [
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 17 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -preset veryfast -flags +loop-global_header %OUTPUT%",
"name": "720p"
"name": "720p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 17 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -preset veryfast -flags +loop-global_header %OUTPUT%"
},
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 17 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -preset veryfast -flags +loop-global_header %OUTPUT%",
"name": "480p"
"name": "480p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 17 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -preset veryfast -flags +loop-global_header %OUTPUT%"
},
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 17 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 48k -ac 2 -c:v libx264 -vf yadif,scale=-2:180 -b:v 100k -preset veryfast -maxrate 110k -bufsize 1000k -flags +loop-global_header %OUTPUT%",
"name": "180p"
"name": "180p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 17 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 48k -ac 2 -c:v libx264 -vf yadif,scale=-2:180 -b:v 100k -preset veryfast -maxrate 110k -bufsize 1000k -flags +loop-global_header %OUTPUT%"
}
],
"liveMP4": [
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1",
"name": "720p"
"name": "720p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1"
},
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1",
"name": "480p"
"name": "480p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1"
}
],
"liveWebM": [
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 3 -c:a libvorbis -ar 48000 -b:a 192k -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:720 -b:v 3000k -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1",
"name": "720p"
"name": "720p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 3 -c:a libvorbis -ar 48000 -b:a 192k -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:720 -b:v 3000k -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1"
},
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 2 -c:a libvorbis -ar 48000 -b:a 128k -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:480 -b:v 1500k -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1",
"name": "480p"
"name": "480p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 2 -c:a libvorbis -ar 48000 -b:a 128k -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:480 -b:v 1500k -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1"
}
],
"mpegTsStreaming": [
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -preset veryfast -y -f mpegts pipe:1",
"name": "720p"
"name": "720p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -preset veryfast -y -f mpegts pipe:1"
},
{
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -preset veryfast -y -f mpegts pipe:1",
"name": "480p"
"name": "480p",
"cmd": "%FFMPEG% -re -dual_mono_mode main -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -preset veryfast -y -f mpegts pipe:1"
},
{
"name": "Original"
}
],
"mpegTsViewer": {
"android": "intent://ADDRESS#Intent;package=com.mxtech.videoplayer.ad;type=video;scheme=http;end",
"ios": "vlc-x-callback://x-callback-url/stream?url=http://ADDRESS"
"ios": "vlc-x-callback://x-callback-url/stream?url=http://ADDRESS",
"android": "intent://ADDRESS#Intent;package=com.mxtech.videoplayer.ad;type=video;scheme=http;end"
},
"recordedDownloader": {
"android": "intent://ADDRESS#Intent;package=com.dv.adm;type=video;scheme=http;end",
"ios": "vlc-x-callback://x-callback-url/download?url=http://ADDRESS&filename=FILENAME"
"ios": "vlc-x-callback://x-callback-url/download?url=http://ADDRESS&filename=FILENAME",
"android": "intent://ADDRESS#Intent;package=com.dv.adm;type=video;scheme=http;end"
},
"recordedHLS": [
"recordedStreaming": {
"webm": [
{
"cmd": "%FFMPEG% -dual_mono_mode main -i %INPUT% -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 0 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -preset veryfast -flags +loop-global_header %OUTPUT%",
"name": "720p"
"name": "720p",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 3 -c:a libvorbis -ar 48000 -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:720 %VB% %VBUFFER% %AB% %ABUFFER% -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1",
"vb": "3000k",
"ab": "192k"
},
{
"cmd": "%FFMPEG% -dual_mono_mode main -i %INPUT% -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 0 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -preset veryfast -flags +loop-global_header %OUTPUT%",
"name": "480p"
},
{
"cmd": "%FFMPEG% -dual_mono_mode main -i %INPUT% -sn -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 0 -hls_allow_cache 1 -hls_segment_type fmp4 -hls_fmp4_init_filename stream%streamNum%-init.mp4 -hls_segment_filename stream%streamNum%-%09d.m4s -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx265 -vf yadif,scale=-2:480 -b:v 350k -preset veryfast -tag:v hvc1 %OUTPUT%",
"name": "480p(h265)"
"name": "360p",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 2 -c:a libvorbis -ar 48000 -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:360 %VB% %VBUFFER% %AB% %ABUFFER% -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1",
"vb": "1500k",
"ab": "128k"
}
],
"recordedStreaming": {
"mp4": [
{
"ab": "192k",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:720 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1",
"name": "720p",
"vb": "3000k"
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:720 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1",
"vb": "3000k",
"ab": "192k"
},
{
"ab": "128k",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:360 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1",
"name": "360p",
"vb": "1500k"
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:360 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -movflags frag_keyframe+empty_moov+faststart+default_base_moof -y -f mp4 pipe:1",
"vb": "1500k",
"ab": "128k"
}
],
"mpegTs": [
{
"ab": "192k",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:720 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -y -f mpegts pipe:1",
"name": "720p (H.264)",
"vb": "3000k"
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:720 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -y -f mpegts pipe:1",
"vb": "3000k",
"ab": "192k"
},
{
"ab": "128k",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:360 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -y -f mpegts pipe:1",
"name": "360p (H.264)",
"vb": "1500k"
}
],
"webm": [
{
"ab": "192k",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 3 -c:a libvorbis -ar 48000 -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:720 %VB% %VBUFFER% %AB% %ABUFFER% -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1",
"name": "720p",
"vb": "3000k"
},
{
"ab": "128k",
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 2 -c:a libvorbis -ar 48000 -ac 2 -c:v libvpx-vp9 -vf yadif,scale=-2:360 %VB% %VBUFFER% %AB% %ABUFFER% -deadline realtime -speed 4 -cpu-used -8 -y -f webm pipe:1",
"name": "360p",
"vb": "1500k"
"cmd": "%FFMPEG% -dual_mono_mode main %RE% -i pipe:0 -sn -threads 0 -c:a aac -ar 48000 -ac 2 -c:v libx264 -vf yadif,scale=-2:360 %VB% %VBUFFER% %AB% %ABUFFER% -profile:v baseline -preset veryfast -tune fastdecode,zerolatency -y -f mpegts pipe:1",
"vb": "1500k",
"ab": "128k"
}
]
},
"recordedHLS": [
{
"name": "720p",
"cmd": "%FFMPEG% -dual_mono_mode main -i %INPUT% -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 0 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 192k -ac 2 -c:v libx264 -vf yadif,scale=-2:720 -b:v 3000k -preset veryfast -flags +loop-global_header %OUTPUT%"
},
{
"name": "480p",
"cmd": "%FFMPEG% -dual_mono_mode main -i %INPUT% -sn -threads 0 -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 0 -hls_allow_cache 1 -hls_segment_filename %streamFileDir%/stream%streamNum%-%09d.ts -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx264 -vf yadif,scale=-2:480 -b:v 1500k -preset veryfast -flags +loop-global_header %OUTPUT%"
},
{
"name": "480p(h265)",
"cmd": "%FFMPEG% -dual_mono_mode main -i %INPUT% -sn -map 0 -ignore_unknown -max_muxing_queue_size 1024 -f hls -hls_time 3 -hls_list_size 0 -hls_allow_cache 1 -hls_segment_type fmp4 -hls_fmp4_init_filename stream%streamNum%-init.mp4 -hls_segment_filename stream%streamNum%-%09d.m4s -c:a aac -ar 48000 -b:a 128k -ac 2 -c:v libx265 -vf yadif,scale=-2:480 -b:v 350k -preset veryfast -tag:v hvc1 %OUTPUT%"
}
],
"recordedViewer": {
"android": "intent://ADDRESS#Intent;package=com.mxtech.videoplayer.ad;type=video;scheme=http;end",
"ios": "infuse://x-callback-url/play?url=http://ADDRESS"
"ios": "infuse://x-callback-url/play?url=http://ADDRESS",
"android": "intent://ADDRESS#Intent;package=com.mxtech.videoplayer.ad;type=video;scheme=http;end"
}
}

View file

@ -0,0 +1,692 @@
{ config, pkgs, lib, ... }:
let
cfg = config.services.keycloak;
in
{
options.services.keycloak = {
enable = lib.mkOption {
type = lib.types.bool;
default = false;
example = true;
description = ''
Whether to enable the Keycloak identity and access management
server.
'';
};
bindAddress = lib.mkOption {
type = lib.types.str;
default = "\${jboss.bind.address:0.0.0.0}";
example = "127.0.0.1";
description = ''
On which address Keycloak should accept new connections.
A special syntax can be used to allow command line Java system
properties to override the value: ''${property.name:value}
'';
};
httpPort = lib.mkOption {
type = lib.types.str;
default = "\${jboss.http.port:80}";
example = "8080";
description = ''
On which port Keycloak should listen for new HTTP connections.
A special syntax can be used to allow command line Java system
properties to override the value: ''${property.name:value}
'';
};
httpsPort = lib.mkOption {
type = lib.types.str;
default = "\${jboss.https.port:443}";
example = "8443";
description = ''
On which port Keycloak should listen for new HTTPS connections.
A special syntax can be used to allow command line Java system
properties to override the value: ''${property.name:value}
'';
};
frontendUrl = lib.mkOption {
type = lib.types.str;
example = "keycloak.example.com/auth";
description = ''
The public URL used as base for all frontend requests. Should
normally include a trailing <literal>/auth</literal>.
See <link xlink:href="https://www.keycloak.org/docs/latest/server_installation/#_hostname">the
Hostname section of the Keycloak server installation
manual</link> for more information.
'';
};
forceBackendUrlToFrontendUrl = lib.mkOption {
type = lib.types.bool;
default = false;
example = true;
description = ''
Whether Keycloak should force all requests to go through the
frontend URL configured in <xref
linkend="opt-services.keycloak.frontendUrl" />. By default,
Keycloak allows backend requests to instead use its local
hostname or IP address and may also advertise it to clients
through its OpenID Connect Discovery endpoint.
See <link
xlink:href="https://www.keycloak.org/docs/latest/server_installation/#_hostname">the
Hostname section of the Keycloak server installation
manual</link> for more information.
'';
};
certificatePrivateKeyBundle = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
example = "/run/keys/ssl_cert";
description = ''
The path to a PEM formatted bundle of the private key and
certificate to use for TLS connections.
This should be a string, not a Nix path, since Nix paths are
copied into the world-readable Nix store.
'';
};
databaseType = lib.mkOption {
type = lib.types.enum [ "mysql" "postgresql" ];
default = "postgresql";
example = "mysql";
description = ''
The type of database Keycloak should connect to.
'';
};
databaseHost = lib.mkOption {
type = lib.types.str;
default = "localhost";
description = ''
Hostname of the database to connect to.
'';
};
databasePort =
let
dbPorts = {
postgresql = 5432;
mysql = 3306;
};
in
lib.mkOption {
type = lib.types.port;
default = dbPorts.${cfg.databaseType};
description = ''
Port of the database to connect to.
'';
};
databaseUseSSL = lib.mkOption {
type = lib.types.bool;
default = cfg.databaseHost != "localhost";
description = ''
Whether the database connection should be secured by SSL /
TLS.
'';
};
databaseCaCert = lib.mkOption {
type = lib.types.nullOr lib.types.path;
default = null;
description = ''
The SSL / TLS CA certificate that verifies the identity of the
database server.
Required when PostgreSQL is used and SSL is turned on.
For MySQL, if left at <literal>null</literal>, the default
Java keystore is used, which should suffice if the server
certificate is issued by an official CA.
'';
};
databaseCreateLocally = lib.mkOption {
type = lib.types.bool;
default = true;
description = ''
Whether a database should be automatically created on the
local host. Set this to false if you plan on provisioning a
local database yourself. This has no effect if
services.keycloak.databaseHost is customized.
'';
};
databaseUsername = lib.mkOption {
type = lib.types.str;
default = "keycloak";
description = ''
Username to use when connecting to an external or manually
provisioned database; has no effect when a local database is
automatically provisioned.
'';
};
databasePasswordFile = lib.mkOption {
type = lib.types.path;
example = "/run/keys/db_password";
description = ''
File containing the database password.
This should be a string, not a Nix path, since Nix paths are
copied into the world-readable Nix store.
'';
};
package = lib.mkOption {
type = lib.types.package;
default = pkgs.keycloak;
description = ''
Keycloak package to use.
'';
};
initialAdminPassword = lib.mkOption {
type = lib.types.str;
default = "changeme";
description = ''
Initial password set for the <literal>admin</literal>
user. The password is not stored safely and should be changed
immediately in the admin panel.
'';
};
extraConfig = lib.mkOption {
type = lib.types.attrs;
default = { };
example = lib.literalExample ''
{
"subsystem=keycloak-server" = {
"spi=hostname" = {
"provider=default" = null;
"provider=fixed" = {
enabled = true;
properties.hostname = "keycloak.example.com";
};
default-provider = "fixed";
};
};
}
'';
description = ''
Additional Keycloak configuration options to set in
<literal>standalone.xml</literal>.
Options are expressed as a Nix attribute set which matches the
structure of the jboss-cli configuration. The configuration is
effectively overlaid on top of the default configuration
shipped with Keycloak. To remove existing nodes and undefine
attributes from the default configuration, set them to
<literal>null</literal>.
The example configuration does the equivalent of the following
script, which removes the hostname provider
<literal>default</literal>, adds the deprecated hostname
provider <literal>fixed</literal> and defines it as the default:
<programlisting>
/subsystem=keycloak-server/spi=hostname/provider=default:remove()
/subsystem=keycloak-server/spi=hostname/provider=fixed:add(enabled = true, properties = { hostname = "keycloak.example.com" })
/subsystem=keycloak-server/spi=hostname:write-attribute(name=default-provider, value="fixed")
</programlisting>
You can discover available options by using the <link
xlink:href="http://docs.wildfly.org/21/Admin_Guide.html#Command_Line_Interface">jboss-cli.sh</link>
program and by referring to the <link
xlink:href="https://www.keycloak.org/docs/latest/server_installation/index.html">Keycloak
Server Installation and Configuration Guide</link>.
'';
};
};
config =
let
# We only want to create a database if we're actually going to connect to it.
databaseActuallyCreateLocally = cfg.databaseCreateLocally && cfg.databaseHost == "localhost";
createLocalPostgreSQL = databaseActuallyCreateLocally && cfg.databaseType == "postgresql";
createLocalMySQL = databaseActuallyCreateLocally && cfg.databaseType == "mysql";
mySqlCaKeystore = pkgs.runCommandNoCC "mysql-ca-keystore" {} ''
${pkgs.jre}/bin/keytool -importcert -trustcacerts -alias MySQLCACert -file ${cfg.databaseCaCert} -keystore $out -storepass notsosecretpassword -noprompt
'';
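# The JBoss CLI configuration as a Nix attribute set: database- and
# TLS-specific settings and cfg.extraConfig are merged on top of the
# base settings below with lib.recursiveUpdate.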
keycloakConfig' = builtins.foldl' lib.recursiveUpdate {
"interface=public".inet-address = cfg.bindAddress;
"socket-binding-group=standard-sockets"."socket-binding=http".port = cfg.httpPort;
"subsystem=keycloak-server"."spi=hostname" = {
"provider=default" = {
enabled = true;
properties = {
inherit (cfg) frontendUrl forceBackendUrlToFrontendUrl;
};
};
};
"subsystem=datasources"."data-source=KeycloakDS" = {
max-pool-size = "20";
user-name = if databaseActuallyCreateLocally then "keycloak" else cfg.databaseUsername;
password = "@db-password@";
};
} [
(lib.optionalAttrs (cfg.databaseType == "postgresql") {
"subsystem=datasources" = {
"jdbc-driver=postgresql" = {
driver-module-name = "org.postgresql";
driver-name = "postgresql";
driver-xa-datasource-class-name = "org.postgresql.xa.PGXADataSource";
};
"data-source=KeycloakDS" = {
connection-url = "jdbc:postgresql://${cfg.databaseHost}:${builtins.toString cfg.databasePort}/keycloak";
driver-name = "postgresql";
"connection-properties=ssl".value = lib.boolToString cfg.databaseUseSSL;
} // (lib.optionalAttrs (cfg.databaseCaCert != null) {
"connection-properties=sslrootcert".value = cfg.databaseCaCert;
"connection-properties=sslmode".value = "verify-ca";
});
};
})
(lib.optionalAttrs (cfg.databaseType == "mysql") {
"subsystem=datasources" = {
"jdbc-driver=mysql" = {
driver-module-name = "com.mysql";
driver-name = "mysql";
driver-class-name = "com.mysql.jdbc.Driver";
};
"data-source=KeycloakDS" = {
connection-url = "jdbc:mysql://${cfg.databaseHost}:${builtins.toString cfg.databasePort}/keycloak";
driver-name = "mysql";
"connection-properties=useSSL".value = lib.boolToString cfg.databaseUseSSL;
"connection-properties=requireSSL".value = lib.boolToString cfg.databaseUseSSL;
"connection-properties=verifyServerCertificate".value = lib.boolToString cfg.databaseUseSSL;
"connection-properties=characterEncoding".value = "UTF-8";
valid-connection-checker-class-name = "org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker";
validate-on-match = true;
exception-sorter-class-name = "org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter";
} // (lib.optionalAttrs (cfg.databaseCaCert != null) {
"connection-properties=trustCertificateKeyStoreUrl".value = "file:${mySqlCaKeystore}";
"connection-properties=trustCertificateKeyStorePassword".value = "notsosecretpassword";
});
};
})
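# If a certificate bundle is provided, expose an HTTPS listener backed
# by a PKCS#12 keystore that the ExecStartPre script generates from
# the bundle at runtime.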
(lib.optionalAttrs (cfg.certificatePrivateKeyBundle != null) {
"socket-binding-group=standard-sockets"."socket-binding=https".port = cfg.httpsPort;
"core-service=management"."security-realm=UndertowRealm"."server-identity=ssl" = {
keystore-path = "/run/keycloak/ssl/certificate_private_key_bundle.p12";
keystore-password = "notsosecretpassword";
};
"subsystem=undertow"."server=default-server"."https-listener=https".security-realm = "UndertowRealm";
})
cfg.extraConfig
];
/* Produces a JBoss CLI script that creates paths and sets
attributes matching those described by `attrs`. When the
script is run, the existing settings are effectively overlaid
by those from `attrs`. Existing attributes can be unset by
setting them to `null`.
JBoss paths and attributes / maps are distinguished by their
name, where paths follow a `key=value` scheme.
Example:
mkJbossScript {
"subsystem=keycloak-server"."spi=hostname" = {
"provider=fixed" = null;
"provider=default" = {
enabled = true;
properties = {
inherit frontendUrl;
forceBackendUrlToFrontendUrl = false;
};
};
};
}
=> ''
if (outcome != success) of /:read-resource()
/:add()
end-if
if (outcome != success) of /subsystem=keycloak-server:read-resource()
/subsystem=keycloak-server:add()
end-if
if (outcome != success) of /subsystem=keycloak-server/spi=hostname:read-resource()
/subsystem=keycloak-server/spi=hostname:add()
end-if
if (outcome != success) of /subsystem=keycloak-server/spi=hostname/provider=default:read-resource()
/subsystem=keycloak-server/spi=hostname/provider=default:add(enabled = true, properties = { forceBackendUrlToFrontendUrl = false, frontendUrl = "https://keycloak.example.com/auth" })
end-if
if (result != true) of /subsystem=keycloak-server/spi=hostname/provider=default:read-attribute(name="enabled")
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=enabled, value=true)
end-if
if (result != false) of /subsystem=keycloak-server/spi=hostname/provider=default:read-attribute(name="properties.forceBackendUrlToFrontendUrl")
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.forceBackendUrlToFrontendUrl, value=false)
end-if
if (result != "https://keycloak.example.com/auth") of /subsystem=keycloak-server/spi=hostname/provider=default:read-attribute(name="properties.frontendUrl")
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl, value="https://keycloak.example.com/auth")
end-if
if (outcome != success) of /subsystem=keycloak-server/spi=hostname/provider=fixed:read-resource()
/subsystem=keycloak-server/spi=hostname/provider=fixed:remove()
end-if
''
*/
mkJbossScript = attrs:
let
/* From a JBoss path and an attrset, produces a JBoss CLI
snippet that writes the corresponding attributes starting
at `path`. Recurses down into subattrsets as necessary,
producing the variable name from its full path in the
attrset.
Example:
writeAttributes "/subsystem=keycloak-server/spi=hostname/provider=default" {
enabled = true;
properties = {
forceBackendUrlToFrontendUrl = false;
frontendUrl = "https://keycloak.example.com/auth";
};
}
=> ''
if (result != true) of /subsystem=keycloak-server/spi=hostname/provider=default:read-attribute(name="enabled")
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=enabled, value=true)
end-if
if (result != false) of /subsystem=keycloak-server/spi=hostname/provider=default:read-attribute(name="properties.forceBackendUrlToFrontendUrl")
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.forceBackendUrlToFrontendUrl, value=false)
end-if
if (result != "https://keycloak.example.com/auth") of /subsystem=keycloak-server/spi=hostname/provider=default:read-attribute(name="properties.frontendUrl")
/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.frontendUrl, value="https://keycloak.example.com/auth")
end-if
''
*/
writeAttributes = path: set:
let
# JBoss expressions like `${var}` need to be prefixed
# with `expression` to evaluate.
prefixExpression = string:
let
match = (builtins.match ''"\$\{.*}"'' string);
in
if match != null then
"expression " + string
else
string;
writeAttribute = attribute: value:
let
type = builtins.typeOf value;
in
if type == "set" then
let
names = builtins.attrNames value;
in
builtins.foldl' (text: name: text + (writeAttribute "${attribute}.${name}" value.${name})) "" names
else if value == null then ''
if (outcome == success) of ${path}:read-attribute(name="${attribute}")
${path}:undefine-attribute(name="${attribute}")
end-if
''
else if builtins.elem type [ "string" "path" "bool" ] then
let
value' = if type == "bool" then lib.boolToString value else ''"${value}"'';
in ''
if (result != ${prefixExpression value'}) of ${path}:read-attribute(name="${attribute}")
${path}:write-attribute(name=${attribute}, value=${value'})
end-if
''
else throw "Unsupported type '${type}' for path '${path}'!";
in
lib.concatStrings
(lib.mapAttrsToList
(attribute: value: (writeAttribute attribute value))
set);
/* Produces an argument list for the JBoss `add()` function,
which adds a JBoss path and takes as its arguments the
required subpaths and attributes.
Example:
makeArgList {
enabled = true;
properties = {
forceBackendUrlToFrontendUrl = false;
frontendUrl = "https://keycloak.example.com/auth";
};
}
=> ''
enabled = true, properties = { forceBackendUrlToFrontendUrl = false, frontendUrl = "https://keycloak.example.com/auth" }
''
*/
makeArgList = set:
let
makeArg = attribute: value:
let
type = builtins.typeOf value;
in
if type == "set" then
"${attribute} = { " + (makeArgList value) + " }"
else if builtins.elem type [ "string" "path" "bool" ] then
"${attribute} = ${if type == "bool" then lib.boolToString value else ''"${value}"''}"
else if value == null then
""
else
throw "Unsupported type '${type}' for attribute '${attribute}'!";
in
lib.concatStringsSep ", " (lib.mapAttrsToList makeArg set);
/* Recurses into the `attrs` attrset, beginning at the path
resolved from `state.path ++ node`; if `node` is `null`,
starts from `state.path`. Only subattrsets that are JBoss
paths, i.e. follow the `key=value` format, are recursed
into - the rest are considered JBoss attributes / maps.
*/
recurse = state: node:
let
path = state.path ++ (lib.optional (node != null) node);
isPath = name:
let
value = lib.getAttrFromPath (path ++ [ name ]) attrs;
in
if (builtins.match ".*([=]).*" name) == [ "=" ] then
if builtins.isAttrs value || value == null then
true
else
throw "Parsing path '${lib.concatStringsSep "." (path ++ [ name ])}' failed: JBoss attributes cannot contain '='!"
else
false;
jbossPath = "/" + (lib.concatStringsSep "/" path);
nodeValue = lib.getAttrFromPath path attrs;
children = if !builtins.isAttrs nodeValue then {} else nodeValue;
subPaths = builtins.filter isPath (builtins.attrNames children);
jbossAttrs = lib.filterAttrs (name: _: !(isPath name)) children;
in
state // {
text = state.text + (
if nodeValue != null then ''
if (outcome != success) of ${jbossPath}:read-resource()
${jbossPath}:add(${makeArgList jbossAttrs})
end-if
'' + (writeAttributes jbossPath jbossAttrs)
else ''
if (outcome == success) of ${jbossPath}:read-resource()
${jbossPath}:remove()
end-if
'') + (builtins.foldl' recurse { text = ""; inherit path; } subPaths).text;
};
in
(recurse { text = ""; path = []; } null).text;
jbossCliScript = pkgs.writeText "jboss-cli-script" (mkJbossScript keycloakConfig');
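# Build the final standalone.xml by booting a throwaway Keycloak/JBoss
# instance inside the build sandbox, applying the CLI script above and
# copying out the resulting configuration.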
keycloakConfig = pkgs.runCommandNoCC "keycloak-config" {} ''
export JBOSS_BASE_DIR="$(pwd -P)";
export JBOSS_MODULEPATH="${cfg.package}/modules";
export JBOSS_LOG_DIR="$JBOSS_BASE_DIR/log";
cp -r ${cfg.package}/standalone/configuration .
chmod -R u+rwX ./configuration
mkdir -p {deployments,ssl}
"${cfg.package}/bin/standalone.sh"&
attempt=1
max_attempts=30
while ! ${cfg.package}/bin/jboss-cli.sh --connect ':read-attribute(name=server-state)'; do
if [[ "$attempt" == "$max_attempts" ]]; then
echo "ERROR: Could not connect to Keycloak after $attempt attempts! Failing.." >&2
exit 1
fi
echo "Keycloak not fully started yet, retrying.. ($attempt/$max_attempts)"
sleep 1
(( attempt++ ))
done
${cfg.package}/bin/jboss-cli.sh --connect --file=${jbossCliScript} --echo-command
cp configuration/standalone.xml $out
'';
in
lib.mkIf cfg.enable {
assertions = [
{
assertion = (cfg.databaseUseSSL && cfg.databaseType == "postgresql") -> (cfg.databaseCaCert != null);
message = ''A CA certificate must be specified (in 'services.keycloak.databaseCaCert') when PostgreSQL is used with SSL'';
}
];
environment.systemPackages = [ cfg.package ];
systemd.services.keycloakPostgreSQLInit = lib.mkIf createLocalPostgreSQL {
after = [ "postgresql.service" ];
before = [ "keycloak.service" ];
bindsTo = [ "postgresql.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
User = "postgres";
Group = "postgres";
};
script = ''
set -eu
PSQL=${config.services.postgresql.package}/bin/psql
db_password="$(<'${cfg.databasePasswordFile}')"
$PSQL -tAc "SELECT 1 FROM pg_roles WHERE rolname='keycloak'" | grep -q 1 || $PSQL -tAc "CREATE ROLE keycloak WITH LOGIN PASSWORD '$db_password' CREATEDB"
$PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = 'keycloak'" | grep -q 1 || $PSQL -tAc 'CREATE DATABASE "keycloak" OWNER "keycloak"'
'';
};
systemd.services.keycloakMySQLInit = lib.mkIf createLocalMySQL {
after = [ "mysql.service" ];
before = [ "keycloak.service" ];
bindsTo = [ "mysql.service" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
User = config.services.mysql.user;
Group = config.services.mysql.group;
};
script = ''
set -eu
db_password="$(<'${cfg.databasePasswordFile}')"
( echo "CREATE USER IF NOT EXISTS 'keycloak'@'localhost' IDENTIFIED BY '$db_password';"
echo "CREATE DATABASE keycloak CHARACTER SET utf8 COLLATE utf8_unicode_ci;"
echo "GRANT ALL PRIVILEGES ON keycloak.* TO 'keycloak'@'localhost';"
) | ${config.services.mysql.package}/bin/mysql -N
'';
};
systemd.services.keycloak =
let
databaseServices =
if createLocalPostgreSQL then [
"keycloakPostgreSQLInit.service" "postgresql.service"
]
else if createLocalMySQL then [
"keycloakMySQLInit.service" "mysql.service"
]
else [ ];
in {
after = databaseServices;
bindsTo = databaseServices;
wantedBy = [ "multi-user.target" ];
environment = {
JBOSS_LOG_DIR = "/var/log/keycloak";
JBOSS_BASE_DIR = "/run/keycloak";
JBOSS_MODULEPATH = "${cfg.package}/modules";
};
serviceConfig = {
ExecStartPre = let
startPreFullPrivileges = ''
set -eu
install -T -m 0400 -o keycloak -g keycloak '${cfg.databasePasswordFile}' /run/keycloak/secrets/db_password
'' + lib.optionalString (cfg.certificatePrivateKeyBundle != null) ''
install -T -m 0400 -o keycloak -g keycloak '${cfg.certificatePrivateKeyBundle}' /run/keycloak/secrets/ssl_cert_pk_bundle
'';
startPre = ''
set -eu
install -m 0600 ${cfg.package}/standalone/configuration/*.properties /run/keycloak/configuration
install -T -m 0600 ${keycloakConfig} /run/keycloak/configuration/standalone.xml
db_password="$(</run/keycloak/secrets/db_password)"
${pkgs.replace}/bin/replace-literal -fe '@db-password@' "$db_password" /run/keycloak/configuration/standalone.xml
export JAVA_OPTS=-Djboss.server.config.user.dir=/run/keycloak/configuration
${cfg.package}/bin/add-user-keycloak.sh -u admin -p '${cfg.initialAdminPassword}'
'' + lib.optionalString (cfg.certificatePrivateKeyBundle != null) ''
pushd /run/keycloak/ssl/
cat /run/keycloak/secrets/ssl_cert_pk_bundle <(echo) /etc/ssl/certs/ca-certificates.crt > allcerts.pem
${pkgs.openssl}/bin/openssl pkcs12 -export -in /run/keycloak/secrets/ssl_cert_pk_bundle -chain \
-name "${cfg.frontendUrl}" -out certificate_private_key_bundle.p12 \
-CAfile allcerts.pem -passout pass:notsosecretpassword
popd
'';
in [
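# The leading "+" makes systemd run the first script with full
# privileges so it can read the secret files; the second script runs
# as the unprivileged service user.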
"+${pkgs.writeShellScript "keycloak-start-pre-full-privileges" startPreFullPrivileges}"
"${pkgs.writeShellScript "keycloak-start-pre" startPre}"
];
ExecStart = "${cfg.package}/bin/standalone.sh";
User = "keycloak";
Group = "keycloak";
DynamicUser = true;
RuntimeDirectory = map (p: "keycloak/" + p) [
"secrets"
"configuration"
"deployments"
"data"
"ssl"
"log"
"tmp"
];
RuntimeDirectoryMode = 0700;
LogsDirectory = "keycloak";
AmbientCapabilities = "CAP_NET_BIND_SERVICE";
};
};
services.postgresql.enable = lib.mkDefault createLocalPostgreSQL;
services.mysql.enable = lib.mkDefault createLocalMySQL;
services.mysql.package = lib.mkIf createLocalMySQL pkgs.mysql;
};
meta.doc = ./keycloak.xml;
}

View file

@ -0,0 +1,205 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="module-services-keycloak">
<title>Keycloak</title>
<para>
<link xlink:href="https://www.keycloak.org/">Keycloak</link> is an
open source identity and access management server with support for
<link xlink:href="https://openid.net/connect/">OpenID
Connect</link>, <link xlink:href="https://oauth.net/2/">OAUTH
2.0</link> and <link
xlink:href="https://en.wikipedia.org/wiki/SAML_2.0">SAML
2.0</link>.
</para>
<section xml:id="module-services-keycloak-admin">
<title>Administration</title>
<para>
An administrative user with the username
<literal>admin</literal> is automatically created in the
<literal>master</literal> realm. Its initial password can be
configured by setting <xref linkend="opt-services.keycloak.initialAdminPassword" />
and defaults to <literal>changeme</literal>. The password is
not stored safely and should be changed immediately in the
admin panel.
</para>
<para>
Refer to the <link
xlink:href="https://www.keycloak.org/docs/latest/server_admin/index.html#admin-console">Admin
Console section of the Keycloak Server Administration Guide</link> for
information on how to administer your
<productname>Keycloak</productname> instance.
</para>
</section>
<section xml:id="module-services-keycloak-database">
<title>Database access</title>
<para>
<productname>Keycloak</productname> can be used with either
<productname>PostgreSQL</productname> or
<productname>MySQL</productname>. Which one is used can be
configured in <xref
linkend="opt-services.keycloak.databaseType" />. The selected
database will automatically be enabled and a database and role
created unless <xref
linkend="opt-services.keycloak.databaseHost" /> is changed from
its default of <literal>localhost</literal> or <xref
linkend="opt-services.keycloak.databaseCreateLocally" /> is set
to <literal>false</literal>.
</para>
<para>
External database access can also be configured by setting
<xref linkend="opt-services.keycloak.databaseHost" />, <xref
linkend="opt-services.keycloak.databaseUsername" />, <xref
linkend="opt-services.keycloak.databaseUseSSL" /> and <xref
linkend="opt-services.keycloak.databaseCaCert" /> as
appropriate. Note that you need to manually create a database
called <literal>keycloak</literal> and allow the configured
database user full access to it.
</para>
<para>
<xref linkend="opt-services.keycloak.databasePasswordFile" />
must be set to the path to a file containing the password used
to log in to the database. If <xref linkend="opt-services.keycloak.databaseHost" />
and <xref linkend="opt-services.keycloak.databaseCreateLocally" />
are kept at their defaults, the database role
<literal>keycloak</literal> with that password is provisioned
on the local database instance.
</para>
<warning>
<para>
The path should be provided as a string, not a Nix path, since Nix
paths are copied into the world-readable Nix store.
</para>
</warning>
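<para>
As a rough sketch, connecting to an external
<productname>PostgreSQL</productname> server could look something
like the following; the hostname and key file paths are placeholders,
and the <literal>keycloak</literal> database and role must already
exist on that server:
<programlisting>
services.keycloak = {
<link linkend="opt-services.keycloak.databaseType">databaseType</link> = "postgresql";
<link linkend="opt-services.keycloak.databaseHost">databaseHost</link> = "db.example.com";
<link linkend="opt-services.keycloak.databaseUsername">databaseUsername</link> = "keycloak";
<link linkend="opt-services.keycloak.databaseUseSSL">databaseUseSSL</link> = true;
<link linkend="opt-services.keycloak.databaseCaCert">databaseCaCert</link> = "/run/keys/db_ca_cert";
<link linkend="opt-services.keycloak.databasePasswordFile">databasePasswordFile</link> = "/run/keys/db_password";
};
</programlisting>
</para>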
</section>
<section xml:id="module-services-keycloak-frontendurl">
<title>Frontend URL</title>
<para>
The frontend URL is used as base for all frontend requests and
must be configured through <xref linkend="opt-services.keycloak.frontendUrl" />.
It should normally include a trailing <literal>/auth</literal>
(the default web context).
</para>
<para>
<xref linkend="opt-services.keycloak.forceBackendUrlToFrontendUrl" />
determines whether Keycloak should force all requests to go
through the frontend URL. By default,
<productname>Keycloak</productname> allows backend requests to
instead use its local hostname or IP address and may also
advertise it to clients through its OpenID Connect Discovery
endpoint.
</para>
<para>
See the <link
xlink:href="https://www.keycloak.org/docs/latest/server_installation/#_hostname">Hostname
section of the Keycloak Server Installation and Configuration
Guide</link> for more information.
</para>
</section>
<section xml:id="module-services-keycloak-tls">
<title>Setting up TLS/SSL</title>
<para>
By default, <productname>Keycloak</productname> won't accept
unsecured HTTP connections originating from outside its local
network.
</para>
<para>
For HTTPS support, a TLS certificate and private key are
required. They should be <link
xlink:href="https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail">PEM
formatted</link> and concatenated into a single file. The path
to this file should be configured in
<xref linkend="opt-services.keycloak.certificatePrivateKeyBundle" />.
</para>
<warning>
<para>
The path should be provided as a string, not a Nix path,
since Nix paths are copied into the world-readable Nix store.
</para>
</warning>
</section>
<section xml:id="module-services-keycloak-extra-config">
<title>Additional configuration</title>
<para>
Additional Keycloak configuration options, for which no
explicit <productname>NixOS</productname> options are provided,
can be set in <xref linkend="opt-services.keycloak.extraConfig" />.
</para>
<para>
Options are expressed as a Nix attribute set which matches the
structure of the jboss-cli configuration. The configuration is
effectively overlaid on top of the default configuration
shipped with Keycloak. To remove existing nodes and undefine
attributes from the default configuration, set them to
<literal>null</literal>.
</para>
<para>
For example, the following script, which removes the hostname
provider <literal>default</literal>, adds the deprecated
hostname provider <literal>fixed</literal> and defines it as
the default:
<programlisting>
/subsystem=keycloak-server/spi=hostname/provider=default:remove()
/subsystem=keycloak-server/spi=hostname/provider=fixed:add(enabled = true, properties = { hostname = "keycloak.example.com" })
/subsystem=keycloak-server/spi=hostname:write-attribute(name=default-provider, value="fixed")
</programlisting>
would be expressed as
<programlisting>
services.keycloak.extraConfig = {
"subsystem=keycloak-server" = {
"spi=hostname" = {
"provider=default" = null;
"provider=fixed" = {
enabled = true;
properties.hostname = "keycloak.example.com";
};
default-provider = "fixed";
};
};
};
</programlisting>
</para>
<para>
You can discover available options by using the <link
xlink:href="http://docs.wildfly.org/21/Admin_Guide.html#Command_Line_Interface">jboss-cli.sh</link>
program and by referring to the <link
xlink:href="https://www.keycloak.org/docs/latest/server_installation/index.html">Keycloak
Server Installation and Configuration Guide</link>.
</para>
</section>
<section xml:id="module-services-keycloak-example-config">
<title>Example configuration</title>
<para>
A basic configuration with some custom settings could look like this:
<programlisting>
services.keycloak = {
<link linkend="opt-services.keycloak.enable">enable</link> = true;
<link linkend="opt-services.keycloak.initialAdminPassword">initialAdminPassword</link> = "e6Wcm0RrtegMEHl"; # change on first login
<link linkend="opt-services.keycloak.frontendUrl">frontendUrl</link> = "https://keycloak.example.com/auth";
<link linkend="opt-services.keycloak.forceBackendUrlToFrontendUrl">forceBackendUrlToFrontendUrl</link> = true;
<link linkend="opt-services.keycloak.certificatePrivateKeyBundle">certificatePrivateKeyBundle</link> = "/run/keys/ssl_cert";
<link linkend="opt-services.keycloak.databasePasswordFile">databasePasswordFile</link> = "/run/keys/db_password";
};
</programlisting>
</para>
</section>
</chapter>

View file

@ -227,7 +227,7 @@ in
"xhci_pci"
"usbhid"
"hid_generic" "hid_lenovo" "hid_apple" "hid_roccat"
"hid_logitech_hidpp" "hid_logitech_dj"
"hid_logitech_hidpp" "hid_logitech_dj" "hid_microsoft"
] ++ optionals (pkgs.stdenv.isi686 || pkgs.stdenv.isx86_64) [
# Misc. x86 keyboard stuff.

View file

@ -136,7 +136,7 @@ in
}
];
users.users.resolved.group = "systemd-resolve";
users.users.systemd-resolve.group = "systemd-resolve";
# add resolve to nss hosts database if enabled and nscd enabled
# system.nssModules is configured in nixos/modules/system/boot/systemd.nix

View file

@ -329,24 +329,24 @@ let self = {
"20.03".ap-east-1.hvm-ebs = "ami-0d18fdd309cdefa86";
"20.03".sa-east-1.hvm-ebs = "ami-09859378158ae971d";
# 20.09.1465.9a0b14b097d
"20.09".eu-west-1.hvm-ebs = "ami-0d90f16418e3c364c";
"20.09".eu-west-2.hvm-ebs = "ami-0635ec0780ea57cfe";
"20.09".eu-west-3.hvm-ebs = "ami-0714e94352f2eabb9";
"20.09".eu-central-1.hvm-ebs = "ami-0979d39762a4d2a02";
"20.09".eu-north-1.hvm-ebs = "ami-0b14e273185c66e9b";
"20.09".us-east-1.hvm-ebs = "ami-0f8b063ac3f2d9645";
"20.09".us-east-2.hvm-ebs = "ami-0959202a0393fdd0c";
"20.09".us-west-1.hvm-ebs = "ami-096d50833b785478b";
"20.09".us-west-2.hvm-ebs = "ami-0fc31031df0df6104";
"20.09".ca-central-1.hvm-ebs = "ami-0787786a38cde3905";
"20.09".ap-southeast-1.hvm-ebs = "ami-0b3f693d3a2a0b9ae";
"20.09".ap-southeast-2.hvm-ebs = "ami-02471872bc876b610";
"20.09".ap-northeast-1.hvm-ebs = "ami-06505fd2bf44a59a7";
"20.09".ap-northeast-2.hvm-ebs = "ami-0754b4c014eea1e8a";
"20.09".ap-south-1.hvm-ebs = "ami-05100e32242ae65a6";
"20.09".ap-east-1.hvm-ebs = "ami-045288859a39de009";
"20.09".sa-east-1.hvm-ebs = "ami-0a937748db48fb00d";
# 20.09.1632.a6a3a368dda
"20.09".eu-west-1.hvm-ebs = "ami-01a79d5ce435f4db3";
"20.09".eu-west-2.hvm-ebs = "ami-0cbe14f32904e6331";
"20.09".eu-west-3.hvm-ebs = "ami-07f493412d6213de6";
"20.09".eu-central-1.hvm-ebs = "ami-01d4a0c2248cbfe38";
"20.09".eu-north-1.hvm-ebs = "ami-0003f54dd99d68e0f";
"20.09".us-east-1.hvm-ebs = "ami-068a62d478710462d";
"20.09".us-east-2.hvm-ebs = "ami-01ac677ff61399caa";
"20.09".us-west-1.hvm-ebs = "ami-04befdb203b4b17f6";
"20.09".us-west-2.hvm-ebs = "ami-0fb7bd4a43261c6b2";
"20.09".ca-central-1.hvm-ebs = "ami-06d5ee429f153f856";
"20.09".ap-southeast-1.hvm-ebs = "ami-0db0304e23c535b2a";
"20.09".ap-southeast-2.hvm-ebs = "ami-045983c4db7e36447";
"20.09".ap-northeast-1.hvm-ebs = "ami-0beb18d632cf64e5a";
"20.09".ap-northeast-2.hvm-ebs = "ami-0dd0316af578862db";
"20.09".ap-south-1.hvm-ebs = "ami-008d15ced81c88aed";
"20.09".ap-east-1.hvm-ebs = "ami-071f49713f86ea965";
"20.09".sa-east-1.hvm-ebs = "ami-05ded1ae35209b5a8";
latest = self."20.09";
}; in self

View file

@ -71,7 +71,6 @@ in rec {
(onFullSupported "nixos.tests.fontconfig-default-fonts")
(onFullSupported "nixos.tests.gnome3")
(onFullSupported "nixos.tests.gnome3-xorg")
(onFullSupported "nixos.tests.hardened")
(onSystems ["x86_64-linux"] "nixos.tests.hibernate")
(onFullSupported "nixos.tests.i3wm")
(onSystems ["x86_64-linux"] "nixos.tests.installer.btrfsSimple")
@ -93,7 +92,6 @@ in rec {
(onFullSupported "nixos.tests.keymap.dvp")
(onFullSupported "nixos.tests.keymap.neo")
(onFullSupported "nixos.tests.keymap.qwertz")
(onFullSupported "nixos.tests.latestKernel.hardened")
(onFullSupported "nixos.tests.latestKernel.login")
(onFullSupported "nixos.tests.lightdm")
(onFullSupported "nixos.tests.login")

View file

@ -24,6 +24,7 @@ in
_3proxy = handleTest ./3proxy.nix {};
acme = handleTest ./acme.nix {};
agda = handleTest ./agda.nix {};
ammonite = handleTest ./ammonite.nix {};
atd = handleTest ./atd.nix {};
avahi = handleTest ./avahi.nix {};
avahi-with-resolved = handleTest ./avahi.nix { networkd = true; };
@ -175,6 +176,7 @@ in
kernel-latest = handleTest ./kernel-latest.nix {};
kernel-lts = handleTest ./kernel-lts.nix {};
kernel-testing = handleTest ./kernel-testing.nix {};
keycloak = discoverTests (import ./keycloak.nix);
keymap = handleTest ./keymap.nix {};
knot = handleTest ./knot.nix {};
krb5 = discoverTests (import ./krb5 {});
@ -223,6 +225,7 @@ in
mysql-backup = handleTest ./mysql/mysql-backup.nix {};
mysql-replication = handleTest ./mysql/mysql-replication.nix {};
nagios = handleTest ./nagios.nix {};
nar-serve = handleTest ./nar-serve.nix {};
nat.firewall = handleTest ./nat.nix { withFirewall = true; };
nat.firewall-conntrack = handleTest ./nat.nix { withFirewall = true; withConntrackHelpers = true; };
nat.standalone = handleTest ./nat.nix { withFirewall = false; };
@ -253,6 +256,7 @@ in
novacomd = handleTestOn ["x86_64-linux"] ./novacomd.nix {};
nsd = handleTest ./nsd.nix {};
nzbget = handleTest ./nzbget.nix {};
oh-my-zsh = handleTest ./oh-my-zsh.nix {};
openarena = handleTest ./openarena.nix {};
openldap = handleTest ./openldap.nix {};
opensmtpd = handleTest ./opensmtpd.nix {};
@ -310,6 +314,8 @@ in
rxe = handleTest ./rxe.nix {};
samba = handleTest ./samba.nix {};
sanoid = handleTest ./sanoid.nix {};
sbt = handleTest ./sbt.nix {};
scala = handleTest ./scala.nix {};
sddm = handleTest ./sddm.nix {};
service-runner = handleTest ./service-runner.nix {};
shadowsocks = handleTest ./shadowsocks {};

View file

@ -8,7 +8,7 @@ import ./make-test-python.nix ({ pkgs, ...} : {
amm =
{ pkgs, ... }:
{
environment.systemPackages = [ pkgs.ammonite ];
environment.systemPackages = [ (pkgs.ammonite.override { jre = pkgs.jre8; }) ];
};
};

View file

@ -22,6 +22,10 @@ import ../make-test-python.nix ({ lib, ... }:
hostKeys = [ ./ssh_host_ed25519_key ];
};
};
boot.initrd.extraUtilsCommands = ''
mkdir -p $out/secrets/etc/ssh
cat "${./ssh_host_ed25519_key}" > $out/secrets/etc/ssh/sh_host_ed25519_key
'';
boot.initrd.preLVMCommands = ''
while true; do
if [ -f fnord ]; then

View file

@ -0,0 +1,144 @@
# This tests Keycloak: it starts the service, creates a realm with an
# OIDC client and a user, and simulates the user logging in to the
# client using their Keycloak login.
let
frontendUrl = "http://keycloak/auth";
initialAdminPassword = "h4IhoJFnt2iQIR9";
keycloakTest = import ./make-test-python.nix (
{ pkgs, databaseType, ... }:
{
name = "keycloak";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ talyz ];
};
nodes = {
keycloak = { ... }: {
virtualisation.memorySize = 1024;
services.keycloak = {
enable = true;
inherit frontendUrl databaseType initialAdminPassword;
databasePasswordFile = pkgs.writeText "dbPassword" "wzf6vOCbPp6cqTH";
};
environment.systemPackages = with pkgs; [
xmlstarlet
libtidy
jq
];
};
};
testScript =
let
client = {
clientId = "test-client";
name = "test-client";
redirectUris = [ "urn:ietf:wg:oauth:2.0:oob" ];
};
user = {
firstName = "Chuck";
lastName = "Testa";
username = "chuck.testa";
email = "chuck.testa@example.com";
};
password = "password1234";
realm = {
enabled = true;
realm = "test-realm";
clients = [ client ];
users = [(
user // {
enabled = true;
credentials = [{
type = "password";
temporary = false;
value = password;
}];
}
)];
};
realmDataJson = pkgs.writeText "realm-data.json" (builtins.toJSON realm);
jqCheckUserinfo = pkgs.writeText "check-userinfo.jq" ''
if {
"firstName": .given_name,
"lastName": .family_name,
"username": .preferred_username,
"email": .email
} != ${builtins.toJSON user} then
error("Wrong user info!")
else
empty
end
'';
in ''
keycloak.start()
keycloak.wait_for_unit("keycloak.service")
keycloak.wait_until_succeeds("curl -sSf ${frontendUrl}")
### Realm Setup ###
# Get an admin interface access token
keycloak.succeed(
"curl -sSf -d 'client_id=admin-cli' -d 'username=admin' -d 'password=${initialAdminPassword}' -d 'grant_type=password' '${frontendUrl}/realms/master/protocol/openid-connect/token' | jq -r '\"Authorization: bearer \" + .access_token' >admin_auth_header"
)
# Publish the realm, including a test OIDC client and user
keycloak.succeed(
"curl -sSf -H @admin_auth_header -X POST -H 'Content-Type: application/json' -d @${realmDataJson} '${frontendUrl}/admin/realms/'"
)
# Generate and save the client secret. To do this we need
# Keycloak's internal id for the client.
keycloak.succeed(
"curl -sSf -H @admin_auth_header '${frontendUrl}/admin/realms/${realm.realm}/clients?clientId=${client.name}' | jq -r '.[].id' >client_id",
"curl -sSf -H @admin_auth_header -X POST '${frontendUrl}/admin/realms/${realm.realm}/clients/'$(<client_id)'/client-secret' | jq -r .value >client_secret",
)
### Authentication Testing ###
# Start the login process by sending an initial request to the
# OIDC authentication endpoint, saving the returned page. Tidy
# up the HTML (XmlStarlet is picky) and extract the login form
# post url.
keycloak.succeed(
"curl -sSf -c cookie '${frontendUrl}/realms/${realm.realm}/protocol/openid-connect/auth?client_id=${client.name}&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=openid+email&response_type=code&response_mode=query&nonce=qw4o89g3qqm' >login_form",
"tidy -q -m login_form || true",
"xml sel -T -t -m \"_:html/_:body/_:div/_:div/_:div/_:div/_:div/_:div/_:form[@id='kc-form-login']\" -v @action login_form >form_post_url",
)
# Post the login form and save the response. Once again tidy up
# the HTML, then extract the authorization code.
keycloak.succeed(
"curl -sSf -L -b cookie -d 'username=${user.username}' -d 'password=${password}' -d 'credentialId=' \"$(<form_post_url)\" >auth_code_html",
"tidy -q -m auth_code_html || true",
"xml sel -T -t -m \"_:html/_:body/_:div/_:div/_:div/_:div/_:div/_:input[@id='code']\" -v @value auth_code_html >auth_code",
)
# Exchange the authorization code for an access token.
keycloak.succeed(
"curl -sSf -d grant_type=authorization_code -d code=$(<auth_code) -d client_id=${client.name} -d client_secret=$(<client_secret) -d redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob '${frontendUrl}/realms/${realm.realm}/protocol/openid-connect/token' | jq -r '\"Authorization: bearer \" + .access_token' >auth_header"
)
# Use the access token on the OIDC userinfo endpoint and check
# that the returned user info matches what we initialized the
# realm with.
keycloak.succeed(
"curl -sSf -H @auth_header '${frontendUrl}/realms/${realm.realm}/protocol/openid-connect/userinfo' | jq -f ${jqCheckUserinfo}"
)
'';
}
);
in
{
postgres = keycloakTest { databaseType = "postgresql"; };
mysql = keycloakTest { databaseType = "mysql"; };
}

View file

@ -12,15 +12,28 @@ import ./make-test-python.nix ({ lib, pkgs, ... }:
enable = true;
configFile = "${pkgs.grafana-loki.src}/cmd/loki/loki-local-config.yaml";
};
systemd.services.promtail = {
description = "Promtail service for Loki test";
wantedBy = [ "multi-user.target" ];
serviceConfig = {
ExecStart = ''
${pkgs.grafana-loki}/bin/promtail --config.file ${pkgs.grafana-loki.src}/cmd/promtail/promtail-local-config.yaml
'';
DynamicUser = true;
services.promtail = {
enable = true;
configuration = {
server = {
http_listen_port = 9080;
grpc_listen_port = 0;
};
clients = [ { url = "http://localhost:3100/loki/api/v1/push"; } ];
scrape_configs = [
{
job_name = "system";
static_configs = [
{
targets = [ "localhost" ];
labels = {
job = "varlogs";
__path__ = "/var/log/*log";
};
}
];
}
];
};
};
};

View file

@ -0,0 +1,48 @@
import ./make-test-python.nix (
{ pkgs, lib, ... }:
{
name = "nar-serve";
meta.maintainers = [ lib.maintainers.rizary ];
nodes =
{
server = { pkgs, ... }: {
services.nginx = {
enable = true;
virtualHosts.default.root = "/var/www";
};
services.nar-serve = {
enable = true;
# Connect to the localhost nginx instead of the default
# https://cache.nixos.org
cacheURL = "http://localhost/";
};
environment.systemPackages = [
pkgs.hello
pkgs.curl
];
networking.firewall.allowedTCPPorts = [ 8383 ];
# virtualisation.diskSize = 2 * 1024;
};
};
testScript = ''
start_all()
# Create a fake cache with Nginx serving the static files
server.succeed(
"nix copy --to file:///var/www ${pkgs.hello}"
)
server.wait_for_unit("nginx.service")
server.wait_for_open_port(80)
# Check that nar-serve can return the content of the derivation
drvName = os.path.basename("${pkgs.hello}")
drvHash = drvName.split("-")[0]
server.wait_for_unit("nar-serve.service")
server.succeed(
"curl -o hello -f http://localhost:8383/nix/store/{}/bin/hello".format(drvHash)
)
'';
}
)

View file

@ -0,0 +1,18 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "oh-my-zsh";
machine = { pkgs, ... }:
{
programs.zsh = {
enable = true;
ohMyZsh.enable = true;
};
};
testScript = ''
start_all()
machine.succeed("touch ~/.zshrc")
machine.succeed("zsh -c 'source /etc/zshrc && echo $ZSH | grep oh-my-zsh-${pkgs.oh-my-zsh.version}'")
'';
})

View file

@ -609,6 +609,50 @@ let
'';
};
sql = {
exporterConfig = {
configuration.jobs.points = {
interval = "1m";
connections = [
"postgres://prometheus-sql-exporter@/data?host=/run/postgresql&sslmode=disable"
];
queries = {
points = {
labels = [ "name" ];
help = "Amount of points accumulated per person";
values = [ "amount" ];
query = "SELECT SUM(amount) as amount, name FROM points GROUP BY name";
};
};
};
enable = true;
user = "prometheus-sql-exporter";
};
metricProvider = {
services.postgresql = {
enable = true;
initialScript = builtins.toFile "init.sql" ''
CREATE DATABASE data;
\c data;
CREATE TABLE points (amount INT, name TEXT);
INSERT INTO points(amount, name) VALUES (1, 'jack');
INSERT INTO points(amount, name) VALUES (2, 'jill');
INSERT INTO points(amount, name) VALUES (3, 'jack');
CREATE USER "prometheus-sql-exporter";
GRANT ALL PRIVILEGES ON DATABASE data TO "prometheus-sql-exporter";
GRANT SELECT ON points TO "prometheus-sql-exporter";
'';
};
systemd.services.prometheus-sql-exporter.after = [ "postgresql.service" ];
};
exporterTest = ''
wait_for_unit("prometheus-sql-exporter.service")
wait_for_open_port(9237)
succeed("curl http://localhost:9237/metrics | grep -c 'sql_points{' | grep -q 2")
'';
};
surfboard = {
exporterConfig = {
enable = true;

View file

@ -0,0 +1,18 @@
import ./make-test-python.nix ({ pkgs, ...} : {
name = "sbt";
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ nequissimus ];
};
machine = { pkgs, ... }:
{
environment.systemPackages = [ pkgs.sbt ];
};
testScript =
''
machine.succeed(
"(sbt --offline --version 2>&1 || true) | grep 'getting org.scala-sbt sbt ${pkgs.sbt.version} (this may take some time)'"
)
'';
})

View file

@ -0,0 +1,33 @@
{ system ? builtins.currentSystem,
config ? {},
pkgs ? import ../.. { inherit system config; }
}:
with pkgs.lib;
let
common = name: package: (import ./make-test-python.nix ({
inherit name;
meta = with pkgs.stdenv.lib.maintainers; {
maintainers = [ nequissimus ];
};
nodes = {
scala = { ... }: {
environment.systemPackages = [ package ];
};
};
testScript = ''
start_all()
scala.succeed("scalac -version 2>&1 | grep '^Scala compiler version ${package.version}'")
'';
}) { inherit system; });
in with pkgs; {
scala_2_10 = common "scala_2_10" scala_2_10;
scala_2_11 = common "scala_2_11" scala_2_11;
scala_2_12 = common "scala_2_12" scala_2_12;
scala_2_13 = common "scala_2_13" scala_2_13;
}

View file

@ -12,17 +12,14 @@
, fftw
, fftwSinglePrec
, flac
, fluidsynth
, glibc
, glibmm
, graphviz
, gtkmm2
, hidapi
, itstool
, libarchive
, libjack2
, liblo
, libltc
, libogg
, libpulseaudio
, librdf_raptor
@ -42,11 +39,11 @@
, perl
, pkg-config
, python3
, qm-dsp
, readline
, rubberband
, serd
, sord
, soundtouch
, sratom
, suil
, taglib
@ -55,13 +52,13 @@
}:
stdenv.mkDerivation rec {
pname = "ardour";
version = "6.2";
version = "6.3";
# don't fetch releases from the GitHub mirror, they are broken
src = fetchgit {
url = "git://git.ardour.org/ardour/ardour.git";
rev = version;
sha256 = "17jxbqavricy01x4ymq6d302djsqfnv84m7dm4fd8cpka0dqjp1y";
sha256 = "050p1adgyirr790a3xp878pq3axqwzcmrk3drgm9z6v753h0xhcd";
};
patches = [
@ -91,15 +88,12 @@ stdenv.mkDerivation rec {
fftw
fftwSinglePrec
flac
fluidsynth
glibmm
gtkmm2
hidapi
itstool
libarchive
libjack2
liblo
libltc
libogg
libpulseaudio
librdf_raptor
@ -118,11 +112,11 @@ stdenv.mkDerivation rec {
pango
perl
python3
qm-dsp
readline
rubberband
serd
sord
soundtouch
sratom
suil
taglib
@ -136,11 +130,11 @@ stdenv.mkDerivation rec {
"--no-phone-home"
"--optimize"
"--ptformat"
"--qm-dsp-include=${qm-dsp}/include/qm-dsp"
"--run-tests"
"--test"
"--use-external-libs"
];
# removed because removing it fixes https://tracker.ardour.org/view.php?id=8161 and https://tracker.ardour.org/view.php?id=8437
# "--use-external-libs"
# Ardour's wscript requires git revision and date to be available.
# Since they are not, let's generate the file manually.

View file

@ -9,13 +9,13 @@
stdenv.mkDerivation rec {
pname = "ft2-clone";
version = "1.36";
version = "1.37";
src = fetchFromGitHub {
owner = "8bitbubsy";
repo = "ft2-clone";
rev = "v${version}";
sha256 = "0hsgzh7s2qgl8ah8hzmhfl74v5y8wc7f6z8ly9026h5r6pb09id0";
sha256 = "1lhpzd46mpr3bq13qhd0bq724db5fhc8jplfb684c2q7sc4v92nk";
};
nativeBuildInputs = [ cmake ];

View file

@ -13,13 +13,13 @@
stdenv.mkDerivation rec {
pname = "mamba";
version = "1.6";
version = "1.7";
src = fetchFromGitHub {
owner = "brummer10";
repo = "Mamba";
rev = "v${version}";
sha256 = "02w47347cbfqxybh908ww5ifd9jcns8v0msycq59y9q7x0a2h6fh";
sha256 = "1i78snpyxap2r4899967nyfr8hg20k45nsbshs9h6hdxbfwhikbc";
fetchSubmodules = true;
};

View file

@ -10,13 +10,13 @@ assert pcreSupport -> pcre != null;
stdenv.mkDerivation rec {
pname = "ncmpc";
version = "0.39";
version = "0.42";
src = fetchFromGitHub {
owner = "MusicPlayerDaemon";
repo = "ncmpc";
rev = "v${version}";
sha256 = "08xrcinfm1a7hjycf8la7gnsxbp3six70ks987dr7j42kd42irfq";
sha256 = "1c21sbdm6pp3kwhnzc7c6ksna7madvsmfa7j91as2g8485symqv2";
};
buildInputs = [ glib ncurses mpd_clientlib boost ]

View file

@ -3,13 +3,13 @@
stdenv.mkDerivation rec {
pname = "ncpamixer";
version = "1.3.3";
version = "1.3.3.1";
src = fetchFromGitHub {
owner = "fulhax";
repo = "ncpamixer";
rev = version;
sha256 = "19pxfvfhhrbfk1wz5awx60y51jccrgrcvlq7lb622sw2z0wzw4ac";
sha256 = "1v3bz0vpgh18257hdnz3yvbnl51779g1h5b265zgc21ks7m1jw5z";
};
buildInputs = [ ncurses libpulseaudio ];

View file

@ -2,11 +2,11 @@
mkDerivation rec {
pname = "padthv1";
version = "0.9.17";
version = "0.9.18";
src = fetchurl {
url = "mirror://sourceforge/padthv1/${pname}-${version}.tar.gz";
sha256 = "098fk8fwcgssnfr1gilqg8g17zvch62lrn3rqsswpzbr3an5adb3";
sha256 = "1karrprb3ijrbiwpr43rl3nxnzc33lnmwrd1832psgr3flnr9fp5";
};
buildInputs = [ libjack2 alsaLib libsndfile liblo lv2 qt5.qtbase qt5.qttools fftwFloat ];

View file

@ -1,5 +1,5 @@
{ stdenv, fetchFromGitHub, cmake, pkg-config, qttools
, alsaLib, ftgl, libGLU, libjack2, qtbase, rtmidi
, alsaLib, ftgl, libGLU, libjack2, qtbase, rtmidi, wrapQtAppsHook
}:
stdenv.mkDerivation rec {
@ -13,7 +13,7 @@ stdenv.mkDerivation rec {
sha256 = "03xcdnlpsij22ca3i6xj19yqzn3q2ch0d32r73v0c96nm04gvhjj";
};
nativeBuildInputs = [ cmake pkg-config qttools ];
nativeBuildInputs = [ cmake pkg-config qttools wrapQtAppsHook ];
buildInputs = [ alsaLib ftgl libGLU libjack2 qtbase rtmidi ];

View file

@ -12,13 +12,13 @@ let
;
in pythonPackages.buildPythonApplication rec {
pname = "picard";
version = "2.5";
version = "2.5.1";
src = fetchFromGitHub {
owner = "metabrainz";
repo = pname;
rev = "release-${version}";
sha256 = "02px6r086pyhpf6wia876c73bgr4xa4pyx2yykv6j74zyp5wig3z";
sha256 = "13q926iqwdba6ds5s3ir57c9bkg8gcv6dhqvhmg00fnzkq9xqk3d";
};
nativeBuildInputs = [ gettext qt5.wrapQtAppsHook qt5.qtbase ]

View file

@ -2,13 +2,13 @@
let
pname = "plexamp";
version = "3.2.0";
version = "3.3.1";
name = "${pname}-${version}";
src = fetchurl {
url = "https://plexamp.plex.tv/plexamp.plex.tv/desktop/Plexamp-${version}.AppImage";
sha256 = "R1BhobnwoU7oJ7bNes8kH2neXqHlMPbRCNjcHyzUPqo=";
name="${pname}-${version}.AppImage";
sha256 = "6/asP8VR+rJ52lKKds46gSw1or9suUEmyR75pjdWHIQ=";
};
appimageContents = appimageTools.extractType2 {

View file

@ -0,0 +1,34 @@
{ mkDerivation
, stdenv
, fetchFromGitHub
, qmake
, qtbase
, qtmultimedia
, libvorbis
}:
mkDerivation rec {
pname = "ptcollab";
version = "0.3.4.1";
src = fetchFromGitHub {
owner = "yuxshao";
repo = "ptcollab";
rev = "v${version}";
sha256 = "0rjyhxfad864w84n0bxyhc1jjxhzwwdx26r6psba2582g90cv024";
};
nativeBuildInputs = [ qmake ];
buildInputs = [ qtbase qtmultimedia libvorbis ];
meta = with stdenv.lib; {
description = "Experimental pxtone editor where you can collaborate with friends";
homepage = "https://yuxshao.github.io/ptcollab/";
license = licenses.mit;
maintainers = with maintainers; [ OPNA2608 ];
platforms = platforms.all;
# Requires Qt5.15
broken = stdenv.hostPlatform.isDarwin;
};
}

View file

@ -7,13 +7,13 @@ let
in stdenv.mkDerivation rec {
name = "${pname}-${version}";
version = "1.67";
version = "1.68";
src = fetchFromGitHub {
owner = "graysky2";
repo = pname;
rev = "v${version}";
sha256 = "1mf5r7x6aiqmx9mz7gpckrqvvzxnr5gs2q1k4m42rjk6ldkpdb46";
sha256 = "0wrzfanwy18wyawpg8rfvfgjh3lwngqwmfpi4ww3530rfmi84cf0";
};
postPatch = ''


@ -44,13 +44,13 @@ let
];
in stdenv.mkDerivation rec {
pname = "pulseeffects";
version = "4.8.1";
version = "4.8.2";
src = fetchFromGitHub {
owner = "wwmm";
repo = "pulseeffects";
rev = "v${version}";
sha256 = "17yfs3ja7vflywhxbn3n3r8n6hl829x257kzplg2vpppppg6ylj6";
sha256 = "19h47mrxjm6x83pqcxfsshf48kd1babfk0kwdy1c7fjri7kj0g0s";
};
nativeBuildInputs = [


@ -1,41 +0,0 @@
{ stdenv, fetchurl, pkgconfig, autoreconfHook, hexdump, openssl, db48
, boost, zlib, miniupnpc, qt4, protobuf, qrencode, libevent
, AppKit
, withGui ? !stdenv.isDarwin
}:
with stdenv.lib;
stdenv.mkDerivation rec {
name = "bit1" + (toString (optional (!withGui) "d")) + "-" + version;
version = "1.15.1";
src = fetchurl {
url = "https://github.com/btc1/bitcoin/archive/v${version}.tar.gz";
sha256 = "0v0g2wb4nsnhddxzb63vj2bc1mgyj05vqm5imicjfz8prvgc0si8";
};
nativeBuildInputs = [ pkgconfig autoreconfHook hexdump ];
buildInputs = [ openssl db48 boost zlib miniupnpc protobuf libevent ]
++ optionals withGui [ qt4 qrencode ]
++ optional stdenv.isDarwin AppKit;
configureFlags = [ "--with-boost-libdir=${boost.out}/lib" ]
++ optionals withGui [ "--with-gui=qt4" ];
meta = {
description = "Peer-to-peer electronic cash system (btc1 client)";
longDescription= ''
Bitcoin is a free open source peer-to-peer electronic cash system that is
completely decentralized, without the need for a central server or trusted
parties. Users hold the crypto keys to their own money and transact directly
with each other, with the help of a P2P network to check for double-spending.
btc1 is an implementation of a Bitcoin full node with segwit2x hard fork
support.
'';
homepage = "https://github.com/btc1/bitcoin";
license = licenses.mit;
maintainers = with maintainers; [ sorpaas ];
platforms = platforms.unix;
};
}


@ -7,18 +7,16 @@
}:
rustPlatform.buildRustPackage rec {
pname = "polkadot";
version = "0.8.25";
version = "0.8.26";
src = fetchFromGitHub {
owner = "paritytech";
repo = "polkadot";
rev = "v${version}";
sha256 = "1jdklmysr25rlwgx7pz0jw66j1w60h98kqghzjhr90zhynzh39lz";
sha256 = "1bvma6k3gsjqh8w76k4kf52sjg8wxn1b7a409kmnmmvmd9j6z5ia";
};
cargoSha256 = "08yfafrspkd1g1mhlfwngbknkxjkyymbcga8n2rdsk7mz0hm0vgy";
cargoPatches = [ ./substrate-wasm-builder-runner.patch ];
cargoSha256 = "0pacmmvvjgzmaxgg47qbfhqwl02jxj3i6vnmkjbj9npzqfmqf72d";
nativeBuildInputs = [ clang ];


@ -1,25 +0,0 @@
diff --git a/Cargo.lock b/Cargo.lock
index 5e7c4a14..bb67aada 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -8642,8 +8642,7 @@ dependencies = [
[[package]]
name = "substrate-wasm-builder-runner"
version = "1.0.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d2a965994514ab35d3893e9260245f2947fd1981cdd4fffd2c6e6d1a9ce02e6a"
+source = "git+https://github.com/paritytech/substrate#647ad15565d7c35ecf00b73b12cccad9858780b9"
[[package]]
name = "subtle"
diff --git a/Cargo.toml b/Cargo.toml
index 78047a1a..2d571f8e 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -112,3 +112,6 @@ polkadot = { path = "/usr/bin/polkadot" }
[package.metadata.rpm.files]
"../scripts/packaging/polkadot.service" = { path = "/usr/lib/systemd/system/polkadot.service", mode = "644" }
+
+[patch.crates-io]
+substrate-wasm-builder-runner = { git = "https://github.com/paritytech/substrate", branch = "master" }


@ -1,14 +1,17 @@
{ stdenv, fetchFromGitHub }:
{ stdenv, fetchFromGitHub, writeScript, nixosTests, common-updater-scripts
, coreutils, git, gnused, nix, nixfmt }:
stdenv.mkDerivation {
pname = "nanorc";
version = "2020-01-25";
src = fetchFromGitHub {
let
owner = "scopatz";
repo = "nanorc";
rev = "2020.1.25";
sha256 = "1y8jk3jsl4bd6r4hzmxzcf77hv8bwm0318yv7y2npkkd3a060z8d";
in stdenv.mkDerivation rec {
pname = "nanorc";
version = "2020-10-10";
src = fetchFromGitHub {
inherit owner repo;
rev = builtins.replaceStrings [ "-" ] [ "." ] version;
sha256 = "3B2nNFYkwYHCX6pQz/hMO/rnVqlCiw1BSNmGmJ6KCqE=";
};
dontBuild = true;
@ -19,6 +22,32 @@ stdenv.mkDerivation {
install *.nanorc $out/share/
'';
passthru.updateScript = writeScript "update.sh" ''
#!${stdenv.shell}
set -o errexit
PATH=${
stdenv.lib.makeBinPath [
common-updater-scripts
coreutils
git
gnused
nix
nixfmt
]
}
oldVersion="$(nix-instantiate --eval -E "with import ./. {}; lib.getVersion ${pname}" | tr -d '"' | sed 's|\\.|-|g')"
latestTag="$(git -c 'versionsort.suffix=-' ls-remote --exit-code --refs --sort='version:refname' --tags git@github.com:${owner}/${repo} '*.*.*' | tail --lines=1 | cut --delimiter='/' --fields=3)"
if [ "$oldVersion" != "$latestTag" ]; then
nixpkgs="$(git rev-parse --show-toplevel)"
default_nix="$nixpkgs/pkgs/applications/editors/nano/nanorc/default.nix"
newTag=$(echo $latestTag | sed 's|\.|-|g')
update-source-version ${pname} "$newTag" --version-key=version --print-changes
nixfmt "$default_nix"
else
echo "${pname} is already up-to-date"
fi
'';
meta = {
description = "Improved Nano Syntax Highlighting Files";
homepage = "https://github.com/scopatz/nanorc";


@ -23,9 +23,7 @@ stdenv.mkDerivation {
cp -r '${gnvim-unwrapped}/share/applications' "$out/share/applications"
# Sed needs a writable directory to do inplace modifications
chmod u+rw "$out/share/applications"
for file in $out/share/applications/*.desktop; do
sed -e "s|Exec=.\\+gnvim\\>|Exec=$out/bin/gnvim|" -i "$file"
done
sed -e "s|Exec=.\\+gnvim\\>|Exec=gnvim|" -i $out/share/applications/*.desktop
'';
preferLocalBuild = true;


@ -35,7 +35,7 @@ let
# for forward compatibility, when adding new environments, haskell etc.
, ...
}:
}@args:
let
rubyEnv = bundlerEnv {
name = "neovim-ruby-env";
@ -99,7 +99,7 @@ let
manifestRc = vimUtils.vimrcContent (configure // { customRC = ""; });
neovimRcContent = vimUtils.vimrcContent configure;
in
{
args // {
wrapperArgs = makeWrapperArgs;
inherit neovimRcContent;
inherit manifestRc;
@ -142,8 +142,7 @@ let
extraPythonPackages = compatFun extraPythonPackages;
inherit withPython3;
extraPython3Packages = compatFun extraPython3Packages;
inherit withNodeJs withRuby;
inherit withNodeJs withRuby viAlias vimAlias;
inherit configure;
};
in


@ -43,7 +43,6 @@ let
postBuild = lib.optionalString stdenv.isLinux ''
rm $out/share/applications/nvim.desktop
substitute ${neovim}/share/applications/nvim.desktop $out/share/applications/nvim.desktop \
--replace 'TryExec=nvim' "TryExec=$out/bin/nvim" \
--replace 'Name=Neovim' 'Name=WrappedNeovim'
''
+ optionalString withPython2 ''


@ -4,13 +4,13 @@
stdenv.mkDerivation rec {
pname = "quilter";
version = "2.5.0";
version = "2.5.1";
src = fetchFromGitHub {
owner = "lainsce";
repo = pname;
rev = version;
sha256 = "0622mh46z3fi6zvipmgj8k4d4gj1c2781l10frk7wqq1sysjrxps";
sha256 = "0ya1iwzfzvrci083zyrjj6ac4ys25j90slpk8yydw9n99kb750rk";
};
nativeBuildInputs = [


@ -0,0 +1,53 @@
{ lib, stdenv, fetchhg, fetchFromGitHub, fetchurl, gtk2, glib, pkgconfig, unzip, ncurses, zip }:
stdenv.mkDerivation rec {
version = "11.0_beta";
pname = "textadept11";
nativeBuildInputs = [ pkgconfig ];
buildInputs = [
gtk2 ncurses glib unzip zip
];
src = fetchFromGitHub {
name = "textadept11";
owner = "orbitalquark";
repo = "textadept";
rev = "8da5f6b4a13f14b9dd3cb9dc23ad4f7bf41e91c1";
sha256 = "0v11v3x8g6v696m3l1bm52zy2g9xzz7hlmn912sn30nhcag3raxs";
};
preConfigure =
lib.concatStringsSep "\n" (lib.mapAttrsToList (name: params:
"ln -s ${fetchurl params} $PWD/src/${name}"
) (import ./deps.nix)) + ''
cd src
make deps
'';
postBuild = ''
make curses
'';
preInstall = ''
mkdir -p $out/share/applications
mkdir -p $out/share/pixmaps
'';
postInstall = ''
make curses install PREFIX=$out MAKECMDGOALS=curses
'';
makeFlags = [
"PREFIX=$(out) WGET=true PIXMAPS_DIR=$(out)/share/pixmaps"
];
meta = with stdenv.lib; {
description = "An extensible text editor based on Scintilla with Lua scripting. Version 11_beta";
homepage = "http://foicica.com/textadept";
license = licenses.mit;
maintainers = with maintainers; [ raskin mirrexagon ];
platforms = platforms.linux;
};
}


@ -0,0 +1,50 @@
{
"scintilla445.tgz" = {
url = "https://www.scintilla.org/scintilla445.tgz";
sha256 = "1v1kyxj7rv5rxadbg8gl8wh1jafpy7zj0wr6dcyxq9209dl6h8ag";
};
"9e2ffa159299899c9345aea15c17ba1941953871.zip" = {
url = "https://github.com/orbitalquark/scinterm/archive/9e2ffa159299899c9345aea15c17ba1941953871.zip";
sha256 = "12h7prgp689w45p4scxd8vvsyw8fkv27g6gvgis55xr44daa6122";
};
"scintillua_4.4.5-1.zip" = {
url = "https://github.com/orbitalquark/scintillua/archive/scintillua_4.4.5-1.zip";
sha256 = "095wpbid2kvr5xgkhd5bd4sd7ljgk6gd9palrjkmdcwfgsf1lp04";
};
"lua-5.3.5.tar.gz" = {
url = "http://www.lua.org/ftp/lua-5.3.5.tar.gz";
sha256 = "1b2qn2rv96nmbm6zab4l877bd4zq7wpwm8drwjiy2ih4jqzysbhc";
};
"lpeg-1.0.2.tar.gz" = {
url = "http://www.inf.puc-rio.br/~roberto/lpeg/lpeg-1.0.2.tar.gz";
sha256 = "1zjzl7acvcdavmcg5l7wi12jd4rh95q9pl5aiww7hv0v0mv6bmj8";
};
"v1_7_0_2.zip" = {
url = "https://github.com/keplerproject/luafilesystem/archive/v1_7_0_2.zip";
sha256 = "0y44ymc7higz5dd2w3c6ib7mwmpr6yvszcl7lm12nf8x3y4snx4i";
};
"64587546482a1a6324706d75c80b77d2f87118a4.zip" = {
url = "https://github.com/orbitalquark/gtdialog/archive/64587546482a1a6324706d75c80b77d2f87118a4.zip";
sha256 = "10mglbnn8r1cakqn9h285pwfnh7kfa98v7j8qh83c24n66blyfh9";
};
"cdk-5.0-20150928.tgz" = {
url = "http://invisible-mirror.net/archives/cdk/cdk-5.0-20150928.tgz";
sha256 = "0j74l874y33i26y5kjg3pf1vswyjif8k93pqhi0iqykpbxfsg382";
};
"libtermkey-0.20.tar.gz" = {
url = "http://www.leonerd.org.uk/code/libtermkey/libtermkey-0.20.tar.gz";
sha256 = "1xfj6lchhfljmbcl6dz8dpakppyy13nbl4ykxiv5x4dr9b4qf3bc";
};
"pdcurs39.zip" = {
url = "https://github.com/wmcbrine/PDCurses/archive/3.9.zip";
sha256 = "0ydsa15d6fgk15zcavbxsi4vj3knlr2495dc5v4f5xzvv2qwlb2w";
};
"bombay.zip" = {
url = "http://foicica.com/hg/bombay/archive/b25520cc76bb.zip";
sha256 = "07spq7jmkfyq20gv67yffara3ln3ns2xi0k02m2mxdms3xm1q36h";
};
"cloc-1.60.pl" = {
url = "http://prdownloads.sourceforge.net/cloc/cloc-1.60.pl";
sha256 = "0p504bi19va3dh274v7lb7giqrydwa5yyry60f7jpz84y6z71a2a";
};
}


@ -94,6 +94,19 @@ stdenv.mkDerivation {
+ ''
unset LD
''
# When building with nix-daemon, we need to pass -derivedDataPath or else it tries to use
# a folder rooted in /var/empty and fails. Unfortunately we can't just pass -derivedDataPath
# by itself as this flag requires the use of -scheme or -xctestrun (not sure why), but MacVim
# by default just runs `xcodebuild -project src/MacVim/MacVim.xcodeproj`, relying on the default
# behavior to build the first target in the project. Experimentally, there seems to be a scheme
# called MacVim, so we'll explicitly select that. We also need to specify the configuration too
# as the scheme seems to have the wrong default.
+ ''
configureFlagsArray+=(
XCODEFLAGS="-scheme MacVim -derivedDataPath $NIX_BUILD_TOP/derivedData"
--with-xcodecfg="Release"
)
''
;
# Because we're building with system clang, this means we're building against Xcode's SDK and


@ -11,8 +11,8 @@ let
archive_fmt = if system == "x86_64-darwin" then "zip" else "tar.gz";
sha256 = {
x86_64-linux = "0mpb4641icr3z89y2rlh5anli40p1f48sl5xagr7h3nb5c84k10x";
x86_64-darwin = "1azmc79zf72007qc1xndp9wdkd078mvqgv35hf231q7kdi6wzxcp";
x86_64-linux = "18fx2nsgn09l2gzgr1abi0cp4g8z2v9177sdl2rqr0yvmwk5i3p0";
x86_64-darwin = "14qdfz8q1dz0skkcgpamksgdvgsid2mcm9h09cvkh4z3v458100r";
}.${system};
in
callPackage ./generic.nix rec {
@ -21,7 +21,7 @@ in
# Please backport all compatible updates to the stable release.
# This is important for the extension ecosystem.
version = "1.50.1";
version = "1.51.0";
pname = "vscode";
executableName = "code" + lib.optionalString isInsiders "-insiders";


@ -11,8 +11,8 @@ let
archive_fmt = if system == "x86_64-darwin" then "zip" else "tar.gz";
sha256 = {
x86_64-linux = "1sarih1yah69ympp12bmgyb0y9ybrxasppb47l58w05iz1wpn6v0";
x86_64-darwin = "1pj041kccj2i77v223i86xxqj9bg88k0sfbshm7qiynwyj9p05ji";
x86_64-linux = "0qims8qypx6aackw1b47pb7hkf0lffh94c69bm5rld2swzczcfnj";
x86_64-darwin = "1i96qhynjl1ihycq25xjakqlyvszindg5g8kgyhd6ab0q0zhmxqy";
}.${system};
sourceRoot = {
@ -27,7 +27,7 @@ in
# Please backport all compatible updates to the stable release.
# This is important for the extension ecosystem.
version = "1.50.1";
version = "1.51.0";
pname = "vscodium";
executableName = "codium";


@ -85,22 +85,6 @@ stdenv.lib.makeScope pkgs.newScope (self: with self; {
};
};
focusblur = pluginDerivation rec {
/* menu:
Blur/Focus Blur
*/
name = "focusblur-3.2.6";
buildInputs = with pkgs; [ fftwSinglePrec ];
patches = [ ./patches/focusblur-glib.patch ];
postInstall = "fail";
installPhase = "installPlugins src/focusblur";
src = fetchurl {
url = "http://registry.gimp.org/files/${name}.tar.bz2";
sha256 = "1gqf3hchz7n7v5kpqkhqh8kwnxbsvlb5cr2w2n7ngrvl56f5xs1h";
};
meta.broken = true;
};
resynthesizer = pluginDerivation rec {
/* menu:
Edit/Fill with pattern seamless...


@ -10,11 +10,11 @@
mkDerivation rec {
pname = "krita";
version = "4.4.0";
version = "4.4.1";
src = fetchurl {
url = "https://download.kde.org/stable/${pname}/${version}/${pname}-${version}.tar.xz";
sha256 = "0ydmxql8iym62q0nqwn9mnb94jz1nh84i6bni0mgzwjk8p4zfzw3";
sha256 = "1bmmfvmawnlihbqkksdrwxfkaip4nfsi97w83fmvkyxl4jk715vr";
};
# *sometimes* fails with can't find ui_manager.h, also see https://github.com/NixOS/nixpkgs/issues/35359


@ -0,0 +1,31 @@
{ mkDerivation, stdenv, graphicsmagick, fetchFromGitHub, qmake, qtbase, qttools
}:
mkDerivation rec {
pname = "photoflare";
version = "1.6.5";
src = fetchFromGitHub {
owner = "PhotoFlare";
repo = "photoflare";
rev = "v${version}";
sha256 = "0a394324h7ds567z3i3pw6kkii78n4qwdn129kgkkm996yh03q89";
};
nativeBuildInputs = [ qmake qttools ];
buildInputs = [ qtbase graphicsmagick ];
qmakeFlags = [ "PREFIX=${placeholder "out"}" ];
NIX_CFLAGS_COMPILE = "-I${graphicsmagick}/include/GraphicsMagick";
enableParallelBuilding = true;
meta = with stdenv.lib; {
description = "A cross-platform image editor with a powerful features and a very friendly graphical user interface";
homepage = "https://photoflare.io";
maintainers = [ maintainers.omgbebebe ];
license = licenses.gpl3;
platforms = platforms.linux;
};
}


@ -1,4 +1,4 @@
{ stdenv, fetchzip, makeWrapper, unzip, jre }:
{ stdenv, fetchzip, makeWrapper, unzip, jre, wrapGAppsHook }:
stdenv.mkDerivation rec {
pname = "yEd";
@ -9,16 +9,25 @@ stdenv.mkDerivation rec {
sha256 = "0sd73s700f3gqq5zq1psrqjg6ff2gv49f8vd37v6bv65vdxqxryq";
};
nativeBuildInputs = [ makeWrapper unzip ];
nativeBuildInputs = [ makeWrapper unzip wrapGAppsHook ];
# For wrapGAppsHook setup hook
buildInputs = [ jre.gtk3 ];
installPhase = ''
dontConfigure = true;
dontBuild = true;
dontInstall = true;
preFixup = ''
mkdir -p $out/yed
cp -r * $out/yed
mkdir -p $out/bin
makeWrapperArgs+=("''${gappsWrapperArgs[@]}")
makeWrapper ${jre}/bin/java $out/bin/yed \
''${makeWrapperArgs[@]} \
--add-flags "-jar $out/yed/yed.jar --"
'';
dontWrapGApps = true;
meta = with stdenv.lib; {
license = licenses.unfree;


@ -5,16 +5,16 @@
buildGoModule rec {
pname = "archiver";
version = "3.4.0";
version = "3.5.0";
src = fetchFromGitHub {
owner = "mholt";
repo = pname;
rev = "v${version}";
sha256 = "16jawybywqfkp68035bnf206a2w4khjw239saa429a21lxrfyk4a";
sha256 = "0fdkqfs87svpijccz8m11gvby8pvmznq6fs9k94vbzak0kxhw1wg";
};
vendorSha256 = "0m89ibj3dm58j49d99dhkn0ryivnianxz7lkpkvhs0cdbzzc02az";
vendorSha256 = "0avnskay23mpl3qkyf1h75rr7szpsxis2bj5pplhwf8q8q0212xf";
buildFlagsArray = [ "-ldflags=-s -w -X main.version=${version} -X main.commit=${src.rev} -X main.date=unknown" ];


@ -2,16 +2,16 @@
buildGoModule rec {
pname = "charm";
version = "0.8.3";
version = "0.8.4";
src = fetchFromGitHub {
owner = "charmbracelet";
repo = "charm";
rev = "v${version}";
sha256 = "1nbix7fi6g9jadak5zyx7fdz7d6367aly6fnrs0v98zsl1kxyvx3";
sha256 = "0wsh83kchqakvx7kgs2s31rzsvnfr47jk6pbmqzjv1kqmnlhc3rh";
};
vendorSha256 = "0lhml6m0j9ksn09j7z4d9pix5aszhndpyqajycwj3apvi3ic90il";
vendorSha256 = "1lg4bbdzgnw50v6m6p7clibwm8m82kdr1jizgbmhfmzy15d5sfll";
doCheck = false;


@ -0,0 +1,30 @@
{ stdenv
, buildGoModule
, fetchFromGitHub
}:
buildGoModule rec {
pname = "dasel";
version = "1.2.0";
src = fetchFromGitHub {
owner = "TomWright";
repo = pname;
rev = "v${version}";
sha256 = "sha256-Un9tqODwiWsaw66t2m8NyaDF0+hq/e0tmRFi3/T4LMI=";
};
vendorSha256 = "sha256:1552k85z4s6gv7sss7dccv3h8x22j2sr12icp6s7s0a3i4iwyksw";
meta = with stdenv.lib; {
description = "Query and update data structures from the command line";
longDescription = ''
Dasel (short for data-selector) allows you to query and modify data structures using selector strings.
Comparable to jq / yq, but supports JSON, YAML, TOML and XML with zero runtime dependencies.
'';
homepage = "https://github.com/TomWright/dasel";
license = licenses.mit;
platforms = platforms.unix;
maintainers = with maintainers; [ _0x4A6F ];
};
}


@ -7,7 +7,7 @@
stdenv.mkDerivation rec {
pname = "dbeaver-ce";
version = "7.2.3";
version = "7.2.4";
desktopItem = makeDesktopItem {
name = "dbeaver";
@ -30,7 +30,7 @@ stdenv.mkDerivation rec {
src = fetchurl {
url = "https://dbeaver.io/files/${version}/dbeaver-ce-${version}-linux.gtk.x86_64.tar.gz";
sha256 = "sha256-XYAe+e9zK/fvxBJ2Caz9/95++JzIQykXj8953IocDZU=";
sha256 = "sha256-RsXLznTz/U23e77xzyINi8HVuGqR4TrPaf+w++zPOH4=";
};
installPhase = ''


@ -0,0 +1,26 @@
{ mkDerivation, fetchgit, lib, cmake, extra-cmake-modules, kitemmodels
, libiberty, libelf, libdwarf, libopcodes }:
mkDerivation rec {
pname = "elf-dissector";
version = "unstable-2020-11-14";
src = fetchgit {
url = "https://invent.kde.org/sdk/elf-dissector.git";
rev = "d1700e76e3f60aff0a2a9fb63bc001251d2be522";
sha256 = "1h1xr3ag1sbf005drcx8g8dc5mk7fb2ybs73swrld7clcawhxnk8";
};
nativeBuildInputs = [ cmake extra-cmake-modules ];
buildInputs = [ kitemmodels libiberty libelf libdwarf libopcodes ];
enableParallelBuilding = true;
meta = with lib; {
homepage = "https://invent.kde.org/sdk/elf-dissector";
description = "Tools for inspecting, analyzing and optimizing ELF files";
license = licenses.gpl2;
maintainers = with maintainers; [ ehmry ];
};
}


@ -2,13 +2,13 @@
mkDerivation rec {
pname = "gpxsee";
version = "7.33";
version = "7.35";
src = fetchFromGitHub {
owner = "tumic0";
repo = "GPXSee";
rev = version;
sha256 = "1k4zl7knlpwxrpqk1axkmy8x12915z15h3q2sjnx3jcnx6qw73ja";
sha256 = "1schmymcsd8s0r26qwyx56z107ql8pgrk1pnqy19mc7fyirdwmp5";
};
patches = (substituteAll {
@ -29,15 +29,14 @@ mkDerivation rec {
wrapQtApp $out/Applications/GPXSee.app/Contents/MacOS/GPXSee
'';
enableParallelBuilding = true;
meta = with stdenv.lib; {
homepage = "https://www.gpxsee.org/";
description = "GPS log file viewer and analyzer";
longDescription = ''
GPXSee is a Qt-based GPS log file viewer and analyzer that supports
all common GPS log file formats.
'';
homepage = "https://www.gpxsee.org/";
changelog = "https://build.opensuse.org/package/view_file/home:tumic:GPXSee/gpxsee/gpxsee.changes";
license = licenses.gpl3;
maintainers = with maintainers; [ womfoo sikmir ];
platforms = with platforms; linux ++ darwin;


@ -1,20 +1,35 @@
{ stdenv, python, fetchpatch }:
{ stdenv, fetchFromGitHub, python3, fetchpatch }:
with python.pkgs;
let
py = python3.override {
packageOverrides = self: super: {
self = py;
# not compatible with prompt_toolkit >=2.0
prompt_toolkit = super.prompt_toolkit.overridePythonAttrs (oldAttrs: rec {
name = "${oldAttrs.pname}-${version}";
version = "1.0.18";
src = oldAttrs.src.override {
inherit version;
sha256 = "09h1153wgr5x2ny7ds0w2m81n3bb9j8hjb8sjfnrg506r01clkyx";
};
});
};
};
in
with py.pkgs;
buildPythonApplication rec {
pname = "haxor-news";
version = "0.4.3";
version = "unstable-2020-10-20";
src = fetchPypi {
inherit pname version;
sha256 = "5b9af8338a0f8b95a8133b66ef106553823813ac171c0aefa3f3f2dbeb4d7f88";
};
# allow newer click version
patches = fetchpatch {
url = "${meta.homepage}/commit/5b0d3ef1775756ca15b6d83fba1fb751846b5427.patch";
sha256 = "1551knh2f7yarqzcpip16ijmbx8kzdna8cihxlxx49ww55f5sg67";
# upstream hasn't made a stable release in 3+ years, but is actively developed
src = fetchFromGitHub {
owner = "donnemartin";
repo = pname;
rev = "811a5804c09406465b2b02eab638c08bf5c4fa7f";
sha256 = "1g3dfsyk4727d9jh9w6j5r51ag07851cls7v7a7hmdvdixpvbzp6";
};
propagatedBuildInputs = [
@ -26,6 +41,7 @@ buildPythonApplication rec {
six
];
# will fail without pre-seeded config files
doCheck = false;
checkInputs = [ mock ];


@ -2,16 +2,16 @@
buildGoModule rec {
pname = "hugo";
version = "0.77.0";
version = "0.78.0";
src = fetchFromGitHub {
owner = "gohugoio";
repo = pname;
rev = "v${version}";
sha256 = "1vjqddcbk8afqkjzrj9wwvz697bxhv9vz0rk2vj2ji6lz1slhc56";
sha256 = "0la1c6yj9dq9rqxk6m8n8l4cabgzlk0r3was8mvgd80g3x3zn55v";
};
vendorSha256 = "03xv188jw5scqd6a8xd2s13vkn721d37bgs6a6rik7pgqmjh46c6";
vendorSha256 = "09fvvs85rvvh0z4px2bj5908xf1mrcslkzsz09p0gy5i3zaqfnp9";
doCheck = false;


@ -7,14 +7,14 @@
let
pname = "krename";
version = "5.0.0";
version = "5.0.1";
in mkDerivation rec {
name = "${pname}-${version}";
src = fetchurl {
url = "mirror://kde/stable/${pname}/${version}/src/${name}.tar.xz";
sha256 = "136j1dkqrhv458rjh5v3vzjhvq6dhz7k79zk6mmx8zvqacc7cq8a";
sha256 = "0zbadxjp13jqxgb58wslhm0wy2lhpdq1bgbvyhyn21mssfppib6a";
};
buildInputs = [ taglib exiv2 podofo ];


@ -16,6 +16,8 @@ buildPythonApplication rec {
pname = "kupfer";
version = "319";
format = "other";
src = fetchurl {
url = "https://github.com/kupferlauncher/kupfer/releases/download/v${version}/kupfer-v${version}.tar.xz";
sha256 = "0c9xjx13r8ckfr4az116bhxsd3pk78v04c3lz6lqhraak0rp4d92";
@ -33,13 +35,9 @@ buildPythonApplication rec {
# see https://github.com/NixOS/nixpkgs/issues/56943 for details
strictDeps = false;
postInstall = let
pythonPath = (stdenv.lib.concatMapStringsSep ":"
(m: "${m}/lib/${python.libPrefix}/site-packages")
propagatedBuildInputs);
in ''
postInstall = ''
gappsWrapperArgs+=(
"--prefix" "PYTHONPATH" : "${pythonPath}"
"--prefix" "PYTHONPATH" : "${makePythonPath propagatedBuildInputs}"
"--set" "PYTHONNOUSERSITE" "1"
)
'';


@ -1,17 +1,40 @@
{ stdenv, autoconf, automake, c-ares, cryptopp, curl, doxygen, fetchFromGitHub
, fetchpatch, ffmpeg_3, libmediainfo, libraw, libsodium, libtool, libuv, libzen
, lsb-release, mkDerivation, pkgconfig, qtbase, qttools, sqlite, swig, unzip
, wget }:
{ stdenv
, autoconf
, automake
, c-ares
, cryptopp
, curl
, doxygen
, fetchFromGitHub
, fetchpatch
, ffmpeg_3
, libmediainfo
, libraw
, libsodium
, libtool
, libuv
, libzen
, lsb-release
, mkDerivation
, pkgconfig
, qtbase
, qttools
, qtx11extras
, sqlite
, swig
, unzip
, wget
}:
mkDerivation rec {
pname = "megasync";
version = "4.3.1.0";
version = "4.3.5.0";
src = fetchFromGitHub {
owner = "meganz";
repo = "MEGAsync";
rev = "v${version}_Linux";
sha256 = "0b68wpif8a0wf1vfn1nr19dmz8f31dprb27jpldxrxhyfslc43yj";
sha256 = "0rr1jjy0n5bj1lh6xi3nbbcikvq69j3r9qnajp4mhywr5izpccvs";
fetchSubmodules = true;
};
@ -29,6 +52,7 @@ mkDerivation rec {
libuv
libzen
qtbase
qtx11extras
sqlite
unzip
wget


@ -9,7 +9,7 @@
, gtkmm3
, pcre
, swig
, antlr4_7
, antlr4_8
, sudo
, mysql
, libxml2
@ -80,7 +80,7 @@ in stdenv.mkDerivation rec {
# have it look for 4.8 instead of 4.7.1
preConfigure = ''
substituteInPlace CMakeLists.txt \
--replace "antlr-4.7.1-complete.jar" "antlr-4.7.2-complete.jar"
--replace "antlr-4.7.1-complete.jar" "antlr-4.8-complete.jar"
'';
nativeBuildInputs = [
@ -96,7 +96,7 @@ in stdenv.mkDerivation rec {
gtk3
gtkmm3
libX11
antlr4_7.runtime.cpp
antlr4_8.runtime.cpp
python2
mysql
libxml2
@ -141,7 +141,7 @@ in stdenv.mkDerivation rec {
cmakeFlags = [
"-DMySQL_CONFIG_PATH=${mysql}/bin/mysql_config"
"-DIODBC_CONFIG_PATH=${libiodbc}/bin/iodbc-config"
"-DWITH_ANTLR_JAR=${antlr4_7.jarLocation}"
"-DWITH_ANTLR_JAR=${antlr4_8.jarLocation}"
# mysql-workbench 8.0.21 depends on libmysqlconnectorcpp 1.1.8.
# Newer versions of connector still provide the legacy library when enabled
# but the headers are in a different location.


@ -30,12 +30,12 @@ let
in stdenv.mkDerivation rec {
pname = "obsidian";
version = "0.9.4";
version = "0.9.6";
src = fetchurl {
url =
"https://github.com/obsidianmd/obsidian-releases/releases/download/v${version}/obsidian-${version}.asar.gz";
sha256 = "0qahgm9gf4sap28wy7cxbf41h8zldplbwxnv8shyajbkxn108g5p";
sha256 = "1n8qc8ssv93xcal9fgbwvkvahzwyn6367v8gbxgc3036l66mira7";
};
nativeBuildInputs = [ makeWrapper graphicsmagick ];


@ -0,0 +1,61 @@
{ stdenv, fetchFromGitHub, python3, dbus, gnupg }:
python3.pkgs.buildPythonApplication rec {
pname = "pass-secret-service";
# PyPI has an old alpha version. Since then the project has switched from the
# seemingly abandoned D-Bus package pydbus to the maintained dbus-next,
# so we use the latest revision from GitHub.
version = "unstable-2020-04-12";
src = fetchFromGitHub {
owner = "mdellweg";
repo = "pass_secret_service";
rev = "f6fbca6ac3ccd16bfec407d845ed9257adf74dfa";
sha256 = "0rm4pbx1fiwds1v7f99khhh7x3inv9yniclwd95mrbgljk3cc6a4";
};
# Need to specify the session.conf file for tests because it won't be found
# under /etc/ in the check phase.
postPatch = ''
substituteInPlace Makefile \
--replace \
"dbus-run-session" \
"dbus-run-session --config-file=${dbus}/share/dbus-1/session.conf"
'';
propagatedBuildInputs = with python3.pkgs; [
click
cryptography
dbus-next
decorator
pypass
secretstorage
];
checkInputs =
let
ps = python3.pkgs;
in
[
dbus
gnupg
ps.pytest
ps.pytest-asyncio
ps.pypass
];
checkPhase = ''
runHook preCheck
make test
runHook postCheck
'';
meta = {
description = "Libsecret D-Bus API with pass as the backend";
homepage = "https://github.com/mdellweg/pass_secret_service/";
license = stdenv.lib.licenses.gpl3Only;
platforms = stdenv.lib.platforms.all;
maintainers = with stdenv.lib.maintainers; [ jluttine ];
};
}


@ -2,11 +2,11 @@
stdenv.mkDerivation rec {
pname = "pdfsam-basic";
version = "4.1.4";
version = "4.2.0";
src = fetchurl {
url = "https://github.com/torakiki/pdfsam/releases/download/v${version}/pdfsam_${version}-1_amd64.deb";
sha256 = "1gw3cmc8c1xxc55bm71v1dz9x9560lbhx9nkwprarhxlmn0m0zzp";
sha256 = "0dhwaadk2qw7avpfnw0mgqv3yhjsm4qm88yyy4w24a3cqzrvb56g";
};
unpackPhase = ''


@ -0,0 +1,27 @@
{ stdenv, fetchFromGitHub, imagemagick }:
stdenv.mkDerivation rec {
pname = "tiv";
version = "1.1.0";
src = fetchFromGitHub {
owner = "stefanhaustein";
repo = "TerminalImageViewer";
rev = "v${version}";
sha256 = "17zqbwj2imk6ygyc142mw6v4fh7h4rd5vzn5wxr9gs0g8qdc6ixn";
};
buildInputs = [ imagemagick ];
makeFlags = [ "prefix=$(out)" ];
preConfigure = "cd src/main/cpp";
meta = with stdenv.lib; {
homepage = "https://github.com/stefanhaustein/TerminalImageViewer";
description = "Small C++ program to display images in a (modern) terminal using RGB ANSI codes and unicode block graphics characters";
license = licenses.asl20;
maintainers = with maintainers; [ magnetophon ];
platforms = [ "x86_64-linux" ];
};
}


@ -2,7 +2,7 @@
let
pname = "Sylk";
version = "2.9.1";
version = "2.9.2";
in
appimageTools.wrapType2 rec {
@ -10,7 +10,7 @@ appimageTools.wrapType2 rec {
src = fetchurl {
url = "http://download.ag-projects.com/Sylk/Sylk-${version}-x86_64.AppImage";
hash = "sha256-Y1FR1tYZTxhMFn6NL578otitmOsngMJBPK/9cpCqE/Q=";
hash = "sha256-pfzTeKxY2fs98mgvhzaI/uBbYYkxfnQ+6jQ+gTSeEkA=";
};
profile = ''


@ -316,7 +316,12 @@ let
patchelf --set-rpath "${libGL}/lib:$origRpath" "$chromiumBinary"
'';
passthru.updateScript = ./update.py;
passthru = {
updateScript = ./update.py;
chromiumDeps = {
gn = gnChromium;
};
};
};
# Remove some extraAttrs we supplied to the base attributes already.


@ -35,26 +35,15 @@ let
mkChromiumDerivation = callPackage ./common.nix ({
inherit channel gnome gnomeSupport gnomeKeyringSupport proprietaryCodecs
cupsSupport pulseSupport useOzone;
# TODO: Remove after we can update gn for the stable channel (backward incompatible changes):
gnChromium = gn.overrideAttrs (oldAttrs: {
version = "2020-07-20";
inherit (upstream-info.deps.gn) version;
src = fetchgit {
url = "https://gn.googlesource.com/gn";
rev = "3028c6a426a4aaf6da91c4ebafe716ae370225fe";
sha256 = "0h3wf4152zdvrbb0jbj49q6814lfl3rcy5mj8b2pl9s0ahvkbc6q";
inherit (upstream-info.deps.gn) url rev sha256;
};
});
} // lib.optionalAttrs (lib.versionAtLeast upstream-info.version "87") {
useOzone = true; # YAY: https://chromium-review.googlesource.com/c/chromium/src/+/2382834 \o/
useVaapi = !stdenv.isAarch64; # TODO: Might be best to not set use_vaapi anymore (default is fine)
gnChromium = gn.overrideAttrs (oldAttrs: {
version = "2020-08-17";
src = fetchgit {
url = "https://gn.googlesource.com/gn";
rev = "6f13aaac55a977e1948910942675c69f2b4f7a94";
sha256 = "01hpma1sllpdx09mvr4d6073sg6zmk6iv44kd3r28khymcj4s251";
};
});
});
browser = callPackage ./browser.nix { inherit channel enableWideVine; };


@ -1,13 +1,15 @@
#! /usr/bin/env nix-shell
#! nix-shell -i python -p python3 nix
#! nix-shell -i python -p python3 nix nix-prefetch-git
import csv
import json
import re
import subprocess
import sys
from codecs import iterdecode
from collections import OrderedDict
from datetime import datetime
from os.path import abspath, dirname
from urllib.request import urlopen
@ -26,6 +28,30 @@ def nix_prefetch_url(url, algo='sha256'):
out = subprocess.check_output(['nix-prefetch-url', '--type', algo, url])
return out.decode('utf-8').rstrip()
def nix_prefetch_git(url, rev):
print(f'nix-prefetch-git {url} {rev}')
out = subprocess.check_output(['nix-prefetch-git', '--quiet', '--url', url, '--rev', rev])
return json.loads(out)
def get_file_revision(revision, file_path):
url = f'https://raw.githubusercontent.com/chromium/chromium/{revision}/{file_path}'
with urlopen(url) as http_response:
return http_response.read()
def get_channel_dependencies(channel):
deps = get_file_revision(channel['version'], 'DEPS')
gn_pattern = b"'gn_version': 'git_revision:([0-9a-f]{40})'"
gn_commit = re.search(gn_pattern, deps).group(1).decode()
gn = nix_prefetch_git('https://gn.googlesource.com/gn', gn_commit)
return {
'gn': {
'version': datetime.fromisoformat(gn['date']).date().isoformat(),
'url': gn['url'],
'rev': gn['rev'],
'sha256': gn['sha256']
}
}
channels = {}
last_channels = load_json(JSON_PATH)
@ -58,6 +84,8 @@ with urlopen(HISTORY_URL) as resp:
# the next one.
continue
channel['deps'] = get_channel_dependencies(channel)
channels[channel_name] = channel
with open(JSON_PATH, 'w') as out:

View file

@ -1,17 +1,41 @@
{
"stable": {
"version": "86.0.4240.111",
"sha256": "05y7lwr89awkhvgmwkx3br9j4ap2aypg2wsc0nz8mi7kxc1dnyzj",
"sha256bin64": "10aqiiydw4i3jxnw8xxdgkgcqbfqc67n1fbrg40y54kg0v5dz8l6"
"version": "86.0.4240.183",
"sha256": "1g39i82js7fm4fqb8i66d6xs0kzqjxzi4vzvvwz5y9rkbikcc4ma",
"sha256bin64": "1r0dxqsx6j19hgwr3v2sdlb2vd7gb961c4wba4ymd8wy8j8pzly9",
"deps": {
"gn": {
"version": "2020-08-07",
"url": "https://gn.googlesource.com/gn",
"rev": "e327ffdc503815916db2543ec000226a8df45163",
"sha256": "0kvlfj3www84zp1vmxh76x8fdjm9hyk8lkh2vdsidafpmm75fphr"
}
}
},
"beta": {
"version": "87.0.4280.27",
"sha256": "0w0asxj7jlsw69cssfia8km4q9cx1c2mliks2rmhf4jk0hsghasm",
"sha256bin64": "1lsx4mhy8nachfb8c9f3mrx5nqw2bi046dqirb4lnv7y80jjjs1k"
"version": "87.0.4280.40",
"sha256": "07xh76fl257np68way6i5rf64qbvirkfddy7m5gvqb0fzcqd7dp3",
"sha256bin64": "1b2z0aqlh28pqrk6dmabxp1d4mvp9iyfmi4kqmns4cdpg0qgaf41",
"deps": {
"gn": {
"version": "2020-09-09",
"url": "https://gn.googlesource.com/gn",
"rev": "e002e68a48d1c82648eadde2f6aafa20d08c36f2",
"sha256": "0x4c7amxwzxs39grqs3dnnz0531mpf1p75niq7zhinyfqm86i4dk"
}
}
},
"dev": {
"version": "88.0.4298.4",
"sha256": "0ka11gmpkyrmifajaxm66c16hrj3xakdvhjqg04slyp2sv0nlhrl",
"sha256bin64": "0768y31jqbl1znp7yp6mvl5j12xl1nwjkh2l8zdga81q0wz52hh6"
"version": "88.0.4300.0",
"sha256": "00cfs2rp4h8ybn2snr1d8ygg635hx7q5gv2aqriy1j6f8a1pgh1b",
"sha256bin64": "110r1m14h91212nx6pfhn8wkics7wlwx1608l5cqsxxcpvpzl3pv",
"deps": {
"gn": {
"version": "2020-09-09",
"url": "https://gn.googlesource.com/gn",
"rev": "e002e68a48d1c82648eadde2f6aafa20d08c36f2",
"sha256": "0x4c7amxwzxs39grqs3dnnz0531mpf1p75niq7zhinyfqm86i4dk"
}
}
}
}


@ -9,7 +9,7 @@
, hunspell, libXdamage, libevent, libstartup_notification
, libvpx_1_8
, icu67, libpng, jemalloc, glib
, autoconf213, which, gnused, cargo, rustc
, autoconf213, which, gnused, rustPackages, rustPackages_1_45
, rust-cbindgen, nodejs, nasm, fetchpatch
, gnum4
, debugBuild ? false
@ -102,6 +102,10 @@ let
buildStdenv = if ltoSupport
then overrideCC stdenv llvmPackages.lldClang
else stdenv;
# 78 ESR won't build with rustc 1.47
inherit (if lib.versionAtLeast ffversion "82" then rustPackages else rustPackages_1_45)
rustc cargo;
in
buildStdenv.mkDerivation ({


@ -409,5 +409,6 @@ stdenv.mkDerivation rec {
# the compound is "libre" in a strict sense (some components place certain
# restrictions on redistribution), it's free enough for our purposes.
license = licenses.free;
broken = true;
};
}


@ -13,7 +13,7 @@ mkChromiumDerivation (base: rec {
installPhase = ''
mkdir -p "$libExecPath"
cp -v "$buildPath/"*.pak "$buildPath/"*.bin "$libExecPath/"
cp -v "$buildPath/"*.so "$buildPath/"*.pak "$buildPath/"*.bin "$libExecPath/"
cp -v "$buildPath/icudtl.dat" "$libExecPath/"
cp -vLR "$buildPath/locales" "$buildPath/resources" "$libExecPath/"
cp -v "$buildPath/chrome" "$libExecPath/$packageName"
@ -78,17 +78,10 @@ mkChromiumDerivation (base: rec {
'';
homepage = "https://github.com/Eloston/ungoogled-chromium";
maintainers = with maintainers; [ squalus ];
# Overview of the maintainer roles:
# nixos-unstable:
# - TODO: Need a new maintainer for x86_64 [0]
# - @thefloweringash: aarch64
# - @primeos: Provisional maintainer (x86_64)
# Stable channel:
# - TODO (need someone to test backports [0])
# [0]: https://github.com/NixOS/nixpkgs/issues/78450
license = if enableWideVine then licenses.unfree else licenses.bsd3;
platforms = platforms.linux;
hydraPlatforms = if channel == "stable" then ["aarch64-linux" "x86_64-linux"] else [];
timeout = 172800; # 48 hours
timeout = 172800; # 48 hours (increased from the Hydra default of 10h)
broken = channel == "dev"; # Blocked on https://bugs.chromium.org/p/chromium/issues/detail?id=1141896
};
})
