Project import generated by Copybara.
GitOrigin-RevId: 2deb07f3ac4eeb5de1c12c4ba2911a2eb1f6ed61
Parent: bdb17c63e3
Commit: 620eecebfb
1042 changed files with 20582 additions and 24484 deletions
@ -8,7 +8,7 @@ In the Nixpkgs tree, Ruby packages can be found throughout, depending on what th
There are two main approaches for using Ruby with gems. One is to use a specifically locked `Gemfile` for an application that has very strict dependencies. The other is to depend on the common gems, which we'll explain further down, and rely on them being updated regularly.
The interpreters have common attributes, namely `gems`, and `withPackages`. So you can refer to `ruby.gems.nokogiri`, or `ruby_2_6.gems.nokogiri` to get the Nokogiri gem already compiled and ready to use.
The interpreters have common attributes, namely `gems`, and `withPackages`. So you can refer to `ruby.gems.nokogiri`, or `ruby_2_7.gems.nokogiri` to get the Nokogiri gem already compiled and ready to use.
Since not all gems have executables like `nokogiri`, it's usually more convenient to use the `withPackages` function like this: `ruby.withPackages (p: with p; [ nokogiri ])`. This will also make sure that the Ruby in your environment will be able to find the gem and it can be used in your Ruby code (for example via `ruby` or `irb` executables) via `require "nokogiri"` as usual.
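A minimal sketch of the `withPackages` approach described above, as it might appear in a development shell (the `shell.nix` layout and `mkShell` call are illustrative, not taken from this change):

```nix
# shell.nix — hypothetical example of ruby.withPackages
{ pkgs ? import <nixpkgs> { } }:

pkgs.mkShell {
  # Ruby with nokogiri on its load path, so `require "nokogiri"` works in ruby/irb
  buildInputs = [ (pkgs.ruby.withPackages (p: with p; [ nokogiri ])) ];
}
```

Running `nix-shell` in the same directory would then provide `ruby` and `irb` with the gem available.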
@ -158,9 +158,9 @@ One would think that `localSystem` and `crossSystem` overlap horribly with the t
### Implementation of dependencies {#ssec-cross-dependency-implementation}
The categories of dependencies developed in [](#ssec-cross-dependency-categorization) are specified as lists of derivations given to `mkDerivation`, as documented in [](#ssec-stdenv-dependencies). In short, each list of dependencies for "host → target" of "foo → bar" is called `depsFooBar`, with exceptions for backwards compatibility that `depsBuildHost` is instead called `nativeBuildInputs` and `depsHostTarget` is instead called `buildInputs`. Nixpkgs is now structured so that each `depsFooBar` is automatically taken from `pkgsFooBar`. (These `pkgsFooBar`s are quite new, so there is no special case for `nativeBuildInputs` and `buildInputs`.) For example, `pkgsBuildHost.gcc` should be used at build-time, while `pkgsHostTarget.gcc` should be used at run-time.
The categories of dependencies developed in [](#ssec-cross-dependency-categorization) are specified as lists of derivations given to `mkDerivation`, as documented in [](#ssec-stdenv-dependencies). In short, each list of dependencies for "host → target" is called `deps<host><target>` (where the `host` and `target` values are either `build`, `host`, or `target`), with exceptions for backwards compatibility that `depsBuildHost` is instead called `nativeBuildInputs` and `depsHostTarget` is instead called `buildInputs`. Nixpkgs is now structured so that each `deps<host><target>` is automatically taken from `pkgs<host><target>`. (These `pkgs<host><target>`s are quite new, so there is no special case for `nativeBuildInputs` and `buildInputs`.) For example, `pkgsBuildHost.gcc` should be used at build-time, while `pkgsHostTarget.gcc` should be used at run-time.
Now, for most of Nixpkgs's history, there were no `pkgsFooBar` attributes, and most packages have not been refactored to use it explicitly. Prior to those, there were just `buildPackages`, `pkgs`, and `targetPackages`. Those are now redefined as aliases to `pkgsBuildHost`, `pkgsHostTarget`, and `pkgsTargetTarget`. It is acceptable, even recommended, to use them for libraries to show that the host platform is irrelevant.
Now, for most of Nixpkgs's history, there were no `pkgs<host><target>` attributes, and most packages have not been refactored to use it explicitly. Prior to those, there were just `buildPackages`, `pkgs`, and `targetPackages`. Those are now redefined as aliases to `pkgsBuildHost`, `pkgsHostTarget`, and `pkgsTargetTarget`. It is acceptable, even recommended, to use them for libraries to show that the host platform is irrelevant.
But before that, there was just `pkgs`, even though both `buildInputs` and `nativeBuildInputs` existed. \[Cross barely worked, and those were implemented with some hacks on `mkDerivation` to override dependencies.\] What this means is the vast majority of packages do not use any explicit package set to populate their dependencies, just using whatever `callPackage` gives them even if they do correctly sort their dependencies into the multiple lists described above. And indeed, asking that users both sort their dependencies, _and_ take them from the right attribute set, is both too onerous and redundant, so the recommended approach (for now) is to continue just categorizing by list and not using an explicit package set.
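As a hedged illustration of the dependency lists discussed above (package names are placeholders and assume the usual `callPackage` arguments are in scope), a derivation might sort its dependencies like this:

```nix
# Sketch: dependency lists named by their host → target role
{ stdenv, buildPackages, cmake, pkg-config, openssl }:

stdenv.mkDerivation {
  pname = "example";
  version = "0.1.0";
  src = ./.;

  depsBuildBuild = [ buildPackages.stdenv.cc ]; # build → build: compiler for build-time-only helpers
  nativeBuildInputs = [ cmake pkg-config ];     # build → host (depsBuildHost): tools run while building
  buildInputs = [ openssl ];                    # host → target (depsHostTarget): libraries linked into the output
}
```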
33 third_party/nixpkgs/doc/stdenv/stdenv.chapter.md vendored
@ -116,15 +116,27 @@ On Linux, `stdenv` also includes the `patchelf` utility.
## Specifying dependencies {#ssec-stdenv-dependencies}
As described in the Nix manual, almost any `*.drv` store path in a derivation’s attribute set will induce a dependency on that derivation. `mkDerivation`, however, takes a few attributes intended to, between them, include all the dependencies of a package. This is done both for structure and consistency, but also so that certain other setup can take place. For example, certain dependencies need their bin directories added to the `PATH`. That is built-in, but other setup is done via a pluggable mechanism that works in conjunction with these dependency attributes. See [](#ssec-setup-hooks) for details.
As described in the Nix manual, almost any `*.drv` store path in a derivation’s attribute set will induce a dependency on that derivation. `mkDerivation`, however, takes a few attributes intended to include all the dependencies of a package. This is done both for structure and consistency, but also so that certain other setup can take place. For example, certain dependencies need their bin directories added to the `PATH`. That is built-in, but other setup is done via a pluggable mechanism that works in conjunction with these dependency attributes. See [](#ssec-setup-hooks) for details.
Dependencies can be broken down along three axes: their host and target platforms relative to the new derivation’s, and whether they are propagated. The platform distinctions are motivated by cross compilation; see [](#chap-cross) for exactly what each platform means. [^footnote-stdenv-ignored-build-platform] But even if one is not cross compiling, the platforms imply whether or not the dependency is needed at run-time or build-time, a concept that makes perfect sense outside of cross compilation. By default, the run-time/build-time distinction is just a hint for mental clarity, but with `strictDeps` set it is mostly enforced even in the native case.
The extension of `PATH` with dependencies, alluded to above, proceeds according to the relative platforms alone. The process is carried out only for dependencies whose host platform matches the new derivation’s build platform i.e. dependencies which run on the platform where the new derivation will be built. [^footnote-stdenv-native-dependencies-in-path] For each dependency \<dep\> of those dependencies, `dep/bin`, if present, is added to the `PATH` environment variable.
The dependency is propagated when it forces some of its other-transitive (non-immediate) downstream dependencies to also take it on as an immediate dependency. Nix itself already takes a package’s transitive dependencies into account, but this propagation ensures nixpkgs-specific infrastructure like setup hooks (mentioned above) also are run as if the propagated dependency.
A dependency is said to be **propagated** when some of its other-transitive (non-immediate) downstream dependencies also need it as an immediate dependency.
[^footnote-stdenv-propagated-dependencies]
It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. The exact rules for dependency propagation can be given by assigning to each dependency two integers based one how its host and target platforms are offset from the depending derivation’s platforms. Those offsets are given below in the descriptions of each dependency list attribute. Algorithmically, we traverse propagated inputs, accumulating every propagated dependency’s propagated dependencies and adjusting them to account for the “shift in perspective” described by the current dependency’s platform offsets. This results in sort a transitive closure of the dependency relation, with the offsets being approximately summed when two dependency links are combined. We also prune transitive dependencies whose combined offsets go out-of-bounds, which can be viewed as a filter over that transitive closure removing dependencies that are blatantly absurd.
It is important to note that dependencies are not necessarily propagated as the same sort of dependency that they were before, but rather as the corresponding sort so that the platform rules still line up. To determine the exact rules for dependency propagation, we start by assigning to each dependency a couple of ternary numbers (`-1` for `build`, `0` for `host`, and `1` for `target`), representing how its host and target platforms, respectively, are "offset" from the depending derivation’s platforms. The following table summarizes the different combinations that can be obtained:
| `host → target`      | attribute name      | offset   |
| -------------------- | ------------------- | -------- |
| `build --> build`    | `depsBuildBuild`    | `-1, -1` |
| `build --> host`     | `nativeBuildInputs` | `-1, 0`  |
| `build --> target`   | `depsBuildTarget`   | `-1, 1`  |
| `host --> host`      | `depsHostHost`      | `0, 0`   |
| `host --> target`    | `buildInputs`       | `0, 1`   |
| `target --> target`  | `depsTargetTarget`  | `1, 1`   |
Algorithmically, we traverse propagated inputs, accumulating every propagated dependency’s propagated dependencies and adjusting them to account for the “shift in perspective” described by the current dependency’s platform offsets. This results in a sort of transitive closure of the dependency relation, with the offsets being approximately summed when two dependency links are combined. For example, a dependency pulled in through `nativeBuildInputs` (offset `-1, 0`) that itself propagates something through its own `buildInputs` (offset `0, 1`) contributes that transitive dependency at offset `-1, 0`, i.e. as if it were a `nativeBuildInputs` entry of the original derivation. We also prune transitive dependencies whose combined offsets go out-of-bounds, which can be viewed as a filter over that transitive closure removing dependencies that are blatantly absurd.
We can define the process precisely with [Natural Deduction](https://en.wikipedia.org/wiki/Natural_deduction) using the inference rules. This probably seems a bit obtuse, but so is the bash code that actually implements it! [^footnote-stdenv-find-inputs-location] They’re confusing in very different ways so… hopefully if something doesn’t make sense in one presentation, it will in the other!
@ -179,37 +191,37 @@ Overall, the unifying theme here is that propagation shouldn’t be introducing
#### `depsBuildBuild` {#var-stdenv-depsBuildBuild}
A list of dependencies whose host and target platforms are the new derivation’s build platform. This means a `-1` host and `-1` target offset from the new derivation’s platforms. These are programs and libraries used at build time that produce programs and libraries also used at build time. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it in `nativeBuildInputs` instead. The most common use of this `buildPackages.stdenv.cc`, the default C compiler for this role. That example crops up more than one might think in old commonly used C libraries.
A list of dependencies whose host and target platforms are the new derivation’s build platform. These are programs and libraries used at build time that produce programs and libraries also used at build time. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it in `nativeBuildInputs` instead. The most common use of this is `buildPackages.stdenv.cc`, the default C compiler for this role. That example crops up more than one might think in old commonly used C libraries.
Since these packages are able to be run at build-time, they are always added to the `PATH`, as described above. But since these packages are only guaranteed to be able to run then, they shouldn’t persist as run-time dependencies. This isn’t currently enforced, but could be in the future.
#### `nativeBuildInputs` {#var-stdenv-nativeBuildInputs}
A list of dependencies whose host platform is the new derivation’s build platform, and target platform is the new derivation’s host platform. This means a `-1` host offset and `0` target offset from the new derivation’s platforms. These are programs and libraries used at build-time that, if they are a compiler or similar tool, produce code to run at run-time—i.e. tools used to build the new derivation. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it here, rather than in `depsBuildBuild` or `depsBuildTarget`. This could be called `depsBuildHost` but `nativeBuildInputs` is used for historical continuity.
A list of dependencies whose host platform is the new derivation’s build platform, and target platform is the new derivation’s host platform. These are programs and libraries used at build-time that, if they are a compiler or similar tool, produce code to run at run-time—i.e. tools used to build the new derivation. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it here, rather than in `depsBuildBuild` or `depsBuildTarget`. This could be called `depsBuildHost` but `nativeBuildInputs` is used for historical continuity.
Since these packages are able to be run at build-time, they are added to the `PATH`, as described above. But since these packages are only guaranteed to be able to run then, they shouldn’t persist as run-time dependencies. This isn’t currently enforced, but could be in the future.
#### `depsBuildTarget` {#var-stdenv-depsBuildTarget}
A list of dependencies whose host platform is the new derivation’s build platform, and target platform is the new derivation’s target platform. This means a `-1` host offset and `1` target offset from the new derivation’s platforms. These are programs used at build time that produce code to run with code produced by the depending package. Most commonly, these are tools used to build the runtime or standard library that the currently-being-built compiler will inject into any code it compiles. In many cases, the currently-being-built-compiler is itself employed for that task, but when that compiler won’t run (i.e. its build and host platform differ) this is not possible. Other times, the compiler relies on some other tool, like binutils, that is always built separately so that the dependency is unconditional.
A list of dependencies whose host platform is the new derivation’s build platform, and target platform is the new derivation’s target platform. These are programs used at build time that produce code to run with code produced by the depending package. Most commonly, these are tools used to build the runtime or standard library that the currently-being-built compiler will inject into any code it compiles. In many cases, the currently-being-built-compiler is itself employed for that task, but when that compiler won’t run (i.e. its build and host platform differ) this is not possible. Other times, the compiler relies on some other tool, like binutils, that is always built separately so that the dependency is unconditional.
This is a somewhat confusing concept to wrap one’s head around, and for good reason. As the only dependency type where the platform offsets are not adjacent integers, it requires thinking of a bootstrapping stage *two* away from the current one. It and its use-case go hand in hand and are both considered poor form: try to not need this sort of dependency, and try to avoid building standard libraries and runtimes in the same derivation as the compiler produces code using them. Instead strive to build those like a normal library, using the newly-built compiler just as a normal library would. In short, do not use this attribute unless you are packaging a compiler and are sure it is needed.
This is a somewhat confusing concept to wrap one’s head around, and for good reason. As the only dependency type where the platform offsets, `-1` and `1`, are not adjacent integers, it requires thinking of a bootstrapping stage *two* away from the current one. It and its use-case go hand in hand and are both considered poor form: try to not need this sort of dependency, and try to avoid building standard libraries and runtimes in the same derivation as the compiler produces code using them. Instead strive to build those like a normal library, using the newly-built compiler just as a normal library would. In short, do not use this attribute unless you are packaging a compiler and are sure it is needed.
Since these packages are able to run at build time, they are added to the `PATH`, as described above. But since these packages are only guaranteed to be able to run then, they shouldn’t persist as run-time dependencies. This isn’t currently enforced, but could be in the future.
#### `depsHostHost` {#var-stdenv-depsHostHost}
A list of dependencies whose host and target platforms match the new derivation’s host platform. This means a `0` host offset and `0` target offset from the new derivation’s host platform. These are packages used at run-time to generate code also used at run-time. In practice, this would usually be tools used by compilers for macros or a metaprogramming system, or libraries used by the macros or metaprogramming code itself. It’s always preferable to use a `depsBuildBuild` dependency in the derivation being built over a `depsHostHost` on the tool doing the building for this purpose.
A list of dependencies whose host and target platforms match the new derivation’s host platform. In practice, this would usually be tools used by compilers for macros or a metaprogramming system, or libraries used by the macros or metaprogramming code itself. It’s always preferable to use a `depsBuildBuild` dependency in the derivation being built over a `depsHostHost` on the tool doing the building for this purpose.
#### `buildInputs` {#var-stdenv-buildInputs}
A list of dependencies whose host platform and target platform match the new derivation’s. This means a `0` host offset and a `1` target offset from the new derivation’s host platform. This would be called `depsHostTarget` but for historical continuity. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it here, rather than in `depsBuildBuild`.
A list of dependencies whose host platform and target platform match the new derivation’s. This would be called `depsHostTarget` but for historical continuity. If the dependency doesn’t care about the target platform (i.e. isn’t a compiler or similar tool), put it here, rather than in `depsBuildBuild`.
These are often programs and libraries used by the new derivation at *run*-time, but that isn’t always the case. For example, the machine code in a statically-linked library is only used at run-time, but the derivation containing the library is only needed at build-time. Even in the dynamic case, the library may also be needed at build-time to appease the linker.
#### `depsTargetTarget` {#var-stdenv-depsTargetTarget}
A list of dependencies whose host platform matches the new derivation’s target platform. This means a `1` offset from the new derivation’s platforms. These are packages that run on the target platform, e.g. the standard library or run-time deps of standard library that a compiler insists on knowing about. It’s poor form in almost all cases for a package to depend on another from a future stage \[future stage corresponding to positive offset\]. Do not use this attribute unless you are packaging a compiler and are sure it is needed.
A list of dependencies whose host platform matches the new derivation’s target platform. These are packages that run on the target platform, e.g. the standard library or run-time deps of standard library that a compiler insists on knowing about. It’s poor form in almost all cases for a package to depend on another from a future stage \[future stage corresponding to positive offset\]. Do not use this attribute unless you are packaging a compiler and are sure it is needed.
#### `depsBuildBuildPropagated` {#var-stdenv-depsBuildBuildPropagated}
@ -1228,6 +1240,7 @@ If the libraries lack `-fPIE`, you will get the error `recompile with -fPIE`.
[^footnote-stdenv-ignored-build-platform]: The build platform is ignored because it is a mere implementation detail of the package satisfying the dependency: as a general programming principle, dependencies are always *specified* as interfaces, not concrete implementations.
[^footnote-stdenv-native-dependencies-in-path]: Currently, this means for native builds all dependencies are put on the `PATH`. But in the future that may not be the case for sake of matching cross: the platforms would be assumed to be unique for native and cross builds alike, so only the `depsBuild*` and `nativeBuildInputs` would be added to the `PATH`.
[^footnote-stdenv-propagated-dependencies]: Nix itself already takes a package’s transitive dependencies into account, but this propagation ensures nixpkgs-specific infrastructure like setup hooks (mentioned above) also are run as if the propagated dependency.
[^footnote-stdenv-find-inputs-location]: The `findInputs` function, currently residing in `pkgs/stdenv/generic/setup.sh`, implements the propagation logic.
[^footnote-stdenv-sys-lib-search-path]: It clears the `sys_lib_*search_path` variables in the Libtool script to prevent Libtool from using libraries in `/usr/lib` and such.
[^footnote-stdenv-build-time-guessing-impurity]: Eventually these will be passed building natively as well, to improve determinism: build-time guessing, as is done today, is a risk of impurity.
2 third_party/nixpkgs/lib/attrsets.nix vendored
@ -487,7 +487,7 @@ rec {
=> "/nix/store/9rz8gxhzf8sw4kf2j2f1grr49w8zx5vj-openssl-1.0.1r-dev"
*/
getOutput = output: pkg:
if pkg.outputUnspecified or false
if ! pkg ? outputSpecified || ! pkg.outputSpecified
then pkg.${output} or pkg.out or pkg
else pkg;
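A small sketch of how the new `outputSpecified` handling behaves, evaluated in something like `nix repl` with `pkgs` and `lib` in scope (illustrative, not part of this change):

```nix
# No output selected yet: getOutput picks the requested output, falling back to `out`.
lib.getOutput "dev" pkgs.openssl      # → openssl.dev

# An explicitly selected output carries outputSpecified = true, so it is returned unchanged.
lib.getOutput "dev" pkgs.openssl.out  # → openssl.out, not openssl.dev
```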
5 third_party/nixpkgs/lib/customisation.nix vendored
@ -145,14 +145,14 @@ rec {
let
outputs = drv.outputs or [ "out" ];
commonAttrs = (removeAttrs drv [ "outputUnspecified" ]) //
(builtins.listToAttrs outputsList) //
commonAttrs = drv // (builtins.listToAttrs outputsList) //
({ all = map (x: x.value) outputsList; }) // passthru;
outputToAttrListElement = outputName:
{ name = outputName;
value = commonAttrs // {
inherit (drv.${outputName}) type outputName;
outputSpecified = true;
drvPath = assert condition; drv.${outputName}.drvPath;
outPath = assert condition; drv.${outputName}.outPath;
};
@ -160,7 +160,6 @@ rec {
outputsList = map outputToAttrListElement outputs;
in commonAttrs // {
outputUnspecified = true;
drvPath = assert condition; drv.drvPath;
outPath = assert condition; drv.outPath;
};
|
@ -741,6 +741,7 @@
|
|||
angustrau = {
|
||||
name = "Angus Trau";
|
||||
email = "nix@angus.ws";
|
||||
matrix = "@angustrau:matrix.org";
|
||||
github = "angustrau";
|
||||
githubId = 13267947;
|
||||
};
|
||||
|
@ -1079,6 +1080,12 @@
|
|||
githubId = 354741;
|
||||
name = "Austin Butler";
|
||||
};
|
||||
autophagy = {
|
||||
email = "mail@autophagy.io";
|
||||
github = "autophagy";
|
||||
githubId = 12958979;
|
||||
name = "Mika Naylor";
|
||||
};
|
||||
avaq = {
|
||||
email = "nixpkgs@account.avaq.it";
|
||||
github = "avaq";
|
||||
|
@ -2136,6 +2143,12 @@
|
|||
githubId = 199180;
|
||||
name = "Claes Wallin";
|
||||
};
|
||||
cleeyv = {
|
||||
email = "cleeyv@riseup.net";
|
||||
github = "cleeyv";
|
||||
githubId = 71959829;
|
||||
name = "Cleeyv";
|
||||
};
|
||||
cleverca22 = {
|
||||
email = "cleverca22@gmail.com";
|
||||
matrix = "@cleverca22:matrix.org";
|
||||
|
@ -2872,6 +2885,12 @@
|
|||
githubId = 28980797;
|
||||
name = "David Leslie";
|
||||
};
|
||||
dlip = {
|
||||
email = "dane@lipscombe.com.au";
|
||||
github = "dlip";
|
||||
githubId = 283316;
|
||||
name = "Dane Lipscombe";
|
||||
};
|
||||
dmalikov = {
|
||||
email = "malikov.d.y@gmail.com";
|
||||
github = "dmalikov";
|
||||
|
@ -3115,6 +3134,7 @@
|
|||
};
|
||||
earvstedt = {
|
||||
email = "erik.arvstedt@gmail.com";
|
||||
matrix = "@erikarvstedt:matrix.org";
|
||||
github = "erikarvstedt";
|
||||
githubId = 36110478;
|
||||
name = "Erik Arvstedt";
|
||||
|
@ -3726,6 +3746,13 @@
|
|||
githubId = 541748;
|
||||
name = "Felipe Espinoza";
|
||||
};
|
||||
fedx-sudo = {
|
||||
email = "fedx-sudo@pm.me";
|
||||
github = "Fedx-sudo";
|
||||
githubId = 66258975;
|
||||
name = "Fedx sudo";
|
||||
matrix = "fedx:matrix.org";
|
||||
};
|
||||
fehnomenal = {
|
||||
email = "fehnomenal@fehn.systems";
|
||||
github = "fehnomenal";
|
||||
|
@ -4816,6 +4843,7 @@
|
|||
};
|
||||
ilkecan = {
|
||||
email = "ilkecan@protonmail.com";
|
||||
matrix = "@ilkecan:matrix.org";
|
||||
github = "ilkecan";
|
||||
githubId = 40234257;
|
||||
name = "ilkecan bozdogan";
|
||||
|
@ -6025,6 +6053,12 @@
|
|||
githubId = 8260207;
|
||||
name = "Karthik Iyengar";
|
||||
};
|
||||
kjeremy = {
|
||||
email = "kjeremy@gmail.com";
|
||||
name = "Jeremy Kolb";
|
||||
github = "kjeremy";
|
||||
githubId = 4325700;
|
||||
};
|
||||
kkallio = {
|
||||
email = "tierpluspluslists@gmail.com";
|
||||
name = "Karn Kallio";
|
||||
|
@ -6345,6 +6379,12 @@
|
|||
githubId = 1104419;
|
||||
name = "Lucas Hoffmann";
|
||||
};
|
||||
lucasew = {
|
||||
email = "lucas59356@gmail.com";
|
||||
github = "lucasew";
|
||||
githubId = 15693688;
|
||||
name = "Lucas Eduardo Wendt";
|
||||
};
|
||||
lde = {
|
||||
email = "lilian.deloche@puck.fr";
|
||||
github = "lde";
|
||||
|
@ -8249,6 +8289,12 @@
|
|||
githubId = 810877;
|
||||
name = "Tom Doggett";
|
||||
};
|
||||
noisersup = {
|
||||
email = "patryk@kwiatek.xyz";
|
||||
github = "noisersup";
|
||||
githubId = 42322511;
|
||||
name = "Patryk Kwiatek";
|
||||
};
|
||||
nomeata = {
|
||||
email = "mail@joachim-breitner.de";
|
||||
github = "nomeata";
|
||||
|
@ -8776,6 +8822,7 @@
|
|||
};
|
||||
peterhoeg = {
|
||||
email = "peter@hoeg.com";
|
||||
matrix = "@peter:hoeg.com";
|
||||
github = "peterhoeg";
|
||||
githubId = 722550;
|
||||
name = "Peter Hoeg";
|
||||
|
@ -9646,6 +9693,7 @@
|
|||
};
|
||||
rnhmjoj = {
|
||||
email = "rnhmjoj@inventati.org";
|
||||
matrix = "@rnhmjoj:maxwell.ydns.eu";
|
||||
github = "rnhmjoj";
|
||||
githubId = 2817565;
|
||||
name = "Michele Guerini Rocco";
|
||||
|
@ -9747,6 +9795,7 @@
|
|||
};
|
||||
roosemberth = {
|
||||
email = "roosembert.palacios+nixpkgs@posteo.ch";
|
||||
matrix = "@roosemberth:orbstheorem.ch";
|
||||
github = "roosemberth";
|
||||
githubId = 3621083;
|
||||
name = "Roosembert (Roosemberth) Palacios";
|
||||
|
@ -9887,6 +9936,7 @@
|
|||
};
|
||||
ryantm = {
|
||||
email = "ryan@ryantm.com";
|
||||
matrix = "@ryantm:matrix.org";
|
||||
github = "ryantm";
|
||||
githubId = 4804;
|
||||
name = "Ryan Mulligan";
|
||||
|
@ -10788,6 +10838,12 @@
|
|||
githubId = 1181362;
|
||||
name = "Stefan Junker";
|
||||
};
|
||||
stevenroose = {
|
||||
email = "github@stevenroose.org";
|
||||
github = "stevenroose";
|
||||
githubId = 853468;
|
||||
name = "Steven Roose";
|
||||
};
|
||||
stianlagstad = {
|
||||
email = "stianlagstad@gmail.com";
|
||||
github = "stianlagstad";
|
||||
|
@ -11668,6 +11724,13 @@
|
|||
fingerprint = "EE59 5E29 BB5B F2B3 5ED2 3F1C D276 FF74 6700 7335";
|
||||
}];
|
||||
};
|
||||
uniquepointer = {
|
||||
email = "uniquepointer@mailbox.org";
|
||||
matrix = "@uniquepointer:matrix.org";
|
||||
github = "uniquepointer";
|
||||
githubId = 71751817;
|
||||
name = "uniquepointer";
|
||||
};
|
||||
unode = {
|
||||
email = "alves.rjc@gmail.com";
|
||||
matrix = "@renato_alves:matrix.org";
|
||||
|
@ -12054,6 +12117,22 @@
|
|||
githubId = 9002575;
|
||||
name = "Weihua Lu";
|
||||
};
|
||||
welteki = {
|
||||
email = "welteki@pm.me";
|
||||
github = "welteki";
|
||||
githubId = 16267532;
|
||||
name = "Han Verstraete";
|
||||
keys = [{
|
||||
longkeyid = "rsa4096/0x11F7BAEA856743FF";
|
||||
fingerprint = "2145 955E 3F5E 0C95 3458 41B5 11F7 BAEA 8567 43FF";
|
||||
}];
|
||||
};
|
||||
wentasah = {
|
||||
name = "Michal Sojka";
|
||||
email = "wsh@2x.cz";
|
||||
github = "wentasah";
|
||||
githubId = 140542;
|
||||
};
|
||||
wheelsandmetal = {
|
||||
email = "jakob@schmutz.co.uk";
|
||||
github = "wheelsandmetal";
|
||||
|
@ -12876,6 +12955,12 @@
|
|||
fingerprint = "61AE D40F 368B 6F26 9DAE 3892 6861 6B2D 8AC4 DCC5";
|
||||
}];
|
||||
};
|
||||
zbioe = {
|
||||
name = "Iury Fukuda";
|
||||
email = "zbioe@protonmail.com";
|
||||
github = "zbioe";
|
||||
githubId = 7332055;
|
||||
};
|
||||
zenithal = {
|
||||
name = "zenithal";
|
||||
email = "i@zenithal.me";
|
||||
|
|
|
@ -145,6 +145,7 @@ with lib.maintainers; {
|
|||
|
||||
jitsi = {
|
||||
members = [
|
||||
cleeyv
|
||||
petabyteboy
|
||||
ryantm
|
||||
yuka
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
<section xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-release-21.11">
|
||||
<title>Release 21.11 (“?”, 2021.11/??)</title>
|
||||
<title>Release 21.11 (“Porcupine”, 2021.11/??)</title>
|
||||
<para>
|
||||
In addition to numerous new and upgraded packages, this release has
|
||||
the following highlights:
|
||||
|
@ -130,6 +130,14 @@
|
|||
<link xlink:href="options.html#opt-services.geoipupdate.enable">services.geoipupdate</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/jitsi/jibri">Jibri</link>,
|
||||
a service for recording or streaming a Jitsi Meet conference.
|
||||
Available as
|
||||
<link xlink:href="options.html#opt-services.jibri.enable">services.jibri</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://www.isc.org/kea/">Kea</link>, ISC's
|
||||
|
@ -144,6 +152,14 @@
|
|||
<link xlink:href="options.html#opt-services.owncast">services.owncast</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://joinpeertube.org/">PeerTube</link>,
|
||||
developed by Framasoft, is the free and decentralized
|
||||
alternative to video platforms. Available at
|
||||
<link xlink:href="options.html#opt-services.peertube">services.peertube</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://sr.ht">sourcehut</link>, a
|
||||
|
@ -357,6 +373,14 @@
|
|||
<link linkend="opt-services.multipath.enable">services.multipath</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://www.seafile.com/en/home/">seafile</link>,
|
||||
open-source file syncing & sharing software. Available
|
||||
as
|
||||
<link xlink:href="options.html#opt-services.seafile.enable">services.seafile</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-incompatibilities">
|
||||
|
@ -1129,6 +1153,29 @@ Superuser created successfully.
|
|||
would be parsed as 3 parameters.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>coursier</literal> package’s binary was renamed
|
||||
from <literal>coursier</literal> to <literal>cs</literal>.
|
||||
Completions which haven’t worked for a while should now work
|
||||
with the renamed binary. To keep using
|
||||
<literal>coursier</literal>, you can create a shell alias.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>services.mosquitto</literal> module has been
|
||||
rewritten to support multiple listeners and per-listener
|
||||
configuration. Module configurations from previous releases
|
||||
will no longer work and must be updated.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Nextcloud 20 (<literal>pkgs.nextcloud20</literal>) has been
|
||||
dropped because it was EOLed by upstream in 2021-10.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-notable-changes">
|
||||
|
@ -1596,6 +1643,16 @@ Superuser created successfully.
|
|||
</listitem>
|
||||
</itemizedlist>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>cawbird</literal> Twitter client now uses its own
|
||||
API keys to count as a different application than upstream
|
||||
builds. This is done to evade application-level rate limiting.
|
||||
While existing accounts continue to work, users may want to
|
||||
remove and re-register their account in the client to enjoy a
|
||||
better user experience and benefit from this change.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
</section>
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
# Release 21.11 (“?”, 2021.11/??) {#sec-release-21.11}
|
||||
# Release 21.11 (“Porcupine”, 2021.11/??) {#sec-release-21.11}
|
||||
|
||||
In addition to numerous new and upgraded packages, this release has the following highlights:
|
||||
|
||||
|
@ -43,10 +43,14 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- [geoipupdate](https://github.com/maxmind/geoipupdate), a GeoIP database updater from MaxMind. Available as [services.geoipupdate](options.html#opt-services.geoipupdate.enable).
|
||||
|
||||
- [Jibri](https://github.com/jitsi/jibri), a service for recording or streaming a Jitsi Meet conference. Available as [services.jibri](options.html#opt-services.jibri.enable).
|
||||
|
||||
- [Kea](https://www.isc.org/kea/), ISC's 2nd generation DHCP and DDNS server suite. Available at [services.kea](options.html#opt-services.kea).
|
||||
|
||||
- [owncast](https://owncast.online/), self-hosted video live streaming solution. Available at [services.owncast](options.html#opt-services.owncast).
|
||||
|
||||
- [PeerTube](https://joinpeertube.org/), developed by Framasoft, is the free and decentralized alternative to video platforms. Available at [services.peertube](options.html#opt-services.peertube).
|
||||
|
||||
- [sourcehut](https://sr.ht), a collection of tools useful for software development. Available as [services.sourcehut](options.html#opt-services.sourcehut.enable).
|
||||
|
||||
- [ucarp](https://download.pureftpd.org/pub/ucarp/README), a userspace implementation of the Common Address Redundancy Protocol (CARP). Available as [networking.ucarp](options.html#opt-networking.ucarp.enable).
|
||||
|
@ -110,6 +114,8 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- [multipath](https://github.com/opensvc/multipath-tools), the device mapper multipath (DM-MP) daemon. Available as [services.multipath](#opt-services.multipath.enable).
|
||||
|
||||
- [seafile](https://www.seafile.com/en/home/), open-source file syncing & sharing software. Available as [services.seafile](options.html#opt-services.seafile.enable).
|
||||
|
||||
## Backward Incompatibilities {#sec-release-21.11-incompatibilities}
|
||||
|
||||
- The `services.wakeonlan` option was removed, and replaced with `networking.interfaces.<name>.wakeOnLan`.
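A hedged sketch of the replacement mentioned above (the interface name is illustrative):

```nix
# Instead of the removed services.wakeonlan option:
networking.interfaces.enp3s0.wakeOnLan.enable = true;
```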
|
||||
|
@ -349,6 +355,13 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- `boot.kernelParams` now only accepts one command line parameter per string. This change is aimed to reduce common mistakes like "param = 12", which would be parsed as 3 parameters.
|
||||
|
||||
- The `coursier` package's binary was renamed from `coursier` to `cs`. Completions which haven't worked for a while should now work with the renamed binary. To keep using `coursier`, you can create a shell alias.
|
||||
|
||||
- The `services.mosquitto` module has been rewritten to support multiple listeners and per-listener configuration.
|
||||
Module configurations from previous releases will no longer work and must be updated.
|
||||
|
||||
- Nextcloud 20 (`pkgs.nextcloud20`) has been dropped because it was EOLed by upstream in 2021-10.
|
||||
|
||||
## Other Notable Changes {#sec-release-21.11-notable-changes}
|
||||
|
||||
|
||||
|
@ -457,3 +470,5 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
- `virtualisation.libvirtd.qemu*` options (e.g.: `virtualisation.libvirtd.qemuRunAsRoot`) were moved to [`virtualisation.libvirtd.qemu`](options.html#opt-virtualisation.libvirtd.qemu) submodule,
|
||||
- software TPM1/TPM2 support (e.g.: Windows 11 guests) ([`virtualisation.libvirtd.qemu.swtpm`](options.html#opt-virtualisation.libvirtd.qemu.swtpm)),
|
||||
- custom OVMF package (e.g.: `pkgs.OVMFFull` with HTTP, CSM and Secure Boot support) ([`virtualisation.libvirtd.qemu.ovmf.package`](options.html#opt-virtualisation.libvirtd.qemu.ovmf.package)).
|
||||
|
||||
- The `cawbird` Twitter client now uses its own API keys to count as a different application than upstream builds. This is done to evade application-level rate limiting. While existing accounts continue to work, users may want to remove and re-register their account in the client to enjoy a better user experience and benefit from this change.
|
||||
|
|
|
@ -6,9 +6,8 @@ from xml.sax.saxutils import XMLGenerator
|
|||
from colorama import Style
|
||||
import queue
|
||||
import io
|
||||
import _thread
|
||||
import threading
|
||||
import argparse
|
||||
import atexit
|
||||
import base64
|
||||
import codecs
|
||||
import os
|
||||
|
@ -405,13 +404,14 @@ class Machine:
|
|||
keep_vm_state: bool
|
||||
allow_reboot: bool
|
||||
|
||||
process: Optional[subprocess.Popen] = None
|
||||
pid: Optional[int] = None
|
||||
monitor: Optional[socket.socket] = None
|
||||
shell: Optional[socket.socket] = None
|
||||
process: Optional[subprocess.Popen]
|
||||
pid: Optional[int]
|
||||
monitor: Optional[socket.socket]
|
||||
shell: Optional[socket.socket]
|
||||
serial_thread: Optional[threading.Thread]
|
||||
|
||||
booted: bool = False
|
||||
connected: bool = False
|
||||
booted: bool
|
||||
connected: bool
|
||||
# Store last serial console lines for use
|
||||
# of wait_for_console_text
|
||||
last_lines: Queue = Queue()
|
||||
|
@ -444,6 +444,15 @@ class Machine:
|
|||
self.cleanup_statedir()
|
||||
self.state_dir.mkdir(mode=0o700, exist_ok=True)
|
||||
|
||||
self.process = None
|
||||
self.pid = None
|
||||
self.monitor = None
|
||||
self.shell = None
|
||||
self.serial_thread = None
|
||||
|
||||
self.booted = False
|
||||
self.connected = False
|
||||
|
||||
@staticmethod
|
||||
def create_startcommand(args: Dict[str, str]) -> StartCommand:
|
||||
rootlog.warning(
|
||||
|
@ -921,7 +930,8 @@ class Machine:
|
|||
self.last_lines.put(line)
|
||||
self.log_serial(line)
|
||||
|
||||
_thread.start_new_thread(process_serial_output, ())
|
||||
self.serial_thread = threading.Thread(target=process_serial_output)
|
||||
self.serial_thread.start()
|
||||
|
||||
self.wait_for_monitor_prompt()
|
||||
|
||||
|
@ -1021,9 +1031,12 @@ class Machine:
|
|||
assert self.process
|
||||
assert self.shell
|
||||
assert self.monitor
|
||||
assert self.serial_thread
|
||||
|
||||
self.process.terminate()
|
||||
self.shell.close()
|
||||
self.monitor.close()
|
||||
self.serial_thread.join()
|
||||
|
||||
|
||||
class VLan:
|
||||
|
@ -1114,8 +1127,10 @@ class Driver:
|
|||
for cmd in cmd(start_scripts)
|
||||
]
|
||||
|
||||
@atexit.register
|
||||
def clean_up() -> None:
|
||||
def __enter__(self) -> "Driver":
|
||||
return self
|
||||
|
||||
def __exit__(self, *_: Any) -> None:
|
||||
with rootlog.nested("cleanup"):
|
||||
for machine in self.machines:
|
||||
machine.release()
|
||||
|
@ -1293,10 +1308,9 @@ if __name__ == "__main__":
|
|||
if not args.keep_vm_state:
|
||||
rootlog.info("Machine state will be reset. To keep it, pass --keep-vm-state")
|
||||
|
||||
driver = Driver(
|
||||
with Driver(
|
||||
args.start_scripts, args.vlans, args.testscript.read_text(), args.keep_vm_state
|
||||
)
|
||||
|
||||
) as driver:
|
||||
if args.interactive:
|
||||
ptpython.repl.embed(driver.test_symbols(), {})
|
||||
else:
|
||||
|
|
|
@ -116,7 +116,7 @@ in
|
|||
{ console.keyMap = with config.services.xserver;
|
||||
mkIf cfg.useXkbConfig
|
||||
(pkgs.runCommand "xkb-console-keymap" { preferLocalBuild = true; } ''
|
||||
'${pkgs.ckbcomp}/bin/ckbcomp' \
|
||||
'${pkgs.buildPackages.ckbcomp}/bin/ckbcomp' \
|
||||
${optionalString (config.environment.sessionVariables ? XKB_CONFIG_ROOT)
|
||||
"-I${config.environment.sessionVariables.XKB_CONFIG_ROOT}"
|
||||
} \
|
||||
|
|
18 third_party/nixpkgs/nixos/modules/hardware/gkraken.nix vendored Normal file
|
@ -0,0 +1,18 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.hardware.gkraken;
|
||||
in
|
||||
{
|
||||
options.hardware.gkraken = {
|
||||
enable = mkEnableOption "gkraken's udev rules for NZXT AIO liquid coolers";
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
services.udev.packages = with pkgs; [
|
||||
gkraken
|
||||
];
|
||||
};
|
||||
}
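Enabling the module added above would look something like this in a NixOS configuration (a sketch, not part of this change):

```nix
{
  # Installs gkraken's udev rules for NZXT AIO liquid coolers
  hardware.gkraken.enable = true;
}
```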
|
|
@ -48,6 +48,7 @@
|
|||
./hardware/corectrl.nix
|
||||
./hardware/digitalbitbox.nix
|
||||
./hardware/device-tree.nix
|
||||
./hardware/gkraken.nix
|
||||
./hardware/i2c.nix
|
||||
./hardware/sensor/hddtemp.nix
|
||||
./hardware/sensor/iio.nix
|
||||
|
@ -755,6 +756,7 @@
|
|||
./services/networking/iscsi/root-initiator.nix
|
||||
./services/networking/iscsi/target.nix
|
||||
./services/networking/iwd.nix
|
||||
./services/networking/jibri/default.nix
|
||||
./services/networking/jicofo.nix
|
||||
./services/networking/jitsi-videobridge.nix
|
||||
./services/networking/kea.nix
|
||||
|
@ -836,6 +838,7 @@
|
|||
./services/networking/rpcbind.nix
|
||||
./services/networking/rxe.nix
|
||||
./services/networking/sabnzbd.nix
|
||||
./services/networking/seafile.nix
|
||||
./services/networking/searx.nix
|
||||
./services/networking/skydns.nix
|
||||
./services/networking/shadowsocks.nix
|
||||
|
@ -998,6 +1001,7 @@
|
|||
./services/web-apps/nexus.nix
|
||||
./services/web-apps/node-red.nix
|
||||
./services/web-apps/pict-rs.nix
|
||||
./services/web-apps/peertube.nix
|
||||
./services/web-apps/plantuml-server.nix
|
||||
./services/web-apps/plausible.nix
|
||||
./services/web-apps/pgpkeyserver-lite.nix
|
||||
|
|
|
@ -4,7 +4,9 @@
|
|||
|
||||
with lib;
|
||||
|
||||
{
|
||||
let cfg = config.programs.file-roller;
|
||||
|
||||
in {
|
||||
|
||||
# Added 2019-08-09
|
||||
imports = [
|
||||
|
@ -21,6 +23,13 @@ with lib;
|
|||
|
||||
enable = mkEnableOption "File Roller, an archive manager for GNOME";
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.gnome.file-roller;
|
||||
defaultText = literalExpression "pkgs.gnome.file-roller";
|
||||
description = "File Roller derivation to use.";
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
};
|
||||
|
@ -28,11 +37,11 @@ with lib;
|
|||
|
||||
###### implementation
|
||||
|
||||
config = mkIf config.programs.file-roller.enable {
|
||||
config = mkIf cfg.enable {
|
||||
|
||||
environment.systemPackages = [ pkgs.gnome.file-roller ];
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
|
||||
services.dbus.packages = [ pkgs.gnome.file-roller ];
|
||||
services.dbus.packages = [ cfg.package ];
|
||||
|
||||
};
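A brief sketch of the new `package` option introduced above (the explicit value is the default and is shown only for illustration):

```nix
programs.file-roller = {
  enable = true;
  package = pkgs.gnome.file-roller;  # e.g. swap in an overridden File Roller here
};
```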
|
||||
|
||||
|
|
|
@ -192,6 +192,14 @@ let
|
|||
++ data.extraLegoRenewFlags
|
||||
);
|
||||
|
||||
# We need to collect all the ACME webroots to grant them write
|
||||
# access in the systemd service.
|
||||
webroots =
|
||||
lib.remove null
|
||||
(lib.unique
|
||||
(builtins.map
|
||||
(certAttrs: certAttrs.webroot)
|
||||
(lib.attrValues config.security.acme.certs)));
|
||||
in {
|
||||
inherit accountHash cert selfsignedDeps;
|
||||
|
||||
|
@ -288,6 +296,8 @@ let
|
|||
"acme/.lego/accounts/${accountHash}"
|
||||
];
|
||||
|
||||
ReadWritePaths = commonServiceConfig.ReadWritePaths ++ webroots;
|
||||
|
||||
# Needs to be space separated, but can't use a multiline string because that'll include newlines
|
||||
BindPaths = [
|
||||
"${accountDir}:/tmp/accounts"
|
||||
|
|
|
@ -428,7 +428,7 @@ let
|
|||
${optionalString config.security.pam.enableEcryptfs
|
||||
"auth optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so unwrap"}
|
||||
${optionalString cfg.pamMount
|
||||
"auth optional ${pkgs.pam_mount}/lib/security/pam_mount.so"}
|
||||
"auth optional ${pkgs.pam_mount}/lib/security/pam_mount.so disable_interactive"}
|
||||
${optionalString cfg.enableKwallet
|
||||
("auth optional ${pkgs.plasma5Packages.kwallet-pam}/lib/security/pam_kwallet5.so" +
|
||||
" kwalletd=${pkgs.plasma5Packages.kwallet.bin}/bin/kwalletd5")}
|
||||
|
@ -489,7 +489,7 @@ let
|
|||
${optionalString config.security.pam.enableEcryptfs
|
||||
"session optional ${pkgs.ecryptfs}/lib/security/pam_ecryptfs.so"}
|
||||
${optionalString cfg.pamMount
|
||||
"session optional ${pkgs.pam_mount}/lib/security/pam_mount.so"}
|
||||
"session optional ${pkgs.pam_mount}/lib/security/pam_mount.so disable_interactive"}
|
||||
${optionalString use_ldap
|
||||
"session optional ${pam_ldap}/lib/security/pam_ldap.so"}
|
||||
${optionalString config.services.sssd.enable
|
||||
|
|
|
@ -42,12 +42,16 @@ let
|
|||
${cfg.postInit}
|
||||
fi
|
||||
'' + ''
|
||||
(
|
||||
set -o pipefail
|
||||
${optionalString (cfg.dumpCommand != null) ''${escapeShellArg cfg.dumpCommand} | \''}
|
||||
borg create $extraArgs \
|
||||
--compression ${cfg.compression} \
|
||||
--exclude-from ${mkExcludeFile cfg} \
|
||||
$extraCreateArgs \
|
||||
"::$archiveName$archiveSuffix" \
|
||||
${escapeShellArgs cfg.paths}
|
||||
${if cfg.paths == null then "-" else escapeShellArgs cfg.paths}
|
||||
)
|
||||
'' + optionalString cfg.appendFailedSuffix ''
|
||||
borg rename $extraArgs \
|
||||
"::$archiveName$archiveSuffix" "$archiveName"
|
||||
|
@ -182,6 +186,14 @@ let
|
|||
+ " without at least one public key";
|
||||
};
|
||||
|
||||
mkSourceAssertions = name: cfg: {
|
||||
assertion = count isNull [ cfg.dumpCommand cfg.paths ] == 1;
|
||||
message = ''
|
||||
Exactly one of borgbackup.jobs.${name}.paths or borgbackup.jobs.${name}.dumpCommand
|
||||
must be set.
|
||||
'';
|
||||
};
|
||||
|
||||
mkRemovableDeviceAssertions = name: cfg: {
|
||||
assertion = !(isLocalPath cfg.repo) -> !cfg.removableDevice;
|
||||
message = ''
|
||||
|
@ -240,11 +252,25 @@ in {
|
|||
options = {
|
||||
|
||||
paths = mkOption {
|
||||
type = with types; coercedTo str lib.singleton (listOf str);
|
||||
description = "Path(s) to back up.";
|
||||
type = with types; nullOr (coercedTo str lib.singleton (listOf str));
|
||||
default = null;
|
||||
description = ''
|
||||
Path(s) to back up.
|
||||
Mutually exclusive with <option>dumpCommand</option>.
|
||||
'';
|
||||
example = "/home/user";
|
||||
};
|
||||
|
||||
dumpCommand = mkOption {
|
||||
type = with types; nullOr path;
|
||||
default = null;
|
||||
description = ''
|
||||
Backup the stdout of this program instead of filesystem paths.
|
||||
Mutually exclusive with <option>paths</option>.
|
||||
'';
|
||||
example = "/path/to/createZFSsend.sh";
|
||||
};
|
||||
|
||||
repo = mkOption {
|
||||
type = types.str;
|
||||
description = "Remote or local repository to back up to.";
|
||||
|
@ -657,6 +683,7 @@ in {
|
|||
assertions =
|
||||
mapAttrsToList mkPassAssertion jobs
|
||||
++ mapAttrsToList mkKeysAssertion repos
|
||||
++ mapAttrsToList mkSourceAssertions jobs
|
||||
++ mapAttrsToList mkRemovableDeviceAssertions jobs;
|
||||
|
||||
system.activationScripts = mapAttrs' mkActivationScript jobs;
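A hedged sketch of a job using the new `dumpCommand` option and nullable `paths` introduced above (script path, repository, and schedule are illustrative):

```nix
services.borgbackup.jobs.postgres = {
  # Back up the stdout of this script instead of filesystem paths;
  # exactly one of dumpCommand and paths may be set.
  dumpCommand = "/etc/borg/dump-postgres.sh";
  repo = "/var/backup/borg-postgres";
  encryption.mode = "none";
  startAt = "daily";
};
```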
|
||||
|
|
|
@ -11,7 +11,7 @@ in
|
|||
description = ''
|
||||
Periodic backups to create with Restic.
|
||||
'';
|
||||
type = types.attrsOf (types.submodule ({ name, ... }: {
|
||||
type = types.attrsOf (types.submodule ({ config, name, ... }: {
|
||||
options = {
|
||||
passwordFile = mkOption {
|
||||
type = types.str;
|
||||
|
@ -21,6 +21,17 @@ in
|
|||
example = "/etc/nixos/restic-password";
|
||||
};
|
||||
|
||||
environmentFile = mkOption {
|
||||
type = with types; nullOr str;
|
||||
# added on 2021-08-28, s3CredentialsFile should
|
||||
# be removed in the future (+ remember the warning)
|
||||
default = config.s3CredentialsFile;
|
||||
description = ''
|
||||
file containing the credentials to access the repository, in the
|
||||
format of an EnvironmentFile as described by systemd.exec(5)
|
||||
'';
|
||||
};
|
||||
|
||||
s3CredentialsFile = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
|
@ -212,6 +223,7 @@ in
|
|||
};
|
||||
|
||||
config = {
|
||||
warnings = mapAttrsToList (n: v: "services.restic.backups.${n}.s3CredentialsFile is deprecated, please use services.restic.backups.${n}.environmentFile instead.") (filterAttrs (n: v: v.s3CredentialsFile != null) config.services.restic.backups);
|
||||
systemd.services =
|
||||
mapAttrs' (name: backup:
|
||||
let
|
||||
|
@ -251,8 +263,8 @@ in
|
|||
RuntimeDirectory = "restic-backups-${name}";
|
||||
CacheDirectory = "restic-backups-${name}";
|
||||
CacheDirectoryMode = "0700";
|
||||
} // optionalAttrs (backup.s3CredentialsFile != null) {
|
||||
EnvironmentFile = backup.s3CredentialsFile;
|
||||
} // optionalAttrs (backup.environmentFile != null) {
|
||||
EnvironmentFile = backup.environmentFile;
|
||||
};
|
||||
} // optionalAttrs (backup.initialize || backup.dynamicFilesFrom != null) {
|
||||
preStart = ''
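A sketch of a backup using the renamed `environmentFile` option (repository URL and file paths are illustrative; the deprecated `s3CredentialsFile` still works but now triggers the warning defined above):

```nix
services.restic.backups.remote = {
  repository = "s3:https://s3.example.com/my-backups";
  passwordFile = "/etc/nixos/restic-password";
  # Credentials in systemd EnvironmentFile format, e.g. AWS_ACCESS_KEY_ID=...
  environmentFile = "/etc/nixos/restic-s3-env";
  paths = [ "/home" ];
};
```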
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
{ hadoop, pkgs }:
|
||||
{ cfg, pkgs, lib }:
|
||||
let
|
||||
propertyXml = name: value: ''
|
||||
<property>
|
||||
|
@ -13,19 +13,31 @@ let
|
|||
${builtins.concatStringsSep "\n" (pkgs.lib.mapAttrsToList propertyXml properties)}
|
||||
</configuration>
|
||||
'';
|
||||
cfgLine = name: value: ''
|
||||
${name}=${builtins.toString value}
|
||||
'';
|
||||
cfgFile = fileName: properties: pkgs.writeTextDir fileName ''
|
||||
# generated by NixOS
|
||||
${builtins.concatStringsSep "" (pkgs.lib.mapAttrsToList cfgLine properties)}
|
||||
'';
|
||||
userFunctions = ''
|
||||
hadoop_verify_logdir() {
|
||||
echo Skipping verification of log directory
|
||||
}
|
||||
'';
|
||||
hadoopEnv = ''
|
||||
export HADOOP_LOG_DIR=/tmp/hadoop/$USER
|
||||
'';
|
||||
in
|
||||
pkgs.buildEnv {
|
||||
name = "hadoop-conf";
|
||||
paths = [
|
||||
(siteXml "core-site.xml" hadoop.coreSite)
|
||||
(siteXml "hdfs-site.xml" hadoop.hdfsSite)
|
||||
(siteXml "mapred-site.xml" hadoop.mapredSite)
|
||||
(siteXml "yarn-site.xml" hadoop.yarnSite)
|
||||
(pkgs.writeTextDir "hadoop-user-functions.sh" userFunctions)
|
||||
];
|
||||
}
|
||||
pkgs.runCommand "hadoop-conf" {} ''
|
||||
mkdir -p $out/
|
||||
cp ${siteXml "core-site.xml" cfg.coreSite}/* $out/
|
||||
cp ${siteXml "hdfs-site.xml" cfg.hdfsSite}/* $out/
|
||||
cp ${siteXml "mapred-site.xml" cfg.mapredSite}/* $out/
|
||||
cp ${siteXml "yarn-site.xml" cfg.yarnSite}/* $out/
|
||||
cp ${cfgFile "container-executor.cfg" cfg.containerExecutorCfg}/* $out/
|
||||
cp ${pkgs.writeTextDir "hadoop-user-functions.sh" userFunctions}/* $out/
|
||||
cp ${pkgs.writeTextDir "hadoop-env.sh" hadoopEnv}/* $out/
|
||||
cp ${cfg.log4jProperties} $out/log4j.properties
|
||||
${lib.concatMapStringsSep "\n" (dir: "cp -r ${dir}/* $out/") cfg.extraConfDirs}
|
||||
''
|
||||
|
|
|
@ -1,5 +1,7 @@
|
|||
{ config, lib, pkgs, ...}:
|
||||
|
||||
let
|
||||
cfg = config.services.hadoop;
|
||||
in
|
||||
with lib;
|
||||
{
|
||||
imports = [ ./yarn.nix ./hdfs.nix ];
|
||||
|
@ -17,7 +19,9 @@ with lib;
|
|||
};
|
||||
|
||||
hdfsSite = mkOption {
|
||||
default = {};
|
||||
default = {
|
||||
"dfs.namenode.rpc-bind-host" = "0.0.0.0";
|
||||
};
|
||||
type = types.attrsOf types.anything;
|
||||
example = literalExpression ''
|
||||
{
|
||||
|
@ -28,27 +32,81 @@ with lib;
|
|||
};
|
||||
|
||||
mapredSite = mkOption {
|
||||
default = {};
|
||||
default = {
|
||||
"mapreduce.framework.name" = "yarn";
|
||||
"yarn.app.mapreduce.am.env" = "HADOOP_MAPRED_HOME=${cfg.package}/lib/${cfg.package.untarDir}";
|
||||
"mapreduce.map.env" = "HADOOP_MAPRED_HOME=${cfg.package}/lib/${cfg.package.untarDir}";
|
||||
"mapreduce.reduce.env" = "HADOOP_MAPRED_HOME=${cfg.package}/lib/${cfg.package.untarDir}";
|
||||
};
|
||||
type = types.attrsOf types.anything;
|
||||
example = literalExpression ''
|
||||
{
|
||||
"mapreduce.map.cpu.vcores" = "1";
|
||||
options.services.hadoop.mapredSite.default // {
|
||||
"mapreduce.map.java.opts" = "-Xmx900m -XX:+UseParallelGC";
|
||||
}
|
||||
'';
|
||||
description = "Hadoop mapred-site.xml definition";
|
||||
};
|
||||
|
||||
yarnSite = mkOption {
|
||||
default = {};
|
||||
default = {
|
||||
"yarn.nodemanager.admin-env" = "PATH=$PATH";
|
||||
"yarn.nodemanager.aux-services" = "mapreduce_shuffle";
|
||||
"yarn.nodemanager.aux-services.mapreduce_shuffle.class" = "org.apache.hadoop.mapred.ShuffleHandler";
|
||||
"yarn.nodemanager.bind-host" = "0.0.0.0";
|
||||
"yarn.nodemanager.container-executor.class" = "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor";
|
||||
"yarn.nodemanager.env-whitelist" = "JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,LANG,TZ";
|
||||
"yarn.nodemanager.linux-container-executor.group" = "hadoop";
|
||||
"yarn.nodemanager.linux-container-executor.path" = "/run/wrappers/yarn-nodemanager/bin/container-executor";
|
||||
"yarn.nodemanager.log-dirs" = "/var/log/hadoop/yarn/nodemanager";
|
||||
"yarn.resourcemanager.bind-host" = "0.0.0.0";
|
||||
"yarn.resourcemanager.scheduler.class" = "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler";
|
||||
};
|
||||
type = types.attrsOf types.anything;
|
||||
example = literalExpression ''
|
||||
{
|
||||
"yarn.resourcemanager.ha.id" = "resourcemanager1";
|
||||
options.services.hadoop.yarnSite.default // {
|
||||
"yarn.resourcemanager.hostname" = "''${config.networking.hostName}";
|
||||
}
|
||||
'';
|
||||
description = "Hadoop yarn-site.xml definition";
|
||||
};
|
||||
|
||||
log4jProperties = mkOption {
|
||||
default = "${cfg.package}/lib/${cfg.package.untarDir}/etc/hadoop/log4j.properties";
|
||||
type = types.path;
|
||||
example = literalExpression ''
|
||||
"''${pkgs.hadoop}/lib/''${pkgs.hadoop.untarDir}/etc/hadoop/log4j.properties";
|
||||
'';
|
||||
description = "log4j.properties file added to HADOOP_CONF_DIR";
|
||||
};
|
||||
|
||||
containerExecutorCfg = mkOption {
|
||||
default = {
|
||||
# must be the same as yarn.nodemanager.linux-container-executor.group in yarnSite
|
||||
"yarn.nodemanager.linux-container-executor.group"="hadoop";
|
||||
"min.user.id"=1000;
|
||||
"feature.terminal.enabled"=1;
|
||||
};
|
||||
type = types.attrsOf types.anything;
|
||||
example = literalExpression ''
|
||||
options.services.hadoop.containerExecutorCfg.default // {
|
||||
"feature.terminal.enabled" = 0;
|
||||
}
|
||||
'';
|
||||
description = "Yarn container-executor.cfg definition";
|
||||
};
|
||||
|
||||
extraConfDirs = mkOption {
|
||||
default = [];
|
||||
type = types.listOf types.path;
|
||||
example = literalExpression ''
|
||||
[
|
||||
./extraHDFSConfs
|
||||
./extraYARNConfs
|
||||
]
|
||||
'';
|
||||
description = "Directories containing additional config files to be added to HADOOP_CONF_DIR";
|
||||
};
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.hadoop;
|
||||
|
@ -64,6 +122,12 @@ with lib;
|
|||
users.groups.hadoop = {
|
||||
gid = config.ids.gids.hadoop;
|
||||
};
|
||||
environment = {
|
||||
systemPackages = [ cfg.package ];
|
||||
etc."hadoop-conf".source = let
|
||||
hadoopConf = "${import ./conf.nix { inherit cfg pkgs lib; }}/";
|
||||
in "${hadoopConf}";
|
||||
};
|
||||
})
|
||||
|
||||
];
|
||||
|
|
|
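To make the intended workflow concrete, here is a minimal sketch of a host configuration using these options. The override-by-merging pattern mirrors the literalExpression examples above; the hostname, JVM flags and extra config directory are illustrative assumptions, not values taken from the module:

{ options, ... }:
{
  services.hadoop = {
    # merge with the module's generated defaults instead of replacing them
    yarnSite = options.services.hadoop.yarnSite.default // {
      "yarn.resourcemanager.hostname" = "rm1.example.org"; # assumed hostname
    };
    mapredSite = options.services.hadoop.mapredSite.default // {
      "mapreduce.map.java.opts" = "-Xmx900m -XX:+UseParallelGC";
    };
    extraConfDirs = [ ./extraYARNConfs ]; # assumed local directory with extra *.xml files
  };
}
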
@@ -1,25 +1,55 @@
{ config, lib, pkgs, ...}:
with lib;
let
  cfg = config.services.hadoop;
  hadoopConf = import ./conf.nix { hadoop = cfg; pkgs = pkgs; };
  hadoopConf = "${import ./conf.nix { inherit cfg pkgs lib; }}/";
  restartIfChanged = mkOption {
    type = types.bool;
    description = ''
      Automatically restart the service on config change.
      This can be set to false to defer restarts on clusters running critical applications.
      Please consider the security implications of inadvertently running an older version,
      and the possibility of unexpected behavior caused by inconsistent versions across a cluster when disabling this option.
    '';
    default = false;
  };
in
with lib;
{
  options.services.hadoop.hdfs = {
    namenode.enabled = mkOption {
    namenode = {
      enabled = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Whether to run the Hadoop YARN NameNode
          Whether to run the HDFS NameNode
        '';
      };
    datanode.enabled = mkOption {
      inherit restartIfChanged;
      openFirewall = mkOption {
        type = types.bool;
        default = true;
        description = ''
          Open firewall ports for namenode
        '';
      };
    };
    datanode = {
      enabled = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Whether to run the Hadoop YARN DataNode
          Whether to run the HDFS DataNode
        '';
      };
      inherit restartIfChanged;
      openFirewall = mkOption {
        type = types.bool;
        default = true;
        description = ''
          Open firewall ports for datanode
        '';
      };
    };
  };

  config = mkMerge [

@@ -27,10 +57,7 @@ with lib;
    systemd.services.hdfs-namenode = {
      description = "Hadoop HDFS NameNode";
      wantedBy = [ "multi-user.target" ];

      environment = {
        HADOOP_HOME = "${cfg.package}";
      };
      inherit (cfg.hdfs.namenode) restartIfChanged;

      preStart = ''
        ${cfg.package}/bin/hdfs --config ${hadoopConf} namenode -format -nonInteractive || true

@@ -40,24 +67,34 @@ with lib;
        User = "hdfs";
        SyslogIdentifier = "hdfs-namenode";
        ExecStart = "${cfg.package}/bin/hdfs --config ${hadoopConf} namenode";
        Restart = "always";
      };
    };

    networking.firewall.allowedTCPPorts = (mkIf cfg.hdfs.namenode.openFirewall [
      9870 # namenode.http-address
      8020 # namenode.rpc-address
    ]);
  })
  (mkIf cfg.hdfs.datanode.enabled {
    systemd.services.hdfs-datanode = {
      description = "Hadoop HDFS DataNode";
      wantedBy = [ "multi-user.target" ];

      environment = {
        HADOOP_HOME = "${cfg.package}";
      };
      inherit (cfg.hdfs.datanode) restartIfChanged;

      serviceConfig = {
        User = "hdfs";
        SyslogIdentifier = "hdfs-datanode";
        ExecStart = "${cfg.package}/bin/hdfs --config ${hadoopConf} datanode";
        Restart = "always";
      };
    };

    networking.firewall.allowedTCPPorts = (mkIf cfg.hdfs.datanode.openFirewall [
      9864 # datanode.http.address
      9866 # datanode.address
      9867 # datanode.ipc.address
    ]);
  })
  (mkIf (
    cfg.hdfs.namenode.enabled || cfg.hdfs.datanode.enabled

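A minimal sketch of enabling the HDFS services defined above on a single machine; running both roles on one host is only an assumption made for illustration:

{
  services.hadoop.hdfs = {
    namenode = {
      enabled = true;
      restartIfChanged = false; # defer restarts on a critical cluster
      openFirewall = true;      # opens 9870 (http) and 8020 (rpc)
    };
    datanode.enabled = true;    # opens 9864, 9866 and 9867 while openFirewall stays at its default
  };
}
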
@@ -1,25 +1,63 @@
{ config, lib, pkgs, ...}:
with lib;
let
  cfg = config.services.hadoop;
  hadoopConf = import ./conf.nix { hadoop = cfg; pkgs = pkgs; };
  hadoopConf = "${import ./conf.nix { inherit cfg pkgs lib; }}/";
  restartIfChanged = mkOption {
    type = types.bool;
    description = ''
      Automatically restart the service on config change.
      This can be set to false to defer restarts on clusters running critical applications.
      Please consider the security implications of inadvertently running an older version,
      and the possibility of unexpected behavior caused by inconsistent versions across a cluster when disabling this option.
    '';
    default = false;
  };
in
with lib;
{
  options.services.hadoop.yarn = {
    resourcemanager.enabled = mkOption {
    resourcemanager = {
      enabled = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Whether to run the Hadoop YARN ResourceManager
        '';
      };
    nodemanager.enabled = mkOption {
      inherit restartIfChanged;
      openFirewall = mkOption {
        type = types.bool;
        default = true;
        description = ''
          Open firewall ports for resourcemanager
        '';
      };
    };
    nodemanager = {
      enabled = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Whether to run the Hadoop YARN NodeManager
        '';
      };
      inherit restartIfChanged;
      addBinBash = mkOption {
        type = types.bool;
        default = true;
        description = ''
          Add /bin/bash. This is needed by the linux container executor's launch script.
        '';
      };
      openFirewall = mkOption {
        type = types.bool;
        default = true;
        description = ''
          Open firewall ports for nodemanager.
          Because containers can listen on any ephemeral port, TCP ports 1024–65535 will be opened.
        '';
      };
    };
  };

  config = mkMerge [

@@ -38,36 +76,63 @@ with lib;
    systemd.services.yarn-resourcemanager = {
      description = "Hadoop YARN ResourceManager";
      wantedBy = [ "multi-user.target" ];

      environment = {
        HADOOP_HOME = "${cfg.package}";
      };
      inherit (cfg.yarn.resourcemanager) restartIfChanged;

      serviceConfig = {
        User = "yarn";
        SyslogIdentifier = "yarn-resourcemanager";
        ExecStart = "${cfg.package}/bin/yarn --config ${hadoopConf} " +
                    " resourcemanager";
        Restart = "always";
      };
    };
    networking.firewall.allowedTCPPorts = (mkIf cfg.yarn.resourcemanager.openFirewall [
      8088 # resourcemanager.webapp.address
      8030 # resourcemanager.scheduler.address
      8031 # resourcemanager.resource-tracker.address
      8032 # resourcemanager.address
    ]);
  })

  (mkIf cfg.yarn.nodemanager.enabled {
    # Needed because yarn hardcodes /bin/bash in container start scripts
    # These scripts can't be patched, they are generated at runtime
    systemd.tmpfiles.rules = [
      (mkIf cfg.yarn.nodemanager.addBinBash "L /bin/bash - - - - /run/current-system/sw/bin/bash")
    ];

    systemd.services.yarn-nodemanager = {
      description = "Hadoop YARN NodeManager";
      wantedBy = [ "multi-user.target" ];
      inherit (cfg.yarn.nodemanager) restartIfChanged;

      environment = {
        HADOOP_HOME = "${cfg.package}";
      };
      preStart = ''
        # create log dir
        mkdir -p /var/log/hadoop/yarn/nodemanager
        chown yarn:hadoop /var/log/hadoop/yarn/nodemanager

        # set up setuid container executor binary
        rm -rf /run/wrappers/yarn-nodemanager/ || true
        mkdir -p /run/wrappers/yarn-nodemanager/{bin,etc/hadoop}
        cp ${cfg.package}/lib/${cfg.package.untarDir}/bin/container-executor /run/wrappers/yarn-nodemanager/bin/
        chgrp hadoop /run/wrappers/yarn-nodemanager/bin/container-executor
        chmod 6050 /run/wrappers/yarn-nodemanager/bin/container-executor
        cp ${hadoopConf}/container-executor.cfg /run/wrappers/yarn-nodemanager/etc/hadoop/
      '';

      serviceConfig = {
        User = "yarn";
        SyslogIdentifier = "yarn-nodemanager";
        PermissionsStartOnly = true;
        ExecStart = "${cfg.package}/bin/yarn --config ${hadoopConf} " +
                    " nodemanager";
        Restart = "always";
      };
    };

    networking.firewall.allowedTCPPortRanges = [
      (mkIf (cfg.yarn.nodemanager.openFirewall) {from = 1024; to = 65535;})
    ];
  })

  ];

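Similarly, a hedged sketch of a YARN node built from the options in this file; co-locating the ResourceManager and NodeManager is again an illustrative assumption:

{
  services.hadoop.yarn = {
    resourcemanager.enabled = true;   # opens 8088, 8030, 8031 and 8032 by default
    nodemanager = {
      enabled = true;
      addBinBash = true;              # container start scripts expect /bin/bash
      openFirewall = true;            # opens TCP 1024-65535 for ephemeral container ports
    };
  };
}
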
@@ -19,7 +19,7 @@ in {
    enable = lib.mkEnableOption "Blackfire profiler agent";
    settings = lib.mkOption {
      description = ''
        See https://blackfire.io/docs/configuration/agent
        See https://blackfire.io/docs/up-and-running/configuration/agent
      '';
      type = lib.types.submodule {
        freeformType = with lib.types; attrsOf str;

@@ -53,13 +53,8 @@ in {

    services.blackfire-agent.settings.socket = "unix:///run/${agentSock}";

    systemd.services.blackfire-agent = {
      description = "Blackfire agent";

      serviceConfig = {
        ExecStart = "${pkgs.blackfire}/bin/blackfire-agent";
        RuntimeDirectory = "blackfire";
      };
    };
    systemd.packages = [
      pkgs.blackfire
    ];
  };
}

@@ -28,13 +28,14 @@ in {
    enable = true;
    settings = {
      # You will need to get credentials at https://blackfire.io/my/settings/credentials
      # You can also use other options described in https://blackfire.io/docs/configuration/agent
      # You can also use other options described in https://blackfire.io/docs/up-and-running/configuration/agent
      server-id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX";
      server-token = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
    };
  };

  # Make the agent run on start-up.
  # (WantedBy= from the upstream unit not respected: https://github.com/NixOS/nixpkgs/issues/81138)
  # Alternately, you can start it manually with `systemctl start blackfire-agent`.
  systemd.services.blackfire-agent.wantedBy = [ "phpfpm-foo.service" ];
}</programlisting>

@@ -677,15 +677,13 @@ in {
        RuntimeDirectory = "grafana";
        RuntimeDirectoryMode = "0755";
        # Hardening
        CapabilityBoundingSet = [ "" ];
        AmbientCapabilities = lib.mkIf (cfg.port < 1024) [ "CAP_NET_BIND_SERVICE" ];
        CapabilityBoundingSet = if (cfg.port < 1024) then [ "CAP_NET_BIND_SERVICE" ] else [ "" ];
        DeviceAllow = [ "" ];
        LockPersonality = true;
        MemoryDenyWriteExecute = true;
        NoNewPrivileges = true;
        PrivateDevices = true;
        PrivateTmp = true;
        PrivateUsers = true;
        ProcSubset = "pid";
        ProtectClock = true;
        ProtectControlGroups = true;
        ProtectHome = true;

@@ -701,6 +699,8 @@ in {
        RestrictRealtime = true;
        RestrictSUIDSGID = true;
        SystemCallArchitectures = "native";
        # Upstream grafana is not setting SystemCallFilter for compatibility
        # reasons, see https://github.com/grafana/grafana/pull/40176
        SystemCallFilter = [ "@system-service" "~@privileged" "~@resources" ];
        UMask = "0027";
      };

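The conditional above only grants CAP_NET_BIND_SERVICE when Grafana is configured to bind a privileged port. A hedged sketch of the configuration that would trigger it (the port value is an assumption, not part of this change):

{
  services.grafana = {
    enable = true;
    port = 80; # below 1024, so the unit is allowed CAP_NET_BIND_SERVICE
  };
}
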
third_party/nixpkgs/nixos/modules/services/networking/jibri/default.nix
@@ -0,0 +1,417 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.services.jibri;
|
||||
|
||||
# Copied from the jitsi-videobridge.nix file.
|
||||
toHOCON = x:
|
||||
if isAttrs x && x ? __hocon_envvar then ("\${" + x.__hocon_envvar + "}")
|
||||
else if isAttrs x then "{${ concatStringsSep "," (mapAttrsToList (k: v: ''"${k}":${toHOCON v}'') x) }}"
|
||||
else if isList x then "[${ concatMapStringsSep "," toHOCON x }]"
|
||||
else builtins.toJSON x;
|
||||
|
||||
# We're passing passwords in environment variables that have names generated
|
||||
# from an attribute name, which may not be a valid bash identifier.
|
||||
toVarName = s: "XMPP_PASSWORD_" + stringAsChars (c: if builtins.match "[A-Za-z0-9]" c != null then c else "_") s;
|
||||
|
||||
defaultJibriConfig = {
|
||||
id = "";
|
||||
single-use-mode = false;
|
||||
|
||||
api = {
|
||||
http.external-api-port = 2222;
|
||||
http.internal-api-port = 3333;
|
||||
|
||||
xmpp.environments = flip mapAttrsToList cfg.xmppEnvironments (name: env: {
|
||||
inherit name;
|
||||
|
||||
xmpp-server-hosts = env.xmppServerHosts;
|
||||
xmpp-domain = env.xmppDomain;
|
||||
control-muc = {
|
||||
domain = env.control.muc.domain;
|
||||
room-name = env.control.muc.roomName;
|
||||
nickname = env.control.muc.nickname;
|
||||
};
|
||||
|
||||
control-login = {
|
||||
domain = env.control.login.domain;
|
||||
username = env.control.login.username;
|
||||
password.__hocon_envvar = toVarName "${name}_control";
|
||||
};
|
||||
|
||||
call-login = {
|
||||
domain = env.call.login.domain;
|
||||
username = env.call.login.username;
|
||||
password.__hocon_envvar = toVarName "${name}_call";
|
||||
};
|
||||
|
||||
strip-from-room-domain = env.stripFromRoomDomain;
|
||||
usage-timeout = env.usageTimeout;
|
||||
trust-all-xmpp-certs = env.disableCertificateVerification;
|
||||
});
|
||||
};
|
||||
|
||||
recording = {
|
||||
recordings-directory = "/tmp/recordings";
|
||||
finalize-script = "${cfg.finalizeScript}";
|
||||
};
|
||||
|
||||
streaming.rtmp-allow-list = [ ".*" ];
|
||||
|
||||
chrome.flags = [
|
||||
"--use-fake-ui-for-media-stream"
|
||||
"--start-maximized"
|
||||
"--kiosk"
|
||||
"--enabled"
|
||||
"--disable-infobars"
|
||||
"--autoplay-policy=no-user-gesture-required"
|
||||
]
|
||||
++ lists.optional cfg.ignoreCert
|
||||
"--ignore-certificate-errors";
|
||||
|
||||
|
||||
stats.enable-stats-d = true;
|
||||
webhook.subscribers = [ ];
|
||||
|
||||
jwt-info = { };
|
||||
|
||||
call-status-checks = {
|
||||
no-media-timout = "30 seconds";
|
||||
all-muted-timeout = "10 minutes";
|
||||
default-call-empty-timout = "30 seconds";
|
||||
};
|
||||
};
|
||||
# Allow overriding leaves of the default config despite types.attrs not doing any merging.
|
||||
jibriConfig = recursiveUpdate defaultJibriConfig cfg.config;
|
||||
configFile = pkgs.writeText "jibri.conf" (toHOCON { jibri = jibriConfig; });
|
||||
in
|
||||
{
|
||||
options.services.jibri = with types; {
|
||||
enable = mkEnableOption "Jitsi BRoadcasting Infrastructure. Currently Jibri must be run on a host that is also running <option>services.jitsi-meet.enable</option>, so for most use cases it will be simpler to run <option>services.jitsi-meet.jibri.enable</option>";
|
||||
config = mkOption {
|
||||
type = attrs;
|
||||
default = { };
|
||||
description = ''
|
||||
Jibri configuration.
|
||||
See <link xlink:href="https://github.com/jitsi/jibri/blob/master/src/main/resources/reference.conf" />
|
||||
for default configuration with comments.
|
||||
'';
|
||||
};
|
||||
|
||||
finalizeScript = mkOption {
|
||||
type = types.path;
|
||||
default = pkgs.writeScript "finalize_recording.sh" ''
|
||||
#!/bin/sh
|
||||
|
||||
RECORDINGS_DIR=$1
|
||||
|
||||
echo "This is a dummy finalize script" > /tmp/finalize.out
|
||||
echo "The script was invoked with recordings directory $RECORDINGS_DIR." >> /tmp/finalize.out
|
||||
echo "You should put any finalize logic (renaming, uploading to a service" >> /tmp/finalize.out
|
||||
echo "or storage provider, etc.) in this script" >> /tmp/finalize.out
|
||||
|
||||
exit 0
|
||||
'';
|
||||
defaultText = literalExpression ''
|
||||
pkgs.writeScript "finalize_recording.sh" ''''''
|
||||
#!/bin/sh
|
||||
|
||||
RECORDINGS_DIR=$1
|
||||
|
||||
echo "This is a dummy finalize script" > /tmp/finalize.out
|
||||
echo "The script was invoked with recordings directory $RECORDINGS_DIR." >> /tmp/finalize.out
|
||||
echo "You should put any finalize logic (renaming, uploading to a service" >> /tmp/finalize.out
|
||||
echo "or storage provider, etc.) in this script" >> /tmp/finalize.out
|
||||
|
||||
exit 0
|
||||
'''''';
|
||||
'';
|
||||
example = literalExpression ''
|
||||
pkgs.writeScript "finalize_recording.sh" ''''''
|
||||
#!/bin/sh
|
||||
RECORDINGS_DIR=$1
|
||||
${pkgs.rclone}/bin/rclone copy $RECORDINGS_DIR RCLONE_REMOTE:jibri-recordings/ -v --log-file=/var/log/jitsi/jibri/recording-upload.txt
|
||||
exit 0
|
||||
'''''';
|
||||
'';
|
||||
description = ''
|
||||
This script runs when jibri finishes recording a video of a conference.
|
||||
'';
|
||||
};
|
||||
|
||||
ignoreCert = mkOption {
|
||||
type = bool;
|
||||
default = false;
|
||||
example = true;
|
||||
description = ''
|
||||
Whether to enable the flag "--ignore-certificate-errors" for the Chromium browser opened by Jibri.
|
||||
Intended for use in automated tests or anywhere else where using a verified cert for Jitsi-Meet is not possible.
|
||||
'';
|
||||
};
|
||||
|
||||
xmppEnvironments = mkOption {
|
||||
description = ''
|
||||
XMPP servers to connect to.
|
||||
'';
|
||||
example = literalExpression ''
|
||||
"jitsi-meet" = {
|
||||
xmppServerHosts = [ "localhost" ];
|
||||
xmppDomain = config.services.jitsi-meet.hostName;
|
||||
|
||||
control.muc = {
|
||||
domain = "internal.''${config.services.jitsi-meet.hostName}";
|
||||
roomName = "JibriBrewery";
|
||||
nickname = "jibri";
|
||||
};
|
||||
|
||||
control.login = {
|
||||
domain = "auth.''${config.services.jitsi-meet.hostName}";
|
||||
username = "jibri";
|
||||
passwordFile = "/var/lib/jitsi-meet/jibri-auth-secret";
|
||||
};
|
||||
|
||||
call.login = {
|
||||
domain = "recorder.''${config.services.jitsi-meet.hostName}";
|
||||
username = "recorder";
|
||||
passwordFile = "/var/lib/jitsi-meet/jibri-recorder-secret";
|
||||
};
|
||||
|
||||
usageTimeout = "0";
|
||||
disableCertificateVerification = true;
|
||||
stripFromRoomDomain = "conference.";
|
||||
};
|
||||
'';
|
||||
default = { };
|
||||
type = attrsOf (submodule ({ name, ... }: {
|
||||
options = {
|
||||
xmppServerHosts = mkOption {
|
||||
type = listOf str;
|
||||
example = [ "xmpp.example.org" ];
|
||||
description = ''
|
||||
Hostnames of the XMPP servers to connect to.
|
||||
'';
|
||||
};
|
||||
xmppDomain = mkOption {
|
||||
type = str;
|
||||
example = "xmpp.example.org";
|
||||
description = ''
|
||||
The base XMPP domain.
|
||||
'';
|
||||
};
|
||||
control.muc.domain = mkOption {
|
||||
type = str;
|
||||
description = ''
|
||||
The domain part of the MUC to connect to for control.
|
||||
'';
|
||||
};
|
||||
control.muc.roomName = mkOption {
|
||||
type = str;
|
||||
default = "JibriBrewery";
|
||||
description = ''
|
||||
The room name of the MUC to connect to for control.
|
||||
'';
|
||||
};
|
||||
control.muc.nickname = mkOption {
|
||||
type = str;
|
||||
default = "jibri";
|
||||
description = ''
|
||||
The nickname for this Jibri instance in the MUC.
|
||||
'';
|
||||
};
|
||||
control.login.domain = mkOption {
|
||||
type = str;
|
||||
description = ''
|
||||
The domain part of the JID for this Jibri instance.
|
||||
'';
|
||||
};
|
||||
control.login.username = mkOption {
|
||||
type = str;
|
||||
default = "jvb";
|
||||
description = ''
|
||||
User part of the JID.
|
||||
'';
|
||||
};
|
||||
control.login.passwordFile = mkOption {
|
||||
type = str;
|
||||
example = "/run/keys/jibri-xmpp1";
|
||||
description = ''
|
||||
File containing the password for the user.
|
||||
'';
|
||||
};
|
||||
|
||||
call.login.domain = mkOption {
|
||||
type = str;
|
||||
example = "recorder.xmpp.example.org";
|
||||
description = ''
|
||||
The domain part of the JID for the recorder.
|
||||
'';
|
||||
};
|
||||
call.login.username = mkOption {
|
||||
type = str;
|
||||
default = "recorder";
|
||||
description = ''
|
||||
User part of the JID for the recorder.
|
||||
'';
|
||||
};
|
||||
call.login.passwordFile = mkOption {
|
||||
type = str;
|
||||
example = "/run/keys/jibri-recorder-xmpp1";
|
||||
description = ''
|
||||
File containing the password for the user.
|
||||
'';
|
||||
};
|
||||
disableCertificateVerification = mkOption {
|
||||
type = bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to skip validation of the server's certificate.
|
||||
'';
|
||||
};
|
||||
|
||||
stripFromRoomDomain = mkOption {
|
||||
type = str;
|
||||
default = "0";
|
||||
example = "conference.";
|
||||
description = ''
|
||||
The prefix to strip from the room's JID domain to derive the call URL.
|
||||
'';
|
||||
};
|
||||
usageTimeout = mkOption {
|
||||
type = str;
|
||||
default = "0";
|
||||
example = "1 hour";
|
||||
description = ''
|
||||
The maximum duration that a Jibri session may last.
|
||||
A value of zero means indefinitely.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
config =
|
||||
let
|
||||
nick = mkDefault (builtins.replaceStrings [ "." ] [ "-" ] (
|
||||
config.networking.hostName + optionalString (config.networking.domain != null) ".${config.networking.domain}"
|
||||
));
|
||||
in
|
||||
{
|
||||
call.login.username = nick;
|
||||
control.muc.nickname = nick;
|
||||
};
|
||||
}));
|
||||
};
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
users.groups.jibri = { };
|
||||
users.groups.plugdev = { };
|
||||
users.users.jibri = {
|
||||
isSystemUser = true;
|
||||
group = "jibri";
|
||||
home = "/var/lib/jibri";
|
||||
extraGroups = [ "jitsi-meet" "adm" "audio" "video" "plugdev" ];
|
||||
};
|
||||
|
||||
systemd.services.jibri-xorg = {
|
||||
description = "Jitsi Xorg Process";
|
||||
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "jibri.service" "jibri-icewm.service" ];
|
||||
|
||||
preStart = ''
|
||||
cp --no-preserve=mode,ownership ${pkgs.jibri}/etc/jitsi/jibri/* /var/lib/jibri
|
||||
mv /var/lib/jibri/{,.}asoundrc
|
||||
'';
|
||||
|
||||
environment.DISPLAY = ":0";
|
||||
serviceConfig = {
|
||||
Type = "simple";
|
||||
|
||||
User = "jibri";
|
||||
Group = "jibri";
|
||||
KillMode = "process";
|
||||
Restart = "on-failure";
|
||||
RestartPreventExitStatus = 255;
|
||||
|
||||
StateDirectory = "jibri";
|
||||
|
||||
ExecStart = "${pkgs.xorg.xorgserver}/bin/Xorg -nocursor -noreset +extension RANDR +extension RENDER -config ${pkgs.jibri}/etc/jitsi/jibri/xorg-video-dummy.conf -logfile /dev/null :0";
|
||||
};
|
||||
};
|
||||
|
||||
systemd.services.jibri-icewm = {
|
||||
description = "Jitsi Window Manager";
|
||||
|
||||
requires = [ "jibri-xorg.service" ];
|
||||
after = [ "jibri-xorg.service" ];
|
||||
wantedBy = [ "jibri.service" ];
|
||||
|
||||
environment.DISPLAY = ":0";
|
||||
serviceConfig = {
|
||||
Type = "simple";
|
||||
|
||||
User = "jibri";
|
||||
Group = "jibri";
|
||||
Restart = "on-failure";
|
||||
RestartPreventExitStatus = 255;
|
||||
|
||||
StateDirectory = "jibri";
|
||||
|
||||
ExecStart = "${pkgs.icewm}/bin/icewm-session";
|
||||
};
|
||||
};
|
||||
|
||||
systemd.services.jibri = {
|
||||
description = "Jibri Process";
|
||||
|
||||
requires = [ "jibri-icewm.service" "jibri-xorg.service" ];
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
|
||||
path = with pkgs; [ chromedriver chromium ffmpeg-full ];
|
||||
|
||||
script = (concatStrings (mapAttrsToList
|
||||
(name: env: ''
|
||||
export ${toVarName "${name}_control"}=$(cat ${env.control.login.passwordFile})
|
||||
export ${toVarName "${name}_call"}=$(cat ${env.call.login.passwordFile})
|
||||
'')
|
||||
cfg.xmppEnvironments))
|
||||
+ ''
|
||||
${pkgs.jre8_headless}/bin/java -Djava.util.logging.config.file=${./logging.properties-journal} -Dconfig.file=${configFile} -jar ${pkgs.jibri}/opt/jitsi/jibri/jibri.jar --config /var/lib/jibri/jibri.json
|
||||
'';
|
||||
|
||||
environment.HOME = "/var/lib/jibri";
|
||||
|
||||
serviceConfig = {
|
||||
Type = "simple";
|
||||
|
||||
User = "jibri";
|
||||
Group = "jibri";
|
||||
Restart = "always";
|
||||
RestartPreventExitStatus = 255;
|
||||
|
||||
StateDirectory = "jibri";
|
||||
};
|
||||
};
|
||||
|
||||
systemd.tmpfiles.rules = [
|
||||
"d /var/log/jitsi/jibri 755 jibri jibri"
|
||||
];
|
||||
|
||||
|
||||
|
||||
# Configure Chromium to not show the "Chrome is being controlled by automatic test software" message.
|
||||
environment.etc."chromium/policies/managed/managed_policies.json".text = builtins.toJSON { CommandLineFlagSecurityWarningsEnabled = false; };
|
||||
warnings = [ "All security warnings for Chromium have been disabled. This is necessary for Jibri, but it also impacts all other uses of Chromium on this system." ];
|
||||
|
||||
boot = {
|
||||
extraModprobeConfig = ''
|
||||
options snd-aloop enable=1,1,1,1,1,1,1,1
|
||||
'';
|
||||
kernelModules = [ "snd-aloop" ];
|
||||
};
|
||||
};
|
||||
|
||||
meta.maintainers = lib.teams.jitsi.members;
|
||||
}
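For orientation, a minimal sketch of enabling this new module. As the enable option's description notes, Jibri currently has to run on the host that also runs jitsi-meet, and the integrated switch is usually the simpler route; the standalone form is shown only as a commented alternative:

{
  # preferred: let the jitsi-meet module drive Jibri
  services.jitsi-meet.jibri.enable = true;

  # or enable it directly and describe the XMPP environment by hand,
  # following the xmppEnvironments example above
  # services.jibri.enable = true;
}
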
|
third_party/nixpkgs/nixos/modules/services/networking/jibri/logging.properties-journal
@@ -0,0 +1,32 @@
handlers = java.util.logging.FileHandler

java.util.logging.FileHandler.level = FINE
java.util.logging.FileHandler.pattern = /var/log/jitsi/jibri/log.%g.txt
java.util.logging.FileHandler.formatter = net.java.sip.communicator.util.ScLogFormatter
java.util.logging.FileHandler.count = 10
java.util.logging.FileHandler.limit = 10000000

org.jitsi.jibri.capture.ffmpeg.util.FfmpegFileHandler.level = FINE
org.jitsi.jibri.capture.ffmpeg.util.FfmpegFileHandler.pattern = /var/log/jitsi/jibri/ffmpeg.%g.txt
org.jitsi.jibri.capture.ffmpeg.util.FfmpegFileHandler.formatter = net.java.sip.communicator.util.ScLogFormatter
org.jitsi.jibri.capture.ffmpeg.util.FfmpegFileHandler.count = 10
org.jitsi.jibri.capture.ffmpeg.util.FfmpegFileHandler.limit = 10000000

org.jitsi.jibri.sipgateway.pjsua.util.PjsuaFileHandler.level = FINE
org.jitsi.jibri.sipgateway.pjsua.util.PjsuaFileHandler.pattern = /var/log/jitsi/jibri/pjsua.%g.txt
org.jitsi.jibri.sipgateway.pjsua.util.PjsuaFileHandler.formatter = net.java.sip.communicator.util.ScLogFormatter
org.jitsi.jibri.sipgateway.pjsua.util.PjsuaFileHandler.count = 10
org.jitsi.jibri.sipgateway.pjsua.util.PjsuaFileHandler.limit = 10000000

org.jitsi.jibri.selenium.util.BrowserFileHandler.level = FINE
org.jitsi.jibri.selenium.util.BrowserFileHandler.pattern = /var/log/jitsi/jibri/browser.%g.txt
org.jitsi.jibri.selenium.util.BrowserFileHandler.formatter = net.java.sip.communicator.util.ScLogFormatter
org.jitsi.jibri.selenium.util.BrowserFileHandler.count = 10
org.jitsi.jibri.selenium.util.BrowserFileHandler.limit = 10000000

org.jitsi.level = FINE
org.jitsi.jibri.config.level = INFO

org.glassfish.level = INFO
org.osgi.level = INFO
org.jitsi.xmpp.level = INFO

@@ -5,115 +5,36 @@ with lib;
|
|||
let
|
||||
cfg = config.services.mosquitto;
|
||||
|
||||
listenerConf = optionalString cfg.ssl.enable ''
|
||||
listener ${toString cfg.ssl.port} ${cfg.ssl.host}
|
||||
cafile ${cfg.ssl.cafile}
|
||||
certfile ${cfg.ssl.certfile}
|
||||
keyfile ${cfg.ssl.keyfile}
|
||||
'';
|
||||
|
||||
passwordConf = optionalString cfg.checkPasswords ''
|
||||
password_file ${cfg.dataDir}/passwd
|
||||
'';
|
||||
|
||||
mosquittoConf = pkgs.writeText "mosquitto.conf" ''
|
||||
acl_file ${aclFile}
|
||||
persistence true
|
||||
allow_anonymous ${boolToString cfg.allowAnonymous}
|
||||
listener ${toString cfg.port} ${cfg.host}
|
||||
${passwordConf}
|
||||
${listenerConf}
|
||||
${cfg.extraConf}
|
||||
'';
|
||||
|
||||
userAcl = (concatStringsSep "\n\n" (mapAttrsToList (n: c:
|
||||
"user ${n}\n" + (concatStringsSep "\n" c.acl)) cfg.users
|
||||
));
|
||||
|
||||
aclFile = pkgs.writeText "mosquitto.acl" ''
|
||||
${cfg.aclExtraConf}
|
||||
${userAcl}
|
||||
'';
|
||||
|
||||
in
|
||||
|
||||
{
|
||||
|
||||
###### Interface
|
||||
|
||||
options = {
|
||||
services.mosquitto = {
|
||||
enable = mkEnableOption "the MQTT Mosquitto broker";
|
||||
|
||||
host = mkOption {
|
||||
default = "127.0.0.1";
|
||||
example = "0.0.0.0";
|
||||
type = types.str;
|
||||
description = ''
|
||||
Host to listen on without SSL.
|
||||
'';
|
||||
# note that mosquitto config parsing is very simplistic as of may 2021.
|
||||
# often times they'll e.g. strtok() a line, check the first two tokens, and ignore the rest.
|
||||
# there's no escaping available either, so we have to prevent any being necessary.
|
||||
str = types.strMatching "[^\r\n]*" // {
|
||||
description = "single-line string";
|
||||
};
|
||||
|
||||
port = mkOption {
|
||||
default = 1883;
|
||||
type = types.int;
|
||||
description = ''
|
||||
Port on which to listen without SSL.
|
||||
'';
|
||||
path = types.addCheck types.path (p: str.check "${p}");
|
||||
configKey = types.strMatching "[^\r\n\t ]+";
|
||||
optionType = with types; oneOf [ str path bool int ] // {
|
||||
description = "string, path, bool, or integer";
|
||||
};
|
||||
optionToString = v:
|
||||
if isBool v then boolToString v
|
||||
else if path.check v then "${v}"
|
||||
else toString v;
|
||||
|
||||
ssl = {
|
||||
enable = mkEnableOption "SSL listener";
|
||||
assertKeysValid = prefix: valid: config:
|
||||
mapAttrsToList
|
||||
(n: _: {
|
||||
assertion = valid ? ${n};
|
||||
message = "Invalid config key ${prefix}.${n}.";
|
||||
})
|
||||
config;
|
||||
|
||||
cafile = mkOption {
|
||||
type = types.nullOr types.path;
|
||||
default = null;
|
||||
description = "Path to PEM encoded CA certificates.";
|
||||
};
|
||||
formatFreeform = { prefix ? "" }: mapAttrsToList (n: v: "${prefix}${n} ${optionToString v}");
|
||||
|
||||
certfile = mkOption {
|
||||
type = types.nullOr types.path;
|
||||
default = null;
|
||||
description = "Path to PEM encoded server certificate.";
|
||||
};
|
||||
|
||||
keyfile = mkOption {
|
||||
type = types.nullOr types.path;
|
||||
default = null;
|
||||
description = "Path to PEM encoded server key.";
|
||||
};
|
||||
|
||||
host = mkOption {
|
||||
default = "0.0.0.0";
|
||||
example = "localhost";
|
||||
type = types.str;
|
||||
description = ''
|
||||
Host to listen on with SSL.
|
||||
'';
|
||||
};
|
||||
|
||||
port = mkOption {
|
||||
default = 8883;
|
||||
type = types.int;
|
||||
description = ''
|
||||
Port on which to listen with SSL.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
dataDir = mkOption {
|
||||
default = "/var/lib/mosquitto";
|
||||
type = types.path;
|
||||
description = ''
|
||||
The data directory.
|
||||
'';
|
||||
};
|
||||
|
||||
users = mkOption {
|
||||
type = types.attrsOf (types.submodule {
|
||||
userOptions = with types; submodule {
|
||||
options = {
|
||||
password = mkOption {
|
||||
type = with types; uniq (nullOr str);
|
||||
type = uniq (nullOr str);
|
||||
default = null;
|
||||
description = ''
|
||||
Specifies the (clear text) password for the MQTT User.
|
||||
|
@ -121,7 +42,7 @@ in
|
|||
};
|
||||
|
||||
passwordFile = mkOption {
|
||||
type = with types; uniq (nullOr str);
|
||||
type = uniq (nullOr types.path);
|
||||
example = "/path/to/file";
|
||||
default = null;
|
||||
description = ''
|
||||
|
@ -131,7 +52,7 @@ in
|
|||
};
|
||||
|
||||
hashedPassword = mkOption {
|
||||
type = with types; uniq (nullOr str);
|
||||
type = uniq (nullOr str);
|
||||
default = null;
|
||||
description = ''
|
||||
Specifies the hashed password for the MQTT User.
|
||||
|
@ -141,7 +62,7 @@ in
|
|||
};
|
||||
|
||||
hashedPasswordFile = mkOption {
|
||||
type = with types; uniq (nullOr str);
|
||||
type = uniq (nullOr types.path);
|
||||
example = "/path/to/file";
|
||||
default = null;
|
||||
description = ''
|
||||
|
@ -153,67 +74,474 @@ in
|
|||
};
|
||||
|
||||
acl = mkOption {
|
||||
type = types.listOf types.str;
|
||||
example = [ "topic read A/B" "topic A/#" ];
|
||||
type = listOf str;
|
||||
example = [ "read A/B" "readwrite A/#" ];
|
||||
default = [];
|
||||
description = ''
|
||||
Control client access to topics on the broker.
|
||||
'';
|
||||
};
|
||||
};
|
||||
});
|
||||
};
|
||||
|
||||
userAsserts = prefix: users:
|
||||
mapAttrsToList
|
||||
(n: _: {
|
||||
assertion = builtins.match "[^:\r\n]+" n != null;
|
||||
message = "Invalid user name ${n} in ${prefix}";
|
||||
})
|
||||
users
|
||||
++ mapAttrsToList
|
||||
(n: u: {
|
||||
assertion = count (s: s != null) [
|
||||
u.password u.passwordFile u.hashedPassword u.hashedPasswordFile
|
||||
] <= 1;
|
||||
message = "Cannot set more than one password option for user ${n} in ${prefix}";
|
||||
}) users;
|
||||
|
||||
makePasswordFile = users: path:
|
||||
let
|
||||
makeLines = store: file:
|
||||
mapAttrsToList
|
||||
(n: u: "addLine ${escapeShellArg n} ${escapeShellArg u.${store}}")
|
||||
(filterAttrs (_: u: u.${store} != null) users)
|
||||
++ mapAttrsToList
|
||||
(n: u: "addFile ${escapeShellArg n} ${escapeShellArg "${u.${file}}"}")
|
||||
(filterAttrs (_: u: u.${file} != null) users);
|
||||
plainLines = makeLines "password" "passwordFile";
|
||||
hashedLines = makeLines "hashedPassword" "hashedPasswordFile";
|
||||
in
|
||||
pkgs.writeScript "make-mosquitto-passwd"
|
||||
(''
|
||||
#! ${pkgs.runtimeShell}
|
||||
|
||||
set -eu
|
||||
|
||||
file=${escapeShellArg path}
|
||||
|
||||
rm -f "$file"
|
||||
touch "$file"
|
||||
|
||||
addLine() {
|
||||
echo "$1:$2" >> "$file"
|
||||
}
|
||||
addFile() {
|
||||
if [ $(wc -l <"$2") -gt 1 ]; then
|
||||
echo "invalid mosquitto password file $2" >&2
|
||||
return 1
|
||||
fi
|
||||
echo "$1:$(cat "$2")" >> "$file"
|
||||
}
|
||||
''
|
||||
+ concatStringsSep "\n"
|
||||
(plainLines
|
||||
++ optional (plainLines != []) ''
|
||||
${pkgs.mosquitto}/bin/mosquitto_passwd -U "$file"
|
||||
''
|
||||
++ hashedLines));
|
||||
|
||||
makeACLFile = idx: users: supplement:
|
||||
pkgs.writeText "mosquitto-acl-${toString idx}.conf"
|
||||
(concatStringsSep
|
||||
"\n"
|
||||
(flatten [
|
||||
supplement
|
||||
(mapAttrsToList
|
||||
(n: u: [ "user ${n}" ] ++ map (t: "topic ${t}") u.acl)
|
||||
users)
|
||||
]));
|
||||
|
||||
authPluginOptions = with types; submodule {
|
||||
options = {
|
||||
plugin = mkOption {
|
||||
type = path;
|
||||
description = ''
|
||||
Plugin path to load, should be a <literal>.so</literal> file.
|
||||
'';
|
||||
};
|
||||
|
||||
denySpecialChars = mkOption {
|
||||
type = bool;
|
||||
description = ''
|
||||
Automatically disallow all clients using <literal>#</literal>
|
||||
or <literal>+</literal> in their name/id.
|
||||
'';
|
||||
default = true;
|
||||
};
|
||||
|
||||
options = mkOption {
|
||||
type = attrsOf optionType;
|
||||
description = ''
|
||||
Options for the auth plugin. Each key turns into a <literal>auth_opt_*</literal>
|
||||
line in the config.
|
||||
'';
|
||||
default = {};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
authAsserts = prefix: auth:
|
||||
mapAttrsToList
|
||||
(n: _: {
|
||||
assertion = configKey.check n;
|
||||
message = "Invalid auth plugin key ${prefix}.${n}";
|
||||
})
|
||||
auth;
|
||||
|
||||
formatAuthPlugin = plugin:
|
||||
[
|
||||
"auth_plugin ${plugin.plugin}"
|
||||
"auth_plugin_deny_special_chars ${optionToString plugin.denySpecialChars}"
|
||||
]
|
||||
++ formatFreeform { prefix = "auth_opt_"; } plugin.options;
|
||||
|
||||
freeformListenerKeys = {
|
||||
allow_anonymous = 1;
|
||||
allow_zero_length_clientid = 1;
|
||||
auto_id_prefix = 1;
|
||||
cafile = 1;
|
||||
capath = 1;
|
||||
certfile = 1;
|
||||
ciphers = 1;
|
||||
"ciphers_tls1.3" = 1;
|
||||
crlfile = 1;
|
||||
dhparamfile = 1;
|
||||
http_dir = 1;
|
||||
keyfile = 1;
|
||||
max_connections = 1;
|
||||
max_qos = 1;
|
||||
max_topic_alias = 1;
|
||||
mount_point = 1;
|
||||
protocol = 1;
|
||||
psk_file = 1;
|
||||
psk_hint = 1;
|
||||
require_certificate = 1;
|
||||
socket_domain = 1;
|
||||
tls_engine = 1;
|
||||
tls_engine_kpass_sha1 = 1;
|
||||
tls_keyform = 1;
|
||||
tls_version = 1;
|
||||
use_identity_as_username = 1;
|
||||
use_subject_as_username = 1;
|
||||
use_username_as_clientid = 1;
|
||||
};
|
||||
|
||||
listenerOptions = with types; submodule {
|
||||
options = {
|
||||
port = mkOption {
|
||||
type = port;
|
||||
description = ''
|
||||
Port to listen on. Must be set to 0 to listen on a unix domain socket.
|
||||
'';
|
||||
default = 1883;
|
||||
};
|
||||
|
||||
address = mkOption {
|
||||
type = nullOr str;
|
||||
description = ''
|
||||
Address to listen on. Listen on <literal>0.0.0.0</literal>/<literal>::</literal>
|
||||
when unset.
|
||||
'';
|
||||
default = null;
|
||||
};
|
||||
|
||||
authPlugins = mkOption {
|
||||
type = listOf authPluginOptions;
|
||||
description = ''
|
||||
Authentication plugin to attach to this listener.
|
||||
Refer to the <link xlink:href="https://mosquitto.org/man/mosquitto-conf-5.html">
|
||||
mosquitto.conf documentation</link> for details on authentication plugins.
|
||||
'';
|
||||
default = [];
|
||||
};
|
||||
|
||||
users = mkOption {
|
||||
type = attrsOf userOptions;
|
||||
example = { john = { password = "123456"; acl = [ "topic readwrite john/#" ]; }; };
|
||||
description = ''
|
||||
A set of users and their passwords and ACLs.
|
||||
'';
|
||||
default = {};
|
||||
};
|
||||
|
||||
allowAnonymous = mkOption {
|
||||
default = false;
|
||||
type = types.bool;
|
||||
acl = mkOption {
|
||||
type = listOf str;
|
||||
description = ''
|
||||
Allow clients to connect without authentication.
|
||||
Additional ACL items to prepend to the generated ACL file.
|
||||
'';
|
||||
default = [];
|
||||
};
|
||||
|
||||
settings = mkOption {
|
||||
type = submodule {
|
||||
freeformType = attrsOf optionType;
|
||||
};
|
||||
description = ''
|
||||
Additional settings for this listener.
|
||||
'';
|
||||
default = {};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
listenerAsserts = prefix: listener:
|
||||
assertKeysValid prefix freeformListenerKeys listener.settings
|
||||
++ userAsserts prefix listener.users
|
||||
++ imap0
|
||||
(i: v: authAsserts "${prefix}.authPlugins.${toString i}" v)
|
||||
listener.authPlugins;
|
||||
|
||||
formatListener = idx: listener:
|
||||
[
|
||||
"listener ${toString listener.port} ${toString listener.address}"
|
||||
"password_file ${cfg.dataDir}/passwd-${toString idx}"
|
||||
"acl_file ${makeACLFile idx listener.users listener.acl}"
|
||||
]
|
||||
++ formatFreeform {} listener.settings
|
||||
++ concatMap formatAuthPlugin listener.authPlugins;
|
||||
|
||||
freeformBridgeKeys = {
|
||||
bridge_alpn = 1;
|
||||
bridge_attempt_unsubscribe = 1;
|
||||
bridge_bind_address = 1;
|
||||
bridge_cafile = 1;
|
||||
bridge_capath = 1;
|
||||
bridge_certfile = 1;
|
||||
bridge_identity = 1;
|
||||
bridge_insecure = 1;
|
||||
bridge_keyfile = 1;
|
||||
bridge_max_packet_size = 1;
|
||||
bridge_outgoing_retain = 1;
|
||||
bridge_protocol_version = 1;
|
||||
bridge_psk = 1;
|
||||
bridge_require_ocsp = 1;
|
||||
bridge_tls_version = 1;
|
||||
cleansession = 1;
|
||||
idle_timeout = 1;
|
||||
keepalive_interval = 1;
|
||||
local_cleansession = 1;
|
||||
local_clientid = 1;
|
||||
local_password = 1;
|
||||
local_username = 1;
|
||||
notification_topic = 1;
|
||||
notifications = 1;
|
||||
notifications_local_only = 1;
|
||||
remote_clientid = 1;
|
||||
remote_password = 1;
|
||||
remote_username = 1;
|
||||
restart_timeout = 1;
|
||||
round_robin = 1;
|
||||
start_type = 1;
|
||||
threshold = 1;
|
||||
try_private = 1;
|
||||
};
|
||||
|
||||
bridgeOptions = with types; submodule {
|
||||
options = {
|
||||
addresses = mkOption {
|
||||
type = listOf (submodule {
|
||||
options = {
|
||||
address = mkOption {
|
||||
type = str;
|
||||
description = ''
|
||||
Address of the remote MQTT broker.
|
||||
'';
|
||||
};
|
||||
|
||||
checkPasswords = mkOption {
|
||||
default = false;
|
||||
example = true;
|
||||
type = types.bool;
|
||||
port = mkOption {
|
||||
type = port;
|
||||
description = ''
|
||||
Refuse connection when clients provide incorrect passwords.
|
||||
Port of the remote MQTT broker.
|
||||
'';
|
||||
default = 1883;
|
||||
};
|
||||
};
|
||||
});
|
||||
default = [];
|
||||
description = ''
|
||||
Remote endpoints for the bridge.
|
||||
'';
|
||||
};
|
||||
|
||||
extraConf = mkOption {
|
||||
default = "";
|
||||
type = types.lines;
|
||||
topics = mkOption {
|
||||
type = listOf str;
|
||||
description = ''
|
||||
Extra config to append to `mosquitto.conf` file.
|
||||
Topic patterns to be shared between the two brokers.
|
||||
Refer to the <link xlink:href="https://mosquitto.org/man/mosquitto-conf-5.html">
|
||||
mosquitto.conf documentation</link> for details on the format.
|
||||
'';
|
||||
default = [];
|
||||
example = [ "# both 2 local/topic/ remote/topic/" ];
|
||||
};
|
||||
|
||||
settings = mkOption {
|
||||
type = submodule {
|
||||
freeformType = attrsOf optionType;
|
||||
};
|
||||
description = ''
|
||||
Additional settings for this bridge.
|
||||
'';
|
||||
default = {};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
bridgeAsserts = prefix: bridge:
|
||||
assertKeysValid prefix freeformBridgeKeys bridge.settings
|
||||
++ [ {
|
||||
assertion = length bridge.addresses > 0;
|
||||
message = "Bridge ${prefix} needs remote broker addresses";
|
||||
} ];
|
||||
|
||||
formatBridge = name: bridge:
|
||||
[
|
||||
"connection ${name}"
|
||||
"addresses ${concatMapStringsSep " " (a: "${a.address}:${toString a.port}") bridge.addresses}"
|
||||
]
|
||||
++ map (t: "topic ${t}") bridge.topics
|
||||
++ formatFreeform {} bridge.settings;
|
||||
|
||||
freeformGlobalKeys = {
|
||||
allow_duplicate_messages = 1;
|
||||
autosave_interval = 1;
|
||||
autosave_on_changes = 1;
|
||||
check_retain_source = 1;
|
||||
connection_messages = 1;
|
||||
log_facility = 1;
|
||||
log_timestamp = 1;
|
||||
log_timestamp_format = 1;
|
||||
max_inflight_bytes = 1;
|
||||
max_inflight_messages = 1;
|
||||
max_keepalive = 1;
|
||||
max_packet_size = 1;
|
||||
max_queued_bytes = 1;
|
||||
max_queued_messages = 1;
|
||||
memory_limit = 1;
|
||||
message_size_limit = 1;
|
||||
persistence_file = 1;
|
||||
persistence_location = 1;
|
||||
persistent_client_expiration = 1;
|
||||
pid_file = 1;
|
||||
queue_qos0_messages = 1;
|
||||
retain_available = 1;
|
||||
set_tcp_nodelay = 1;
|
||||
sys_interval = 1;
|
||||
upgrade_outgoing_qos = 1;
|
||||
websockets_headers_size = 1;
|
||||
websockets_log_level = 1;
|
||||
};
|
||||
|
||||
globalOptions = with types; {
|
||||
enable = mkEnableOption "the MQTT Mosquitto broker";
|
||||
|
||||
bridges = mkOption {
|
||||
type = attrsOf bridgeOptions;
|
||||
default = {};
|
||||
description = ''
|
||||
Bridges to build to other MQTT brokers.
|
||||
'';
|
||||
};
|
||||
|
||||
aclExtraConf = mkOption {
|
||||
default = "";
|
||||
type = types.lines;
|
||||
listeners = mkOption {
|
||||
type = listOf listenerOptions;
|
||||
default = {};
|
||||
description = ''
|
||||
Extra config to prepend to the ACL file.
|
||||
Listeners to configure on this broker.
|
||||
'';
|
||||
};
|
||||
|
||||
includeDirs = mkOption {
|
||||
type = listOf path;
|
||||
description = ''
|
||||
Directories to be scanned for further config files to include.
|
||||
Directories will be processed in the order given,
|
||||
<literal>*.conf</literal> files in the directory will be
|
||||
read in case-sensitive alphabetical order.
|
||||
'';
|
||||
default = [];
|
||||
};
|
||||
|
||||
logDest = mkOption {
|
||||
type = listOf (either path (enum [ "stdout" "stderr" "syslog" "topic" "dlt" ]));
|
||||
description = ''
|
||||
Destinations to send log messages to.
|
||||
'';
|
||||
default = [ "stderr" ];
|
||||
};
|
||||
|
||||
logType = mkOption {
|
||||
type = listOf (enum [ "debug" "error" "warning" "notice" "information"
|
||||
"subscribe" "unsubscribe" "websockets" "none" "all" ]);
|
||||
description = ''
|
||||
Types of messages to log.
|
||||
'';
|
||||
default = [];
|
||||
};
|
||||
|
||||
persistence = mkOption {
|
||||
type = bool;
|
||||
description = ''
|
||||
Enable persistent storage of subscriptions and messages.
|
||||
'';
|
||||
default = true;
|
||||
};
|
||||
|
||||
dataDir = mkOption {
|
||||
default = "/var/lib/mosquitto";
|
||||
type = types.path;
|
||||
description = ''
|
||||
The data directory.
|
||||
'';
|
||||
};
|
||||
|
||||
settings = mkOption {
|
||||
type = submodule {
|
||||
freeformType = attrsOf optionType;
|
||||
};
|
||||
description = ''
|
||||
Global configuration options for the mosquitto broker.
|
||||
'';
|
||||
default = {};
|
||||
};
|
||||
};
|
||||
|
||||
globalAsserts = prefix: cfg:
|
||||
flatten [
|
||||
(assertKeysValid prefix freeformGlobalKeys cfg.settings)
|
||||
(imap0 (n: l: listenerAsserts "${prefix}.listener.${toString n}" l) cfg.listeners)
|
||||
(mapAttrsToList (n: b: bridgeAsserts "${prefix}.bridge.${n}" b) cfg.bridges)
|
||||
];
|
||||
|
||||
formatGlobal = cfg:
|
||||
[
|
||||
"per_listener_settings true"
|
||||
"persistence ${optionToString cfg.persistence}"
|
||||
]
|
||||
++ map
|
||||
(d: if path.check d then "log_dest file ${d}" else "log_dest ${d}")
|
||||
cfg.logDest
|
||||
++ map (t: "log_type ${t}") cfg.logType
|
||||
++ formatFreeform {} cfg.settings
|
||||
++ concatLists (imap0 formatListener cfg.listeners)
|
||||
++ concatLists (mapAttrsToList formatBridge cfg.bridges)
|
||||
++ map (d: "include_dir ${d}") cfg.includeDirs;
|
||||
|
||||
configFile = pkgs.writeText "mosquitto.conf"
|
||||
(concatStringsSep "\n" (formatGlobal cfg));
|
||||
|
||||
in
|
||||
|
||||
{
|
||||
|
||||
###### Interface
|
||||
|
||||
options.services.mosquitto = globalOptions;
|
||||
|
||||
###### Implementation
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
|
||||
assertions = mapAttrsToList (name: cfg: {
|
||||
assertion = length (filter (s: s != null) (with cfg; [
|
||||
password passwordFile hashedPassword hashedPasswordFile
|
||||
])) <= 1;
|
||||
message = "Cannot set more than one password option";
|
||||
}) cfg.users;
|
||||
assertions = globalAsserts "services.mosquitto" cfg;
|
||||
|
||||
systemd.services.mosquitto = {
|
||||
description = "Mosquitto MQTT Broker Daemon";
|
||||
|
@ -227,7 +555,7 @@ in
|
|||
RuntimeDirectory = "mosquitto";
|
||||
WorkingDirectory = cfg.dataDir;
|
||||
Restart = "on-failure";
|
||||
ExecStart = "${pkgs.mosquitto}/bin/mosquitto -c ${mosquittoConf}";
|
||||
ExecStart = "${pkgs.mosquitto}/bin/mosquitto -c ${configFile}";
|
||||
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
|
||||
|
||||
# Hardening
|
||||
|
@ -252,12 +580,34 @@ in
|
|||
ReadWritePaths = [
|
||||
cfg.dataDir
|
||||
"/tmp" # mosquitto_passwd creates files in /tmp before moving them
|
||||
];
|
||||
ReadOnlyPaths = with cfg.ssl; lib.optionals (enable) [
|
||||
certfile
|
||||
keyfile
|
||||
cafile
|
||||
];
|
||||
] ++ filter path.check cfg.logDest;
|
||||
ReadOnlyPaths =
|
||||
map (p: "${p}")
|
||||
(cfg.includeDirs
|
||||
++ filter
|
||||
(v: v != null)
|
||||
(flatten [
|
||||
(map
|
||||
(l: [
|
||||
(l.settings.psk_file or null)
|
||||
(l.settings.http_dir or null)
|
||||
(l.settings.cafile or null)
|
||||
(l.settings.capath or null)
|
||||
(l.settings.certfile or null)
|
||||
(l.settings.crlfile or null)
|
||||
(l.settings.dhparamfile or null)
|
||||
(l.settings.keyfile or null)
|
||||
])
|
||||
cfg.listeners)
|
||||
(mapAttrsToList
|
||||
(_: b: [
|
||||
(b.settings.bridge_cafile or null)
|
||||
(b.settings.bridge_capath or null)
|
||||
(b.settings.bridge_certfile or null)
|
||||
(b.settings.bridge_keyfile or null)
|
||||
])
|
||||
cfg.bridges)
|
||||
]));
|
||||
RemoveIPC = true;
|
||||
RestrictAddressFamilies = [
|
||||
"AF_UNIX" # for sd_notify() call
|
||||
|
@ -275,20 +625,12 @@ in
|
|||
];
|
||||
UMask = "0077";
|
||||
};
|
||||
preStart = ''
|
||||
rm -f ${cfg.dataDir}/passwd
|
||||
touch ${cfg.dataDir}/passwd
|
||||
'' + concatStringsSep "\n" (
|
||||
mapAttrsToList (n: c:
|
||||
if c.hashedPasswordFile != null then
|
||||
"echo '${n}:'$(cat '${c.hashedPasswordFile}') >> ${cfg.dataDir}/passwd"
|
||||
else if c.passwordFile != null then
|
||||
"${pkgs.mosquitto}/bin/mosquitto_passwd -b ${cfg.dataDir}/passwd ${n} $(cat '${c.passwordFile}')"
|
||||
else if c.hashedPassword != null then
|
||||
"echo '${n}:${c.hashedPassword}' >> ${cfg.dataDir}/passwd"
|
||||
else optionalString (c.password != null)
|
||||
"${pkgs.mosquitto}/bin/mosquitto_passwd -b ${cfg.dataDir}/passwd ${n} '${c.password}'"
|
||||
) cfg.users);
|
||||
preStart =
|
||||
concatStringsSep
|
||||
"\n"
|
||||
(imap0
|
||||
(idx: listener: makePasswordFile listener.users "${cfg.dataDir}/passwd-${toString idx}")
|
||||
cfg.listeners);
|
||||
};
|
||||
|
||||
users.users.mosquitto = {
|
||||
|
@ -302,4 +644,6 @@ in
|
|||
users.groups.mosquitto.gid = config.ids.gids.mosquitto;
|
||||
|
||||
};
|
||||
|
||||
meta.maintainers = with lib.maintainers; [ pennae ];
|
||||
}
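A minimal sketch of a broker built from the rewritten options above, with one listener, a password-protected user and an ACL; the user name, password file path and topic are assumptions for illustration:

{
  services.mosquitto = {
    enable = true;
    listeners = [
      {
        port = 1883;
        users.monitor = {
          hashedPasswordFile = "/var/lib/mosquitto/monitor.hash"; # assumed path
          acl = [ "readwrite sensors/#" ];                        # assumed topic
        };
        settings.allow_anonymous = false;
      }
    ];
  };
}
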
|
||||
|
|
third_party/nixpkgs/nixos/modules/services/networking/seafile.nix
@@ -0,0 +1,290 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
with lib;
|
||||
let
|
||||
python = pkgs.python3Packages.python;
|
||||
cfg = config.services.seafile;
|
||||
settingsFormat = pkgs.formats.ini { };
|
||||
|
||||
ccnetConf = settingsFormat.generate "ccnet.conf" cfg.ccnetSettings;
|
||||
|
||||
seafileConf = settingsFormat.generate "seafile.conf" cfg.seafileSettings;
|
||||
|
||||
seahubSettings = pkgs.writeText "seahub_settings.py" ''
|
||||
FILE_SERVER_ROOT = '${cfg.ccnetSettings.General.SERVICE_URL}/seafhttp'
|
||||
DATABASES = {
|
||||
'default': {
|
||||
'ENGINE': 'django.db.backends.sqlite3',
|
||||
'NAME': '${seahubDir}/seahub.db',
|
||||
}
|
||||
}
|
||||
MEDIA_ROOT = '${seahubDir}/media/'
|
||||
THUMBNAIL_ROOT = '${seahubDir}/thumbnail/'
|
||||
|
||||
with open('${seafRoot}/.seahubSecret') as f:
|
||||
SECRET_KEY = f.readline().rstrip()
|
||||
|
||||
${cfg.seahubExtraConf}
|
||||
'';
|
||||
|
||||
seafRoot = "/var/lib/seafile"; # hardcode it due to dynamicuser
|
||||
ccnetDir = "${seafRoot}/ccnet";
|
||||
dataDir = "${seafRoot}/data";
|
||||
seahubDir = "${seafRoot}/seahub";
|
||||
|
||||
in {
|
||||
|
||||
###### Interface
|
||||
|
||||
options.services.seafile = {
|
||||
enable = mkEnableOption "Seafile server";
|
||||
|
||||
ccnetSettings = mkOption {
|
||||
type = types.submodule {
|
||||
freeformType = settingsFormat.type;
|
||||
|
||||
options = {
|
||||
General = {
|
||||
SERVICE_URL = mkOption {
|
||||
type = types.str;
|
||||
example = "https://www.example.com";
|
||||
description = ''
|
||||
Seahub public URL.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
default = { };
|
||||
description = ''
|
||||
Configuration for ccnet, see
|
||||
<link xlink:href="https://manual.seafile.com/config/ccnet-conf/"/>
|
||||
for supported values.
|
||||
'';
|
||||
};
|
||||
|
||||
seafileSettings = mkOption {
|
||||
type = types.submodule {
|
||||
freeformType = settingsFormat.type;
|
||||
|
||||
options = {
|
||||
fileserver = {
|
||||
port = mkOption {
|
||||
type = types.port;
|
||||
default = 8082;
|
||||
description = ''
|
||||
The tcp port used by seafile fileserver.
|
||||
'';
|
||||
};
|
||||
host = mkOption {
|
||||
type = types.str;
|
||||
default = "127.0.0.1";
|
||||
example = "0.0.0.0";
|
||||
description = ''
|
||||
The binding address used by seafile fileserver.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
default = { };
|
||||
description = ''
|
||||
Configuration for seafile-server, see
|
||||
<link xlink:href="https://manual.seafile.com/config/seafile-conf/"/>
|
||||
for supported values.
|
||||
'';
|
||||
};
|
||||
|
||||
workers = mkOption {
|
||||
type = types.int;
|
||||
default = 4;
|
||||
example = 10;
|
||||
description = ''
|
||||
The number of gunicorn worker processes for handling requests.
|
||||
'';
|
||||
};
|
||||
|
||||
adminEmail = mkOption {
|
||||
example = "john@example.com";
|
||||
type = types.str;
|
||||
description = ''
|
||||
Seafile Seahub Admin Account Email.
|
||||
'';
|
||||
};
|
||||
|
||||
initialAdminPassword = mkOption {
|
||||
example = "someStrongPass";
|
||||
type = types.str;
|
||||
description = ''
|
||||
Seafile Seahub Admin Account initial password.
|
||||
Should be changed via the Seahub web front-end.
|
||||
'';
|
||||
};
|
||||
|
||||
seafilePackage = mkOption {
|
||||
type = types.package;
|
||||
description = "Which package to use for the seafile server.";
|
||||
default = pkgs.seafile-server;
|
||||
};
|
||||
|
||||
seahubExtraConf = mkOption {
|
||||
default = "";
|
||||
type = types.lines;
|
||||
description = ''
|
||||
Extra config to append to `seahub_settings.py` file.
|
||||
Refer to <link xlink:href="https://manual.seafile.com/config/seahub_settings_py/" />
|
||||
for all available options.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
###### Implementation
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
|
||||
environment.etc."seafile/ccnet.conf".source = ccnetConf;
|
||||
environment.etc."seafile/seafile.conf".source = seafileConf;
|
||||
environment.etc."seafile/seahub_settings.py".source = seahubSettings;
|
||||
|
||||
systemd.targets.seafile = {
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
description = "Seafile components";
|
||||
};
|
||||
|
||||
systemd.services = let
|
||||
securityOptions = {
|
||||
ProtectHome = true;
|
||||
PrivateUsers = true;
|
||||
PrivateDevices = true;
|
||||
ProtectClock = true;
|
||||
ProtectHostname = true;
|
||||
ProtectProc = "invisible";
|
||||
ProtectKernelModules = true;
|
||||
ProtectKernelTunables = true;
|
||||
ProtectKernelLogs = true;
|
||||
ProtectControlGroups = true;
|
||||
RestrictNamespaces = true;
|
||||
LockPersonality = true;
|
||||
RestrictRealtime = true;
|
||||
RestrictSUIDSGID = true;
|
||||
MemoryDenyWriteExecute = true;
|
||||
SystemCallArchitectures = "native";
|
||||
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" ];
|
||||
};
|
||||
in {
|
||||
seaf-server = {
|
||||
description = "Seafile server";
|
||||
partOf = [ "seafile.target" ];
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "seafile.target" ];
|
||||
restartTriggers = [ ccnetConf seafileConf ];
|
||||
serviceConfig = securityOptions // {
|
||||
User = "seafile";
|
||||
Group = "seafile";
|
||||
DynamicUser = true;
|
||||
StateDirectory = "seafile";
|
||||
RuntimeDirectory = "seafile";
|
||||
LogsDirectory = "seafile";
|
||||
ConfigurationDirectory = "seafile";
|
||||
ExecStart = ''
|
||||
${cfg.seafilePackage}/bin/seaf-server \
|
||||
--foreground \
|
||||
-F /etc/seafile \
|
||||
-c ${ccnetDir} \
|
||||
-d ${dataDir} \
|
||||
-l /var/log/seafile/server.log \
|
||||
-P /run/seafile/server.pid \
|
||||
-p /run/seafile
|
||||
'';
|
||||
};
|
||||
preStart = ''
|
||||
if [ ! -f "${seafRoot}/server-setup" ]; then
|
||||
mkdir -p ${dataDir}/library-template
|
||||
mkdir -p ${ccnetDir}/{GroupMgr,misc,OrgMgr,PeerMgr}
|
||||
${pkgs.sqlite}/bin/sqlite3 ${ccnetDir}/GroupMgr/groupmgr.db ".read ${cfg.seafilePackage}/share/seafile/sql/sqlite/groupmgr.sql"
|
||||
${pkgs.sqlite}/bin/sqlite3 ${ccnetDir}/misc/config.db ".read ${cfg.seafilePackage}/share/seafile/sql/sqlite/config.sql"
|
||||
${pkgs.sqlite}/bin/sqlite3 ${ccnetDir}/OrgMgr/orgmgr.db ".read ${cfg.seafilePackage}/share/seafile/sql/sqlite/org.sql"
|
||||
${pkgs.sqlite}/bin/sqlite3 ${ccnetDir}/PeerMgr/usermgr.db ".read ${cfg.seafilePackage}/share/seafile/sql/sqlite/user.sql"
|
||||
${pkgs.sqlite}/bin/sqlite3 ${dataDir}/seafile.db ".read ${cfg.seafilePackage}/share/seafile/sql/sqlite/seafile.sql"
|
||||
echo "${cfg.seafilePackage.version}-sqlite" > "${seafRoot}"/server-setup
|
||||
fi
|
||||
# checking for upgrades and handling them
|
||||
# WARNING: needs to be extended to actually handle major version migrations
|
||||
installedMajor=$(cat "${seafRoot}/server-setup" | cut -d"-" -f1 | cut -d"." -f1)
|
||||
installedMinor=$(cat "${seafRoot}/server-setup" | cut -d"-" -f1 | cut -d"." -f2)
|
||||
pkgMajor=$(echo "${cfg.seafilePackage.version}" | cut -d"." -f1)
|
||||
pkgMinor=$(echo "${cfg.seafilePackage.version}" | cut -d"." -f2)
|
||||
if [ $installedMajor != $pkgMajor ] || [ $installedMinor != $pkgMinor ]; then
|
||||
echo "Unsupported upgrade" >&2
|
||||
exit 1
|
||||
fi
|
||||
'';
|
||||
};
|
||||
|
||||
seahub = let
|
||||
penv = (pkgs.python3.withPackages (ps: with ps; [ gunicorn seahub ]));
|
||||
in {
|
||||
description = "Seafile Server Web Frontend";
|
||||
wantedBy = [ "seafile.target" ];
|
||||
partOf = [ "seafile.target" ];
|
||||
after = [ "network.target" "seaf-server.service" ];
|
||||
requires = [ "seaf-server.service" ];
|
||||
restartTriggers = [ seahubSettings ];
|
||||
environment = {
|
||||
PYTHONPATH =
|
||||
"${pkgs.python3Packages.seahub}/thirdpart:${pkgs.python3Packages.seahub}:${penv}/${python.sitePackages}";
|
||||
DJANGO_SETTINGS_MODULE = "seahub.settings";
|
||||
CCNET_CONF_DIR = ccnetDir;
|
||||
SEAFILE_CONF_DIR = dataDir;
|
||||
SEAFILE_CENTRAL_CONF_DIR = "/etc/seafile";
|
||||
SEAFILE_RPC_PIPE_PATH = "/run/seafile";
|
||||
SEAHUB_LOG_DIR = "/var/log/seafile";
|
||||
};
|
||||
serviceConfig = securityOptions // {
|
||||
User = "seafile";
|
||||
Group = "seafile";
|
||||
DynamicUser = true;
|
||||
RuntimeDirectory = "seahub";
|
||||
StateDirectory = "seafile";
|
||||
LogsDirectory = "seafile";
|
||||
ConfigurationDirectory = "seafile";
|
||||
ExecStart = ''
|
||||
${penv}/bin/gunicorn seahub.wsgi:application \
|
||||
--name seahub \
|
||||
--workers ${toString cfg.workers} \
|
||||
--log-level=info \
|
||||
--preload \
|
||||
--timeout=1200 \
|
||||
--limit-request-line=8190 \
|
||||
--bind unix:/run/seahub/gunicorn.sock
|
||||
'';
|
||||
};
|
||||
preStart = ''
|
||||
mkdir -p ${seahubDir}/media
|
||||
# Link all media except avatars
|
||||
for m in `find ${pkgs.python3Packages.seahub}/media/ -maxdepth 1 -not -name "avatars"`; do
|
||||
ln -sf $m ${seahubDir}/media/
|
||||
done
|
||||
if [ ! -e "${seafRoot}/.seahubSecret" ]; then
|
||||
${penv}/bin/python ${pkgs.python3Packages.seahub}/tools/secret_key_generator.py > ${seafRoot}/.seahubSecret
|
||||
chmod 400 ${seafRoot}/.seahubSecret
|
||||
fi
|
||||
if [ ! -f "${seafRoot}/seahub-setup" ]; then
|
||||
# avatars directory should be writable
|
||||
install -D -t ${seahubDir}/media/avatars/ ${pkgs.python3Packages.seahub}/media/avatars/default.png
|
||||
install -D -t ${seahubDir}/media/avatars/groups ${pkgs.python3Packages.seahub}/media/avatars/groups/default.png
|
||||
# init database
|
||||
${pkgs.python3Packages.seahub}/manage.py migrate
|
||||
# create admin account
|
||||
${pkgs.expect}/bin/expect -c 'spawn ${pkgs.python3Packages.seahub}/manage.py createsuperuser --email=${cfg.adminEmail}; expect "Password: "; send "${cfg.initialAdminPassword}\r"; expect "Password (again): "; send "${cfg.initialAdminPassword}\r"; expect "Superuser created successfully."'
|
||||
echo "${pkgs.python3Packages.seahub.version}-sqlite" > "${seafRoot}/seahub-setup"
|
||||
fi
|
||||
if [ $(cat "${seafRoot}/seahub-setup" | cut -d"-" -f1) != "${pkgs.python3Packages.seahub.version}" ]; then
|
||||
# update database
|
||||
${pkgs.python3Packages.seahub}/manage.py migrate
|
||||
echo "${pkgs.python3Packages.seahub.version}-sqlite" > "${seafRoot}/seahub-setup"
|
||||
fi
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
|
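For reference, the seaf-server and seahub units above are driven by options referenced as cfg.adminEmail, cfg.initialAdminPassword, cfg.workers and cfg.seafilePackage. A minimal sketch of a configuration using them is shown below; the enable flag and the pkgs.seafile-server attribute are assumptions following the usual NixOS module pattern, not confirmed by this patch.

    { pkgs, ... }: {
      services.seafile = {
        enable = true;                        # assumed standard enable flag
        adminEmail = "admin@example.com";     # consumed by the createsuperuser expect script
        initialAdminPassword = "change-me";   # only used on first start of seahub
        workers = 2;                          # gunicorn --workers for the seahub unit
        seafilePackage = pkgs.seafile-server; # assumed package attribute; overrides the default
      };
    }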
@ -119,7 +119,7 @@ in
|
|||
];
|
||||
|
||||
# ProtectProc = "invisible"; # not supported by upstream yet
|
||||
# ProcSubset = "pid"; # not supported by upstream upstream yet
|
||||
# ProcSubset = "pid"; # not supported by upstream yet
|
||||
# PrivateUsers = true; # doesn't work with privileged ports therefore not supported by upstream
|
||||
|
||||
DynamicUser = true;
|
||||
|
|
|
@ -7,15 +7,20 @@ let
|
|||
inherit (config.environment) etc;
|
||||
apparmor = config.security.apparmor;
|
||||
rootDir = "/run/transmission";
|
||||
homeDir = "/var/lib/transmission";
|
||||
settingsDir = ".config/transmission-daemon";
|
||||
downloadsDir = "Downloads";
|
||||
incompleteDir = ".incomplete";
|
||||
watchDir = "watchdir";
|
||||
# TODO: switch to configGen.json once RFC0042 is implemented
|
||||
settingsFile = pkgs.writeText "settings.json" (builtins.toJSON cfg.settings);
|
||||
settingsFormat = pkgs.formats.json {};
|
||||
settingsFile = settingsFormat.generate "settings.json" cfg.settings;
|
||||
in
|
||||
{
|
||||
imports = [
|
||||
(mkRenamedOptionModule ["services" "transmission" "port"]
|
||||
["services" "transmission" "settings" "rpc-port"])
|
||||
(mkAliasOptionModule ["services" "transmission" "openFirewall"]
|
||||
["services" "transmission" "openPeerPorts"])
|
||||
];
|
||||
options = {
|
||||
services.transmission = {
|
||||
enable = mkEnableOption ''the headless Transmission BitTorrent daemon.
|
||||
|
@ -24,48 +29,141 @@ in
|
|||
transmission-remote, the WebUI (http://127.0.0.1:9091/ by default),
|
||||
or other clients like stig or tremc.
|
||||
|
||||
Torrents are downloaded to ${homeDir}/${downloadsDir} by default and are
|
||||
Torrents are downloaded to <xref linkend="opt-services.transmission.home"/>/${downloadsDir} by default and are
|
||||
accessible to users in the "transmission" group'';
|
||||
|
||||
settings = mkOption rec {
|
||||
# TODO: switch to types.config.json as prescribed by RFC0042 once it's implemented
|
||||
type = types.attrs;
|
||||
apply = recursiveUpdate default;
|
||||
default =
|
||||
{
|
||||
download-dir = "${cfg.home}/${downloadsDir}";
|
||||
incomplete-dir = "${cfg.home}/${incompleteDir}";
|
||||
incomplete-dir-enabled = true;
|
||||
watch-dir = "${cfg.home}/${watchDir}";
|
||||
watch-dir-enabled = false;
|
||||
message-level = 1;
|
||||
peer-port = 51413;
|
||||
peer-port-random-high = 65535;
|
||||
peer-port-random-low = 49152;
|
||||
peer-port-random-on-start = false;
|
||||
rpc-bind-address = "127.0.0.1";
|
||||
rpc-port = 9091;
|
||||
script-torrent-done-enabled = false;
|
||||
script-torrent-done-filename = "";
|
||||
umask = 2; # 0o002 in decimal as expected by Transmission
|
||||
utp-enabled = true;
|
||||
};
|
||||
example =
|
||||
{
|
||||
download-dir = "/srv/torrents/";
|
||||
incomplete-dir = "/srv/torrents/.incomplete/";
|
||||
incomplete-dir-enabled = true;
|
||||
rpc-whitelist = "127.0.0.1,192.168.*.*";
|
||||
};
|
||||
settings = mkOption {
|
||||
description = ''
|
||||
Attribute set whose fields overwrites fields in
|
||||
Settings whose options overwrite fields in
|
||||
<literal>.config/transmission-daemon/settings.json</literal>
|
||||
(each time the service starts). String values must be quoted, integer and
|
||||
boolean values must not.
|
||||
(each time the service starts).
|
||||
|
||||
See <link xlink:href="https://github.com/transmission/transmission/wiki/Editing-Configuration-Files">Transmission's Wiki</link>
|
||||
for documentation.
|
||||
for documentation of settings not explicitly covered by this module.
|
||||
'';
|
||||
default = {};
|
||||
type = types.submodule {
|
||||
freeformType = settingsFormat.type;
|
||||
options.download-dir = mkOption {
|
||||
type = types.path;
|
||||
default = "${cfg.home}/${downloadsDir}";
|
||||
description = "Directory where to download torrents.";
|
||||
};
|
||||
options.incomplete-dir = mkOption {
|
||||
type = types.path;
|
||||
default = "${cfg.home}/${incompleteDir}";
|
||||
description = ''
|
||||
When enabled with
|
||||
services.transmission.home
|
||||
<xref linkend="opt-services.transmission.settings.incomplete-dir-enabled"/>,
|
||||
new torrents will download the files to this directory.
|
||||
When complete, the files will be moved to download-dir
|
||||
<xref linkend="opt-services.transmission.settings.download-dir"/>.
|
||||
'';
|
||||
};
|
||||
options.incomplete-dir-enabled = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = "";
|
||||
};
|
||||
options.message-level = mkOption {
|
||||
type = types.ints.between 0 2;
|
||||
default = 2;
|
||||
description = "Set verbosity of transmission messages.";
|
||||
};
|
||||
options.peer-port = mkOption {
|
||||
type = types.port;
|
||||
default = 51413;
|
||||
description = "The peer port to listen for incoming connections.";
|
||||
};
|
||||
options.peer-port-random-high = mkOption {
|
||||
type = types.port;
|
||||
default = 65535;
|
||||
description = ''
|
||||
The maximum peer port to listen to for incoming connections
|
||||
when <xref linkend="opt-services.transmission.settings.peer-port-random-on-start"/> is enabled.
|
||||
'';
|
||||
};
|
||||
options.peer-port-random-low = mkOption {
|
||||
type = types.port;
|
||||
default = 49152;
|
||||
description = ''
|
||||
The minimal peer port to listen to for incoming connections
|
||||
when <xref linkend="opt-services.transmission.settings.peer-port-random-on-start"/> is enabled.
|
||||
'';
|
||||
};
|
||||
options.peer-port-random-on-start = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = "Randomize the peer port.";
|
||||
};
|
||||
options.rpc-bind-address = mkOption {
|
||||
type = types.str;
|
||||
default = "127.0.0.1";
|
||||
example = "0.0.0.0";
|
||||
description = ''
|
||||
Where to listen for RPC connections.
|
||||
Use \"0.0.0.0\" to listen on all interfaces.
|
||||
'';
|
||||
};
|
||||
options.rpc-port = mkOption {
|
||||
type = types.port;
|
||||
default = 9091;
|
||||
description = "The RPC port to listen to.";
|
||||
};
|
||||
options.script-torrent-done-enabled = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to run
|
||||
<xref linkend="opt-services.transmission.settings.script-torrent-done-filename"/>
|
||||
at torrent completion.
|
||||
'';
|
||||
};
|
||||
options.script-torrent-done-filename = mkOption {
|
||||
type = types.nullOr types.path;
|
||||
default = null;
|
||||
description = "Executable to be run at torrent completion.";
|
||||
};
|
||||
options.umask = mkOption {
|
||||
type = types.int;
|
||||
default = 2;
|
||||
description = ''
|
||||
Sets transmission's file mode creation mask.
|
||||
See the umask(2) manpage for more information.
|
||||
Users who want their saved torrents to be world-writable
|
||||
may want to set this value to 0.
|
||||
Bear in mind that the json markup language only accepts numbers in base 10,
|
||||
so the standard umask(2) octal notation "022" is written in settings.json as 18.
|
||||
'';
|
||||
};
|
||||
options.utp-enabled = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Whether to enable <link xlink:href="http://en.wikipedia.org/wiki/Micro_Transport_Protocol">Micro Transport Protocol (µTP)</link>.
|
||||
'';
|
||||
};
|
||||
options.watch-dir = mkOption {
|
||||
type = types.path;
|
||||
default = "${cfg.home}/${watchDir}";
|
||||
description = "Watch a directory for torrent files and add them to transmission.";
|
||||
};
|
||||
options.watch-dir-enabled = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''Whether to enable the
|
||||
<xref linkend="opt-services.transmission.settings.watch-dir"/>.
|
||||
'';
|
||||
};
|
||||
options.trash-original-torrent-files = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''Whether to delete torrents added from the
|
||||
<xref linkend="opt-services.transmission.settings.watch-dir"/>.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
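Because the settings option above is a freeform submodule, the typed options it declares can be mixed with any other key that Transmission's settings.json understands. A hedged sketch of a user configuration (values are illustrative only):

    { ... }: {
      services.transmission = {
        enable = true;
        openPeerPorts = true;               # open peer-port (or the random range) in the firewall
        settings = {
          download-dir = "/srv/torrents";
          incomplete-dir-enabled = true;
          rpc-port = 9091;
          peer-port-random-on-start = true; # picks a port between -low and -high
          rpc-whitelist = "127.0.0.1";      # freeform key, passed straight through to settings.json
        };
      };
    }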
|
||||
downloadDirPermissions = mkOption {
|
||||
|
@ -74,31 +172,22 @@ in
|
|||
example = "775";
|
||||
description = ''
|
||||
The permissions set by <literal>systemd.activationScripts.transmission-daemon</literal>
|
||||
on the directories <link linkend="opt-services.transmission.settings">settings.download-dir</link>
|
||||
and <link linkend="opt-services.transmission.settings">settings.incomplete-dir</link>.
|
||||
on the directories <xref linkend="opt-services.transmission.settings.download-dir"/>
|
||||
and <xref linkend="opt-services.transmission.settings.incomplete-dir"/>.
|
||||
Note that you may also want to change
|
||||
<link linkend="opt-services.transmission.settings">settings.umask</link>.
|
||||
'';
|
||||
};
|
||||
|
||||
port = mkOption {
|
||||
type = types.port;
|
||||
description = ''
|
||||
TCP port number to run the RPC/web interface.
|
||||
|
||||
If instead you want to change the peer port,
|
||||
use <link linkend="opt-services.transmission.settings">settings.peer-port</link>
|
||||
or <link linkend="opt-services.transmission.settings">settings.peer-port-random-on-start</link>.
|
||||
<xref linkend="opt-services.transmission.settings.umask"/>.
|
||||
'';
|
||||
};
|
||||
|
||||
home = mkOption {
|
||||
type = types.path;
|
||||
default = homeDir;
|
||||
default = "/var/lib/transmission";
|
||||
description = ''
|
||||
The directory where Transmission will create <literal>${settingsDir}</literal>.
|
||||
as well as <literal>${downloadsDir}/</literal> unless <link linkend="opt-services.transmission.settings">settings.download-dir</link> is changed,
|
||||
and <literal>${incompleteDir}/</literal> unless <link linkend="opt-services.transmission.settings">settings.incomplete-dir</link> is changed.
|
||||
as well as <literal>${downloadsDir}/</literal> unless
|
||||
<xref linkend="opt-services.transmission.settings.download-dir"/> is changed,
|
||||
and <literal>${incompleteDir}/</literal> unless
|
||||
<xref linkend="opt-services.transmission.settings.incomplete-dir"/> is changed.
|
||||
'';
|
||||
};
|
||||
|
||||
|
@ -119,19 +208,30 @@ in
|
|||
description = ''
|
||||
Path to a JSON file to be merged with the settings.
|
||||
Useful to merge a file which is better kept out of the Nix store
|
||||
because it contains sensible data like <link linkend="opt-services.transmission.settings">settings.rpc-password</link>.
|
||||
to set secret config parameters like <code>rpc-password</code>.
|
||||
'';
|
||||
default = "/dev/null";
|
||||
example = "/var/lib/secrets/transmission/settings.json";
|
||||
};
|
||||
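As the description above says, credentialsFile is merged into the generated settings at service start, which is how secrets such as rpc-password stay out of the Nix store. A sketch of the pairing; the JSON contents are purely illustrative:

    { ... }: {
      # The file is provisioned out of band (deployment tool, secret manager, etc.), e.g.:
      #   { "rpc-password": "not-in-the-nix-store" }
      services.transmission.credentialsFile =
        "/var/lib/secrets/transmission/settings.json";
    }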
|
||||
openFirewall = mkEnableOption "opening of the peer port(s) in the firewall";
|
||||
extraFlags = mkOption {
|
||||
type = types.listOf types.str;
|
||||
default = [];
|
||||
example = [ "--log-debug" ];
|
||||
description = ''
|
||||
Extra flags passed to the transmission command in the service definition.
|
||||
'';
|
||||
};
|
||||
|
||||
openPeerPorts = mkEnableOption "opening of the peer port(s) in the firewall";
|
||||
|
||||
openRPCPort = mkEnableOption "opening of the RPC port in the firewall";
|
||||
|
||||
performanceNetParameters = mkEnableOption ''tweaking of kernel parameters
|
||||
to open many more connections at the same time.
|
||||
|
||||
Note that you may also want to increase
|
||||
<link linkend="opt-services.transmission.settings">settings.peer-limit-global</link>.
|
||||
<code>peer-limit-global</code>.
|
||||
And be aware that these settings are quite aggressive
|
||||
and might not suite your regular desktop use.
|
||||
For instance, SSH sessions may time out more easily'';
|
||||
|
@ -156,34 +256,6 @@ in
|
|||
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.watch-dir}'
|
||||
'';
|
||||
|
||||
assertions = [
|
||||
{ assertion = builtins.match "^/.*" cfg.home != null;
|
||||
message = "`services.transmission.home' must be an absolute path.";
|
||||
}
|
||||
{ assertion = types.path.check cfg.settings.download-dir;
|
||||
message = "`services.transmission.settings.download-dir' must be an absolute path.";
|
||||
}
|
||||
{ assertion = types.path.check cfg.settings.incomplete-dir;
|
||||
message = "`services.transmission.settings.incomplete-dir' must be an absolute path.";
|
||||
}
|
||||
{ assertion = types.path.check cfg.settings.watch-dir;
|
||||
message = "`services.transmission.settings.watch-dir' must be an absolute path.";
|
||||
}
|
||||
{ assertion = cfg.settings.script-torrent-done-filename == "" || types.path.check cfg.settings.script-torrent-done-filename;
|
||||
message = "`services.transmission.settings.script-torrent-done-filename' must be an absolute path.";
|
||||
}
|
||||
{ assertion = types.port.check cfg.settings.rpc-port;
|
||||
message = "${toString cfg.settings.rpc-port} is not a valid port number for `services.transmission.settings.rpc-port`.";
|
||||
}
|
||||
# In case both port and settings.rpc-port are explicitely defined: they must be the same.
|
||||
{ assertion = !options.services.transmission.port.isDefined || cfg.port == cfg.settings.rpc-port;
|
||||
message = "`services.transmission.port' is not equal to `services.transmission.settings.rpc-port'";
|
||||
}
|
||||
];
|
||||
|
||||
services.transmission.settings =
|
||||
optionalAttrs options.services.transmission.port.isDefined { rpc-port = cfg.port; };
|
||||
|
||||
systemd.services.transmission = {
|
||||
description = "Transmission BitTorrent Service";
|
||||
after = [ "network.target" ] ++ optional apparmor.enable "apparmor.service";
|
||||
|
@ -199,15 +271,13 @@ in
|
|||
install -D -m 600 -o '${cfg.user}' -g '${cfg.group}' /dev/stdin \
|
||||
'${cfg.home}/${settingsDir}/settings.json'
|
||||
'')];
|
||||
ExecStart="${pkgs.transmission}/bin/transmission-daemon -f -g ${cfg.home}/${settingsDir}";
|
||||
ExecStart="${pkgs.transmission}/bin/transmission-daemon -f -g ${cfg.home}/${settingsDir} ${escapeShellArgs cfg.extraFlags}";
|
||||
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
|
||||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
# Create rootDir in the host's mount namespace.
|
||||
RuntimeDirectory = [(baseNameOf rootDir)];
|
||||
RuntimeDirectoryMode = "755";
|
||||
# Avoid mounting rootDir in the own rootDir of ExecStart='s mount namespace.
|
||||
InaccessiblePaths = ["-+${rootDir}"];
|
||||
# This is for BindPaths= and BindReadOnlyPaths=
|
||||
# to allow traversal of directories they create in RootDirectory=.
|
||||
UMask = "0066";
|
||||
|
@ -228,11 +298,9 @@ in
|
|||
cfg.settings.download-dir
|
||||
] ++
|
||||
optional cfg.settings.incomplete-dir-enabled
|
||||
cfg.settings.incomplete-dir
|
||||
++
|
||||
optional cfg.settings.watch-dir-enabled
|
||||
cfg.settings.watch-dir
|
||||
;
|
||||
cfg.settings.incomplete-dir ++
|
||||
optional (cfg.settings.watch-dir-enabled && cfg.settings.trash-original-torrent-files)
|
||||
cfg.settings.watch-dir;
|
||||
BindReadOnlyPaths = [
|
||||
# No confinement done of /nix/store here like in systemd-confinement.nix,
|
||||
# an AppArmor profile is provided to get a confinement based upon paths and rights.
|
||||
|
@ -241,8 +309,10 @@ in
|
|||
"/run"
|
||||
] ++
|
||||
optional (cfg.settings.script-torrent-done-enabled &&
|
||||
cfg.settings.script-torrent-done-filename != "")
|
||||
cfg.settings.script-torrent-done-filename;
|
||||
cfg.settings.script-torrent-done-filename != null)
|
||||
cfg.settings.script-torrent-done-filename ++
|
||||
optional (cfg.settings.watch-dir-enabled && !cfg.settings.trash-original-torrent-files)
|
||||
cfg.settings.watch-dir;
|
||||
# The following options are only for optimizing:
|
||||
# systemd-analyze security transmission
|
||||
AmbientCapabilities = "";
|
||||
|
@ -287,7 +357,6 @@ in
|
|||
"quotactl"
|
||||
];
|
||||
SystemCallArchitectures = "native";
|
||||
SystemCallErrorNumber = "EPERM";
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -309,7 +378,8 @@ in
|
|||
};
|
||||
});
|
||||
|
||||
networking.firewall = mkIf cfg.openFirewall (
|
||||
networking.firewall = mkMerge [
|
||||
(mkIf cfg.openPeerPorts (
|
||||
if cfg.settings.peer-port-random-on-start
|
||||
then
|
||||
{ allowedTCPPortRanges =
|
||||
|
@ -327,7 +397,9 @@ in
|
|||
{ allowedTCPPorts = [ cfg.settings.peer-port ];
|
||||
allowedUDPPorts = [ cfg.settings.peer-port ];
|
||||
}
|
||||
);
|
||||
))
|
||||
(mkIf cfg.openRPCPort { allowedTCPPorts = [ cfg.settings.rpc-port ]; })
|
||||
];
|
||||
|
||||
boot.kernel.sysctl = mkMerge [
|
||||
# Transmission uses a single UDP socket in order to implement multiple uTP sockets,
|
||||
|
@ -342,21 +414,21 @@ in
|
|||
# Increase the number of available source (local) TCP and UDP ports to 49151.
|
||||
# Usual default is 32768 60999, ie. 28231 ports.
|
||||
# Find out your current usage with: ss -s
|
||||
"net.ipv4.ip_local_port_range" = "16384 65535";
|
||||
"net.ipv4.ip_local_port_range" = mkDefault "16384 65535";
|
||||
# Timeout faster generic TCP states.
|
||||
# Usual default is 600.
|
||||
# Find out your current usage with: watch -n 1 netstat -nptuo
|
||||
"net.netfilter.nf_conntrack_generic_timeout" = 60;
|
||||
"net.netfilter.nf_conntrack_generic_timeout" = mkDefault 60;
|
||||
# Timeout faster established but inactive connections.
|
||||
# Usual default is 432000.
|
||||
"net.netfilter.nf_conntrack_tcp_timeout_established" = 600;
|
||||
"net.netfilter.nf_conntrack_tcp_timeout_established" = mkDefault 600;
|
||||
# Clear immediately TCP states after timeout.
|
||||
# Usual default is 120.
|
||||
"net.netfilter.nf_conntrack_tcp_timeout_time_wait" = 1;
|
||||
"net.netfilter.nf_conntrack_tcp_timeout_time_wait" = mkDefault 1;
|
||||
# Increase the number of trackable connections.
|
||||
# Usual default is 262144.
|
||||
# Find out your current usage with: conntrack -C
|
||||
"net.netfilter.nf_conntrack_max" = 1048576;
|
||||
"net.netfilter.nf_conntrack_max" = mkDefault 1048576;
|
||||
})
|
||||
];
|
||||
|
||||
|
@ -372,7 +444,7 @@ in
|
|||
rw ${cfg.settings.incomplete-dir}/**,
|
||||
''}
|
||||
${optionalString cfg.settings.watch-dir-enabled ''
|
||||
rw ${cfg.settings.watch-dir}/**,
|
||||
r${optionalString cfg.settings.trash-original-torrent-files "w"} ${cfg.settings.watch-dir}/**,
|
||||
''}
|
||||
profile dirs {
|
||||
rw ${cfg.settings.download-dir}/**,
|
||||
|
@ -380,12 +452,12 @@ in
|
|||
rw ${cfg.settings.incomplete-dir}/**,
|
||||
''}
|
||||
${optionalString cfg.settings.watch-dir-enabled ''
|
||||
rw ${cfg.settings.watch-dir}/**,
|
||||
r${optionalString cfg.settings.trash-original-torrent-files "w"} ${cfg.settings.watch-dir}/**,
|
||||
''}
|
||||
}
|
||||
|
||||
${optionalString (cfg.settings.script-torrent-done-enabled &&
|
||||
cfg.settings.script-torrent-done-filename != "") ''
|
||||
cfg.settings.script-torrent-done-filename != null) ''
|
||||
# Stack transmission_directories profile on top of
|
||||
# any existing profile for script-torrent-done-filename
|
||||
# FIXME: to be tested as I'm not sure it works well with NoNewPrivileges=
|
||||
|
|
|
@ -221,7 +221,7 @@ in {
|
|||
|
||||
assertions = [
|
||||
{ assertion = db.createLocally -> db.user == user;
|
||||
message = "services.bookstack.database.user must be set to ${user} if services.mediawiki.database.createLocally is set true.";
|
||||
message = "services.bookstack.database.user must be set to ${user} if services.bookstack.database.createLocally is set true.";
|
||||
}
|
||||
{ assertion = db.createLocally -> db.passwordFile == null;
|
||||
message = "services.bookstack.database.passwordFile cannot be specified if services.bookstack.database.createLocally is set to true.";
|
||||
|
|
|
@ -38,6 +38,10 @@ let
|
|||
};
|
||||
bosh = "//${cfg.hostName}/http-bind";
|
||||
websocket = "wss://${cfg.hostName}/xmpp-websocket";
|
||||
|
||||
fileRecordingsEnabled = true;
|
||||
liveStreamingEnabled = true;
|
||||
hiddenDomain = "recorder.${cfg.hostName}";
|
||||
};
|
||||
in
|
||||
{
|
||||
|
@ -48,7 +52,7 @@ in
|
|||
type = str;
|
||||
example = "meet.example.org";
|
||||
description = ''
|
||||
Hostname of the Jitsi Meet instance.
|
||||
FQDN of the Jitsi Meet instance.
|
||||
'';
|
||||
};
|
||||
|
||||
|
@ -130,6 +134,17 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
jibri.enable = mkOption {
|
||||
type = bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to enable a Jibri instance and configure it to connect to Prosody.
|
||||
|
||||
Additional configuration is possible with <option>services.jibri</option>, and
|
||||
<option>services.jibri.finalizeScript</option> is especially useful.
|
||||
'';
|
||||
};
|
||||
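The jibri.enable flag above makes the module register the jibri and recorder accounts and start a matching Jibri instance; a hedged sketch of turning it on (the host name is illustrative):

    { ... }: {
      services.jitsi-meet = {
        enable = true;
        hostName = "meet.example.org";  # FQDN, as in the option example above
        jibri.enable = true;
      };
      # Recording behaviour can then be tuned through services.jibri.*,
      # for instance services.jibri.finalizeScript as noted in the description.
    }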
|
||||
nginx.enable = mkOption {
|
||||
type = bool;
|
||||
default = true;
|
||||
|
@ -229,6 +244,14 @@ in
|
|||
key = "/var/lib/jitsi-meet/jitsi-meet.key";
|
||||
};
|
||||
};
|
||||
virtualHosts."recorder.${cfg.hostName}" = {
|
||||
enabled = true;
|
||||
domain = "recorder.${cfg.hostName}";
|
||||
extraConfig = ''
|
||||
authentication = "internal_plain"
|
||||
c2s_require_encryption = false
|
||||
'';
|
||||
};
|
||||
};
|
||||
systemd.services.prosody.serviceConfig = mkIf cfg.prosody.enable {
|
||||
EnvironmentFile = [ "/var/lib/jitsi-meet/secrets-env" ];
|
||||
|
@ -243,12 +266,13 @@ in
|
|||
systemd.services.jitsi-meet-init-secrets = {
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
before = [ "jicofo.service" "jitsi-videobridge2.service" ] ++ (optional cfg.prosody.enable "prosody.service");
|
||||
path = [ config.services.prosody.package ];
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
};
|
||||
|
||||
script = let
|
||||
secrets = [ "jicofo-component-secret" "jicofo-user-secret" ] ++ (optional (cfg.videobridge.passwordFile == null) "videobridge-secret");
|
||||
secrets = [ "jicofo-component-secret" "jicofo-user-secret" "jibri-auth-secret" "jibri-recorder-secret" ] ++ (optional (cfg.videobridge.passwordFile == null) "videobridge-secret");
|
||||
videobridgeSecret = if cfg.videobridge.passwordFile != null then cfg.videobridge.passwordFile else "/var/lib/jitsi-meet/videobridge-secret";
|
||||
in
|
||||
''
|
||||
|
@ -267,9 +291,11 @@ in
|
|||
chmod 640 secrets-env
|
||||
''
|
||||
+ optionalString cfg.prosody.enable ''
|
||||
${config.services.prosody.package}/bin/prosodyctl register focus auth.${cfg.hostName} "$(cat /var/lib/jitsi-meet/jicofo-user-secret)"
|
||||
${config.services.prosody.package}/bin/prosodyctl register jvb auth.${cfg.hostName} "$(cat ${videobridgeSecret})"
|
||||
${config.services.prosody.package}/bin/prosodyctl mod_roster_command subscribe focus.${cfg.hostName} focus@auth.${cfg.hostName}
|
||||
prosodyctl register focus auth.${cfg.hostName} "$(cat /var/lib/jitsi-meet/jicofo-user-secret)"
|
||||
prosodyctl register jvb auth.${cfg.hostName} "$(cat ${videobridgeSecret})"
|
||||
prosodyctl mod_roster_command subscribe focus.${cfg.hostName} focus@auth.${cfg.hostName}
|
||||
prosodyctl register jibri auth.${cfg.hostName} "$(cat /var/lib/jitsi-meet/jibri-auth-secret)"
|
||||
prosodyctl register recorder recorder.${cfg.hostName} "$(cat /var/lib/jitsi-meet/jibri-recorder-secret)"
|
||||
|
||||
# generate self-signed certificates
|
||||
if [ ! -f /var/lib/jitsi-meet.crt ]; then
|
||||
|
@ -380,8 +406,43 @@ in
|
|||
userPasswordFile = "/var/lib/jitsi-meet/jicofo-user-secret";
|
||||
componentPasswordFile = "/var/lib/jitsi-meet/jicofo-component-secret";
|
||||
bridgeMuc = "jvbbrewery@internal.${cfg.hostName}";
|
||||
config = {
|
||||
config = mkMerge [{
|
||||
"org.jitsi.jicofo.ALWAYS_TRUST_MODE_ENABLED" = "true";
|
||||
#} (lib.mkIf cfg.jibri.enable {
|
||||
} (lib.mkIf (config.services.jibri.enable || cfg.jibri.enable) {
|
||||
"org.jitsi.jicofo.jibri.BREWERY" = "JibriBrewery@internal.${cfg.hostName}";
|
||||
"org.jitsi.jicofo.jibri.PENDING_TIMEOUT" = "90";
|
||||
})];
|
||||
};
|
||||
|
||||
services.jibri = mkIf cfg.jibri.enable {
|
||||
enable = true;
|
||||
|
||||
xmppEnvironments."jitsi-meet" = {
|
||||
xmppServerHosts = [ "localhost" ];
|
||||
xmppDomain = cfg.hostName;
|
||||
|
||||
control.muc = {
|
||||
domain = "internal.${cfg.hostName}";
|
||||
roomName = "JibriBrewery";
|
||||
nickname = "jibri";
|
||||
};
|
||||
|
||||
control.login = {
|
||||
domain = "auth.${cfg.hostName}";
|
||||
username = "jibri";
|
||||
passwordFile = "/var/lib/jitsi-meet/jibri-auth-secret";
|
||||
};
|
||||
|
||||
call.login = {
|
||||
domain = "recorder.${cfg.hostName}";
|
||||
username = "recorder";
|
||||
passwordFile = "/var/lib/jitsi-meet/jibri-recorder-secret";
|
||||
};
|
||||
|
||||
usageTimeout = "0";
|
||||
disableCertificateVerification = true;
|
||||
stripFromRoomDomain = "conference.";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -344,7 +344,7 @@ in {
|
|||
authenticate = lib.mkOption {
|
||||
description = "Authenticate with the SMTP server using username and password.";
|
||||
type = lib.types.bool;
|
||||
default = true;
|
||||
default = false;
|
||||
};
|
||||
|
||||
host = lib.mkOption {
|
||||
|
@ -596,6 +596,7 @@ in {
|
|||
|
||||
services.postfix = lib.mkIf (cfg.smtp.createLocally && cfg.smtp.host == "127.0.0.1") {
|
||||
enable = true;
|
||||
hostname = lib.mkDefault "${cfg.localDomain}";
|
||||
};
|
||||
services.redis = lib.mkIf (cfg.redis.createLocally && cfg.redis.host == "127.0.0.1") {
|
||||
enable = true;
|
||||
|
|
|
@ -153,7 +153,7 @@ in {
|
|||
package = mkOption {
|
||||
type = types.package;
|
||||
description = "Which package to use for the Nextcloud instance.";
|
||||
relatedPackages = [ "nextcloud20" "nextcloud21" "nextcloud22" ];
|
||||
relatedPackages = [ "nextcloud21" "nextcloud22" ];
|
||||
};
|
||||
phpPackage = mkOption {
|
||||
type = types.package;
|
||||
|
@ -507,13 +507,7 @@ in {
|
|||
};
|
||||
|
||||
config = mkIf cfg.enable (mkMerge [
|
||||
{ assertions = let acfg = cfg.config; in [
|
||||
{ assertion = versionOlder cfg.package.version "21" -> cfg.config.defaultPhoneRegion == null;
|
||||
message = "The `defaultPhoneRegion'-setting is only supported for Nextcloud >=21!";
|
||||
}
|
||||
];
|
||||
|
||||
warnings = let
|
||||
{ warnings = let
|
||||
latest = 22;
|
||||
upgradeWarning = major: nixos:
|
||||
''
|
||||
|
@ -547,7 +541,6 @@ in {
|
|||
Using config.services.nextcloud.poolConfig is deprecated and will become unsupported in a future release.
|
||||
Please migrate your configuration to config.services.nextcloud.poolSettings.
|
||||
'')
|
||||
++ (optional (versionOlder cfg.package.version "20") (upgradeWarning 19 "21.05"))
|
||||
++ (optional (versionOlder cfg.package.version "21") (upgradeWarning 20 "21.05"))
|
||||
++ (optional (versionOlder cfg.package.version "22") (upgradeWarning 21 "21.11"))
|
||||
++ (optional isUnsupportedMariadb ''
|
||||
|
@ -574,7 +567,11 @@ in {
|
|||
# This versionOlder statement remains set to 21.03 for backwards compatibility.
|
||||
# See https://github.com/NixOS/nixpkgs/pull/108899 and
|
||||
# https://github.com/NixOS/rfcs/blob/master/rfcs/0080-nixos-release-schedule.md.
|
||||
else if versionOlder stateVersion "21.03" then nextcloud19
|
||||
# FIXME(@Ma27) remove this else-if as soon as 21.05 is EOL! This is only here
|
||||
# to ensure that users who are on Nextcloud 19 with a stateVersion <21.05 with
|
||||
# no explicit services.nextcloud.package don't upgrade to v21 by accident (
|
||||
# nextcloud20 throws an eval-error because it's dropped).
|
||||
else if versionOlder stateVersion "21.03" then nextcloud20
|
||||
else if versionOlder stateVersion "21.11" then nextcloud21
|
||||
else nextcloud22
|
||||
);
|
||||
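The fallback chain above only applies when services.nextcloud.package is left unset; pinning the package explicitly avoids the stateVersion heuristics and the accidental-upgrade case the comment warns about. A minimal sketch, using one of the attributes from relatedPackages:

    { pkgs, ... }: {
      services.nextcloud.package = pkgs.nextcloud22;
    }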
|
|
447
third_party/nixpkgs/nixos/modules/services/web-apps/peertube.nix
vendored
Normal file
|
@ -0,0 +1,447 @@
|
|||
{ lib, pkgs, config, ... }:
|
||||
|
||||
let
|
||||
cfg = config.services.peertube;
|
||||
|
||||
settingsFormat = pkgs.formats.json {};
|
||||
configFile = settingsFormat.generate "production.json" cfg.settings;
|
||||
|
||||
env = {
|
||||
NODE_CONFIG_DIR = "/var/lib/peertube/config";
|
||||
NODE_ENV = "production";
|
||||
NODE_EXTRA_CA_CERTS = "/etc/ssl/certs/ca-certificates.crt";
|
||||
NPM_CONFIG_PREFIX = cfg.package;
|
||||
HOME = cfg.package;
|
||||
};
|
||||
|
||||
systemCallsList = [ "@cpu-emulation" "@debug" "@keyring" "@ipc" "@memlock" "@mount" "@obsolete" "@privileged" "@setuid" ];
|
||||
|
||||
cfgService = {
|
||||
# Proc filesystem
|
||||
ProcSubset = "pid";
|
||||
ProtectProc = "invisible";
|
||||
# Access write directories
|
||||
UMask = "0027";
|
||||
# Capabilities
|
||||
CapabilityBoundingSet = "";
|
||||
# Security
|
||||
NoNewPrivileges = true;
|
||||
# Sandboxing
|
||||
ProtectSystem = "strict";
|
||||
ProtectHome = true;
|
||||
PrivateTmp = true;
|
||||
PrivateDevices = true;
|
||||
PrivateUsers = true;
|
||||
ProtectClock = true;
|
||||
ProtectHostname = true;
|
||||
ProtectKernelLogs = true;
|
||||
ProtectKernelModules = true;
|
||||
ProtectKernelTunables = true;
|
||||
ProtectControlGroups = true;
|
||||
RestrictNamespaces = true;
|
||||
LockPersonality = true;
|
||||
RestrictRealtime = true;
|
||||
RestrictSUIDSGID = true;
|
||||
RemoveIPC = true;
|
||||
PrivateMounts = true;
|
||||
# System Call Filtering
|
||||
SystemCallArchitectures = "native";
|
||||
};
|
||||
|
||||
envFile = pkgs.writeText "peertube.env" (lib.concatMapStrings (s: s + "\n") (
|
||||
(lib.concatLists (lib.mapAttrsToList (name: value:
|
||||
if value != null then [
|
||||
"${name}=\"${toString value}\""
|
||||
] else []
|
||||
) env))));
|
||||
|
||||
peertubeEnv = pkgs.writeShellScriptBin "peertube-env" ''
|
||||
set -a
|
||||
source "${envFile}"
|
||||
eval -- "\$@"
|
||||
'';
|
||||
|
||||
peertubeCli = pkgs.writeShellScriptBin "peertube" ''
|
||||
node ~/dist/server/tools/peertube.js $@
|
||||
'';
|
||||
|
||||
in {
|
||||
options.services.peertube = {
|
||||
enable = lib.mkEnableOption "Peertube’s service";
|
||||
|
||||
user = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "peertube";
|
||||
description = "User account under which Peertube runs.";
|
||||
};
|
||||
|
||||
group = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "peertube";
|
||||
description = "Group under which Peertube runs.";
|
||||
};
|
||||
|
||||
localDomain = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
example = "peertube.example.com";
|
||||
description = "The domain serving your PeerTube instance.";
|
||||
};
|
||||
|
||||
listenHttp = lib.mkOption {
|
||||
type = lib.types.int;
|
||||
default = 9000;
|
||||
description = "listen port for HTTP server.";
|
||||
};
|
||||
|
||||
listenWeb = lib.mkOption {
|
||||
type = lib.types.int;
|
||||
default = 9000;
|
||||
description = "listen port for WEB server.";
|
||||
};
|
||||
|
||||
enableWebHttps = lib.mkOption {
|
||||
type = lib.types.bool;
|
||||
default = false;
|
||||
description = "Enable or disable HTTPS protocol.";
|
||||
};
|
||||
|
||||
dataDirs = lib.mkOption {
|
||||
type = lib.types.listOf lib.types.path;
|
||||
default = [ ];
|
||||
example = [ "/opt/peertube/storage" "/var/cache/peertube" ];
|
||||
description = "Allow access to custom data locations.";
|
||||
};
|
||||
|
||||
serviceEnvironmentFile = lib.mkOption {
|
||||
type = lib.types.nullOr lib.types.path;
|
||||
default = null;
|
||||
example = "/run/keys/peertube/password-init-root";
|
||||
description = ''
|
||||
Set environment variables for the service. Mainly useful for setting the initial root password.
|
||||
For example, the file could contain:
|
||||
PT_INITIAL_ROOT_PASSWORD=changeme
|
||||
'';
|
||||
};
|
||||
|
||||
settings = lib.mkOption {
|
||||
type = settingsFormat.type;
|
||||
example = lib.literalExpression ''
|
||||
{
|
||||
listen = {
|
||||
hostname = "0.0.0.0";
|
||||
};
|
||||
log = {
|
||||
level = "debug";
|
||||
};
|
||||
storage = {
|
||||
tmp = "/opt/data/peertube/storage/tmp/";
|
||||
logs = "/opt/data/peertube/storage/logs/";
|
||||
cache = "/opt/data/peertube/storage/cache/";
|
||||
};
|
||||
}
|
||||
'';
|
||||
description = "Configuration for peertube.";
|
||||
};
|
||||
|
||||
database = {
|
||||
createLocally = lib.mkOption {
|
||||
type = lib.types.bool;
|
||||
default = false;
|
||||
description = "Configure local PostgreSQL database server for PeerTube.";
|
||||
};
|
||||
|
||||
host = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = if cfg.database.createLocally then "/run/postgresql" else null;
|
||||
example = "192.168.15.47";
|
||||
description = "Database host address or unix socket.";
|
||||
};
|
||||
|
||||
port = lib.mkOption {
|
||||
type = lib.types.int;
|
||||
default = 5432;
|
||||
description = "Database host port.";
|
||||
};
|
||||
|
||||
name = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "peertube";
|
||||
description = "Database name.";
|
||||
};
|
||||
|
||||
user = lib.mkOption {
|
||||
type = lib.types.str;
|
||||
default = "peertube";
|
||||
description = "Database user.";
|
||||
};
|
||||
|
||||
passwordFile = lib.mkOption {
|
||||
type = lib.types.nullOr lib.types.path;
|
||||
default = null;
|
||||
example = "/run/keys/peertube/password-posgressql-db";
|
||||
description = "Password for PostgreSQL database.";
|
||||
};
|
||||
};
|
||||
|
||||
redis = {
|
||||
createLocally = lib.mkOption {
|
||||
type = lib.types.bool;
|
||||
default = false;
|
||||
description = "Configure local Redis server for PeerTube.";
|
||||
};
|
||||
|
||||
host = lib.mkOption {
|
||||
type = lib.types.nullOr lib.types.str;
|
||||
default = if cfg.redis.createLocally && !cfg.redis.enableUnixSocket then "127.0.0.1" else null;
|
||||
description = "Redis host.";
|
||||
};
|
||||
|
||||
port = lib.mkOption {
|
||||
type = lib.types.nullOr lib.types.port;
|
||||
default = if cfg.redis.createLocally && cfg.redis.enableUnixSocket then null else 6379;
|
||||
description = "Redis port.";
|
||||
};
|
||||
|
||||
passwordFile = lib.mkOption {
|
||||
type = lib.types.nullOr lib.types.path;
|
||||
default = null;
|
||||
example = "/run/keys/peertube/password-redis-db";
|
||||
description = "Password for redis database.";
|
||||
};
|
||||
|
||||
enableUnixSocket = lib.mkOption {
|
||||
type = lib.types.bool;
|
||||
default = cfg.redis.createLocally;
|
||||
description = "Use Unix socket.";
|
||||
};
|
||||
};
|
||||
|
||||
smtp = {
|
||||
createLocally = lib.mkOption {
|
||||
type = lib.types.bool;
|
||||
default = false;
|
||||
description = "Configure local Postfix SMTP server for PeerTube.";
|
||||
};
|
||||
|
||||
passwordFile = lib.mkOption {
|
||||
type = lib.types.nullOr lib.types.path;
|
||||
default = null;
|
||||
example = "/run/keys/peertube/password-smtp";
|
||||
description = "Password for smtp server.";
|
||||
};
|
||||
};
|
||||
|
||||
package = lib.mkOption {
|
||||
type = lib.types.package;
|
||||
default = pkgs.peertube;
|
||||
description = "Peertube package to use.";
|
||||
};
|
||||
};
|
||||
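Pulling the PeerTube options above together, a hedged sketch of a small deployment that relies on the locally provisioned PostgreSQL, Redis and Postfix paths (the domain and key path are illustrative):

    { ... }: {
      services.peertube = {
        enable = true;
        localDomain = "peertube.example.com";
        database.createLocally = true;   # database.host then defaults to /run/postgresql
        redis.createLocally = true;      # enableUnixSocket defaults to true in this case
        smtp.createLocally = true;       # brings up the Postfix instance defined below
        serviceEnvironmentFile = "/run/keys/peertube/password-init-root";
        settings.listen.hostname = "0.0.0.0";  # freeform, merged into production.json
      };
    }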
|
||||
config = lib.mkIf cfg.enable {
|
||||
assertions = [
|
||||
{ assertion = cfg.serviceEnvironmentFile == null || !lib.hasPrefix builtins.storeDir cfg.serviceEnvironmentFile;
|
||||
message = ''
|
||||
<option>services.peertube.serviceEnvironmentFile</option> points to
|
||||
a file in the Nix store. You should use a quoted absolute path to
|
||||
prevent this.
|
||||
'';
|
||||
}
|
||||
{ assertion = !(cfg.redis.enableUnixSocket && (cfg.redis.host != null || cfg.redis.port != null));
|
||||
message = ''
|
||||
<option>services.peertube.redis.createLocally</option> and redis network connection (<option>services.peertube.redis.host</option> or <option>services.peertube.redis.port</option>) enabled. Disable either of them.
|
||||
'';
|
||||
}
|
||||
{ assertion = cfg.redis.enableUnixSocket || (cfg.redis.host != null && cfg.redis.port != null);
|
||||
message = ''
|
||||
<option>services.peertube.redis.host</option> and <option>services.peertube.redis.port</option> needs to be set if <option>services.peertube.redis.enableUnixSocket</option> is not enabled.
|
||||
'';
|
||||
}
|
||||
{ assertion = cfg.redis.passwordFile == null || !lib.hasPrefix builtins.storeDir cfg.redis.passwordFile;
|
||||
message = ''
|
||||
<option>services.peertube.redis.passwordFile</option> points to
|
||||
a file in the Nix store. You should use a quoted absolute path to
|
||||
prevent this.
|
||||
'';
|
||||
}
|
||||
{ assertion = cfg.database.passwordFile == null || !lib.hasPrefix builtins.storeDir cfg.database.passwordFile;
|
||||
message = ''
|
||||
<option>services.peertube.database.passwordFile</option> points to
|
||||
a file in the Nix store. You should use a quoted absolute path to
|
||||
prevent this.
|
||||
'';
|
||||
}
|
||||
{ assertion = cfg.smtp.passwordFile == null || !lib.hasPrefix builtins.storeDir cfg.smtp.passwordFile;
|
||||
message = ''
|
||||
<option>services.peertube.smtp.passwordFile</option> points to
|
||||
a file in the Nix store. You should use a quoted absolute path to
|
||||
prevent this.
|
||||
'';
|
||||
}
|
||||
];
|
||||
|
||||
services.peertube.settings = lib.mkMerge [
|
||||
{
|
||||
listen = {
|
||||
port = cfg.listenHttp;
|
||||
};
|
||||
webserver = {
|
||||
https = cfg.enableWebHttps;
|
||||
hostname = "${cfg.localDomain}";
|
||||
port = cfg.listenWeb;
|
||||
};
|
||||
database = {
|
||||
hostname = "${cfg.database.host}";
|
||||
port = cfg.database.port;
|
||||
name = "${cfg.database.name}";
|
||||
username = "${cfg.database.user}";
|
||||
};
|
||||
redis = {
|
||||
hostname = "${toString cfg.redis.host}";
|
||||
port = (if cfg.redis.port == null then "" else cfg.redis.port);
|
||||
};
|
||||
storage = {
|
||||
tmp = lib.mkDefault "/var/lib/peertube/storage/tmp/";
|
||||
avatars = lib.mkDefault "/var/lib/peertube/storage/avatars/";
|
||||
videos = lib.mkDefault "/var/lib/peertube/storage/videos/";
|
||||
streaming_playlists = lib.mkDefault "/var/lib/peertube/storage/streaming-playlists/";
|
||||
redundancy = lib.mkDefault "/var/lib/peertube/storage/redundancy/";
|
||||
logs = lib.mkDefault "/var/lib/peertube/storage/logs/";
|
||||
previews = lib.mkDefault "/var/lib/peertube/storage/previews/";
|
||||
thumbnails = lib.mkDefault "/var/lib/peertube/storage/thumbnails/";
|
||||
torrents = lib.mkDefault "/var/lib/peertube/storage/torrents/";
|
||||
captions = lib.mkDefault "/var/lib/peertube/storage/captions/";
|
||||
cache = lib.mkDefault "/var/lib/peertube/storage/cache/";
|
||||
plugins = lib.mkDefault "/var/lib/peertube/storage/plugins/";
|
||||
client_overrides = lib.mkDefault "/var/lib/peertube/storage/client-overrides/";
|
||||
};
|
||||
}
|
||||
(lib.mkIf cfg.redis.enableUnixSocket { redis = { socket = "/run/redis/redis.sock"; }; })
|
||||
];
|
||||
|
||||
systemd.tmpfiles.rules = [
|
||||
"d '/var/lib/peertube/config' 0700 ${cfg.user} ${cfg.group} - -"
|
||||
"z '/var/lib/peertube/config' 0700 ${cfg.user} ${cfg.group} - -"
|
||||
];
|
||||
|
||||
systemd.services.peertube-init-db = lib.mkIf cfg.database.createLocally {
|
||||
description = "Initialization database for PeerTube daemon";
|
||||
after = [ "network.target" "postgresql.service" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
|
||||
script = let
|
||||
psqlSetupCommands = pkgs.writeText "peertube-init.sql" ''
|
||||
SELECT 'CREATE USER "${cfg.database.user}"' WHERE NOT EXISTS (SELECT FROM pg_roles WHERE rolname = '${cfg.database.user}')\gexec
|
||||
SELECT 'CREATE DATABASE "${cfg.database.name}" OWNER "${cfg.database.user}" TEMPLATE template0 ENCODING UTF8' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = '${cfg.database.name}')\gexec
|
||||
\c '${cfg.database.name}'
|
||||
CREATE EXTENSION IF NOT EXISTS pg_trgm;
|
||||
CREATE EXTENSION IF NOT EXISTS unaccent;
|
||||
'';
|
||||
in "${config.services.postgresql.package}/bin/psql -f ${psqlSetupCommands}";
|
||||
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
WorkingDirectory = cfg.package;
|
||||
# User and group
|
||||
User = "postgres";
|
||||
Group = "postgres";
|
||||
# Sandboxing
|
||||
RestrictAddressFamilies = [ "AF_UNIX" ];
|
||||
MemoryDenyWriteExecute = true;
|
||||
# System Call Filtering
|
||||
SystemCallFilter = "~" + lib.concatStringsSep " " (systemCallsList ++ [ "@resources" ]);
|
||||
} // cfgService;
|
||||
};
|
||||
|
||||
systemd.services.peertube = {
|
||||
description = "PeerTube daemon";
|
||||
after = [ "network.target" ]
|
||||
++ lib.optionals cfg.redis.createLocally [ "redis.service" ]
|
||||
++ lib.optionals cfg.database.createLocally [ "postgresql.service" "peertube-init-db.service" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
|
||||
environment = env;
|
||||
|
||||
path = with pkgs; [ bashInteractive ffmpeg nodejs-16_x openssl yarn youtube-dl ];
|
||||
|
||||
script = ''
|
||||
#!/bin/sh
|
||||
umask 077
|
||||
cat > /var/lib/peertube/config/local.yaml <<EOF
|
||||
${lib.optionalString ((!cfg.database.createLocally) && (cfg.database.passwordFile != null)) ''
|
||||
database:
|
||||
password: '$(cat ${cfg.database.passwordFile})'
|
||||
''}
|
||||
${lib.optionalString (cfg.redis.passwordFile != null) ''
|
||||
redis:
|
||||
auth: '$(cat ${cfg.redis.passwordFile})'
|
||||
''}
|
||||
${lib.optionalString (cfg.smtp.passwordFile != null) ''
|
||||
smtp:
|
||||
password: '$(cat ${cfg.smtp.passwordFile})'
|
||||
''}
|
||||
EOF
|
||||
ln -sf ${cfg.package}/config/default.yaml /var/lib/peertube/config/default.yaml
|
||||
ln -sf ${configFile} /var/lib/peertube/config/production.json
|
||||
npm start
|
||||
'';
|
||||
serviceConfig = {
|
||||
Type = "simple";
|
||||
Restart = "always";
|
||||
RestartSec = 20;
|
||||
TimeoutSec = 60;
|
||||
WorkingDirectory = cfg.package;
|
||||
# User and group
|
||||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
# State directory and mode
|
||||
StateDirectory = "peertube";
|
||||
StateDirectoryMode = "0750";
|
||||
# Access write directories
|
||||
ReadWritePaths = cfg.dataDirs;
|
||||
# Environment
|
||||
EnvironmentFile = cfg.serviceEnvironmentFile;
|
||||
# Sandboxing
|
||||
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" "AF_NETLINK" ];
|
||||
MemoryDenyWriteExecute = false;
|
||||
# System Call Filtering
|
||||
SystemCallFilter = [ ("~" + lib.concatStringsSep " " systemCallsList) "pipe" "pipe2" ];
|
||||
} // cfgService;
|
||||
};
|
||||
|
||||
services.postgresql = lib.mkIf cfg.database.createLocally {
|
||||
enable = true;
|
||||
};
|
||||
|
||||
services.redis = lib.mkMerge [
|
||||
(lib.mkIf cfg.redis.createLocally {
|
||||
enable = true;
|
||||
})
|
||||
(lib.mkIf (cfg.redis.createLocally && cfg.redis.enableUnixSocket) {
|
||||
unixSocket = "/run/redis/redis.sock";
|
||||
unixSocketPerm = 770;
|
||||
})
|
||||
];
|
||||
|
||||
services.postfix = lib.mkIf cfg.smtp.createLocally {
|
||||
enable = true;
|
||||
hostname = lib.mkDefault "${cfg.localDomain}";
|
||||
};
|
||||
|
||||
users.users = lib.mkMerge [
|
||||
(lib.mkIf (cfg.user == "peertube") {
|
||||
peertube = {
|
||||
isSystemUser = true;
|
||||
group = cfg.group;
|
||||
home = cfg.package;
|
||||
};
|
||||
})
|
||||
(lib.attrsets.setAttrByPath [ cfg.user "packages" ] [ cfg.package peertubeEnv peertubeCli pkgs.ffmpeg pkgs.nodejs-16_x pkgs.yarn ])
|
||||
(lib.mkIf cfg.redis.enableUnixSocket {${config.services.peertube.user}.extraGroups = [ "redis" ];})
|
||||
];
|
||||
|
||||
users.groups = lib.optionalAttrs (cfg.group == "peertube") {
|
||||
peertube = { };
|
||||
};
|
||||
};
|
||||
}
|
|
@ -889,7 +889,7 @@ in
|
|||
RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
|
||||
RestrictNamespaces = true;
|
||||
LockPersonality = true;
|
||||
MemoryDenyWriteExecute = !(builtins.any (mod: (mod.allowMemoryWriteExecute or false)) cfg.package.modules);
|
||||
MemoryDenyWriteExecute = !((builtins.any (mod: (mod.allowMemoryWriteExecute or false)) cfg.package.modules) || (cfg.package == pkgs.openresty));
|
||||
RestrictRealtime = true;
|
||||
RestrictSUIDSGID = true;
|
||||
RemoveIPC = true;
|
||||
|
|
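The MemoryDenyWriteExecute change above disables that hardening whenever the service runs OpenResty, presumably because LuaJIT needs writable-and-executable mappings for its JIT. Selecting OpenResty is just a package override (a sketch):

    { pkgs, ... }: {
      services.nginx = {
        enable = true;
        package = pkgs.openresty;  # MemoryDenyWriteExecute is now forced off for this package
      };
    }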
|
@ -453,7 +453,7 @@ in
|
|||
cantarell-fonts
|
||||
dejavu_fonts
|
||||
source-code-pro # Default monospace font in 3.32
|
||||
source-sans-pro
|
||||
source-sans
|
||||
];
|
||||
|
||||
# Adapt from https://gitlab.gnome.org/GNOME/gnome-build-meta/blob/gnome-3-38/elements/core/meta-gnome-core-shell.bst
|
||||
|
|
|
@ -38,5 +38,11 @@ in
|
|||
"/share"
|
||||
];
|
||||
|
||||
security.wrappers.lumina-checkpass-wrapped = {
|
||||
source = "${pkgs.lumina.lumina}/bin/lumina-checkpass";
|
||||
owner = "root";
|
||||
group = "root";
|
||||
};
|
||||
|
||||
};
|
||||
}
|
||||
|
|
|
@ -221,13 +221,16 @@ in
|
|||
programs.evince.enable = mkDefault true;
|
||||
programs.evince.package = pkgs.pantheon.evince;
|
||||
programs.file-roller.enable = mkDefault true;
|
||||
programs.file-roller.package = pkgs.pantheon.file-roller;
|
||||
|
||||
# Settings from elementary-default-settings
|
||||
environment.sessionVariables.GTK_CSD = "1";
|
||||
environment.etc."gtk-3.0/settings.ini".source = "${pkgs.pantheon.elementary-default-settings}/etc/gtk-3.0/settings.ini";
|
||||
|
||||
xdg.portal.extraPortals = [
|
||||
pkgs.pantheon.elementary-files
|
||||
xdg.portal.extraPortals = with pkgs; [
|
||||
pantheon.elementary-files
|
||||
pantheon.elementary-settings-daemon
|
||||
xdg-desktop-portal-pantheon
|
||||
];
|
||||
|
||||
# Override GSettings schemas
|
||||
|
|
|
@ -1,15 +1,18 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
|
||||
xcfg = config.services.xserver;
|
||||
cfg = xcfg.desktopManager.plasma5;
|
||||
|
||||
libsForQt5 = pkgs.plasma5Packages;
|
||||
inherit (libsForQt5) kdeGear kdeFrameworks plasma5;
|
||||
inherit (pkgs) writeText;
|
||||
inherit (lib)
|
||||
getBin optionalString
|
||||
mkRemovedOptionModule mkRenamedOptionModule
|
||||
mkDefault mkIf mkMerge mkOption types;
|
||||
|
||||
ini = pkgs.formats.ini { };
|
||||
|
||||
pulseaudio = config.hardware.pulseaudio;
|
||||
pactl = "${getBin pulseaudio.package}/bin/pactl";
|
||||
|
@ -33,23 +36,25 @@ let
|
|||
gtk-button-images=1
|
||||
'';
|
||||
|
||||
gtk3_settings = writeText "settings.ini" ''
|
||||
[Settings]
|
||||
gtk-font-name=Sans Serif Regular 10
|
||||
gtk-theme-name=Breeze
|
||||
gtk-icon-theme-name=breeze
|
||||
gtk-fallback-icon-theme=hicolor
|
||||
gtk-cursor-theme-name=breeze_cursors
|
||||
gtk-toolbar-style=GTK_TOOLBAR_ICONS
|
||||
gtk-menu-images=1
|
||||
gtk-button-images=1
|
||||
'';
|
||||
gtk3_settings = ini.generate "settings.ini" {
|
||||
Settings = {
|
||||
gtk-font-name = "Sans Serif Regular 10";
|
||||
gtk-theme-name = "Breeze";
|
||||
gtk-icon-theme-name = "breeze";
|
||||
gtk-fallback-icon-theme = "hicolor";
|
||||
gtk-cursor-theme-name = "breeze_cursors";
|
||||
gtk-toolbar-style = "GTK_TOOLBAR_ICONS";
|
||||
gtk-menu-images = 1;
|
||||
gtk-button-images = 1;
|
||||
};
|
||||
};
|
||||
|
||||
kcminputrc = writeText "kcminputrc" ''
|
||||
[Mouse]
|
||||
cursorTheme=breeze_cursors
|
||||
cursorSize=0
|
||||
'';
|
||||
kcminputrc = ini.generate "kcminputrc" {
|
||||
Mouse = {
|
||||
cursorTheme = "breeze_cursors";
|
||||
cursorSize = 0;
|
||||
};
|
||||
};
|
||||
|
||||
activationScript = ''
|
||||
${set_XDG_CONFIG_HOME}
|
||||
|
@ -116,20 +121,17 @@ let
|
|||
if ! [ -f "$kdeglobals" ]
|
||||
then
|
||||
kcminputrc="''${XDG_CONFIG_HOME}/kcminputrc"
|
||||
if ! [ -f "$kcminputrc" ]
|
||||
then
|
||||
if ! [ -f "$kcminputrc" ]; then
|
||||
cat ${kcminputrc} >"$kcminputrc"
|
||||
fi
|
||||
|
||||
gtkrc2="$HOME/.gtkrc-2.0"
|
||||
if ! [ -f "$gtkrc2" ]
|
||||
then
|
||||
if ! [ -f "$gtkrc2" ]; then
|
||||
cat ${gtkrc2} >"$gtkrc2"
|
||||
fi
|
||||
|
||||
gtk3_settings="''${XDG_CONFIG_HOME}/gtk-3.0/settings.ini"
|
||||
if ! [ -f "$gtk3_settings" ]
|
||||
then
|
||||
if ! [ -f "$gtk3_settings" ]; then
|
||||
mkdir -p "$(dirname "$gtk3_settings")"
|
||||
cat ${gtk3_settings} >"$gtk3_settings"
|
||||
fi
|
||||
|
@ -140,9 +142,7 @@ let
|
|||
in
|
||||
|
||||
{
|
||||
options = {
|
||||
|
||||
services.xserver.desktopManager.plasma5 = {
|
||||
options.services.xserver.desktopManager.plasma5 = {
|
||||
enable = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
|
@ -174,8 +174,12 @@ in
|
|||
default = false;
|
||||
description = "Enable HiDPI scaling in Qt.";
|
||||
};
|
||||
};
|
||||
|
||||
runUsingSystemd = mkOption {
|
||||
description = "Use systemd to manage the Plasma session";
|
||||
type = types.bool;
|
||||
default = false;
|
||||
};
|
||||
};
|
||||
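The new runUsingSystemd option is what the plasma-run-with-systemd unit further below writes into startkderc via kwriteconfig5 --key systemdBoot; opting in is a one-liner (a sketch, using the option path declared above):

    { ... }: {
      services.xserver.desktopManager.plasma5 = {
        enable = true;
        runUsingSystemd = true;  # lets systemd manage the Plasma session
      };
    }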
|
||||
imports = [
|
||||
|
@ -187,32 +191,37 @@ in
|
|||
(mkIf cfg.enable {
|
||||
|
||||
# Seed our configuration into nixos-generate-config
|
||||
system.nixos-generate-config.desktopConfiguration = [''
|
||||
system.nixos-generate-config.desktopConfiguration = [
|
||||
''
|
||||
# Enable the Plasma 5 Desktop Environment.
|
||||
services.xserver.displayManager.sddm.enable = true;
|
||||
services.xserver.desktopManager.plasma5.enable = true;
|
||||
''];
|
||||
''
|
||||
];
|
||||
|
||||
services.xserver.displayManager.sessionPackages = [ pkgs.libsForQt5.plasma5.plasma-workspace ];
|
||||
|
||||
security.wrappers = {
|
||||
kcheckpass =
|
||||
{ setuid = true;
|
||||
{
|
||||
setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${lib.getBin libsForQt5.kscreenlocker}/libexec/kcheckpass";
|
||||
source = "${getBin libsForQt5.kscreenlocker}/libexec/kcheckpass";
|
||||
};
|
||||
start_kdeinit =
|
||||
{ setuid = true;
|
||||
{
|
||||
setuid = true;
|
||||
owner = "root";
|
||||
group = "root";
|
||||
source = "${lib.getBin libsForQt5.kinit}/libexec/kf5/start_kdeinit";
|
||||
source = "${getBin libsForQt5.kinit}/libexec/kf5/start_kdeinit";
|
||||
};
|
||||
kwin_wayland =
|
||||
{ owner = "root";
|
||||
{
|
||||
owner = "root";
|
||||
group = "root";
|
||||
capabilities = "cap_sys_nice+ep";
|
||||
source = "${lib.getBin plasma5.kwin}/bin/kwin_wayland";
|
||||
source = "${getBin plasma5.kwin}/bin/kwin_wayland";
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -314,7 +323,8 @@ in
|
|||
breeze-icons
|
||||
pkgs.hicolor-icon-theme
|
||||
|
||||
kde-gtk-config breeze-gtk
|
||||
kde-gtk-config
|
||||
breeze-gtk
|
||||
|
||||
qtvirtualkeyboard
|
||||
|
||||
|
@ -336,6 +346,7 @@ in
|
|||
++ lib.optional config.services.pipewire.pulse.enable plasma-pa
|
||||
++ lib.optional config.powerManagement.enable powerdevil
|
||||
++ lib.optional config.services.colord.enable pkgs.colord-kde
|
||||
++ lib.optional config.services.hardware.bolt.enable pkgs.plasma5Packages.plasma-thunderbolt
|
||||
++ lib.optionals config.services.samba.enable [ kdenetwork-filesharing pkgs.samba ]
|
||||
++ lib.optional config.services.xserver.wacom.enable pkgs.wacomtablet;
|
||||
|
||||
|
@ -385,6 +396,27 @@ in
|
|||
security.pam.services.lightdm.enableKwallet = true;
|
||||
security.pam.services.sddm.enableKwallet = true;
|
||||
|
||||
systemd.user.services = {
|
||||
plasma-early-setup = mkIf cfg.runUsingSystemd {
|
||||
description = "Early Plasma setup";
|
||||
wantedBy = [ "graphical-session-pre.target" ];
|
||||
serviceConfig.Type = "oneshot";
|
||||
script = activationScript;
|
||||
};
|
||||
|
||||
plasma-run-with-systemd = {
|
||||
description = "Run KDE Plasma via systemd";
|
||||
wantedBy = [ "basic.target" ];
|
||||
serviceConfig.Type = "oneshot";
|
||||
script = ''
|
||||
${set_XDG_CONFIG_HOME}
|
||||
|
||||
${kdeFrameworks.kconfig}/bin/kwriteconfig5 \
|
||||
--file startkderc --group General --key systemdBoot ${lib.boolToString cfg.runUsingSystemd}
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
xdg.portal.enable = true;
|
||||
xdg.portal.extraPortals = [ plasma5.xdg-desktop-portal-kde ];
|
||||
|
||||
|
|
|
@ -122,10 +122,10 @@ let
|
|||
done
|
||||
|
||||
if test -d ${pkg}/share/xsessions; then
|
||||
${xorg.lndir}/bin/lndir ${pkg}/share/xsessions $out/share/xsessions
|
||||
${pkgs.buildPackages.xorg.lndir}/bin/lndir ${pkg}/share/xsessions $out/share/xsessions
|
||||
fi
|
||||
if test -d ${pkg}/share/wayland-sessions; then
|
||||
${xorg.lndir}/bin/lndir ${pkg}/share/wayland-sessions $out/share/wayland-sessions
|
||||
${pkgs.buildPackages.xorg.lndir}/bin/lndir ${pkg}/share/wayland-sessions $out/share/wayland-sessions
|
||||
fi
|
||||
'') cfg.displayManager.sessionPackages}
|
||||
'';
|
||||
|
|
|
@ -4,7 +4,7 @@ with lib;
|
|||
|
||||
let
|
||||
cfg = config.systemd;
|
||||
lndir = "${pkgs.xorg.lndir}/bin/lndir";
|
||||
lndir = "${pkgs.buildPackages.xorg.lndir}/bin/lndir";
|
||||
in rec {
|
||||
|
||||
shellEscape = s: (replaceChars [ "\\" ] [ "\\\\" ] s);
|
||||
|
|
|
@ -26,6 +26,8 @@ let
|
|||
"nss-user-lookup.target"
|
||||
"time-sync.target"
|
||||
"cryptsetup.target"
|
||||
"cryptsetup-pre.target"
|
||||
"remote-cryptsetup.target"
|
||||
"sigpwr.target"
|
||||
"timers.target"
|
||||
"paths.target"
|
||||
|
|
|
@@ -157,6 +157,7 @@ in
gobgpd = handleTest ./gobgpd.nix {};
gocd-agent = handleTest ./gocd-agent.nix {};
gocd-server = handleTest ./gocd-server.nix {};
google-cloud-sdk = handleTest ./google-cloud-sdk.nix {};
google-oslogin = handleTest ./google-oslogin {};
gotify-server = handleTest ./gotify-server.nix {};
grafana = handleTest ./grafana.nix {};

@@ -165,6 +166,7 @@ in
grocy = handleTest ./grocy.nix {};
grub = handleTest ./grub.nix {};
gvisor = handleTest ./gvisor.nix {};
hadoop.all = handleTestOn [ "x86_64-linux" ] ./hadoop/hadoop.nix {};
hadoop.hdfs = handleTestOn [ "x86_64-linux" ] ./hadoop/hdfs.nix {};
hadoop.yarn = handleTestOn [ "x86_64-linux" ] ./hadoop/yarn.nix {};
handbrake = handleTestOn ["x86_64-linux"] ./handbrake.nix {};

@@ -206,6 +208,7 @@ in
jackett = handleTest ./jackett.nix {};
jellyfin = handleTest ./jellyfin.nix {};
jenkins = handleTest ./jenkins.nix {};
jibri = handleTest ./jibri.nix {};
jirafeau = handleTest ./jirafeau.nix {};
jitsi-meet = handleTest ./jitsi-meet.nix {};
k3s = handleTest ./k3s.nix {};

@@ -323,6 +326,7 @@ in
ombi = handleTest ./ombi.nix {};
openarena = handleTest ./openarena.nix {};
openldap = handleTest ./openldap.nix {};
openresty-lua = handleTest ./openresty-lua.nix {};
opensmtpd = handleTest ./opensmtpd.nix {};
opensmtpd-rspamd = handleTest ./opensmtpd-rspamd.nix {};
openssh = handleTest ./openssh.nix {};

@@ -343,6 +347,7 @@ in
parsedmarc = handleTest ./parsedmarc {};
pdns-recursor = handleTest ./pdns-recursor.nix {};
peerflix = handleTest ./peerflix.nix {};
peertube = handleTestOn ["x86_64-linux"] ./web-apps/peertube.nix {};
pgjwt = handleTest ./pgjwt.nix {};
pgmanage = handleTest ./pgmanage.nix {};
php = handleTest ./php {};

@@ -414,6 +419,7 @@ in
solr = handleTest ./solr.nix {};
sonarr = handleTest ./sonarr.nix {};
spacecookie = handleTest ./spacecookie.nix {};
spark = handleTestOn ["x86_64-linux"] ./spark {};
spike = handleTest ./spike.nix {};
sslh = handleTest ./sslh.nix {};
sssd = handleTestOn ["x86_64-linux"] ./sssd.nix {};
33 third_party/nixpkgs/nixos/tests/borgbackup.nix (vendored)
@@ -81,6 +81,24 @@ in {
environment.BORG_RSH = "ssh -oStrictHostKeyChecking=no -i /root/id_ed25519.appendOnly";
};

commandSuccess = {
dumpCommand = pkgs.writeScript "commandSuccess" ''
echo -n test
'';
repo = remoteRepo;
encryption.mode = "none";
startAt = [ ];
environment.BORG_RSH = "ssh -oStrictHostKeyChecking=no -i /root/id_ed25519";
};

commandFail = {
dumpCommand = "${pkgs.coreutils}/bin/false";
repo = remoteRepo;
encryption.mode = "none";
startAt = [ ];
environment.BORG_RSH = "ssh -oStrictHostKeyChecking=no -i /root/id_ed25519";
};

};
};

@@ -171,5 +189,20 @@ in {
client.fail("{} list borg\@server:wrong".format(borg))

# TODO: Make sure that data is not actually deleted

with subtest("commandSuccess"):
server.wait_for_unit("sshd.service")
client.wait_for_unit("network.target")
client.systemctl("start --wait borgbackup-job-commandSuccess")
client.fail("systemctl is-failed borgbackup-job-commandSuccess")
id = client.succeed("borg-job-commandSuccess list | tail -n1 | cut -d' ' -f1").strip()
client.succeed(f"borg-job-commandSuccess extract ::{id} stdin")
assert "test" == client.succeed("cat stdin")

with subtest("commandFail"):
server.wait_for_unit("sshd.service")
client.wait_for_unit("network.target")
client.systemctl("start --wait borgbackup-job-commandFail")
client.succeed("systemctl is-failed borgbackup-job-commandFail")
'';
})
13 third_party/nixpkgs/nixos/tests/google-cloud-sdk.nix (vendored, new file)
@@ -0,0 +1,13 @@
import ./make-test-python.nix ({ pkgs, ... }: {
name = "google-cloud-sdk";
meta = with pkgs.lib.maintainers; { maintainers = [ iammrinal0 ]; };

machine = { pkgs, ... }: {
environment.systemPackages = [ pkgs.google-cloud-sdk ];
};

testScript = ''
import json
assert "${pkgs.google-cloud-sdk.version}" in json.loads(machine.succeed("gcloud version --format json"))["Google Cloud SDK"]
'';
})
70 third_party/nixpkgs/nixos/tests/hadoop/hadoop.nix (vendored, new file)
@@ -0,0 +1,70 @@
import ../make-test-python.nix ({pkgs, ...}: {

nodes = let
package = pkgs.hadoop;
coreSite = {
"fs.defaultFS" = "hdfs://master";
};
in {
master = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite;
hdfs.namenode.enabled = true;
yarn.resourcemanager.enabled = true;
};
virtualisation.memorySize = 1024;
};

worker = {pkgs, options, ...}: {
services.hadoop = {
inherit package coreSite;
hdfs.datanode.enabled = true;
yarn.nodemanager.enabled = true;
yarnSite = options.services.hadoop.yarnSite.default // {
"yarn.resourcemanager.hostname" = "master";
};
};
virtualisation.memorySize = 2048;
};
};

testScript = ''
start_all()

master.wait_for_unit("network.target")
master.wait_for_unit("hdfs-namenode")

master.wait_for_open_port(8020)
master.wait_for_open_port(9870)

worker.wait_for_unit("network.target")
worker.wait_for_unit("hdfs-datanode")
worker.wait_for_open_port(9864)
worker.wait_for_open_port(9866)
worker.wait_for_open_port(9867)

master.succeed("curl -f http://worker:9864")
worker.succeed("curl -f http://master:9870")

worker.succeed("sudo -u hdfs hdfs dfsadmin -safemode wait")

master.wait_for_unit("yarn-resourcemanager")

master.wait_for_open_port(8030)
master.wait_for_open_port(8031)
master.wait_for_open_port(8032)
master.wait_for_open_port(8088)
worker.succeed("curl -f http://master:8088")

worker.wait_for_unit("yarn-nodemanager")
worker.wait_for_open_port(8042)
worker.wait_for_open_port(8040)
master.succeed("curl -f http://worker:8042")

assert "Total Nodes:1" in worker.succeed("yarn node -list")

assert "Estimated value of Pi is" in worker.succeed("HADOOP_USER_NAME=hdfs yarn jar $(readlink $(which yarn) | sed -r 's~bin/yarn~lib/hadoop-*/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar~g') pi 2 10")
assert "SUCCEEDED" in worker.succeed("yarn application -list -appStates FINISHED")
worker.succeed("sudo -u hdfs hdfs dfs -ls / | systemd-cat")
'';
})
@@ -2,7 +2,7 @@ import ../make-test-python.nix ({...}: {
nodes = {
namenode = {pkgs, ...}: {
services.hadoop = {
package = pkgs.hadoop_3_1;
package = pkgs.hadoop;
hdfs.namenode.enabled = true;
coreSite = {
"fs.defaultFS" = "hdfs://namenode:8020";

@@ -20,7 +20,7 @@ import ../make-test-python.nix ({...}: {
};
datanode = {pkgs, ...}: {
services.hadoop = {
package = pkgs.hadoop_3_1;
package = pkgs.hadoop;
hdfs.datanode.enabled = true;
coreSite = {
"fs.defaultFS" = "hdfs://namenode:8020";
@@ -1,7 +1,7 @@
import ../make-test-python.nix ({...}: {
nodes = {
resourcemanager = {pkgs, ...}: {
services.hadoop.package = pkgs.hadoop_3_1;
services.hadoop.package = pkgs.hadoop;
services.hadoop.yarn.resourcemanager.enabled = true;
services.hadoop.yarnSite = {
"yarn.resourcemanager.scheduler.class" = "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler";

@@ -12,7 +12,7 @@ import ../make-test-python.nix ({...}: {
];
};
nodemanager = {pkgs, ...}: {
services.hadoop.package = pkgs.hadoop_3_1;
services.hadoop.package = pkgs.hadoop;
services.hadoop.yarn.nodemanager.enabled = true;
services.hadoop.yarnSite = {
"yarn.resourcemanager.hostname" = "resourcemanager";
@@ -68,7 +68,7 @@ in makeTest {
testScript =
''
def create_named_machine(name):
return create_machine(
machine = create_machine(
{
"qemuFlags": "-cpu max ${
if system == "x86_64-linux" then "-m 1024"

@@ -78,6 +78,8 @@ in makeTest {
"name": name,
}
)
driver.machines.append(machine)
return machine


# Install NixOS
@@ -12,13 +12,14 @@ in {
environment.systemPackages = with pkgs; [ mosquitto ];
services.mosquitto = {
enable = true;
checkPasswords = true;
listeners = [ {
users = {
"${mqttUsername}" = {
acl = [ "topic readwrite #" ];
acl = [ "readwrite #" ];
password = mqttPassword;
};
};
} ];
};
services.home-assistant = {
inherit configDir;
69 third_party/nixpkgs/nixos/tests/jibri.nix (vendored, new file)
@ -0,0 +1,69 @@
|
|||
import ./make-test-python.nix ({ pkgs, ... }: {
|
||||
name = "jibri";
|
||||
meta = with pkgs.lib; {
|
||||
maintainers = teams.jitsi.members;
|
||||
};
|
||||
|
||||
machine = { config, pkgs, ... }: {
|
||||
virtualisation.memorySize = 5120;
|
||||
|
||||
services.jitsi-meet = {
|
||||
enable = true;
|
||||
hostName = "machine";
|
||||
jibri.enable = true;
|
||||
};
|
||||
services.jibri.ignoreCert = true;
|
||||
services.jitsi-videobridge.openFirewall = true;
|
||||
|
||||
networking.firewall.allowedTCPPorts = [ 80 443 ];
|
||||
|
||||
services.nginx.virtualHosts.machine = {
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
};
|
||||
|
||||
security.acme.email = "me@example.org";
|
||||
security.acme.acceptTerms = true;
|
||||
security.acme.server = "https://example.com"; # self-signed only
|
||||
};
|
||||
|
||||
testScript = ''
|
||||
machine.wait_for_unit("jitsi-videobridge2.service")
|
||||
machine.wait_for_unit("jicofo.service")
|
||||
machine.wait_for_unit("nginx.service")
|
||||
machine.wait_for_unit("prosody.service")
|
||||
machine.wait_for_unit("jibri.service")
|
||||
|
||||
machine.wait_until_succeeds(
|
||||
"journalctl -b -u jitsi-videobridge2 -o cat | grep -q 'Performed a successful health check'", timeout=30
|
||||
)
|
||||
machine.wait_until_succeeds(
|
||||
"journalctl -b -u prosody -o cat | grep -q 'Authenticated as focus@auth.machine'", timeout=31
|
||||
)
|
||||
machine.wait_until_succeeds(
|
||||
"journalctl -b -u prosody -o cat | grep -q 'Authenticated as jvb@auth.machine'", timeout=32
|
||||
)
|
||||
machine.wait_until_succeeds(
|
||||
"journalctl -b -u prosody -o cat | grep -q 'Authenticated as jibri@auth.machine'", timeout=33
|
||||
)
|
||||
machine.wait_until_succeeds(
|
||||
"cat /var/log/jitsi/jibri/log.0.txt | grep -q 'Joined MUC: jibribrewery@internal.machine'", timeout=34
|
||||
)
|
||||
|
||||
assert '"busyStatus":"IDLE","health":{"healthStatus":"HEALTHY"' in machine.succeed(
|
||||
"curl -X GET http://machine:2222/jibri/api/v1.0/health"
|
||||
)
|
||||
machine.succeed(
|
||||
"""curl -H "Content-Type: application/json" -X POST http://localhost:2222/jibri/api/v1.0/startService -d '{"sessionId": "RecordTest","callParams":{"callUrlInfo":{"baseUrl": "https://machine","callName": "TestCall"}},"callLoginParams":{"domain": "recorder.machine", "username": "recorder", "password": "'"$(cat /var/lib/jitsi-meet/jibri-recorder-secret)"'" },"sinkType": "file"}'"""
|
||||
)
|
||||
machine.wait_until_succeeds(
|
||||
"cat /var/log/jitsi/jibri/log.0.txt | grep -q 'File recording service transitioning from state Starting up to Running'", timeout=35
|
||||
)
|
||||
machine.succeed(
|
||||
"""sleep 15 && curl -H "Content-Type: application/json" -X POST http://localhost:2222/jibri/api/v1.0/stopService -d '{"sessionId": "RecordTest","callParams":{"callUrlInfo":{"baseUrl": "https://machine","callName": "TestCall"}},"callLoginParams":{"domain": "recorder.machine", "username": "recorder", "password": "'"$(cat /var/lib/jitsi-meet/jibri-recorder-secret)"'" },"sinkType": "file"}'"""
|
||||
)
|
||||
machine.wait_until_succeeds(
|
||||
"cat /var/log/jitsi/jibri/log.0.txt | grep -q 'Recording finalize script finished with exit value 0'", timeout=36
|
||||
)
|
||||
'';
|
||||
})
|
188 third_party/nixpkgs/nixos/tests/mosquitto.nix (vendored)
@ -2,13 +2,59 @@ import ./make-test-python.nix ({ pkgs, lib, ... }:
|
|||
|
||||
let
|
||||
port = 1888;
|
||||
username = "mqtt";
|
||||
tlsPort = 1889;
|
||||
password = "VERY_secret";
|
||||
hashedPassword = "$7$101$/WJc4Mp+I+uYE9sR$o7z9rD1EYXHPwEP5GqQj6A7k4W1yVbePlb8TqNcuOLV9WNCiDgwHOB0JHC1WCtdkssqTBduBNUnUGd6kmZvDSw==";
|
||||
topic = "test/foo";
|
||||
|
||||
snakeOil = pkgs.runCommand "snakeoil-certs" {
|
||||
buildInputs = [ pkgs.gnutls.bin ];
|
||||
caTemplate = pkgs.writeText "snakeoil-ca.template" ''
|
||||
cn = server
|
||||
expiration_days = -1
|
||||
cert_signing_key
|
||||
ca
|
||||
'';
|
||||
certTemplate = pkgs.writeText "snakeoil-cert.template" ''
|
||||
cn = server
|
||||
expiration_days = -1
|
||||
tls_www_server
|
||||
encryption_key
|
||||
signing_key
|
||||
'';
|
||||
userCertTemplate = pkgs.writeText "snakeoil-user-cert.template" ''
|
||||
organization = snakeoil
|
||||
cn = client1
|
||||
expiration_days = -1
|
||||
tls_www_client
|
||||
encryption_key
|
||||
signing_key
|
||||
'';
|
||||
} ''
|
||||
mkdir "$out"
|
||||
|
||||
certtool -p --bits 2048 --outfile "$out/ca.key"
|
||||
certtool -s --template "$caTemplate" --load-privkey "$out/ca.key" \
|
||||
--outfile "$out/ca.crt"
|
||||
certtool -p --bits 2048 --outfile "$out/server.key"
|
||||
certtool -c --template "$certTemplate" \
|
||||
--load-ca-privkey "$out/ca.key" \
|
||||
--load-ca-certificate "$out/ca.crt" \
|
||||
--load-privkey "$out/server.key" \
|
||||
--outfile "$out/server.crt"
|
||||
|
||||
certtool -p --bits 2048 --outfile "$out/client1.key"
|
||||
certtool -c --template "$userCertTemplate" \
|
||||
--load-privkey "$out/client1.key" \
|
||||
--load-ca-privkey "$out/ca.key" \
|
||||
--load-ca-certificate "$out/ca.crt" \
|
||||
--outfile "$out/client1.crt"
|
||||
'';
|
||||
|
||||
in {
|
||||
name = "mosquitto";
|
||||
meta = with pkgs.lib; {
|
||||
maintainers = with maintainers; [ peterhoeg ];
|
||||
maintainers = with maintainers; [ pennae peterhoeg ];
|
||||
};
|
||||
|
||||
nodes = let
|
||||
|
@ -17,77 +63,131 @@ in {
|
|||
};
|
||||
in {
|
||||
server = { pkgs, ... }: {
|
||||
networking.firewall.allowedTCPPorts = [ port ];
|
||||
networking.firewall.allowedTCPPorts = [ port tlsPort ];
|
||||
services.mosquitto = {
|
||||
inherit port;
|
||||
enable = true;
|
||||
host = "0.0.0.0";
|
||||
checkPasswords = true;
|
||||
users.${username} = {
|
||||
inherit password;
|
||||
acl = [
|
||||
"topic readwrite ${topic}"
|
||||
];
|
||||
settings = {
|
||||
sys_interval = 1;
|
||||
};
|
||||
listeners = [
|
||||
{
|
||||
inherit port;
|
||||
users = {
|
||||
password_store = {
|
||||
inherit password;
|
||||
};
|
||||
password_file = {
|
||||
passwordFile = pkgs.writeText "mqtt-password" password;
|
||||
};
|
||||
hashed_store = {
|
||||
inherit hashedPassword;
|
||||
};
|
||||
hashed_file = {
|
||||
hashedPasswordFile = pkgs.writeText "mqtt-hashed-password" hashedPassword;
|
||||
};
|
||||
|
||||
# disable private /tmp for this test
|
||||
systemd.services.mosquitto.serviceConfig.PrivateTmp = lib.mkForce false;
|
||||
reader = {
|
||||
inherit password;
|
||||
acl = [
|
||||
"read ${topic}"
|
||||
"read $SYS/#" # so we always have something to read
|
||||
];
|
||||
};
|
||||
writer = {
|
||||
inherit password;
|
||||
acl = [ "write ${topic}" ];
|
||||
};
|
||||
};
|
||||
}
|
||||
{
|
||||
port = tlsPort;
|
||||
users.client1 = {
|
||||
acl = [ "read $SYS/#" ];
|
||||
};
|
||||
settings = {
|
||||
cafile = "${snakeOil}/ca.crt";
|
||||
certfile = "${snakeOil}/server.crt";
|
||||
keyfile = "${snakeOil}/server.key";
|
||||
require_certificate = true;
|
||||
use_identity_as_username = true;
|
||||
};
|
||||
}
|
||||
];
|
||||
};
|
||||
};
|
||||
|
||||
client1 = client;
|
||||
client2 = client;
|
||||
};
|
||||
|
||||
testScript = let
|
||||
file = "/tmp/msg";
|
||||
in ''
|
||||
def mosquitto_cmd(binary):
|
||||
testScript = ''
|
||||
def mosquitto_cmd(binary, user, topic, port):
|
||||
return (
|
||||
"${pkgs.mosquitto}/bin/mosquitto_{} "
|
||||
"mosquitto_{} "
|
||||
"-V mqttv311 "
|
||||
"-h server "
|
||||
"-p ${toString port} "
|
||||
"-u ${username} "
|
||||
"-p {} "
|
||||
"-u {} "
|
||||
"-P '${password}' "
|
||||
"-t ${topic}"
|
||||
).format(binary)
|
||||
"-t '{}'"
|
||||
).format(binary, port, user, topic)
|
||||
|
||||
|
||||
def publish(args):
|
||||
return "{} {}".format(mosquitto_cmd("pub"), args)
|
||||
def publish(args, user, topic="${topic}", port=${toString port}):
|
||||
return "{} {}".format(mosquitto_cmd("pub", user, topic, port), args)
|
||||
|
||||
|
||||
def subscribe(args):
|
||||
return "({} -C 1 {} | tee ${file} &)".format(mosquitto_cmd("sub"), args)
|
||||
def subscribe(args, user, topic="${topic}", port=${toString port}):
|
||||
return "{} -C 1 {}".format(mosquitto_cmd("sub", user, topic, port), args)
|
||||
|
||||
def parallel(*fns):
|
||||
from threading import Thread
|
||||
threads = [ Thread(target=fn) for fn in fns ]
|
||||
for t in threads: t.start()
|
||||
for t in threads: t.join()
|
||||
|
||||
|
||||
start_all()
|
||||
server.wait_for_unit("mosquitto.service")
|
||||
|
||||
for machine in server, client1, client2:
|
||||
machine.fail("test -f ${file}")
|
||||
def check_passwords():
|
||||
client1.succeed(publish("-m test", "password_store"))
|
||||
client1.succeed(publish("-m test", "password_file"))
|
||||
client1.succeed(publish("-m test", "hashed_store"))
|
||||
client1.succeed(publish("-m test", "hashed_file"))
|
||||
|
||||
# QoS = 0, so only one subscribers should get it
|
||||
server.execute(subscribe("-q 0"))
|
||||
check_passwords()
|
||||
|
||||
# we need to give the subscribers some time to connect
|
||||
client2.execute("sleep 5")
|
||||
client2.succeed(publish("-m FOO -q 0"))
|
||||
def check_acl():
|
||||
client1.succeed(subscribe("", "reader", topic="$SYS/#"))
|
||||
client1.fail(subscribe("-W 5", "writer", topic="$SYS/#"))
|
||||
|
||||
server.wait_until_succeeds("grep -q FOO ${file}")
|
||||
server.execute("rm ${file}")
|
||||
parallel(
|
||||
lambda: client1.succeed(subscribe("-i 3688cdd7-aa07-42a4-be22-cb9352917e40", "reader")),
|
||||
lambda: [
|
||||
server.wait_for_console_text("3688cdd7-aa07-42a4-be22-cb9352917e40"),
|
||||
client2.succeed(publish("-m test", "writer"))
|
||||
])
|
||||
|
||||
# QoS = 1, so both subscribers should get it
|
||||
server.execute(subscribe("-q 1"))
|
||||
client1.execute(subscribe("-q 1"))
|
||||
parallel(
|
||||
lambda: client1.fail(subscribe("-W 5 -i 24ff16a2-ae33-4a51-9098-1b417153c712", "reader")),
|
||||
lambda: [
|
||||
server.wait_for_console_text("24ff16a2-ae33-4a51-9098-1b417153c712"),
|
||||
client2.succeed(publish("-m test", "reader"))
|
||||
])
|
||||
|
||||
# we need to give the subscribers some time to connect
|
||||
client2.execute("sleep 5")
|
||||
client2.succeed(publish("-m BAR -q 1"))
|
||||
check_acl()
|
||||
|
||||
for machine in server, client1:
|
||||
machine.wait_until_succeeds("grep -q BAR ${file}")
|
||||
machine.execute("rm ${file}")
|
||||
def check_tls():
|
||||
client1.succeed(
|
||||
subscribe(
|
||||
"--cafile ${snakeOil}/ca.crt "
|
||||
"--cert ${snakeOil}/client1.crt "
|
||||
"--key ${snakeOil}/client1.key",
|
||||
topic="$SYS/#",
|
||||
port=${toString tlsPort},
|
||||
user="no_such_user"))
|
||||
|
||||
check_tls()
|
||||
'';
|
||||
})
|
||||
|
|
|
@@ -18,4 +18,4 @@ foldl
};
})
{}
[ 20 21 22 ]
[ 21 22 ]
55 third_party/nixpkgs/nixos/tests/openresty-lua.nix (vendored, new file)
@ -0,0 +1,55 @@
|
|||
import ./make-test-python.nix ({ pkgs, lib, ... }:
|
||||
let
|
||||
lualibs = [
|
||||
pkgs.lua.pkgs.markdown
|
||||
];
|
||||
|
||||
getPath = lib: type: "${lib}/share/lua/${pkgs.lua.luaversion}/?.${type}";
|
||||
getLuaPath = lib: getPath lib "lua";
|
||||
luaPath = lib.concatStringsSep ";" (map getLuaPath lualibs);
|
||||
in
|
||||
{
|
||||
name = "openresty-lua";
|
||||
meta = with pkgs.lib.maintainers; {
|
||||
maintainers = [ bbigras ];
|
||||
};
|
||||
|
||||
nodes = {
|
||||
webserver = { pkgs, lib, ... }: {
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
package = pkgs.openresty;
|
||||
|
||||
commonHttpConfig = ''
|
||||
lua_package_path '${luaPath};;';
|
||||
'';
|
||||
|
||||
virtualHosts."default" = {
|
||||
default = true;
|
||||
locations."/" = {
|
||||
extraConfig = ''
|
||||
default_type text/html;
|
||||
access_by_lua '
|
||||
local markdown = require "markdown"
|
||||
markdown("source")
|
||||
';
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
testScript = { nodes, ... }:
|
||||
''
|
||||
url = "http://localhost"
|
||||
|
||||
webserver.wait_for_unit("nginx")
|
||||
webserver.wait_for_open_port(80)
|
||||
|
||||
http_code = webserver.succeed(
|
||||
f"curl -w '%{{http_code}}' --head --fail {url}"
|
||||
)
|
||||
assert http_code.split("\n")[-1] == "200"
|
||||
'';
|
||||
})
|
123 third_party/nixpkgs/nixos/tests/seafile.nix (vendored, new file)
@ -0,0 +1,123 @@
|
|||
import ./make-test-python.nix ({ pkgs, ... }:
|
||||
let
|
||||
client = { config, pkgs, ... }: {
|
||||
virtualisation.memorySize = 256;
|
||||
environment.systemPackages = [ pkgs.seafile-shared pkgs.curl ];
|
||||
};
|
||||
in {
|
||||
name = "seafile";
|
||||
meta = with pkgs.stdenv.lib.maintainers; {
|
||||
maintainers = [ kampfschlaefer schmittlauch ];
|
||||
};
|
||||
|
||||
nodes = {
|
||||
server = { config, pkgs, ... }: {
|
||||
virtualisation.memorySize = 512;
|
||||
services.seafile = {
|
||||
enable = true;
|
||||
ccnetSettings.General.SERVICE_URL = "http://server";
|
||||
adminEmail = "admin@example.com";
|
||||
initialAdminPassword = "seafile_password";
|
||||
};
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
virtualHosts."server" = {
|
||||
locations."/".proxyPass = "http://unix:/run/seahub/gunicorn.sock";
|
||||
locations."/seafhttp" = {
|
||||
proxyPass = "http://127.0.0.1:8082";
|
||||
extraConfig = ''
|
||||
rewrite ^/seafhttp(.*)$ $1 break;
|
||||
client_max_body_size 0;
|
||||
proxy_connect_timeout 36000s;
|
||||
proxy_read_timeout 36000s;
|
||||
proxy_send_timeout 36000s;
|
||||
send_timeout 36000s;
|
||||
proxy_http_version 1.1;
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
networking.firewall = { allowedTCPPorts = [ 80 ]; };
|
||||
};
|
||||
client1 = client pkgs;
|
||||
client2 = client pkgs;
|
||||
};
|
||||
|
||||
testScript = ''
|
||||
start_all()
|
||||
|
||||
with subtest("start seaf-server"):
|
||||
server.wait_for_unit("seaf-server.service")
|
||||
server.wait_for_file("/run/seafile/seafile.sock")
|
||||
|
||||
with subtest("start seahub"):
|
||||
server.wait_for_unit("seahub.service")
|
||||
server.wait_for_unit("nginx.service")
|
||||
server.wait_for_file("/run/seahub/gunicorn.sock")
|
||||
|
||||
with subtest("client1 fetch seahub page"):
|
||||
client1.succeed("curl -L http://server | grep 'Log In' >&2")
|
||||
|
||||
with subtest("client1 connect"):
|
||||
client1.wait_for_unit("default.target")
|
||||
client1.succeed("seaf-cli init -d . >&2")
|
||||
client1.succeed("seaf-cli start >&2")
|
||||
client1.succeed(
|
||||
"seaf-cli list-remote -s http://server -u admin\@example.com -p seafile_password >&2"
|
||||
)
|
||||
|
||||
libid = client1.succeed(
|
||||
'seaf-cli create -s http://server -n test01 -u admin\@example.com -p seafile_password -t "first test library"'
|
||||
).strip()
|
||||
|
||||
client1.succeed(
|
||||
"seaf-cli list-remote -s http://server -u admin\@example.com -p seafile_password |grep test01"
|
||||
)
|
||||
client1.fail(
|
||||
"seaf-cli list-remote -s http://server -u admin\@example.com -p seafile_password |grep test02"
|
||||
)
|
||||
|
||||
client1.succeed(
|
||||
f"seaf-cli download -l {libid} -s http://server -u admin\@example.com -p seafile_password -d . >&2"
|
||||
)
|
||||
|
||||
client1.sleep(3)
|
||||
|
||||
client1.succeed("seaf-cli status |grep synchronized >&2")
|
||||
|
||||
client1.succeed("ls -la >&2")
|
||||
client1.succeed("ls -la test01 >&2")
|
||||
|
||||
client1.execute("echo bla > test01/first_file")
|
||||
|
||||
client1.sleep(2)
|
||||
|
||||
client1.succeed("seaf-cli status |grep synchronized >&2")
|
||||
|
||||
with subtest("client2 sync"):
|
||||
client2.wait_for_unit("default.target")
|
||||
|
||||
client2.succeed("seaf-cli init -d . >&2")
|
||||
client2.succeed("seaf-cli start >&2")
|
||||
|
||||
client2.succeed(
|
||||
"seaf-cli list-remote -s http://server -u admin\@example.com -p seafile_password >&2"
|
||||
)
|
||||
|
||||
libid = client2.succeed(
|
||||
"seaf-cli list-remote -s http://server -u admin\@example.com -p seafile_password |grep test01 |cut -d' ' -f 2"
|
||||
).strip()
|
||||
|
||||
client2.succeed(
|
||||
f"seaf-cli download -l {libid} -s http://server -u admin\@example.com -p seafile_password -d . >&2"
|
||||
)
|
||||
|
||||
client2.sleep(3)
|
||||
|
||||
client2.succeed("seaf-cli status |grep synchronized >&2")
|
||||
|
||||
client2.succeed("ls -la test01 >&2")
|
||||
|
||||
client2.succeed('[ `cat test01/first_file` = "bla" ]')
|
||||
'';
|
||||
})
|
127 third_party/nixpkgs/nixos/tests/web-apps/peertube.nix (vendored, new file)
@ -0,0 +1,127 @@
|
|||
import ../make-test-python.nix ({pkgs, ...}:
|
||||
{
|
||||
name = "peertube";
|
||||
meta.maintainers = with pkgs.lib.maintainers; [ izorkin ];
|
||||
|
||||
nodes = {
|
||||
database = {
|
||||
networking = {
|
||||
interfaces.eth1 = {
|
||||
ipv4.addresses = [
|
||||
{ address = "192.168.2.10"; prefixLength = 24; }
|
||||
];
|
||||
};
|
||||
firewall.allowedTCPPorts = [ 5432 6379 ];
|
||||
};
|
||||
|
||||
services.postgresql = {
|
||||
enable = true;
|
||||
enableTCPIP = true;
|
||||
authentication = ''
|
||||
hostnossl peertube_local peertube_test 192.168.2.11/32 md5
|
||||
'';
|
||||
initialScript = pkgs.writeText "postgresql_init.sql" ''
|
||||
CREATE ROLE peertube_test LOGIN PASSWORD '0gUN0C1mgST6czvjZ8T9';
|
||||
CREATE DATABASE peertube_local TEMPLATE template0 ENCODING UTF8;
|
||||
GRANT ALL PRIVILEGES ON DATABASE peertube_local TO peertube_test;
|
||||
\connect peertube_local
|
||||
CREATE EXTENSION IF NOT EXISTS pg_trgm;
|
||||
CREATE EXTENSION IF NOT EXISTS unaccent;
|
||||
'';
|
||||
};
|
||||
|
||||
services.redis = {
|
||||
enable = true;
|
||||
bind = "0.0.0.0";
|
||||
requirePass = "turrQfaQwnanGbcsdhxy";
|
||||
};
|
||||
};
|
||||
|
||||
server = { pkgs, ... }: {
|
||||
environment = {
|
||||
etc = {
|
||||
"peertube/password-posgressql-db".text = ''
|
||||
0gUN0C1mgST6czvjZ8T9
|
||||
'';
|
||||
"peertube/password-redis-db".text = ''
|
||||
turrQfaQwnanGbcsdhxy
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
networking = {
|
||||
interfaces.eth1 = {
|
||||
ipv4.addresses = [
|
||||
{ address = "192.168.2.11"; prefixLength = 24; }
|
||||
];
|
||||
};
|
||||
extraHosts = ''
|
||||
192.168.2.11 peertube.local
|
||||
'';
|
||||
firewall.allowedTCPPorts = [ 9000 ];
|
||||
};
|
||||
|
||||
services.peertube = {
|
||||
enable = true;
|
||||
localDomain = "peertube.local";
|
||||
enableWebHttps = false;
|
||||
|
||||
database = {
|
||||
host = "192.168.2.10";
|
||||
name = "peertube_local";
|
||||
user = "peertube_test";
|
||||
passwordFile = "/etc/peertube/password-posgressql-db";
|
||||
};
|
||||
|
||||
redis = {
|
||||
host = "192.168.2.10";
|
||||
passwordFile = "/etc/peertube/password-redis-db";
|
||||
};
|
||||
|
||||
settings = {
|
||||
listen = {
|
||||
hostname = "0.0.0.0";
|
||||
};
|
||||
instance = {
|
||||
name = "PeerTube Test Server";
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
client = {
|
||||
environment.systemPackages = [ pkgs.jq ];
|
||||
networking = {
|
||||
interfaces.eth1 = {
|
||||
ipv4.addresses = [
|
||||
{ address = "192.168.2.12"; prefixLength = 24; }
|
||||
];
|
||||
};
|
||||
extraHosts = ''
|
||||
192.168.2.11 peertube.local
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
testScript = ''
|
||||
start_all()
|
||||
|
||||
database.wait_for_unit("postgresql.service")
|
||||
database.wait_for_unit("redis.service")
|
||||
|
||||
database.wait_for_open_port(5432)
|
||||
database.wait_for_open_port(6379)
|
||||
|
||||
server.wait_for_unit("peertube.service")
|
||||
server.wait_for_open_port(9000)
|
||||
|
||||
# Check if PeerTube is running
|
||||
client.succeed("curl --fail http://peertube.local:9000/api/v1/config/about | jq -r '.instance.name' | grep 'PeerTube\ Test\ Server'")
|
||||
|
||||
client.shutdown()
|
||||
server.shutdown()
|
||||
database.shutdown()
|
||||
'';
|
||||
})
|
|
@ -1,8 +1,9 @@
|
|||
{ lib, stdenv
|
||||
{ lib
|
||||
, stdenv
|
||||
, fetchFromGitHub
|
||||
, autoreconfHook
|
||||
, cmake
|
||||
, pkg-config
|
||||
, fltk
|
||||
, jansson
|
||||
, rtmidi
|
||||
, libsamplerate
|
||||
, libsndfile
|
||||
|
@ -10,51 +11,65 @@
|
|||
, alsa-lib
|
||||
, libpulseaudio
|
||||
, libXpm
|
||||
, libXinerama
|
||||
, libXcursor
|
||||
, catch2
|
||||
, nlohmann_json
|
||||
, flac
|
||||
, libogg
|
||||
, libvorbis
|
||||
, libopus
|
||||
}:
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
pname = "giada";
|
||||
version = "0.16.4";
|
||||
version = "unstable-2021-09-24";
|
||||
|
||||
src = fetchFromGitHub {
|
||||
owner = "monocasual";
|
||||
repo = pname;
|
||||
rev = "v${version}";
|
||||
sha256 = "0qyx0bvivlvly0vj5nnnbiks22xh13sqlw4mfgplq2lbbpgisigp";
|
||||
# Using master with https://github.com/monocasual/giada/pull/509 till a new release is done.
|
||||
rev = "f117a8b8eef08d904ef1ab22c45f0e1fad6b8a56";
|
||||
sha256 = "01hb981lrsyk870zs8xph5fm0z7bbffpkxgw04hq487r804mkx9j";
|
||||
fetchSubmodules = true;
|
||||
};
|
||||
|
||||
configureFlags = [
|
||||
"--target=linux"
|
||||
"--enable-system-catch"
|
||||
NIX_CFLAGS_COMPILE = [
|
||||
"-w"
|
||||
"-Wno-error"
|
||||
];
|
||||
|
||||
cmakeFlags = [
|
||||
"-DCMAKE_INSTALL_BINDIR=bin"
|
||||
"-DCMAKE_BUILD_TYPE=Release"
|
||||
];
|
||||
|
||||
nativeBuildInputs = [
|
||||
autoreconfHook
|
||||
cmake
|
||||
pkg-config
|
||||
];
|
||||
|
||||
buildInputs = [
|
||||
rtmidi
|
||||
fltk
|
||||
libsndfile
|
||||
libsamplerate
|
||||
jansson
|
||||
rtmidi
|
||||
libXpm
|
||||
jack2
|
||||
alsa-lib
|
||||
libXpm
|
||||
libpulseaudio
|
||||
libXinerama
|
||||
libXcursor
|
||||
catch2
|
||||
nlohmann_json
|
||||
jack2
|
||||
flac
|
||||
libogg
|
||||
libvorbis
|
||||
libopus
|
||||
];
|
||||
|
||||
postPatch = ''
|
||||
sed -i 's:"deps/json/single_include/nlohmann/json\.hpp":<nlohmann/json.hpp>:' \
|
||||
src/core/{conf,init,midiMapConf,patch}.cpp
|
||||
local fixup_list=(
|
||||
src/core/kernelMidi.cpp
|
||||
src/gui/elems/config/tabMidi.cpp
|
||||
src/utils/ver.cpp
|
||||
)
|
||||
for f in "''${fixup_list[@]}"; do
|
||||
substituteInPlace "$f" \
|
||||
--replace "<RtMidi.h>" "<${rtmidi.src}/RtMidi.h>"
|
||||
done
|
||||
'';
|
||||
|
||||
meta = with lib; {
|
||||
|
@ -63,6 +78,5 @@ stdenv.mkDerivation rec {
|
|||
license = licenses.gpl3;
|
||||
maintainers = with maintainers; [ petabyteboy ];
|
||||
platforms = platforms.all;
|
||||
broken = stdenv.hostPlatform.isAarch64; # produces build failure on aarch64-linux
|
||||
};
|
||||
}
|
||||
|
|
|
@ -13,17 +13,17 @@
|
|||
|
||||
rustPlatform.buildRustPackage rec {
|
||||
pname = "helvum";
|
||||
version = "0.3.0";
|
||||
version = "0.3.1";
|
||||
|
||||
src = fetchFromGitLab {
|
||||
domain = "gitlab.freedesktop.org";
|
||||
owner = "ryuukyu";
|
||||
repo = pname;
|
||||
rev = version;
|
||||
sha256 = "sha256-AlHCK4pWaoNjR0eflxHBsuVaaily/RvCbgJv/ByQZK4=";
|
||||
sha256 = "sha256-f6+6Qicg5J6oWcafG4DF0HovTmF4r6yfw6p/3dJHmB4=";
|
||||
};
|
||||
|
||||
cargoSha256 = "sha256-mAhh12rGvQjs2xtm+OrtVv0fgG6qni/QM/oRYoFR7U8=";
|
||||
cargoSha256 = "sha256-zGa6nAmOOrpiMr865J06Ez3L6lPL0j18/lW8lw1jPyU=";
|
||||
|
||||
nativeBuildInputs = [ clang copyDesktopItems pkg-config ];
|
||||
buildInputs = [ glib gtk4 pipewire ];
|
||||
|
|
|
@@ -4,16 +4,16 @@

rustPlatform.buildRustPackage rec {
pname = "librespot";
version = "0.3.0";
version = "0.3.1";

src = fetchFromGitHub {
owner = "librespot-org";
repo = "librespot";
rev = "v${version}";
sha256 = "0n7h690gplpp47gdj038g6ncgwr7wvwfkg00cbrbvxhv7kzqqa1f";
sha256 = "1fv2sk89rf1vraq823bxddlxj6b4gqhfpc36xr7ibz2405zickfv";
};

cargoSha256 = "0qakvpxvn84ppgs3qlsfan4flqkmjcgs698w25jasx9ymiv8wc3s";
cargoSha256 = "1sal85gsbnrabxi39298w9njdc08csnwl40akd6k9fsc0fmpn1b0";

cargoBuildFlags = with lib; [
"--no-default-features"
@@ -7,13 +7,13 @@

python3Packages.buildPythonApplication rec {
pname = "mpdevil";
version = "1.3.0";
version = "1.4.0";

src = fetchFromGitHub {
owner = "SoongNoonien";
repo = pname;
rev = "v${version}";
sha256 = "1wa5wkkv8kvzlxrhqmmhjmrzcm5v2dij516dk4vlpv9sazc6gzkm";
sha256 = "1zx129zl6bjb0j3f81yx2641nsj6ck04q5f0v0g8f08xgdwsyv3b";
};

nativeBuildInputs = [
@@ -31,5 +31,6 @@ stdenv.mkDerivation {
license = licenses.mit;
platforms = platforms.all;
maintainers = with maintainers; [ netcrns ];
mainProgram = "orca";
};
}
58 third_party/nixpkgs/pkgs/applications/audio/pocket-casts/default.nix (vendored, new file)
@ -0,0 +1,58 @@
|
|||
{ lib, stdenv, fetchurl, dpkg, autoPatchelfHook, makeWrapper, electron_12,
|
||||
alsa-lib, gtk3, libXScrnSaver, libXtst, mesa, nss }:
|
||||
|
||||
let
|
||||
# Using Electron 12 to solve errors regarding threading
|
||||
electron = electron_12;
|
||||
|
||||
in stdenv.mkDerivation rec {
|
||||
pname = "pocket-casts";
|
||||
version = "0.5.0";
|
||||
|
||||
src = fetchurl {
|
||||
url = "https://github.com/felicianotech/pocket-casts-desktop-app/releases/download/v${version}/${pname}_${version}_amd64.deb";
|
||||
sha256 = "sha256-frBtIxwRO/6k6j0itqN10t+9AyNadqXm8vC1YP960ts=";
|
||||
};
|
||||
|
||||
nativeBuildInputs = [
|
||||
dpkg
|
||||
autoPatchelfHook
|
||||
makeWrapper
|
||||
];
|
||||
|
||||
buildInputs = [ alsa-lib gtk3 libXScrnSaver libXtst mesa nss ];
|
||||
|
||||
dontBuild = true;
|
||||
dontConfigure = true;
|
||||
|
||||
unpackPhase = ''
|
||||
dpkg-deb -x ${src} ./
|
||||
'';
|
||||
|
||||
installPhase = ''
|
||||
runHook preInstall
|
||||
|
||||
mv usr $out
|
||||
mv opt $out
|
||||
mv "$out/opt/Pocket Casts" $out/opt/pocket-casts
|
||||
mv $out/share/icons/hicolor/0x0 $out/share/icons/hicolor/256x256
|
||||
|
||||
runHook postInstall
|
||||
'';
|
||||
|
||||
postFixup = ''
|
||||
substituteInPlace $out/share/applications/pocket-casts.desktop --replace '"/opt/Pocket Casts/pocket-casts"' $out/bin/pocket-casts
|
||||
substituteInPlace $out/share/applications/pocket-casts.desktop --replace '/usr/share/icons/hicolor/0x0/apps/pocket-casts.png' "pocket-casts"
|
||||
makeWrapper ${electron}/bin/electron \
|
||||
$out/bin/pocket-casts \
|
||||
--add-flags $out/opt/pocket-casts/resources/app.asar
|
||||
'';
|
||||
|
||||
meta = with lib; {
|
||||
description = "Pocket Casts webapp, packaged for the Linux Desktop";
|
||||
homepage = "https://github.com/felicianotech/pocket-casts-desktop-app";
|
||||
license = licenses.mit;
|
||||
maintainers = with maintainers; [ wolfangaukang ];
|
||||
platforms = [ "x86_64-linux" ];
|
||||
};
|
||||
}
|
|
@@ -8,13 +8,13 @@

stdenv.mkDerivation rec {
pname = "pt2-clone";
version = "1.34";
version = "1.36";

src = fetchFromGitHub {
owner = "8bitbubsy";
repo = "pt2-clone";
rev = "v${version}";
sha256 = "sha256-JT3I06qm3oljsySIgK5xP2RC3KAb5QBrNVdip0ds4KE=";
sha256 = "sha256-QyhBoWCkj7iYXAFsyVH6+XH2P/MQEXZQfAcUDu4Rtco=";
};

nativeBuildInputs = [ cmake ];
@@ -10,14 +10,14 @@ let
# If an update breaks things, one of those might have valuable info:
# https://aur.archlinux.org/packages/spotify/
# https://community.spotify.com/t5/Desktop-Linux
version = "1.1.68.628.geb44bd66";
version = "1.1.68.632.g2b11de83";
# To get the latest stable revision:
# curl -H 'X-Ubuntu-Series: 16' 'https://api.snapcraft.io/api/v1/snaps/details/spotify?channel=stable' | jq '.download_url,.version,.last_updated'
# To get general information:
# curl -H 'Snap-Device-Series: 16' 'https://api.snapcraft.io/v2/snaps/info/spotify' | jq '.'
# More examples of api usage:
# https://github.com/canonical-websites/snapcraft.io/blob/master/webapp/publisher/snaps/views.py
rev = "52";
rev = "53";

deps = [
alsa-lib

@@ -80,7 +80,7 @@ stdenv.mkDerivation {
# https://community.spotify.com/t5/Desktop-Linux/Redistribute-Spotify-on-Linux-Distributions/td-p/1695334
src = fetchurl {
url = "https://api.snapcraft.io/api/v1/snaps/download/pOBIoZ2LrCB3rDohMxoYGnbN14EHOgD7_${rev}.snap";
sha512 = "be6f1cb650924eb9e244497374d1dfe6136d28056dbecc7000a03341a4bb4c6ab2c83ec6c707bd6f57afde95262230eafbde08e9c7a7dfcacdf660eb10499f3a";
sha512 = "ed991691c99fe97ed9ff5d0f5cc9a8883c176fa3b3054293c37d545abbb895c6260afdf1c8c0828d62c36ea7ab384e166b6151effb4614c93e4fa712319a08a3";
};

nativeBuildInputs = [ makeWrapper wrapGAppsHook squashfsTools ];
@@ -36,13 +36,13 @@

mkDerivation rec {
pname = "strawberry";
version = "0.9.3";
version = "1.0.0";

src = fetchFromGitHub {
owner = "jonaski";
repo = pname;
rev = version;
sha256 = "sha256-OOdHsii6O4okVHDhrqCNJ7WVB0VKPs8q0AhEY+IvflE=";
sha256 = "sha256-m1BB5OIeCIQuJpxEO1xmb/Z8tzeHF31jYg67OpVWWRM=";
};

buildInputs = [
@@ -2,11 +2,11 @@

mkDerivation rec {
pname = "synthv1";
version = "0.9.15";
version = "0.9.23";

src = fetchurl {
url = "mirror://sourceforge/synthv1/${pname}-${version}.tar.gz";
sha256 = "047y2l7ipzv00ly54f074v6p043xjml7vz0svc7z81bhx74vs0ix";
sha256 = "sha256-0V72T51icT/t9fJf4mwcMYZLjzTPnmiCbU+BdwnCmw4=";
};

buildInputs = [ qtbase qttools libjack2 alsa-lib liblo lv2 ];
@ -1,60 +1,56 @@
|
|||
{ stdenv
|
||||
, dpkg
|
||||
, lib
|
||||
, autoPatchelfHook
|
||||
{ lib
|
||||
, stdenv
|
||||
, fetchurl
|
||||
, gtk3
|
||||
, glib
|
||||
, desktop-file-utils
|
||||
, autoPatchelfHook
|
||||
, dpkg
|
||||
, alsa-lib
|
||||
, libjack2
|
||||
, harfbuzz
|
||||
, fribidi
|
||||
, pango
|
||||
, freetype
|
||||
, libglvnd
|
||||
, curl
|
||||
, libXcursor
|
||||
, libXinerama
|
||||
, libXrandr
|
||||
, libXrender
|
||||
, libjack2
|
||||
}:
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
pname = "tonelib-jam";
|
||||
version = "4.6.6";
|
||||
version = "4.7.0";
|
||||
|
||||
src = fetchurl {
|
||||
url = "https://www.tonelib.net/download/0509/ToneLib-Jam-amd64.deb";
|
||||
sha256 = "sha256-cizIQgO35CQSLme/LKQqP+WzB/jCTk+fS5Z+EtF7wnQ=";
|
||||
url = "https://www.tonelib.net/download/0930/ToneLib-Jam-amd64.deb";
|
||||
sha256 = "sha256-xyBDp3DQVC+nK2WGnvrfUfD+9GvwtbldXgExTMmCGw0=";
|
||||
};
|
||||
|
||||
buildInputs = [
|
||||
dpkg
|
||||
gtk3
|
||||
glib
|
||||
desktop-file-utils
|
||||
alsa-lib
|
||||
libjack2
|
||||
harfbuzz
|
||||
fribidi
|
||||
pango
|
||||
freetype
|
||||
];
|
||||
|
||||
nativeBuildInputs = [
|
||||
autoPatchelfHook
|
||||
dpkg
|
||||
];
|
||||
|
||||
unpackPhase = ''
|
||||
mkdir -p $TMP/ $out/
|
||||
dpkg -x $src $TMP
|
||||
'';
|
||||
buildInputs = [
|
||||
stdenv.cc.cc.lib
|
||||
alsa-lib
|
||||
freetype
|
||||
libglvnd
|
||||
] ++ runtimeDependencies;
|
||||
|
||||
runtimeDependencies = map lib.getLib [
|
||||
curl
|
||||
libXcursor
|
||||
libXinerama
|
||||
libXrandr
|
||||
libXrender
|
||||
libjack2
|
||||
];
|
||||
|
||||
unpackCmd = "dpkg -x $curSrc source";
|
||||
|
||||
installPhase = ''
|
||||
cp -R $TMP/usr/* $out/
|
||||
mv $out/bin/ToneLib-Jam $out/bin/tonelib-jam
|
||||
mv usr $out
|
||||
substituteInPlace $out/share/applications/ToneLib-Jam.desktop --replace /usr/ $out/
|
||||
'';
|
||||
|
||||
runtimeDependencies = [
|
||||
(lib.getLib curl)
|
||||
];
|
||||
|
||||
meta = with lib; {
|
||||
description = "ToneLib Jam – the learning and practice software for guitar players";
|
||||
homepage = "https://tonelib.net/";
|
||||
|
|
|
@@ -47,6 +47,8 @@ stdenv.mkDerivation {
meta = with lib; {
homepage = "http://www.areca-backup.org/";
description = "An Open Source personal backup solution";
# Builds fine but fails to launch.
broken = true;
license = licenses.gpl2;
maintainers = with maintainers; [ pSub ];
platforms = with platforms; linux;
23 third_party/nixpkgs/pkgs/applications/backup/urbackup-client/default.nix (vendored, new file)
@@ -0,0 +1,23 @@
{ stdenv, lib, fetchzip, wxGTK30, zlib, zstd }:

stdenv.mkDerivation rec {
pname = "urbackup-client";
version = "2.4.11";

src = fetchzip {
url = "https://hndl.urbackup.org/Client/${version}/urbackup-client-${version}.tar.gz";
sha256 = "0cciy9v1pxj9qaklpbhp2d5rdbkmfm74vhpqx6b4phww0f10wvzh";
};

configureFlags = [ "--enable-embedded-cryptopp" ];
buildInputs = [ wxGTK30 zlib zstd ];

meta = with lib; {
description = "An easy to setup Open Source client/server backup system";
longDescription = "An easy to setup Open Source client/server backup system, that through a combination of image and file backups accomplishes both data safety and a fast restoration time";
homepage = "https://www.urbackup.org/index.html";
license = licenses.agpl3;
platforms = platforms.linux;
maintainers = [ maintainers.mgttlinger ];
};
}
@ -1,19 +1,7 @@
|
|||
{ lib, stdenv, fetchFromGitHub, fetchurl, linkFarmFromDrvs, makeWrapper,
|
||||
dotnetPackages, dotnetCorePackages, altcoinSupport ? false
|
||||
}:
|
||||
{ lib, buildDotnetModule, fetchFromGitHub, dotnetCorePackages
|
||||
, altcoinSupport ? false }:
|
||||
|
||||
let
|
||||
deps = import ./deps.nix {
|
||||
fetchNuGet = { name, version, sha256 }: fetchurl {
|
||||
name = "nuget-${name}-${version}.nupkg";
|
||||
url = "https://www.nuget.org/api/v2/package/${name}/${version}";
|
||||
inherit sha256;
|
||||
};
|
||||
};
|
||||
dotnetSdk = dotnetCorePackages.sdk_3_1;
|
||||
in
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
buildDotnetModule rec {
|
||||
pname = "btcpayserver";
|
||||
version = "1.2.4";
|
||||
|
||||
|
@ -24,35 +12,29 @@ stdenv.mkDerivation rec {
|
|||
sha256 = "sha256-vjNJ08twsJ036TTFF6srOGshDpP7ZwWCGN0XjrtFT/g=";
|
||||
};
|
||||
|
||||
nativeBuildInputs = [ dotnetSdk dotnetPackages.Nuget makeWrapper ];
|
||||
projectFile = "BTCPayServer/BTCPayServer.csproj";
|
||||
nugetDeps = ./deps.nix;
|
||||
|
||||
buildPhase = ''
|
||||
export HOME=$TMP/home
|
||||
export DOTNET_CLI_TELEMETRY_OPTOUT=1
|
||||
export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=1
|
||||
dotnet-sdk = dotnetCorePackages.sdk_3_1;
|
||||
dotnet-runtime = dotnetCorePackages.aspnetcore_3_1;
|
||||
|
||||
nuget sources Add -Name tmpsrc -Source $TMP/nuget
|
||||
nuget init ${linkFarmFromDrvs "deps" deps} $TMP/nuget
|
||||
|
||||
dotnet restore --source $TMP/nuget ${lib.optionalString altcoinSupport ''/p:Configuration="Altcoins-Release"''} BTCPayServer/BTCPayServer.csproj
|
||||
dotnet publish --no-restore --output $out/share/$pname ${lib.optionalString altcoinSupport "-c Altcoins-Release"} BTCPayServer/BTCPayServer.csproj
|
||||
'';
|
||||
dotnetFlags = lib.optionals altcoinSupport [ "/p:Configuration=Altcoins-Release" ];
|
||||
|
||||
# btcpayserver requires the publish directory as its working dir
|
||||
# https://github.com/btcpayserver/btcpayserver/issues/1894
|
||||
installPhase = ''
|
||||
makeWrapper $out/share/$pname/BTCPayServer $out/bin/$pname \
|
||||
--set DOTNET_ROOT "${dotnetSdk}" \
|
||||
--run "cd $out/share/$pname"
|
||||
preInstall = ''
|
||||
makeWrapperArgs+=(--run "cd $out/lib/btcpayserver")
|
||||
'';
|
||||
|
||||
dontStrip = true;
|
||||
postInstall = ''
|
||||
mv $out/bin/{BTCPayServer,btcpayserver}
|
||||
'';
|
||||
|
||||
meta = with lib; {
|
||||
description = "Self-hosted, open-source cryptocurrency payment processor";
|
||||
homepage = "https://btcpayserver.org";
|
||||
maintainers = with maintainers; [ kcalvinalvin earvstedt ];
|
||||
license = lib.licenses.mit;
|
||||
platforms = lib.platforms.linux;
|
||||
license = licenses.mit;
|
||||
platforms = platforms.linux;
|
||||
};
|
||||
}
|
||||
|
|
|
@ -6,14 +6,14 @@
|
|||
|
||||
let chia = python3Packages.buildPythonApplication rec {
|
||||
pname = "chia";
|
||||
version = "1.2.9";
|
||||
version = "1.2.10";
|
||||
|
||||
src = fetchFromGitHub {
|
||||
owner = "Chia-Network";
|
||||
repo = "chia-blockchain";
|
||||
rev = version;
|
||||
fetchSubmodules = true;
|
||||
sha256 = "sha256-ZDWkVCga/NsKOnj5HP0lnmnX6vqw+I0b3a1Wr1t1VN0=";
|
||||
sha256 = "sha256-TzSBGjgaE0IWaqJcCIoO/u+gDh17NtAqhE8ldbbjNIE=";
|
||||
};
|
||||
|
||||
postPatch = ''
|
||||
|
|
|
@@ -7,12 +7,12 @@

stdenv.mkDerivation rec {
pname = "eclair";
version = "0.6.1";
revision = "d3ae323";
version = "0.6.2";
revision = "6817d6f";

src = fetchzip {
url = "https://github.com/ACINQ/eclair/releases/download/v${version}/eclair-node-${version}-${revision}-bin.zip";
sha256 = "0hmdssj6pxhvadrgr1svb2lh7hfbd2axr5wsl7glizv1a21g0l2c";
sha256 = "038r9mblm2r8mkxnv65k29r7xj22dff5gmvzv9xiy5zf9i45mmk8";
};

propagatedBuildInputs = [ jq openjdk11 ];

@@ -33,6 +33,6 @@ stdenv.mkDerivation rec {
homepage = "https://github.com/ACINQ/eclair";
license = licenses.asl20;
maintainers = with maintainers; [ prusnak ];
platforms = [ "x86_64-linux" "x86_64-darwin" ];
platforms = platforms.unix;
};
}
|
@ -17,7 +17,6 @@
|
|||
, qtbase ? null
|
||||
, qttools ? null
|
||||
, python3
|
||||
, openssl
|
||||
, withGui
|
||||
, withWallet ? true
|
||||
}:
|
||||
|
@ -25,11 +24,11 @@
|
|||
with lib;
|
||||
stdenv.mkDerivation rec {
|
||||
pname = if withGui then "elements" else "elementsd";
|
||||
version = "0.18.1.12";
|
||||
version = "0.21.0";
|
||||
|
||||
src = fetchurl {
|
||||
url = "https://github.com/ElementsProject/elements/archive/elements-${version}.tar.gz";
|
||||
sha256 = "84a51013596b09c62913649ac90373622185f779446ee7e65b4b258a2876609f";
|
||||
sha256 = "0d9mcb0nw9qqhv0jhpddi9i4iry3w7b5jifsl5kpcw82qrkvgfgj";
|
||||
};
|
||||
|
||||
nativeBuildInputs =
|
||||
|
@ -38,7 +37,7 @@ stdenv.mkDerivation rec {
|
|||
++ optionals stdenv.isDarwin [ hexdump ]
|
||||
++ optionals withGui [ wrapQtAppsHook ];
|
||||
|
||||
buildInputs = [ boost libevent miniupnpc zeromq zlib openssl ]
|
||||
buildInputs = [ boost libevent miniupnpc zeromq zlib ]
|
||||
++ optionals withWallet [ db48 sqlite ]
|
||||
++ optionals withGui [ qrencode qtbase qttools ];
|
||||
|
||||
|
@ -79,8 +78,5 @@ stdenv.mkDerivation rec {
|
|||
maintainers = with maintainers; [ prusnak ];
|
||||
license = licenses.mit;
|
||||
platforms = platforms.unix;
|
||||
# Qt GUI is currently broken in upstream
|
||||
# No rule to make target 'qt/res/rendered_icons/about.png', needed by 'qt/qrc_bitcoin.cpp'.
|
||||
broken = withGui;
|
||||
};
|
||||
}
|
||||
|
|
|
@ -2,12 +2,12 @@
|
|||
|
||||
let
|
||||
pname = "ledger-live-desktop";
|
||||
version = "2.33.1";
|
||||
version = "2.34.3";
|
||||
name = "${pname}-${version}";
|
||||
|
||||
src = fetchurl {
|
||||
url = "https://github.com/LedgerHQ/${pname}/releases/download/v${version}/${pname}-${version}-linux-x86_64.AppImage";
|
||||
sha256 = "1k1h37fbpsib9h8867m2dsfacdjs78gdm61gvrin5gpw1zj10syz";
|
||||
sha256 = "07r7gfn44c4bdcq9rgs6v4frrl2g004lh9lcsrj6rbqy6949r9j2";
|
||||
};
|
||||
|
||||
appimageContents = appimageTools.extractType2 {
|
||||
|
|
|
@ -1,19 +1,6 @@
|
|||
{ lib, stdenv, fetchFromGitHub, fetchurl, linkFarmFromDrvs, makeWrapper,
|
||||
dotnetPackages, dotnetCorePackages
|
||||
}:
|
||||
{ lib, buildDotnetModule, fetchFromGitHub, dotnetCorePackages }:
|
||||
|
||||
let
|
||||
deps = import ./deps.nix {
|
||||
fetchNuGet = { name, version, sha256 }: fetchurl {
|
||||
name = "nuget-${name}-${version}.nupkg";
|
||||
url = "https://www.nuget.org/api/v2/package/${name}/${version}";
|
||||
inherit sha256;
|
||||
};
|
||||
};
|
||||
dotnetSdk = dotnetCorePackages.sdk_3_1;
|
||||
in
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
buildDotnetModule rec {
|
||||
pname = "nbxplorer";
|
||||
version = "2.2.11";
|
||||
|
||||
|
@ -24,31 +11,20 @@ stdenv.mkDerivation rec {
|
|||
sha256 = "sha256-ZDqzkANGMdvv3e5gWCYcacUYKLJRquXRHLr8RAzT9hY=";
|
||||
};
|
||||
|
||||
nativeBuildInputs = [ dotnetSdk dotnetPackages.Nuget makeWrapper ];
|
||||
projectFile = "NBXplorer/NBXplorer.csproj";
|
||||
nugetDeps = ./deps.nix;
|
||||
|
||||
buildPhase = ''
|
||||
export HOME=$TMP/home
|
||||
export DOTNET_CLI_TELEMETRY_OPTOUT=1
|
||||
export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=1
|
||||
dotnet-sdk = dotnetCorePackages.sdk_3_1;
|
||||
dotnet-runtime = dotnetCorePackages.aspnetcore_3_1;
|
||||
|
||||
nuget sources Add -Name tmpsrc -Source $TMP/nuget
|
||||
nuget init ${linkFarmFromDrvs "deps" deps} $TMP/nuget
|
||||
|
||||
dotnet restore --source $TMP/nuget NBXplorer/NBXplorer.csproj
|
||||
dotnet publish --no-restore --output $out/share/$pname -c Release NBXplorer/NBXplorer.csproj
|
||||
postInstall = ''
|
||||
mv $out/bin/{NBXplorer,nbxplorer}
|
||||
'';
|
||||
|
||||
installPhase = ''
|
||||
makeWrapper $out/share/$pname/NBXplorer $out/bin/$pname \
|
||||
--set DOTNET_ROOT "${dotnetSdk}"
|
||||
'';
|
||||
|
||||
dontStrip = true;
|
||||
|
||||
meta = with lib; {
|
||||
description = "Minimalist UTXO tracker for HD Cryptocurrency Wallets";
|
||||
maintainers = with maintainers; [ kcalvinalvin earvstedt ];
|
||||
license = lib.licenses.mit;
|
||||
platforms = lib.platforms.linux;
|
||||
license = licenses.mit;
|
||||
platforms = platforms.linux;
|
||||
};
|
||||
}
|
||||
|
|
|
@@ -7,16 +7,16 @@
}:
rustPlatform.buildRustPackage rec {
pname = "polkadot";
version = "0.9.11";
version = "0.9.12";

src = fetchFromGitHub {
owner = "paritytech";
repo = "polkadot";
rev = "v${version}";
sha256 = "17a0g4sijc1p9fy5xh8krs3y1hc75s17ak0hfhpi231gs4fl20pd";
sha256 = "1d1ppj8djqm97k18cbdvbgv9a5vhvxdgjiqair0bmxc44hwapl65";
};

cargoSha256 = "07yzdchpzs2g1f8fzhaj11yybd2d8lv9ib859z7122anxzdr0028";
cargoSha256 = "09kcacz836sm1zsi08mmf4ca5vbqc0lwwaam9p4vi0v4kd45axx9";

nativeBuildInputs = [ clang ];

@@ -17,8 +17,8 @@ let
sha256Hash = "1j1fxl4vzq3bln2z9ycxn9imjgy55yd1nbl7ycmsi90bdp96pzj0";
};
latestVersion = { # canary & dev
version = "2021.2.1.1"; # "Android Studio Chipmunk (2021.2.1) Canary 1"
sha256Hash = "1fn0jv6ybgdhgpwhamw16fjqbg2961ir9jhbjzanysi7y3935nbv";
version = "2021.2.1.2"; # "Android Studio Chipmunk (2021.2.1) Canary 2"
sha256Hash = "0xvn9zgn4cc9lhjynhiavmvx8bdzg4kcllmhg7xv18kp6wz4lh6z";
};
in {
# Attributes are named by their corresponding release channels
|
@@ -38,13 +38,13 @@ let
in
stdenv.mkDerivation rec {
pname = "cudatext";
version = "1.146.0";
version = "1.148.0";

src = fetchFromGitHub {
owner = "Alexey-T";
repo = "CudaText";
rev = version;
sha256 = "sha256-YK4nLQvRdgS7hq5a9uVfVjUAgkM/sYXiKjbt0QNzcok=";
sha256 = "sha256-/wvtIPF/1HneW0zuT7+VCixemkw91MdU0S66bz2y48U=";
};

postPatch = ''
|
@ -11,13 +11,13 @@
|
|||
},
|
||||
"ATFlatControls": {
|
||||
"owner": "Alexey-T",
|
||||
"rev": "2021.09.14",
|
||||
"sha256": "sha256-j69UkRNdVdzMITBHMT1QwAsYX9S0fts5/0PCroCGtL8="
|
||||
"rev": "2021.10.19",
|
||||
"sha256": "sha256-NO1q4qDXZ0x0G6AtcRP9xnFDWuBzOvxq8G7I76LgaBw="
|
||||
},
|
||||
"ATSynEdit": {
|
||||
"owner": "Alexey-T",
|
||||
"rev": "2021.10.03",
|
||||
"sha256": "sha256-JGw/GbQNLAgHhDm/EgCGvzPpd8rqQo2FhmAL51XIekw="
|
||||
"rev": "2021.10.27",
|
||||
"sha256": "sha256-7DlnO7IeCFLU1A+HJt4CFXoHWfhAr52tBvfPNHieXMM="
|
||||
},
|
||||
"ATSynEdit_Cmp": {
|
||||
"owner": "Alexey-T",
|
||||
|
@ -26,8 +26,8 @@
|
|||
},
|
||||
"EControl": {
|
||||
"owner": "Alexey-T",
|
||||
"rev": "2021.10.03",
|
||||
"sha256": "sha256-Kbjzn4Rp+/oTNgFMlzlkQEeob0Z4VidqJ/+wuNHS580="
|
||||
"rev": "2021.10.21",
|
||||
"sha256": "sha256-RyRpHihmmr/EeVWk9CR0S3pvKy0FzqLZNGti33+4fkI="
|
||||
},
|
||||
"ATSynEdit_Ex": {
|
||||
"owner": "Alexey-T",
|
||||
|
@ -36,8 +36,8 @@
|
|||
},
|
||||
"Python-for-Lazarus": {
|
||||
"owner": "Alexey-T",
|
||||
"rev": "2021.07.27",
|
||||
"sha256": "sha256-izCyBNRLRCizSjR7v9RhcLrQ6+aQA4eejCHFUzJ0IpE="
|
||||
"rev": "2021.10.27",
|
||||
"sha256": "sha256-ikXdDUMJ9MxRejEVAhwUsXYVh0URVFHzEpnXuN5NGpA="
|
||||
},
|
||||
"Emmet-Pascal": {
|
||||
"owner": "Alexey-T",
|
||||
|
|
|
@ -384,10 +384,10 @@
|
|||
elpaBuild {
|
||||
pname = "boxy-headings";
|
||||
ename = "boxy-headings";
|
||||
version = "2.1.0";
|
||||
version = "2.1.2";
|
||||
src = fetchurl {
|
||||
url = "https://elpa.gnu.org/packages/boxy-headings-2.1.0.tar";
|
||||
sha256 = "021w4ic028jsq7vxz1jgnfny9dymcz6v112b3b3nwyw3g3dnc62f";
|
||||
url = "https://elpa.gnu.org/packages/boxy-headings-2.1.2.tar";
|
||||
sha256 = "0jyfp41jw33kmi7832x5x0mgh5niqvb7dfc7q00kay5q9ixg83dq";
|
||||
};
|
||||
packageRequires = [ boxy emacs org ];
|
||||
meta = {
|
||||
|
@ -707,6 +707,21 @@
|
|||
license = lib.licenses.free;
|
||||
};
|
||||
}) {};
|
||||
coterm = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
|
||||
elpaBuild {
|
||||
pname = "coterm";
|
||||
ename = "coterm";
|
||||
version = "1.2";
|
||||
src = fetchurl {
|
||||
url = "https://elpa.gnu.org/packages/coterm-1.2.tar";
|
||||
sha256 = "0jl48bi4a4fkk7p2nj2bx0b658wrjw0cvab5ds6rid44irc8b1mn";
|
||||
};
|
||||
packageRequires = [ emacs ];
|
||||
meta = {
|
||||
homepage = "https://elpa.gnu.org/packages/coterm.html";
license = lib.licenses.free;
};
}) {};
counsel = callPackage ({ elpaBuild, emacs, fetchurl, ivy, lib, swiper }:
elpaBuild {
pname = "counsel";
@@ -1146,10 +1161,10 @@
elpaBuild {
pname = "eev";
ename = "eev";
version = "20211011";
version = "20211024";
src = fetchurl {
url = "https://elpa.gnu.org/packages/eev-20211011.tar";
sha256 = "1a71qam6z5s3zl7fvxpsnabbqxh8a7llm1524nxs2353pb6ksfra";
url = "https://elpa.gnu.org/packages/eev-20211024.tar";
sha256 = "165mscb1kpgd3db92vklglnaph60rvrr8wm3hpkhrbyac100ryji";
};
packageRequires = [ emacs ];
meta = {
@@ -1484,10 +1499,10 @@
elpaBuild {
pname = "flymake-proselint";
ename = "flymake-proselint";
version = "0.2.1";
version = "0.2.2";
src = fetchurl {
url = "https://elpa.gnu.org/packages/flymake-proselint-0.2.1.tar";
sha256 = "08hbz8k3idr1gb98q3ssmzsdya5afjxl25l9xzqp9q2w5krc8433";
url = "https://elpa.gnu.org/packages/flymake-proselint-0.2.2.tar";
sha256 = "0v43d2cszrq8lzshm17x6aiqbkzwz5kj8x5sznc3nip9gaqsrfv1";
};
packageRequires = [ emacs ];
meta = {
@@ -3591,10 +3606,10 @@
elpaBuild {
pname = "shell-command-plus";
ename = "shell-command+";
version = "2.3.1";
version = "2.3.2";
src = fetchurl {
url = "https://elpa.gnu.org/packages/shell-command+-2.3.1.tar";
sha256 = "0g8pcrkkh3bxcxxbasnz834gi3pvhlkpf011fvmlhwzswypcyqmy";
url = "https://elpa.gnu.org/packages/shell-command+-2.3.2.tar";
sha256 = "03hmk4gr9kjy3238n0ys9na00py035j9s0y8d87c45f5af6c6g2c";
};
packageRequires = [ emacs ];
meta = {
@@ -3632,6 +3647,21 @@
license = lib.licenses.free;
};
}) {};
sketch-mode = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "sketch-mode";
ename = "sketch-mode";
version = "1.0.3";
src = fetchurl {
url = "https://elpa.gnu.org/packages/sketch-mode-1.0.3.tar";
sha256 = "17xa8754zp07izgd3b9hywlwd1jrbzyc5y1rrhin7w6r0pyvqs51";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/sketch-mode.html";
license = lib.licenses.free;
};
}) {};
slime-volleyball = callPackage ({ cl-lib ? null, elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "slime-volleyball";
@@ -4219,10 +4249,10 @@
elpaBuild {
pname = "vc-hgcmd";
ename = "vc-hgcmd";
version = "1.14";
version = "1.14.1";
src = fetchurl {
url = "https://elpa.gnu.org/packages/vc-hgcmd-1.14.tar";
sha256 = "0pg6fg0znsmky3iwdpxn2sx5bbn72kw83s077000ilawi6zqwc2d";
url = "https://elpa.gnu.org/packages/vc-hgcmd-1.14.1.tar";
sha256 = "12izw5ln22xdgwh6mqm6axzdfpcnqq7qcj72nmykrbsgpagp5fy6";
};
packageRequires = [ emacs ];
meta = {
@@ -4281,10 +4311,10 @@
elpaBuild {
pname = "verilog-mode";
ename = "verilog-mode";
version = "2021.9.23.89128420";
version = "2021.10.14.127365406";
src = fetchurl {
url = "https://elpa.gnu.org/packages/verilog-mode-2021.9.23.89128420.tar";
sha256 = "1sgmkmif44npghz4nnag1w91qrrylq36175cjj87lcdp22s6isgk";
url = "https://elpa.gnu.org/packages/verilog-mode-2021.10.14.127365406.tar";
sha256 = "0d842dwg98srv73nkg69c7x24rw20mxgqmb4k1qcbl02bwxkfmsm";
};
packageRequires = [];
meta = {
@@ -4604,10 +4634,10 @@
elpaBuild {
pname = "xref";
ename = "xref";
version = "1.3.0";
version = "1.3.2";
src = fetchurl {
url = "https://elpa.gnu.org/packages/xref-1.3.0.tar";
sha256 = "0bw2cbxmjavzhmpd9gyl41d4c201p535jrfz3b7jb5zw12jdnppl";
url = "https://elpa.gnu.org/packages/xref-1.3.2.tar";
sha256 = "13bsaxdxwn14plaam0hsrswngh3rm2k29v5ybjgjyjy4d5vwz78j";
};
packageRequires = [ emacs ];
meta = {
@@ -1,5 +1,65 @@
{ callPackage }:
{
afternoon-theme = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "afternoon-theme";
ename = "afternoon-theme";
version = "0.1";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/afternoon-theme-0.1.tar";
sha256 = "0aalwn1hf0p756qmiybmxphh4dx8gd5r4jhbl43l6y68fdijr6qg";
};
packageRequires = [ emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/afternoon-theme.html";
license = lib.licenses.free;
};
}) {};
alect-themes = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "alect-themes";
ename = "alect-themes";
version = "0.10";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/alect-themes-0.10.tar";
sha256 = "0j5zwmxq1f9hlarr1f0j010kd3n2k8hbhr8pw789j3zlc2kmx5bb";
};
packageRequires = [ emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/alect-themes.html";
license = lib.licenses.free;
};
}) {};
ample-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "ample-theme";
ename = "ample-theme";
version = "0.3.0";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/ample-theme-0.3.0.tar";
sha256 = "0b5a9pqvmfc3h1l0rsmw57vj5j740ysnlpiig6jx9rkgn7awm5p1";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/ample-theme.html";
license = lib.licenses.free;
};
}) {};
anti-zenburn-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "anti-zenburn-theme";
ename = "anti-zenburn-theme";
version = "2.5.1";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/anti-zenburn-theme-2.5.1.tar";
sha256 = "06d7nm4l6llv7wjbwnhfaamrcihichljkpwnllny960pi56a8gmr";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/anti-zenburn-theme.html";
license = lib.licenses.free;
};
}) {};
apache-mode = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "apache-mode";
@@ -15,6 +75,21 @@
license = lib.licenses.free;
};
}) {};
apropospriate-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "apropospriate-theme";
ename = "apropospriate-theme";
version = "0.1.1";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/apropospriate-theme-0.1.1.tar";
sha256 = "11m80gijxvg4jf9davjja3bvykv161ggsrg7q0bihr0gq0flxgd7";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/apropospriate-theme.html";
license = lib.licenses.free;
};
}) {};
arduino-mode = callPackage ({ elpaBuild, emacs, fetchurl, lib, spinner }:
elpaBuild {
pname = "arduino-mode";
@@ -75,6 +150,24 @@
license = lib.licenses.free;
};
}) {};
color-theme-tangotango = callPackage ({ color-theme
, elpaBuild
, fetchurl
, lib }:
elpaBuild {
pname = "color-theme-tangotango";
ename = "color-theme-tangotango";
version = "0.0.6";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/color-theme-tangotango-0.0.6.tar";
sha256 = "0lfr3xg9xvfjb12kcw80d35a1ayn4f5w1dkd2b0kx0wxkq0bykim";
};
packageRequires = [ color-theme ];
meta = {
homepage = "https://elpa.gnu.org/packages/color-theme-tangotango.html";
license = lib.licenses.free;
};
}) {};
crux = callPackage ({ elpaBuild, fetchurl, lib, seq }:
elpaBuild {
pname = "crux";
@@ -90,6 +183,21 @@
license = lib.licenses.free;
};
}) {};
cyberpunk-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "cyberpunk-theme";
ename = "cyberpunk-theme";
version = "1.22";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/cyberpunk-theme-1.22.tar";
sha256 = "1kva129l8vwfvafw329znrsqhm1j645xsyz55il1jhc28fbijp51";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/cyberpunk-theme.html";
license = lib.licenses.free;
};
}) {};
d-mode = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "d-mode";
@@ -120,6 +228,21 @@
license = lib.licenses.free;
};
}) {};
dracula-theme = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "dracula-theme";
ename = "dracula-theme";
version = "1.7.0";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/dracula-theme-1.7.0.tar";
sha256 = "0vbi9560phdp38x5mfl1f9rp8cw7p7s2mvbww84ka0gfz0zrczpm";
};
packageRequires = [ emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/dracula-theme.html";
license = lib.licenses.free;
};
}) {};
editorconfig = callPackage ({ cl-lib ? null
, elpaBuild
, emacs
@@ -159,10 +282,10 @@
elpaBuild {
pname = "flymake-kondor";
ename = "flymake-kondor";
version = "0.1.0";
version = "0.1.2";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/flymake-kondor-0.1.0.tar";
sha256 = "0fn9vnrqy5nmv07jv2ry0xs90rkb92qhrh7j5pdikw7zykcwlbdd";
url = "https://elpa.nongnu.org/nongnu/flymake-kondor-0.1.2.tar";
sha256 = "17mmn9mj4zl5f7byairkgxz6s2mrq73q3219s73c0b2g0g846krn";
};
packageRequires = [ emacs ];
meta = {
@@ -356,6 +479,21 @@
license = lib.licenses.free;
};
}) {};
git-modes = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "git-modes";
ename = "git-modes";
version = "1.4.0";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/git-modes-1.4.0.tar";
sha256 = "1pag50l0rl361p1617rdvhhdajsmq9b1lyi94g16hibygdn7vaff";
};
packageRequires = [ emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/git-modes.html";
license = lib.licenses.free;
};
}) {};
gnuplot = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "gnuplot";
@@ -416,6 +554,21 @@
license = lib.licenses.free;
};
}) {};
haml-mode = callPackage ({ cl-lib ? null, elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "haml-mode";
ename = "haml-mode";
version = "3.1.10";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/haml-mode-3.1.10.tar";
sha256 = "1qkhm52xr8vh9zp728ass5kxjw7fj65j84m06db084qpavnwvysa";
};
packageRequires = [ cl-lib emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/haml-mode.html";
license = lib.licenses.free;
};
}) {};
haskell-mode = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "haskell-mode";
@@ -638,6 +791,36 @@
license = lib.licenses.free;
};
}) {};
material-theme = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "material-theme";
ename = "material-theme";
version = "2015";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/material-theme-2015.tar";
sha256 = "027plf401y3lb5y9hzj8gpy9sm0p1k8hv94pywnagq4kr9hivnb9";
};
packageRequires = [ emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/material-theme.html";
license = lib.licenses.free;
};
}) {};
monokai-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "monokai-theme";
ename = "monokai-theme";
version = "3.5.3";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/monokai-theme-3.5.3.tar";
sha256 = "15b5ijkb0wrixlw13rj02x7m0r3ldqfs3bb6g48hhbqfapd6rcx0";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/monokai-theme.html";
license = lib.licenses.free;
};
}) {};
multiple-cursors = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "multiple-cursors";
@@ -792,10 +975,10 @@
elpaBuild {
pname = "rust-mode";
ename = "rust-mode";
version = "0.5.0";
version = "1.0.0";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/rust-mode-0.5.0.tar";
sha256 = "03z1nsq1s3awaczirlxixq4gwhz9bf1x5zwd5xfb88ay4kzcmjwc";
url = "https://elpa.nongnu.org/nongnu/rust-mode-1.0.0.tar";
sha256 = "0ch3hf954iy5hh5zyjjg68szdk5icppmi8nbap27wfwgvhvyfa67";
};
packageRequires = [ emacs ];
meta = {
@@ -882,6 +1065,36 @@
license = lib.licenses.free;
};
}) {};
solarized-theme = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "solarized-theme";
ename = "solarized-theme";
version = "1.3.0";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/solarized-theme-1.3.0.tar";
sha256 = "0wa3wp9r0h4y3kkiw8s4pi1zvg22yhnpsp8ckv1hp4y6js5jbg65";
};
packageRequires = [ emacs ];
meta = {
homepage = "https://elpa.gnu.org/packages/solarized-theme.html";
license = lib.licenses.free;
};
}) {};
subatomic-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "subatomic-theme";
ename = "subatomic-theme";
version = "1.8.1";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/subatomic-theme-1.8.1.tar";
sha256 = "0j496l7c2rwgxk2srcf1a70z63y48q5bs9cpx95212q7rl20zhip";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/subatomic-theme.html";
license = lib.licenses.free;
};
}) {};
swift-mode = callPackage ({ elpaBuild, emacs, fetchurl, lib, seq }:
elpaBuild {
pname = "swift-mode";
@@ -927,6 +1140,21 @@
license = lib.licenses.free;
};
}) {};
ujelly-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "ujelly-theme";
ename = "ujelly-theme";
version = "1.2.9";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/ujelly-theme-1.2.9.tar";
sha256 = "04h86s0a44cmxizqi4p5h9gl1aiqwrvkh3xmawvn7z836i3hvxn9";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/ujelly-theme.html";
license = lib.licenses.free;
};
}) {};
vc-fossil = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "vc-fossil";
@@ -1017,6 +1245,21 @@
license = lib.licenses.free;
};
}) {};
zenburn-theme = callPackage ({ elpaBuild, fetchurl, lib }:
elpaBuild {
pname = "zenburn-theme";
ename = "zenburn-theme";
version = "2.7.0";
src = fetchurl {
url = "https://elpa.nongnu.org/nongnu/zenburn-theme-2.7.0.tar";
sha256 = "1x7gd5w0g47kcam88lm605b35y35iq3q5f991a84l050c8syrkpy";
};
packageRequires = [];
meta = {
homepage = "https://elpa.gnu.org/packages/zenburn-theme.html";
license = lib.licenses.free;
};
}) {};
zig-mode = callPackage ({ elpaBuild, emacs, fetchurl, lib }:
elpaBuild {
pname = "zig-mode";
File diff suppressed because it is too large
@@ -17,6 +17,8 @@
, makeFontsConf
, libglvnd
, libxkbcommon
, stdenv
, enableWayland ? stdenv.isLinux
, wayland
, xorg
}:
@@ -96,9 +98,18 @@ rustPlatform.buildRustPackage rec {
}))
];

postFixup = ''
postFixup = let
libPath = lib.makeLibraryPath ([
libglvnd
libxkbcommon
xorg.libXcursor
xorg.libXext
xorg.libXrandr
xorg.libXi
] ++ lib.optionals enableWayland [ wayland ]);
in ''
wrapProgram $out/bin/neovide \
--prefix LD_LIBRARY_PATH : ${lib.makeLibraryPath [ libglvnd libxkbcommon wayland xorg.libXcursor xorg.libXext xorg.libXrandr xorg.libXi ]}
--prefix LD_LIBRARY_PATH : ${libPath}
'';

postInstall = ''
@@ -115,7 +126,7 @@ rustPlatform.buildRustPackage rec {
homepage = "https://github.com/Kethku/neovide";
license = with licenses; [ mit ];
maintainers = with maintainers; [ ck3d ];
platforms = platforms.linux;
platforms = platforms.unix;
mainProgram = "neovide";
};
}
@@ -3,13 +3,13 @@

mkDerivation rec {
pname = "texstudio";
version = "4.0.0";
version = "4.0.2";

src = fetchFromGitHub {
owner = "${pname}-org";
repo = pname;
rev = version;
sha256 = "0fapgc6dvzn47gmhxkqymwi3818rdiag33ml57j2mfmsi5pjxi0f";
sha256 = "sha256-SCrWoIZan8mFwQoXaXvM0Ujdhcic3FbmfgKZSFXFBGE=";
};

nativeBuildInputs = [ qmake wrapQtAppsHook pkg-config ];
third_party/nixpkgs/pkgs/applications/editors/thiefmd/default.nix (vendored normal file, 48 lines)
@@ -0,0 +1,48 @@
{ lib, stdenv, fetchFromGitHub, wrapGAppsHook, cmake, desktop-file-utils, glib
, meson, ninja, pkg-config, vala, clutter, discount, gtk3, gtksourceview4, gtkspell3
, libarchive, libgee, libhandy, libsecret, link-grammar, webkitgtk }:

stdenv.mkDerivation rec {
pname = "thiefmd";
version = "0.2.4";

src = fetchFromGitHub {
owner = "kmwallio";
repo = "ThiefMD";
rev = "v${version}-easypdf";
sha256 = "sha256-YN17o6GtpulxhXs+XYZLY36g9S8ggR6URNLrjs5PEoI=";
fetchSubmodules = true;
};

nativeBuildInputs = [
cmake desktop-file-utils glib meson wrapGAppsHook
ninja pkg-config vala
];

buildInputs = [
clutter discount gtk3 gtksourceview4 gtkspell3
libarchive libgee libhandy libsecret link-grammar
webkitgtk
];

dontUseCmakeConfigure = true;

postInstall = ''
mv $out/share/applications/com.github.kmwallio.thiefmd.desktop \
$out/share/applications/thiefmd.desktop
substituteInPlace $out/share/applications/thiefmd.desktop \
--replace 'Exec=com.github.kmwallio.' Exec=$out/bin/

makeWrapper $out/bin/com.github.kmwallio.thiefmd \
$out/bin/thiefmd \
--prefix XDG_DATA_DIRS : "${gtk3}/share/gsettings-schemas/${gtk3.name}/"
'';

meta = with lib; {
description = "Markdown & Fountain editor that helps with organization and management";
homepage = "https://thiefmd.com";
license = licenses.gpl3Only;
platforms = platforms.linux;
maintainers = with maintainers; [ wolfangaukang ];
};
}
@@ -81,25 +81,25 @@ let
installPhase = ''
runHook preInstall
'' + (if stdenv.isDarwin then ''
mkdir -p "$out/Applications/${longName}.app" $out/bin
mkdir -p "$out/Applications/${longName}.app" "$out/bin"
cp -r ./* "$out/Applications/${longName}.app"
ln -s "$out/Applications/${longName}.app/Contents/Resources/app/bin/${sourceExecutableName}" $out/bin/${executableName}
ln -s "$out/Applications/${longName}.app/Contents/Resources/app/bin/${sourceExecutableName}" "$out/bin/${executableName}"
'' else ''
mkdir -p $out/lib/vscode $out/bin
cp -r ./* $out/lib/vscode
mkdir -p "$out/lib/vscode" "$out/bin"
cp -r ./* "$out/lib/vscode"

ln -s $out/lib/vscode/bin/${sourceExecutableName} $out/bin/${executableName}
ln -s "$out/lib/vscode/bin/${sourceExecutableName}" "$out/bin/${executableName}"

mkdir -p $out/share/applications
ln -s $desktopItem/share/applications/${executableName}.desktop $out/share/applications/${executableName}.desktop
ln -s $urlHandlerDesktopItem/share/applications/${executableName}-url-handler.desktop $out/share/applications/${executableName}-url-handler.desktop
mkdir -p "$out/share/applications"
ln -s "$desktopItem/share/applications/${executableName}.desktop" "$out/share/applications/${executableName}.desktop"
ln -s "$urlHandlerDesktopItem/share/applications/${executableName}-url-handler.desktop" "$out/share/applications/${executableName}-url-handler.desktop"

mkdir -p $out/share/pixmaps
cp $out/lib/vscode/resources/app/resources/linux/code.png $out/share/pixmaps/code.png
mkdir -p "$out/share/pixmaps"
cp "$out/lib/vscode/resources/app/resources/linux/code.png" "$out/share/pixmaps/code.png"

# Override the previously determined VSCODE_PATH with the one we know to be correct
sed -i "/ELECTRON=/iVSCODE_PATH='$out/lib/vscode'" $out/bin/${executableName}
grep -q "VSCODE_PATH='$out/lib/vscode'" $out/bin/${executableName} # check if sed succeeded
sed -i "/ELECTRON=/iVSCODE_PATH='$out/lib/vscode'" "$out/bin/${executableName}"
grep -q "VSCODE_PATH='$out/lib/vscode'" "$out/bin/${executableName}" # check if sed succeeded
'') + ''
runHook postInstall
'';
@@ -162,9 +162,9 @@ let

# restore desktop item icons
extraInstallCommands = ''
mkdir -p $out/share/applications
mkdir -p "$out/share/applications"
for item in ${unwrapped}/share/applications/*.desktop; do
ln -s $item $out/share/applications/
ln -s "$item" "$out/share/applications/"
done
'';
third_party/nixpkgs/pkgs/applications/editors/your-editor/default.nix (vendored normal file, 29 lines)
@@ -0,0 +1,29 @@
{ lib, stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
pname = "your-editor";
version = "1203";

src = fetchFromGitHub {
owner = "kammerdienerb";
repo = "yed";
rev = "608418f2037dc4ef5647e69fcef45302c50f138c";
sha256 = "KqK2lcDTn91aCFJIDg+h+QsTrl7745So5aiKCxPkeh4=";
};

installPhase = ''
runHook preInstall
patchShebangs install.sh
./install.sh -p $out
runHook postInstall
'';

meta = with lib; {
description = "Your-editor (yed) is a small and simple terminal editor core that is meant to be extended through a powerful plugin architecture";
homepage = "https://your-editor.org/";
license = with licenses; [ mit ];
platforms = platforms.linux;
maintainers = with maintainers; [ uniquepointer ];
mainProgram = "yed";
};
}
@@ -16,13 +16,13 @@ in

stdenv.mkDerivation rec {
pname = "imagemagick";
version = "6.9.12-19";
version = "6.9.12-26";

src = fetchFromGitHub {
owner = "ImageMagick";
repo = "ImageMagick6";
rev = version;
sha256 = "sha256-8KofT9aNd8SXL0YBQ0RUOTccVxQNacvJL1uYPZiSPkY=";
sha256 = "sha256-oNorY/93jk1v5BS1T3wqctXuzV4o8JlyZtHnsNYmO4U=";
};

outputs = [ "out" "dev" "doc" ]; # bin/ isn't really big
@@ -18,13 +18,13 @@ in

stdenv.mkDerivation rec {
pname = "imagemagick";
version = "7.1.0-9";
version = "7.1.0-11";

src = fetchFromGitHub {
owner = "ImageMagick";
repo = "ImageMagick";
rev = version;
sha256 = "sha256-9eeOY6TvNykWA3yyQH1UR3ahdhOja87I9rsie9fMbso=";
sha256 = "sha256-z7ZpoB8NlcS5NVyoW0ngSlakCcb5qC3bh3xDVYuWS6w=";
};

outputs = [ "out" "dev" "doc" ]; # bin/ isn't really big
third_party/nixpkgs/pkgs/applications/graphics/ciano/default.nix (vendored normal file, 76 lines)
@@ -0,0 +1,76 @@
{ lib
, stdenv
, fetchFromGitHub
, desktop-file-utils
, ffmpeg
, gobject-introspection
, granite
, gtk
, imagemagick
, libgee
, libhandy
, libsecret
, libsoup
, meson
, ninja
, pkg-config
, python
, vala
, wrapGAppsHook
}:

stdenv.mkDerivation rec {
pname = "ciano";
version = "0.2.4";

src = fetchFromGitHub {
owner = "robertsanseries";
repo = pname;
rev = version;
hash = "sha256-nubm6vBWwsHrrmvFAL/cIzYPxg9B1EhnpC79IJMNuFY=";
};

nativeBuildInputs = [
desktop-file-utils
meson
ninja
pkg-config
python
vala
wrapGAppsHook
];

buildInputs = [
ffmpeg
imagemagick
granite
gtk
];

postPatch = ''
chmod +x meson/post_install.py
patchShebangs meson/post_install.py
'';

dontWrapGApps = true;

postFixup = let
binPath = lib.makeBinPath [
ffmpeg
imagemagick
];
in
''
wrapProgram $out/bin/com.github.robertsanseries.ciano \
--prefix PATH : ${binPath} "''${gappsWrapperArgs[@]}"
ln -s $out/bin/com.github.robertsanseries.ciano $out/bin/ciano
'';

meta = with lib; {
homepage = "https://github.com/robertsanseries/ciano";
description = "A multimedia file converter focused on simplicity";
license = licenses.gpl3Plus;
maintainers = with maintainers; [ AndersonTorres ];
platforms = platforms.linux;
};
}
@@ -1,7 +1,6 @@
{ lib
, mkDerivation
, fetchFromGitHub
, fetchpatch
, cmake
, dxflib
, eigen
@@ -10,6 +9,7 @@
, LASzip
, libLAS
, pdal
, pcl
, qtbase
, qtsvg
, qttools
@@ -19,30 +19,21 @@

mkDerivation rec {
pname = "cloudcompare";
version = "2.11.2"; # Remove below patch with the next version bump.
# Released version(v2.11.3) doesn't work with packaged PCL.
version = "unstable-2021-10-14";

src = fetchFromGitHub {
owner = "CloudCompare";
repo = "CloudCompare";
rev = "v${version}";
sha256 = "0sb2h08iaf6zrf54sg6ql6wm63q5vq0kpd3gffdm26z8w6j6wv3s";
rev = "1f65ba63756e23291ae91ff52d04da468ade8249";
sha256 = "x1bDjFjXIl3r+yo1soWvRB+4KGP50/WBoGlrH013JQo=";
# As of writing includes (https://github.com/CloudCompare/CloudCompare/blob/a1c589c006fc325e8b560c77340809b9c7e7247a/.gitmodules):
# * libE57Format
# * PoissonRecon
# In a future version it will also contain
# * CCCoreLib
fetchSubmodules = true;
};

patches = [
# TODO: Remove with next CloudCompare release (see https://github.com/CloudCompare/CloudCompare/pull/1478)
(fetchpatch {
name = "CloudCompare-fix-for-PDAL-2.3.0.patch";
url = "https://github.com/CloudCompare/CloudCompare/commit/f3038dcdeb0491c4a653c2ee6fb017326eb676a3.patch";
sha256 = "0ca5ry987mcgsdawz5yd4xhbsdb5k44qws30srxymzx2djvamwli";
})
];

nativeBuildInputs = [
cmake
eigen # header-only
@@ -55,6 +46,7 @@ mkDerivation rec {
LASzip
libLAS
pdal
pcl
qtbase
qtsvg
qttools
@@ -63,15 +55,14 @@ mkDerivation rec {
];

cmakeFlags = [
# TODO: This will become -DCCCORELIB_USE_TBB=ON in a future version, see
# https://github.com/CloudCompare/CloudCompare/commit/f5a0c9fd788da26450f3fa488b2cf0e4a08d255f
"-DCOMPILE_CC_CORE_LIB_WITH_TBB=ON"
"-DCCCORELIB_USE_TBB=ON"
"-DOPTION_USE_DXF_LIB=ON"
"-DOPTION_USE_GDAL=ON"
"-DOPTION_USE_SHAPE_LIB=ON"

"-DPLUGIN_GL_QEDL=ON"
"-DPLUGIN_GL_QSSAO=ON"

"-DPLUGIN_IO_QADDITIONAL=ON"
"-DPLUGIN_IO_QCORE=ON"
"-DPLUGIN_IO_QCSV_MATRIX=ON"
@@ -80,6 +71,8 @@ mkDerivation rec {
"-DPLUGIN_IO_QPDAL=ON" # required for .las/.laz support
"-DPLUGIN_IO_QPHOTOSCAN=ON"
"-DPLUGIN_IO_QRDB=OFF" # Riegl rdblib is proprietary; not packaged in nixpkgs

"-DPLUGIN_STANDARD_QPCL=ON" # Adds PCD import and export support
];

meta = with lib; {
@@ -53,13 +53,13 @@ let
python = python2.withPackages (pp: [ pp.pygtk ]);
in stdenv.mkDerivation rec {
pname = "gimp";
version = "2.10.24";
version = "2.10.28";

outputs = [ "out" "dev" ];

src = fetchurl {
url = "http://download.gimp.org/pub/gimp/v${lib.versions.majorMinor version}/${pname}-${version}.tar.bz2";
sha256 = "17lq6ns5qhspd171zqh76yf98xnn5n0hcl7hbhbx63cc6ribf6xx";
sha256 = "T03CLP8atfAm/qoqtV4Fd1s6EeGYGGtHvat5y/oHiCY=";
};

patches = [
@@ -1,7 +1,7 @@
{ callPackage, ... } @ args:

callPackage ./generic.nix (args // {
version = "5.0.0-beta1";
version = "5.0.0-beta2";
kde-channel = "unstable";
sha256 = "1p5l2vpsgcp4wajgn5rgjcyb8l5ickm1nkmfx8zzr4rnwjnyxdbm";
sha256 = "0hwh6k40f4kmwg14dy0vvm0m8cx8n0q67lrrc620da9mign3hjs7";
})
@@ -98,7 +98,7 @@ stdenv.mkDerivation rec {
passthru = {
updateScript = gnome.updateScript {
packageName = pname;
versionPolicy = "none";
versionPolicy = "odd-unstable";
};
};
third_party/nixpkgs/pkgs/applications/misc/binance/default.nix (vendored normal file, 58 lines)
@@ -0,0 +1,58 @@
{ lib, stdenv, fetchurl, dpkg, autoPatchelfHook, makeWrapper, electron_12,
alsa-lib, gtk3, libxshmfence, mesa, nss, popt }:

let
electron = electron_12;

in stdenv.mkDerivation rec {
pname = "binance";
version = "1.25.0";

src = fetchurl {
url = "https://github.com/binance/desktop/releases/download/v${version}/${pname}-${version}-amd64-linux.deb";
sha256 = "sha256-oXXzrRhdaWP8GcWI/Ugl8BrDWomZ+hsy5Om0+ME+zY0=";
};

nativeBuildInputs = [
dpkg
autoPatchelfHook
makeWrapper
];

buildInputs = [ alsa-lib gtk3 libxshmfence mesa nss popt ];

libPath = lib.makeLibraryPath buildInputs;

dontBuild = true;
dontConfigure = true;

unpackPhase = ''
dpkg-deb -x ${src} ./
'';

installPhase = ''
runHook preInstall

mv usr $out
mv opt $out

runHook postInstall
'';

postFixup = ''
substituteInPlace $out/share/applications/binance.desktop --replace '/opt/Binance' $out/bin

makeWrapper ${electron}/bin/electron \
$out/bin/binance \
--add-flags $out/opt/Binance/resources/app.asar \
--prefix LD_LIBRARY_PATH : ${libPath}
'';

meta = with lib; {
description = "Binance Cryptoexchange Official Desktop Client";
homepage = "https://www.binance.com/en/desktop-download";
license = licenses.unfree;
maintainers = with maintainers; [ wolfangaukang ];
platforms = [ "x86_64-linux" ];
};
}
@@ -26,11 +26,11 @@ let
in
stdenv.mkDerivation rec {
pname = "blender";
version = "2.93.2";
version = "2.93.5";

src = fetchurl {
url = "https://download.blender.org/source/${pname}-${version}.tar.xz";
sha256 = "sha256-nG1Kk6UtiCwsQBDz7VELcMRVEovS49QiO3haIpvSfu4=";
sha256 = "1fsw8w80h8k5w4zmy659bjlzqyn5i198hi1kbpzfrdn8psxg2bfj";
};

patches = lib.optional stdenv.isDarwin ./darwin.patch;
@@ -27,19 +27,20 @@

mkDerivation rec {
pname = "calibre";
version = "5.29.0";
version = "5.30.0";

src = fetchurl {
url = "https://download.calibre-ebook.com/${version}/${pname}-${version}.tar.xz";
sha256 = "sha256-9ymHEpTHDUM3NAGoeSETzKRLKgJLRY4eEli6N5lbZug=";
sha256 = "058dqqxhc3pl4is1idlnc3pz80k4r681d5aj4a26v9acp8j7zy4f";
};

# https://sources.debian.org/patches/calibre/5.29.0+dfsg-1
# https://sources.debian.org/patches/calibre/5.30.0+dfsg-1
patches = [
# allow for plugin update check, but no calibre version check
(fetchpatch {
name = "0001_only_plugin_update.patch";
url = "https://sources.debian.org/data/main/c/calibre/5.29.0%2Bdfsg-1/debian/patches/0001-only-plugin-update.patch";
url =
"https://sources.debian.org/data/main/c/calibre/${version}%2Bdfsg-1/debian/patches/0001-only-plugin-update.patch";
sha256 = "sha256-aGT8rJ/eQKAkmyHBWdY0ouZuWvDwtLVJU5xY6d3hY3k=";
})
]
@@ -1,47 +0,0 @@
# Notes by Charles Duffy <charles@dyfis.net> --
#
# - The new version of OpenMP does not allow outside variables to be referenced
#   *at all* without an explicit declaration of how they're supposed to be
#   handled. Thus, this was an outright build failure beforehand. The new
#   pragmas copy the initial value from the outer scope into each parallel
#   thread. Since these variables are all constant within the loops, this is
#   clearly correct. (Not sure it's *optimal*, but quite sure it isn't
#   *wrong*).
# - Upstream has been contacted -- I'm a Lulzbot customer with an active
#   support contract and sent them the patch. That said, they're in the middle
#   of some major corporate churn (sold themselves out of near-bankruptcy to an
#   out-of-state business entity formed as a holding company; moved to that
#   state; have been slowly restaffing after), so a response may take a while.
# - The patch is purely my own work.

--- curaengine/src/support.cpp.orig 2020-03-28 10:38:01.953912363 -0500
+++ curaengine/src/support.cpp 2020-03-28 10:45:28.999791908 -0500
@@ -854,7 +854,7 @@
const double tan_angle = tan(angle) - 0.01; // the XY-component of the supportAngle
xy_disallowed_per_layer[0] = storage.getLayerOutlines(0, false).offset(xy_distance);
// for all other layers (of non support meshes) compute the overhang area and possibly use that when calculating the support disallowed area
- #pragma omp parallel for default(none) shared(xy_disallowed_per_layer, storage, mesh) schedule(dynamic)
+ #pragma omp parallel for default(none) firstprivate(layer_count, is_support_mesh_place_holder, use_xy_distance_overhang, z_distance_top, tan_angle, xy_distance, xy_distance_overhang) shared(xy_disallowed_per_layer, storage, mesh) schedule(dynamic)
for (unsigned int layer_idx = 1; layer_idx < layer_count; layer_idx++)
{
Polygons outlines = storage.getLayerOutlines(layer_idx, false);
@@ -1054,7 +1054,7 @@
const int max_checking_layer_idx = std::min(static_cast<int>(storage.support.supportLayers.size())
, static_cast<int>(layer_count - (layer_z_distance_top - 1)));
const size_t max_checking_idx_size_t = std::max(0, max_checking_layer_idx);
-#pragma omp parallel for default(none) shared(support_areas, storage) schedule(dynamic)
+#pragma omp parallel for default(none) firstprivate(max_checking_idx_size_t, layer_z_distance_top) shared(support_areas, storage) schedule(dynamic)
for (size_t layer_idx = 0; layer_idx < max_checking_idx_size_t; layer_idx++)
{
support_areas[layer_idx] = support_areas[layer_idx].difference(storage.getLayerOutlines(layer_idx + layer_z_distance_top - 1, false));
--- curaengine/src/layerPart.cpp.orig 2020-03-28 10:36:40.381023651 -0500
+++ curaengine/src/layerPart.cpp 2020-03-28 10:39:54.584140465 -0500
@@ -49,7 +49,7 @@
{
const auto total_layers = slicer->layers.size();
assert(mesh.layers.size() == total_layers);
-#pragma omp parallel for default(none) shared(mesh, slicer) schedule(dynamic)
+#pragma omp parallel for default(none) firstprivate(total_layers) shared(mesh, slicer) schedule(dynamic)
for (unsigned int layer_nr = 0; layer_nr < total_layers; layer_nr++)
{
SliceLayer& layer_storage = mesh.layers[layer_nr];
Some files were not shown because too many files have changed in this diff.